Variable-view 3D video production method, apparatus, device and storage medium

Document No.: 196215    Publication date: 2021-11-02

Reading note: This technology, "Variable-view 3D video production method, apparatus, device and storage medium", was designed and created by Sun Xiaofei (孙晓斐) and Tang Hao (唐浩) on 2021-07-30. Abstract: The invention discloses a variable-view 3D video production method comprising the following steps: acquiring an RGB image and a depth image of a target object with each structured light sensor; generating a point cloud for each frame on the basis of the RGB images and depth images, using system calibration parameters obtained by global optimization; performing mesh reconstruction on each frame of point cloud to obtain an initial mesh sequence; performing mesh alignment and registration on each mesh in the initial mesh sequence by combining geometric constraints and projected-image constraints to obtain registered mesh groups; and performing texture mapping on each registered mesh group to obtain a texture map, with which a variable-view 3D video of the target object is produced. Applying the variable-view 3D video production method provided by the invention greatly reduces the storage space requirement, lowers the computing power requirement on the system, and makes the visualization effect finer. The invention also discloses an apparatus, a device and a storage medium with the corresponding technical effects.

1. A method for variable-view 3D video production, comprising:

respectively acquiring an RGB image and a depth image of a target object by utilizing each structured light sensor;

performing point cloud generation on the basis of the RGB images and the depth images by using system calibration parameters obtained by global optimization to obtain point clouds of frames;

carrying out mesh reconstruction on the point clouds of the frames to obtain an initial mesh sequence;

carrying out grid alignment registration on each grid in the initial grid sequence by combining geometric constraint and projection image constraint to obtain each registered grid group;

and performing texture mapping on each registered grid group to obtain a texture map, and performing 3D video production with a variable visual angle on the target object by using the texture map.

2. The method of claim 1, wherein after obtaining the point clouds of each frame and before performing mesh reconstruction on the point clouds of each frame, the method further comprises:

splicing adjacent frame point clouds by utilizing an ICP (Iterative Closest Point) algorithm to obtain a spliced point cloud set;

carrying out mesh reconstruction on the point clouds of the frames to obtain an initial mesh sequence, wherein the mesh reconstruction comprises the following steps:

performing mesh reconstruction on each frame of point cloud according to the splicing sequence of each frame of point cloud in the spliced point cloud set to obtain an initial mesh sequence; and the arrangement sequence of each grid in the initial grid sequence is correspondingly consistent with the splicing sequence of each frame of point cloud in the spliced point cloud set.

3. The method of claim 2, wherein performing mesh-aligned registration on each mesh in the initial mesh sequence in combination with geometric constraints and projection image constraints to obtain each registered mesh group comprises:

according to the arrangement sequence of each grid in the initial grid sequence, carrying out grid alignment registration on two adjacent grids currently;

calculating registration errors of the two current adjacent grids after registration is finished by combining the geometric constraint and the projected image constraint;

judging whether the registration error is smaller than a preset error value or not;

if yes, dividing two adjacent grids into the same registered grid group;

if not, dividing the two adjacent grids into different registered grid groups;

determining a grid which is ranked later in two adjacent grids at present as an initial frame;

judging whether the registration of each grid in the initial grid sequence is finished or not;

if yes, counting to obtain each registration grid group;

if not, executing the step of carrying out grid alignment registration on the two current adjacent grids according to the arrangement sequence of each grid in the initial grid sequence.

4. The method of claim 3, wherein calculating registration errors of two currently adjacent meshes after registration is completed by combining the geometric constraint and the projected image constraint comprises:

calculating the nearest neighbor point pair distance, the deformation similarity and the key corresponding point distance of the two adjacent grids after the registration is finished by utilizing the geometric constraint;

calculating the pixel difference of two adjacent grids after registration is finished by utilizing the projected image constraint;

and carrying out weighted summation on the nearest neighbor point pair distance, the deformation similarity, the key corresponding point distance and the pixel difference to obtain the registration error.

5. The method for producing a 3D video with a variable viewing angle according to any one of claims 2 to 4, wherein after obtaining the stitched point cloud set, before performing mesh reconstruction on each frame of the point cloud according to a stitching order of each frame of the point cloud in the stitched point cloud set, the method further comprises:

and uniformly sampling the spliced point cloud set to remove redundant vertexes.

6. A variable-view 3D video production apparatus, comprising:

the image acquisition module is used for respectively acquiring an RGB image and a depth image of a target object by utilizing each structured light sensor;

the point cloud generating module is used for generating point clouds based on the RGB images and the depth images by using system calibration parameters obtained by global optimization to obtain point clouds of frames;

the grid reconstruction module is used for carrying out grid reconstruction on the point clouds of the frames to obtain an initial grid sequence;

the grid registration module is used for carrying out grid alignment registration on each grid in the initial grid sequence by combining geometric constraint and projection image constraint to obtain each registered grid group;

and the video production module is used for carrying out texture mapping on each registered grid group to obtain a texture mapping so as to carry out 3D video production with a variable visual angle on the target object by utilizing the texture mapping.

7. The variable-view 3D video production apparatus according to claim 6, further comprising:

the point cloud splicing module is used for splicing adjacent point clouds of each frame by utilizing an ICP (Iterative Closest Point) algorithm to obtain a spliced point cloud set after the point clouds of each frame are obtained and before the point clouds of each frame are subjected to grid reconstruction;

the grid reconstruction module is specifically a module for performing grid reconstruction on each frame of point cloud according to the splicing sequence of each frame of point cloud in the spliced point cloud set to obtain an initial grid sequence; and the arrangement sequence of each grid in the initial grid sequence is correspondingly consistent with the splicing sequence of each frame of point cloud in the spliced point cloud set.

8. The variable-view 3D video production apparatus according to claim 7, wherein the mesh registration module comprises:

the grid registration submodule is used for carrying out grid alignment registration on the two current adjacent grids according to the arrangement sequence of each grid in the initial grid sequence;

the error calculation submodule is used for calculating registration errors of two current adjacent grids after registration is finished by combining the geometric constraint and the projected image constraint;

the first judgment submodule is used for judging whether the registration error is smaller than the preset error value or not;

the first grid group division submodule is used for dividing two adjacent grids into the same registration grid group when the registration error is determined to be smaller than the preset error value;

the second grid group division submodule is used for dividing two adjacent grids into different registration grid groups when the registration error is determined to be greater than or equal to the preset error value;

an initial frame determining submodule, configured to determine a grid ranked later in two current adjacent grids as an initial frame;

the second judgment submodule is used for judging whether the registration of each grid in the initial grid sequence is finished;

the grid group counting submodule is used for counting to obtain each registered grid group when the fact that the registration of each grid in the initial grid sequence is finished is determined;

and the repeated execution sub-module is used for executing the step of carrying out grid alignment registration on the two current adjacent grids according to the arrangement sequence of the grids in the initial grid sequence when the situation that the registration of the grids in the initial grid sequence is not finished is determined.

9. A variable-view 3D video production apparatus, comprising:

a memory for storing a computer program;

a processor for implementing the steps of the variable-view 3D video production method according to any one of claims 1 to 5 when executing said computer program.

10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the variable-view 3D video production method according to any one of claims 1 to 5.

Technical Field

The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for producing a 3D video with a variable viewing angle.

Background

Conventional video contains one image per frame, so the subject can only be observed from the shooting angle. A 3D (Three-Dimensional) video can be viewed from any angle, providing an immersive experience for the viewer.

The existing variable-view 3D video production method generates point clouds of the target object, performs mesh reconstruction on the point clouds, and then generates texture images based on the reconstructed meshes, thereby completing the variable-view 3D video production. The video rendering process requires a large amount of storage space, places high demands on system computing power, and yields a poor visualization effect.

In summary, how to effectively solve the problems of the existing variable-view 3D video production method, such as its large consumption of storage space, high demands on system computing power and poor visualization effect, is a problem that those skilled in the art currently need to solve.

Disclosure of Invention

The invention aims to provide a 3D video production method with a variable visual angle, which greatly reduces the requirement on storage space and the computational requirement of a system, so that the visualization effect is more precise; another object of the present invention is to provide a variable viewing angle 3D video production apparatus, device and computer readable storage medium.

In order to solve the technical problems, the invention provides the following technical scheme:

a variable-view 3D video production method, comprising:

respectively acquiring an RGB image and a depth image of a target object by utilizing each structured light sensor;

performing point cloud generation on the basis of the RGB images and the depth images by using system calibration parameters obtained by global optimization to obtain point clouds of frames;

carrying out mesh reconstruction on the point clouds of the frames to obtain an initial mesh sequence;

carrying out grid alignment registration on each grid in the initial grid sequence by combining geometric constraint and projection image constraint to obtain each registered grid group;

and performing texture mapping on each registered grid group to obtain a texture map, and performing 3D video production with a variable visual angle on the target object by using the texture map.

In a specific embodiment of the present invention, after obtaining the point clouds of each frame, before performing mesh reconstruction on the point clouds of each frame, the method further includes:

splicing adjacent frame point clouds by utilizing an ICP (Iterative Closest Point) algorithm to obtain a spliced point cloud set;

carrying out mesh reconstruction on the point clouds of the frames to obtain an initial mesh sequence, wherein the mesh reconstruction comprises the following steps:

performing mesh reconstruction on each frame of point cloud according to the splicing sequence of each frame of point cloud in the spliced point cloud set to obtain an initial mesh sequence; and the arrangement sequence of each grid in the initial grid sequence is correspondingly consistent with the splicing sequence of each frame of point cloud in the spliced point cloud set.

In a specific embodiment of the present invention, performing mesh alignment registration on each mesh in the initial mesh sequence in combination with geometric constraint and projection image constraint to obtain each registered mesh group, including:

according to the arrangement sequence of each grid in the initial grid sequence, carrying out grid alignment registration on two adjacent grids currently;

calculating registration errors of the two current adjacent grids after registration is finished by combining the geometric constraint and the projected image constraint;

judging whether the registration error is smaller than the preset error value or not;

if yes, dividing two adjacent grids into the same registered grid group;

if not, dividing the two adjacent grids into different registered grid groups;

determining a grid which is ranked later in two adjacent grids at present as an initial frame;

judging whether the registration of each grid in the initial grid sequence is finished or not;

if yes, counting to obtain each registration grid group;

if not, executing the step of carrying out grid alignment registration on the two current adjacent grids according to the arrangement sequence of each grid in the initial grid sequence.

In a specific embodiment of the present invention, calculating the registration error of two current adjacent grids after registration is completed by combining the geometric constraint and the projection image constraint, includes:

calculating the nearest neighbor point pair distance, the deformation similarity and the key corresponding point distance of the two adjacent grids after the registration is finished by utilizing the geometric constraint;

calculating the pixel difference of two adjacent grids after registration is finished by utilizing the projected image constraint;

and carrying out weighted summation on the nearest neighbor point pair distance, the deformation similarity, the key corresponding point distance and the pixel difference to obtain the registration error.

In a specific embodiment of the present invention, after obtaining a stitched point cloud set, before performing mesh reconstruction on each frame of point cloud according to a stitching sequence of each frame of point cloud in the stitched point cloud set, the method further includes:

and uniformly sampling the spliced point cloud set to remove redundant vertexes.

A variable-view 3D video production apparatus comprising:

the image acquisition module is used for respectively acquiring an RGB image and a depth image of a target object by utilizing each structured light sensor;

the point cloud generating module is used for generating point clouds based on the RGB images and the depth images by using system calibration parameters obtained by global optimization to obtain point clouds of frames;

the grid reconstruction module is used for carrying out grid reconstruction on the point clouds of the frames to obtain an initial grid sequence;

the grid registration module is used for carrying out grid alignment registration on each grid in the initial grid sequence by combining geometric constraint and projected image constraint to obtain each registered grid group;

and the video production module is used for carrying out texture mapping on each registered grid group to obtain a texture mapping so as to carry out 3D video production with a variable visual angle on the target object by utilizing the texture mapping.

In one embodiment of the present invention, the method further comprises:

the point cloud splicing module is used for splicing adjacent point clouds of each frame by utilizing an ICP (Iterative Closest Point) algorithm to obtain a spliced point cloud set after the point clouds of each frame are obtained and before the point clouds of each frame are subjected to grid reconstruction;

the grid reconstruction module is specifically a module for performing grid reconstruction on each frame of point cloud according to the splicing sequence of each frame of point cloud in the spliced point cloud set to obtain an initial grid sequence; and the arrangement sequence of each grid in the initial grid sequence is correspondingly consistent with the splicing sequence of each frame of point cloud in the spliced point cloud set.

In a specific embodiment of the present invention, the grid registration module includes:

the grid registration submodule is used for carrying out grid alignment registration on the two current adjacent grids according to the arrangement sequence of each grid in the initial grid sequence;

the error calculation submodule is used for calculating registration errors of two current adjacent grids after registration is finished by combining the geometric constraint and the projected image constraint;

the first judgment submodule is used for judging whether the registration error is smaller than the preset error value or not;

the first grid group division submodule is used for dividing two adjacent grids into the same registration grid group when the registration error is determined to be smaller than the preset error value;

the second grid group division submodule is used for dividing two adjacent grids into different registration grid groups when the registration error is determined to be greater than or equal to the preset error value;

an initial frame determining submodule, configured to determine a grid ranked later in two current adjacent grids as an initial frame;

the second judgment submodule is used for judging whether the registration of each grid in the initial grid sequence is finished;

the grid group counting submodule is used for counting to obtain each registered grid group when the fact that the registration of each grid in the initial grid sequence is finished is determined;

and the repeated execution sub-module is used for executing the step of carrying out grid alignment registration on the two current adjacent grids according to the arrangement sequence of the grids in the initial grid sequence when the situation that the registration of the grids in the initial grid sequence is not finished is determined.

A variable-view 3D video production apparatus comprising:

a memory for storing a computer program;

a processor for implementing the steps of the variable-view 3D video production method as described above when executing the computer program.

A computer readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the variable-view 3D video production method as set forth above.

The 3D video production method with the variable visual angle provided by the invention utilizes each structured light sensor to respectively collect RGB images and depth images of a target object; performing point cloud generation on the basis of each RGB image and each depth image by using system calibration parameters obtained by global optimization to obtain each frame of point cloud; carrying out grid reconstruction on each frame of point cloud to obtain an initial grid sequence; carrying out grid alignment registration on each grid in the initial grid sequence by combining geometric constraint and projection image constraint to obtain each registered grid group; and performing texture mapping on each registered grid group to obtain a texture map, and performing 3D video production with a variable visual angle on the target object by using the texture map.

According to the technical scheme, when system parameters are calibrated, the system parameters are globally optimized, each frame of point cloud is generated by using the globally optimized system parameters based on an RGB image and a depth image of a target object, grid reconstruction is carried out on each frame of point cloud, after an initial grid sequence is obtained, grid alignment registration is carried out on each grid by combining geometric constraint and projection image constraint, the topological consistency of each grid in each registered grid group is ensured, the requirement on storage space is greatly reduced, the requirement on system computing power is reduced, and the visualization effect is more fine.

Accordingly, the present invention further provides a device, an apparatus and a computer readable storage medium for 3D video production with a variable viewing angle corresponding to the method for 3D video production with a variable viewing angle, which have the above technical effects and are not described herein again.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.

Fig. 1 is a flowchart illustrating an implementation of a method for making a 3D video with variable viewing angles according to an embodiment of the present invention;

FIG. 2 is a flowchart illustrating an alternative implementation of a method for 3D video production with variable viewing angles according to an embodiment of the present invention;

FIG. 3 is a block diagram of a variable-view 3D video production apparatus according to an embodiment of the present invention;

fig. 4 is a block diagram illustrating a structure of a variable-view 3D video production apparatus according to an embodiment of the present invention;

fig. 5 is a schematic structural diagram of a variable-view 3D video production apparatus according to this embodiment.

Detailed Description

In order that those skilled in the art will better understand the disclosure, the invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Referring to fig. 1, fig. 1 is a flowchart illustrating an implementation of a method for making a variable-view 3D video according to an embodiment of the present invention, where the method may include the following steps:

s101: and respectively acquiring the RGB image and the depth image of the target object by utilizing each structured light sensor.

A plurality of structured light sensors are preset and used to build the variable-view 3D video construction system. Each structured light sensor mainly comprises a depth sensor (for example, a 1-megapixel depth sensor), an RGB camera (for example, a 12-megapixel RGB camera) and an external synchronization pin. The depth sensor is used to acquire the depth image; the RGB camera is used to acquire the RGB image; and the external synchronization pin is used to synchronize the sensor data streams of the plurality of structured light sensors, which are connected through their external synchronization pins to complete the system construction. When a variable-view 3D video needs to be produced for a target object, the RGB images and depth images of the target object are acquired with the structured light sensors, obtaining a plurality of RGB images and a plurality of depth images.

The target object may be any object to be 3D video-produced with a variable viewing angle, such as a human body, a plant, a building, and the like.

S102: and performing point cloud generation on the basis of each RGB image and each depth image by using the system calibration parameters obtained by global optimization to obtain each frame of point cloud.

After the system is built, parameter calibration is performed for each structured light sensor: the camera parameters of each single structured light sensor are calibrated, and the parameters between structured light sensors are calibrated. Calibration of a single structured light sensor mainly comprises infrared (IR) camera intrinsic calibration, RGB camera intrinsic calibration, IR-to-RGB extrinsic calibration and depth calibration. Calibration between structured light sensors is mainly the extrinsic calibration between them. Global joint optimization of the camera parameters of the plurality of structured light sensors greatly improves the calibration accuracy.

After the structured light sensors acquire the RGB images and depth images of the target object, point cloud generation is performed on the basis of each RGB image and each depth image using the system calibration parameters obtained through global optimization, obtaining a point cloud for each frame. A colored point cloud may be generated. For example, when the target object is a human body, the point cloud generation process may include: converting the depth image into the RGB image coordinate system using the IR-to-RGB extrinsic parameters of the structured light sensor; detecting the human body on the depth image with a human body segmentation algorithm and removing the non-human parts from the RGB image and the depth image, which reduces point cloud noise; and mapping the depth map to a colored point cloud using the RGB camera intrinsic parameters.
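As a hedged illustration of the last step (back-projecting a depth map that has already been aligned to the RGB frame into a colored point cloud with the RGB intrinsics), a minimal NumPy sketch might look as follows; the function name, the depth_scale parameter and the optional segmentation mask are illustrative assumptions rather than the patent's prescription:

```python
import numpy as np

def depth_to_colored_point_cloud(depth, rgb, K, depth_scale=1000.0, mask=None):
    """Back-project an RGB-aligned depth map into a colored point cloud.

    depth: HxW depth values (e.g. millimetres), rgb: HxWx3 colors,
    K: 3x3 RGB camera intrinsic matrix, mask: optional HxW boolean foreground mask.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) / depth_scale            # convert to metres
    valid = z > 0
    if mask is not None:                                   # keep the segmented foreground only
        valid &= mask
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    colors = rgb[valid].astype(np.float64) / 255.0
    return points, colors
```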

S103: and carrying out grid reconstruction on each frame of point cloud to obtain an initial grid sequence.

After generating each frame of point cloud, mesh reconstruction is carried out on each frame of point cloud to obtain an initial mesh sequence. For example, Poisson Surface Reconstruction may be used to obtain a watertight mesh, so as to obtain the initial mesh sequence.

The core idea of Poisson reconstruction is that the point cloud represents positions on the object surface and the normal vectors represent the inside and outside directions. By implicitly fitting an indicator function derived from the object, an estimate of a smooth object surface can be given.
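Using an off-the-shelf library such as Open3D (an illustrative choice, not mandated by the patent), the per-frame Poisson reconstruction could be sketched roughly as follows; the octree depth and the density trimming step are assumptions:

```python
import numpy as np
import open3d as o3d

def reconstruct_frame(points, colors):
    """Poisson surface reconstruction of one frame's colored point cloud into a watertight mesh."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd.colors = o3d.utility.Vector3dVector(colors)
    # Normals give the indicator function its inside/outside orientation.
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
    pcd.orient_normals_consistent_tangent_plane(30)
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    # Optionally trim low-density vertices that Poisson extrapolated far from the input points.
    mesh.remove_vertices_by_mask(np.asarray(densities) < np.quantile(densities, 0.01))
    return mesh
```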

S104: and carrying out grid alignment registration on each grid in the initial grid sequence by combining geometric constraint and projection image constraint to obtain each registered grid group.

After the initial mesh sequence is obtained by performing mesh reconstruction on each frame of point cloud, a large amount of storage space is required because the obtained initial mesh sequence is non-aligned. Therefore, the mesh registration by combining the geometrical constraint and the projection image constraint can be preset. And carrying out grid alignment registration on each grid in the initial grid sequence by combining geometric constraint and projection image constraint to obtain each registered grid group. By carrying out grid alignment registration, a grid sequence with topological consistency is obtained, and the requirement on storage is reduced.

S105: and performing texture mapping on each registered grid group to obtain a texture map, and performing 3D video production with a variable visual angle on the target object by using the texture map.

After each registered mesh group is obtained by performing mesh alignment and registration on each mesh in the initial mesh sequence, texture mapping is performed on each registered mesh group to obtain a texture map, such as a UV texture map. Variable-view 3D video production is then performed for the target object using the texture map: the system outputs each registered mesh group corresponding to the target object together with its texture map, each registered mesh group and its texture map can be stored in a file, and the video can be played from any viewing angle through the visualization software included in the system. The variable-view 3D video construction system provided by the invention is easy to build and low in cost, and it reduces the computing power requirement of the system, so that the visualization effect is finer.

According to the technical scheme, after the initial grid sequence is obtained by generating each frame of point cloud according to the RGB image and the depth image of the target object and carrying out grid reconstruction on each frame of point cloud, grid alignment registration is carried out on each grid according to the preset registration precision threshold, the topological consistency of each grid in each registered grid group is ensured, the requirement on storage space is greatly reduced, the system calculation requirement is reduced, and the visualization effect is finer.

It should be noted that, based on the above embodiments, the embodiments of the present invention also provide corresponding improvements. In the following embodiments, steps that are the same as or correspond to those in the above embodiments may be referred to one another, and corresponding advantageous effects may also be referred to one another, which is not described in detail in the following modified embodiments.

Referring to fig. 2, fig. 2 is a flowchart illustrating another implementation of a method for making a variable-view 3D video according to an embodiment of the present invention, where the method may include the following steps:

s201: and respectively acquiring the RGB image and the depth image of the target object by utilizing each structured light sensor.

S202: and performing point cloud generation on the basis of each RGB image and each depth image by using preset system calibration parameters obtained by global optimization to obtain each frame of point cloud.

System calibration mainly comprises calibration plate fabrication, image acquisition, intrinsic calibration of each single structured light sensor, and parameter calibration between structured light sensors. The calibration can adopt Zhang's camera calibration method (Zhengyou Zhang), and the main process is as follows:

(1) calibration plate fabrication

The calibration board may use a chessboard pattern as the template and may be made of KT (foam) board, ceramic or glass. The size of the calibration plate can be set according to the actual situation; if the field of view required for human body mesh reconstruction is large, the side length of the calibration plate should be larger than 800 mm to guarantee reconstruction accuracy.

(2) Calibration data collection

The IR image, RGB image and depth image of the calibration plate are acquired synchronously by the plurality of structured light sensors. During acquisition, the position and posture of the calibration plate need to satisfy the following:

1) the number of images containing the calibration plate captured at the same moment by two adjacent structured light sensors is greater than 20;

2) the number of images containing the calibration plate captured by each structured light sensor is greater than 30.

(3) Single structure optical sensor camera parameter calibration

For each structured light sensor in the system, the calibration steps are as follows (a minimal code sketch of part of this flow is given after the list):

1. selecting the RGB images containing a complete calibration plate, performing checkerboard corner point positioning, and calculating the intrinsic parameters of the RGB camera;

2. calibrating the intrinsic parameters of the IR camera;

the calibration process of the internal parameters of the IR camera can comprise the following steps:

1) extracting the IR images containing a complete calibration plate, and performing image enhancement with the contrast-limited adaptive histogram equalization (CLAHE) algorithm;

2) performing checkerboard corner point positioning on the enhanced IR images, and calculating the intrinsic parameters of the IR camera.

3. selecting, from the collected RGB and IR image sets, the RGB and IR images containing the calibration board at the same moment to form a calibration image set, and calculating the extrinsic parameters between the IR and RGB cameras from the calibration image set;

4. calibrating the depth;

the depth calibration comprises the following steps:

1) selecting the IR pictures containing the calibration plate, and carrying out distortion correction with the intrinsic parameters of the calibrated IR camera;

2) extracting the calibration-board pixel region according to the positions of the checkerboard corner points in the IR image;

3) obtaining the 3D coordinates corresponding to the corner points from the checkerboard corner positions in the IR image combined with the depth image and the camera parameters, and solving the calibration-board rotation and translation matrix from the corner positions and the corresponding 3D coordinates by means of the solvePnPRansac pose estimation algorithm in OpenCV;

4) calculating the real distance from the camera origin to the calibration plate, namely obtaining the distance from the origin to the plate from the calibration-board rotation and translation matrix obtained in step 3);

5) extracting the corresponding region in the depth map through the calibration-plate pixel region obtained in step 2), obtaining the 3D point cloud of the region through the camera parameters, and performing plane fitting to obtain the measured distance from the origin to the calibration plate;

6) fitting a linear mapping from the measured distance to the real distance.
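As referenced above, a minimal sketch of part of the single-sensor calibration flow is given below with OpenCV and NumPy; it covers only the IR intrinsic calibration with CLAHE enhancement and the final linear depth fit, and the board geometry, parameter values and function names are illustrative assumptions rather than the patent's prescription:

```python
import cv2
import numpy as np

def calibrate_ir_intrinsics(ir_images, board_size=(9, 6), square_mm=60.0):
    """Estimate IR camera intrinsics from chessboard images, enhancing each image with CLAHE first."""
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_mm
    obj_points, img_points = [], []
    for img in ir_images:
        enhanced = clahe.apply(img)                       # contrast-limited histogram equalization
        found, corners = cv2.findChessboardCorners(enhanced, board_size)
        if found:
            corners = cv2.cornerSubPix(
                enhanced, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                             ir_images[0].shape[::-1], None, None)
    return K, dist

def fit_depth_correction(measured_mm, true_mm):
    """Depth calibration step 6): linear fit mapping the measured plate distance to the true distance."""
    scale, offset = np.polyfit(np.asarray(measured_mm), np.asarray(true_mm), deg=1)
    return scale, offset
```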

(4) Parameter calibration between structured light sensors

For extrinsic calibration, the extrinsic parameters between the different structured light sensors are calibrated through their RGB cameras. The calibration steps are as follows:

Construct a relative-extrinsic upper triangular matrix $RP$ of size $N \times N$, where $N$ denotes the total number of cameras; each matrix element $RP_{ij} = (R_{ij}, t_{ij})$ ($0 \le i < j < N$) is the extrinsic transformation matrix between Camera$_i$ and Camera$_j$, comprising the extrinsic rotation matrix $R_{ij}$ and the extrinsic translation vector $t_{ij}$.

Construct an upper triangular camera image matrix $M$ of size $N \times N$, in which each element $M_{ij}$ is the set of images containing the calibration plate that were captured simultaneously by Camera$_i$ and Camera$_j$. From $M_{ij}$, extract the corner pixel coordinate vector $\mathrm{CornerVec}_{ij}$, which contains the corner coordinate information observed by Camera$_i$ and by Camera$_j$, together with the corresponding calibration-board corner 3D coordinate vectors of Camera$_i$ and Camera$_j$, where $s$ denotes the length of the vector.

The global optimization process of the system calibration parameters may include:

Compute the initial camera poses in the world coordinate system from the relative extrinsic matrix: $P_0 = \{Pose_i = (RW_i, tw_i) \mid 0 \le i < N\}$, where $RW_i$ is the rotation matrix and $tw_i$ the translation of the initial pose of camera $i$. Let $T$ be the set of positioned cameras. Take the camera coordinate system of Camera$_0$ as the world coordinate system and add Camera$_0$ to $T$. Let $i$ be the index of the most recently added camera; for every camera Camera$_j$ not in $T$, compute the length of CornerVec in $M_{ij}$, select the camera with the largest length and denote it Camera$_k$, compute the pose of Camera$_k$ in the world coordinate system from $RP_{ik}$ and the pose of Camera$_i$, and add Camera$_k$ to $T$. Repeat this process until all cameras are in $T$.
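A minimal sketch of this greedy pose-chaining loop is given below; the dictionary layout of RP and corner_counts, the assumed direction of the relative transforms and the function name are illustrative assumptions:

```python
import numpy as np

def chain_initial_poses(RP, corner_counts, n_cameras):
    """Greedily chain relative extrinsics into initial world poses P0.

    RP[(i, j)] = (R_ij, t_ij) is assumed to map Camera_j coordinates into Camera_i coordinates,
    keyed by (positioned, unpositioned) pairs; corner_counts[(i, j)] is the number of shared
    corner observations in M_ij. Camera_0 defines the world coordinate system.
    """
    poses = {0: (np.eye(3), np.zeros(3))}                  # Camera_0 = world frame
    while len(poses) < n_cameras:
        # Among cameras not yet positioned, pick the one sharing the most corners with a positioned camera.
        i, k = max(((i, j) for i in poses for j in range(n_cameras)
                    if j not in poses and (i, j) in RP),
                   key=lambda ij: corner_counts.get(ij, 0))
        R_ik, t_ik = RP[(i, k)]
        R_wi, t_wi = poses[i]
        # X_world = R_wi (R_ik X_k + t_ik) + t_wi
        poses[k] = (R_wi @ R_ik, R_wi @ t_ik + t_wi)
    return poses
```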

From the initial camera poses $P_0$, compute the positions of the calibration-plate corner points in the world coordinate system, where $l$ denotes the number of images in which a corner point is detected and $N$ denotes the total number of cameras.

For the $j$-th frame, compute the calibration-plate areas $\{Area_i \mid 0 \le i < N\}$ in the $N$ pictures, select the camera index $A$ with the largest area, and compute the positions of the calibration-board corner 3D coordinates in the world coordinate system through $P_0[A]$.

Globally optimize the extrinsic parameters: the initial camera poses $P_0$, the camera intrinsic matrix Intrinsics and the corner positions $C$ in the world coordinate system, minimizing the reprojection error over all cameras.

For a point $X = (x, y, z)$ in the world coordinate system, its pixel coordinates in the picture of Camera$_i$ are obtained by projecting $R_i X + t_i$ with the intrinsic parameters $K_i$ and distortion coefficients $D_i$, where $R_i$ and $t_i$ are obtained from the camera pose $P_0[i]$ and $K_i$ and $D_i$ from Intrinsics$_i$.

The objective function is:

$$f(C, \mathrm{Intrinsics}, P) = \sum_{X \in C} \; \sum_{i \in V_X} \big\| \mathrm{proj}_i(X) - u_{i,X} \big\|^2,$$

where $\mathrm{proj}_i(X)$ is the pixel coordinate of $X$ projected into Camera$_i$ as described above and $u_{i,X}$ is the detected corner pixel. $C$ is the set of corner positions in the world coordinate system, Intrinsics is the camera intrinsic matrix formed by the intrinsic parameters of the $N$ cameras, $P$ is the set of poses of the $N$ cameras in the world coordinate system, and $V_X$ is the list of cameras in which the 3D point is visible. The variables $\{C, \mathrm{Intrinsics}, P\}$ to be optimized are initialized to the initial values obtained in the previous steps.

The optimization process is as follows:

1. Initialize $\mathrm{Params} = (C, \mathrm{Intrinsics}, P)$;

2. Evaluate $f(\mathrm{Params})$ and compute the derivative $f'(\mathrm{Params})$;

3. If $\|f'(\mathrm{Params})\| < \sigma$, the procedure ends;

4. Update $\mathrm{Params} \leftarrow \mathrm{Params} - lr \cdot f'(\mathrm{Params})$, where $lr$ denotes the learning rate, and return to step 2.

In the optimization process, $\sigma$ is a threshold; when the norm of the derivative is smaller than this value, the optimization ends.
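A minimal sketch of this gradient-descent loop follows; the objective and its gradient are abstracted behind callables, and the learning rate, stopping threshold and function names are illustrative assumptions:

```python
import numpy as np

def optimize(f, grad_f, params0, lr=1e-4, sigma=1e-6, max_iters=10000):
    """Plain gradient descent on the flattened calibration parameters (C, Intrinsics, P).

    f(params) returns the total reprojection error and grad_f(params) its derivative;
    iteration stops when the gradient norm falls below the threshold sigma.
    """
    params = np.asarray(params0, dtype=np.float64).copy()
    for _ in range(max_iters):
        g = grad_f(params)
        if np.linalg.norm(g) < sigma:           # step 3: derivative norm below threshold
            break
        params -= lr * g                         # step 4: gradient step with learning rate lr
    return params
```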

S203: and splicing adjacent frame point clouds by utilizing an ICP (inductively coupled plasma) algorithm to obtain a spliced point cloud set.

After each frame of point cloud is generated, the point clouds obtained by the plurality of structured light cameras are fused into one point cloud, and adjacent frame point clouds are spliced by utilizing the ICP (Iterative Closest Point) algorithm to obtain the spliced point cloud set. Splicing adjacent frame point clouds with the ICP algorithm may include:

(1) computing the closest-point correspondences $\{(p_i, q_i)\}$ between the two point clouds, where $p_i$ and $q_i$ are vertices in the two point clouds and $N_c$ is the number of corresponding point pairs;

(2) minimizing the cost function over the closest-point correspondences, and updating the current transformation matrix.

For the obtained point clouds $C_1, \dots, C_M$, construct an adjacency matrix $A \in \{0, 1\}^{M \times M}$ according to their relative positions in space, representing the alignment relations between the point clouds. The alignment error between point clouds $C_h$ and $C_k$ is

$$E(C_h, C_k) = \sum_{(p, q) \in K_{hk}} d(p, q)^2,$$

where $K_{hk}$ is the set of the $N_h$ closest-point correspondences, and the global alignment error is the sum over each pair of adjacent views:

$$E = \sum_{A_{hk} = 1} E(C_h, C_k).$$

The solution $g_1, \dots, g_M = \arg\min(E)$ gives the absolute camera poses under which the $M$ point clouds are aligned. This is a nonlinear least-squares optimization, solved using the Ceres Solver.

In the above cost function, $d(p, q)$ measures the distance between two vertices and depends on the absolute camera poses $g_h = (RC_h, tc_h)$ and $g_k = (RC_k, tc_k)$:

$$d(p, q) = (1 - \sigma)\, d_g(p, q) + \sigma\, d_c(p, q);$$

$$d_g(p, q) = \big((RC_h \cdot p + tc_h) - (RC_k \cdot q + tc_k)\big)^T \cdot (RC_k \cdot n_q);$$

$$d_c(p, q) = C_q\big(RC_k^{-1}(RC_h\, p + tc_h - tc_k)\big) - C_p(p);$$

where $RC_i$ denotes the absolute rotation matrix and $tc_i$ the absolute translation, $i \in \{h, k\}$; $d_g(\cdot)$ is a point-to-plane geometric energy term and $n_q$ is the normal at vertex $q$; $d_c(\cdot)$ is a color constraint term, $C_p(p)$ is the color of vertex $p$, and $C_q(\cdot)$ is a pre-computed continuous color function on the tangent plane at vertex $q$; $\sigma$ is a constant and $RC_k^{-1}$ is the inverse of the absolute rotation matrix.
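Open3D's colored ICP implements a joint geometric and color objective of this flavor; a minimal sketch of pairwise registration of two adjacent colored point clouds might look as follows, where the choice of Open3D, the voxel size and the thresholds are illustrative assumptions:

```python
import numpy as np
import open3d as o3d

def register_pair(source, target, voxel=0.01):
    """Pairwise alignment of two colored point clouds with a joint geometric + color objective."""
    src = source.voxel_down_sample(voxel)
    tgt = target.voxel_down_sample(voxel)
    for pcd in (src, tgt):
        pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    result = o3d.pipelines.registration.registration_colored_icp(
        src, tgt, voxel * 1.5, np.identity(4),
        o3d.pipelines.registration.TransformationEstimationForColoredICP(),
        o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=50))
    return result.transformation   # 4x4 matrix mapping the source frame into the target frame
```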

S204: and uniformly sampling the spliced point cloud set to remove redundant vertexes.

After the point clouds of the plurality of structured light sensors are transformed into the world coordinate system, the fused point cloud is uniformly sampled and redundant vertices are removed, thereby simplifying the spliced point cloud set.
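With Open3D, for instance, this uniform simplification could be as short as the following sketch; the voxel size is an illustrative assumption:

```python
import open3d as o3d

def simplify_point_cloud(fused_pcd, voxel=0.005):
    """Uniformly resample the fused point cloud so overlapping sensor regions keep a single vertex."""
    return fused_pcd.voxel_down_sample(voxel_size=voxel)
```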

S205: and carrying out mesh reconstruction on each frame of point cloud according to the splicing sequence of each frame of point cloud in the spliced point cloud set to obtain an initial mesh sequence.

And the arrangement sequence of each grid in the initial grid sequence is correspondingly consistent with the splicing sequence of each frame of point cloud in the spliced point cloud set.

After the redundant vertices are removed, mesh reconstruction is performed on each frame of point cloud according to the splicing order of the frames in the spliced point cloud set, obtaining an initial mesh sequence in which the arrangement order of the meshes is consistent with the splicing order of the frames in the spliced point cloud set.

S206: and carrying out grid alignment registration on the two current adjacent grids according to the arrangement sequence of each grid in the initial grid sequence.

After the initial mesh sequence is obtained, mesh alignment and registration are performed on the two currently adjacent meshes according to the arrangement order of the meshes in the initial mesh sequence. For a mesh sequence $M_i$, $i \in \{0, \dots, n\}$, mesh registration starts from the first frame and registers to the next frame in turn. Performing registration in the arrangement order of the meshes ensures efficient and orderly mesh registration.

S207: and calculating the registration error of the current two adjacent grids after the registration is finished by combining the geometric constraint and the projection image constraint.

After the grid alignment registration of the current two adjacent grids is completed, the registration error of the current two adjacent grids after the registration is completed is calculated by combining the geometric constraint and the projected image constraint.

In one embodiment of the present invention, step S207 may include the following steps:

the method comprises the following steps: calculating the nearest neighbor point pair distance, the deformation similarity and the key corresponding point distance of the two adjacent grids after the registration is finished by utilizing geometric constraint;

step two: calculating the pixel difference of two adjacent grids after registration is finished by utilizing the constraint of the projected image;

step three: and carrying out weighted summation on the nearest neighbor point pair distance, the deformation similarity, the key corresponding point distance and the pixel difference to obtain a registration error.

For convenience of description, the above steps may be combined for illustration.

Using the geometric constraints, calculate the nearest-neighbor point-pair distance, the deformation similarity and the key corresponding-point distance of the two currently adjacent meshes after registration; using the projected-image constraint, calculate their pixel difference. With preset weights for the nearest-neighbor point-pair distance, the deformation similarity, the key corresponding-point distance and the pixel difference, a weighted sum of these four terms gives the registration error.
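A schematic of this weighted sum is given below; the individual terms are assumed to be computed elsewhere, and the weight values are illustrative assumptions:

```python
def registration_error(nn_dist, deform_sim, keypoint_dist, pixel_diff,
                       weights=(1.0, 0.5, 0.5, 0.2)):
    """Weighted sum of the three geometric terms and the projected-image pixel difference."""
    terms = (nn_dist, deform_sim, keypoint_dist, pixel_diff)
    return sum(w * t for w, t in zip(weights, terms))
```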

The registration algorithm employs a non-rigid iterative closest point algorithm with image constraints. Suppose the template is $S = (V_S, E_S)$, where $V_S$ denotes the $n$ vertices and $E_S$ the edges. The transformation matrix of each vertex is $X_i \in \mathbb{R}^{3 \times 4}$, so the transformation matrix of all vertices is $X = [X_1, \dots, X_n]^T \in \mathbb{R}^{3 \times 4n}$. The cost function optimized by the algorithm is:

$$E = E_d(X) + \alpha E_s(X) + \beta E_l(X) + E_I(X);$$

In the above formula,

$$E_d(X) = \sum_{v_i \in V_S} w_i \, \mathrm{dist}^2\!\big(X_i v_i,\; u_i\big), \qquad
E_s(X) = \sum_{(i, j) \in E_S} \big\| (X_i - X_j)\, G \big\|_F^2, \qquad
E_l(X) = \sum_{(v_i, l)} \big\| X_i v_i - l \big\|^2 .$$

Here $E_d(X)$ constrains the nearest-neighbor point-pair distances, $E_s(X)$ constrains the deformation similarity of neighboring vertices of the template mesh, $E_l(X)$ constrains the distances between key correspondences, and $E_I(X)$ is the rendered-image constraint term; $\alpha$ and $\beta$ are hyperparameters, $\alpha$ controlling the rigidity of the template mesh and $\beta$ the strength of the key-point-pair constraint. $X_i$, the $i$-th element of $X$, is the transformation matrix of the $i$-th vertex of the template mesh, and $X_j$ is the $j$-th element; $l$ denotes the coordinates of a 3D key point in the scan mesh; $v_i$ is the $i$-th vertex of the template and $w_i$ its weight; $\mathrm{dist}^2(x, y)$ is the squared Euclidean distance between vertices $x$ and $y$; $u_i$ denotes the point on the scan mesh closest to the transformed vertex $X_i v_i$; $G = \mathrm{diag}(1, 1, 1, \gamma)$, where $\gamma$ is a hyperparameter weighing rotation against translation; and $X_i v_i$ and $l$ are the corresponding key points.

For the rendered-image constraint term $E_I(X)$, several virtual cameras (views) are first set up in the space where the mesh lies; the template mesh and the target mesh are then rendered to these virtual cameras and the similarity of the rendered images is compared. The function $\mathrm{Render}_i(S)$ renders the textured mesh $S$ into an image at view angle $i$. The function

$$\sigma(I_0, I_1) = f_{IOU}(I_0, I_1)\, \| I_0 - I_1 \|$$

first intersects the pixels of the two images through the function $f_{IOU}(\cdot)$ and then computes the pixel difference over the intersecting parts of the images.

S208: determining whether the registration error is smaller than a predetermined error, if so, performing step S209, and if not, performing step S210.

A registration error threshold is preset. After the registration error of the two currently adjacent meshes is calculated, it is judged whether the registration error is smaller than the preset error value. If so, the two currently adjacent meshes have the same topological structure and step S209 is executed; if not, they have different topological structures and step S210 is executed.

S209: and dividing the two current adjacent grids into the same registration grid group.

And when the registration error is determined to be smaller than the preset error value, the current two adjacent grids are indicated to have the same topological structure, and the current two adjacent grids are divided into the same registration grid group.

S210: and dividing the two current adjacent grids into different registered grid groups.

And when the registration error is determined to be larger than or equal to the preset error value, the current two adjacent grids are shown to have different topological structures, and the current two adjacent grids are divided into different registration grid groups.

S211: and determining the grid which is ranked next in two adjacent grids at present as an initial frame.

After the division of the registered grid group is completed for the current two adjacent grids, the grid which is ranked later in the current two adjacent grids is determined as an initial frame.

S212: and judging whether the registration of each grid in the initial grid sequence is finished, if not, executing the step S206, and if so, executing the step S213.

And judging whether all grids in the initial grid sequence are registered, if not, executing the step S206 aiming at the rest grids in the initial grid sequence, and if so, executing the step S213.

S213: and counting to obtain each registration grid group.

After it is determined that all meshes in the initial mesh sequence have been registered, the registered mesh groups obtained by the division are collected by a statistical operation, yielding each registered mesh group.
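A schematic of this sequential grouping loop (steps S206 to S213) follows; register_meshes and compute_error stand for the registration and error evaluation described above, and the threshold value, names and the choice of what is stored in each group are illustrative assumptions:

```python
def group_meshes(meshes, register_meshes, compute_error, error_threshold=0.01):
    """Walk the mesh sequence, splitting it into groups of topologically consistent registered meshes."""
    groups = [[meshes[0]]]
    reference = meshes[0]                                     # current initial frame
    for nxt in meshes[1:]:
        registered = register_meshes(reference, nxt)          # mesh alignment registration (S206)
        if compute_error(registered, nxt) < error_threshold:  # registration error below threshold (S208)
            groups[-1].append(registered)                     # same topology: same group (S209)
        else:
            groups.append([nxt])                              # different topology: new group (S210)
        reference = nxt                                       # the later mesh becomes the initial frame (S211)
    return groups
```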

S214: and performing texture mapping on each registered grid group to obtain a texture map, and performing 3D video production with a variable visual angle on the target object by using the texture map.

And after counting to obtain each registered grid group, performing texture mapping on each registered grid group to obtain a texture mapping, and performing 3D video production with a variable visual angle on the target object by using the texture mapping.

Each of the obtained aligned registered mesh groups may be represented as $GI = \{G_0, \dots, G_m\} = \{\{M_0, \dots\}, \dots, \{\dots, M_n\}\}$. Texture mapping is carried out in two parts: model parameterization and UV texture map generation.

Model parameterization is performed using UVAtlas, a tool open-sourced by Microsoft for creating and packing texture atlases. The tool segments the mesh so that each partial mesh corresponds to a region on the 2D image, and outputs, for each vertex of each mesh, the corresponding image coordinates on the 2D image. The first frame of each group in $GI$ is parameterized to obtain the parametric models $PI = \{P_0, \dots, P_m\}$.

The algorithm for generating a texture map for a set of meshes and corresponding texture coordinates M, P is as follows:

1. For each vertex $v_i$ on $M$, calculate its visible camera list $V_i$.

The camera visibility list is calculated as follows:

1) denote the point clouds generated by the $k$ cameras in the world coordinate system as $\{points_0, \dots, points_{k-1}\}$, and remove the boundary of each point cloud;

2) for vertex $v_i$, calculate the shortest distance to the point cloud of each camera; if the shortest distance is smaller than a given threshold, add that camera to the visible camera list.

2. For each triangle in the mesh, the visible camera list of that triangle is determined from the visible camera lists of its 3 vertices, and the optimal visible camera is selected according to the projection area of the triangle in the visible cameras.

3. Traverse each triangle in the mesh, acquire the 2D projection picture of each camera in the optimal camera list, and accumulate the 2D projections into the corresponding triangle area of the UV texture map.

4. Average the pixel values accumulated on the UV map and output the texture map.
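A hedged sketch of the per-vertex visibility test in step 1, using a KD-tree nearest-neighbor query (SciPy is an illustrative choice, and the distance threshold is an assumption):

```python
import numpy as np
from scipy.spatial import cKDTree

def visible_camera_lists(mesh_vertices, camera_point_clouds, dist_threshold=0.01):
    """For every mesh vertex, list the cameras whose point cloud passes within dist_threshold of it."""
    trees = [cKDTree(pts) for pts in camera_point_clouds]     # one KD-tree per camera point cloud
    visibility = [[] for _ in range(len(mesh_vertices))]
    for cam_id, tree in enumerate(trees):
        dists, _ = tree.query(mesh_vertices)                  # shortest distance from each vertex to this cloud
        for v_idx in np.nonzero(dists < dist_threshold)[0]:
            visibility[v_idx].append(cam_id)
    return visibility
```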

Corresponding to the above method embodiment, the present invention further provides a variable-view 3D video production apparatus, and the variable-view 3D video production apparatus described below and the variable-view 3D video production method described above may be referred to in correspondence with each other.

Referring to fig. 3, fig. 3 is a block diagram illustrating a variable-view 3D video production apparatus according to an embodiment of the present invention, where the apparatus may include:

the image acquisition module 31 is used for respectively acquiring an RGB image and a depth image of a target object by using each structured light sensor;

the point cloud generating module 32 is configured to perform point cloud generation based on each RGB image and each depth image by using the system calibration parameters obtained through global optimization to obtain each frame of point cloud;

a mesh reconstruction module 33, configured to perform mesh reconstruction on each frame of point cloud to obtain an initial mesh sequence;

a grid registration module 34, configured to perform grid alignment registration on each grid in the initial grid sequence in combination with the geometric constraint and the projection image constraint to obtain each registered grid group;

the video production module 35 is configured to perform texture mapping on each registered grid group to obtain a texture map, so as to perform 3D video production with a variable viewing angle on the target object by using the texture map.

According to the technical scheme, after the initial grid sequence is obtained by generating each frame of point cloud according to the RGB image and the depth image of the target object and carrying out grid reconstruction on each frame of point cloud, grid alignment registration is carried out on each grid according to the preset registration precision threshold, the topological consistency of each grid in each registered grid group is ensured, the requirement on storage space is greatly reduced, the system calculation requirement is reduced, and the visualization effect is finer.

In one embodiment of the present invention, the apparatus may further include:

the point cloud splicing module is used for splicing adjacent frame point clouds by utilizing an ICP (Iterative Closest Point) algorithm to obtain a spliced point cloud set after obtaining each frame point cloud and before carrying out grid reconstruction on each frame point cloud;

the mesh reconstruction module 33 is specifically a module for performing mesh reconstruction on each frame of point cloud according to the splicing sequence of each frame of point cloud in the spliced point cloud set to obtain an initial mesh sequence; and the arrangement sequence of each grid in the initial grid sequence is correspondingly consistent with the splicing sequence of each frame of point cloud in the spliced point cloud set.

In one embodiment of the present invention, the grid registration module 34 comprises:

the grid registration submodule is used for carrying out grid alignment registration on two adjacent grids at present according to the arrangement sequence of each grid in the initial grid sequence;

the error calculation submodule is used for calculating registration errors of the current two adjacent grids after registration is finished by combining geometric constraint and projected image constraint;

the first judgment submodule is used for judging whether the registration error is smaller than the preset error value or not;

the first grid group division submodule is used for dividing two adjacent grids into the same registration grid group when the registration error is determined to be smaller than the preset error value;

the second grid group division submodule is used for dividing two adjacent grids into different registration grid groups when the registration error is determined to be greater than or equal to the preset error value;

an initial frame determining submodule, configured to determine a grid ranked later in two current adjacent grids as an initial frame;

the second judgment submodule is used for judging whether the registration of each grid in the initial grid sequence is finished;

the grid group counting submodule is used for counting to obtain each registered grid group when the fact that the registration of each grid in the initial grid sequence is finished is determined;

and the repeated execution sub-module is used for executing the step of carrying out grid alignment registration on the two current adjacent grids according to the arrangement sequence of the grids in the initial grid sequence when the situation that the registration of the grids in the initial grid sequence is not finished is determined.

In one embodiment of the present invention, the error calculation sub-module includes:

the distance and similarity calculation unit is used for calculating the nearest neighbor point pair distance, the deformation similarity and the key corresponding point distance of the current two adjacent grids after registration is finished by utilizing geometric constraint;

the pixel difference calculating unit is used for calculating the pixel difference of the current two adjacent grids after the registration is finished by utilizing the constraint of the projected image;

and the error calculation unit is used for weighting and summing the nearest neighbor point pair distance, the deformation similarity, the key corresponding point distance and the pixel difference to obtain a registration error.

In one embodiment of the present invention, the apparatus may further include:

and the uniform sampling module is used for uniformly sampling the spliced point cloud set to remove redundant vertexes before performing grid reconstruction on each frame of point cloud according to the splicing sequence of each frame of point cloud in the spliced point cloud set after obtaining the spliced point cloud set.

In correspondence with the above method embodiment, referring to fig. 4, fig. 4 is a schematic diagram of a variable-view 3D video production apparatus provided by the present invention, which may include:

a memory 332 for storing a computer program;

a processor 322 for implementing the steps of the variable-view 3D video production method of the above-described method embodiments when executing the computer program.

Specifically, referring to fig. 5, fig. 5 is a schematic diagram of a specific structure of the variable-view 3D video production device provided in this embodiment. The device may vary considerably in configuration or performance, and may include one or more processors (CPU) 322 and a memory 332 in which one or more computer applications 342 or data 344 are stored. The memory 332 may be transient or persistent storage. The program stored in the memory 332 may include one or more modules (not shown), each of which may include a series of instruction operations on a data processing device. Further, the processor 322 may be configured to communicate with the memory 332 to execute a series of instruction operations in the memory 332 on the variable-view 3D video production device 301.

The variable-view 3D video production device 301 may also include one or more power sources 326, one or more wired or wireless network interfaces 350, one or more input-output interfaces 358, and/or one or more operating systems 341.

The steps in the variable-view 3D video production method described above may be implemented by the structure of a variable-view 3D video production apparatus.

Corresponding to the above method embodiment, the present invention further provides a computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps of:

respectively acquiring an RGB image and a depth image of a target object by utilizing each structured light sensor; performing point cloud generation on the basis of each RGB image and each depth image by using system calibration parameters obtained by global optimization to obtain each frame of point cloud; carrying out grid reconstruction on each frame of point cloud to obtain an initial grid sequence; carrying out grid alignment registration on each grid in the initial grid sequence by combining geometric constraint and projection image constraint to obtain each registered grid group; and performing texture mapping on each registered grid group to obtain a texture map, and performing 3D video production with a variable visual angle on the target object by using the texture map.

The computer-readable storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

For the introduction of the computer-readable storage medium provided by the present invention, please refer to the above method embodiments, which are not described herein again.

The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device, the apparatus and the computer-readable storage medium disclosed in the embodiments correspond to the method disclosed in the embodiments, so that the description is simple, and the relevant points can be referred to the description of the method.

The principle and the implementation of the present invention are explained in the present application by using specific examples, and the above description of the embodiments is only used to help understanding the technical solution and the core idea of the present invention. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
