Method for realizing close-range photogrammetry and three-dimensional visualization

Document No.: 1950485    Publication date: 2021-12-10

Reading note: This technology, "Method for realizing close-range photogrammetry and three-dimensional visualization" (一种实现近景摄影测量和三维可视化的方法), was designed and created by 张文志, 任筱芳, 柳广春, 邹友峰, 薛永安, 宋明伟, 蔡来良, 杨文府, 杨森 and 杜梦豪 on 2021-07-02. Its main content is as follows:

The invention discloses a method for realizing close-range photogrammetry and three-dimensional visualization, comprising three steps: equipment calibration, photogrammetry, and three-dimensional visualization processing. On the one hand, the invention has good universality and is convenient to operate and implement; it can effectively meet the need to carry out close-range photogrammetry with various types of ordinary digital cameras, and the need for comprehensive close-range photogrammetry in a wide range of scenes. On the other hand, it effectively overcomes the poor measurement accuracy caused by unstable interior orientation elements and imaging distortion coefficients, and allows surveying and three-dimensional modelling to proceed simultaneously, so that close-range photogrammetric accuracy is greatly improved while measurement data become more convenient and intuitive to acquire.

1. A method for realizing close-range photogrammetry and three-dimensional visualization is characterized by comprising the following steps:

S1, equipment calibration: first, select the video acquisition equipment to take part in the photogrammetry operation; then, taking the collinearity equations based on space resection (spatial back intersection) as the basis of the data calculation and the image-point coordinates as the observed values, calculate and measure the interior orientation elements (x₀, y₀), the radial distortion parameters (k₁, k₂, k₃), the decentering distortion parameters (p₁, p₂) and the area-array distortion parameters (b₁, b₂) of the selected video acquisition equipment; adjust and set the video acquisition equipment for the photogrammetry operation according to the calculation results, and recover the correct shape of the imaging light beam;

S2, photogrammetry: according to the structural characteristics of the target object to be measured, mount and position the video acquisition equipment selected in step S1 for the photogrammetry operation in either of two basic modes, normal (parallel-axis) photography or rotating multi-baseline photography, with the distance between the video acquisition equipment and the target object being 1/4 to 1/5 of the average photographic depth;

S3, three-dimensional visualization processing: after step S2 is completed, transmit the interior orientation elements (x₀, y₀), radial distortion parameters (k₁, k₂, k₃), decentering distortion parameters (p₁, p₂) and area-array distortion parameters (b₁, b₂) measured in step S1, together with the target-object image data collected in step S2, to a three-dimensional visualization computer application program; generate control-point data for the target area from the recorded target-object image data collected in step S2, then match the surveying data, i.e. the collected target-object image data, and synchronously generate a point cloud in the target-object imagery through the control-point adjustment operation; finally, edit the point cloud to obtain the three-dimensional surveying and mapping image data of the target object.

2. The method for realizing close-range photogrammetry and three-dimensional visualization as claimed in claim 1, wherein, in step S1:

the collinearity equations based on space resection are:
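(The formula image is not reproduced in the source text; the standard form assumed here, with (X_S, Y_S, Z_S) the coordinates of the projection centre, f the principal distance, and a_i, b_i, c_i the elements of the rotation matrix formed from the angular orientation elements, is)

$$x - x_0 = -f\,\frac{a_1 (X - X_S) + b_1 (Y - Y_S) + c_1 (Z - Z_S)}{a_3 (X - X_S) + b_3 (Y - Y_S) + c_3 (Z - Z_S)}, \qquad y - y_0 = -f\,\frac{a_2 (X - X_S) + b_2 (Y - Y_S) + c_2 (Z - Z_S)}{a_3 (X - X_S) + b_3 (Y - Y_S) + c_3 (Z - Z_S)}$$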

3. The method for realizing close-range photogrammetry and three-dimensional visualization as claimed in claim 1, wherein step S1 is implemented as follows:

first, a calibration field is established indoors and the coordinates of the indoor control points are measured and fixed; each control point is observed by the method of rounds, with one round of observation, and the distance is measured only once;

then the observation data are processed with the Australis software to obtain the corresponding parameters of the video acquisition equipment selected for the photogrammetry operation.

4. The method for realizing close-range photogrammetry and three-dimensional visualization as claimed in claim 1, wherein the control points are arranged in a spatial array at a spacing of 30 cm × 20 cm or 30 cm × 30 cm.

5. The method for realizing close-range photogrammetry and three-dimensional visualization as claimed in claim 1, wherein, in step S2, when the photographic object is a vertical wall surface, the normal (parallel-axis) photography mode is adopted; when the photographic object has a large variation in imaging depth, the rotating multi-baseline photography mode is adopted.

6. The method for realizing close-range photogrammetry and three-dimensional visualization as claimed in claim 1, wherein, in step S3, the three-dimensional visualization computer application program is LensPhoto V2.0.

7. The method for realizing close-range photogrammetry and three-dimensional visualization as claimed in claim 1, wherein, in step S3, the control point adjustment function is:
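(The formula image is not reproduced in the source text; a commonly used image-point correction model consistent with the coefficients named below, assumed here as a reconstruction, is)

$$\Delta x = \bar{x}\,(K_1 r^2 + K_2 r^4) + P_1 (r^2 + 2\bar{x}^2) + 2 P_2 \bar{x}\bar{y}, \qquad \Delta y = \bar{y}\,(K_1 r^2 + K_2 r^4) + P_2 (r^2 + 2\bar{y}^2) + 2 P_1 \bar{x}\bar{y}$$

with $\bar{x} = x - x_0$, $\bar{y} = y - y_0$ and $r^2 = \bar{x}^2 + \bar{y}^2$.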

in the formula: k1,K2Radial distortion is poor; p1,P2The eccentricity distortion becomes poor.

8. The method for realizing close-range photogrammetry and three-dimensional visualization as claimed in claim 1, wherein, in step S3, the point cloud is edited according to the following rules:

1) process the point cloud generated by automatic matching and remove redundant points;

2) when delineating the model range, take care that the range is not too large; the selection may be circled several times and the three-dimensional model reconstructed section by section;

3) when delineating the ranges of different partitions, a certain degree of overlap should be preserved between adjacent partitions.

Technical Field

The invention belongs to the field of visual surveying and mapping technology, and particularly relates to a method for realizing close-range photogrammetry and three-dimensional visualization.

Background

With the continuous development of photogrammetry, close-range photogrammetry has also matured, and its advantages as an important supplement to aerial photogrammetry are becoming ever more apparent. Replacing the photo-theodolite with an ordinary digital camera, photographing in flexible configurations, and performing close-range photogrammetry on a digital photogrammetric workstation has become the inevitable direction of development of close-range photogrammetry.

However, because the mechanical structure of an ordinary digital camera is unstable, its interior orientation elements and imaging distortion coefficients are also unstable; this makes camera calibration difficult and restricts the popularization and application of close-range photogrammetry.

Therefore, in view of this situation, there is an urgent need for a close-range photogrammetry method that overcomes the shortcomings of existing equipment and operations and effectively improves measurement efficiency and accuracy, so as to meet the needs of practical work.

Disclosure of Invention

The invention provides a method for realizing close-range photogrammetry and three-dimensional visualization, which aims to solve the problems in the background technology.

In order to achieve the technical purpose, the invention provides the following technical scheme:

a method for realizing close-range photogrammetry and three-dimensional visualization comprises the following steps:

s1, device verification, selecting the video capture device to participate in the photogrammetry operation, and then using the mathematical model based on space back intersection and collinearityThe equation is used as the data operation basis, the coordinate of the image point is used as an observed value, and the selected orientation element (x) in the video acquisition equipment is subjected to0,y0) Radial distortion parameter (k)1,k2,k3) Eccentric distortion parameter (p)1,p2) Distortion parameter in area array (b)1,b2) Calculating and measuring, adjusting and setting the photogrammetry operation video acquisition equipment according to the calculation result, and recovering the correct shape of the image light beam;

s2, photogrammetry, namely, according to the structural characteristics of the target object to be measured, mounting and positioning the video acquisition equipment selected in the step S1 and participating in photogrammetry operation according to any one of two basic modes, namely a straight photography mode and a rotary multi-baseline photography mode, wherein the distance between the video acquisition equipment for photogrammetry operation and the target object is 1/4-1/5 of the average photography depth;

s3, three-dimensional visualization processing, after the step S2 is completed, the measured internal orientation element (x) is calculated in the step S10,y0) Radial distortion parameter (k)1,k2,k3) Eccentric distortion parameter (p)1, p2) Distortion parameter in area array (b)1,b2) The parameters and the target object image data collected in the step S2 are transmitted to a three-dimensional visualization computer system together, target area control point data are generated according to the recorded target object image data collected in the step S2, then the mapping data are the collected target object image data to be matched, and point clouds are synchronously generated in the target object image through control point adjustment operation; and finally, editing the point cloud to obtain the three-dimensional mapping image data of the target object.

Further, in step S1:

the collinearity equations based on space resection are as given under claim 2 above.

Further, in step S1, when the calibration operation is performed:

first, a calibration field is established indoors and the coordinates of the indoor control points are measured and fixed; each control point is observed by the method of rounds, with one round of observation, and the distance is measured only once;

then the observation data are processed with the Australis software to obtain the corresponding parameters of the video acquisition equipment selected for the photogrammetry operation.

Further, the control points are arranged in a spatial array at a spacing of 30 cm × 20 cm or 30 cm × 30 cm.

Further, in step S2, when the photographic object is a vertical wall surface, the normal (parallel-axis) photography mode is adopted; when the photographic object has a large variation in imaging depth, the rotating multi-baseline photography mode is adopted.

Further, in step S3, the three-dimensional visualization computer application program is LensPhoto V2.0.

Further, in step S3, the control point adjustment function is as given under claim 7 above.

Further, in step S3, the point cloud is edited according to the following rules:

1) process the point cloud generated by automatic matching and remove redundant points;

2) when delineating the model range, take care that the range is not too large; the selection may be circled several times and the three-dimensional model reconstructed section by section;

3) when delineating the ranges of different partitions, a certain degree of overlap should be preserved between adjacent partitions.

On the one hand, the invention has good universality and is convenient to operate and implement: it can effectively meet the need to carry out close-range photogrammetry with various types of ordinary digital cameras, and the need for comprehensive close-range photogrammetry in a wide range of scenes. On the other hand, it effectively overcomes the poor measurement accuracy caused by unstable interior orientation elements and imaging distortion coefficients, and allows surveying and three-dimensional modelling to proceed simultaneously, so that close-range photogrammetric accuracy is greatly improved while measurement data become more convenient and intuitive to acquire.

Drawings

FIG. 1 is a schematic flow diagram of the process of the present invention;

FIG. 2 is a schematic diagram of a partial structure of a calibration field;

FIG. 3 is a data statistics table of camera calibration results;

FIG. 4 is a schematic diagram of the point cloud partitioning of the target object;

FIG. 5 is a schematic structural diagram of the target object after three-dimensional visualization.

Detailed Description

In order to make the technical means, creative features, objectives and effects of the invention easy to understand, the invention is further described below with reference to specific embodiments.

As shown in FIGS. 1-5, a method for realizing close-range photogrammetry and three-dimensional visualization comprises the following steps:

S1, equipment calibration: first, select the video acquisition equipment to take part in the photogrammetry operation; then, taking the collinearity-equation mathematical model based on space resection as the basis of the data calculation and the image-point coordinates as the observed values, calculate and measure the interior orientation elements (x₀, y₀), the radial distortion parameters (k₁, k₂, k₃), the decentering distortion parameters (p₁, p₂) and the area-array distortion parameters (b₁, b₂) of the selected video acquisition equipment; adjust and set the video acquisition equipment for the photogrammetry operation according to the calculation results, and recover the correct shape of the imaging light beam;

S2, photogrammetry: according to the structural characteristics of the target object to be measured, mount and position the video acquisition equipment selected in step S1 for the photogrammetry operation in either of two basic modes, normal (parallel-axis) photography or rotating multi-baseline photography, with the distance between the video acquisition equipment and the target object being 1/4 to 1/5 of the average photographic depth;

s3, three-dimensional visualization processing, after the step S2 is completed, the measured internal orientation element (x) is calculated in the step S10,y0) Radial distortion parameter (k)1,k2,k3) Eccentric distortion parameter (p)1, p2) Distortion parameter in area array (b)1,b2) The parameters and the target object image data collected in the step S2 are transmitted to a three-dimensional visualization computer system together, target area control point data are generated according to the recorded target object image data collected in the step S2, then the mapping data are the collected target object image data to be matched, and point clouds are synchronously generated in the target object image through control point adjustment operation; and finally, editing the point cloud to obtain the three-dimensional mapping image data of the target object.

Of particular note, in step S1:

the collinearity-equation mathematical model based on space resection is as given under claim 2 above.

meanwhile, in the step S1, when the verification job is performed:

first, a calibration field is established indoors and the coordinates of the indoor control points are measured and fixed; each control point is observed by the method of rounds, with one round of observation, and the distance is measured only once;

then the observation data are processed with the Australis software to obtain the corresponding parameters of the video acquisition equipment selected for the photogrammetry operation.

further preferably, the control point pitch or arrangement structure type is: the space array distribution of the distance of 30cm multiplied by 20cm or 30cm multiplied by 30 cm.

In this embodiment, in step S2, when the photographic object is a vertical wall surface, the normal (parallel-axis) photography mode is adopted; when the photographic object has a large variation in imaging depth, the rotating multi-baseline photography mode is adopted.

In addition, in step S3, the three-dimensional visualization computer application program is LensPhoto V2.0.

In step S3, the control point adjustment function is as given under claim 7 above.
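As an illustrative sketch only (not code from the original disclosure), a correction of the form reconstructed under claim 7 could be applied to a measured image point as follows; the coefficient values in the example are hypothetical:

```python
def correct_image_point(x, y, x0, y0, k1, k2, p1, p2):
    """Apply radial (k1, k2) and decentering (p1, p2) corrections to the image
    point (x, y), given the principal point (x0, y0); units in millimetres."""
    dx, dy = x - x0, y - y0                 # coordinates relative to the principal point
    r2 = dx * dx + dy * dy                  # squared radial distance
    radial = k1 * r2 + k2 * r2 * r2         # radial distortion term
    ddx = dx * radial + p1 * (r2 + 2.0 * dx * dx) + 2.0 * p2 * dx * dy
    ddy = dy * radial + p2 * (r2 + 2.0 * dy * dy) + 2.0 * p1 * dx * dy
    return x - ddx, y - ddy                 # corrected image coordinates


# Example with hypothetical coefficients for a point 10 mm off-axis.
print(correct_image_point(10.0, 8.0, 0.1, -0.05, 1e-4, 1e-7, 1e-5, 1e-5))
```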

Specifically, in step S3, the point cloud is edited according to the following rules (see also the sketch after this list):

1) process the point cloud generated by automatic matching and remove redundant points;

2) when delineating the model range, take care that the range is not too large; the selection may be circled several times and the three-dimensional model reconstructed section by section;

3) when delineating the ranges of different partitions, a certain degree of overlap should be preserved between adjacent partitions.
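The disclosure performs this editing inside the three-dimensional visualization software. Purely as an illustrative sketch, and not the workflow of the original text, the three rules above could be approximated with the open-source Open3D library; the point cloud and section bounds below are synthetic stand-ins:

```python
import numpy as np
import open3d as o3d

# Stand-in for the point cloud produced by automatic matching (normally exported
# from the photogrammetric software); random points keep the sketch self-contained.
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.random.rand(20000, 3) * [4.0, 2.0, 3.0])

# Rule 1: thin the matched cloud and remove redundant/noisy points.
pcd = pcd.voxel_down_sample(voxel_size=0.02)                      # merge near-duplicates
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Rule 2: keep each delineated model range small, cropping one section at a time.
section_1 = pcd.crop(o3d.geometry.AxisAlignedBoundingBox([0.0, 0.0, 0.0],
                                                          [2.1, 2.0, 3.0]))

# Rule 3: preserve some overlap (here 0.2 m along X) between adjacent sections.
section_2 = pcd.crop(o3d.geometry.AxisAlignedBoundingBox([1.9, 0.0, 0.0],
                                                          [4.0, 2.0, 3.0]))

print(len(section_1.points), len(section_2.points))
```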

In order to explain the technical content of the present invention fully, and to help those skilled in the relevant field understand and master it, the technical solution of the invention is described below with reference to an EOS450D digital camera as a specific embodiment:

As shown in FIGS. 1-5, the method for realizing close-range photogrammetry and three-dimensional visualization comprises the following steps:

S1, equipment calibration: the EOS450D digital camera is first calibrated so as to recover the correct shape of the imaging light beam, i.e. the interior orientation elements and the imaging distortion coefficients are obtained through calibration. The calibration of the digital camera comprises: measurement of the principal point coordinates (x₀, y₀), measurement of the principal distance (f), measurement of the optical distortion coefficients, and measurement of the distortion coefficients within the CCD area array. The calibration adopts a mathematical model based on space resection: based on the collinearity equations and taking the image-point coordinates as observed values, the interior and exterior orientation elements, the distortion coefficients and other additional parameters of the camera are solved;

during actual verification, a verification field is established indoors, coordinates of indoor control points are measured by adopting a GTS-3100 TOPCON total station, observation is carried out by a echo method, one echo is observed, and the distance is measured only once. The total station is a 5' grade total station, the measurement precision can reach millimeter level, and the measurement precision completely meets the requirement.

The observation data are then processed with the Australis software to obtain the camera parameters of the EOS450D digital camera: the interior orientation elements (x₀, y₀), radial distortion parameters (k₁, k₂, k₃), decentering distortion parameters (p₁, p₂) and area-array distortion parameters (b₁, b₂).
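The parameter set above is obtained with Australis in the disclosure. Purely as an illustrative sketch, and not the workflow of the original text, a comparable set of parameters could be estimated with the open-source OpenCV library from photographs of a planar control field; the control-field geometry, camera values and poses below are synthetic stand-ins:

```python
import numpy as np
import cv2

# Synthetic planar control field: points spaced 0.30 m x 0.20 m (cf. the spacing above), z = 0.
grid = np.array([[c * 0.30, r * 0.20, 0.0] for c in range(6) for r in range(5)],
                dtype=np.float32)

# "True" interior orientation and distortion, used only to simulate the observations.
K_true = np.array([[4300.0, 0.0, 2136.0],
                   [0.0, 4300.0, 1424.0],
                   [0.0, 0.0, 1.0]])
dist_true = np.array([1e-2, -1e-3, 1e-4, 1e-4, 1e-5])     # k1, k2, p1, p2, k3

object_points, image_points = [], []
for i in range(8):                                          # eight photographs of the field
    rvec = np.array([0.05 * i, -0.04 * i, 0.03 * i])        # varied camera orientations
    tvec = np.array([-0.8, -0.4, 3.0 + 0.1 * i])            # roughly 3 m photographic distance
    img, _ = cv2.projectPoints(grid, rvec, tvec, K_true, dist_true)
    object_points.append(grid)
    image_points.append(img.astype(np.float32))

# calibrateCamera recovers the camera matrix (principal point, focal length) and the
# distortion vector (k1, k2, p1, p2, k3), i.e. the same kinds of interior orientation
# and distortion parameters that the text obtains from Australis.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, (4272, 2848), None, None)
print("reprojection RMS (pixels):", rms)
print("principal point:", K[0, 2], K[1, 2])
print("k1, k2, p1, p2, k3:", dist.ravel()[:5])
```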

S2, photogrammetry: according to the structural characteristics of the target object to be measured, the video acquisition equipment selected in step S1 is mounted and positioned in either of the two basic modes, normal (parallel-axis) photography or rotating multi-baseline photography, with the distance between the video acquisition equipment and the target object being 1/4 to 1/5 of the average photographic depth. When the photographic object is a vertical wall surface, the normal (parallel-axis) photography mode is adopted; when the imaging depth varies greatly, the rotating multi-baseline photography mode is adopted.
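The figures and helper below are illustrative only and are not taken from the original text; they simply restate the mode-selection and spacing rule of step S2 in code form:

```python
def plan_photography(average_depth_m, vertical_wall, large_depth_variation):
    """Restate the S2 set-up rules: pick a basic photography mode and derive the
    distance stated in the text as 1/4 to 1/5 of the average photographic depth."""
    if vertical_wall:
        mode = "normal (parallel-axis) photography"
    elif large_depth_variation:
        mode = "rotating multi-baseline photography"
    else:
        mode = "either basic mode"
    distance_range_m = (average_depth_m / 5.0, average_depth_m / 4.0)
    return mode, distance_range_m


# Example: a vertical facade with an average photographic depth of 10 m.
print(plan_photography(10.0, vertical_wall=True, large_depth_variation=False))
```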

s3, three-dimensional visualization processing, after the step S2 is completed, the measured internal orientation element (x) is calculated in the step S10,y0) Radial distortion parameter (k)1,k2,k3) Eccentric distortion parameter (p)1, p2) Distortion parameter in area array (b)1,b2) The parameters and the target object image data collected in the step S2 are transmitted to a three-dimensional visualization computer system together, target area control point data are generated according to the recorded target object image data collected in the step S2, then the mapping data are the collected target object image data to be matched, and point clouds are synchronously generated in the target object image through control point adjustment operation; and finally, editing the point cloud to obtain the three-dimensional mapping image data of the target object.

The adjustment accuracy of the current data, obtained through the adjustment operation on the control points, is as follows:

Overall adjustment error: 0.000880 (general reference value: less than 1/2 pixel counts as valid data)
Errors in the X, Y and Z directions: [rmsx] = 0.0009, [rmsy] = 0.0009, [rmsz] = 0.0004
Error in plane: 0.0013
Depth distance: 3.4280
Relative accuracy of depth: 1/2607
Relative accuracy of plane: 1/9581
Relative accuracy of point position: 1/2516

On the one hand, the method has good universality and is convenient to operate and implement: it can effectively meet the need to carry out close-range photogrammetry with various types of ordinary digital cameras, and the need for comprehensive close-range photogrammetry in a wide range of scenes. On the other hand, it effectively overcomes the poor measurement accuracy caused by unstable interior orientation elements and imaging distortion coefficients, and allows surveying and three-dimensional modelling to proceed simultaneously, so that close-range photogrammetric accuracy is greatly improved while measurement data become more convenient and intuitive to acquire.

The foregoing is a more detailed description of the present invention and is not to be construed as limiting the invention. To those skilled in the art to which the invention relates, numerous changes, substitutions and alterations can be made without departing from the spirit of the invention, and these changes are deemed to be within the scope of the invention as defined by the appended claims.
