Binocular collaborative drop-point measurement method without additional control points

Document No.: 1919332    Publication date: 2021-12-03

Reading note: This technique, "Binocular collaborative drop-point measurement method without additional control points", was designed and created by 谷俊豪, 张永栋, 赵梓年, 田野, 陈洪林 and 高新 on 2021-08-23. Its main content is as follows: The invention belongs to the technical field of machine vision and photogrammetry and provides a binocular collaborative drop-point measurement method without additional control points. The target surface is measured in advance; the placement of the two cameras is then planned so that each camera can observe both the target area and the other camera, and a relative-pose model of the two cameras is established from their spatial positions and image coordinates. The position of the target point is then measured from its pixel coordinates and the target-surface information. Finally, the measured target-point coordinates can be substituted back to complete the calibration of the cameras. The method requires no additional control points, greatly simplifies measurement preparation, and adapts to more scenes; it requires no iterative optimization, so computation is efficient and the results are stable.

1. A binocular collaborative drop-point measurement method without additional control points, wherein the two participating cameras are denoted $A$ and $B$, the target point to be measured on the target surface is denoted $X$ with spatial coordinate $P_X(x_X, y_X, z_X)$, the other target points are $P_i(x_i, y_i, z_i)$, and each camera is provided with an RTK terminal that outputs the spatial coordinate of its position in real time, the measurement method being characterized by comprising the following steps:

Step 1. Measure the target-surface equation $f_{\text{target}}(P) = 0$.

Step 2. Obtain the internal parameters of cameras $A$ and $B$.

Obtain the internal parameters of the two cameras from an internal calibration method or from the cameras' factory performance parameters; the internal parameters comprise the lens focal length $f$, pixel size $d$, and principal-point coordinates $(u_0, v_0)$ of camera $A$, and the lens focal length $f'$, pixel size $d'$, and principal-point coordinates $(u'_0, v'_0)$ of camera $B$;

Step 3. Before the target is hit, place cameras $A$ and $B$ near the target area, ensuring that each camera can observe the target-surface area and the other camera;

Step 4. Acquire and record images of the target's motion, select from them the images at the moment the target is hit, and record the camera coordinates displayed by RTK at that moment as $P_A(x_A, y_A, z_A)$ and $P_B(x_B, y_B, z_B)$; the pixel coordinates of camera $B$, point $X$, and the other target points in camera $A$'s image are $(u_B, v_B)$, $(u_X, v_X)$, and $(u_i, v_i)$; the pixel coordinates of camera $A$, point $X$, and the other target points in camera $B$'s image are $(u'_A, v'_A)$, $(u'_X, v'_X)$, and $(u'_i, v'_i)$;

Step 5. Calculate the target's spatial coordinates from the camera position information and the acquired images:

5.1 In the image coordinate system of camera $A$, camera $A$ is located at the origin $p_A(0,0,0)$, and the image points of camera $B$ and point $X$ correspond to the spatial coordinates $p_B(u_B - u_0, v_B - v_0, f_x)$ and $p_X(u_X - u_0, v_X - v_0, f_x)$, respectively; in the image coordinate system of camera $B$, camera $B$ is located at the origin $p'_B(0,0,0)$, and the image points of camera $A$ and point $X$ correspond to the spatial coordinates $p'_A(u'_A - u'_0, v'_A - v'_0, f'_x)$ and $p'_X(u'_X - u'_0, v'_X - v'_0, f'_x)$, where $f_x = f/d$ and $f'_x = f'/d'$ are the equivalent focal lengths of the two cameras;

5.2 According to the pinhole imaging model, the optical center of a camera, an image point, and the corresponding target point are collinear, so the angles between the observation rays at the two stations can be established as

$$\cos\alpha = \frac{p_B \cdot p_X}{\|p_B\|\,\|p_X\|}, \qquad \cos\beta = \frac{p'_A \cdot p'_X}{\|p'_A\|\,\|p'_X\|} \tag{2}$$

where $\alpha$ is the angle at camera $A$ between the directions to camera $B$ and to point $X$, and $\beta$ is the corresponding angle at camera $B$;

5.3 From the geometric relations, the distance $h$ from point $P_X$ to the line $P_A P_B$ can be obtained as

$$h = \frac{\|P_B - P_A\|}{\cot\alpha + \cot\beta} \tag{3}$$

and the coordinate $P_C$ of the foot of the perpendicular is

$$P_C = P_A + \frac{h\cot\alpha}{\|P_B - P_A\|}(P_B - P_A) \tag{4}$$

5.4 At this point, 3 equations on $P_X$ have been obtained:

$$\begin{cases} f_{\text{target}}(P_X) = 0 \\ \|P_X - P_C\| = h \\ (P_X - P_C) \cdot (P_B - P_A) = 0 \end{cases} \tag{5}$$

Solving these equations yields the coordinate $P_X$ of the target to be measured.

2. The binocular collaborative drop-point measurement method without additional control points as claimed in claim 1, wherein step 1 measures the target-surface equation as follows: since any plane can be determined by the 3-point method, an RTK terminal or a camera with RTK is placed in turn at any 3 non-collinear points on the target surface, and the RTK results are recorded as $P_1(x_1, y_1, z_1)$, $P_2(x_2, y_2, z_2)$, and $P_3(x_3, y_3, z_3)$;

the 3 point positions can be selected arbitrarily without marking or special processing, and the RTK terminal can be recovered and reused.

3. The binocular collaborative drop-point measurement method without additional control points according to claim 1, wherein in step 1 the target-surface equation is obtained by surveying and mapping, by reading the target structure design, or by similar means.

4. The binocular collaborative drop-point measurement method without additional control points of claim 1, wherein the cameras $A$ and $B$ are mounted on aerial drones, and the drones hover at suitable positions to reach the observation positions of step 3.

5. The binocular collaborative drop-point measurement method without additional control points of claim 1, wherein, once the spatial positions of the two cameras $A$ and $B$ and the spatial coordinates of the measured target are known, the external parameters of the two cameras can be calibrated as a P2P (perspective-two-point) problem.

6. The binocular collaborative drop-point measurement method without additional control points according to claim 5, wherein the external parameters of the cameras are calibrated as follows. The rotation matrix corresponds to the rotation from the world coordinate system to the camera coordinate system, i.e. a vector $\vec{v}$ before rotation and the rotated vector $\vec{v}'$ satisfy

$$\vec{v}' = R\,\vec{v} \tag{6}$$

The set of orthonormal vectors before rotation for camera $A$ is

$$V_A = \begin{bmatrix} \vec{v}_1 \\ \vec{v}_2 \\ \vec{v}_3 \end{bmatrix}, \quad \vec{v}_1 = \frac{P_B - P_A}{\|P_B - P_A\|}, \quad \vec{v}_2 = \frac{\vec{v}_1 \times (P_X - P_A)}{\|\vec{v}_1 \times (P_X - P_A)\|}, \quad \vec{v}_3 = \vec{v}_1 \times \vec{v}_2 \tag{7}$$

and the corresponding vectors after rotation are

$$V'_A = \begin{bmatrix} \vec{v}'_1 \\ \vec{v}'_2 \\ \vec{v}'_3 \end{bmatrix}, \quad \vec{v}'_1 = \frac{p_B}{\|p_B\|}, \quad \vec{v}'_2 = \frac{\vec{v}'_1 \times p_X}{\|\vec{v}'_1 \times p_X\|}, \quad \vec{v}'_3 = \vec{v}'_1 \times \vec{v}'_2 \tag{8}$$

The two sets satisfy $(V'_A)^T = R_A V_A^T$; since $V'_A$, $R_A$, and $V_A$ are all orthogonal matrices,

$$R_A = (V'_A)^T V_A \tag{9}$$

The translation vector of camera $A$ is then

$$T_A = -R_A \cdot P_A^T \tag{10}$$

Similarly, the external parameters $R_B$ and $T_B$ of camera $B$ can be calibrated.

7. The binocular collaborative drop-point measurement method without additional control points according to claim 1, wherein, when the equations of step 5.4 yield multiple solutions, screening conditions are applied: if the cameras are mounted upright so that the roll angle is small, this is taken as the default screening condition; if two or more target points exist on the target surface, the consistency of the camera external parameters measured from the two points is used as the screening condition.

8. The binocular collaborative drop-point measurement method without additional control points as claimed in claim 7, wherein, if there are points to be measured outside the target surface, the measurement of the remaining points is completed using a conventional intersection measurement method based on the camera extrinsic calibration result.

9. The binocular collaborative drop-point measurement method without additional control points according to claim 8, wherein the measurement process of the remaining points is as follows:

according to the pinhole imaging model, the space coordinate P of the target pointiAnd imaging coordinate pi、p′iThe mapping relation is satisfied:

where $A_A$ and $A_B$, the intrinsic matrices of cameras $A$ and $B$ respectively, may be expressed as

$$A_A = \begin{bmatrix} f/d & 0 & u_0 \\ 0 & f/d & v_0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad A_B = \begin{bmatrix} f'/d' & 0 & u'_0 \\ 0 & f'/d' & v'_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{12}$$

equation set of two cameras simultaneously, can solve Pi

Technical Field

The invention belongs to the technical field of machine vision and photogrammetry, and particularly relates to a camera-image-based drop-point measuring method.

Background

According to the principles of photogrammetry, measuring drop-point coordinates with two cameras requires the position and attitude information of the cameras. For a camera mounted on an unmanned aerial vehicle or a turntable, the position or attitude changes with time and cannot be calibrated in advance. Current calibration methods can be divided into two main categories. The first is calibration based on multiple control points, in which the camera position and attitude are calculated from the image and space coordinates of at least 3 control points (Wang P, Xu G, Wang Z, et al. An efficient solution to the perspective-three-point pose problem [J]. Computer Vision and Image Understanding, 2018, 166: 81-87.); this approach has high measurement accuracy and solves for the complete pose, but it requires sufficiently distinct control points in the field of view, which places demands on the environment or on measurement preparation. The second is to obtain the camera position and attitude directly with a POS system or similar equipment: in position measurement, small positioning devices based on the RTK principle are now mass-produced, can be mounted on small platforms such as unmanned aerial vehicles, and reach centimeter-level accuracy; in attitude measurement, however, the requirements of miniaturization, high precision, and non-stationary operation cannot currently be met simultaneously, so high-precision calibration of cameras carried on unmanned aerial vehicles and small turntables cannot be achieved. In addition, some newer methods combine the advantages of the two categories: the easily obtained part of the information is quickly measured with equipment such as GPS receivers and levels, and the remaining hard-to-measure information is solved from distributed control points (Wang Z, Deng Z, Cao Y, et al. Rapid calibration of a field large-field-of-view single-camera spatial coordinate measurement system [J]. Optics and Precision Engineering, 2017, 25: 1961-1967.); but equipment such as levels is only suitable for obtaining the attitude of a static camera, and such methods still need a certain number of control points for assistance.

In some measurement environments, the point to be measured is the intersection point of the target with a fixed plane (or the target moves only on one fixed plane). This plane is called the target surface and the intersection point is called the drop point. In this case the unknowns are reduced by one dimension, so that, using RTK equipment to obtain the cameras' spatial positions, the measurement can be completed without any additional control points.

Disclosure of Invention

The invention aims to provide a binocular collaborative drop-point measurement method without additional control points, solving the technical problem that existing drop-point measurement methods must either arrange multiple control points or use expensive measuring equipment to acquire the camera attitude.

In order to achieve the above purpose and solve the above technical problems, the technical solution of the present invention is as follows:

a binocular collaborative drop point measurement method without additional control points is characterized in that A, B are recorded for two cameras participating in measurement, X is recorded for a target point to be measured on a target surface, and P is recorded for the spatial coordinate of the X pointX(xX,yX,zX) The other target points are Pi(xi,yi,zi) The camera is provided with an RTK terminal and can output the space coordinate of the position in real time, and the measuring method is characterized by comprising the following steps:

Step 1. Measure the target-surface equation $f_{\text{target}}(P) = 0$.

Step 2. Obtain the internal parameters of cameras $A$ and $B$.

Obtain the internal parameters of the two cameras from an internal calibration method or from the cameras' factory performance parameters; the internal parameters comprise the lens focal length $f$, pixel size $d$, and principal-point coordinates $(u_0, v_0)$ of camera $A$, and the lens focal length $f'$, pixel size $d'$, and principal-point coordinates $(u'_0, v'_0)$ of camera $B$;

Step 3. Before the target is hit, place cameras $A$ and $B$ near the target area, ensuring that each camera can observe the target-surface area and the other camera;

Step 4. Acquire and record images of the target's motion, select from them the images at the moment the target is hit, and record the camera coordinates displayed by RTK at that moment as $P_A(x_A, y_A, z_A)$ and $P_B(x_B, y_B, z_B)$; the pixel coordinates of camera $B$, point $X$, and the other target points in camera $A$'s image are $(u_B, v_B)$, $(u_X, v_X)$, and $(u_i, v_i)$; the pixel coordinates of camera $A$, point $X$, and the other target points in camera $B$'s image are $(u'_A, v'_A)$, $(u'_X, v'_X)$, and $(u'_i, v'_i)$;

Step 5. Calculate the target's spatial coordinates from the camera position information and the acquired images:

5.1 In the image coordinate system of camera $A$, camera $A$ is located at the origin $p_A(0,0,0)$, and the image points of camera $B$ and point $X$ correspond to the spatial coordinates $p_B(u_B - u_0, v_B - v_0, f_x)$ and $p_X(u_X - u_0, v_X - v_0, f_x)$, respectively; in the image coordinate system of camera $B$, camera $B$ is located at the origin $p'_B(0,0,0)$, and the image points of camera $A$ and point $X$ correspond to the spatial coordinates $p'_A(u'_A - u'_0, v'_A - v'_0, f'_x)$ and $p'_X(u'_X - u'_0, v'_X - v'_0, f'_x)$, where $f_x = f/d$ and $f'_x = f'/d'$ are the equivalent focal lengths of the two cameras;

5.2 According to the pinhole imaging model, the optical center of a camera, an image point, and the corresponding target point are collinear, so the angles between the observation rays at the two stations can be established as

$$\cos\alpha = \frac{p_B \cdot p_X}{\|p_B\|\,\|p_X\|}, \qquad \cos\beta = \frac{p'_A \cdot p'_X}{\|p'_A\|\,\|p'_X\|} \tag{2}$$

where $\alpha$ is the angle at camera $A$ between the directions to camera $B$ and to point $X$, and $\beta$ is the corresponding angle at camera $B$;

5.3 From the geometric relations, the distance $h$ from point $P_X$ to the line $P_A P_B$ can be obtained as

$$h = \frac{\|P_B - P_A\|}{\cot\alpha + \cot\beta} \tag{3}$$

and the coordinate $P_C$ of the foot of the perpendicular is

$$P_C = P_A + \frac{h\cot\alpha}{\|P_B - P_A\|}(P_B - P_A) \tag{4}$$

5.4 At this point, 3 equations on $P_X$ have been obtained:

$$\begin{cases} f_{\text{target}}(P_X) = 0 \\ \|P_X - P_C\| = h \\ (P_X - P_C) \cdot (P_B - P_A) = 0 \end{cases} \tag{5}$$

Solving these equations yields the coordinate $P_X$ of the target to be measured.

The beneficial effects of the invention are as follows:

1. The invention provides a binocular collaborative drop-point measurement method without additional control points. The invention needs no additional control points, greatly simplifies the measurement preparation process, adapts to more scenes, and can complete high-precision drop-point measurement when the preparation period is short or when the test site is inconvenient for arranging control points (for example, on a water surface).

2. The method needs no iterative optimization, so computation is efficient and the results are stable.

3. The invention mainly uses cameras and RTK equipment, which are mature technologies with low price and small size and weight, and can be carried on small platforms such as unmanned aerial vehicles.

4. After the drop-point measurement is completed, the invention can treat the drop point and the other camera as control points to complete camera calibration, which can then be used to measure target points that are not on the target surface.

Drawings

FIG. 1 is a schematic view of the measurement layout of the binocular collaborative drop-point measurement method without additional control points according to the present invention;

FIG. 2 shows the camera imaging results according to embodiment 1 of the present invention;

FIG. 2(a) is a schematic image from camera A;

FIG. 2(b) is a schematic image from camera B.

Detailed Description

The implementation of the present invention is explained and illustrated in detail below with reference to the accompanying drawings.

The general implementation idea of the invention is as follows: first, the target surface is measured in advance; then the placement of the two cameras is planned so that each camera can observe both the target area and the other camera, and a relative-pose model of the two cameras is established from their spatial positions and image coordinates; next, the position of the target point is measured from its pixel coordinates and the target-surface information; finally, the measured target-point coordinates can be substituted back to complete the calibration of the cameras. The method needs neither additional control points nor iterative optimization from initial values; it measures the spatial position of a target point on the target surface while simultaneously calibrating the cameras, and the calibration result can be used to measure the spatial positions of other target points (which need not lie on the target surface).

A binocular collaborative drop-point measurement method without additional control points is disclosed; the measurement layout is shown in FIG. 1.

The two cameras are denoted $A$ and $B$, the target to be measured on the target surface is denoted $X$, the spatial coordinate of point $X$ is $P_X(x_X, y_X, z_X)$, the other target points are $P_i(x_i, y_i, z_i)$, and each camera is provided with an RTK terminal that outputs the spatial coordinate of its position in real time. The main measurement steps are as follows:

Step 1. Measure the target-surface equation.

In general, any plane can be determined by the 3-point method. An RTK terminal (or a camera with RTK) is placed in turn at any 3 non-collinear points on the target surface, and the RTK results are recorded as $P_1(x_1, y_1, z_1)$, $P_2(x_2, y_2, z_2)$, $P_3(x_3, y_3, z_3)$. The 3 point positions can be selected arbitrarily without marking or special processing, and the RTK terminal can be recovered and reused. The target-surface equation is given by formula (1):

$$f_{\text{target}}(P) = \begin{vmatrix} x - x_1 & y - y_1 & z - z_1 \\ x_2 - x_1 & y_2 - y_1 & z_2 - z_1 \\ x_3 - x_1 & y_3 - y_1 & z_3 - z_1 \end{vmatrix} = 0 \tag{1}$$

If conditions allow, the target-surface equation can also be obtained by surveying and mapping, by reading the target structure design, or by similar means.
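As an illustration of step 1, the following is a minimal Python sketch of the 3-point plane construction; the helper name `plane_from_points` and the $(n, d)$ plane representation (unit normal plus offset, so the plane is $n \cdot p + d = 0$) are illustrative assumptions, not part of the patent.

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Fit the target-surface plane through 3 non-collinear RTK points.

    Returns (n, d) such that the plane is n . p + d = 0, with n a unit normal.
    """
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)       # normal via cross product
    n /= np.linalg.norm(n)               # normalize; points must not be collinear
    d = -np.dot(n, p1)
    return n, d

# Example with the three RTK points measured in embodiment 1:
n, d = plane_from_points((0.02, -0.04, 15.00),
                         (49.93, -10.03, -0.17),
                         (29.95, 89.98, -10.03))
```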

Step 2. Obtain the main internal parameters of the two cameras from an internal calibration method or from the cameras' factory performance parameters; these comprise the lens focal length $f$, pixel size $d$, and principal-point coordinates $(u_0, v_0)$ of camera $A$, and the lens focal length $f'$, pixel size $d'$, and principal-point coordinates $(u'_0, v'_0)$ of camera $B$.

Step 3. Before the target is hit, place cameras $A$ and $B$ (or hover aerial drones carrying them) near the target area, ensuring that each camera can observe the target-surface area and the other camera;

that is, after the layout is completed, the target surface area and the camera B can be observed from the camera A, and the target surface area and the camera A can also be observed from the camera B.

In addition, the cameras can be mounted on aerial photography drones hovering at suitable positions in the air to achieve the same observation conditions. Selecting camera positions with drones removes the restrictions of traditional ground placement, so the choice of camera position becomes far more flexible.

Step 4. Acquire and record images of the target's motion, select from them the images at the moment the target is hit, and record the camera coordinates displayed by RTK at that moment as $P_A(x_A, y_A, z_A)$ and $P_B(x_B, y_B, z_B)$; the pixel coordinates of camera $B$, point $X$, and the other target points in camera $A$'s image are $(u_B, v_B)$, $(u_X, v_X)$, and $(u_i, v_i)$; the pixel coordinates of camera $A$, point $X$, and the other target points in camera $B$'s image are $(u'_A, v'_A)$, $(u'_X, v'_X)$, and $(u'_i, v'_i)$.

Step 5. Calculate the spatial coordinate of the target from the camera position information and the acquired images. The calculation process is as follows:

5.1 In the image coordinate system of camera $A$, camera $A$ is located at the origin $p_A(0,0,0)$, and the image points of camera $B$ and point $X$ correspond to the spatial coordinates $p_B(u_B - u_0, v_B - v_0, f_x)$ and $p_X(u_X - u_0, v_X - v_0, f_x)$, respectively; in the image coordinate system of camera $B$, camera $B$ is located at the origin $p'_B(0,0,0)$, and the image points of camera $A$ and point $X$ correspond to the spatial coordinates $p'_A(u'_A - u'_0, v'_A - v'_0, f'_x)$ and $p'_X(u'_X - u'_0, v'_X - v'_0, f'_x)$, where $f_x = f/d$ and $f'_x = f'/d'$ are the equivalent focal lengths of the two cameras.

5.2 According to the pinhole imaging model, the optical center of a camera, an image point, and the corresponding target point are collinear, so the angles between the observation rays at the two stations can be established as

$$\cos\alpha = \frac{p_B \cdot p_X}{\|p_B\|\,\|p_X\|}, \qquad \cos\beta = \frac{p'_A \cdot p'_X}{\|p'_A\|\,\|p'_X\|} \tag{2}$$

where $\alpha$ is the angle at camera $A$ between the directions to camera $B$ and to point $X$, and $\beta$ is the corresponding angle at camera $B$.

5.3 From the geometric relations, the distance $h$ from point $P_X$ to the line $P_A P_B$ can be obtained as

$$h = \frac{\|P_B - P_A\|}{\cot\alpha + \cot\beta} \tag{3}$$

and the coordinate $P_C$ of the foot of the perpendicular is

$$P_C = P_A + \frac{h\cot\alpha}{\|P_B - P_A\|}(P_B - P_A) \tag{4}$$

5.4 At this point, 3 equations on $P_X$ have been obtained:

$$\begin{cases} f_{\text{target}}(P_X) = 0 \\ \|P_X - P_C\| = h \\ (P_X - P_C) \cdot (P_B - P_A) = 0 \end{cases} \tag{5}$$

Solving these equations yields the coordinate $P_X$ to be measured.
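To make steps 5.1-5.4 concrete, here is a minimal NumPy sketch under the stated pinhole assumptions. All function names are illustrative; the plane is passed as the unit normal `n` and offset `d` produced by the earlier sketch, and a non-vertical baseline is assumed when building the circle basis. Note that it returns at most two candidates, matching the screening discussion below.

```python
import numpy as np

def ray_angle(p1, p2):
    """Angle between two image rays through the optical center (step 5.2, eq. (2))."""
    c = np.dot(p1, p2) / (np.linalg.norm(p1) * np.linalg.norm(p2))
    return np.arccos(np.clip(c, -1.0, 1.0))

def drop_point(PA, PB, pB, pX, pA_, pX_, n, d):
    """Candidate drop points P_X from camera positions, image rays, and plane n.p + d = 0.

    pB, pX are rays in camera A's image frame, e.g. (u_B - u0, v_B - v0, f/d_pixel);
    pA_, pX_ are the corresponding rays in camera B's image frame.
    """
    PA, PB, n = (np.asarray(v, float) for v in (PA, PB, n))
    alpha = ray_angle(np.asarray(pB, float), np.asarray(pX, float))    # angle at A
    beta = ray_angle(np.asarray(pA_, float), np.asarray(pX_, float))   # angle at B
    L = np.linalg.norm(PB - PA)
    h = L / (1.0 / np.tan(alpha) + 1.0 / np.tan(beta))                 # eq. (3)
    u = (PB - PA) / L
    PC = PA + (h / np.tan(alpha)) * u                                  # eq. (4)
    # P_X lies on the circle of radius h centered at PC in the plane normal to u;
    # build an orthonormal basis (e1, e2) of that plane (assumes u is not vertical).
    e1 = np.cross(u, [0.0, 0.0, 1.0]); e1 /= np.linalg.norm(e1)
    e2 = np.cross(u, e1)
    # Intersect the circle with the target plane: a*cos(t) + b*sin(t) = c.
    a, b = h * np.dot(n, e1), h * np.dot(n, e2)
    c = -(np.dot(n, PC) + d)
    r, phi = np.hypot(a, b), np.arctan2(b, a)
    if abs(c) > r:
        return []                                   # circle misses the target plane
    t0 = np.arccos(c / r)
    return [PC + h * (np.cos(t) * e1 + np.sin(t) * e2) for t in (phi + t0, phi - t0)]
```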

The overall implementation of the invention has now been described in detail. On this basis, the camera external parameters can be calibrated without control points, i.e. the rotation matrix $R$ and the translation vector $T$ can be calibrated. For each camera participating in the measurement, the pixel coordinates of the other camera and of the drop point can be extracted from its field of view; with the spatial positions of the two cameras and the spatial coordinate of the drop point known, this forms a solvable P2P problem, so the external parameters of both cameras can be calibrated.

The rotation matrix corresponds to the rotation from the world coordinate system to the camera coordinate system, i.e. a vector $\vec{v}$ before rotation and the rotated vector $\vec{v}'$ satisfy

$$\vec{v}' = R\,\vec{v} \tag{6}$$

The set of orthonormal vectors before rotation for camera $A$ is

$$V_A = \begin{bmatrix} \vec{v}_1 \\ \vec{v}_2 \\ \vec{v}_3 \end{bmatrix}, \quad \vec{v}_1 = \frac{P_B - P_A}{\|P_B - P_A\|}, \quad \vec{v}_2 = \frac{\vec{v}_1 \times (P_X - P_A)}{\|\vec{v}_1 \times (P_X - P_A)\|}, \quad \vec{v}_3 = \vec{v}_1 \times \vec{v}_2 \tag{7}$$

The corresponding vectors after rotation are

$$V'_A = \begin{bmatrix} \vec{v}'_1 \\ \vec{v}'_2 \\ \vec{v}'_3 \end{bmatrix}, \quad \vec{v}'_1 = \frac{p_B}{\|p_B\|}, \quad \vec{v}'_2 = \frac{\vec{v}'_1 \times p_X}{\|\vec{v}'_1 \times p_X\|}, \quad \vec{v}'_3 = \vec{v}'_1 \times \vec{v}'_2 \tag{8}$$

The two sets satisfy $(V'_A)^T = R_A V_A^T$; since $V'_A$, $R_A$, and $V_A$ are all orthogonal matrices,

$$R_A = (V'_A)^T V_A \tag{9}$$

The translation vector of camera $A$ is then

$$T_A = -R_A \cdot P_A^T \tag{10}$$

Similarly, the external parameters $R_B$ and $T_B$ of camera $B$ can be calibrated.
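A compact sketch of this extrinsic calibration (eqs. (6)-(10)); the function name and the explicit basis construction are one concrete reading of formulas (7) and (8), offered as an illustration rather than the patent's prescribed implementation.

```python
import numpy as np

def calibrate_extrinsics(PA, PB, PX, pB, pX):
    """R_A, T_A from known positions P_A, P_B, P_X and camera-A rays pB, pX."""
    PA, PB, PX = (np.asarray(p, float) for p in (PA, PB, PX))

    def basis(v_main, v_aux):
        """Orthonormal rows per eqs. (7)/(8): v1 along v_main, v2 = v1 x v_aux, v3 = v1 x v2."""
        v1 = v_main / np.linalg.norm(v_main)
        v2 = np.cross(v1, v_aux); v2 /= np.linalg.norm(v2)
        return np.stack([v1, v2, np.cross(v1, v2)])

    VA = basis(PB - PA, PX - PA)                                   # world frame, eq. (7)
    VA_rot = basis(np.asarray(pB, float), np.asarray(pX, float))   # camera frame, eq. (8)
    RA = VA_rot.T @ VA                                             # eq. (9)
    TA = -RA @ PA                                                  # eq. (10)
    return RA, TA
```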

When the equations of step 5.4 are solved, at most two sets of solutions appear, and other constraints are used for screening. In most applications the cameras are mounted upright with a small roll angle, which can serve as the default screening condition; in addition, when two or more target points exist on the target surface, the consistency of the camera external parameters measured from the two points can be used as the screening condition.
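A hedged sketch of the default roll-angle screen, reusing `calibrate_extrinsics` from the previous sketch; the Z-Y-X (yaw-pitch-roll) Euler convention and the 10° threshold are assumptions chosen for illustration.

```python
import numpy as np

def roll_angle_deg(R):
    """Roll angle from a Z-Y-X (yaw-pitch-roll) Euler decomposition of R (assumed convention)."""
    return np.degrees(np.arctan2(R[2, 1], R[2, 2]))

def screen_by_roll(candidates, PA, PB, pB, pX, max_roll_deg=10.0):
    """Keep candidate drop points whose implied camera-A roll angle is small."""
    kept = []
    for PX in candidates:
        R, _ = calibrate_extrinsics(PA, PB, PX, pB, pX)  # extrinsics implied by candidate
        if abs(roll_angle_deg(R)) < max_roll_deg:
            kept.append(PX)
    return kept
```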

In addition, when there are points to be measured outside the target surface, the measurement of the remaining points is completed with the conventional intersection measurement method, using the extrinsic calibration result obtained above.

According to the pinhole imaging model, the spatial coordinate $P_i$ of a target point and its imaging coordinates $p_i$ and $p'_i$ satisfy the mapping relations

$$s_i\,\tilde{p}_i = A_A\,[R_A \mid T_A]\,\tilde{P}_i, \qquad s'_i\,\tilde{p}'_i = A_B\,[R_B \mid T_B]\,\tilde{P}_i \tag{11}$$

where $\tilde{p}_i$, $\tilde{p}'_i$, $\tilde{P}_i$ are homogeneous coordinates and $s_i$, $s'_i$ are scale factors,

where $A_A$ and $A_B$, the intrinsic matrices of cameras $A$ and $B$ respectively, may be expressed as

$$A_A = \begin{bmatrix} f/d & 0 & u_0 \\ 0 & f/d & v_0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad A_B = \begin{bmatrix} f'/d' & 0 & u'_0 \\ 0 & f'/d' & v'_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{12}$$

equation set of two cameras simultaneously, can solve Pi

Embodiment 1

The invention was verified by simulation. In the virtual environment, the target-plane equation is $2x + y + 6z = 90$; there is one target point (3, 18, 11) on the target plane and one target point (0, -5, 50) off the target plane.

Two cameras are arranged near the target area, fixed at the points (170, -280, 176) and (180, 289, 191); the optical axes of the cameras point to azimuth angles (measured from north toward west) of 21° and 157°, pitch angles of -11° and -15.5°, and roll angles of 1° and -2°. The two cameras are of the same model with known intrinsic parameters: resolution 4000 × 2000, focal length 16 mm, pixel size 4.5 μm; intrinsic-parameter errors are not considered, and distortion is assumed to be corrected.

The coordinates of three random non-collinear points on the target surface, acquired by RTK, are (0.02, -0.04, 15.00), (49.93, -10.03, -0.17), and (29.95, 89.98, -10.03); the RTK positioning coordinate of camera A is (169.91, -280.02, 175.99) and that of camera B is (180.01, 288.98, 191.02). Centimeter-level RTK positioning errors are included in these measured values.

The camera imaging results are shown in FIG. 2; each camera can see the target point and the other camera simultaneously.

The true and measured pixel coordinates of each point in the two camera images were recorded; at the current technical level, pixel extraction accuracy is at the sub-pixel level, so there are certain errors between the true and measured values.

Substituting the known conditions into the calculation process above (step 1, step 5, and the extrinsic calibration), the main intermediate quantities are obtained:

the measured target surface equation is: 0.3156x +0.1553y +0.9361z-14.0414 ═ 0;

the distance $h = 243.9795$ m;

the foot-of-perpendicular coordinate $P_C = (175.0673, 10.5244, 183.6647)$.

Two sets of solutions are finally obtained:

The first set of solutions: point $X$ coordinate $P_X = (2.82, 18.14, 11.04)$; the corresponding external-parameter matrix of camera A is equivalent to a camera azimuth angle of 20.99°, a pitch angle of -11.49°, and a roll angle of 0.99°; the corresponding external-parameter matrix of camera B is equivalent to a camera azimuth angle of 156.98°, a pitch angle of -15.49°, and a roll angle of -1.97°.

The second set of solutions: point $X$ coordinate $P_X = (208.37, 16.32, -57.96)$; the corresponding external-parameter matrix of camera A is equivalent to a camera azimuth angle of -1.98°, a pitch angle of -24.63°, and a roll angle of 63.66°; the corresponding external-parameter matrix of camera B is equivalent to a camera azimuth angle of 177.39°, a pitch angle of -27.29°, and a roll angle of -57.65°.

Based on the constraint that the camera roll angle should be small, the second set of solutions is excluded; using the extrinsic-parameter matrices corresponding to the first set of solutions, the coordinate of point $i$ is measured as $P_i = (-0.11, -4.92, 50.03)$.

In summary, the coordinates of both target points are solved: the coordinates of point $X$ and point $i$ are (2.82, 18.14, 11.04) and (-0.11, -4.92, 50.03); compared with the true values (3, 18, 11) and (0, -5, 50), the errors are 0.23 m and 0.14 m, respectively. This embodiment thus achieves high accuracy even at long camera deployment distances of 350-400 m and with a large measurement field of view of 58.72° × 31.42°.
