Unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method and storage medium

Document No. 1962556, published 2021-12-14

Reading note: This technique, "Unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method and storage medium", was created by Tong Xiaohua, Chen Peng, Shi Haibo, Zhao Jiajun, Xie Huan, Feng Yongjiu, Liu Shijie, Jin Yanmin, Xu Xiong, Liu Sicong, and Wang Chao on 2021-09-10. Abstract: The invention relates to an unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method, which comprises the following steps. Step S1: acquire the internal orientation parameters of the camera based on camera calibration; meanwhile, solve the essential matrix with a five-point relative orientation analytical algorithm to obtain initial values of the external orientation parameters of the camera. Step S2: estimate the pose parameters with a relative orientation parameter optimization algorithm, in which an error equation and a constraint condition equation are determined from the coplanarity condition equation and the unknown parameters are solved by an indirect adjustment method. Step S3: convert the model coordinate system into a self-defined local object coordinate system with an absolute orientation algorithm. Step S4: resolve the flight parameters of the unmanned aerial vehicle. Compared with the prior art, the invention can be applied to pose measurement scenes without control points, has stronger applicability, and achieves millimeter-level measurement precision.

1. An unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method is characterized by comprising the following steps:

step S1: acquiring internal orientation parameters of the camera based on camera calibration; meanwhile, solving the essential matrix by adopting a five-point relative orientation analysis algorithm to obtain an initial value of the external orientation parameter of the camera;

step S2: estimating pose parameters by adopting a relative orientation parameter optimization algorithm, wherein an error equation and a constraint condition equation are determined from the coplanarity condition equation, and the unknown parameters are solved by an indirect adjustment method to estimate the pose parameters;

step S3: converting the model coordinate system into a self-defined local object coordinate system by adopting an absolute orientation algorithm;

step S4: resolving the flight parameters of the unmanned aerial vehicle.

2. The unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method according to claim 1, wherein the step S1 comprises the following steps:

step S11: estimating internal orientation parameters of the camera based on camera calibration;

step S12: obtaining an essential matrix constraint equation corresponding to each pair of homonymous points:

P_r^T · E · P_l = 0,  q_r^T · F · q_l = 0   (1)

wherein P_r and P_l respectively represent the left-camera and right-camera image space coordinates of a certain object space point, and q_r and q_l respectively represent the left-image and right-image coordinates of that point; E is the essential matrix and F is the fundamental matrix;

step S13: solving the essential matrix constraint equations by a five-point relative orientation analytical algorithm to obtain each parameter of the essential matrix E, thereby determining the initial values of the exterior orientation parameters of the camera.

3. The unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method according to claim 2, wherein the expression of the essential matrix E in the step S12 is as follows:

E = R · S,  S = [[0, -T_z, T_y], [T_z, 0, -T_x], [-T_y, T_x, 0]]

wherein R is the rotation matrix composed of the three exterior orientation angle elements (φ, ω, κ) of the camera, and (T_x, T_y, T_z) are the three exterior orientation line elements of the camera;

the fundamental matrix F is expressed as follows:

F = (M_r^-1)^T · E · M_l^-1

where E is the essential matrix, and M_r and M_l are the matrices composed of the interior orientation elements of the right and left cameras respectively, with

M = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]

wherein (c_x, c_y) are the principal point coordinates and f_x, f_y are the camera focal lengths.

4. The unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method according to claim 1, wherein the relative orientation parameter optimization algorithm in the step S2 is a continuous image pair relative orientation parameter optimization algorithm, i.e., the process of solving the exterior orientation elements of the right image relative to the left image, with the left image as reference.

5. The unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method according to claim 1, wherein the unknown parameters of the step S2 comprise: the nine direction cosines a_i, b_i, c_i (i = 1, 2, 3) in the rotation matrix R, and the three baseline components (B_x, B_y, B_z).

6. The unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method according to claim 5, wherein the step S2 specifically comprises the following steps:

step S21: determining the coplanarity condition equation with the left image as reference:

    | B_x  B_y  B_z |
F = | X_1  Y_1  Z_1 | = 0   (3)
    | X_2  Y_2  Z_2 |

wherein (x_0, y_0, f_1) and (x'_0, y'_0, f_2) are the interior orientation elements of the left and right cameras; a_i, b_i, c_i (i = 1, 2, 3) are the nine direction cosines computed from the rotation angles (φ, ω, κ); (B_x, B_y, B_z) are the three baseline components, whose sum of squares is a constant value, satisfying B_x^2 + B_y^2 + B_z^2 = const; (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2) are the object space coordinates of the left and right images; (x_1, y_1, z_1) and (x_2, y_2, z_2) are the image space coordinates of the left and right images; and R is the rotation matrix, satisfying R·R^T = R^T·R = I;

Step S22: determining the error equation from the coplanarity condition equation:

v = Ax - l   (4)

satisfying

x^T = [dB_x dB_y dB_z da_1 da_2 da_3 db_1 db_2 db_3 dc_1 dc_2 dc_3]   (5)

l = -F^0 = B_z X_2 Y_1 + B_y X_1 Z_2 + B_x Y_2 Z_1 - B_x Y_1 Z_2 - B_z X_1 Y_2 - B_y X_2 Z_1   (6)

wherein each element of the coefficient matrix A is the partial derivative of the coplanarity function F with respect to the corresponding unknown parameter (equation (7));

step S23: obtaining the constraint condition equations, which comprise one baseline constraint equation and six rotation matrix element constraint equations:

B_x^2 + B_y^2 + B_z^2 - B^2 = 0
a_1^2 + a_2^2 + a_3^2 - 1 = 0
b_1^2 + b_2^2 + b_3^2 - 1 = 0
c_1^2 + c_2^2 + c_3^2 - 1 = 0   (8)
a_1 b_1 + a_2 b_2 + a_3 b_3 = 0
a_1 c_1 + a_2 c_2 + a_3 c_3 = 0
b_1 c_1 + b_2 c_2 + b_3 c_3 = 0

wherein a_i, b_i, c_i (i = 1, 2, 3) are the nine direction cosines computed from the rotation angles (φ, ω, κ), and (B_x, B_y, B_z) are the three baseline components, whose sum of squares is the constant value B^2;

linearizing equation (8) gives the normal-equation form of the constraints:

Cx - W = 0   (9)

wherein C is the coefficient matrix and W the constant vector of the linearized constraint equations (equation (10));

the optimal estimates of the 12 unknown parameters are solved by indirect adjustment with constraint conditions, with the functional model:

N_bb = A^T A,  U = A^T l   (11)

the normal equation is expressed as:

[ N_bb  C^T ] [ x   ]   [ U ]
[  C     0  ] [ K_S ] = [ W ]   (12)

then the solution of the values to be estimated is:

K_S = (C N_bb^-1 C^T)^-1 (C N_bb^-1 U - W),  x = N_bb^-1 (U - C^T K_S),  X = X^0 + x   (13)

wherein K_S is the vector of Lagrange multipliers, X^0 the approximate values, and x the corrections;

step S24: after the relative orientation parameters are obtained, solving the three-dimensional coordinates of all target points in each frame image in the model coordinate system by the point projection coefficient method or by a forward intersection algorithm based on the collinearity equations.

7. The unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method according to claim 6, wherein the calculation process of the point projection coefficient method in the step S24 is specifically as follows:

step S241: calculating the object space coordinates (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2) of the left and right images:

[X_1, Y_1, Z_1]^T = R_1 [x_1 - x_0, y_1 - y_0, -f_1]^T,  [X_2, Y_2, Z_2]^T = R_2 [x_2 - x'_0, y_2 - y'_0, -f_2]^T   (14)

wherein (x_1, y_1, z_1) and (x_2, y_2, z_2) are the image space coordinates of the left and right images; (x_0, y_0, f_1) and (x'_0, y'_0, f_2) are the interior orientation elements of the left and right cameras; and R_1 and R_2 are the rotation matrices of the left and right cameras;

step S242: respectively calculating the point projection coefficients N_1 and N_2 of the left and right cameras:

N_1 = (B_x Z_2 - B_z X_2) / (X_1 Z_2 - X_2 Z_1),  N_2 = (B_x Z_1 - B_z X_1) / (X_1 Z_2 - X_2 Z_1)   (16)

wherein (B_x, B_y, B_z) are the three baseline components, and (X_1, Y_1, Z_1), (X_2, Y_2, Z_2) are the object space coordinates of the left and right images respectively;

step S243: calculating the point position coordinates (X_p, Y_p, Z_p) of the target point p in the model coordinate system:

X_p = N_1 X_1,  Y_p = (N_1 Y_1 + N_2 Y_2 + B_y) / 2,  Z_p = N_1 Z_1   (17)

8. The unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method according to claim 1, wherein the step S3 is: converting the model coordinate system into the self-defined local object space coordinate system through a seven-parameter coordinate conversion model, whose expression is as follows:

[X, Y, Z]^T = λ M [X_p, Y_p, Z_p]^T + [ΔX, ΔY, ΔZ]^T   (18)

wherein (X, Y, Z) represent the three-dimensional coordinates of the target point in the self-defined local object space coordinate system; (X_p, Y_p, Z_p) represent the three-dimensional coordinates of the target point p in the model coordinate system; (ΔX, ΔY, ΔZ) represent the offsets of the target in the X, Y and Z directions; M is the coordinate rotation matrix; and λ represents the scale parameter;

the three-dimensional coordinates (X, Y, Z) of the target point in the self-defined local object space coordinate system are calculated accordingly.

9. The unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method according to claim 1, wherein the step S4 specifically comprises the following steps:

step S41: resolving the displacement;

defining the displacement value of the target point at the initial time as 0 mm, the displacement values of the target point in the X, Y and Z directions at time n are:

D_X^n = X_n - X_0,  D_Y^n = Y_n - Y_0,  D_Z^n = Z_n - Z_0   (19)

wherein D_X^n, D_Y^n and D_Z^n represent the displacement values of the target point in the X, Y and Z directions at time n respectively; X_0, Y_0 and Z_0 represent the coordinate values of the target point in the X, Y and Z directions at the initial time respectively; X_n, Y_n and Z_n represent the coordinate values of the target point in the X, Y and Z directions at time n respectively;

step S42: resolving the speed;

defining the speed of the target point at time n as the average speed of the target point between time n-1 and time n+1:

V_X^n = (X_{n+1} - X_{n-1}) / (2ΔT),  V_Y^n = (Y_{n+1} - Y_{n-1}) / (2ΔT),  V_Z^n = (Z_{n+1} - Z_{n-1}) / (2ΔT)   (20)

wherein V_X^n, V_Y^n and V_Z^n represent the velocities of the target point in the X, Y and Z directions at time n; X_{n+1}, Y_{n+1} and Z_{n+1} represent the three-dimensional space coordinates of the target point in the X, Y and Z directions at time n+1; X_{n-1}, Y_{n-1} and Z_{n-1} represent the three-dimensional space coordinates of the target point in the X, Y and Z directions at time n-1; ΔT represents the time interval between adjacent instants;

step S43: resolving the acceleration;

defining the acceleration of the target point at time n as the average acceleration of the target point between time n-1 and time n+1:

A_X^n = (V_X^{n+1} - V_X^{n-1}) / (2ΔT),  A_Y^n = (V_Y^{n+1} - V_Y^{n-1}) / (2ΔT),  A_Z^n = (V_Z^{n+1} - V_Z^{n-1}) / (2ΔT)   (21)

wherein A_X^n, A_Y^n and A_Z^n represent the accelerations of the target point at time n; V_X^{n+1}, V_Y^{n+1} and V_Z^{n+1} represent the velocities of the target point in the X, Y and Z directions at time n+1; V_X^{n-1}, V_Y^{n-1} and V_Z^{n-1} represent the velocities of the target point in the X, Y and Z directions at time n-1; ΔT represents the time interval between adjacent images;

step S44: resolving the attitude;

obtaining the attitude matrix H through the time-sequence coordinate transformation of the target points:

[X_{n+1}, Y_{n+1}, Z_{n+1}]^T = H [X_n, Y_n, Z_n]^T + [ΔX_{n+1}, ΔY_{n+1}, ΔZ_{n+1}]^T   (22)

wherein X_{n+1}, Y_{n+1} and Z_{n+1} represent the space coordinates of the target point in the X, Y and Z directions at time n+1; X_n, Y_n and Z_n represent the space coordinates of the target point in the X, Y and Z directions at time n; and ΔX_{n+1}, ΔY_{n+1} and ΔZ_{n+1} represent the offsets of the target in the X, Y and Z directions at time n+1;

through coordinate rotation, the rotation matrix M is obtained from the attitude matrix;

solving the exterior orientation angle elements of the spatial attitude change of each subsequent state relative to the initial state from the elements m_ij of M:

φ = arctan(-m_13 / m_33),  ω = arcsin(-m_23),  κ = arctan(m_21 / m_22)   (23)

10. a computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 9.

Technical Field

The invention relates to the field of unmanned aerial vehicle pose measurement, in particular to an unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method and a storage medium.

Background

The position and flight attitude of an unmanned aerial vehicle in flight are important parameters characterizing its navigation state and flight control, and are of great significance for unmanned aerial vehicle system development, flight attitude control, real-time planning and track adjustment, accident analysis, and the like. The difficulty in acquiring flight parameters such as the unmanned aerial vehicle pose by video measurement is that available control points are lacking against the sky background. When there are not enough control points in the common field of view of the jointly measuring high-speed cameras, estimating the camera poses and then calculating high-precision three-dimensional coordinates of the target from those poses becomes a difficult problem.

Disclosure of Invention

The invention aims to overcome the defects in the prior art and provide an unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method and a storage medium which do not need control points and have high precision.

The purpose of the invention can be realized by the following technical scheme:

according to a first aspect of the invention, an unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method comprises the following steps:

step S1: acquiring internal orientation parameters of the camera based on camera calibration; meanwhile, solving the essential matrix by adopting a five-point relative orientation analysis algorithm to obtain an initial value of the external orientation parameter of the camera;

step S2: estimating pose parameters by adopting a relative orientation parameter optimization algorithm, wherein an error equation and a constraint condition equation are determined from the coplanarity condition equation, and the unknown parameters are solved by an indirect adjustment method to estimate the pose parameters;

step S3: converting the model coordinate system into a self-defined local object coordinate system by adopting an absolute orientation algorithm;

step S4: resolving the flight parameters of the unmanned aerial vehicle.

Preferably, the step S1 includes the steps of:

step S11: estimating internal orientation parameters of the camera based on camera calibration;

step S12: obtaining an essential matrix constraint equation corresponding to each pair of homonymous points:

P_r^T · E · P_l = 0,  q_r^T · F · q_l = 0   (1)

wherein P_r and P_l respectively represent the left-camera and right-camera image space coordinates of a certain object space point, and q_r and q_l respectively represent the left-image and right-image coordinates of that point; E is the essential matrix and F is the fundamental matrix;

step S13: solving the essential matrix constraint equations by a five-point relative orientation analytical algorithm to obtain each parameter of the essential matrix E, thereby determining the initial values of the exterior orientation parameters of the camera.

Preferably, the expression of the essential matrix E in step S12 is as follows:

E = R · S,  S = [[0, -T_z, T_y], [T_z, 0, -T_x], [-T_y, T_x, 0]]

wherein R is the rotation matrix composed of the three exterior orientation angle elements (φ, ω, κ) of the camera, and (T_x, T_y, T_z) are the three exterior orientation line elements of the camera;

the fundamental matrix F is expressed as follows:

F = (M_r^-1)^T · E · M_l^-1

where E is the essential matrix, and M_r and M_l are the matrices composed of the interior orientation elements of the right and left cameras respectively, with

M = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]

wherein (c_x, c_y) are the principal point coordinates and f_x, f_y are the camera focal lengths.

Preferably, the relative orientation parameter optimization algorithm in step S2 is a continuous image pair relative orientation parameter optimization algorithm, i.e., the process of solving the exterior orientation elements of the right image relative to the left image, with the left image as reference.

Preferably, the unknown parameters of step S2 include: the nine direction cosines a_i, b_i, c_i (i = 1, 2, 3) in the rotation matrix R, and the three baseline components (B_x, B_y, B_z).

Preferably, the step S2 specifically includes the following steps:

step S21: determining the coplanarity condition equation with the left image as reference:

    | B_x  B_y  B_z |
F = | X_1  Y_1  Z_1 | = 0   (3)
    | X_2  Y_2  Z_2 |

wherein (x_0, y_0, f_1) and (x'_0, y'_0, f_2) are the interior orientation elements of the left and right cameras; a_i, b_i, c_i (i = 1, 2, 3) are the nine direction cosines computed from the rotation angles (φ, ω, κ); (B_x, B_y, B_z) are the three baseline components, whose sum of squares is a constant value, satisfying B_x^2 + B_y^2 + B_z^2 = const; (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2) are the object space coordinates of the left and right images; (x_1, y_1, z_1) and (x_2, y_2, z_2) are the image space coordinates of the left and right images; and R is the rotation matrix, satisfying R·R^T = R^T·R = I;

Step S22: determining the error equation from the coplanarity condition equation:

v = Ax - l   (4)

satisfying

x^T = [dB_x dB_y dB_z da_1 da_2 da_3 db_1 db_2 db_3 dc_1 dc_2 dc_3]   (5)

l = -F^0 = B_z X_2 Y_1 + B_y X_1 Z_2 + B_x Y_2 Z_1 - B_x Y_1 Z_2 - B_z X_1 Y_2 - B_y X_2 Z_1   (6)

wherein each element of the coefficient matrix A is the partial derivative of the coplanarity function F with respect to the corresponding unknown parameter (equation (7));

step S23: obtaining the constraint condition equations, which comprise one baseline constraint equation and six rotation matrix element constraint equations:

B_x^2 + B_y^2 + B_z^2 - B^2 = 0
a_1^2 + a_2^2 + a_3^2 - 1 = 0
b_1^2 + b_2^2 + b_3^2 - 1 = 0
c_1^2 + c_2^2 + c_3^2 - 1 = 0   (8)
a_1 b_1 + a_2 b_2 + a_3 b_3 = 0
a_1 c_1 + a_2 c_2 + a_3 c_3 = 0
b_1 c_1 + b_2 c_2 + b_3 c_3 = 0

wherein a_i, b_i, c_i (i = 1, 2, 3) are the nine direction cosines computed from the rotation angles (φ, ω, κ), and (B_x, B_y, B_z) are the three baseline components, whose sum of squares is the constant value B^2;

linearizing equation (8) gives the normal-equation form of the constraints:

Cx - W = 0   (9)

wherein C is the coefficient matrix and W the constant vector of the linearized constraint equations (equation (10));

the optimal estimates of the 12 unknown parameters are solved by indirect adjustment with constraint conditions, with the functional model:

N_bb = A^T A,  U = A^T l   (11)

the normal equation is expressed as:

[ N_bb  C^T ] [ x   ]   [ U ]
[  C     0  ] [ K_S ] = [ W ]   (12)

then the solution of the values to be estimated is:

K_S = (C N_bb^-1 C^T)^-1 (C N_bb^-1 U - W),  x = N_bb^-1 (U - C^T K_S),  X = X^0 + x   (13)

wherein K_S is the vector of Lagrange multipliers, X^0 the approximate values, and x the corrections;

step S24: after the relative orientation parameters are obtained, solving the three-dimensional coordinates of all target points in each frame image in the model coordinate system by the point projection coefficient method or by a forward intersection algorithm based on the collinearity equations.

Preferably, the calculation process of the point projection coefficient method in step S24 is specifically as follows:

step S241: calculating the object space coordinates (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2) of the left and right images:

[X_1, Y_1, Z_1]^T = R_1 [x_1 - x_0, y_1 - y_0, -f_1]^T,  [X_2, Y_2, Z_2]^T = R_2 [x_2 - x'_0, y_2 - y'_0, -f_2]^T   (14)

wherein (x_1, y_1, z_1) and (x_2, y_2, z_2) are the image space coordinates of the left and right images; (x_0, y_0, f_1) and (x'_0, y'_0, f_2) are the interior orientation elements of the left and right cameras; and R_1 and R_2 are the rotation matrices of the left and right cameras;

step S242: respectively calculating the point projection coefficients N_1 and N_2 of the left and right cameras:

N_1 = (B_x Z_2 - B_z X_2) / (X_1 Z_2 - X_2 Z_1),  N_2 = (B_x Z_1 - B_z X_1) / (X_1 Z_2 - X_2 Z_1)   (16)

wherein (B_x, B_y, B_z) are the three baseline components, and (X_1, Y_1, Z_1), (X_2, Y_2, Z_2) are the object space coordinates of the left and right images respectively;

step S243: calculating the point position coordinates (X_p, Y_p, Z_p) of the target point p in the model coordinate system:

X_p = N_1 X_1,  Y_p = (N_1 Y_1 + N_2 Y_2 + B_y) / 2,  Z_p = N_1 Z_1   (17)

Preferably, the step S3 is: converting the model coordinate system into the self-defined local object space coordinate system through a seven-parameter coordinate conversion model, whose expression is as follows:

[X, Y, Z]^T = λ M [X_p, Y_p, Z_p]^T + [ΔX, ΔY, ΔZ]^T   (18)

wherein (X, Y, Z) represent the three-dimensional coordinates of the target point in the self-defined local object space coordinate system; (X_p, Y_p, Z_p) represent the three-dimensional coordinates of the target point p in the model coordinate system; (ΔX, ΔY, ΔZ) represent the offsets of the target in the X, Y and Z directions; M is the coordinate rotation matrix; and λ represents the scale parameter;

the three-dimensional coordinates (X, Y, Z) of the target point in the self-defined local object space coordinate system are calculated accordingly.

Preferably, the step S4 specifically includes the following steps:

step S41: resolving the displacement;

defining the displacement value of the target point at the initial time as 0 mm, the displacement values of the target point in the X, Y and Z directions at time n are:

D_X^n = X_n - X_0,  D_Y^n = Y_n - Y_0,  D_Z^n = Z_n - Z_0   (19)

wherein D_X^n, D_Y^n and D_Z^n represent the displacement values of the target point in the X, Y and Z directions at time n respectively; X_0, Y_0 and Z_0 represent the coordinate values of the target point in the X, Y and Z directions at the initial time respectively; X_n, Y_n and Z_n represent the coordinate values of the target point in the X, Y and Z directions at time n respectively;

step S42: resolving the speed;

defining the speed of the target point at time n as the average speed of the target point between time n-1 and time n+1:

V_X^n = (X_{n+1} - X_{n-1}) / (2ΔT),  V_Y^n = (Y_{n+1} - Y_{n-1}) / (2ΔT),  V_Z^n = (Z_{n+1} - Z_{n-1}) / (2ΔT)   (20)

wherein V_X^n, V_Y^n and V_Z^n represent the velocities of the target point in the X, Y and Z directions at time n; X_{n+1}, Y_{n+1} and Z_{n+1} represent the three-dimensional space coordinates of the target point in the X, Y and Z directions at time n+1; X_{n-1}, Y_{n-1} and Z_{n-1} represent the three-dimensional space coordinates of the target point in the X, Y and Z directions at time n-1; ΔT represents the time interval between adjacent instants;

step S43: resolving the acceleration;

defining the acceleration of the target point at time n as the average acceleration of the target point between time n-1 and time n+1:

A_X^n = (V_X^{n+1} - V_X^{n-1}) / (2ΔT),  A_Y^n = (V_Y^{n+1} - V_Y^{n-1}) / (2ΔT),  A_Z^n = (V_Z^{n+1} - V_Z^{n-1}) / (2ΔT)   (21)

wherein A_X^n, A_Y^n and A_Z^n represent the accelerations of the target point at time n; V_X^{n+1}, V_Y^{n+1} and V_Z^{n+1} represent the velocities of the target point in the X, Y and Z directions at time n+1; V_X^{n-1}, V_Y^{n-1} and V_Z^{n-1} represent the velocities of the target point in the X, Y and Z directions at time n-1; ΔT represents the time interval between adjacent images;

step S44: resolving the attitude;

obtaining the attitude matrix H through the time-sequence coordinate transformation of the target points:

[X_{n+1}, Y_{n+1}, Z_{n+1}]^T = H [X_n, Y_n, Z_n]^T + [ΔX_{n+1}, ΔY_{n+1}, ΔZ_{n+1}]^T   (22)

wherein X_{n+1}, Y_{n+1} and Z_{n+1} represent the space coordinates of the target point in the X, Y and Z directions at time n+1; X_n, Y_n and Z_n represent the space coordinates of the target point in the X, Y and Z directions at time n; and ΔX_{n+1}, ΔY_{n+1} and ΔZ_{n+1} represent the offsets of the target in the X, Y and Z directions at time n+1;

through coordinate rotation, the rotation matrix M is obtained from the attitude matrix;

solving the exterior orientation angle elements of the spatial attitude change of each subsequent state relative to the initial state from the elements m_ij of M:

φ = arctan(-m_13 / m_33),  ω = arcsin(-m_23),  κ = arctan(m_21 / m_22)   (23)

according to a second aspect of the invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method described above.

Compared with the prior art, the invention has the following advantages:

1) The invention solves the video measurement problem of the unmanned aerial vehicle low-altitude flight pose in the absence of control points. First, the exterior orientation elements of the cameras in a model coordinate system are obtained by an essential matrix analysis method, providing initial pose parameter values for the subsequent relative orientation parameter optimization algorithm; the model coordinate system is then converted into a self-defined local object coordinate system by an absolute orientation algorithm, the three-dimensional space coordinates of the unmanned aerial vehicle target in the local object coordinate system are solved, and the flight speed, acceleration and attitude angles are obtained by further analysis;

2) the invention enables rapid correction of the unmanned aerial vehicle flight track and attitude in actual uncontrolled scenes, without increasing the self-weight of the unmanned aerial vehicle;

3) in the point location measurement results of the unmanned aerial vehicle in the air obtained by the invention, the elevation precision reaches millimeter level, the plane precision is centimeter level, and the overall positioning precision is about 10 cm-20 cm.

Drawings

FIG. 1 is a flow chart of an unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method of the invention;

FIG. 2 is a schematic illustration of a coplanar condition;

FIG. 3 is a schematic diagram of coordinate transformation in the X direction;

FIG. 4 is a schematic diagram of Y-direction coordinate transformation;

FIG. 5 is a schematic diagram of Z-direction coordinate transformation;

FIG. 6 is a schematic illustration of the subject and flight area;

FIG. 7 is a schematic view of a high-speed camera placement;

fig. 8 is a schematic diagram of control point distribution and numbering in an experimental control field of an unmanned aerial vehicle;

FIG. 9 shows the X-direction displacement measurement;

FIG. 10 shows the Y-direction displacement measurement;

FIG. 11 shows the Z-direction displacement measurement;

FIG. 12 is the X direction velocity measurement;

FIG. 13 is a Y direction velocity measurement;

FIG. 14 is a Z-direction velocity measurement;

FIG. 15 is an X-direction acceleration diagram;

FIG. 16 is a Y direction acceleration diagram;

fig. 17 is a Z-direction acceleration diagram.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.

For high-speed video measurement, accurate geometric orientation elements are the key to guaranteeing three-dimensional reconstruction precision. The conventional approach is to arrange a certain number of control points in the common field of view of the jointly measuring high-speed cameras, and to calculate the position and attitude of each camera from the three-dimensional coordinates of those control points and the coordinates of the corresponding image points. In some specific measurement scenes, however, enough control points are not available. This patent therefore proposes a high-speed camera pose estimation method for scenes without control points, solving the camera pose estimation problem with a relative orientation algorithm. First, the exterior orientation elements of the cameras in a model coordinate system are obtained by an essential matrix analysis method, providing initial pose parameter values for the subsequent relative orientation algorithm. The model coordinate system is then converted into a self-defined local object coordinate system by an absolute orientation algorithm, so that the three-dimensional space coordinates of the target points in the local object coordinate system are solved and the target three-dimensional reconstruction task is completed.

The method of the present invention will be described in detail with reference to specific examples.

1. Experimental scenario

The experimental object of this experiment is an unmanned aerial vehicle, with a test flight height of about 40 m. As shown in fig. 6, to test the uncontrolled measurement accuracy of the high-speed cameras, the unmanned aerial vehicle flew repeatedly within a 40 m × 40 m area, making 8 one-way passes at vertical intervals of 5 m, i.e., flying laterally across the designated measurement area at heights of 40 m, 35 m, 30 m, 25 m, 20 m, 15 m, 10 m, and 5 m.

2. High speed video measurement scheme

In photogrammetry, two cameras are required to form a stereo measurement pair. This experiment used two 4-megapixel high-speed cameras to synchronously capture the whole motion process of the unmanned aerial vehicle. During the experiment, the high-speed cameras must remain stationary. Each high-speed camera was fitted with a 20 mm or 35 mm fixed-focus lens.

As shown in fig. 7, in the actual test experiment the high-speed cameras photograph the target object in a centrally symmetric, convergent (crossed-axis) configuration. In view of the length constraint of the synchronization control line, the baseline between the two high-speed cameras was set to 7 m. The two high-speed cameras were placed about 40 m horizontally from the target area, with pitch angles of about 22.5°.

In the photogrammetric three-dimensional calculation, since the high-speed cameras remain fixed throughout the experiment, each moment of the unmanned aerial vehicle flight in the measurement area adds one pair of homonymous points. The processing steps are as follows: 1) manually screen homonymous point locations (the unmanned aerial vehicle or manually marked feature points); 2) estimate the interior orientation parameters of the cameras by a calibration method; 3) solve the exterior orientation parameters of the cameras using the essential matrix; 4) perform three-dimensional reconstruction of the unknown unmanned aerial vehicle flight positions to obtain the three-dimensional motion state of the unmanned aerial vehicle.

As shown in fig. 1, the method specifically comprises the following steps:

2.1 method for determining initial value of neutral position in relative orientation algorithm

In the field of computer stereo vision, the essential matrix E and the fundamental matrix F determine the epipolar geometry between stereo cameras. The essential matrix E is composed of the rotation and translation parameters (exterior orientation parameters) between the stereo cameras, and the fundamental matrix F includes the interior orientation parameters of the stereo cameras in addition to the exterior orientation parameters. The essential matrix condition equation and the fundamental matrix condition equation are respectively expressed as:

P_r^T · E · P_l = 0,  q_r^T · F · q_l = 0   (1)

wherein:

E = R · S,  S = [[0, -T_z, T_y], [T_z, 0, -T_x], [-T_y, T_x, 0]],  F = (M_r^-1)^T · E · M_l^-1   (2)

In equation (1), P_r and P_l respectively represent the left-camera and right-camera image space coordinates of a certain object space point, and q_r and q_l respectively represent its left-image and right-image coordinates; R is the rotation matrix composed of the three exterior orientation angle elements between the cameras; [T_x T_y T_z] is composed of the three exterior orientation line elements between the cameras; and M is the matrix composed of the interior orientation elements of a camera.

After camera calibration is completed, each pair of homonymous points yields one essential matrix constraint equation. The problem of resolving the exterior orientation elements of the stereo camera can therefore be converted into the mathematical problem of solving a system of multivariate polynomial equations.

To solve for the parameters of the essential matrix, the five-point relative orientation analytical algorithm proposed by Nister (2004) can be used to robustly recover each parameter of the essential matrix, after which the exterior orientation angle and line elements of the stereo camera are recovered from it. The algorithm has been integrated into the open-source computer vision library OpenCV, enabling the camera orientation task to be completed quickly and efficiently (Kaehler and Bradski, 2018).

In the present invention, this algorithm is used to solve the initial values of the exterior orientation parameters of the cameras, providing initial conditions for the subsequent relative orientation parameter optimization algorithm.
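As an illustration only, the following Python sketch shows how such initial values could be obtained with OpenCV's implementation of the five-point algorithm; the function name and array shapes are assumptions for this example, not part of the patented method.

```python
import cv2
import numpy as np

def initial_relative_pose(pts_left, pts_right, K):
    """Initial exterior orientation of the right camera relative to the left.

    pts_left, pts_right: (N, 2) float64 arrays of homonymous image points (N >= 5)
    K: 3x3 interior orientation matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    """
    # Five-point solution of the essential matrix inside a RANSAC loop.
    E, inliers = cv2.findEssentialMat(pts_left, pts_right, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Decompose E into rotation R and translation direction t (unit length),
    # keeping the solution that places the points in front of both cameras.
    _, R, t, _ = cv2.recoverPose(E, pts_left, pts_right, K, mask=inliers)
    return R, t
```

Because t is recovered only up to scale, R and t serve purely as initial values here; the metric baseline is fixed later in the absolute orientation step.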

2.2 high speed video measurement relative orientation-Absolute orientation resolution

Relative orientation refers to restoring or determining the relative relationship of the corresponding photographic rays, i.e., resolving the relative orientation elements of the stereopair. This section illustrates the whole orientation process taking the continuous image pair relative orientation method as an example. FIG. 2 is a schematic diagram of strict relative orientation in the three-dimensional model: O_1p_1 and O_2p_2 are a pair of homonymous rays, which are coplanar with the camera baseline O_1O_2, i.e., the mixed (triple) product of these three vectors is 0, satisfying the coplanarity condition. Taking the left image as reference, the coplanarity condition equation can be expressed as:

    | B_x  B_y  B_z |
F = | X_1  Y_1  Z_1 | = 0   (3)
    | X_2  Y_2  Z_2 |

In equation (3), (x_0, y_0, f_1) and (x'_0, y'_0, f_2) are the interior orientation elements of the two photographs respectively; a_i, b_i, c_i (i = 1, 2, 3) are the nine direction cosines, which can be computed from the rotation angles (φ, ω, κ); and (B_x, B_y, B_z) are the three baseline components.

The continuous image pair relative orientation takes the left image as reference and solves the exterior orientation elements (i.e., relative orientation elements) of the right image relative to the left image. In the resolving process, the nine direction cosines (a_i, b_i, c_i) in the rotation matrix R and the three baseline components (B_x, B_y, B_z) are treated as unknowns, so the relative orientation algorithm needs to solve 12 unknowns in total.

However, since the rotation matrix R contains only three independent parameters and the three baseline components contain only two, 7 condition equations can be added. The rotation matrix R is orthogonal, i.e., R·R^T = R^T·R = I, from which 6 condition equations can be listed (Jue, 2008); the sum of squares of the three baseline components is constant, i.e., B_x^2 + B_y^2 + B_z^2 = const, which provides 1 more condition equation. In summary, the solution process of the continuous image pair relative orientation algorithm is as follows:

the error equation listed by the coplanar condition equation is shown in equation (4).

v=Ax-l (4)

In the formula (4), the first and second groups,

xT=[dBx dBy dBz da1 da2 da3 db1 db2 db3 dc1 dc2 dc3] (5)

l=-F0=BzX2Y1+ByX1Z2+BxY2Z1-BxY1Z2-BzX1Y2-ByX2Z1 (6)

in equation (7), the coefficient matrix of the error equation is as follows:

in the above analytical model, the seven constraint equations include a baseline constraint equation and six rotation matrix element constraint equations, as shown in equation (8),

equation (8) can be organized into the following form of the normal equation:

Cx-W=0 (9)

in the formula (9), the reaction mixture,

the optimal estimation of the unknown number can be solved through indirect adjustment with constraint conditions, and the function model is as follows:

Nbb=ATA,U=ATl (11)

the formula of the normal equation can be expressed as:

then the solution formula of the value to be estimated is:
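A minimal numerical sketch of one iteration of this constrained adjustment, assuming the design matrix A, misclosure vector l, and linearized constraint matrices C and W have already been formed (all names are illustrative), might look as follows:

```python
import numpy as np

def constrained_adjustment_step(A, l, C, W):
    """One iteration of indirect adjustment with constraints (eqs. (11)-(13)).

    Solves v = A x - l subject to C x - W = 0 via the bordered normal
    equations with Lagrange multipliers Ks.
    """
    Nbb = A.T @ A                      # normal matrix, eq. (11)
    U = A.T @ l
    n, c = Nbb.shape[0], C.shape[0]
    # Bordered system [[Nbb, C^T], [C, 0]] [x; Ks] = [U; W], eq. (12)
    M = np.block([[Nbb, C.T], [C, np.zeros((c, c))]])
    rhs = np.concatenate([U, W])
    sol = np.linalg.solve(M, rhs)
    x, Ks = sol[:n], sol[n:]           # corrections and Lagrange multipliers
    return x, Ks
```

The corrections x are added to the approximate values X^0 and the step is repeated until x becomes negligibly small.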

the above-mentioned solving process is an iterative optimization process, so that it needs better initial value of parameter to achieve the goal of fast convergence of parameter to be estimated. The initial value of the out-of-camera orientation element in the relative orientation algorithm can be obtained by intrinsic matrix analysis. After the relative orientation parameters are obtained, the three-dimensional coordinates of all target points (tracking points) in each frame of image under the model coordinate system can be solved through a point projection coefficient method or a forward intersection algorithm based on a collinear equation. The calculation process of the point projection coefficient method is as follows:

as can be seen from the formula (13),

the point projection coefficient can be obtained by equation (16).

Further, the point coordinate (X) of the target point in the model coordinate system is calculated by the formula (17)p,Yp,Zp)。
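Under the sign conventions reconstructed above, the point projection coefficient method can be sketched in a few lines of Python (all argument names are illustrative assumptions):

```python
import numpy as np

def forward_intersect(xy1, xy2, io1, io2, R1, R2, B):
    """Model coordinates of a target point by the point projection
    coefficient method, eqs. (14)-(17)."""
    x0, y0, f1 = io1                  # interior orientation, left camera
    x0r, y0r, f2 = io2                # interior orientation, right camera
    Bx, By, Bz = B                    # baseline components
    # Image-space auxiliary coordinates, eqs. (14)-(15)
    X1, Y1, Z1 = R1 @ np.array([xy1[0] - x0, xy1[1] - y0, -f1])
    X2, Y2, Z2 = R2 @ np.array([xy2[0] - x0r, xy2[1] - y0r, -f2])
    # Point projection coefficients, eq. (16)
    denom = X1 * Z2 - X2 * Z1
    N1 = (Bx * Z2 - Bz * X2) / denom
    N2 = (Bx * Z1 - Bz * X1) / denom
    # Model coordinates of the target point p, eq. (17)
    return N1 * X1, 0.5 * (N1 * Y1 + N2 * Y2 + By), N1 * Z1
```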

In the absolute orientation analysis process, the model coordinate system is converted into the self-defined coordinate system by a seven-parameter coordinate transformation.

Equation (18) is the seven-parameter coordinate transformation model:

[X, Y, Z]^T = λ M [X_p, Y_p, Z_p]^T + [ΔX, ΔY, ΔZ]^T   (18)

in which (X, Y, Z) represents the three-dimensional coordinates of the target point in the self-defined coordinate system, (ΔX, ΔY, ΔZ) represents the offsets of the target in the X, Y and Z directions, (X_p, Y_p, Z_p) represents the three-dimensional coordinates of the target point in the model coordinate system, M represents the coordinate rotation matrix, and λ represents the scale parameter.
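Applying the transformation is then a one-liner; the sketch below assumes the seven parameters (λ, M, ΔX, ΔY, ΔZ) have already been estimated:

```python
import numpy as np

def model_to_local(p_model, offset, M, lam):
    """Seven-parameter (absolute orientation) transformation, eq. (18).

    p_model: (3,) coordinates (Xp, Yp, Zp) in the model frame
    offset:  (3,) offsets (dX, dY, dZ); M: 3x3 rotation; lam: scale
    """
    return lam * (M @ np.asarray(p_model)) + np.asarray(offset)
```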

2.3 unmanned aerial vehicle flight parameter resolution

2.3.1 displacement resolving;

Defining the displacement value of the target point at the initial time as 0 mm, the displacement values of the target point in the X, Y and Z directions at time n are:

D_X^n = X_n - X_0,  D_Y^n = Y_n - Y_0,  D_Z^n = Z_n - Z_0   (19)

where D_X^n, D_Y^n and D_Z^n represent the displacement values of the target point in the X, Y and Z directions at time n respectively; X_0, Y_0 and Z_0 represent the coordinate values of the target point in the X, Y and Z directions at the initial time respectively; and X_n, Y_n and Z_n represent the coordinate values of the target point in the X, Y and Z directions at time n respectively.

2.3.2 speed resolving;

Defining the speed of the target point at time n as the average speed of the target point between time n-1 and time n+1:

V_X^n = (X_{n+1} - X_{n-1}) / (2ΔT),  V_Y^n = (Y_{n+1} - Y_{n-1}) / (2ΔT),  V_Z^n = (Z_{n+1} - Z_{n-1}) / (2ΔT)   (20)

where V_X^n, V_Y^n and V_Z^n represent the velocities of the target point in the X, Y and Z directions at time n; X_{n+1}, Y_{n+1} and Z_{n+1} represent the three-dimensional space coordinates of the target point in the X, Y and Z directions at time n+1; X_{n-1}, Y_{n-1} and Z_{n-1} represent the three-dimensional space coordinates of the target point in the X, Y and Z directions at time n-1; and ΔT represents the time interval between adjacent instants.

2.3.3 resolving the acceleration;

Defining the acceleration of the target point at time n as the average acceleration of the target point between time n-1 and time n+1:

A_X^n = (V_X^{n+1} - V_X^{n-1}) / (2ΔT),  A_Y^n = (V_Y^{n+1} - V_Y^{n-1}) / (2ΔT),  A_Z^n = (V_Z^{n+1} - V_Z^{n-1}) / (2ΔT)   (21)

where A_X^n, A_Y^n and A_Z^n represent the accelerations of the target point at time n; V_X^{n+1}, V_Y^{n+1} and V_Z^{n+1} represent the velocities of the target point in the X, Y and Z directions at time n+1; V_X^{n-1}, V_Y^{n-1} and V_Z^{n-1} represent the velocities of the target point in the X, Y and Z directions at time n-1; and ΔT represents the time interval between adjacent images.
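Equations (19)-(21) amount to differencing the coordinate time series; a compact sketch (assuming a (T, 3) array P of target coordinates sampled at a constant interval dT) is:

```python
import numpy as np

def flight_parameters(P, dT):
    """Displacement, velocity and acceleration from coordinates, eqs. (19)-(21).

    P: (T, 3) array of (X, Y, Z) per epoch; dT: time interval between epochs.
    """
    D = P - P[0]                         # displacement w.r.t. the initial epoch
    V = (P[2:] - P[:-2]) / (2.0 * dT)    # central difference, valid for n = 1..T-2
    A = (V[2:] - V[:-2]) / (2.0 * dT)    # central difference of the velocities
    return D, V, A
```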

2.3.4 attitude resolving, as shown in FIGS. 3-5;

The attitude matrix H is obtained through the time-sequence coordinate transformation of the target points:

[X_{n+1}, Y_{n+1}, Z_{n+1}]^T = H [X_n, Y_n, Z_n]^T + [ΔX_{n+1}, ΔY_{n+1}, ΔZ_{n+1}]^T   (22)

where X_{n+1}, Y_{n+1} and Z_{n+1} represent the space coordinates of the target point in the X, Y and Z directions at time n+1; X_n, Y_n and Z_n represent the space coordinates of the target point in the X, Y and Z directions at time n; and ΔX_{n+1}, ΔY_{n+1} and ΔZ_{n+1} represent the offsets of the target in the X, Y and Z directions at time n+1.

Through coordinate rotation, the rotation matrix M is obtained from the attitude matrix.

The exterior orientation angle elements of the spatial attitude change of each subsequent state relative to the initial state are then solved from the elements m_ij of M:

φ = arctan(-m_13 / m_33),  ω = arcsin(-m_23),  κ = arctan(m_21 / m_22)   (23)
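A sketch of the final angle extraction, assuming the common photogrammetric φ-ω-κ parameterization with the direction cosines a_i, b_i, c_i as the rows of M (other angle conventions would change these formulas):

```python
import numpy as np

def exterior_angles(M):
    """Exterior orientation angle elements from a rotation matrix M, eq. (23)."""
    phi = np.arctan2(-M[0, 2], M[2, 2])   # phi   = atan(-a3 / c3)
    omega = np.arcsin(-M[1, 2])           # omega = asin(-b3)
    kappa = np.arctan2(M[1, 0], M[1, 1])  # kappa = atan(b1 / b2)
    return phi, omega, kappa
```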

3. experimental results and accuracy

Fig. 8 is a schematic diagram of the distribution and numbering of control points of the experimental control field of the unmanned aerial vehicle, and the following table 1 shows the camera orientation results.

TABLE 1 Camera orientation results

FIGS. 9-11 show the displacement measurements in the X, Y, and Z directions, respectively; FIGS. 12-14 show the velocity measurements in the X, Y, and Z directions, respectively; FIGS. 15-17 show the acceleration measurements in the X, Y, and Z directions, respectively.

For 50 uniformly and randomly extracted unmanned aerial vehicle point locations, the camera poses in the model coordinate system were obtained through relative orientation, and the three spatial directions and the actual scale were determined using control points on the ground. According to the experimental results, the elevation precision of the aerial unmanned aerial vehicle point location measurements reaches millimeter level, the plane precision is centimeter level, and the overall positioning precision is about 10 cm-20 cm.

To summarize: the invention provides an unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method, which addresses the video measurement of the unmanned aerial vehicle low-altitude flight pose without control points by solving the camera pose estimation problem with a relative orientation algorithm. First, the exterior orientation elements of the cameras in a model coordinate system are obtained by an essential matrix analysis method, providing initial pose parameter values for the subsequent relative orientation algorithm; the model coordinate system is then converted into a self-defined local object coordinate system by an absolute orientation algorithm, the three-dimensional space coordinates of the unmanned aerial vehicle target in the local object coordinate system are solved, and the flight speed, acceleration and attitude angles are obtained by further analysis. The results of the invention have potential application value in unmanned aerial vehicle system development and related fields.

It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the described module may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.

The electronic device of the present invention includes a Central Processing Unit (CPU) that can perform various appropriate actions and processes according to computer program instructions stored in a Read Only Memory (ROM) or computer program instructions loaded from a storage unit into a Random Access Memory (RAM). In the RAM, various programs and data required for the operation of the device can also be stored. The CPU, ROM, and RAM are connected to each other via a bus. An input/output (I/O) interface is also connected to the bus.

A plurality of components in the device are connected to the I/O interface, including: an input unit such as a keyboard, a mouse, etc.; an output unit such as various types of displays, speakers, and the like; storage units such as magnetic disks, optical disks, and the like; and a communication unit such as a network card, modem, wireless communication transceiver, etc. The communication unit allows the device to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.

The processing unit performs the various methods and processes described above, such as methods S1-S4. For example, in some embodiments, the methods S1-S4 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as a storage unit. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device via ROM and/or the communication unit. When the computer program is loaded into RAM and executed by the CPU, one or more of the steps of methods S1-S4 described above may be performed. Alternatively, in other embodiments, the CPU may be configured to perform methods S1-S4 in any other suitable manner (e.g., by way of firmware).

The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), systems on a chip (SOC), complex programmable logic devices (CPLD), and the like.

Program code for implementing the methods of the present invention may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of the present invention, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
