Unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor point

Document No.: 849040    Publication date: 2021-03-16

Note: This invention, "Unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor point," was created by 相晓嘉, 周晗, 唐邓清, 常远, 闫超, 黄依新, 陈紫叶, 兰珍, 刘兴宇 and 李子杏 on 2020-11-09. Abstract: The invention discloses an unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor points. Addressing the spatial position and attitude estimation requirements of the unmanned aerial vehicle landing process, the method constructs an extended Kalman filter model for unmanned aerial vehicle pose estimation based on visual anchor point measurements. Relying on extended Kalman filter theory, it realizes the optimal estimate of the unmanned aerial vehicle pose under the minimum sum-of-squared-error two-norm criterion, effectively reduces the influence of ground-based vision observation errors during landing on pose estimation accuracy, and greatly improves the accuracy and robustness of pose estimation during unmanned aerial vehicle landing compared with traditional methods.

1. An unmanned aerial vehicle landing pose filtering estimation method based on visual anchor points is characterized by comprising the following steps:

constructing an unmanned aerial vehicle pose estimation extended Kalman filtering model according to the measurement condition of the visual anchor point in the landing process of the unmanned aerial vehicle; the model comprises a system state prediction equation and a system observation equation;

defining a visual anchor point measurement value of the unmanned aerial vehicle according to the image generalized characteristics and the application scene characteristics of the unmanned aerial vehicle;

acquiring the space pose of the unmanned aerial vehicle at the last moment, and acquiring a predicted value of the space pose of the unmanned aerial vehicle at the current moment by using a system state prediction equation according to the input of the unmanned aerial vehicle system at the current moment and the space pose of the unmanned aerial vehicle at the last moment;

obtaining a predicted value of a measured value of the unmanned aerial vehicle visual anchor point at the current moment by using a system observation equation according to the predicted value of the unmanned aerial vehicle space pose at the current moment and other system observed quantities, and obtaining a predicted value of an image position of the unmanned aerial vehicle visual anchor point at the current moment according to the predicted value of the measured value;

acquiring an image position measurement value of the unmanned aerial vehicle visual anchor point at the current moment, and, according to the unmanned aerial vehicle pose estimation extended Kalman filtering model, obtaining the space pose of the unmanned aerial vehicle at the current moment through an unmanned aerial vehicle state updating equation by utilizing the unmanned aerial vehicle space pose prediction value, the image position prediction value of the unmanned aerial vehicle visual anchor point at the current moment, and the image position measurement value of the unmanned aerial vehicle visual anchor point at the current moment.

2. The unmanned aerial vehicle landing pose filtering estimation method based on the visual anchor point according to claim 1, wherein the obtaining of the predicted value of the spatial pose of the unmanned aerial vehicle at the current moment by using a system state prediction equation according to the input of the unmanned aerial vehicle system at the current moment and the spatial pose of the unmanned aerial vehicle at the last moment comprises:

according to the acceleration item input by the unmanned aerial vehicle system at the current moment and the space pose of the unmanned aerial vehicle at the last moment, a system state prediction equation is utilized to obtain a predicted value of the space pose of the unmanned aerial vehicle at the current moment as follows:

x_{k|k-1} = f_s(x_{k-1|k-1}, u_k)

where f_s(·) is the system state prediction equation and u_k is the input of the unmanned aerial vehicle system at the current moment;

according to the application scene of the unmanned aerial vehicle, ignoring the dynamic part of the unmanned aerial vehicle motion, and obtaining the predicted value of the space pose of the unmanned aerial vehicle at the current moment as follows:

x_{k|k-1} = F_k x_{k-1|k-1},  F_k = [ I_{3×3}  Δt_{k|k-1}  0  0 ; 0  I_{3×3}  0  0 ; 0  0  I_{3×3}  Δt_{k|k-1} ; 0  0  0  I_{3×3} ]

where x_{k|k-1} is the predicted value of the space pose of the unmanned aerial vehicle at the current moment; F_k is the state transition matrix; x_{k-1|k-1} = [p^T, v^T, Θ^T, ω^T]^T is the space pose of the unmanned aerial vehicle at the previous moment; I_{3×3} is the identity matrix; Δt_{k|k-1} is a 3×3 diagonal matrix with diagonal elements Δt, where Δt is the time difference between the current moment and the previous moment; p is the position; v is the velocity; Θ is the attitude Euler angle; ω is the angular velocity.

3. The unmanned aerial vehicle landing pose filtering estimation method based on visual anchor points according to claim 1, wherein the unmanned aerial vehicle visual anchor point measurement value is defined according to the image generalized characteristics and the application scene characteristics of the unmanned aerial vehicle, and the method comprises the following steps:

according to the image generalized characteristics and the application scene characteristics of the unmanned aerial vehicle, defining the vision anchor point measurement value of the unmanned aerial vehicle as follows:

z = [z_1^T, z_2^T, …, z_M^T]^T

where z is the measured value of the visual anchor points of the unmanned aerial vehicle; M is the number of visual anchor points; z_m = (u_m, v_m)^T is the image position of the m-th visual anchor point.

4. The unmanned aerial vehicle landing pose filtering estimation method based on visual anchor points according to claim 1, wherein the obtaining of the predicted value of the measured value of the visual anchor point of the unmanned aerial vehicle at the current moment by using a system observation equation according to the predicted value of the spatial pose of the unmanned aerial vehicle at the current moment and other system observations comprises:

according to the predicted value of the space pose of the unmanned aerial vehicle at the current moment and other system observed quantities, the predicted value of the measured value of the visual anchor point of the unmanned aerial vehicle at the current moment is obtained by using a system observation equation:

z_{k|k-1} = h(x_{k|k-1})

where z_{k|k-1} is the predicted value of the unmanned aerial vehicle visual anchor point measured value at the current moment; h(·) is the system observation equation, which projects the anchor points through the transform chain K'·T_g^c·T_{g'}^g·T_w^{g'}·T_b^w·P^b with perspective division by s; s is the image projection normalization factor; K' = [ f/d_x  0  c_x ; 0  f/d_y  c_y ; 0  0  1 ] is the internal parameter matrix of the camera, f is the focal length of the camera, d_x and d_y are the actual width and height of each pixel respectively, and (c_x, c_y) is the pixel coordinate of the image center point; T_g^c is the homogeneous transformation matrix from the pan-tilt coordinate system g to the camera coordinate system c; T_{g'}^g is the homogeneous transformation matrix from the pan-tilt base coordinate system g' to the pan-tilt coordinate system g; T_w^{g'} is the homogeneous transformation matrix from the world coordinate system w to the pan-tilt base coordinate system g'; T_b^w is the homogeneous transformation matrix from the unmanned aerial vehicle body coordinate system b to the world coordinate system w; P^b is the homogeneous spatial coordinate matrix of all visual anchor points in the target body coordinate system.
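The projection chain above can be sketched numerically as follows. This is a minimal illustration, not the patent's implementation: the focal length, pixel size, principal point, and identity extrinsic transforms are all assumed placeholder values.

```python
import numpy as np

def intrinsic_matrix(f, dx, dy, cx, cy):
    # K' as defined above: focal length over physical pixel size on the
    # diagonal, principal point (cx, cy) in pixels.
    return np.array([[f / dx, 0.0, cx],
                     [0.0, f / dy, cy],
                     [0.0, 0.0, 1.0]])

def project_anchor(K, T_c_g, T_g_gp, T_gp_w, T_w_b, P_b):
    # Chain of homogeneous transforms: body -> world -> pan-tilt base ->
    # pan-tilt -> camera, then perspective division by the scale factor s.
    P_c = T_c_g @ T_g_gp @ T_gp_w @ T_w_b @ P_b   # 4x1 homogeneous point
    uvw = K @ P_c[:3]
    s = uvw[2]                                    # projection normalization factor
    return uvw[:2] / s

# Illustrative numbers: 8 mm lens, 5 micrometre pixels, 1280x720 image.
K = intrinsic_matrix(f=0.008, dx=5e-6, dy=5e-6, cx=640.0, cy=360.0)
eye = np.eye(4)                                   # assume aligned frames for the demo
P_b = np.array([0.0, 0.0, 10.0, 1.0])             # anchor 10 m along the optical axis
uv = project_anchor(K, eye, eye, eye, eye, P_b)   # lands on the principal point
```

With all frames aligned, a point on the optical axis projects to the principal point, which is a quick sanity check on the transform chain.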

5. The unmanned aerial vehicle landing pose filtering estimation method based on visual anchor points according to claim 1, wherein the unmanned aerial vehicle spatial pose at the current moment is obtained through an unmanned aerial vehicle state updating equation by using the unmanned aerial vehicle spatial pose prediction value, the image position prediction value of the unmanned aerial vehicle visual anchor point at the current moment and the image position measurement value of the unmanned aerial vehicle visual anchor point at the current moment according to the unmanned aerial vehicle pose estimation extended Kalman filtering model, and the method comprises the following steps:

predicting the error covariance matrix at the current moment by using the error covariance matrix at the previous moment according to the fact that a system state prediction equation is a linear equation;

according to the predicted current-time error covariance matrix, obtaining a current-time Kalman gain by using an unmanned aerial vehicle space pose predicted value and an image position predicted value of an unmanned aerial vehicle visual anchor point at the current time;

and updating the state and the error covariance matrix of the unmanned aerial vehicle according to the Kalman gain at the current moment, and obtaining the space pose of the unmanned aerial vehicle at the current moment by using the image position measurement value of the visual anchor point of the unmanned aerial vehicle at the current moment through an unmanned aerial vehicle state updating equation.

6. The unmanned aerial vehicle landing pose filtering estimation method based on visual anchor points according to claim 5, wherein the system state prediction equation is a linear equation, and the error covariance matrix at the current moment is predicted by using the error covariance matrix at the previous moment, and the method comprises the following steps:

according to the fact that a system state prediction equation is a linear equation, the error covariance matrix at the current moment is predicted by the aid of the error covariance matrix at the previous moment:

P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k

where P_{k|k-1} is the predicted error covariance matrix at the current moment; P_{k-1|k-1} is the error covariance matrix at the previous moment; F_k is the state transition matrix; Q_k is the covariance matrix of the state prediction noise.

7. The unmanned aerial vehicle landing pose filtering estimation method based on visual anchor points according to claim 5, wherein the current time Kalman gain is obtained by utilizing the predicted value of the unmanned aerial vehicle spatial pose and the predicted value of the image position of the unmanned aerial vehicle visual anchor point at the current time according to the predicted current time error covariance matrix, and the method comprises the following steps:

according to the predicted current time error covariance matrix, the current Kalman gain is obtained by utilizing the unmanned aerial vehicle space pose predicted value x_{k|k-1} and the image position predicted value h(x_{k|k-1}) of the unmanned aerial vehicle visual anchor point at the current moment:

K_k = P_{k|k-1} H_k^T S_k^{-1},  S_k = H_k P_{k|k-1} H_k^T + R_k

where K_k is the current time Kalman gain; H_k is the measurement matrix (the Jacobian of h(·) evaluated at x_{k|k-1}); S_k is the innovation covariance matrix; R_k is the observation noise covariance matrix.

8. The unmanned aerial vehicle landing pose filtering estimation method based on visual anchor points according to claim 5, wherein the unmanned aerial vehicle state and error covariance matrix are updated according to the current moment Kalman gain, and the current moment unmanned aerial vehicle space pose is obtained by using the image position measurement value of the current moment unmanned aerial vehicle visual anchor point, comprising the following steps:

updating the state and the error covariance matrix of the unmanned aerial vehicle according to Kalman gain at the current moment, and obtaining the space pose of the unmanned aerial vehicle at the current moment through an unmanned aerial vehicle state updating equation by utilizing the image position measurement value of the visual anchor point of the unmanned aerial vehicle at the current moment:

x_{k|k} = x_{k|k-1} + K_k (z_k − z_{k|k-1})

P_{k|k} = (I − K_k H_k) P_{k|k-1}

where x_{k|k} is the space pose of the unmanned aerial vehicle at the current moment; z_k is the measured value of the visual anchor point of the unmanned aerial vehicle at the current moment; P_{k|k} is the error covariance matrix at the current moment.
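The covariance prediction, Kalman gain, and update equations of claims 6 to 8 can be sketched together as one numerical step. This is a minimal illustration under assumed dimensions and noise values, not the patent's parameters; a scalar state observed directly stands in for the 12-dimensional pose.

```python
import numpy as np

def ekf_update(x_pred, P_prev, F, Q, H, R, z, z_pred):
    # Claim 6: predict the error covariance with the linear transition F.
    P_pred = F @ P_prev @ F.T + Q
    # Claim 7: innovation covariance S_k, then Kalman gain K_k.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Claim 8: correct the state with the measured anchor positions and
    # shrink the covariance accordingly.
    x_new = x_pred + K @ (z - z_pred)
    P_new = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_new, P_new

# Scalar toy example with made-up noise levels.
x_new, P_new = ekf_update(
    x_pred=np.array([1.0]), P_prev=np.array([[1.0]]),
    F=np.eye(1), Q=np.array([[0.1]]), H=np.eye(1), R=np.array([[0.1]]),
    z=np.array([2.0]), z_pred=np.array([1.0]))
```

The corrected state lands between the prediction and the measurement, weighted by the gain, and the updated covariance is smaller than the predicted one, which is the expected qualitative behaviour of the filter.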

9. An unmanned aerial vehicle landing pose filtering estimation system based on visual anchor points, characterized by comprising:

the model construction module is used for constructing an unmanned aerial vehicle pose estimation extended Kalman filtering model according to the measurement condition of the visual anchor point in the landing process of the unmanned aerial vehicle; the model comprises a system state prediction equation and a system observation equation; defining a visual anchor point measurement value of the unmanned aerial vehicle according to the image generalized characteristics and the application scene characteristics of the unmanned aerial vehicle;

the unmanned aerial vehicle space pose prediction module is used for acquiring the space pose of the unmanned aerial vehicle at the last moment, and obtaining a predicted value of the space pose of the unmanned aerial vehicle at the current moment by using a system state prediction equation according to the input of the unmanned aerial vehicle system at the current moment and the space pose of the unmanned aerial vehicle at the last moment;

the image position prediction module of the unmanned aerial vehicle visual anchor point is used for obtaining a predicted value of a measured value of the unmanned aerial vehicle visual anchor point at the current moment by using a system observation equation according to the predicted value of the space pose of the unmanned aerial vehicle at the current moment and other system observed quantities, and obtaining an image position predicted value of the unmanned aerial vehicle visual anchor point at the current moment according to the predicted value of the measured value;

and the unmanned aerial vehicle space pose acquisition module is used for acquiring an image position measurement value of the unmanned aerial vehicle visual anchor point at the current moment and, according to the unmanned aerial vehicle pose estimation extended Kalman filtering model, obtaining the unmanned aerial vehicle space pose at the current moment through an unmanned aerial vehicle state updating equation by utilizing the unmanned aerial vehicle space pose prediction value, the image position prediction value of the unmanned aerial vehicle visual anchor point at the current moment, and the image position measurement value of the unmanned aerial vehicle visual anchor point at the current moment.

10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program implements the steps of the method of any of claims 1 to 7.

Technical Field

The invention relates to the technical field of unmanned aerial vehicle autonomous landing, in particular to an unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor points.

Background

During the autonomous take-off and landing of an unmanned aerial vehicle, acquiring its position and attitude in real time by combining an airborne global positioning system with an inertial navigation system is currently the main means of sensing the unmanned aerial vehicle's state. However, since environmental factors such as magnetic fields and temperature easily interfere with the airborne positioning system, relying on it alone cannot provide stable and accurate pose information throughout the landing process. By observing the landing process with a ground-based monocular vision system, real-time estimation of the spatial position and attitude of the unmanned aerial vehicle can be realized using computer vision technology, assisting the airborne positioning system in providing more accurate and stable real-time spatial pose information. At present, traditional methods that estimate the spatial position and attitude of a target from two-dimensional images, such as binocular ranging and PnP problem solving, still have many deficiencies in accuracy and robustness. Therefore, a high-precision, strongly robust unmanned aerial vehicle pose estimation method based on monocular vision is urgently needed.

Disclosure of Invention

The invention provides an unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor points, which are used for overcoming the defects of low precision and poor robustness in the prior art.

In order to achieve the purpose, the invention provides an unmanned aerial vehicle landing pose filtering estimation method based on a visual anchor point, which comprises the following steps:

constructing an unmanned aerial vehicle pose estimation extended Kalman filtering model according to the measurement condition of the visual anchor point in the landing process of the unmanned aerial vehicle; the model comprises a system state prediction equation and a system observation equation;

defining a visual anchor point measurement value of the unmanned aerial vehicle according to the image generalized characteristics and the application scene characteristics of the unmanned aerial vehicle;

acquiring the space pose of the unmanned aerial vehicle at the last moment, and acquiring a predicted value of the space pose of the unmanned aerial vehicle at the current moment by using a system state prediction equation according to the input of the unmanned aerial vehicle system at the current moment and the space pose of the unmanned aerial vehicle at the last moment;

obtaining a predicted value of a measured value of the unmanned aerial vehicle visual anchor point at the current moment by using a system observation equation according to the predicted value of the unmanned aerial vehicle space pose at the current moment and other system observed quantities, and obtaining a predicted value of an image position of the unmanned aerial vehicle visual anchor point at the current moment according to the predicted value of the measured value;

acquiring an image position measurement value of the unmanned aerial vehicle visual anchor point at the current moment, and, according to the unmanned aerial vehicle pose estimation extended Kalman filtering model, obtaining the space pose of the unmanned aerial vehicle at the current moment through an unmanned aerial vehicle state updating equation by utilizing the unmanned aerial vehicle space pose prediction value, the image position prediction value of the unmanned aerial vehicle visual anchor point at the current moment, and the image position measurement value of the unmanned aerial vehicle visual anchor point at the current moment.

In order to achieve the above object, the present invention further provides a system for filtering and estimating landing pose of an unmanned aerial vehicle based on a visual anchor point, comprising:

the model construction module is used for constructing an unmanned aerial vehicle pose estimation extended Kalman filtering model according to the measurement condition of the visual anchor point in the landing process of the unmanned aerial vehicle; the model comprises a system state prediction equation and a system observation equation; defining a visual anchor point measurement value of the unmanned aerial vehicle according to the image generalized characteristics and the application scene characteristics of the unmanned aerial vehicle;

the unmanned aerial vehicle space pose prediction module is used for acquiring the space pose of the unmanned aerial vehicle at the last moment, and obtaining a predicted value of the space pose of the unmanned aerial vehicle at the current moment by using a system state prediction equation according to the input of the unmanned aerial vehicle system at the current moment and the space pose of the unmanned aerial vehicle at the last moment;

the image position prediction module of the unmanned aerial vehicle visual anchor point is used for obtaining a predicted value of a measured value of the unmanned aerial vehicle visual anchor point at the current moment by using a system observation equation according to the predicted value of the space pose of the unmanned aerial vehicle at the current moment and other system observed quantities, and obtaining an image position predicted value of the unmanned aerial vehicle visual anchor point at the current moment according to the predicted value of the measured value;

and the unmanned aerial vehicle space pose acquisition module is used for acquiring an image position measurement value of the unmanned aerial vehicle visual anchor point at the current moment and, according to the unmanned aerial vehicle pose estimation extended Kalman filtering model, obtaining the unmanned aerial vehicle space pose at the current moment through an unmanned aerial vehicle state updating equation by utilizing the unmanned aerial vehicle space pose prediction value, the image position prediction value of the unmanned aerial vehicle visual anchor point at the current moment, and the image position measurement value of the unmanned aerial vehicle visual anchor point at the current moment.

To achieve the above object, the present invention further provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.

Compared with the prior art, the invention has the beneficial effects that:

the unmanned aerial vehicle landing pose filtering estimation method based on visual anchor points provided by the invention addresses the spatial position and attitude estimation requirements of the unmanned aerial vehicle landing process. An extended Kalman filter model for unmanned aerial vehicle pose estimation based on visual anchor point measurements is constructed; relying on extended Kalman filter theory, the optimal estimate of the unmanned aerial vehicle pose under the minimum sum-of-squared-error two-norm criterion is realized, the influence of ground-based vision observation errors during landing on pose estimation accuracy is effectively reduced, and pose estimation accuracy and robustness during landing are greatly improved compared with traditional methods.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.

FIG. 1 is a flow chart of the unmanned aerial vehicle landing pose filtering estimation method based on visual anchor points according to the present invention;

FIG. 2 is a diagram of the physical coordinate systems involved when the ground-based vision system estimates the unmanned aerial vehicle landing process in real time;

FIG. 3 is a diagram of a landing trajectory and positioning and attitude determination error curve of an unmanned aerial vehicle generated by the method of the present invention and a conventional method;

FIG. 4 shows the root mean square error of the target pose estimate in three stages of unmanned aerial vehicle landing, using the method provided by the present invention and the conventional method.

The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments. It is obvious that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention.

In addition, the technical solutions in the embodiments of the present invention may be combined with each other, but such combinations must be realizable by those skilled in the art; when technical solutions contradict each other or cannot be realized, the combination should be considered nonexistent and outside the protection scope of the present invention.

The invention provides an unmanned aerial vehicle landing pose filtering estimation method based on a visual anchor point, which comprises the following steps of:

101: constructing an unmanned aerial vehicle pose estimation extended Kalman filtering model according to the measurement condition of the visual anchor point in the landing process of the unmanned aerial vehicle; the model comprises a system state prediction equation and a system observation equation;

the method has the advantages that observation errors exist in the foundation vision in the landing process of the unmanned aerial vehicle, and the accuracy of unmanned aerial vehicle pose estimation can be directly influenced, so that the unmanned aerial vehicle pose estimation extended Kalman filtering model is constructed to reduce the observation errors.

102: defining a visual anchor point measurement value of the unmanned aerial vehicle according to the image generalized characteristics and the application scene characteristics of the unmanned aerial vehicle;

the generalized image characteristics of the unmanned aerial vehicle are mainly divided into 3 types of points, lines and planes.

The application scene characteristics mainly refer to differences in unmanned aerial vehicle landing trajectories and in environmental factors such as weather and time of day; as a result, features with more demanding imaging conditions, such as lines and planes, are difficult to form stably in the unmanned aerial vehicle's imaging, and the target motion range is wide.

103: acquiring the space pose of the unmanned aerial vehicle at the last moment, and acquiring a predicted value of the space pose of the unmanned aerial vehicle at the current moment by using a system state prediction equation according to the input of the unmanned aerial vehicle system at the current moment and the space pose of the unmanned aerial vehicle at the last moment;

the space pose of the unmanned aerial vehicle comprises a position and a pose Euler angle in a world coordinate system.

The input of the unmanned aerial vehicle system at the current moment is mainly an acceleration item.

104: obtaining a predicted value of a measured value of the visual anchor point of the unmanned aerial vehicle at the current moment by using a system observation equation according to the predicted value of the space pose of the unmanned aerial vehicle at the current moment and other system observed quantities, and obtaining a predicted value of the image position of the visual anchor point of the unmanned aerial vehicle at the current moment according to the predicted value of the measured value;

105: the method comprises the steps of obtaining an image position measurement value of an unmanned aerial vehicle visual anchor point at the current moment, estimating an extended Kalman filtering model according to the pose of the unmanned aerial vehicle, and obtaining the space pose of the unmanned aerial vehicle at the current moment through an unmanned aerial vehicle state updating equation by utilizing an unmanned aerial vehicle space pose prediction value, an image position prediction value of the unmanned aerial vehicle visual anchor point at the current moment and an image position measurement value of the unmanned aerial vehicle visual anchor point at the current moment.

The image position measurement value of the unmanned aerial vehicle visual anchor point at the current moment is obtained by existing visual anchor point detection methods, such as point features (e.g., SIFT, Harris) or blob features (e.g., LoG, DoG) in the image.

The unmanned aerial vehicle landing pose filtering estimation method based on the visual anchor points, provided by the invention, is oriented to the space position and attitude estimation requirements in the unmanned aerial vehicle landing process, an unmanned aerial vehicle pose estimation extended Kalman filtering model based on the visual anchor point measurement is constructed, the optimal estimation of the unmanned aerial vehicle pose under the minimum error two-norm square sum index is realized by depending on an extended Kalman filter theory, the influence of observation errors of foundation vision in the landing process on the unmanned aerial vehicle pose estimation accuracy is effectively reduced, and the pose estimation accuracy and robustness in the unmanned aerial vehicle landing process are greatly improved compared with the traditional method.

In one embodiment, for step 101, the unmanned aerial vehicle pose estimation extended Kalman filter model includes a system state prediction equation f_s and a system observation equation h. For the unmanned aerial vehicle pose estimation problem, the state x of the unmanned aerial vehicle consists of its position p, velocity v, attitude Euler angles Θ, and corresponding angular velocity ω in the world coordinate system:

x = [p^T, v^T, Θ^T, ω^T]^T

in another embodiment, for step 102, defining a visual anchor point measurement for the drone based on the image generalized features and application scene characteristics of the drone comprises:

according to the image generalized characteristics and the application scene characteristics of the unmanned aerial vehicle, defining the vision anchor point measurement value of the unmanned aerial vehicle as follows:

z = [z_1^T, z_2^T, …, z_M^T]^T

where z is the measured value of the visual anchor points of the unmanned aerial vehicle; M is the number of visual anchor points; z_m = (u_m, v_m)^T is the image position of the m-th visual anchor point.

In the unmanned aerial vehicle pose estimation extended Kalman filter model, the visual anchor point measurement value z is formed from the generalized image features of the unmanned aerial vehicle. Common generalized features are mainly divided into three types: points, lines, and planes. These features tend to have intuitive physical meanings that are easy to understand. Although line and plane features are common in unmanned aerial vehicle imaging, their integrity is easily destroyed by occlusion, which reduces the precision of feature detection. Point features often correspond to inflection points of the target contour, line intersections, or regions that differ from their surroundings in the image, and their imaging is more stable than that of line and plane features. In addition, in the unmanned aerial vehicle landing application scene, considering differences in landing trajectories and in environmental factors such as weather and time of day, features with more demanding conditions, such as lines and planes, are difficult to form stably in the imaging. In conclusion, the visual inflection points of the unmanned aerial vehicle are used as the anchor point generalized features of the target, and the measurement value z is defined accordingly.
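The measurement vector definition above can be sketched as a simple stacking of detected anchor point positions. The corner coordinates below are made-up placeholders standing in for the output of a point-feature detector.

```python
import numpy as np

def stack_anchor_measurements(corners):
    # corners: list of (u, v) pixel positions of the M detected anchor
    # points; the measurement is z = [u_1, v_1, ..., u_M, v_M]^T,
    # matching the definition z = [z_1^T, ..., z_M^T]^T.
    return np.asarray(corners, dtype=float).reshape(-1)

# Three hypothetical anchor points detected on the airframe.
corners = [(412.3, 280.1), (430.8, 281.5), (421.0, 301.7)]
z = stack_anchor_measurements(corners)   # 2M-dimensional measurement vector
```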

In a next embodiment, for step 103, obtaining a predicted value of the spatial pose of the drone at the current time by using a system state prediction equation according to the input of the drone system at the current time and the spatial pose of the drone at the previous time includes:

301: according to the acceleration term input by the unmanned aerial vehicle system at the current moment and the spatial pose of the unmanned aerial vehicle at the previous moment, the system state prediction equation is used to obtain the predicted spatial pose of the unmanned aerial vehicle at the current moment:

xk|k-1=fs(xk-1|k-1,uk)

where f_s(·) is the system state prediction equation and u_k is the input of the unmanned aerial vehicle system at the current moment;

302: according to the application scene of the unmanned aerial vehicle, ignoring the dynamic part of the unmanned aerial vehicle motion, and obtaining the predicted value of the space pose of the unmanned aerial vehicle at the current moment as follows:

x_{k|k-1} = F_k x_{k-1|k-1},   F_k = [ I_{3×3}  Δt_{k|k-1}  0  0 ; 0  I_{3×3}  0  0 ; 0  0  I_{3×3}  Δt_{k|k-1} ; 0  0  0  I_{3×3} ]

where x_{k|k-1} is the predicted spatial pose of the unmanned aerial vehicle at the current moment; F_k is the state transition matrix; x_{k-1|k-1} is the spatial pose of the unmanned aerial vehicle at the previous moment; I_{3×3} is the identity matrix; Δt_{k|k-1} is a 3×3 diagonal matrix whose diagonal elements are Δt, the time difference between the current moment and the previous moment; p^w is the position; v^w is the velocity; Θ^w is the attitude Euler angle; ω^w is the angular velocity.

According to the kinematics and dynamics of the target's motion, the motion of the target from time k-1 to time k is determined by the velocity and angular velocity at time k-1 together with the acceleration and angular acceleration over this period. In the unmanned aerial vehicle landing scenario, the interval between two successive pose estimates is measured in milliseconds, i.e. the time interval Δt between time k-1 and time k is small, so the dynamic part of the unmanned aerial vehicle motion can be neglected: the unmanned aerial vehicle is assumed to translate and rotate uniformly within the interval Δt.
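The constant-velocity, constant-rate prediction of step 302 can be sketched in Python (a minimal illustration; the 12-dimensional state layout and the sample numbers are assumptions for demonstration, not values from the patent):

```python
import numpy as np

def predict_state(x_prev, dt):
    """Prediction x_{k|k-1} = F_k x_{k-1|k-1} with the block-diagonal
    transition matrix built from I_{3x3} and dt * I_{3x3}.
    State layout: [position(3), velocity(3), Euler angles(3), angular rate(3)]."""
    I3, Z3 = np.eye(3), np.zeros((3, 3))
    Dt = dt * I3
    F = np.block([
        [I3, Dt, Z3, Z3],   # p_k     = p_{k-1} + v * dt
        [Z3, I3, Z3, Z3],   # v_k     = v_{k-1}
        [Z3, Z3, I3, Dt],   # theta_k = theta_{k-1} + omega * dt
        [Z3, Z3, Z3, I3],   # omega_k = omega_{k-1}
    ])
    return F @ x_prev, F

x_prev = np.zeros(12)
x_prev[3:6] = [1.0, 0.0, -0.5]    # velocity in m/s (hypothetical)
x_prev[9:12] = [0.0, 0.1, 0.0]    # angular rate in rad/s (hypothetical)
x_pred, F = predict_state(x_prev, dt=0.02)
print(x_pred[:3])                  # position advances by v * dt
```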

In a next embodiment, for step 104, obtaining the predicted value of the measurement value of the visual anchor point of the unmanned aerial vehicle at the current moment by using a system observation equation according to the predicted value of the spatial pose of the unmanned aerial vehicle at the current moment and other system observations, includes:

according to the predicted value of the space pose of the unmanned aerial vehicle at the current moment and other system observed quantities, the predicted value of the measured value of the visual anchor point of the unmanned aerial vehicle at the current moment is obtained by using a system observation equation:

z_{k|k-1} = h(x_{k|k-1}), where for each anchor point m

s [u_m, v_m, 1]^T = K' T^c_g T^g_{g'} T^{g'}_w T^w_b P^b_m,   K' = [ f/d_x  0  c_x ; 0  f/d_y  c_y ; 0  0  1 ]

in the formula, z_{k|k-1} is the predicted visual anchor point measurement value of the unmanned aerial vehicle at the current moment; h(·) is the system observation equation; s is the image projection normalization factor; K' is the camera intrinsic parameter matrix, where f is the camera focal length, d_x and d_y are the actual width and height of each pixel, and (c_x, c_y) are the pixel coordinates of the image center point. According to the physical coordinate systems involved in the ground-based vision system and the unmanned aerial vehicle system shown in figure 2, T^c_g is the homogeneous transformation matrix from the pan-tilt coordinate system g to the camera coordinate system c; T^g_{g'} is the homogeneous transformation matrix from the pan-tilt base coordinate system g' to the pan-tilt coordinate system g; T^{g'}_w is the homogeneous transformation matrix from the world coordinate system w to the pan-tilt base coordinate system g'; T^w_b is the homogeneous transformation matrix from the unmanned aerial vehicle body coordinate system b to the world coordinate system w, which can be derived directly from the position and attitude Euler angles contained in x_{k|k-1} through the transformation relations among Euler angles, translation vectors, and homogeneous transformation matrices between coordinate systems; P^b_m is the homogeneous spatial coordinate of the m-th visual anchor point in the target body coordinate system b.

The other system observations include the fixed transformation matrices T^c_g, T^g_{g'}, T^{g'}_w and the body-frame anchor point coordinates P^b_m.

the system observation equation h can be regarded as converting all visual anchor points from the target body coordinate system b to the image coordinate system, involving multiple coordinate-system transformations and the image projection of spatial points.

In this embodiment, the predicted image positions of the unmanned aerial vehicle's visual anchor points at the current moment can be obtained directly from the predicted measurement value z_{k|k-1}, which is computed from the position and attitude Euler angles in the world coordinate system contained in x_{k|k-1}.
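The observation equation h can be sketched as follows. For simplicity the chain of fixed gimbal transforms T^c_g T^g_{g'} T^{g'}_w is collapsed into a single world-to-camera matrix, and a Z-Y-X Euler convention is assumed (the patent does not fix one); all numeric values are illustrative only:

```python
import numpy as np

def euler_to_R(roll, pitch, yaw):
    """Rotation matrix from Z-Y-X Euler angles (an assumed convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def observe(x_pred, P_body, K, T_cam_world):
    """h(x): move the body-frame anchor points P_body (N x 4 homogeneous)
    into the world frame using the pose in x, then into the camera frame,
    and project them with the pinhole model."""
    T_wb = np.eye(4)
    T_wb[:3, :3] = euler_to_R(*x_pred[6:9])   # attitude Euler angles
    T_wb[:3, 3] = x_pred[:3]                  # position
    P_cam = (T_cam_world @ T_wb @ P_body.T).T  # N x 4 in the camera frame
    uvw = (K @ P_cam[:, :3].T).T               # pinhole projection
    return (uvw[:, :2] / uvw[:, 2:3]).reshape(-1)  # stacked (u, v) pairs

# Toy check: camera at the world origin looking down +Z, UAV 2 m in front.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P_body = np.array([[0.0, 0.0, 0.0, 1.0]])      # one anchor at the body origin
x = np.zeros(12); x[2] = 2.0                   # UAV at (0, 0, 2)
z_pred = observe(x, P_body, K, np.eye(4))
print(z_pred)                                  # anchor projects to image centre
```

A point on the optical axis projects to the principal point (c_x, c_y), which makes the toy case easy to verify by hand.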

In another embodiment, for step 105, estimating an extended kalman filter model from the pose of the drone, obtaining the spatial pose of the drone at the current time through a drone state update equation using the predicted value of the spatial pose of the drone, the predicted value of the image position of the drone visual anchor at the current time, and the measured value of the image position of the drone visual anchor at the current time, includes:

501: predicting the error covariance matrix at the current moment by using the error covariance matrix at the previous moment according to the fact that a system state prediction equation is a linear equation;

502: according to the predicted current-time error covariance matrix, obtaining a current-time Kalman gain by using an unmanned aerial vehicle space pose predicted value and an image position predicted value of an unmanned aerial vehicle visual anchor point at the current time;

503: and updating the state and the error covariance matrix of the unmanned aerial vehicle according to the Kalman gain at the current moment, and obtaining the space pose of the unmanned aerial vehicle at the current moment by using the image position measurement value of the visual anchor point of the unmanned aerial vehicle at the current moment through an unmanned aerial vehicle state updating equation.

In a certain embodiment, for step 501, predicting the error covariance matrix at the current time by using the error covariance matrix at the previous time according to the system state prediction equation as a linear equation includes:

since the system state prediction equation is linear, the error covariance matrix at the current moment is predicted using the error covariance matrix at the previous moment:

P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k

where P_{k|k-1} is the predicted error covariance matrix at the current moment; P_{k-1|k-1} is the error covariance matrix at the previous moment; F_k is the state transition matrix; Q_k is the covariance matrix of the state prediction noise.
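The covariance prediction of step 501 is a one-line computation (a minimal sketch; the identity transition matrix and the sample variances are assumptions chosen so the result is easy to check):

```python
import numpy as np

def predict_covariance(P_prev, F, Q):
    """P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k."""
    return F @ P_prev @ F.T + Q

n = 12
P_prev = np.eye(n) * 0.1     # previous-moment error covariance (hypothetical)
F = np.eye(n)                # identity F keeps the example minimal
Q = np.eye(n) * 0.01         # state prediction noise covariance (hypothetical)
P_pred = predict_covariance(P_prev, F, Q)
print(P_pred[0, 0])          # 0.1 + 0.01 = 0.11
```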

In a next embodiment, for step 502, obtaining a current-time kalman gain using the predicted value of the spatial pose of the unmanned aerial vehicle and the predicted value of the image position of the visual anchor of the unmanned aerial vehicle at the current time according to the predicted current-time error covariance matrix, includes:

according to the predicted error covariance matrix at the current moment, the Kalman gain at the current moment is obtained using the predicted spatial pose x_{k|k-1} of the unmanned aerial vehicle and the predicted image positions h(x_{k|k-1}) of the visual anchor points at the current moment:

S_k = H_k P_{k|k-1} H_k^T + R_k
K_k = P_{k|k-1} H_k^T S_k^{-1}

where K_k is the Kalman gain at the current moment; H_k is the measurement matrix, i.e. the Jacobian of the observation equation h evaluated at x_{k|k-1}; S_k is the observation residual covariance matrix; R_k is the observation noise covariance matrix.
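The gain computation of step 502 can be sketched as follows; a scalar case is used so the result can be checked by hand (the numbers are illustrative assumptions):

```python
import numpy as np

def kalman_gain(P_pred, H, R):
    """S_k = H_k P_{k|k-1} H_k^T + R_k;  K_k = P_{k|k-1} H_k^T S_k^{-1}."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    return K, S

# Scalar sanity check: state and measurement both one-dimensional.
P_pred = np.array([[4.0]])   # predicted error covariance (hypothetical)
H = np.array([[1.0]])        # direct observation of the state
R = np.array([[1.0]])        # observation noise covariance (hypothetical)
K, S = kalman_gain(P_pred, H, R)
print(K)   # 4 / (4 + 1) = 0.8
```

Large P relative to R drives the gain toward 1 (trust the measurement); small P drives it toward 0 (trust the prediction).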

In another embodiment, for step 503, updating the unmanned aerial vehicle state and the error covariance matrix according to the current-time kalman gain, and obtaining the current-time unmanned aerial vehicle spatial pose by using the image position measurement value of the current-time unmanned aerial vehicle visual anchor point, includes:

updating the state and the error covariance matrix of the unmanned aerial vehicle according to the Kalman gain at the current moment, and obtaining the space pose of the unmanned aerial vehicle at the current moment by using the image position measurement value of the visual anchor point of the unmanned aerial vehicle at the current moment:

x_{k|k} = x_{k|k-1} + K_k (z_k − z_{k|k-1})

P_{k|k} = (I − K_k H_k) P_{k|k-1}

where x_{k|k} is the spatial pose of the unmanned aerial vehicle at the current moment; z_k is the visual anchor point measurement value of the unmanned aerial vehicle at the current moment; P_{k|k} is the error covariance matrix at the current moment.
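The update of step 503 can be sketched as follows, again in a scalar case with a gain of 0.8 plugged in by hand (all numbers are illustrative assumptions):

```python
import numpy as np

def update(x_pred, P_pred, K, H, z, z_pred):
    """x_{k|k} = x_{k|k-1} + K_k (z_k - z_{k|k-1});
       P_{k|k} = (I - K_k H_k) P_{k|k-1}."""
    x = x_pred + K @ (z - z_pred)
    P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x, P

# Scalar example: prior state 4.0, measurement 5.0, gain 0.8.
x_pred = np.array([4.0]); P_pred = np.array([[4.0]])
H = np.array([[1.0]]); K = np.array([[0.8]])
z, z_pred = np.array([5.0]), H @ x_pred
x_new, P_new = update(x_pred, P_pred, K, H, z, z_pred)
print(x_new, P_new)   # state moves 80% of the way toward the measurement
```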

The visual-anchor-based unmanned aerial vehicle landing pose filtering estimation method is illustrated with a concrete application example: a ground-based vision physical system was built and used to estimate the spatial pose of the unmanned aerial vehicle in real time during landing. To verify the advantages of the disclosed method over the traditional approach, the conventional PnP target pose solving algorithm was used for comparison. Fig. 3 shows the landing trajectories of the unmanned aerial vehicle produced by the method of the present invention (denoted FP) and by the conventional PnP method (denoted NP). When the unmanned aerial vehicle is located at points E, F, and G respectively, the images captured by the ground-based camera and the visual anchor points of the unmanned aerial vehicle are shown in the figure. Fig. 4 reports the root mean square error (RMSE) of the target pose estimates of FP and NP over three stages of landing in two sets of physical experiments. Overall, FP exhibits higher estimation accuracy than the conventional NP in both position and attitude. The estimation errors of both methods decrease gradually as the unmanned aerial vehicle approaches the camera: the closer the unmanned aerial vehicle is, the larger its imaging scale, so an anchor observation error of the same number of pixels accounts for a smaller fraction of the target area and has a smaller effect on pose estimation accuracy. According to the practical requirements of unmanned aerial vehicle landing, the unmanned aerial vehicle should remain above the runway as much as possible throughout the landing, so the tolerance for positioning error in the runway plane is determined by the runway's length and width.

In general, the positioning error along the runway should be below 70 m and the positioning error perpendicular to the runway below 20 m, i.e. the RMSE in the Y direction should be below 70 m and the RMSE in the X direction below 20 m. According to the statistical results, FP meets these requirements at every stage of the landing. Second, the accuracy of the height-above-ground estimate is critical in the near-ground stage, reflected mainly in the Z-direction RMSE during the flare stage; FP achieved a Z-direction RMSE below 1 m in both sets of experiments. Finally, roll and pitch angle accuracy is also critical during the flare stage; under the FP algorithm, the RMSEs of the roll and pitch angles during the flare stage are below 5° and 2°, respectively. Overall, compared with the conventional NP method, FP shows higher unmanned aerial vehicle positioning and attitude determination accuracy and stronger robustness to observation errors.

In conclusion, an extended Kalman filtering model for unmanned aerial vehicle pose estimation based on visual anchor point observations is constructed to meet the spatial position and attitude estimation requirements of the landing process. Compared with the traditional method, it significantly improves the accuracy and robustness of pose estimation, provides strong technical support for building a ground-based vision assistance system for autonomous unmanned aerial vehicle landing, and has high practical value.

The invention also provides an unmanned aerial vehicle landing pose filtering estimation system based on visual anchor points, which comprises:

the model construction module is used for constructing an unmanned aerial vehicle pose estimation extended Kalman filtering model according to the measurement condition of the visual anchor point in the landing process of the unmanned aerial vehicle; the model comprises a system state prediction equation and a system observation equation;

the unmanned aerial vehicle space pose prediction module is used for acquiring the space pose of the unmanned aerial vehicle at the last moment, and obtaining a predicted value of the space pose of the unmanned aerial vehicle at the current moment by using a system state prediction equation according to the input of the unmanned aerial vehicle system at the current moment and the space pose of the unmanned aerial vehicle at the last moment;

the image position prediction module of the unmanned aerial vehicle visual anchor point is used for defining the measured value of the unmanned aerial vehicle visual anchor point according to the image generalized characteristics and the application scene characteristics of the unmanned aerial vehicle; obtaining a predicted value of a measured value of the visual anchor point of the unmanned aerial vehicle at the current moment by using a system observation equation according to the predicted value of the space pose of the unmanned aerial vehicle at the current moment, and obtaining a predicted value of an image position of the visual anchor point of the unmanned aerial vehicle at the current moment according to the predicted value of the measured value;

and the unmanned aerial vehicle space pose acquisition module is used for acquiring the image position measurement value of the unmanned aerial vehicle visual anchor point at the current moment, estimating an extended Kalman filtering model according to the unmanned aerial vehicle pose, and acquiring the unmanned aerial vehicle space pose at the current moment by using the unmanned aerial vehicle space pose prediction value, the image position prediction value of the unmanned aerial vehicle visual anchor point at the current moment and the image position measurement value of the unmanned aerial vehicle visual anchor point at the current moment.

The invention further provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.

The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.
