Insect-imitated vision integrated navigation method based on polarized light, optical flow vector and binocular vision sensor

Document No.: 934024 · Publication date: 2021-03-05

Note: this technique, "Insect-imitated vision integrated navigation method based on polarized light, optical flow vector and binocular vision sensor" (一种基于偏振光、光流矢量、双目视觉传感器的仿昆虫视觉组合导航方法), was designed by 褚金奎, 陈建华, 李金山 and 张然 on 2020-11-17. Its main content is as follows.

Abstract: An insect-imitated vision integrated navigation method based on polarized light, optical flow vectors and a binocular vision sensor. The method converts between the pixel coordinate system of spatial points and the world coordinate system; matches a spatial feature point across adjacent frames using ORB corner extraction; measures the yaw angle with a polarized light sensor and obtains a relative-position measurement guide from an optical flow sensor; computes optical flow vectors and uses them to reject moving objects and mismatched points; performs BA optimization, using an energy function to jointly optimize the binocular matching-point data, the yaw-angle data and the optical flow data; and solves the augmented matrix from the BA-optimized parameters to obtain the transformation matrix, from which the attitude angles and displacement are recovered to complete navigation. Imitating insects, the invention navigates and positions purely by vision using polarized light and images; it offers the concealment and anti-interference that non-autonomous modes such as radio navigation and satellite navigation cannot provide, together with high precision and strong robustness.

1. An insect-imitated vision integrated navigation method based on polarized light, optical flow vectors and a binocular vision sensor, characterized by comprising the following steps:

Step 1: based on the binocular vision sensor measurement, the pixel coordinate of a point P is [u, v]^T and the distance f (the focal length) from the physical imaging plane to the pinhole is known. Let the coordinate of P in the earth coordinate system be [X, Y, Z]^T and the coordinate of the corresponding point P′ on the physical imaging plane be [X′, Y′, Z′]^T. The pixel coordinates are scaled by a factor α on the u-axis and a factor β on the v-axis, and the origin is shifted by [c_x, c_y]^T. From Z/f = −X/X′ = −Y/Y′ and, taking the symmetric virtual imaging plane to drop the negative sign, X′ = f·X/Z and Y′ = f·Y/Z, the relationship between the P′ coordinates and the pixel coordinates [u, v]^T is:

u = α·X′ + c_x = α·f·X/Z + c_x,  v = β·Y′ + c_y = β·f·Y/Z + c_y

Let f_x = αf and f_y = βf; the transformation between pixel coordinates and spatial coordinates is obtained:

u = f_x·X/Z + c_x,  v = f_y·Y/Z + c_y,  i.e.  Z·[u, v, 1]^T = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]·[X, Y, Z]^T

the matrix formed by the intermediate quantities is called an internal reference matrix of the camera and is represented by K;

Step 2: the binocular camera consists of a left-eye camera and a right-eye camera placed horizontally. O_L and O_R are the optical centers of the left and right cameras, f is the focal length, b is the baseline, z is the scene depth, u_L and u_R are the imaging-plane coordinates, and P_L, P_R are the image points on the imaging planes of the left and right cameras. From the similar-triangle relationship:

(z − f)/z = (b − u_L + u_R)/b

the depth is obtained as

z = f·b/d,  d = u_L − u_R

where d is the disparity;

Step 3: the relative motion of the camera between two frames of images I_1 and I_2 is described by R and t. Corresponding feature points are matched between the two frames. The spatial homogeneous coordinate of point P is p = [X, Y, Z, 1]^T, and the projection of the corresponding point in I_2 onto the pixel plane has coordinate x_1 = [u_1, v_1, 1]^T. The augmented matrix [R|t] is defined from the matching points, and from the correspondence between adjacent images the solving equation s·x_1 = [R|t]·p is obtained;

Step 4: the yaw angle and displacement obtained directly from the binocular sensor have large errors, so the heading angle is corrected with polarized light. The polarization angle obtained by the multi-direction polarized-light sensor detection unit is φ, giving the corresponding polarization vector e with coordinates e = [cos(φ + K·π/2), sin(φ + K·π/2), 0]^T, where K takes ±1. At the initial moment of the integrated navigation system, the initial heading angle measured by the polarized light sensor is φ_0 and the initial pose is R = I, t = 0. The current heading angle measured by the polarized light sensor is φ_k, so the change angle of the integrated navigation platform's pose relative to geographic north is Δφ = φ_k − φ_0. The polarized-light data collected over two adjacent frames form the set P_i = {p_1 … p_ni};

Step 5: between two adjacent frames, the displacement vectors s_1 … s_n of all feature points of the navigation system from time t_{k−1} to t_k are obtained, and the optical flow vector m over the same interval is obtained by the optical flow sensor. A threshold A is set; if the differences in magnitude and direction between s_j and m are both within A, the feature point is considered correctly matched, and otherwise it is regarded as a mismatched point or a moving object and removed;

Step 6: BA optimization. Each frame of the camera motion recovers a camera pose C_1 … C_m, and the landmarks observed by the camera are X_1 … X_n. A point X_j in the earth coordinate system is converted to the camera coordinate system and projected onto the image as h_ij = (f_x·X_ij/Z_ij + c_x, f_y·Y_ij/Z_ij + c_y)^T, giving the function h_ij = h(C_i, X_j)^T. Taking the set P_i = {p_1 … p_ni} of all polarized-light data between two adjacent frames (C_i, C_{i+1}) and inversely solving C_i gives the coupling between the heading angle and the polarized-light data, yielding the function f_i = f(C_i, P_i)^T. The BA optimization function is then solved:

min Σ_i Σ_j ‖z_ij − h(C_i, X_j)‖²_{E_ij} + Σ_i ‖φ_i − f(C_i, P_i)‖²_{Γ_ij}

where z_ij is the observed pixel coordinate of landmark X_j in frame i and φ_i is the heading measured from the polarized light; E_ij and Γ_ij are the covariance matrix of the motion equation and the covariance of the polarized-light data, respectively. When the BA optimization function is minimal, the optimized parameters are obtained by solving;

Step 7: if after BA optimization too few feature points are matched and the binocular vision sensor data are lost, the optical-flow velocity model is used for short-term positioning and navigation until enough feature points are matched again and the binocular vision model is restored; if enough feature points are matched and the binocular vision sensor data are not lost, step 8 is executed directly;

Step 8: substituting a pair of matching points into the formula s·x_1 = [R|t]·p yields:

s·[u_1, v_1, 1]^T = [[t_1, t_2, t_3, t_4], [t_5, t_6, t_7, t_8], [t_9, t_10, t_11, t_12]]·[X, Y, Z, 1]^T

Eliminating s with the third row gives the constraint equations:

u_1 = t_1^T·p / t_3^T·p,  v_1 = t_2^T·p / t_3^T·p

where the row vectors of T are defined as:

t_1 = (t_1, t_2, t_3, t_4)^T

t_2 = (t_5, t_6, t_7, t_8)^T

t_3 = (t_9, t_10, t_11, t_12)^T

The same representation for the other matching points yields the system of equations: for the k-th pair of matching points, p_k^T·t_1 − u_k·p_k^T·t_3 = 0 and p_k^T·t_2 − v_k·p_k^T·t_3 = 0, which stack into the homogeneous linear system A·[t_1; t_2; t_3] = 0;

further obtaining: since t_1, t_2 and t_3 contain 12 unknowns in total, at least 6 pairs of matching points are required for a linear solution of the transformation matrix T;

and solving the corrected position, velocity and attitude information of the carrier from the optimized transformation matrix T to complete the navigation.

Technical Field

The invention belongs to the field of bionic integrated navigation, relates to an insect-imitating integrated navigation system and its positioning method, and particularly relates to the construction and positioning of a test platform for an integrated navigation system based on optical flow vectors, polarized light and binocular vision sensors.

Background

The current common navigation technologies mainly include inertial navigation, satellite navigation, astronomical navigation and geomagnetic navigation. Inertial navigation is autonomous and offers short-term high precision and freedom from interference, but errors accumulate in its integration process and grow with time, so it is unsuitable for long-distance, long-duration navigation. Satellite navigation is a non-autonomous mode; it is global and all-weather, but it is easily jammed and cannot work normally in streets with tall buildings or in dense jungle; GPS and BeiDou satellite navigation are in common use at present. Astronomical navigation computes the current heading and position of a carrier from known celestial positions; it is autonomous, but its biggest drawbacks are low precision and a low level of integration. Geomagnetic navigation measures and computes the attitude information of a carrier from the distribution of the geomagnetic field; it is autonomous and accumulates no error, but it is easily affected by the magnetic fields of surrounding magnetic materials. The prior patent CN103323005A classifies the disturbances in SINS, GPS, polarization sensors and the environment, models each type of disturbance, and cancels the modeled disturbances with a designed disturbance observer; by combining the disturbance-observer method with robust techniques for disturbance cancellation and suppression, it can improve the navigation precision of a carrier within the earth's atmosphere. The prior patent CN109916394A proposes an integrated navigation algorithm that fuses optical-flow position and velocity information, using an extended Kalman filter to fuse the position and velocity output of an optical flow sensor with data from a MEMS IMU, a magnetometer, a barometric altimeter and a laser ranging sensor to compute the position, velocity and attitude of the carrier; because it uses a magnetometer, it is easily affected by the magnetic fields of surrounding magnetic materials and its anti-interference capability is weak. The prior patent CN109470237A proposes an integrated attitude measurement method based on polarized light and geomagnetism, but it can only solve the attitude angles α, β and γ and has no positioning or navigation function. The prior patent CN108362288A, a polarized-light SLAM method based on the unscented Kalman filter, improves the stability and precision of a UAV SLAM system to some extent by exploiting the matched, complementary and externally undisturbed nature of polarized-light and lidar information, but it cannot further resolve the influence of mismatching on navigation.

Biologists have found that insects have good navigation ability. The compound eye of a bee is highly sensitive to changes of visual information over tiny time intervals, a property well suited to processing dynamic signals; when a bee moves through the environment, the motion of the image forms a continuously changing "flow" of light on its retina, and the bee navigates by analyzing this dynamic optical flow, from which much useful information can be extracted for navigation. In crickets, the microvilli in the rhabdoms of the ommatidia in the dorsal rim area (DRA) of the eye are regularly arranged, aligned axially and perpendicular radially; this structure gives the optic nerve cells high sensitivity to polarized light. Polarized light is a natural basic physical field carrying azimuth information. Crickets have three types of POL-neurons, with principal response directions of 10°, 60° and 130°; by integrating the output signals of these three types of POL-neurons, a cricket can obtain the angle between its body axis and the solar meridian, realizing a navigation function. Moreover, most insects have two eyes; by the principle of binocular ranging, observing a scene from different angles readily yields a perception of distance. Following this bionic navigation principle, the distance between a point in the actual scene and the camera is easily obtained through the triangular relation among the camera optical centers and the corresponding pixel points of the two pictures; the polarized light sensor sensitively outputs the polarization azimuth angle, whose error does not accumulate with time, giving strong autonomy and the yaw angle; and ranging and velocity measurement can thus be realized.

Traditional integrated navigation systems have weak anti-interference capability, and integrated systems without GPS or inertial navigation can only measure attitude and cannot perform positioning. In summary, a novel bionic integrated navigation method is proposed that combines a binocular vision sensor, an optical flow sensor and a polarized light sensor, providing a navigation mode with concealment, anti-interference capability and robustness unattainable by non-autonomous modes such as radio navigation and satellite navigation.

Disclosure of Invention

Aiming at the weak anti-interference capability of existing navigation systems, the invention provides a novel bionic integrated navigation method that combines a polarized light sensor, an optical flow sensor and a binocular vision sensor, fuses their data with Kalman filtering, and performs BA optimization at the back end.

The technical scheme of the invention is as follows:

an insect-imitated vision combined navigation method based on polarized light, optical flow vectors and binocular vision sensors comprises the following steps:

Step 1: based on the binocular vision sensor measurement, the pixel coordinate of a point P is [u, v]^T and the distance f (the focal length) from the physical imaging plane to the pinhole is known. Let the coordinate of P in the earth coordinate system be [X, Y, Z]^T and the coordinate of the corresponding point P′ on the physical imaging plane be [X′, Y′, Z′]^T. The pixel coordinates are scaled by a factor α on the u-axis and a factor β on the v-axis, with the origin shifted by [c_x, c_y]^T. From Z/f = −X/X′ = −Y/Y′ and, taking the symmetric virtual imaging plane to drop the negative sign, X′ = f·X/Z and Y′ = f·Y/Z, the relationship between the P′ coordinates and the pixel coordinates [u, v]^T is:

u = α·X′ + c_x = α·f·X/Z + c_x,  v = β·Y′ + c_y = β·f·Y/Z + c_y

Let f_x = αf and f_y = βf; the transformation between pixel coordinates and spatial coordinates is obtained:

u = f_x·X/Z + c_x,  v = f_y·Y/Z + c_y,  i.e.  Z·[u, v, 1]^T = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]·[X, Y, Z]^T

Step 2: the binocular camera consists of a left-eye camera and a right-eye camera placed horizontally. O_L and O_R are the optical centers of the left and right cameras, f is the focal length, b is the baseline, z is the scene depth, u_L and u_R are the imaging-plane coordinates, and P_L, P_R are the image points on the imaging planes of the left and right cameras. From the similar-triangle relationship:

(z − f)/z = (b − u_L + u_R)/b,  hence  z = f·b/d  with disparity  d = u_L − u_R.

Step 3: the relative motion of the camera between two frames of images I_1 and I_2 is described by R and t. Corresponding feature points are matched between the two frames. The spatial homogeneous coordinate of point P is p = [X, Y, Z, 1]^T, and the projection of the corresponding point in I_2 onto the pixel plane has coordinate x_1 = [u_1, v_1, 1]^T. The augmented matrix [R|t] is defined from the matching points, and from the correspondence between adjacent images the solving equation s·x_1 = [R|t]·p is obtained.

Step 4: the yaw angle and displacement obtained directly from the binocular sensor have large errors, so the heading angle is corrected with polarized light. The polarization angle obtained by the multi-direction polarized-light sensor detection unit is φ, from which the corresponding polarization vector e is obtained, with coordinates e = [cos(φ + K·π/2), sin(φ + K·π/2), 0]^T, where K takes ±1. At the initial moment of the integrated navigation system, the initial heading angle measured by the polarized light sensor is φ_0 and the initial pose is R = I, t = 0. The current heading angle measured by the polarized light sensor is φ_k, so the change angle of the integrated navigation platform's pose relative to geographic north is Δφ = φ_k − φ_0. The polarized-light data collected over two adjacent frames form the set P_i = {p_1 … p_ni}.

Step 5: between two adjacent frames, the displacement vectors s_1 … s_n of all feature points of the navigation system from time t_{k−1} to t_k are obtained, and the optical flow vector m over the same interval is obtained by the optical flow sensor. A threshold A is set; if the differences in magnitude and direction between s_j and m are both within A, the feature point is considered correctly matched, and otherwise it is regarded as a mismatched point or a moving object and removed.

Step 6: BA optimization. Each frame of the camera motion recovers a camera pose C_1 … C_m, and the landmarks observed by the camera are X_1 … X_n. A point X_j in the earth coordinate system is converted to the camera coordinate system and projected onto the image as h_ij = (f_x·X_ij/Z_ij + c_x, f_y·Y_ij/Z_ij + c_y)^T, giving the function h_ij = h(C_i, X_j)^T. Taking the set P_i = {p_1 … p_ni} of all polarized-light data between two adjacent frames (C_i, C_{i+1}) and inversely solving C_i gives the coupling between the heading angle and the polarized-light data, yielding the function f_i = f(C_i, P_i)^T. The BA optimization function is then solved:

min Σ_i Σ_j ‖z_ij − h(C_i, X_j)‖²_{E_ij} + Σ_i ‖φ_i − f(C_i, P_i)‖²_{Γ_ij}

where z_ij is the observed pixel coordinate of landmark X_j in frame i and φ_i is the heading measured from the polarized light; E_ij and Γ_ij are the covariance matrix of the motion equation and the covariance of the polarized-light data, respectively. When the BA optimization function is minimal, the optimized parameters are obtained by solving.

Step 7: if after BA optimization too few feature points are matched and the binocular vision sensor data are lost, the optical-flow velocity model is used for short-term positioning and navigation until enough feature points are matched again and the binocular vision model is restored; if enough matching points remain and the binocular vision sensor data are not lost, step 8 is executed directly.

Step 8: substituting a pair of matching points into the formula s·x_1 = [R|t]·p yields:

s·[u_1, v_1, 1]^T = [[t_1, t_2, t_3, t_4], [t_5, t_6, t_7, t_8], [t_9, t_10, t_11, t_12]]·[X, Y, Z, 1]^T

Eliminating s with the third row, the constraint equations can be derived:

u_1 = t_1^T·p / t_3^T·p,  v_1 = t_2^T·p / t_3^T·p

where the row vectors of T are defined as:

t_1 = (t_1, t_2, t_3, t_4)^T

t_2 = (t_5, t_6, t_7, t_8)^T

t_3 = (t_9, t_10, t_11, t_12)^T

The same representation for the other matching points yields the system of equations: for the k-th pair of matching points, p_k^T·t_1 − u_k·p_k^T·t_3 = 0 and p_k^T·t_2 − v_k·p_k^T·t_3 = 0, which stack into the homogeneous linear system A·[t_1; t_2; t_3] = 0.

Further, since t_1, t_2 and t_3 contain 12 unknowns in total, at least 6 pairs of matching points are required for a linear solution of the transformation matrix T.

The corrected position, velocity and attitude information of the carrier is solved from the optimized transformation matrix T.

The invention has the following beneficial effects: it adopts polarized light, optical flow vectors and a binocular vision sensor to design a bionic navigation system that navigates and positions purely by vision. Without GPS or inertial navigation, it retains good navigation capability and strong anti-interference capability, can operate in complex environments, and constitutes a fully autonomous navigation mode with high precision and strong robustness.

Drawings

FIG. 1 is a schematic diagram of the system of the present invention;

FIG. 2 is a flow chart of the navigation computation of the method of the present invention;

FIG. 3 illustrates the principle of binocular distance measurement of the method of the present invention;

FIG. 4 is a schematic diagram of the algorithm flow of the method of the present invention;

FIG. 5 is an isometric view of the system of the present invention;

in the figure: 1 binocular vision sensor; 2 polarized light sensor.

Detailed Description

The following further describes a specific embodiment of the present invention with reference to the drawings and technical solutions.

Step 1: corner detection is performed with the FAST algorithm to select feature points.

(1) Select a pixel P on the image and denote its brightness by I_p.

(2) Set a threshold T.

(3) With pixel P as the center, select the 16 pixels on a circle of radius 3.

(4) If a sufficient number of the pixels on the circle have brightness greater than I_p + T or less than I_p − T, select P as a feature point, as sketched below.
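A minimal sketch of this brightness test (assuming a grayscale image as a NumPy array; the contiguity requirement of the full FAST detector is simplified to a count):

```python
import numpy as np

# Offsets of the 16 pixels on a Bresenham circle of radius 3 around a pixel.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, u, v, T, n_required=12):
    """Simplified FAST test at pixel (u, v).

    img: 2-D grayscale array; the caller must keep (u, v) at least 3 px
    inside the image border. T: brightness threshold.
    """
    Ip = float(img[v, u])
    brighter = sum(float(img[v + dv, u + du]) > Ip + T for du, dv in CIRCLE)
    darker = sum(float(img[v + dv, u + du]) < Ip - T for du, dv in CIRCLE)
    # The full FAST detector additionally requires the qualifying pixels to
    # be contiguous on the circle; counting them is a simplification.
    return brighter >= n_required or darker >= n_required
```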

Step 2: establish the camera coordinate system O-X-Y-Z. A spatial point P in the earth coordinate system is projected through the optical center O and falls on the physical imaging plane O′-X′-Y′ at the imaging point P′. The coordinate of P is [X, Y, Z]^T, the coordinate of P′ is [X′, Y′, Z′]^T, and the physical imaging plane is at distance f (the focal length) from the aperture. From the triangle similarity relationship:

Z/f = −X/X′ = −Y/Y′

Taking the symmetric virtual imaging plane to drop the negative sign, simplification gives:

X′ = f·X/Z,  Y′ = f·Y/Z

the pixel coordinates of P' are obtained in the pixel plane: [ u, v ]]T. The pixel coordinates are scaled by a factor of alpha on the u-axis, by a factor of beta on v, with the origin shifted by cx,cy]T. P' coordinate and pixel coordinate u, v]TThe relationship of (1) is:

let fx=αf,fy=βf:

Then:

u = f_x·X/Z + c_x,  v = f_y·Y/Z + c_y

Writing this in matrix form yields:

Z·[u, v, 1]^T = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]·[X, Y, Z]^T = K·[X, Y, Z]^T

The matrix formed by the intermediate quantities is called the intrinsic (internal reference) matrix of the camera and is denoted by K. The conversion between pixel coordinates and spatial coordinates thereby follows, as sketched below.
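A minimal sketch of this conversion (the intrinsic values below are illustrative placeholders, not calibration results from the patent):

```python
import numpy as np

# Illustrative intrinsics: f_x = alpha*f, f_y = beta*f, principal point (c_x, c_y).
K = np.array([[520.9, 0.0, 325.1],
              [0.0, 521.0, 249.7],
              [0.0, 0.0, 1.0]])

def project(K, P):
    """Camera-frame point P = [X, Y, Z] -> pixel [u, v], via Z[u,v,1]^T = K[X,Y,Z]^T."""
    P = np.asarray(P, dtype=float)
    uv1 = K @ P / P[2]
    return uv1[:2]

def back_project(K, u, v, z):
    """Pixel (u, v) at depth z -> camera-frame point [X, Y, Z]."""
    return z * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
```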

Step 3: from the imaging mechanism of the binocular camera,

(z − f)/z = (b − u_L + u_R)/b

which after rearrangement gives the depth

z = f·b/d,  with disparity  d = u_L − u_R.

The position of a point P under a world coordinate system can be obtained by pixel coordinates and depth information:

and 4, step 4: matching feature points, solving by using PnP, and using a camera to obtain two frame images I1、I2The relative motion relationship between the two is R and t, the camera center is O1、O2The two frames of images have a corresponding set of feature points p1、p2Two feature points should be the projection of a point in space in two imaging planes, according to p1Pixel coordinates of points and depth calculation P points' spatial homogeneous coordinates are P ═ X, Y, Z,1]TPoint P is at I2Corresponding point p2Projected onto the pixel plane at the coordinate x1=[u1,v1,1]TDefining an augmented matrix [ R | t ] according to the matching points]Obtaining a solving equation s.x ═ R | t according to the corresponding relation between the adjacent images]·p。

The above formula is developed to obtain:

Eliminating s with the third row, the constraint equations can be derived:

u_1 = t_1^T·p / t_3^T·p,  v_1 = t_2^T·p / t_3^T·p

where the row vectors of T are defined as:

t_1 = (t_1, t_2, t_3, t_4)^T

t_2 = (t_5, t_6, t_7, t_8)^T

t_3 = (t_9, t_10, t_11, t_12)^T

The same representation for the other matching points yields the system of equations: for the k-th pair of matching points, p_k^T·t_1 − u_k·p_k^T·t_3 = 0 and p_k^T·t_2 − v_k·p_k^T·t_3 = 0, which stack into the homogeneous linear system A·[t_1; t_2; t_3] = 0.

A linear solution of the matrix T is obtained from 6 pairs of matching points; when there are more than 6 pairs, the optimal value is found by BA optimization. A sketch of the linear solve follows.
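A compact sketch of this linear solve (SVD on the stacked constraints; the 2-D inputs are assumed to be the matched coordinates x_1 = [u_1, v_1], normalized by K beforehand if [R|t] itself rather than K[R|t] is wanted):

```python
import numpy as np

def solve_T_dlt(points_3d, points_2d):
    """Linearly solve the 3x4 matrix T in s*x = T*p from >= 6 matches."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        p = np.array([X, Y, Z, 1.0])                      # homogeneous point
        rows.append(np.hstack([p, np.zeros(4), -u * p]))  # u-constraint row
        rows.append(np.hstack([np.zeros(4), p, -v * p]))  # v-constraint row
    A = np.vstack(rows)                                   # (2N, 12) system
    # The null-space direction of A (smallest singular vector) stacks t1,t2,t3.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)                           # rows t1^T, t2^T, t3^T
```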

Step 5: polarized light assists in correcting the yaw-angle error. The polarization angle obtained by the polarized-light detection unit is φ; in the coordinate system established by the polarized-light detection unit, the corresponding polarization vector e has coordinates e = [cos(φ + K·π/2), sin(φ + K·π/2), 0]^T, where K may take ±1. The value of K is determined by the placement relation between the body-axis direction of the polarized light sensor and the true-north direction of the earth. At the initial moment of the integrated navigation system, the initial heading angle measured by the polarized light sensor is φ_0, and the initial pose is R = I, t = 0, where I is the identity matrix. Assuming that the heading angle measured by the polarized light sensor when the camera captures the current frame is φ_k, the change angle of the robot pose relative to geographic true north is:

Δφ = φ_k − φ_0

All polarized-light data over two adjacent frames (C_i, C_{i+1}) form the set P_i = {p_1 … p_ni}; P_i is the yaw-angle measurement. A sketch follows.
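A minimal sketch of this heading bookkeeping (angles in radians; the polarization-vector form with K = ±1 follows the reconstruction in the text and is an assumption, as is the angle-wrapping convenience):

```python
import numpy as np

def polarization_vector(phi, K=1):
    """Polarization vector for polarization angle phi; K in {+1, -1} resolves
    the 90-degree ambiguity (form assumed, see text)."""
    a = phi + K * np.pi / 2.0
    return np.array([np.cos(a), np.sin(a), 0.0])

def heading_change(phi_0, phi_k):
    """Pose change relative to geographic north: dphi = phi_k - phi_0, wrapped."""
    return (phi_k - phi_0 + np.pi) % (2.0 * np.pi) - np.pi
```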

Step 6: optical-flow-assisted correction. The optical flow yields the X- and Y-axis velocities V_x and V_y. Through the pose transformation T_co from the optical flow sensor to the camera coordinate system, the X- and Y-direction displacements obtained by the optical flow sensor give the X- and Y-direction displacements in the camera coordinate system. The relative pose between the current frame and the previous frame is obtained from the constant-velocity model; on this basis, the pose information obtained by the sensor is used to correct the constant-velocity model so that the initial pose has a better initial value. The relative pose obtained by the constant-velocity model is:

ΔT = T_{k−1}·T_{k−2}^{−1},  giving the predicted current pose  T_k = ΔT·T_{k−1}

an optical flow sensor for obtaining a measurement guide of a relative pose to obtain optical flow data Mi={m1…mni}. And 7: optical flow vector removing shadow of moving object on navigation systemIn the interval between two adjacent frames, note tk-1The feature point set of the key frame is collected at the moment(wherein) Move to tkThe feature point set of the key frame is collected at the moment(wherein) Can obtain tk-1To tkThe displacement vectors of all the feature points of the navigation system between moments areObtaining t by an optical flow sensork-1To tkThe temporal optical flow vector isSetting a threshold A, comparing the optical flow vector with the displacement vector, if at allAndif the feature points are not in the range, the feature points are regarded as mismatching points or moving objects to be removed.

Step 8: BA optimization to obtain the optimal solution. Each frame of the camera motion recovers a camera pose C_1 … C_m, and the landmarks observed by the camera are X_1 … X_n. C_i comprises the camera extrinsics R_i and t_i. A point X_j in the earth coordinate system is converted to the camera coordinate system as (X_ij, Y_ij, Z_ij) = R_i·(X_j − t_i) and projected onto the image as h_ij = (f_x·X_ij/Z_ij + c_x, f_y·Y_ij/Z_ij + c_y)^T, giving the function h_ij = h(C_i, X_j)^T. The set of all polarized-light data between two adjacent frames (C_i, C_{i+1}) is denoted P_i = {p_1 … p_ni}; inversely solving C_i gives the coupling between the heading angle and the polarized-light data, yielding the function f_i = f(C_i, P_i)^T. The BA optimization function is then solved:

min Σ_i Σ_j ‖z_ij − h(C_i, X_j)‖²_{E_ij} + Σ_i ‖φ_i − f(C_i, P_i)‖²_{Γ_ij}

where f(C_i, P_i) is the motion parameter after P_i acts on C_i, z_ij is the observed pixel coordinate of landmark X_j in frame i, φ_i is the heading measured from the polarized light, and E_ij and Γ_ij are the covariance matrix of the motion equation and the covariance of the polarized-light data, respectively. When the BA optimization function is minimal, the optimized parameters are obtained by solving.
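A schematic residual for this energy function (a sketch only: the information weights from E_ij and Γ_ij are folded into scalar weights w_repr and w_pol, heading is read off as the Z rotation, and a real system would use a sparse BA solver such as g2o or Ceres; all names here are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def ba_residuals(params, n_poses, n_points, observations, headings,
                 w_repr, w_pol, K):
    """Stack reprojection residuals z_ij - h(C_i, X_j) and heading residuals.

    params: flattened poses (n_poses x 6, rotation vector + translation)
    followed by landmarks (n_points x 3); observations: (i, j, u, v) tuples;
    headings: (i, phi_measured) tuples from the polarized light sensor.
    """
    poses = params[:n_poses * 6].reshape(n_poses, 6)
    points = params[n_poses * 6:].reshape(n_points, 3)
    res = []
    for i, j, u, v in observations:
        R = Rotation.from_rotvec(poses[i, :3]).as_matrix()
        t = poses[i, 3:]
        Xc = R @ (points[j] - t)                        # (X_ij, Y_ij, Z_ij)
        proj = np.array([K[0, 0], K[1, 1]]) * Xc[:2] / Xc[2] + K[:2, 2]
        res.extend(w_repr * (np.array([u, v]) - proj))  # z_ij - h(C_i, X_j)
    for i, phi in headings:
        yaw = Rotation.from_rotvec(poses[i, :3]).as_euler("zyx")[0]
        res.append(w_pol * (phi - yaw))                 # polarized-light term
    return np.asarray(res)

# Usage sketch (x0 is an initial stacked parameter vector):
# sol = least_squares(ba_residuals, x0,
#                     args=(m, n, obs, headings, 1.0, 1.0, K))
```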

Step 9: the optimal parameters obtained from BA optimization are used to solve the augmented matrix [R|t]; the matrix is then decomposed to obtain the pose R and t. The camera intrinsic matrix K is known, so the pose can be found by solving the augmented matrix: the homogeneous three-dimensional coordinate [X, Y, Z, 1]^T of a feature point is solved by the binocular camera, and with the matched pixel coordinate [u, v]^T in the second image,

the following is obtained:

s·[u, v, 1]^T = [R|t]·[X, Y, Z, 1]^T

According to the constraint equations:

u = t_1^T·p / t_3^T·p,  v = t_2^T·p / t_3^T·p

where the row vectors of T are defined as:

t_1 = (t_1, t_2, t_3, t_4)^T

t_2 = (t_5, t_6, t_7, t_8)^T

t_3 = (t_9, t_10, t_11, t_12)^T

Solving yields t_1, t_2 and t_3, from which R and t are obtained, expressed as an element of SE(3):

T = [[R, t], [0^T, 1]]

The position, velocity and attitude information of the carrier is obtained by inverse solution from the transformation matrix T, as sketched below.
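A small sketch of recovering the pose from T (the yaw extraction assumes heading is the rotation about the Z axis; velocity would follow from successive displacements divided by the frame interval):

```python
import numpy as np

def decompose_T(T):
    """Split a 4x4 SE(3) matrix T = [[R, t], [0, 1]] into R, t and yaw."""
    R, t = T[:3, :3], T[:3, 3]
    yaw = np.arctan2(R[1, 0], R[0, 0])   # heading about the Z axis
    return R, t, yaw
```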
