Multi-sensor fusion-based odometer method and device


Reading note: this technology, "Multi-sensor fusion-based odometer method and device" (一种基于多传感器融合的里程计方法及装置), was created by 刘光伟 and 赵季 on 2020-06-19. Its main content is as follows: the application provides a multi-sensor fusion-based odometer method and device, relating to the technical field of high-precision maps. The method is applied to a movable object carrying multiple sensors and comprises: acquiring, in real time, sensor data collected by the various sensors carried on the movable object; modeling the sensor data collected by the various sensors respectively, and establishing constraint relations on the pose of the movable object; and performing a joint optimization solution on the constraint relations of the pose of the movable object, and determining the pose result of the movable object. The embodiments of the application can realize real-time pose estimation of a movable object in scenes with sparse features and poor GPS signals, with accurate results and good robustness.

1. A multi-sensor fusion-based odometer method applied to a movable object carrying multiple sensors, the method comprising:

acquiring sensor data acquired by various sensors carried on a movable object in real time;

modeling sensor data acquired by various sensors respectively, and establishing a constraint relation of the pose of the movable object;

and carrying out joint optimization solution on the constraint relation of the pose of the movable object, and determining the pose result of the movable object.

2. The method of claim 1, wherein the plurality of sensors includes an Inertial Measurement Unit (IMU), a wheel speed meter, a lidar, and a barometer; wherein the IMU includes an accelerometer and a gyroscope.

3. The method of claim 2, wherein the obtaining sensor data collected by various sensors mounted on the movable object in real time comprises:

and acquiring triaxial acceleration data measured by an accelerometer, triaxial angular velocity data measured by a gyroscope, wheel speed data of a movable object measured by a wheel speed meter, point cloud data measured by a laser radar and height observation data measured by a barometer in real time.

4. The method of claim 3, wherein modeling the sensor data collected by the various sensors separately to establish a constrained relationship of the pose of the movable object comprises:

modeling is carried out according to triaxial acceleration data measured by an accelerometer, and roll angle constraint and pitch angle constraint of the movable object are established;

performing kinematic modeling by using an Ackermann model according to triaxial angular velocity data measured by a gyroscope and wheel speed data of a movable object measured by a wheel speed meter, and establishing Ackermann model constraint of the horizontal position and the yaw angle of the movable object;

modeling is carried out according to point cloud data measured by the laser radar, and laser radar pose constraints of the movable object are established;

modeling is performed according to altitude observation data measured by the barometer, and barometer constraint of the altitude position of the movable object is established.

5. The method of claim 4, wherein the jointly optimizing the constrained relationship to the pose of the movable object to determine the pose result of the movable object comprises:

and performing joint optimization solution on the roll angle constraint, the pitch angle constraint, the ackerman model constraint, the laser radar pose constraint and the barometer constraint by adopting a nonlinear optimization method, and determining a pose result of the movable object.

6. The method of claim 5, wherein modeling from the tri-axial acceleration data measured by the accelerometer to establish roll and pitch constraints for the movable object comprises:

modeling according to the triaxial acceleration data measured by the accelerometer, and determining a roll angle estimate θ_roll and a pitch angle estimate θ_pitch of the IMU in the world coordinate system; wherein a_x, a_y, a_z denote the triaxial acceleration data measured by the accelerometer;

establishing a roll angle constraint r_Roll(X) and a pitch angle constraint r_Pitch(X) of the movable object according to the roll angle estimate θ_roll and the pitch angle estimate θ_pitch; wherein r_Roll(X) = θ_roll − arcsin(−R_13); r_Pitch(X) = θ_pitch − arctan2(R_23, R_33); X denotes the pose of the IMU in the world coordinate system and is the state variable to be optimized, comprising a position p and an attitude q; R is the rotation matrix form of the attitude q in the state variable X to be optimized, and R_13, R_23, R_33 are the elements of the corresponding rows and columns of the rotation matrix R.

7. The method of claim 5, wherein performing kinematic modeling using an ackermann model based on the tri-axis angular velocity data measured by the gyroscope and the wheel speed data of the movable object measured by the wheel speed meter to establish ackermann model constraints for horizontal position and yaw angle of the movable object comprises:

determining, according to the triaxial angular velocity data measured by the gyroscope, the integrated angle between the advancing direction of the movable object and the y-axis in the world coordinate system: θ_i = Σ_{t=0}^{i} ω_t^z · Δt; wherein θ_i denotes the integrated angle between the advancing direction of the movable object at the i-th time and the y-axis; t denotes the t-th time; the previously obtained rotation transformation R_B^I from the vehicle body coordinate system to the IMU coordinate system is used to express the gyroscope measurement in the vehicle body frame; and ω_t^z is the yaw rate in the triaxial angular velocity data measured by the gyroscope at the t-th time;

determining the speed v_i of the rear axle center of the movable object in the vehicle body coordinate system according to the speed v_i^l of the left rear wheel and the speed v_i^r of the right rear wheel of the movable object at the i-th time, measured by the wheel speed meter in the vehicle body coordinate system: v_i = (v_i^l + v_i^r)/2 + n_v; wherein n_v is a previously known speed noise;

performing kinematic modeling by adopting the Ackermann model, and determining the pose transfer equations of the movable object in the world coordinate system:

x_{i+1} = x_i + v_i · Δt · sin θ_i

y_{i+1} = y_i + v_i · Δt · cos θ_i

wherein Δt is the time difference between two adjacent measurement times of the wheel speed meter, and x_i, y_i denote the horizontal position of the movable object in the world coordinate system;

according to the measuring frequency of the laser radar, x between the k-th time and the k + 1-th time of two adjacent laser radarsi、yi、θiIntegrating to determine x in world coordinate systemi、yi、θiRespective change delta xk(k+1)、δyk(k+1)、δθk(k+1)

determining the pose transformation T_I^B from the IMU coordinate system to the vehicle body coordinate system according to the extrinsic parameters between the vehicle body coordinate system and the IMU coordinate system, and determining the pose transformation ΔT_{k,k+1} of the IMU between the k-th time and the (k+1)-th time in the world coordinate system by transferring the body-frame increment built from (δx_{k,k+1}, δy_{k,k+1}, δθ_{k,k+1}) into the IMU frame through T_I^B;

establishing the Ackermann model constraint r_Akerman(X) of the movable object as the discrepancy between the relative IMU pose ΔT_{k,k+1} predicted by the Ackermann model and the relative pose implied by the states X_k and X_{k+1};

X denotes the pose of the IMU in the world coordinate system and is the state variable to be optimized; X_k and X_{k+1} denote the poses of the IMU at the k-th and (k+1)-th times in the world coordinate system.

8. The method of claim 5, wherein modeling from lidar measured point cloud data to establish lidar pose constraints for the movable object comprises:

performing motion compensation on each frame of point cloud data measured by the laser radar, and determining the position of each point in each frame of point cloud data after motion compensation;

extracting the characteristics of each frame of point cloud data after motion compensation, and dividing points in each frame of point cloud data into line characteristic points and plane characteristic points according to curvature information of the points in each frame of point cloud data;

superposing a preset number of frames of point cloud data before the current frame point cloud data according to the estimated poses, and determining a local line feature map and a local surface feature map corresponding to the current frame point cloud data;

obtaining the initial pose of the lidar of the current frame in the world coordinate system according to the extrinsic parameters between the lidar and the IMU: p_LiDAR = R_IMU · t_L^I + t_IMU and R_LiDAR = R_IMU · R_L^I;

wherein p_LiDAR is the initial position of the lidar at the current time in the world coordinate system; R_LiDAR is the initial attitude of the lidar at the current time in the world coordinate system; R_IMU, t_IMU denote the attitude and the position of the IMU at the current time in the world coordinate system; and R_L^I, t_L^I are the attitude transformation relation and the position transformation relation obtained in advance through extrinsic calibration between the lidar and the IMU;

searching a local line feature map according to a data index established for each point by adopting a KD-Tree algorithm in advance to obtain a plurality of near-neighbor points corresponding to each line feature point in the current frame point cloud data, and searching a local surface feature map to obtain a plurality of near-neighbor points corresponding to each plane feature point in the current frame point cloud data;

fitting a straight line to the several neighboring points corresponding to a line feature point x_l in the current frame point cloud data, and taking the distance function between x_l and the straight line as the line feature point error function;

the line feature point error function is: r_line = |(x_l − x_a) × (x_l − x_b)| / |x_a − x_b|; wherein x_a and x_b are any two points on the straight line;

fitting a plane Ax + By + Cz + D = 0 to the several neighboring points corresponding to a plane feature point x_p in the current frame point cloud data, and taking the distance function between x_p and the plane as the surface feature point error function; wherein A, B, C and D denote the parameters of the fitted plane;

the surface feature point error function is: r_plane = |n^T x_p + D| / ‖n‖; wherein n denotes the plane normal vector: n = (A, B, C);

establishing the lidar pose constraint r_LiDAR(X) of the movable object according to the line feature point error function and the surface feature point error function, by summing the errors over all line feature points and all plane feature points of the current frame;

wherein X denotes the pose of the IMU in the world coordinate system and is the state variable to be optimized; n_line denotes the number of line feature points in the current frame point cloud data, and n_plane denotes the number of plane feature points in the current frame point cloud data.

9. The method of claim 5, wherein modeling from altitude observations of barometer measurements, establishing barometer constraints on the altitude position of the movable object, comprises:

modeling according to the height observation Z_{k+1} at the current time measured by the barometer, the height observation Z_0 at the initial time measured in advance by the barometer, the height estimate ẑ_{k+1} of the IMU in the world coordinate system at the current time, and the previously measured height estimate ẑ_0 of the IMU in the world coordinate system at the initial time, and establishing the barometer constraint r_Altimeter(X) of the height position of the movable object as the discrepancy between the barometric height change (Z_{k+1} − Z_0) and the estimated height change (ẑ_{k+1} − ẑ_0);

wherein X denotes the pose of the IMU in the world coordinate system and is the state variable to be optimized; R_b^W, t_b^W denote the previously known rotation data and translation data from the barometer coordinate system at the current time to the world coordinate system.

10. The method of claim 5, wherein jointly optimizing the roll angle constraint, the pitch angle constraint, the ackermann model constraint, the lidar pose constraint, and the barometer constraint using a nonlinear optimization method to determine a pose result for the movable object comprises:

for the transverse roll angle constraint rRoll(X), pitch angle constraint rPitch(X), Ackerman model constraint rAkerman(X) pose constraint r of laser radarLiDAR(X) and barometer constraint rAltimeter(X) solving a nonlinear least square problem for the joint optimization cost function by adopting an optimization algorithm, and determining a pose result of the IMU of the movable object in a world coordinate system;

wherein, the joint optimization cost function is:

wherein the content of the first and second substances,respectively corresponding to each constraint item and preset information matrixes; x represents the pose of the IMU in a world coordinate system, and is a state variable to be optimized.

11. A multi-sensor fusion-based odometer device applied to a movable object on which a plurality of types of sensors are mounted, the device comprising:

the sensor data acquisition unit is used for acquiring sensor data acquired by various sensors carried on the movable object in real time;

the constraint relation establishing unit is used for respectively modeling sensor data acquired by various sensors and establishing a constraint relation of the pose of the movable object;

and the joint optimization unit is used for performing joint optimization solution on the constraint relation of the pose of the movable object and determining the pose result of the movable object.

12. A computer-readable storage medium comprising a program or instructions for implementing the multi-sensor fusion-based odometry method according to any one of claims 1 to 10, when said program or instructions are run on a computer.

13. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the multi-sensor fusion based odometry method according to any one of claims 1 to 10.

14. A computer server comprising a memory and one or more processors communicatively coupled to the memory; the memory has stored therein instructions executable by the one or more processors to cause the one or more processors to implement a multi-sensor fusion based odometry method as claimed in any one of claims 1 to 10.

Technical Field

The application relates to the technical field of high-precision maps, in particular to a multi-sensor fusion-based odometer method and device.

Background

At present, with the development of autonomous driving and intelligent robotics, how to ensure that autonomous vehicles and intelligent robots drive accurately has become a hot topic. Autonomous driving generally relies on high-precision maps. Unlike a traditional navigation map, a high-precision map contains a large amount of driving assistance information, the most important being an accurate three-dimensional representation of the road network, such as intersection layouts and road sign positions. A high-precision map also contains much semantic information: it can convey the meaning of the different colors of traffic lights, indicate the speed limit of a road, the position where a left-turn lane begins, and so on. One of the most important characteristics of high-precision maps is precision: they enable autonomous vehicles to reach centimeter-level accuracy, which is important to ensure the safety of autonomous vehicles.

In the fields of autonomous driving and robotics, the construction of high-precision maps generally relies on odometry. Conventional odometry techniques include visual odometry, visual-inertial odometry, laser odometry and laser-inertial odometry. In road scenes, visual features are sparse and vehicle speeds are high, so visual and visual-inertial odometry struggle to guarantee accurate and robust pose estimation; laser odometry and laser-inertial odometry are therefore the methods mainly used in the field of autonomous driving. When applying laser odometry and laser-inertial odometry in autonomous driving, the inventors found that ordinary road scenes usually contain markers such as lamp posts, guardrails, flower beds and trees, from which the lidar can establish relatively accurate geometric constraints through observation. However, in scenes with sparse features and poor GPS signals, such as tunnels, sea-crossing bridges, deserts and gobi, no similar markers exist: stable features are difficult to extract from the lidar observations, and precise geometric constraints cannot be constructed. In these scenes, the conventional laser odometry and laser-inertial odometry methods therefore degenerate, cannot perform accurate pose estimation, and cannot meet the requirements of high-precision map construction in autonomous driving.

Disclosure of Invention

The embodiment of the application provides a multi-sensor fusion-based odometer method and device, and the problem of inaccurate pose estimation in scenes with sparse features and poor GPS signals can be solved.

In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:

in a first aspect of the embodiments of the present application, there is provided a multi-sensor fusion-based odometer method applied to a movable object carrying multiple sensors, the method including:

acquiring sensor data acquired by various sensors carried on a movable object in real time;

modeling sensor data acquired by various sensors respectively, and establishing a constraint relation of the pose of the movable object;

and carrying out joint optimization solution on the constraint relation of the pose of the movable object, and determining the pose result of the movable object.

In addition, according to a second aspect of the embodiments of the present invention, there is provided a multi-sensor fusion-based odometer device applied to a movable object having a plurality of types of sensors mounted thereon, the device including:

the sensor data acquisition unit is used for acquiring sensor data acquired by various sensors carried on the movable object in real time;

the constraint relation establishing unit is used for respectively modeling sensor data acquired by various sensors and establishing a constraint relation of the pose of the movable object;

and the joint optimization unit is used for performing joint optimization solution on the constraint relation of the pose of the movable object and determining the pose result of the movable object.

In addition, according to a third aspect of embodiments of the present application, there is provided a computer-readable storage medium including a program or instructions for implementing the multi-sensor fusion-based odometry method according to the first aspect when the program or instructions are run on a computer.

In addition, in a fourth aspect of embodiments of the present application, there is provided a computer program product containing instructions that, when run on a computer, cause the computer to perform the multi-sensor fusion-based odometry method according to the first aspect.

Additionally, a fifth aspect of embodiments herein provides a computer server comprising a memory, and one or more processors communicatively coupled to the memory; the memory has stored therein instructions executable by the one or more processors to cause the one or more processors to implement a multi-sensor fusion based odometry method as described in the first aspect above.

According to the multi-sensor fusion-based odometer method and device, sensor data collected by the various sensors carried on a movable object are obtained in real time; the sensor data collected by the various sensors are then modeled respectively to establish constraint relations on the pose of the movable object, so that the constraint relations can be solved by joint optimization to determine the pose result of the movable object. With the method and the device, real-time pose estimation of a movable object can be realized in scenes with sparse features and poor GPS signals, with accurate results and good robustness.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.

Fig. 1 is a first flowchart of a multi-sensor fusion-based odometry method provided in an embodiment of the present application;

fig. 2 is a second flowchart of a multi-sensor fusion-based odometry method provided in the embodiment of the present application;

fig. 3 is a schematic diagram of a tunnel scenario in an embodiment of the present application;

FIG. 4 is a graph illustrating a comparison of results obtained from a prior art method used in a tunnel scenario in an embodiment of the present application with a multi-sensor fusion-based odometry method provided in an embodiment of the present application;

fig. 5 is a schematic structural diagram of an odometer device based on multi-sensor fusion according to an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings are used to distinguish similar elements and not necessarily to describe a particular sequence or chronological order. It should be understood that data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.

In order to make the present application better understood by those skilled in the art, some technical terms appearing in the embodiments of the present application are explained below:

A movable object: an object capable of carrying out map acquisition, such as a vehicle, a mobile robot or an aircraft; various sensors, such as a lidar and a camera, can be carried on the movable object.

ICP: the Iterative Closest Point algorithm, mainly used in computer vision for accurate registration of depth images; accurate registration is achieved by iteratively minimizing the distance between corresponding points of the source data and the target data. Many variants exist, and how to achieve good registration efficiently and robustly remains a main research hotspot.

GNSS: global Navigation Satellite System, Global Navigation Satellite System.

GPS: global Positioning System, Global Positioning System.

IMU: Inertial Measurement Unit, a device for measuring the three-axis attitude angles (or angular velocities) and the acceleration of an object.

High-precision maps: unlike a traditional navigation map, a high-precision map contains a large amount of driving assistance information, the most important being an accurate three-dimensional representation of the road network, such as intersection layouts and road sign positions. A high-precision map also contains much semantic information: it can convey the meaning of the different colors of traffic lights, indicate the speed limit of a road, the position where a left-turn lane begins, and so on. One of its most important characteristics is precision: it enables a vehicle to reach centimeter-level accuracy, which is important to ensure the safety of an autonomous vehicle.

Mapping (Mapping): constructing a high-precision map describing the current scene from the estimated real-time pose of the vehicle or mobile robot and the data acquired by sensors such as the lidar.

Pose (Pose): the collective term for position and orientation, comprising 6 degrees of freedom: 3 for position and 3 for orientation. The 3 orientation degrees of freedom are typically expressed as Pitch, Roll and Yaw.

Frame (Frame): one complete observation by a sensor; for example, one frame of camera data is an image, and one frame of lidar data is a set of laser point clouds.

Sub-map (Submap): the global map is composed of a plurality of sub-maps, and each sub-map comprises observation results of continuous multiple frames.

Registration (Registration): and matching the observation results of the same area at different moments and different positions to obtain the relative pose relationship between the two observation moments.

NDT: the Normal distribution Transform, a Normal distribution transformation algorithm, is a registration algorithm that is applied to a statistical model of three-dimensional points, using standard optimization techniques to determine the optimal match between two point clouds.

NovAtel: in the field of precision Global Navigation Satellite Systems (GNSS) and its subsystems, leading suppliers of products and technologies are in the position. The embodiment of the present application shows a NovAtel integrated navigation system.

LOAM: LiDAR Odometry and Mapping, laser odometry and mapping.

KD-tree: a K-Dimensional tree is a data structure for partitioning a K-Dimensional data space. The method is mainly applied to searching of multidimensional space key data (such as range searching and nearest neighbor searching).

SVD: singular Value Decomposition, is an important matrix Decomposition in linear algebra.

LIO-Mapping: lidar Inertial Odometry and Mapping, Lidar Inertial range measurement and Mapping.

Odometry (Odometer): a method of estimating the pose of a movable object using data obtained from the sensors carried on the object.

In some embodiments of the present application, the term "vehicle" is to be broadly interpreted to include any moving object, including, for example, an aircraft, a watercraft, a spacecraft, an automobile, a truck, a van, a semi-trailer, a motorcycle, a golf cart, an off-road vehicle, a warehouse transport vehicle or a farm vehicle, and a vehicle traveling on a track, such as a tram or train, and other rail vehicles. The "vehicle" in the present application may generally include: power systems, sensor systems, control systems, peripheral devices, and computer systems. In other embodiments, the vehicle may include more, fewer, or different systems.

The power system is the system that provides powered motion for the vehicle and includes: an engine/motor, a transmission, wheels/tires and a power unit.

The control system may comprise a combination of devices controlling the vehicle and its components, such as a steering unit, a throttle, a brake unit.

The peripheral devices may be devices that allow the vehicle to interact with external sensors, other vehicles, external computing devices, and/or users, such as wireless communication systems, touch screens, microphones, and/or speakers.

Based on the vehicle described above, the sensor system and the automatic driving control device are also provided in the automatic driving vehicle.

The sensor system may include a plurality of sensors for sensing information about the environment in which the vehicle is located, and one or more actuators for changing the position and/or orientation of the sensors. The sensor system may include any combination of sensors such as global positioning system sensors, inertial measurement units, radio detection and ranging (RADAR) units, cameras, laser rangefinders, light detection and ranging (LIDAR) units, and/or acoustic sensors; the sensor system may also include sensors that monitor the vehicle's internal systems (e.g., O2 monitors, fuel gauges, engine thermometers, etc.).

The autopilot control apparatus may include a processor and a memory, the memory having stored therein at least one machine-executable instruction, the processor executing the at least one machine-executable instruction to provide functions including a map engine, a positioning module, a perception module, a navigation or path module, and an automatic driving control module. The map engine and the positioning module provide map information and positioning information. The perception module senses things in the environment of the vehicle according to the information acquired by the sensor system and the map information provided by the map engine. The navigation or path module plans a driving path for the vehicle according to the processing results of the map engine, the positioning module and the perception module. The automatic driving control module receives and analyzes the decision information of modules such as the navigation or path module, converts it into control commands for the vehicle control system, and sends the control commands to the corresponding components of the vehicle control system through the vehicle-mounted network (for example, the vehicle's internal electronic network implemented with a CAN bus, local interconnect network, Media Oriented Systems Transport, and the like), thereby realizing automatic control of the vehicle; the automatic driving control module can also obtain information about the vehicle's components through the vehicle-mounted network.

At present, in scenes with sparse features and poor GPS signals (such as tunnels, sea-crossing bridges, deserts and gobi), the laser odometry and laser-inertial odometry methods commonly used in the field of autonomous driving generally degenerate and cannot perform accurate pose estimation; over time the pose estimate may even be lost entirely, which cannot meet the requirements of high-precision map construction in autonomous driving.

The embodiment of the application aims to provide a multi-sensor fusion-based odometry method and a multi-sensor fusion-based odometry device, so as to solve the problem that in the prior art, a laser odometry method and a laser inertia odometry method which are commonly used for automatic driving cannot accurately estimate the pose in scenes with sparse features and poor GPS signals.

As shown in fig. 1, an embodiment of the present application provides a multi-sensor fusion-based odometer method, which is applied to a movable object carrying multiple sensors, and includes:

step 101, acquiring sensor data acquired by various sensors carried on a movable object in real time.

And 102, modeling sensor data acquired by various sensors respectively, and establishing a constraint relation of the pose of the movable object.

And 103, carrying out joint optimization solution on the constraint relation of the pose of the movable object, and determining the pose result of the movable object.

In order to help those skilled in the art better understand the present application, the following description is given with reference to a specific embodiment. As shown in fig. 2, an embodiment of the present application provides a multi-sensor fusion-based odometer method applied to a movable object carrying multiple sensors, which may include an inertial measurement unit (IMU), a wheel speed meter, a lidar and a barometer; the IMU includes an accelerometer and a gyroscope.

The method comprises the following steps:

step 201, obtaining triaxial acceleration data measured by an accelerometer, triaxial angular velocity data measured by a gyroscope, wheel speed data of a movable object measured by a wheel speed meter, point cloud data measured by a laser radar and height observation data measured by a barometer in real time.

After step 201, steps 202 to 205 are continued.

Step 202, modeling is carried out according to triaxial acceleration data measured by the accelerometer, and roll angle constraint and pitch angle constraint of the movable object are established.

The accelerometer in the IMU can measure three-axis acceleration data under an IMU coordinate system in real time, the measured three-axis acceleration data generally consists of two parts, namely gravity acceleration and self acceleration of the movable object, but the self acceleration of the movable object is usually far less than the gravity acceleration, so the influence of the self acceleration of the movable object can be ignored.

Specifically, step 202 here can be implemented as follows:

modeling is performed according to triaxial acceleration data measured by the accelerometer.

The established mathematical model has the following relationship:

(a_x, a_y, a_z)^T = (R_I^W)^T · g + a_r

In the above mathematical model, a_x, a_y, a_z denote the triaxial acceleration data measured by the accelerometer; R_I^W is the rotation matrix from the IMU coordinate system to the world coordinate system; g denotes the normalized gravitational acceleration vector; and a_r denotes the vehicle body acceleration.

By simplifying the mathematical model (neglecting a_r), the roll angle estimate θ_roll and the pitch angle estimate θ_pitch of the IMU in the world coordinate system can be determined from a_x, a_y, a_z, the triaxial acceleration data measured by the accelerometer.

In order to reduce the degrees of freedom of the joint optimization in subsequent steps and avoid rapid degeneration of the odometry method due to feature sparsity in scenes such as tunnels and sea-crossing bridges, the application proposes adding the roll angle estimate θ_roll and the pitch angle estimate θ_pitch to the subsequent joint optimization as fixed constraints. In addition, since the attitude state variable is represented by a quaternion in the joint optimization, the quaternion is first converted into a rotation matrix and then into Euler-angle form. The roll angle constraint r_Roll(X) and the pitch angle constraint r_Pitch(X) of the movable object are therefore established from θ_roll and θ_pitch as r_Roll(X) = θ_roll − arcsin(−R_13) and r_Pitch(X) = θ_pitch − arctan2(R_23, R_33); wherein X denotes the pose of the IMU in the world coordinate system and is the state variable to be optimized, comprising a position p and an attitude q; R is the rotation matrix form of the attitude q in the state variable X to be optimized, and R_13, R_23, R_33 are the elements of the corresponding rows and columns of the rotation matrix R.
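
For illustration, the following Python sketch evaluates these two constraints for a single accelerometer sample. The gravity-alignment formulas for θ_roll and θ_pitch are a standard reconstruction assumed here (the patent's own expressions were not reproduced in the source text), and all function names are illustrative.

```python
# Minimal sketch of the roll/pitch constraints of step 202.
# Assumption: theta_roll / theta_pitch follow the standard
# gravity-alignment formulas, neglecting the vehicle's own acceleration.
import numpy as np
from scipy.spatial.transform import Rotation

def roll_pitch_estimates(a):
    """a = [ax, ay, az], one accelerometer sample in the IMU frame."""
    ax, ay, az = a
    theta_roll = np.arctan2(ay, az)
    theta_pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    return theta_roll, theta_pitch

def roll_pitch_residuals(q_wxyz, a):
    """r_Roll(X) and r_Pitch(X): accelerometer estimates vs. the Euler
    angles recovered from the attitude q of the state variable X."""
    theta_roll, theta_pitch = roll_pitch_estimates(a)
    R = Rotation.from_quat(np.roll(q_wxyz, -1)).as_matrix()  # scipy wants (x, y, z, w)
    r_roll = theta_roll - np.arcsin(-R[0, 2])                # uses R13
    r_pitch = theta_pitch - np.arctan2(R[1, 2], R[2, 2])     # uses R23, R33
    return r_roll, r_pitch
```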

And step 203, performing kinematic modeling by using an Ackerman model according to the triaxial angular velocity data measured by the gyroscope and the wheel speed data of the movable object measured by the wheel speed meter, and establishing Ackerman model constraint of the horizontal position and the yaw angle of the movable object.

Specifically, step 203 here can be implemented as follows:

the application can be used for carrying out the kinematic modeling of the movable object based on the Ackerman model. For the convenience of calculation, in the ackermann kinematic model, a vehicle body coordinate system is generally established with the center of the rear axis of the movable object (for example, the rear axis of the vehicle) as the origin.

In general, the default inputs of the Ackermann kinematic model are the speed of the movable object and the steering wheel angle. In practical applications, however, the inventors found that the accuracy of the steering wheel angle is generally difficult to guarantee. To improve the accuracy and robustness of the overall odometry method, the application therefore replaces the steering wheel angle with the integrated angle between the advancing direction of the movable object and the y-axis in the world coordinate system. This integrated angle is determined from the triaxial angular velocity data measured by the gyroscope: θ_i = Σ_{t=0}^{i} ω_t^z · Δt, wherein θ_i denotes the integrated angle between the advancing direction of the movable object at the i-th time and the y-axis; t denotes the t-th time; the previously obtained rotation transformation R_B^I from the vehicle body coordinate system to the IMU coordinate system is used to express the gyroscope measurement in the vehicle body frame; and ω_t^z is the yaw rate in the triaxial angular velocity data measured by the gyroscope at the t-th time.

Then, in the Ackermann kinematic model, the speed v_i of the rear axle center of the movable object in the vehicle body coordinate system can be determined from the speed v_i^l of the left rear wheel and the speed v_i^r of the right rear wheel at the i-th time, measured by the wheel speed meter in the vehicle body coordinate system: v_i = (v_i^l + v_i^r)/2 + n_v, wherein n_v is a previously known speed noise.

Then, the pose transfer equations of the movable object in the world coordinate system can be determined by the kinematic modeling of the Ackermann model:

x_{i+1} = x_i + v_i · Δt · sin θ_i

y_{i+1} = y_i + v_i · Δt · cos θ_i

wherein Δt is the time difference between two adjacent measurement times of the wheel speed meter, and x_i, y_i denote the horizontal position of the movable object in the world coordinate system.

Since the measurement frequency of the IMU and the wheel speed meter is usually higher than that of the lidar, x_i, y_i, θ_i can be integrated between the k-th time and the (k+1)-th time of two adjacent lidar frames according to the measurement frequency of the lidar, determining their respective changes δx_{k,k+1}, δy_{k,k+1}, δθ_{k,k+1} in the world coordinate system.

Then, the pose transformation T_I^B from the IMU coordinate system to the vehicle body coordinate system can be determined according to the extrinsic parameters between the vehicle body coordinate system and the IMU coordinate system, and the pose transformation ΔT_{k,k+1} of the IMU between the k-th time and the (k+1)-th time in the world coordinate system is obtained by conjugating the body-frame increment built from (δx_{k,k+1}, δy_{k,k+1}, δθ_{k,k+1}) with the extrinsic transform T_I^B.

Thus, the Ackermann model constraint r_Akerman(X) of the movable object can be established as the discrepancy between the relative IMU pose ΔT_{k,k+1} predicted by the Ackermann model and the relative pose implied by the states X_k and X_{k+1}; wherein X denotes the pose of the IMU in the world coordinate system and is the state variable to be optimized, and X_k, X_{k+1} denote the poses of the IMU at the k-th and (k+1)-th times in the world coordinate system.
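
For illustration, the dead-reckoning part of this constraint can be sketched as follows; this is a minimal rendering of the pose transfer equations above under assumed inputs, with all names illustrative rather than taken from the patent.

```python
# Illustrative dead reckoning for the Ackermann constraint (step 203):
# integrate gyro yaw rate and rear-wheel speeds into a planar pose
# increment (delta_x, delta_y, delta_theta) between two lidar frames.
import numpy as np

def ackermann_increment(yaw_rates, v_left, v_right, dt, theta0=0.0):
    """yaw_rates, v_left, v_right: per-tick samples between the k-th
    and (k+1)-th lidar frames; dt: wheel-odometer sample period [s]."""
    x = y = 0.0
    theta = theta0
    for w, vl, vr in zip(yaw_rates, v_left, v_right):
        v = 0.5 * (vl + vr)           # rear-axle-center speed v_i
        x += v * dt * np.sin(theta)   # x_{i+1} = x_i + v_i * dt * sin(theta_i)
        y += v * dt * np.cos(theta)   # y_{i+1} = y_i + v_i * dt * cos(theta_i)
        theta += w * dt               # heading integral theta_i
    return x, y, theta - theta0
```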

And 204, modeling according to the point cloud data measured by the laser radar, and establishing laser radar pose constraint of the movable object.

Here, the step 204 may be implemented as follows, for example, including the following steps:

step 2041, performing motion compensation on each frame of point cloud data measured by the laser radar, and determining the position of each point in each frame of point cloud data after motion compensation.

Motion compensation is required because the lidar generally has a mechanical structure and needs a certain time (usually 0.1 s or 0.05 s) to complete one frame of scanning. Because the movable object (such as a vehicle) moves at high speed during this time, the acquired raw lidar data are affected by the motion and the measured values deviate from the true values. To reduce the influence of this motion, the pose transformation of the IMU in the world coordinate system estimated from the Ackermann model can be used to motion-compensate the raw lidar measurements. Since the time interval between two scans is very short, the motion between two frames can be assumed to be linear; the pose of each point acquired within one frame relative to the frame start time can then be obtained by timestamp interpolation, so that all points acquired by the lidar within one frame are transformed to the frame start time, determining the motion-compensated position of each point.
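
Under the linear-motion assumption described above, de-skewing can be sketched as follows; the slerp/lerp pose interpolation and all names are implementation assumptions, not quoted from the patent.

```python
# Sketch of per-point motion compensation (step 2041): interpolate the
# sensor pose at each point's timestamp, then express the point in the
# frame-start pose. Assumes linear motion between frame start and end.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew(points, stamps, pose_start, pose_end, t0, t1):
    """points: (N, 3); stamps: (N,) with values in [t0, t1];
    pose_*: (R, t) world poses of the sensor at t0 and t1."""
    R0, p0 = pose_start
    R1, p1 = pose_end
    slerp = Slerp([t0, t1], Rotation.from_matrix([R0, R1]))
    out = np.empty_like(points, dtype=float)
    for i, (pt, s) in enumerate(zip(points, stamps)):
        a = (s - t0) / (t1 - t0)
        R_s = slerp([s]).as_matrix()[0]   # rotation at the point's time
        p_s = (1.0 - a) * p0 + a * p1     # translation, linearly interpolated
        p_world = R_s @ pt + p_s          # point expressed in the world frame
        out[i] = R0.T @ (p_world - p0)    # back into the frame-start pose
    return out
```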

Step 2042, extracting the features of each frame of point cloud data after motion compensation, and dividing the points in each frame of point cloud data into line feature points and plane feature points according to the curvature information of the points in each frame of point cloud data.

This step 2042 may be specifically implemented as follows:

and obtaining any point on a wire harness and a plurality of points in a preset range of any point on the wire harness from the frame of point cloud data after motion compensation. Here, since the laser points measured by the laser radar are arranged according to the beam, a plurality of points within the preset range can be found for each laser point according to the beam, such as a plurality of laser points on the left and right sides of the beam (for example, 5 laser points are respectively located on the left and right sides, but not limited thereto).

The curvature at the point is then determined from its coordinates in the lidar coordinate system and the coordinates of the several points within the preset range on the same scan line. For example, the curvature can be computed with the following formula: c = ‖ Σ_{j∈S, j≠i} (x_{k,i}^L − x_{k,j}^L) ‖ / (|S| · ‖x_{k,i}^L‖), wherein c denotes the curvature at the point; x_{k,i}^L and x_{k,j}^L denote the coordinates of the i-th and j-th points on the k-th scan line of the current frame in the lidar coordinate system; S is the point set consisting of the several points on both sides of the i-th point; and |S| denotes the number of points in the set.

According to a preset curvature threshold, a point whose curvature is greater than the threshold is taken as a line feature point, and a point whose curvature is smaller than the threshold is taken as a plane feature point.
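
A sketch of this split for one scan line, following the curvature formula above; the neighborhood size and threshold value are illustrative assumptions.

```python
# Curvature-based feature split of step 2042 for one scan line (beam).
# half_window and c_thresh are illustrative values, not from the patent.
import numpy as np

def classify_scan_line(pts, half_window=5, c_thresh=0.1):
    """pts: (N, 3) points of one beam in scan order.
    Returns boolean masks (is_line_point, is_plane_point)."""
    n = len(pts)
    c = np.full(n, np.nan)
    for i in range(half_window, n - half_window):
        nbrs = np.r_[pts[i - half_window:i], pts[i + 1:i + 1 + half_window]]
        diff = (pts[i] - nbrs).sum(axis=0)   # sum_j (x_i - x_j) over the set S
        c[i] = np.linalg.norm(diff) / (len(nbrs) * np.linalg.norm(pts[i]))
    valid = ~np.isnan(c)
    return valid & (c > c_thresh), valid & (c <= c_thresh)
```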

Step 2043, superposing a preset number of frames of point cloud data before the current frame according to the estimated poses, and determining the local line feature map and the local surface feature map corresponding to the current frame point cloud data.

Specifically, the pose estimation proceeds incrementally, so the line feature points, surface feature points and corresponding poses of every frame of point cloud before the current frame are known. Therefore, a preset number of frames of point cloud data before the current frame (such as 15 frames) can be superposed according to the estimated poses, obtaining the corresponding local line feature map (composed of line feature points) and local surface feature map (composed of plane feature points).

Step 2044, obtaining the initial pose of the lidar of the current frame in the world coordinate system according to the extrinsic parameters between the lidar and the IMU: p_LiDAR = R_IMU · t_L^I + t_IMU and R_LiDAR = R_IMU · R_L^I;

wherein p_LiDAR is the initial position of the lidar at the current time in the world coordinate system; R_LiDAR is the initial attitude of the lidar at the current time in the world coordinate system; R_IMU, t_IMU denote the attitude and the position of the IMU at the current time in the world coordinate system; and R_L^I, t_L^I are the attitude transformation relation and the position transformation relation obtained in advance through extrinsic calibration between the lidar and the IMU.

Step 2045, according to a data index previously built for each map point with a KD-Tree algorithm, searching the local line feature map to obtain several neighboring points corresponding to each line feature point in the current frame point cloud data, and searching the local surface feature map to obtain several neighboring points corresponding to each plane feature point in the current frame point cloud data.
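
The neighbor lookup might look as follows; the patent only specifies a KD-Tree index, so the choice of scipy's cKDTree and the placeholder arrays are assumptions.

```python
# KD-tree neighbor search of step 2045, sketched with scipy.
import numpy as np
from scipy.spatial import cKDTree

line_map = np.random.rand(1000, 3)      # placeholder local line feature map
frame_line_pts = np.random.rand(50, 3)  # placeholder line feature points

tree = cKDTree(line_map)                      # index built once per local map
dists, idx = tree.query(frame_line_pts, k=5)  # 5 nearest neighbors per point
neighbors = line_map[idx]                     # shape (50, 5, 3)
```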

Step 2046, fitting a straight line to the several neighboring points (for example, 5 points) corresponding to a line feature point x_l in the current frame point cloud data, and taking the distance function between x_l and the straight line as the line feature point error function;

the line feature point error function is: r_line = |(x_l − x_a) × (x_l − x_b)| / |x_a − x_b|; wherein x_a and x_b are any two points on the straight line.

Step 2047, fitting (for example, by SVD decomposition) a plane Ax + By + Cz + D = 0 to the several neighboring points (for example, 5 points) corresponding to a plane feature point x_p in the current frame point cloud data, and taking the distance function between x_p and the plane as the surface feature point error function.

Here A, B, C and D denote the parameters of the fitted plane.

The surface feature point error function is: r_plane = |n^T x_p + D| / ‖n‖; wherein n denotes the plane normal vector: n = (A, B, C).
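
Both error functions in compact form; fitting the plane by SVD follows the text above, while taking the line directly through two neighbor points is an assumed implementation choice.

```python
# Point-to-line and point-to-plane errors of steps 2046/2047.
import numpy as np

def line_point_error(x_l, x_a, x_b):
    """Distance from x_l to the line through x_a and x_b (r_line)."""
    return (np.linalg.norm(np.cross(x_l - x_a, x_l - x_b))
            / np.linalg.norm(x_a - x_b))

def fit_plane(neighbors):
    """Least-squares plane through the neighbor points via SVD.
    Returns (n, D) with unit normal n = (A, B, C)."""
    centroid = neighbors.mean(axis=0)
    _, _, vh = np.linalg.svd(neighbors - centroid)
    n = vh[-1]               # direction of smallest variance = plane normal
    return n, -n @ centroid  # D such that n . x + D = 0 on the plane

def plane_point_error(x_p, n, D):
    """|n^T x_p + D| / ||n||, the point-to-plane distance (r_plane)."""
    return abs(n @ x_p + D) / np.linalg.norm(n)
```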

Step 2048, establishing the lidar pose constraint r_LiDAR(X) of the movable object according to the line feature point error function and the surface feature point error function, by summing the errors over all line feature points and all plane feature points of the current frame.

Wherein X denotes the pose of the IMU in the world coordinate system and is the state variable to be optimized; n_line denotes the number of line feature points in the current frame point cloud data, and n_plane denotes the number of plane feature points in the current frame point cloud data.

Step 205, modeling is performed according to the altitude observation data measured by the barometer, and barometer constraint of the altitude position of the movable object is established.

Specifically, the barometer obtains the current altitude by measuring atmospheric pressure. Although factors such as sudden temperature changes and airflow disturbances affect the absolute accuracy of barometric height measurement, the relative accuracy of barometer observations is generally high. Low height estimation accuracy has always been a prominent problem of current mainstream odometry methods; therefore, in order to improve the estimation accuracy of the odometer in the height direction and reduce the accumulated system error, the embodiment of the application can adopt the following approach:

The modeling uses the height observation Z_{k+1} at the current time measured by the barometer, the height observation Z_0 at the initial time measured in advance by the barometer, the height estimate ẑ_{k+1} of the IMU in the world coordinate system at the current time, and the previously measured height estimate ẑ_0 of the IMU in the world coordinate system at the initial time, and establishes the barometer constraint r_Altimeter(X) of the height position of the movable object as the discrepancy between the barometric height change (Z_{k+1} − Z_0) and the estimated height change (ẑ_{k+1} − ẑ_0); wherein X denotes the pose of the IMU in the world coordinate system and is the state variable to be optimized; R_b^W, t_b^W denote the previously known rotation data and translation data from the barometer coordinate system at the current time to the world coordinate system.
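
As a sketch, the barometer residual compares relative height changes; for brevity the frame-alignment terms R_b^W, t_b^W are assumed to be already folded into the height estimates.

```python
# Sketch of the barometer residual r_Altimeter(X) of step 205: relative
# height change measured by the barometer vs. the change implied by the
# IMU height estimates in the world frame.
def barometer_residual(Z_k1, Z_0, z_hat_k1, z_hat_0):
    return (Z_k1 - Z_0) - (z_hat_k1 - z_hat_0)
```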

And step 206, performing joint optimization solution on the roll angle constraint, the pitch angle constraint, the ackermann model constraint, the laser radar pose constraint and the barometer constraint by adopting a nonlinear optimization method, and determining a pose result of the movable object.

In particular, the nonlinear least squares problem of the joint optimization cost function over the roll angle constraint r_Roll(X), the pitch angle constraint r_Pitch(X), the Ackermann model constraint r_Akerman(X), the lidar pose constraint r_LiDAR(X) and the barometer constraint r_Altimeter(X) can be solved here by adopting an optimization algorithm, determining the pose result of the IMU of the movable object in the world coordinate system (i.e., the maximum a posteriori estimate of the current state variable X to be optimized). The optimization algorithm may be the Gauss-Newton algorithm or the Levenberg-Marquardt algorithm (L-M algorithm), but is not limited thereto.

Wherein, the joint optimization cost function is:

min_X { r_Roll(X)^T Ω_Roll r_Roll(X) + r_Pitch(X)^T Ω_Pitch r_Pitch(X) + r_Akerman(X)^T Ω_Akerman r_Akerman(X) + r_LiDAR(X)^T Ω_LiDAR r_LiDAR(X) + r_Altimeter(X)^T Ω_Altimeter r_Altimeter(X) }

wherein Ω_Roll, Ω_Pitch, Ω_Akerman, Ω_LiDAR and Ω_Altimeter are the preset information matrices corresponding to the respective constraint terms.
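
One way to realize this solution step, sketched under assumptions: the residual functions are stacked with square-root information weights (Cholesky factors of the Ω matrices) and handed to scipy's Levenberg-Marquardt solver; the composition of the state vector x is left abstract.

```python
# Joint optimization of step 206 as a weighted nonlinear least-squares
# problem. Assumes each Omega_i = L_i @ L_i.T (Cholesky factor), so that
# ||L_i.T @ r_i||^2 == r_i.T @ Omega_i @ r_i.
import numpy as np
from scipy.optimize import least_squares

def joint_residual(x, residual_fns, sqrt_infos):
    """x: stacked state variables (positions p and attitudes q);
    residual_fns: callables x -> r_i(X); sqrt_infos: the factors L_i."""
    parts = [L.T @ np.atleast_1d(f(x)) for f, L in zip(residual_fns, sqrt_infos)]
    return np.concatenate(parts)

# Example call (fns and sqrt_infos built from the constraints above):
# result = least_squares(joint_residual, x0, args=(fns, sqrt_infos), method="lm")
```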

Therefore, the odometry method based on the fusion of multiple sensors (lidar, IMU, wheel speed meter and barometer) realized in steps 201 to 206 can obtain accurate relative poses between the frames acquired by the lidar, can meet the needs of real-time pose estimation in scenes with sparse features and poor GPS signals, such as tunnels and sea-crossing bridges, and its pose results have good accuracy and robustness.

In an embodiment of the present application, the inventor performs experimental verification on the multi-sensor fusion-based odometry method implemented in the present application, and the process is as follows:

In order to verify the accuracy and robustness of the multi-sensor fusion-based odometer method, the embodiment of the application used a data acquisition vehicle equipped with sensors including a lidar, an IMU, a wheel speed meter and a barometer to collect data in an extra-long tunnel for experimental verification. The tunnel is about 9.2 km long; as shown in fig. 3, scene features in the tunnel are sparse, and the walls on both sides are smooth planes.

After adopting the multi-sensor fusion-based odometry method of the embodiment of the present application in the scenario shown in fig. 3, a comparison experiment was performed on the same data against the most representative laser odometry method LOAM and laser-inertial odometry method LIO-Mapping in the prior art. The experimental results are shown in fig. 4, where the horizontal and vertical coordinates represent the position of the IMU pose in the world coordinate system, Ground-Truth denotes the true pose, and Sensor-Fusion-Odometry denotes the multi-sensor fusion-based odometry method of the embodiment of the present application. It can be seen that in this experimental scene both the LOAM and LIO-Mapping algorithms degenerate severely, cannot complete the whole run, and lose the pose of the IMU carried by the data acquisition vehicle in the world coordinate system, which completely fails the requirements of tunnel mapping. Under the same conditions, the multi-sensor fusion-based odometry method completes the whole run; although unavoidable accumulated error exists, the final pose estimation result obtains accurate relative poses between frames inside the tunnel, laying a foundation for subsequent tunnel mapping.

As shown in fig. 5, an embodiment of the present invention provides a multi-sensor fusion-based odometer device applied to a movable object having a plurality of sensors mounted thereon, including:

the sensor data obtaining unit 31 is configured to obtain sensor data collected by various sensors mounted on the movable object in real time.

And the constraint relation establishing unit 32 is used for respectively modeling the sensor data acquired by various sensors and establishing the constraint relation of the pose of the movable object.

And the joint optimization unit 33 is configured to perform joint optimization solution on the constraint relationship of the pose of the movable object, and determine a pose result of the movable object.

In addition, an embodiment of the present application further provides a computer-readable storage medium, which includes a program or instructions, and when the program or instructions are executed on a computer, the multi-sensor fusion-based odometry method described in fig. 1 and fig. 2 above is implemented. The specific implementation process is described in the above method embodiment, and is not described herein again.

In addition, the present application also provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the multi-sensor fusion-based odometry method described in fig. 1 and 2 above. The specific implementation process is described in the above method embodiment, and is not described herein again.

In addition, the embodiment of the application also provides a computer server, which comprises a memory and one or more processors which are connected with the memory in a communication way; the memory has stored therein instructions executable by the one or more processors to cause the one or more processors to implement the multi-sensor fusion based odometry method of fig. 1 and 2 described above. The specific implementation process is described in the above method embodiment, and is not described herein again.

According to the multi-sensor fusion-based odometer method and device, sensor data collected by the various sensors carried on a movable object are obtained in real time; the sensor data collected by the various sensors are then modeled respectively to establish constraint relations on the pose of the movable object, so that the constraint relations can be solved by joint optimization to determine the pose result of the movable object. With the method and the device, real-time pose estimation of a movable object can be realized in scenes with sparse features and poor GPS signals, with accurate results and good robustness.

As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

The principle and the implementation mode of the present application are explained by applying specific embodiments in the present application, and the description of the above embodiments is only used to help understanding the method and the core idea of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
