Fusion positioning method, device, equipment and computer readable storage medium

Document No.: 613832 · Publication date: 2021-05-07

Description: This disclosure, 一种融合定位方法、装置、设备和计算机可读存储介质 (Fusion positioning method, device, equipment and computer readable storage medium), was designed and created by 周阳, 张涛, 陈美文, 刘运航, and 何科君 on 2020-12-29. The application relates to the field of robots and provides a fusion positioning method, device, equipment, and computer-readable storage medium that combine landmark positioning with two-dimensional grid map positioning to realize accurate positioning of a robot. The method comprises: sampling an environmental object with a two-dimensional laser radar carried by the robot, and acquiring radar point cloud related data of the environmental object; performing proximity matching of the position P_i of the landmark L_i in the environmental object against the landmark map, and obtaining the position P_j of the landmark L_j nearest to the position P_i on the landmark map; performing scan matching on the two-dimensional grid map using the undistorted point cloud data of the environmental object, and acquiring the global pose of the robot in the two-dimensional grid map as the global pose observation G_pose of the robot; and iteratively optimizing the pose of the robot according to the position P_i of the landmark L_i on the landmark map, the position P_j of the landmark L_j on the landmark map, the estimated pose of the robot, and the global pose observation G_pose, until the error E_1 and the error E_2 are minimized.

1. A fusion localization method, comprising:

sampling an environmental object through a two-dimensional laser radar carried by a robot to acquire radar point cloud related data of the environmental object, wherein the radar point cloud related data comprise a position P_i of a landmark L_i in the environmental object on a landmark map and undistorted point cloud data of the environmental object;

performing proximity matching of the position P_i of the landmark L_i against a landmark map, and obtaining the position P_j of the landmark L_j nearest to the position P_i on the landmark map;

performing scan matching on a two-dimensional grid map using the undistorted point cloud data of the environmental object, and acquiring the global pose of the robot in the two-dimensional grid map as a global pose observation G_pose of the robot;

iteratively optimizing the pose of the robot according to the position P_i of the landmark L_i on the landmark map, the position P_j of the landmark L_j on the landmark map, the estimated pose of the robot, and the global pose observation G_pose, until an error E_1 and an error E_2 are minimized, wherein the error E_1 is the difference between the positions of the landmark L_i and the landmark L_j on the landmark map, and the error E_2 is the difference between the estimated pose of the robot and the global pose observation G_pose.

2. The fusion positioning method of claim 1, wherein sampling the environmental object through the robot-mounted two-dimensional laser radar to acquire the radar point cloud related data of the environmental object comprises:

sampling the environmental object through the two-dimensional laser radar to obtain original radar point cloud data of the environmental object;

based on the relative pose of each point in the original radar point cloud data, carrying out distortion removal on the original radar point cloud data of the environmental object to obtain distortion-removed point cloud data of the environmental object;

clustering the undistorted point cloud data of the landmark L_i in the environmental object to obtain a point cloud cluster of the landmark L_i;

solving the geometric center coordinate of the point cloud cluster in the robot body coordinate system, and taking the geometric center coordinate of the point cloud cluster in the robot body coordinate system as the position P_i of the landmark L_i on the landmark map.

3. The fusion localization method of claim 2, wherein the undistorting the original radar point cloud data of the environmental object based on the relative pose of each point in the original radar point cloud data to obtain the undistorted point cloud data of the environmental object comprises:

determining, after a scanning period ends, a new origin of the laser radar coordinate system in which the two-dimensional laser radar is located;

multiplying a relative pose ΔT by the coordinate P of each point in the original radar point cloud data in the laser radar coordinate system of the old origin, and taking the product ΔT·P as the coordinate of each point in the original radar point cloud data, wherein the relative pose ΔT is the robot pose of each point in the original radar point cloud data relative to the new origin.

4. The fusion localization method of claim 2, wherein after the de-distorting the raw radar point cloud data of the environmental object to obtain the de-distorted point cloud data of the environmental object, the method further comprises:

comparing the laser reflection intensity corresponding to each point in the distortion-removed point cloud data of the environment object with a preset light intensity threshold;

retaining the points in the undistorted point cloud data of the environmental object whose laser reflection intensity is higher than the preset light intensity threshold as the undistorted point cloud data of the landmark L_i in the environmental object.

5. The fusion positioning method as claimed in claim 1, wherein performing scan matching on a two-dimensional grid map using the undistorted point cloud data of the environmental object, and acquiring the global pose of the robot in the two-dimensional grid map as the global pose observation G_pose of the robot, comprises:

determining a plurality of candidate poses in a pose search space according to the pose of the robot at the moment before the current moment;

projecting the undistorted point cloud data of the environmental object to the two-dimensional grid map by using each candidate pose in the candidate poses, and calculating a matching score of each candidate pose on the two-dimensional grid map;

determining the candidate pose with the highest matching score of the candidate poses on the two-dimensional grid map as the global pose of the robot in the two-dimensional grid map.

6. The fusion localization method of claim 1, wherein iteratively optimizing the pose of the robot according to the position P_i of the landmark L_i on the landmark map, the position P_j of the landmark L_j on the landmark map, the estimated pose of the robot, and the global pose observation G_pose, until the error E_1 and the error E_2 are minimized, comprises:

constructing a nonlinear least squares problem with the pose of the robot as a state;

iteratively optimizing the pose of the robot with the error E_1 and the error E_2 as error constraints of the nonlinear least squares problem, until the error E_1 and the error E_2 are minimized.

7. The fusion localization method of claim 1, wherein iteratively optimizing the pose of the robot according to the position P_i of the landmark L_i on the landmark map, the position P_j of the landmark L_j on the landmark map, the estimated pose of the robot, and the global pose observation G_pose, until the error E_1 and the error E_2 are minimized, comprises:

constructing a particle filter model or an extended Kalman filter model based on the position P_i of the landmark L_i on the landmark map, the position P_j of the landmark L_j on the landmark map, the estimated pose of the robot, and the global pose observation G_pose;

continuously correcting the estimated pose of the robot with the particle filter model or the extended Kalman filter model, until the error E_1 and the error E_2 are minimized.

8. A fusion positioning apparatus, comprising:

a first acquisition module, configured to sample an environmental object through a two-dimensional laser radar carried by a robot to acquire radar point cloud related data of the environmental object, wherein the radar point cloud related data comprise the position P_i of a landmark L_i in the environmental object on a landmark map and the undistorted point cloud data of the environmental object;

a second acquisition module, configured to perform proximity matching of the position P_i of the landmark L_i against a landmark map, and obtain the position P_j of the landmark L_j nearest to the position P_i on the landmark map;

a third acquisition module, configured to perform scan matching on a two-dimensional grid map using the undistorted point cloud data of the environmental object, and acquire the global pose of the robot in the two-dimensional grid map as the global pose observation G_pose of the robot;

an iterative optimization module, configured to iteratively optimize the pose of the robot according to the position P_i of the landmark L_i on the landmark map, the position P_j of the landmark L_j on the landmark map, the estimated pose of the robot, and the global pose observation G_pose, until an error E_1 and an error E_2 are minimized, wherein the error E_1 is the difference between the positions of the landmark L_i and the landmark L_j on the landmark map, and the error E_2 is the difference between the estimated pose of the robot and the global pose observation G_pose.

9. An apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.

10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.

Technical Field

The present invention relates to the field of robots, and in particular, to a fusion positioning method, apparatus, device, and computer-readable storage medium.

Background

Mobile robots are applied in a variety of settings, and accurate positioning is key to a robot successfully completing its tasks. Generally, robot positioning is realized with a three-dimensional or two-dimensional laser radar; two-dimensional laser radar positioning mainly estimates the robot pose by matching laser radar point cloud data against a pre-built two-dimensional grid map. However, this positioning method suffers in practice, because real application scenes contain long corridors, open spaces, and other situations in which certain degrees of freedom exceed the observation range of the two-dimensional laser radar. In such cases the two-dimensional laser radar cannot fully constrain the three degrees of freedom of the robot's pose in the two-dimensional plane, so the pose of the robot is estimated wrongly.

To solve the above problems encountered when the robot is located in a long corridor or an open environment, one solution is to use artificial markers (landmarks) for auxiliary positioning, for example reflective plates or reflective pillars. However, auxiliary positioning that relies only on highly reflective landmarks such as reflective pillars or plates cannot be adapted to environments with strict restrictions on environmental modification.

Disclosure of Invention

The application provides a fusion positioning method, apparatus, device, and computer-readable storage medium, which combine landmark positioning with two-dimensional grid map positioning to realize accurate positioning of a robot.

In one aspect, the present application provides a fusion positioning method, including:

sampling an environmental object through a two-dimensional laser radar carried by a robot to acquire radar point cloud related data of the environmental object, wherein the radar point cloud related data comprise the position P_i of the landmark L_i in the environmental object on a landmark map and the undistorted point cloud data of the environmental object;

performing proximity matching of the position P_i of the landmark L_i against a landmark map, and obtaining the position P_j of the landmark L_j nearest to the position P_i on the landmark map;

performing scan matching on a two-dimensional grid map using the undistorted point cloud data of the environmental object, and acquiring the global pose of the robot in the two-dimensional grid map as the global pose observation G_pose of the robot;

iteratively optimizing the pose of the robot according to the position P_i of the landmark L_i on the landmark map, the position P_j of the landmark L_j on the landmark map, the estimated pose of the robot, and the global pose observation G_pose, until an error E_1 and an error E_2 are minimized, wherein the error E_1 is the difference between the positions of the landmark L_i and the landmark L_j on the landmark map, and the error E_2 is the difference between the estimated pose of the robot and the global pose observation G_pose.

In another aspect, the present application provides a fusion positioning apparatus, including:

a first acquisition module, configured to sample an environmental object through a two-dimensional laser radar carried by a robot to acquire radar point cloud related data of the environmental object, wherein the radar point cloud related data comprise the position P_i of the landmark L_i in the environmental object on a landmark map and the undistorted point cloud data of the environmental object;

a second acquisition module, configured to perform proximity matching of the position P_i of the landmark L_i against a landmark map, and obtain the position P_j of the landmark L_j nearest to the position P_i on the landmark map;

a third acquisition module, configured to perform scan matching on a two-dimensional grid map using the undistorted point cloud data of the environmental object, and acquire the global pose of the robot in the two-dimensional grid map as the global pose observation G_pose of the robot;

an iterative optimization module, configured to iteratively optimize the pose of the robot according to the position P_i of the landmark L_i on the landmark map, the position P_j of the landmark L_j on the landmark map, the estimated pose of the robot, and the global pose observation G_pose, until an error E_1 and an error E_2 are minimized, wherein the error E_1 is the difference between the positions of the landmark L_i and the landmark L_j on the landmark map, and the error E_2 is the difference between the estimated pose of the robot and the global pose observation G_pose.

In a third aspect, the present application provides an apparatus, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the technical solution of the above fusion positioning method when executing the computer program.

In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of the above technical solution of the fusion positioning method.

According to the technical solutions provided by the application, on the one hand, the environmental object is sampled and the landmark L_j nearest to the position P_i of the landmark L_i in the environmental object is matched, yielding the position difference E_1 between the landmark L_i and the landmark L_j on the landmark map; on the other hand, the undistorted point cloud data of the environmental object are used for scan matching on the two-dimensional grid map, yielding the difference E_2 between the estimated pose of the robot and the global pose observation G_pose of the robot; finally, the pose of the robot is iteratively optimized until the error E_1 and the error E_2 are minimized. Compared with the prior art, which adopts only landmark-assisted positioning, the technical solution of the application fuses landmark positioning with two-dimensional grid map positioning, so the positioning of the robot is more accurate.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.

Fig. 1 is a flowchart of a fusion positioning method provided in an embodiment of the present application;

FIG. 2 is a schematic structural diagram of a fusion positioning device provided in an embodiment of the present application;

fig. 3 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

In this specification, adjectives such as first and second may only be used to distinguish one element or action from another, without necessarily requiring or implying any actual such relationship or order. References to an element or component or step (etc.) should not be construed as limited to only one of the element, component, or step, but rather to one or more of the element, component, or step, etc., where the context permits.

In the present specification, the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.

The application provides a fusion positioning method that can be applied to a robot, such as a dish delivery robot operating in a restaurant, a medicine delivery robot operating in a medical facility such as a hospital, or a transfer robot operating in a warehouse or similar place. As shown in fig. 1, the fusion positioning method mainly includes steps S101 to S104, detailed as follows:

step S101: sampling an environmental object through a two-dimensional laser radar carried by a robot, and acquiring radar point cloud related data of the environmental object, wherein the radar point cloud related data comprises a road sign L in the environmental objectiPosition P on a roadmapiAnd undistorted point cloud data of the environmental object.

In the embodiments of the present application, the environment refers to the environment in which the robot works, and the environmental objects are all objects in that environment. The environmental objects include landmarks arranged in the environment as well as other objects such as goods, trees, walls, or tables. A landmark may be a rod-shaped object such as a reflective pillar or a reflective strip, arranged in the robot's working environment within the visible range of the two-dimensional laser radar; the diameter of a reflective pillar and the width of a reflective strip may each be about 5 centimeters. For more accurate auxiliary positioning, at least 3 landmarks such as reflective pillars or reflective strips should be arranged; as far as possible, the landmarks should not lie on one straight line and should keep a spacing of more than 1 meter.

As an embodiment of the present application, sampling the environmental object with the robot-mounted two-dimensional laser radar to acquire the radar point cloud related data of the environmental object can be realized through steps S1011 to S1014, described as follows:

step S1011: and sampling the environmental object through a two-dimensional laser radar to obtain original radar point cloud data of the environmental object.

In the embodiments of the application, the sampling process of the robot-mounted two-dimensional laser radar is the same as in the prior art: laser beams are emitted into the surrounding environment to scan the current environment in real time, and time-of-flight ranging is used to calculate the distance between the robot and a landmark in the environment. For each laser beam that hits an environmental object, the angle of the beam relative to the robot, the distance between the laser source and the struck object, and other information form the original radar point cloud data of the environmental object. The landmark L_i sampled by the two-dimensional laser radar is any one of the landmarks in the environment.

Step S1012: and based on the relative pose of each point in the original radar point cloud data, carrying out distortion removal on the original radar point cloud data of the environmental object to obtain distortion-removed point cloud data of the environmental object.

Because the robot moves while the two-dimensional laser radar samples the environment, the origin of the laser radar coordinate system differs for each sampling point. For example, when the two-dimensional laser radar samples the second point, the robot has already produced a displacement, so the origin of the laser radar coordinate system has also shifted; the same holds for every sampling point from the second point to the last point at the end of a scanning period. As a result the radar point cloud data are distorted, and the distortion should be eliminated. Specifically, undistorting the original radar point cloud data of the environmental object based on the relative pose of each point can be done as follows: after a scanning period ends, determine the new origin of the laser radar coordinate system in which the two-dimensional laser radar is located; multiply the relative pose ΔT by the coordinate P of each point in the original radar point cloud data in the laser radar coordinate system of the old origin, and take the product ΔT·P as the coordinate of each point, where the relative pose ΔT is the robot pose of each point relative to the new origin. Determining the new origin after a scanning period means that when one scanning period of the two-dimensional laser radar ends, i.e., when the laser radar samples the last point of a frame of point cloud data, the center point of the laser radar at that moment is taken as the new origin of the laser radar coordinate system.
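As a minimal sketch of this undistortion step, the Python snippet below remaps each scan point with a per-point relative pose ΔT. For illustration it assumes the robot motion over one sweep is interpolated linearly between the poses at sweep start and sweep end; the function names and the interpolation scheme are assumptions, not part of the original disclosure.

```python
import math

def se2_apply(dx, dy, dtheta, x, y):
    """Apply an SE(2) relative pose ΔT = (dx, dy, dθ) to a point (x, y)."""
    c, s = math.cos(dtheta), math.sin(dtheta)
    return (c * x - s * y + dx, s * x + c * y + dy)

def undistort_scan(points, pose_start, pose_end):
    """Remap each point of one sweep into the laser radar frame at the *end*
    of the scanning period (the new origin). `points` is a list of (x, y) in
    the old-origin frame; `pose_start`/`pose_end` are the robot poses
    (x, y, θ) at sweep start and sweep end. Motion is interpolated linearly."""
    n = len(points)
    undistorted = []
    for k, (x, y) in enumerate(points):
        t = k / (n - 1) if n > 1 else 1.0  # fraction of the sweep elapsed
        # Relative pose ΔT of this point's capture instant w.r.t. the new origin:
        dx = (1 - t) * (pose_start[0] - pose_end[0])
        dy = (1 - t) * (pose_start[1] - pose_end[1])
        dth = (1 - t) * (pose_start[2] - pose_end[2])
        undistorted.append(se2_apply(dx, dy, dth, x, y))
    return undistorted
```

The last point of the sweep is, by construction, already in the new-origin frame (ΔT is the identity there), matching the definition of the new origin above.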

Step S1013: for road sign L in environment objectiClustering the distortion-removed point cloud data to obtain a road sign LiThe point cloud cluster of (1).

As described above, the environmental objects include landmarks such as reflective pillars and reflective strips arranged in the environment, so the undistorted point cloud data of the environmental object obtained in step S1012 also include the undistorted point cloud data of the landmark L_i. To extract the undistorted point cloud data of the landmark L_i from the undistorted point cloud data of the environmental object: after undistorting the original radar point cloud data of the environmental object, compare the laser reflection intensity of each point in the undistorted point cloud data with a preset light intensity threshold; remove the points whose laser reflection intensity is below the threshold, and retain the points whose intensity is higher than (i.e., greater than or equal to) the threshold as the undistorted point cloud data of the landmark L_i in the environmental object. For example, if the laser reflection intensity detectable by the two-dimensional laser radar ranges from 0 to 1, then, since the laser reflection intensity of landmarks such as reflective pillars and reflective strips is generally above 0.85, the preset light intensity threshold may be set to 0.85; the points of the undistorted point cloud whose intensity is above 0.85 are then retained as the undistorted point cloud data of the landmark L_i in the environmental object.
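The intensity filtering just described can be sketched as follows; the 0.85 threshold matches the example in the text, while the point format (x, y, intensity) and the function name are illustrative assumptions.

```python
INTENSITY_THRESHOLD = 0.85  # example reflectivity floor for reflective pillars/strips

def extract_landmark_points(points, threshold=INTENSITY_THRESHOLD):
    """Keep only points whose laser reflection intensity reaches the threshold.
    `points` is a list of (x, y, intensity) tuples; returns (x, y) tuples."""
    return [(x, y) for (x, y, intensity) in points if intensity >= threshold]
```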

Considering that the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm can cluster dense data sets of any shape, finds outliers while clustering, and rarely exhibits clustering bias, in the embodiments of the application the DBSCAN algorithm may be used to cluster the undistorted point cloud data of the landmark L_i in the environment to obtain the point cloud cluster of the landmark L_i.
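For illustration, a compact DBSCAN over 2D points might look like the sketch below (in practice a library implementation such as scikit-learn's `DBSCAN` would normally be used; the parameter values here are assumptions):

```python
import math

def dbscan(points, eps=0.1, min_pts=3):
    """Minimal DBSCAN over 2D points. Returns one label per point:
    -1 for noise, otherwise a cluster id starting at 0."""
    labels = [None] * len(points)

    def neighbors(i):
        # eps-neighborhood by Euclidean distance (includes the point itself)
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1  # provisional noise; may become a border point later
            continue
        cluster += 1        # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reclaimed as a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = neighbors(j)
            if len(nbrs) >= min_pts:  # j is also a core point: expand
                queue.extend(nbrs)
    return labels
```

With a 2D lidar scan, each reflective pillar or strip typically yields one small dense cluster, and stray high-intensity returns fall out as noise.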

Step S1014: way findingMark LiThe point cloud is clustered on the geometric center coordinate of the robot body coordinate system to form a road sign LiThe geometrical center coordinate of the point cloud cluster under the robot body coordinate system is used as a road sign LiPosition P on a roadmapi

After the point cloud cluster of the landmark L_i is obtained by clustering, the cluster may be regarded as a particle, and the geometric center coordinate of the cluster in the robot body coordinate system can be obtained by a geometric or physical method and taken as the position P_i of the landmark L_i on the landmark map. Because in the embodiments of the application the diameter of the reflective pillar and the width of the reflective strip are both small, simply taking the geometric center of the point cloud cluster of the landmark L_i approximately reflects the geometric center of the landmark itself.
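Computing P_i from the cluster then reduces to a plain centroid; a minimal sketch (function name assumed):

```python
def cluster_centroid(cluster):
    """Geometric center of a point cloud cluster [(x, y), ...] in the robot
    body frame, used as the landmark position P_i."""
    n = len(cluster)
    return (sum(x for x, _ in cluster) / n,
            sum(y for _, y in cluster) / n)
```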

Step S102: marking the road sign L in the environment objectiPosition P on a roadmapiCarrying out proximity matching with the road sign map to obtain a distance position P on the road sign mapiNearest road sign LjPosition P ofj

In the embodiments of the present application, a landmark map is a map that reflects the positions of landmarks in the world coordinate system, where each point in the landmark map is regarded as a landmark. As mentioned above, the position P_i of the landmark L_i on the landmark map is the geometric center coordinate of the point cloud cluster of L_i in the robot body coordinate system. Specifically, a nearest neighbor algorithm such as kNN (k-nearest neighbors) may be used to match the position P_i of the landmark L_i against the landmark map and obtain the position P_j of the landmark L_j nearest to the position P_i on the landmark map.
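For a single observed landmark, the proximity matching of step S102 reduces to a 1-nearest-neighbor query; a brute-force sketch is shown below (for larger maps a k-d tree or a kNN library would be used; the function name is an assumption):

```python
import math

def nearest_landmark(p_i, landmark_map):
    """Return the map landmark position P_j closest to the observed
    position P_i. `landmark_map` is a list of (x, y) landmark positions."""
    return min(landmark_map, key=lambda p_j: math.dist(p_i, p_j))
```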

Step S103: scanning matching is carried out on the two-dimensional grid map by using the distortion-removed point cloud data of the environment object, and the global pose of the robot in the two-dimensional grid map is obtained and used as the global pose observation value G of the robotpose

In the embodiments of the present application, a two-dimensional grid map is also referred to as a two-dimensional grid probability map or a two-dimensional occupancy grid map, in which the plane is divided into grids and each grid is assigned an occupancy rate. The occupancy rate indicates whether a grid is occupied by an obstacle (Occupied state), free of obstacles (Free state), or somewhere in between: on the two-dimensional occupancy grid map, the occupied state is represented by 1, the free state by 0, and intermediate states by values between 0 and 1.

According to the embodiments of the application, performing scan matching on the two-dimensional grid map using the undistorted point cloud data of the environmental object, and acquiring the global pose of the robot in the two-dimensional grid map as the global pose observation G_pose of the robot, can be realized through steps S1031 to S1033, explained as follows:

step S1031: and determining a plurality of candidate poses in the pose search space according to the pose of the robot at the moment before the current moment.

The pose of the robot is its state in the world coordinate system, expressed by a position and an orientation; specifically, the pose of the robot at a given moment is expressed by its abscissa x and ordinate y in the world coordinate system and the angle θ relative to the abscissa axis, i.e., (x, y, θ). Assuming the estimated pose of the robot at the moment immediately before the current moment is (x, y, θ), each component of the pose is given an increment Δx, Δy, Δθ, and several candidate poses (x_1, y_1, θ_1), (x_2, y_2, θ_2), …, (x_i, y_i, θ_i), …, (x_n, y_n, θ_n) exist in the pose search space (Δx, Δy, Δθ).
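The candidate-pose enumeration of step S1031 can be sketched as a small grid around the previous pose; the window half-widths, step sizes, and function name below are assumed values for illustration only.

```python
def candidate_poses(prev_pose, dx=0.05, dy=0.05, dtheta=0.02, steps=2):
    """Enumerate candidate poses on a regular grid in the search window
    (Δx, Δy, Δθ) centered on the previous pose (x, y, θ)."""
    x, y, th = prev_pose
    return [(x + i * dx, y + j * dy, th + k * dtheta)
            for i in range(-steps, steps + 1)
            for j in range(-steps, steps + 1)
            for k in range(-steps, steps + 1)]
```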

Step S1032: and projecting the distortion-removed point cloud data of the environment object to the two-dimensional grid map by using each candidate pose of the candidate poses, and calculating the matching score of each candidate pose on the two-dimensional grid map.

Here, projecting the undistorted point cloud data of the environmental object onto the two-dimensional grid map means converting each undistorted point into an occupied position, i.e., a grid cell, on the map according to a certain transformation; under different candidate poses, the same points are converted into different grid cells. Assume a candidate pose (x_i, y_i, θ_i) and an undistorted point (x_j, y_j) of the environmental object. The grid cell z_k onto which that point is projected on the two-dimensional grid map is then:

z_k = map(x_i + x_j·cos θ_i − y_j·sin θ_i, y_i + x_j·sin θ_i + y_j·cos θ_i)

where z_k is the number of the grid cell at that position on the two-dimensional grid map, and map(·, ·) converts world coordinates into a grid-cell number.

As mentioned earlier, different grid cells are assigned different occupancy probabilities. The matching score of a candidate pose (x_i, y_i, θ_i) on the two-dimensional grid map is the sum of the occupancy probabilities of the grid cells onto which the undistorted points of the environmental object are projected under that pose. That is, if an arbitrary undistorted point (x_j, y_j) of the environmental object projects onto a grid cell with occupancy probability O_j, the matching score of candidate pose (x_i, y_i, θ_i) when the undistorted point cloud data are projected onto the two-dimensional grid map is

Score_i = O_1 + O_2 + … + O_m = Σ_{j=1}^{m} O_j

where m is the number of undistorted points of the environmental object.
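The scoring of step S1032 can be sketched as follows. The occupancy-map layout (a dict keyed by grid indices), the grid resolution, and the use of floor for indexing are illustrative assumptions:

```python
import math

def match_score(candidate, points, occupancy, resolution=0.5):
    """Score one candidate pose: transform each undistorted point into
    the world frame under that pose and sum the occupancy of the hit
    grid cells.

    `occupancy` maps grid indices (gx, gy) to occupancy probability;
    cells absent from the map contribute 0 to the score.
    """
    xi, yi, ti = candidate
    c, s = math.cos(ti), math.sin(ti)
    score = 0.0
    for xj, yj in points:
        # Rigid-body transform of the point by the candidate pose.
        wx = xi + xj * c - yj * s
        wy = yi + xj * s + yj * c
        cell = (math.floor(wx / resolution), math.floor(wy / resolution))
        score += occupancy.get(cell, 0.0)
    return score
```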

Step S1033: determine the candidate pose with the highest matching score on the two-dimensional grid map among the several candidate poses as the global pose of the robot in the two-dimensional grid map.

From the analysis in step S1032, the n candidate poses yield n different matching scores Score_i. The candidate pose with the highest matching score on the two-dimensional grid map is determined as the robot's global pose in the map: select Score_k = max{Score_i}, and take the candidate pose (x_k, y_k, θ_k) corresponding to Score_k as the robot's global pose in the two-dimensional grid map, which serves as the robot's global pose observation G_pose.
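Steps S1032 and S1033 together amount to an argmax over the candidate scores. A minimal sketch, assuming a scoring function of the shape described above is supplied by the caller:

```python
def best_global_pose(candidates, points, occupancy, score_fn):
    """Pick the candidate pose with the highest matching score.

    `score_fn(candidate, points, occupancy)` is assumed to implement
    the scan-match scoring of step S1032; the winning pose plays the
    role of the global pose observation G_pose.
    """
    scores = [score_fn(c, points, occupancy) for c in candidates]
    k = max(range(len(candidates)), key=scores.__getitem__)
    return candidates[k], scores[k]
```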

Step S104: iteratively optimize the robot's pose according to the position P_i of road sign L_i on the road-sign map, the position P_j of road sign L_j on the road-sign map, the robot's estimated pose, and the global pose observation G_pose, until error E_1 and error E_2 are minimized, where error E_1 is the difference between the positions of road sign L_i and road sign L_j on the road-sign map, and error E_2 is the difference between the robot's estimated pose and the global pose observation G_pose.

As one embodiment of the present application, iteratively optimizing the robot's pose according to the position P_i of road sign L_i on the road-sign map, the position P_j of road sign L_j on the road-sign map, the robot's estimated pose, and the global pose observation G_pose until error E_1 and error E_2 are minimized may be implemented as follows: construct a nonlinear least-squares problem with the robot's pose as the state, take error E_1 as an error constraint of the nonlinear least-squares problem, and iteratively optimize the robot's pose until error E_1 and error E_2 are minimized. Here, error E_1 is the difference between the positions of road sign L_i and road sign L_j on the road-sign map, and error E_2 is the difference between the robot's estimated pose and the global pose observation G_pose.
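A minimal sketch of the joint optimization, with two simplifying assumptions loudly noted: the pose is planar position only (orientation omitted), and plain gradient descent stands in for a full nonlinear least-squares solver. The parameter names (`p_i_offset` for the landmark observed in the robot frame, `p_j` for its matched map position, `g_pose` for the scan-match observation) are illustrative:

```python
def fuse_pose(est_pose, p_i_offset, p_j, g_pose, iters=200, lr=0.1):
    """Refine the robot position (x, y) so that the landmark error E1
    and the global-pose error E2 are jointly minimized.

    E1 compares the predicted landmark position (pose + observed
    offset) against the matched map landmark; E2 compares the pose
    against the scan-match observation.
    """
    x, y = est_pose
    for _ in range(iters):
        # E1: predicted landmark position vs. matched map landmark.
        e1 = (x + p_i_offset[0] - p_j[0], y + p_i_offset[1] - p_j[1])
        # E2: current pose vs. global pose observation G_pose.
        e2 = (x - g_pose[0], y - g_pose[1])
        # Gradient step on 0.5 * (|E1|^2 + |E2|^2) w.r.t. (x, y).
        x -= lr * (e1[0] + e2[0])
        y -= lr * (e1[1] + e2[1])
    return x, y
```

When the landmark and scan-match observations agree, the refined pose converges to the value that zeroes both residuals.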

As another embodiment of the present application, iteratively optimizing the robot's pose according to the position P_i of road sign L_i on the road-sign map, the position P_j of road sign L_j on the road-sign map, the robot's estimated pose, and the global pose observation G_pose until error E_1 and error E_2 are minimized may also be implemented as follows: construct a particle filter model or an extended Kalman filter model based on the position P_i of road sign L_i on the road-sign map, the position P_j of road sign L_j on the road-sign map, the robot's estimated pose, and the global pose observation G_pose, and continuously correct the robot's estimated pose with the particle filter model or the extended Kalman filter model until error E_1 and error E_2 are minimized; here, error E_1 and error E_2 have the same meanings as in the preceding embodiment. In the embodiments of the present application, the Extended Kalman Filter (EKF) is the extension of the standard Kalman filter to the nonlinear case; it is an efficient recursive (autoregressive) filter whose basic idea is to linearize the nonlinear system using a Taylor-series expansion and then filter the data within the Kalman filtering framework. Particle filtering approximately represents the probability density function by a set of random samples propagating in the state space, and uses the mean of the samples, vividly called "particles", in place of the integral operation, thereby obtaining a minimum-variance estimate of the system state.

Constructing a particle filter model based on the position P_i of road sign L_i on the road-sign map, the position P_j of road sign L_j on the road-sign map, the robot's estimated pose, and the global pose observation G_pose, and continuously correcting the robot's estimated pose with that model until error E_1 and error E_2 are minimized, may be implemented as follows: obtain the current particle swarm based on the robot's estimated pose and the pose change over a preset time period, where each particle in the swarm represents one possible pose of the robot and the swarm represents the probability distribution of the robot's pose; assign a weight to each particle in the current swarm and obtain a first estimated pose of the robot from the assigned weights, where a weight represents the probability that the particle is the current actual pose; based on the first estimated pose, the position P_i of road sign L_i on the road-sign map, the position P_j of road sign L_j on the road-sign map, and the global pose observation G_pose, perform scan matching with a preset scan-matching model to obtain a second estimated pose; and continuously correct the second estimated pose until error E_1 and error E_2 are minimized, taking the resulting estimated pose as the robot's final pose.
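The predict-weight-estimate-resample cycle described above can be sketched in one dimension. The Gaussian likelihood, the noise scales, and the weighted-mean estimate are illustrative assumptions rather than the method's prescribed forms:

```python
import math
import random

def particle_filter_step(particles, motion, observation, noise=0.05):
    """One cycle of a 1-D particle filter: propagate each particle by
    the pose change, weight it by how well it explains the scan-match
    observation, estimate the pose as the weighted mean, and resample.
    """
    # Predict: propagate each particle by the motion, with jitter.
    predicted = [p + motion + random.gauss(0.0, noise) for p in particles]
    # Weight: particles near the observation are more probable.
    weights = [math.exp(-((p - observation) ** 2) / (2 * 0.1 ** 2))
               for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Estimate: the weighted sample mean stands in for the integral.
    estimate = sum(w * p for w, p in zip(weights, predicted))
    # Resample: draw a new swarm proportional to the weights.
    resampled = random.choices(predicted, weights=weights, k=len(particles))
    return resampled, estimate
```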

As can be seen from the fusion positioning method illustrated in fig. 1, in the technical scheme of the present application, on one hand, the environment is sampled and the road sign L_j nearest to the position P_i of road sign L_i in the environmental object is matched, obtaining the position difference E_1 between road sign L_i and road sign L_j on the road-sign map; on the other hand, the undistorted point cloud data of the environmental object are scan-matched against the two-dimensional grid map, obtaining the difference E_2 between the robot's estimated pose and the global pose observation G_pose; finally, the robot's pose is iteratively optimized until error E_1 and error E_2 are minimized. Compared with the prior art, which uses only road-sign-assisted positioning, the technical scheme of the present application fuses road-sign positioning with two-dimensional grid-map positioning, so that the positioning of the robot is more accurate.

Referring to fig. 2, a fusion positioning apparatus provided in the embodiment of the present application may include a first obtaining module 201, a second obtaining module 202, a third obtaining module 203, and an iterative optimization module 204, which are detailed as follows:

a first obtaining module 201, configured to sample an environmental object through a two-dimensional laser radar carried by the robot and obtain radar point cloud related data of the environmental object, where the radar point cloud related data include the position P_i of road sign L_i in the environmental object on the road-sign map and the undistorted point cloud data of the environmental object;

a second obtaining module 202, configured to perform proximity matching between the position P_i of road sign L_i in the environmental object on the road-sign map and the road-sign map, and obtain the position P_j of the road sign L_j nearest to position P_i on the road-sign map;

a third obtaining module 203, configured to perform scan matching on the two-dimensional grid map using the undistorted point cloud data of the environmental object, and obtain the global pose of the robot in the two-dimensional grid map as the robot's global pose observation G_pose;

an iterative optimization module 204, configured to iteratively optimize the robot's pose according to the position P_i of road sign L_i on the road-sign map, the position P_j of road sign L_j on the road-sign map, the robot's estimated pose, and the global pose observation G_pose, until error E_1 and error E_2 are minimized, where error E_1 is the difference between the positions of road sign L_i and road sign L_j on the road-sign map, and error E_2 is the difference between the robot's estimated pose and the global pose observation G_pose.

Optionally, the first obtaining module 201 illustrated in fig. 2 may include a sampling unit, a correcting unit, a clustering unit, and a center finding unit, wherein:

the sampling unit is used for sampling the environmental object through the two-dimensional laser radar to obtain original radar point cloud data of the environmental object;

the correction unit is used for carrying out distortion removal on the original radar point cloud data of the environment object based on the relative pose of each point in the original radar point cloud data to obtain distortion removal point cloud data of the environment object;

the clustering unit is used for clustering the undistorted point cloud data of road sign L_i in the environment to obtain the point cloud cluster of road sign L_i;

the center-calculating unit is used for calculating the geometric center coordinates of road sign L_i's point cloud cluster in the robot body coordinate system, and taking those geometric center coordinates as the position P_i of road sign L_i on the road-sign map.

Alternatively, the correction unit may include a new origin determination unit and a multiplication unit, wherein:

the new origin determining unit is used for determining a new origin of a laser radar coordinate system where the two-dimensional laser radar is located after a scanning period;

and the multiplying unit is used for multiplying the relative pose ΔT by the coordinate P of each point in the original radar point cloud data in the lidar coordinate system of the old origin, and taking the product ΔT·P as the corrected coordinate of each point, where the relative pose ΔT is the robot pose of each point in the original radar point cloud data relative to the new origin.
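The ΔT·P correction can be sketched as a 2-D rigid transform of each raw point into the lidar frame at the new origin; treating ΔT as a (dx, dy, dθ) triple is an assumed parameterization:

```python
import math

def deskew_point(point, delta_pose):
    """Re-express one raw lidar point in the frame at the new origin
    (end of the scan sweep) by applying the relative pose
    ΔT = (dx, dy, dtheta) accumulated since the point was measured.
    """
    px, py = point
    dx, dy, dtheta = delta_pose
    c, s = math.cos(dtheta), math.sin(dtheta)
    # Rotate by dtheta, then translate by (dx, dy): the ΔT · P product.
    return (c * px - s * py + dx, s * px + c * py + dy)
```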

Optionally, the apparatus illustrated in fig. 2 may further comprise a comparison module and a screening module, wherein:

the comparison module is used for comparing the laser reflection intensity corresponding to each point in the distortion-removed point cloud data of the environmental object with a preset light intensity threshold value;

the screening module is used for retaining, among the undistorted point cloud data of the environmental object, the points whose laser reflection intensity is higher than the preset intensity threshold as the undistorted point cloud data of road sign L_i in the environmental object.
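The comparison and screening modules amount to an intensity threshold filter over the undistorted points; the threshold value here is an illustrative assumption:

```python
def filter_by_intensity(points, intensities, threshold=0.8):
    """Keep only the undistorted points whose laser reflection
    intensity exceeds the preset threshold, so that returns from
    highly reflective road signs survive the screening.
    """
    return [p for p, i in zip(points, intensities) if i > threshold]
```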

Optionally, the third obtaining module 203 illustrated in fig. 2 may include a searching unit, a matching unit, and a global pose determining unit, wherein:

the searching unit is used for determining a plurality of candidate poses in the pose searching space according to the pose of the robot at the moment before the current moment;

the matching unit is used for projecting the distortion-removed point cloud data of the environmental object to the two-dimensional grid map by utilizing each candidate pose of the candidate poses, and calculating the matching score of each candidate pose on the two-dimensional grid map;

and the global pose determining unit is used for determining, among the several candidate poses, the candidate pose with the highest matching score on the two-dimensional grid map as the global pose of the robot in the two-dimensional grid map.

Optionally, the iterative optimization module 204 illustrated in fig. 2 may include a problem construction unit and a first correction unit, wherein:

the construction unit is used for constructing a nonlinear least squares problem by taking the pose of the robot as a state;

a first correction unit for correcting the error E1Iteratively optimizing the pose of the robot for the error constraint condition of the nonlinear least squares problem until an error E1And error E2At a minimum time.

Optionally, the iterative optimization module 204 illustrated in fig. 2 may include a model construction unit and a second correction unit, wherein:

the model construction unit is used for constructing a particle filter model or an extended Kalman filter model based on the position P_i of road sign L_i on the road-sign map, the position P_j of road sign L_j on the road-sign map, the robot's estimated pose, and the global pose observation G_pose;

the second correction unit is used for continuously correcting the robot's estimated pose with the particle filter model or the extended Kalman filter model until error E_1 and error E_2 are minimized.

As is clear from the above description of the technical solutions, in the technical scheme of the present application, on one hand, the environment is sampled and the road sign L_j nearest to the position P_i of road sign L_i in the environmental object is matched, obtaining the position difference E_1 between road sign L_i and road sign L_j on the road-sign map; on the other hand, the undistorted point cloud data of the environmental object are scan-matched against the two-dimensional grid map, obtaining the difference E_2 between the robot's estimated pose and the global pose observation G_pose; finally, the robot's pose is iteratively optimized until error E_1 and error E_2 are minimized. Compared with the prior art, which uses only road-sign-assisted positioning, the technical scheme of the present application fuses road-sign positioning with two-dimensional grid-map positioning, so that the positioning of the robot is more accurate.

Please refer to fig. 3, which is a schematic structural diagram of an apparatus according to an embodiment of the present application. As shown in fig. 3, the apparatus 3 of this embodiment mainly includes: a processor 30, a memory 31 and a computer program 32, such as a program of a fusion positioning method, stored in the memory 31 and executable on the processor 30. The processor 30, when executing the computer program 32, implements the steps in the above-described embodiment of the fusion localization method, such as the steps S101 to S104 shown in fig. 1. Alternatively, the processor 30, when executing the computer program 32, implements the functions of the modules/units in the above-described apparatus embodiments, such as the functions of the first obtaining module 201, the second obtaining module 202, the third obtaining module 203 and the iterative optimization module 204 shown in fig. 2.

Illustratively, the computer program 32 of the fusion positioning method mainly comprises: sampling an environmental object through a two-dimensional laser radar carried by the robot, and acquiring radar point cloud related data of the environmental object, where the radar point cloud related data include the position P_i of road sign L_i in the environmental object on the road-sign map and the undistorted point cloud data of the environmental object; performing proximity matching between the position P_i of road sign L_i in the environmental object on the road-sign map and the road-sign map, and obtaining the position P_j of the road sign L_j nearest to position P_i on the road-sign map; performing scan matching on the two-dimensional grid map using the undistorted point cloud data of the environmental object, and obtaining the global pose of the robot in the two-dimensional grid map as the robot's global pose observation G_pose; and iteratively optimizing the robot's pose according to the position P_i of road sign L_i on the road-sign map, the position P_j of road sign L_j on the road-sign map, the robot's estimated pose, and the global pose observation G_pose, until error E_1 and error E_2 are minimized, where error E_1 is the difference between the positions of road sign L_i and road sign L_j on the road-sign map, and error E_2 is the difference between the robot's estimated pose and the global pose observation G_pose. The computer program 32 may be partitioned into one or more modules/units, which are stored in the memory 31 and executed by the processor 30 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution of the computer program 32 in the device 3.
For example, the computer program 32 may be divided into the functions of the first obtaining module 201, the second obtaining module 202, the third obtaining module 203, and the iterative optimization module 204 (modules in a virtual device); the specific functions of each module are as follows: the first obtaining module 201 is configured to sample an environmental object through a two-dimensional laser radar carried by the robot and obtain radar point cloud related data of the environmental object, where the radar point cloud related data include the position P_i of road sign L_i in the environmental object on the road-sign map and the undistorted point cloud data of the environmental object; the second obtaining module 202 is configured to perform proximity matching between the position P_i of road sign L_i in the environmental object on the road-sign map and the road-sign map, and obtain the position P_j of the road sign L_j nearest to position P_i on the road-sign map; the third obtaining module 203 is configured to perform scan matching on the two-dimensional grid map using the undistorted point cloud data of the environmental object, and obtain the global pose of the robot in the two-dimensional grid map as the robot's global pose observation G_pose; the iterative optimization module 204 is configured to iteratively optimize the robot's pose according to the position P_i of road sign L_i on the road-sign map, the position P_j of road sign L_j on the road-sign map, the robot's estimated pose, and the global pose observation G_pose, until error E_1 and error E_2 are minimized, where error E_1 is the difference between the positions of road sign L_i and road sign L_j on the road-sign map, and error E_2 is the difference between the robot's estimated pose and the global pose observation G_pose.

The device 3 may include, but is not limited to, a processor 30, a memory 31. Those skilled in the art will appreciate that fig. 3 is merely an example of a device 3 and does not constitute a limitation of device 3 and may include more or fewer components than shown, or some components in combination, or different components, e.g., a computing device may also include input-output devices, network access devices, buses, etc.

The Processor 30 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.

The memory 31 may be an internal storage unit of the device 3, such as a hard disk or a memory of the device 3. The memory 31 may also be an external storage device of the device 3, such as a plug-in hard disk provided on the device 3, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 31 may also include both an internal storage unit of the device 3 and an external storage device. The memory 31 is used for storing computer programs and other programs and data required by the device. The memory 31 may also be used to temporarily store data that has been output or is to be output.

It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as required to different functional units and modules, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the above-mentioned apparatus may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the above-described apparatus/device embodiments are merely illustrative, and for example, a module or a unit may be divided into only one logic function, and may be implemented in other ways, for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

If implemented in the form of software functional units and sold or used as independent products, the integrated modules/units may be stored in a non-transitory computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the above embodiments may also be implemented by instructing related hardware through a computer program. The computer program of the fusion positioning method may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of the method embodiments, namely: sampling an environmental object through a two-dimensional laser radar carried by the robot, and acquiring radar point cloud related data of the environmental object, where the radar point cloud related data include the position P_i of road sign L_i in the environmental object on the road-sign map and the undistorted point cloud data of the environmental object; performing proximity matching between the position P_i of road sign L_i on the road-sign map and the road-sign map, and obtaining the position P_j of the road sign L_j nearest to position P_i; performing scan matching on the two-dimensional grid map using the undistorted point cloud data of the environmental object, and obtaining the global pose of the robot in the two-dimensional grid map as the robot's global pose observation G_pose; and iteratively optimizing the robot's pose according to the position P_i of road sign L_i on the road-sign map, the position P_j of road sign L_j on the road-sign map, the robot's estimated pose, and the global pose observation G_pose, until error E_1 and error E_2 are minimized, where error E_1 is the difference between the positions of road sign L_i and road sign L_j on the road-sign map, and error E_2 is the difference between the robot's estimated pose and the global pose observation G_pose.
The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The non-transitory computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the non-transitory computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the non-transitory computer readable medium does not include electrical carrier signals and telecommunications signals. The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application and shall be included within the protection scope of the present application.
The above-described embodiments further illustrate the objects, technical solutions, and advantages of the present application in detail. It should be understood that the above description is only exemplary of embodiments of the present application and is not intended to limit the scope of protection of the present application; any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present application shall be included within the scope of protection of the present application.
