Multi-sensor-based equipment docking method

Document No.: 85354    Publication date: 2021-10-08

Reading note: This technology, a multi-sensor-based equipment docking method, was designed and created by 李越 and 赖志林 on 2021-07-21. Its main content is as follows. The invention discloses a multi-sensor equipment docking method. A camera on the robot scans a two-dimensional code on the docking equipment, and the center coordinate Q of the two-dimensional code is obtained through the aruco algorithm. The coordinate Q1 of the two-dimensional code relative to the robot is obtained from the position of the robot center relative to the map and the installation position of the camera on the robot, and the coordinate point P2 of the docking equipment relative to the robot is obtained from the position of the two-dimensional code on the docking equipment; P2 is the target docking point. With the robot's current coordinate R0 and the coordinate P2 of the target docking point known, a transition point P3 is taken in the docking direction of the docking equipment. The route from P2 to P3 is planned as path1 using a Bezier first-order curve, the route from P3 to R0 is planned as path2 using a Bezier second-order curve, and path1 and path2 are merged into path3. The robot is controlled to move along path3 so that its current coordinate R0 coincides with the target docking point P2. The invention prevents interference with the side edge of the docking equipment and improves docking accuracy.

1. A multi-sensor device-based docking method is characterized in that: the method comprises the following steps:

s1, the robot goes to a preset docking point P1(x1, y1, yaw1) near the docking equipment through a built-in navigation system, wherein P1(x1, y1, yaw1) is a coordinate relative to the origin of a map;

s2, acquiring the current coordinate R0(x0, y0, yaw0) of the robot, wherein R0(x0, y0, yaw0) is a coordinate relative to the origin of the map;

s3, scanning the two-dimensional code on the docking equipment with a camera on the robot, and calculating the center coordinate Q(x0, y0, yaw0) of the two-dimensional code with the aruco algorithm;

s4, calculating to obtain coordinates Q1(x1, y1, yaw1) of the two-dimensional code relative to the robot according to the position of the center of the robot relative to the map and the installation position of the camera on the robot;

s5, according to the position of the two-dimensional code on the docking equipment, calculating through coordinate conversion the coordinate P2(x2, y2, yaw2) of the docking equipment relative to the robot, and taking the coordinate P2(x2, y2, yaw2) as the target docking point;

s6, knowing current coordinates R0(x0, y0, yaw0) of the robot and coordinates P2(x2, y2, yaw2) of a target docking point, taking a transition point P3(x3, y3, yaw3) in the docking direction of the docking device, wherein the transition point P3(x3, y3, yaw3) is a coordinate point relative to the robot;

s7, planning the route from P2 to P3 as a straight path1 using a Bezier first-order curve, planning the route from P3 to R0 as path2 using a Bezier second-order curve, and combining path1 and path2 into path3;

and S8, controlling the robot to move along the planned path3 using a Pure Pursuit path tracking algorithm, so that the current coordinate R0(x0, y0, yaw0) of the robot coincides with the target docking point P2(x2, y2, yaw2).

2. The multi-sensor-based device docking method according to claim 1, wherein: the method of taking the transition point P3 in the docking direction in step S6 is:

s61, setting the intersection point of the docking target point P2, along the docking direction, with the docking equipment boundary as a1; s62, setting the distance between the docking target point P2 and a1 as a, and the distance from a1 to the transition point P3 as S1;

s63, if the robot is circular with radius R, then S1 > R; if the robot is rectangular with length L, then S1 > L/2; the distance from the target docking point P2 to the transition point P3 along the docking direction is S2, with S2 ≥ a + L/2, and the distance in the Y direction is S3;

s64, setting the direction of the transition point P3(x3, y3, yaw3) to be consistent with that of the target docking point P2(x2, y2, yaw2), and establishing a coordinate system at the target docking point P2, whereby:

x3 = x2 + S2*cos(yaw2) - S3*sin(yaw2), y3 = y2 + S2*sin(yaw2) + S3*cos(yaw2), yaw3 = yaw2, and the transition point P3(x3, y3, yaw3) is obtained.

3. The multi-sensor-based device docking method according to claim 2, wherein: between step S2 and step S3, the method further comprises calibrating the robot angle through an IMU sensor built into the robot, so that the difference between the yaw0 value of the current coordinate R0 of the robot and the yaw1 value of the preset docking point P1 is smaller than or equal to a preset threshold.

4. The multi-sensor-based device docking method according to claim 3, wherein: the preset threshold is 1 °.

5. The multi-sensor-based device docking method according to claim 4, wherein: when the robot is controlled to move along the planned path3 in step S8, the coordinate Q1(x1, y1, yaw1) of the re-recognized two-dimensional code relative to the robot is compared with the center coordinate Q(x0, y0, yaw0) of the first-recognized two-dimensional code, the difference between the two is calculated, and it is judged whether the difference exceeds the threshold; if so, step S4 is repeated, and if not, the robot continues to move along path3.

6. The multi-sensor-based device docking method according to claim 5, wherein: the method of calculating the difference between the coordinate Q1(x1, y1, yaw1) of the re-recognized two-dimensional code relative to the robot and the center coordinate Q(x0, y0, yaw0) of the first-recognized two-dimensional code is:

Dx = x0 - x1, Dy = y0 - y1, Yaw = yaw0 - yaw1; if Dx < 0.01 and Dy < 0.01 and Yaw < 1, the robot continues to move along path3; otherwise step S4 is repeated.

Technical Field

The invention relates to the technical field of mobile robot equipment docking, in particular to a multi-sensor equipment docking method.

Background

At present, in the field of intelligent robots, when a robot docks with auxiliary equipment such as charging stations and goods shelves, positioning is inefficient, slow and unstable. Existing methods generally use sensors such as two-dimensional code positioning, laser positioning or vision positioning for docking control, but the sensor data are unstable while the robot is moving, which reduces positioning accuracy and results in low efficiency and poor stability.

Disclosure of Invention

The invention aims to overcome the defects of the prior art and provide a multi-sensor-based equipment docking method that prevents interference with the side edge of the docking equipment, avoids docking failure, improves user experience and docking accuracy, and has strong universality.

The invention is realized by the following technical scheme: a multi-sensor-based device docking method comprises the following steps:

s1, the robot goes to a preset docking point P1(x1, y1, yaw1) near the docking equipment through a built-in navigation system, wherein P1(x1, y1, yaw1) is a coordinate relative to the origin of a map;

s2, acquiring the current coordinate R0(x0, y0, yaw0) of the robot, wherein R0(x0, y0, yaw0) is a coordinate relative to the origin of the map;

s3, scanning the two-dimensional code on the docking equipment with a camera on the robot, and calculating the center coordinate Q(x0, y0, yaw0) of the two-dimensional code with the aruco algorithm;

s4, calculating to obtain coordinates Q1(x1, y1, yaw1) of the two-dimensional code relative to the robot according to the position of the center of the robot relative to the map and the installation position of the camera on the robot;

s5, according to the position of the two-dimensional code on the docking equipment, calculating through coordinate conversion the coordinate P2(x2, y2, yaw2) of the docking equipment relative to the robot, and taking the coordinate P2(x2, y2, yaw2) as the target docking point;

s6, knowing current coordinates R0(x0, y0, yaw0) of the robot and coordinates P2(x2, y2, yaw2) of a target docking point, taking a transition point P3(x3, y3, yaw3) in the docking direction of the docking device, wherein the transition point P3(x3, y3, yaw3) is a coordinate point relative to the robot;

s7, planning the route from P2 to P3 as a straight path1 using a Bezier first-order curve, planning the route from P3 to R0 as path2 using a Bezier second-order curve, and combining path1 and path2 into path3;

and S8, controlling the robot to move along the planned path3 using a Pure Pursuit path tracking algorithm, so that the current coordinate R0(x0, y0, yaw0) of the robot coincides with the target docking point P2(x2, y2, yaw2).

Further: the method of taking the transition point P3 in the docking direction in step S6 is:

s61, setting the intersection point of the docking target point P2, along the docking direction, with the docking equipment boundary as a1;

s62, setting the distance between the docking target point P2 and a1 as a, and the distance from a1 to the transition point P3 as S1;

s63, if the robot is circular with radius R, then S1 > R; if the robot is rectangular with length L, then S1 > L/2; the distance from the target docking point P2 to the transition point P3 along the docking direction is S2, with S2 ≥ a + L/2, and the distance in the Y direction is S3;

s64, setting the direction of the transition point P3(x3, y3, yaw3) to be consistent with that of the target docking point P2(x2, y2, yaw2), and establishing a coordinate system at the target docking point P2, whereby:

x3 = x2 + S2*cos(yaw2) - S3*sin(yaw2), y3 = y2 + S2*sin(yaw2) + S3*cos(yaw2), yaw3 = yaw2, and the transition point P3(x3, y3, yaw3) is obtained.

Further: between step S2 and step S3, the method further comprises calibrating the robot angle through an IMU sensor built into the robot, so that the difference between the yaw0 value of the current coordinate R0 of the robot and the yaw1 value of the preset docking point P1 is smaller than or equal to a preset threshold.

Further: the preset threshold is 1 °.

Further: when the robot is controlled to move along the planned path3 in step S8, the coordinate Q1(x1, y1, yaw1) of the re-recognized two-dimensional code relative to the robot is compared with the center coordinate Q(x0, y0, yaw0) of the first-recognized two-dimensional code, the difference between the two is calculated, and it is judged whether the difference exceeds the threshold; if so, step S4 is repeated, and if not, the robot continues to move along path3.

Further: the method of calculating the difference between the coordinate Q1(x1, y1, yaw1) of the re-recognized two-dimensional code relative to the robot and the center coordinate Q(x0, y0, yaw0) of the first-recognized two-dimensional code is:

Dx = x0 - x1, Dy = y0 - y1, Yaw = yaw0 - yaw1; if Dx < 0.01 and Dy < 0.01 and Yaw < 1, the robot continues to move along path3; otherwise step S4 is repeated.

The invention has the beneficial effects that:

Compared with the prior art, the invention scans the two-dimensional code on the docking equipment with the camera carried by the robot and calculates the center coordinate Q(x0, y0, yaw0) of the two-dimensional code through the aruco algorithm. The coordinate Q1(x1, y1, yaw1) of the two-dimensional code relative to the robot is then calculated from the position of the robot center relative to the map and the installation position of the camera on the robot, and the coordinate point P2 of the docking equipment relative to the robot is obtained by coordinate transformation from the position of the two-dimensional code on the docking equipment. Given the current coordinate R0 and the coordinate P2 of the target docking point, the transition point P3 is taken in the docking direction of the docking equipment as a coordinate point relative to the robot, so that the target docking point P2, the transition point P3 and the current coordinate R0 of the robot can all be expressed in the robot map coordinate system. The distance from the transition point P3 to the intersection of the docking direction through the docking target point P2 with the docking equipment boundary is greater than the radius of the robot or other equipment, or greater than one half of its length. The route from P2 to P3 is then planned as a straight Path1 using a Bezier first-order curve, the route from P3 to R0 is planned as Path2 using a Bezier second-order curve, Path1 and Path2 are merged into Path3, and the robot is controlled to move along the planned Path3 by a Pure Pursuit path tracking algorithm, so that the current coordinate R0(x0, y0, yaw0) of the robot coincides with the target docking point P2(x2, y2, yaw2).

The robot is constrained by the transition point P3 while moving along the planned path3: before docking with the docking equipment, the robot first moves so that its center is perpendicular to the plane center point of the docking equipment. Because the turning radius of the robot or other equipment is taken into account when choosing the transition point P3, interference between the side of the robot or other equipment and the docking equipment is prevented before the robot has reached this perpendicular position, docking failure is avoided, and both the user experience and the docking accuracy are improved. In addition, the positioning method is not limited to two-dimensional codes; known positioning sensors and algorithms such as laser radar, infrared sensors and ultrasonic sensors can all be used, so the method has strong universality.

Drawings

FIG. 1 is a flow chart of a multi-sensor device based docking method of the present invention;

fig. 2 is a schematic diagram of the docking of the robot and the docking device of the present invention.

Description of reference numerals: 1-robot, 2-docking equipment.

Detailed Description

Referring to fig. 1, the invention relates to a multi-sensor device-based docking method, which comprises the following steps:

s1, the robot goes to a preset docking point P1(x1, y1, yaw1) near the docking device through a built-in navigation system, and P1(x1, y1, yaw1) is a coordinate relative to the origin of the map.

S2, acquiring the current coordinate R0(x0, y0, yaw0) of the robot, wherein R0(x0, y0, yaw0) is a coordinate relative to the origin of the map.

S3, scanning the two-dimensional code on the docking equipment with a camera on the robot, and calculating the center coordinate Q(x0, y0, yaw0) of the two-dimensional code with the aruco algorithm.
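
As an illustration of step S3, the following is a minimal sketch of how the pose of the two-dimensional code could be recovered with OpenCV's ArUco detector, assuming OpenCV 4.7 or later, a calibrated camera (camera_matrix, dist_coeffs) and a marker of known physical size; the file names, the dictionary choice and the planar-yaw extraction are illustrative assumptions, not requirements of the method.

```python
import cv2
import numpy as np

MARKER_SIZE_M = 0.10  # assumed physical edge length of the two-dimensional code, in metres
camera_matrix = np.load("camera_matrix.npy")   # hypothetical calibration files
dist_coeffs = np.load("dist_coeffs.npy")

def detect_code_pose(frame):
    """Return (x, y, yaw) of the code centre in the camera frame, or None if not seen."""
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None or len(corners) == 0:
        return None
    s = MARKER_SIZE_M / 2.0
    obj_pts = np.array([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]], dtype=np.float32)
    img_pts = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, camera_matrix, dist_coeffs)
    if not ok:
        return None
    rot, _ = cv2.Rodrigues(rvec)                   # 3x3 rotation of the marker in the camera frame
    yaw = float(np.arctan2(rot[1, 0], rot[0, 0]))  # planar heading, a simplification for a 2D docking task
    return float(tvec[0, 0]), float(tvec[1, 0]), yaw
```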

And S4, calculating to obtain coordinates Q1(x1, y1, yaw1) of the two-dimensional code relative to the robot according to the position of the center of the robot relative to the map and the installation position of the camera on the robot.

And S5, according to the position of the two-dimensional code on the docking equipment, calculating through coordinate conversion the coordinate P2(x2, y2, yaw2) of the docking equipment relative to the robot, and taking P2(x2, y2, yaw2) as the target docking point.
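
Steps S4 and S5 amount to composing 2D poses (x, y, yaw): the code pose measured in the camera frame is pushed through the camera's mounting pose on the robot (S4), and the target docking point follows from the known offset of the code on the docking equipment (S5). A minimal sketch, with the mounting pose and the code offset as illustrative constants rather than values from the patent:

```python
import math

def compose(parent, child):
    """Express `child` (a pose relative to `parent`) in the frame that `parent` is expressed in."""
    px, py, pyaw = parent
    cx, cy, cyaw = child
    return (px + cx * math.cos(pyaw) - cy * math.sin(pyaw),
            py + cx * math.sin(pyaw) + cy * math.cos(pyaw),
            pyaw + cyaw)

CAMERA_IN_ROBOT = (0.20, 0.00, 0.0)     # assumed camera mounting pose on the robot
DOCK_POINT_IN_CODE = (0.00, 0.00, 0.0)  # assumed pose of the docking point relative to the code

def target_docking_point(code_in_camera):
    q1 = compose(CAMERA_IN_ROBOT, code_in_camera)   # S4: code relative to the robot
    p2 = compose(q1, DOCK_POINT_IN_CODE)            # S5: docking point P2 relative to the robot
    return q1, p2
```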

S6, knowing the current coordinates R0(x0, y0, yaw0) of the robot, the coordinates P2(x2, y2, yaw2) of the target docking point, taking the transition point P3(x3, y3, yaw3) in the docking direction of the docking apparatus, the transition point P3(x3, y3, yaw3) being the coordinate point relative to the robot.

Referring to fig. 2, specifically, the method for taking the transition point P3 in the docking direction is as follows:

S61, the intersection point of the docking target point P2, along the docking direction, with the boundary of the docking device is set as a1. S62, let the distance between the docking target point P2 and a1 be a, and the distance from a1 to the transition point P3 be S1.

S63, if the robot is circular with radius R, then S1 > R; if the robot is rectangular with length L, then S1 > L/2. The distance from the target docking point P2 to the transition point P3 along the docking direction is S2, with S2 ≥ a + L/2, and the distance in the Y direction is S3.

S64, setting the direction of the transition point P3(x3, y3, yaw3) to be consistent with that of the target docking point P2(x2, y2, yaw2), and establishing a coordinate system at the target docking point P2, whereby:

x3 = x2 + S2*cos(yaw2) - S3*sin(yaw2), y3 = y2 + S2*sin(yaw2) + S3*cos(yaw2), yaw3 = yaw2, and the transition point P3(x3, y3, yaw3) is obtained.
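
Written as code, the S63/S64 construction of the transition point is a direct transcription of the formula above; the choice of S2 and S3 (subject to S1 > R or S1 > L/2 and S2 ≥ a + L/2) is left to the caller.

```python
import math

def transition_point(p2, s2, s3):
    """P3 offset from the target docking point P2 by S2 along and S3 across the docking direction."""
    x2, y2, yaw2 = p2
    x3 = x2 + s2 * math.cos(yaw2) - s3 * math.sin(yaw2)
    y3 = y2 + s2 * math.sin(yaw2) + s3 * math.cos(yaw2)
    return x3, y3, yaw2   # yaw3 = yaw2: P3 keeps the docking direction
```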

S7, planning the route from P2 to P3 as a straight Path1 using a Bezier first-order curve, planning the route from P3 to R0 as Path2 using a Bezier second-order curve, and combining Path1 and Path2 into Path3;
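
A first-order Bezier curve between two points is simply a straight segment, and a second-order (quadratic) Bezier curve needs one control point, which the text does not specify. The sketch below places that control point, purely for illustration, a short distance ahead of P3 along the docking direction, and reverses the merged list on the assumption that the robot drives the path from R0 toward P2.

```python
import math

def bezier1(p0, p1, n=20):
    """First-order Bezier: linear interpolation from p0 to p1 (x, y pairs)."""
    return [((1 - t) * p0[0] + t * p1[0], (1 - t) * p0[1] + t * p1[1])
            for t in (i / n for i in range(n + 1))]

def bezier2(p0, c, p1, n=40):
    """Second-order Bezier from p0 to p1 with control point c."""
    pts = []
    for i in range(n + 1):
        t = i / n
        pts.append(((1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * c[0] + t ** 2 * p1[0],
                    (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * c[1] + t ** 2 * p1[1]))
    return pts

def plan_path3(p2, p3, r0, control_offset=0.5):
    """Path1 (P2->P3) plus Path2 (P3->R0), merged and reversed for driving from R0 to P2."""
    c = (p3[0] + control_offset * math.cos(p3[2]),
         p3[1] + control_offset * math.sin(p3[2]))     # illustrative control point
    path1 = bezier1((p2[0], p2[1]), (p3[0], p3[1]))
    path2 = bezier2((p3[0], p3[1]), c, (r0[0], r0[1]))
    return list(reversed(path1 + path2))
```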

And S8, controlling the robot to move along the planned path3 using a Pure Pursuit path tracking algorithm, so that the current coordinate R0(x0, y0, yaw0) of the robot coincides with the target docking point P2(x2, y2, yaw2).
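
Step S8 can be realised with a standard Pure Pursuit controller. The fragment below is a generic sketch for a differential-drive base, with an illustrative lookahead distance and linear speed rather than values taken from the patent.

```python
import math

LOOKAHEAD = 0.3   # m, illustrative
SPEED = 0.2       # m/s, illustrative

def pure_pursuit_cmd(pose, path):
    """pose = (x, y, yaw) in the map frame, path = list of (x, y); returns (v, w)."""
    x, y, yaw = pose
    # Choose the first waypoint at least LOOKAHEAD away; fall back to the final point.
    goal = path[-1]
    for px, py in path:
        if math.hypot(px - x, py - y) >= LOOKAHEAD:
            goal = (px, py)
            break
    # Express the goal point in the robot frame.
    dx, dy = goal[0] - x, goal[1] - y
    lx = math.cos(-yaw) * dx - math.sin(-yaw) * dy
    ly = math.sin(-yaw) * dx + math.cos(-yaw) * dy
    dist_sq = lx * lx + ly * ly
    if dist_sq < 1e-6:
        return 0.0, 0.0                    # already at the goal point
    curvature = 2.0 * ly / dist_sq         # Pure Pursuit arc curvature
    return SPEED, SPEED * curvature
```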

Specifically, when the robot is controlled to move along the planned path3, the coordinate Q1(x1, y1, yaw1) of the re-recognized two-dimensional code relative to the robot is compared with the center coordinate Q(x0, y0, yaw0) of the first-recognized two-dimensional code; the difference between the two is calculated and it is judged whether it exceeds the threshold. If so, step S4 is repeated; if not, the robot continues to move along path3.

The method of calculating the difference between the coordinate Q1(x1, y1, yaw1) of the re-recognized two-dimensional code relative to the robot and the center coordinate Q(x0, y0, yaw0) of the first-recognized two-dimensional code is:

Dx = x0 - x1, Dy = y0 - y1, Yaw = yaw0 - yaw1; if Dx < 0.01 and Dy < 0.01 and Yaw < 1, the robot continues to move along path3; otherwise step S4 is repeated.
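
Written directly from the text, the re-localisation check looks as follows; interpreting the thresholds as 0.01 m and 1° and taking absolute differences are assumptions, since the text states only the raw values.

```python
def should_replan(q, q1):
    """q = first-recognised (x0, y0, yaw0), q1 = re-recognised (x1, y1, yaw1)."""
    dx = abs(q[0] - q1[0])
    dy = abs(q[1] - q1[1])
    dyaw = abs(q[2] - q1[2])
    # True -> repeat from step S4; False -> keep following path3.
    return not (dx < 0.01 and dy < 0.01 and dyaw < 1)
```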

Between step S2 and step S3, the method further comprises calibrating the robot angle through an IMU sensor built into the robot, so that the difference between the yaw0 value of the current coordinate R0 of the robot and the yaw1 value of the preset docking point P1 is smaller than or equal to a preset threshold.

Specifically, the preset threshold is 1 °.

The multi-sensor-based equipment docking method can also be applied to larger docking equipment, covering docking schemes such as charging of robots and unmanned sweepers, warehousing of unmanned sweepers, garbage dumping by unmanned sweepers, water refilling of unmanned sweepers, and docking and charging of unmanned trucks.

The above detailed description describes possible embodiments of the present invention. These embodiments are not intended to limit the scope of the present invention, and all equivalent implementations or modifications that do not depart from the scope of the present invention are intended to be included within it.
