Robot control method, control system and chip based on vision and laser fusion

Document No.: 740483    Publication date: 2021-04-23

Note: this technology, "Robot control method, control system and chip based on vision and laser fusion" (视觉与激光融合的机器人控制方法、控制系统及芯片), was designed and created by Xiao Gangjun (肖刚军) on 2020-12-22. Abstract: the invention discloses a vision-and-laser-fused robot control method, control system and chip, belonging to the technical field of intelligent cleaning robots. The robot has a sensing device, and the method comprises: determining the home position of the current cleaning robot before a cleaning operation; photographing the competition field rotationally with the sensing device, determining field reference lines from the acquired sensing data, and determining the field type from the shape information of the reference lines; determining the dirt-prone areas of the competition field from the determined field type, based on a preset correspondence between field types and dirt-prone areas; and, when a cleaning instruction is received, preferentially controlling the cleaning robot to clean the dirt-prone areas. The competition field can thus be cleaned intelligently and efficiently.

1. A vision-and-laser-fused robot control method, characterized in that the method is used for a cleaning robot having a sensing device comprising a camera and a lidar, the method comprising:

determining the home position of the current cleaning robot before a cleaning operation;

photographing the competition field rotationally with the sensing device, determining field reference lines from the acquired sensing data, and determining the field type from the shape information of the reference lines;

determining the dirt-prone areas of the competition field from the determined field type, based on a preset correspondence between field types and dirt-prone areas; and

when a cleaning instruction is received, preferentially controlling the cleaning robot to clean the dirt-prone areas.

2. The method of claim 1, wherein the field type is any one of a badminton court, a basketball court and a tennis court.

3. The method of claim 1, wherein the cleaning instruction comprises a first cleaning instruction for instructing the cleaning robot to complete cleaning within a first time period and a second cleaning instruction for instructing the cleaning robot to complete cleaning within a second time period, wherein the duration of the first time period is shorter than that of the second time period.

4. The method of claim 3, wherein, when the cleaning instruction is the first cleaning instruction, the cleaning robot is controlled to return to the home position after cleaning the dirt-prone areas.

5. The method of claim 4, wherein, when the cleaning instruction is the second cleaning instruction, the cleaning robot is controlled to clean the whole field.

6. The method of claim 1, wherein, after preferentially controlling the cleaning robot to clean the dirt-prone areas, the method further comprises:

controlling the cleaning robot to acquire image data of the field;

determining light-reflection information of the ground from the image data;

determining water-stain positions on the ground from the reflection information; and

controlling the cleaning robot to clean the water-stain positions.

7. A vision-and-laser-fused robot control system, characterized in that the robot is a cleaning robot, the cleaning robot comprises a sensing device, the sensing device comprises a camera and a lidar, and the cleaning robot further comprises:

a determination module for determining the home position of the current cleaning robot before a cleaning operation;

a sensing module for photographing the competition field rotationally with the sensing device, determining field reference lines from the acquired sensing data, and determining the field type from the shape information of the reference lines;

a correspondence module for determining the dirt-prone areas of the competition field from the determined field type, based on a preset correspondence between field types and dirt-prone areas; and

a control module for preferentially controlling the cleaning robot to clean the dirt-prone areas when a cleaning instruction is received.

8. The system of claim 7, wherein the control module is further configured to:

controlling the cleaning robot to acquire image data of the field;

determine light-reflection information of the ground from the image data;

determine water-stain positions on the ground from the reflection information; and

control the cleaning robot to clean the water-stain positions.

9. A chip, characterized in that a computer program is stored therein, the computer program being loaded and executed by a processor to implement the vision-and-laser-fused robot control method according to any one of claims 1 to 6.

Technical Field

The invention relates to the technical field of intelligent robots, and in particular to a vision-and-laser-fused robot control method, control system and chip.

Background

With the development of intelligent technology, cleaning robots have found more and more uses: they can clean windows, remove dust, mop the floor and pick up hair, and have become an indispensable part of the smart home, growing ever more intelligent in recent years. Existing cleaning robots are well suited to the home environment: after entering a new environment, a robot can automatically perceive it and calculate the working area and working time, so that charging and operation are controlled intelligently.

However, such robots are applied far less widely in working environments other than the home.

Disclosure of Invention

The invention provides a vision-and-laser-fused robot control method, control system and chip. The specific technical scheme is as follows:

A vision-and-laser-fused robot control method is used for a cleaning robot having a sensing device comprising a camera and a lidar. The method comprises: determining the home position of the current cleaning robot before a cleaning operation; photographing the competition field rotationally with the sensing device, determining field reference lines from the acquired sensing data, and determining the field type from the shape information of the reference lines; determining the dirt-prone areas of the competition field from the determined field type, based on a preset correspondence between field types and dirt-prone areas; and, when a cleaning instruction is received, preferentially controlling the cleaning robot to clean the dirt-prone areas.

Further, the court type is any one of a badminton court, a basketball court and a tennis court.

Further, the cleaning instruction comprises a first cleaning instruction instructing the cleaning robot to complete cleaning within a first time period, and a second cleaning instruction instructing the cleaning robot to complete cleaning within a second time period, wherein the duration of the first time period is shorter than that of the second time period.

Further, when the cleaning instruction is the first cleaning instruction, the cleaning robot is controlled to return to the home position after cleaning the dirt-prone areas.

Further, when the cleaning instruction is the second cleaning instruction, the cleaning robot is controlled to clean the whole field.

Further, after preferentially controlling the cleaning robot to clean the dirt-prone areas, the method further comprises: controlling the cleaning robot to acquire image data of the field; determining light-reflection information of the ground from the image data; determining water-stain positions on the ground from the reflection information; and controlling the cleaning robot to clean the water-stain positions.

The invention also provides a vision-and-laser-fused robot control system. The robot is a cleaning robot comprising a sensing device with a camera and a lidar, and further comprising: a determination module for determining the home position of the current cleaning robot before a cleaning operation; a sensing module for photographing the competition field rotationally with the sensing device, determining field reference lines from the acquired sensing data, and determining the field type from the shape information of the reference lines; a correspondence module for determining the dirt-prone areas of the competition field from the determined field type, based on a preset correspondence between field types and dirt-prone areas; and a control module for preferentially controlling the cleaning robot to clean the dirt-prone areas when a cleaning instruction is received.

Further, the control module is further configured to: control the cleaning robot to acquire image data of the field; determine light-reflection information of the ground from the image data; determine water-stain positions on the ground from the reflection information; and control the cleaning robot to clean the water-stain positions.

A chip stores a computer program that is loaded and executed by a processor to implement the vision-and-laser-fused robot control method described above.

The beneficial effects of the invention include but are not limited to the following: when the cleaning robot works on a competition field, its current home position may be determined before the cleaning operation; the competition field is photographed rotationally with the sensing device, field reference lines are determined from the acquired sensing data, and the field type is determined from the shape information of the reference lines; the dirt-prone areas of the competition field are determined from the field type, based on a preset correspondence between field types and dirt-prone areas; and, when a cleaning instruction is received, the cleaning robot is preferentially controlled to clean the dirt-prone areas. With the technology provided by the invention, different operation modes can be selected for different types of competition field, cleaning of the field can be completed quickly during breaks in the competition, and both cleaning efficiency and cleaning effect are improved.

Drawings

Fig. 1 is a schematic flow chart illustrating a vision and laser-integrated robot control method according to an embodiment of the present invention;

fig. 2 is a block diagram illustrating a structure of a cleaning apparatus for a playing field according to an embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.

Reference herein to "a plurality" means two or more. "And/or" describes the association between associated objects and indicates three possible relationships; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.

Existing stadium cleaning is usually performed manually. The cleaning must typically be fast and accurate, which places very high demands on the cleaner; for some very important events, a careless mistake in cleaning can directly decide the final result of the competition. The inventors noticed that a cleaning robot can both remove foreign material and mop the floor, and therefore conceived of applying a cleaning robot to stadium cleaning.

Fig. 1 is a flowchart of a vision-and-laser-fused robot control method according to an exemplary embodiment of the present invention. Referring to fig. 1, the method is applied to a cleaning robot having a sensing device comprising a camera and a lidar, and includes the following steps:

In step 201, the home position of the current cleaning robot is determined before the cleaning operation. In the special environment of a venue, the cleaning robot must first accurately determine its own position, and its parking position must not interfere with the athletes' performance. For example, if the venue is a basketball court, the cleaning robot should preferably stay at least 5 meters away from the court. For self-positioning, existing mature positioning methods, such as GPS, can be applied.
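The clearance check described above can be sketched as follows. This is a minimal illustration only: the 5-meter figure is the basketball-court example from the text, while the helper name and the point-to-edge distance model are hypothetical simplifications (a real system would measure distance to the full court polygon).

```python
import math

MIN_STANDBY_DISTANCE_M = 5.0  # example clearance for a basketball court

def standby_ok(robot_xy, nearest_court_point_xy):
    """Return True if the parked robot keeps the suggested clearance
    from the nearest point of the court boundary."""
    dx = robot_xy[0] - nearest_court_point_xy[0]
    dy = robot_xy[1] - nearest_court_point_xy[1]
    return math.hypot(dx, dy) >= MIN_STANDBY_DISTANCE_M

# A robot parked 5 m from the boundary passes; one parked ~2.8 m away does not.
ok_far = standby_ok((0.0, 0.0), (3.0, 4.0))
ok_near = standby_ok((0.0, 0.0), (2.0, 2.0))
```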

Step 202: the competition field is photographed rotationally with the sensing device, field reference lines are determined from the acquired sensing data, and the field type is determined from the shape information of the reference lines. The field is captured over a full circle by an image acquisition device, a radar, or the like. A competition field generally has reference lines, and the venue type can be determined from the boundary lines and other markings: for example, a basketball court has free-throw lines, boundary lines and a three-point line, and the venue type can be identified quickly and uniquely from these specific lines. To rotate the sensing device, the robot itself may rotate, or the camera may be mounted on a rotatable structure. The lidar (laser radar) is a radar system that emits a laser beam to detect the position, speed and other characteristics of a target. Its working principle is to transmit a detection signal (a laser beam) toward the target, compare the signal reflected back from the target (the target echo) with the transmitted signal, and, after appropriate processing, obtain information about the target such as its distance, orientation, height, speed, attitude and even shape. For line identification, a Hough detection algorithm may be used, but the embodiment is not limited to this approach; any algorithm capable of line-segment identification falls within the scope of the invention.
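The Hough voting idea mentioned above can be sketched in a few lines. This is a toy accumulator over (theta, rho) bins for a handful of pixel coordinates, not a production detector (real systems use an optimized implementation such as OpenCV's `HoughLinesP`); the helper name and bin resolution are assumptions.

```python
import math
from collections import Counter

def hough_peak(points, angle_steps=180):
    """Vote each point into (theta_index, rho) bins and return the
    best-supported line in (theta, rho) normal form, plus its vote count."""
    votes = Counter()
    for x, y in points:
        for i in range(angle_steps):
            theta = math.pi * i / angle_steps
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(i, rho)] += 1
    (i, rho), count = votes.most_common(1)[0]
    return math.pi * i / angle_steps, rho, count

# Pixels lying on the horizontal line y = 3, e.g. part of a court boundary line.
line_pixels = [(x, 3) for x in range(10)]
theta, rho, support = hough_peak(line_pixels)
```

All ten pixels vote into one bin near theta = 90 degrees with rho = 3, so the peak recovers the line's orientation and offset.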

Step 203: the dirt-prone areas of the competition field are determined from the determined field type, based on a preset correspondence between field types and dirt-prone areas. Those skilled in the art will understand that different venues have different dirt-prone areas: on a basketball court, sweat stains usually concentrate inside the three-point line, while on a badminton court they usually concentrate in front of the net. The dirt-prone areas can therefore be determined according to the particular venue, and specific dirt-prone areas can be detected with a preset model, which can be obtained by collecting images of actual competition venues, annotating the dirt-prone areas, and training on them.
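The preset correspondence between field type and dirt-prone areas can be expressed as a simple lookup table. The zone names below are illustrative labels taken from the examples in the text; a deployment would map each label to polygons in field coordinates.

```python
# Hypothetical preset correspondence: field type -> named dirt-prone zones.
DIRT_PRONE_AREAS = {
    "basketball": ["inside three-point line", "free-throw lane"],
    "badminton": ["front-of-net zone"],
    "tennis": ["baseline zone"],
}

def dirt_prone_areas(field_type):
    """Return the preset dirt-prone zones for a recognized field type,
    or an empty list for an unrecognized one."""
    return DIRT_PRONE_AREAS.get(field_type, [])
```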

Step 204: when a cleaning instruction is received, the cleaning robot is preferentially controlled to clean the dirt-prone areas. On receiving a cleaning instruction, for example by remote control, the cleaning robot preferentially cleans the dirt-prone areas of the field. This suits competitions with short breaks; if time remains after the dirt-prone areas have been handled, a more thorough cleaning can also be performed.

According to an embodiment of the present invention, the court type may be any one of a badminton court, a basketball court, and a tennis court.

The cleaning instruction includes a first cleaning instruction, instructing the cleaning robot to complete cleaning within a first time period, and a second cleaning instruction, instructing it to complete cleaning within a second time period, where the first time period is shorter than the second. Because the break time differs between match types, the cleaning robot supports at least two operating modes: a quick mode, corresponding to the first cleaning instruction, and a detailed mode, corresponding to the second cleaning instruction.

When the cleaning instruction is the first cleaning instruction, i.e. the quick mode, the cleaning robot cleans the dirt-prone areas and then returns directly to its home position.

When the cleaning instruction is the second cleaning instruction, i.e. the detailed mode, the cleaning robot cleans the whole field.
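The two-mode dispatch above can be sketched as a small planner. The function and instruction labels are hypothetical; the behavior follows the text: the quick mode covers only the dirt-prone areas and then returns home, while the detailed mode covers the whole field.

```python
def plan_cleaning(instruction, dirt_prone_zones, remaining_zones):
    """Return the ordered task list for one cleaning instruction.
    'first' = quick mode, 'second' = detailed mode (labels assumed)."""
    if instruction == "first":
        # Quick mode: dirt-prone areas only, then return to the home position.
        return list(dirt_prone_zones) + ["return_home"]
    if instruction == "second":
        # Detailed mode: dirt-prone areas first, then the rest of the field.
        return list(dirt_prone_zones) + list(remaining_zones)
    raise ValueError(f"unknown cleaning instruction: {instruction!r}")
```

In both modes the dirt-prone areas come first, matching the priority rule of step 204.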

The invention further discloses a vision-and-laser-fused robot control method according to another embodiment. The method comprises the following steps:

Step 401: the home position of the current cleaning robot is determined before the cleaning operation. As in step 201, the robot must first accurately determine its own position in the venue, and its parking position must not interfere with the athletes' performance; for a basketball court it should preferably stay at least 5 meters away from the court, and existing mature positioning methods such as GPS can be applied.

Step 402: the competition field is photographed rotationally with the sensing device, field reference lines are determined from the acquired sensing data, and the field type is determined from the shape information of the reference lines. As in step 202, the field is captured over a full circle by an image acquisition device, a radar, or the like; the venue type can be identified quickly and uniquely from its specific reference lines (for example, the free-throw lines, boundary lines and three-point line of a basketball court); and a Hough detection algorithm may be used for line identification, though any line-segment identification algorithm falls within the scope of the invention.

Step 403: the dirt-prone areas of the competition field are determined from the determined field type, based on the preset correspondence between field types and dirt-prone areas. As in step 203, the dirt-prone areas differ between venues (inside the three-point line of a basketball court, in front of the net of a badminton court) and can be detected with a preset model trained on annotated images of actual competition venues.

Step 404: when a cleaning instruction is received, the cleaning robot is preferentially controlled to clean the dirt-prone areas. As in step 204, this suits competitions with short breaks, and a more thorough cleaning can follow if time allows.

Step 405: the cleaning robot is controlled to acquire image data of the field. Through its image acquisition device, the cleaning robot can acquire image data in real time at a preset frequency.

Step 406: the light-reflection information of the ground is determined from the image data. Where water stains or sweat stains are present on the ground, the image captures their reflections.

Step 407: the positions of water or sweat stains on the ground are determined from the light-reflection information.

Step 408: the cleaning robot is controlled to clean the water-stain positions, wiping off the water stains and keeping the floor dry.
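Steps 405 to 408 can be sketched as a simple brightness threshold over a grayscale frame: specular reflections from wet patches appear as near-saturated pixels. The helper name, the 0-255 grayscale assumption and the threshold value are illustrative; a real system would combine this with the lidar data and more robust cues.

```python
def specular_pixels(gray_frame, threshold=240):
    """Return (row, col) positions whose brightness meets a reflection
    threshold, as water-stain candidates (assumes 0-255 grayscale values)."""
    return [(r, c)
            for r, row in enumerate(gray_frame)
            for c, value in enumerate(row)
            if value >= threshold]

# Tiny synthetic frame: matte floor (~40) with one bright specular patch.
frame = [
    [40, 42, 41, 43],
    [40, 250, 252, 43],  # candidate water stain
    [39, 41, 40, 42],
]
stain_positions = specular_pixels(frame)
```

The robot would then be routed to each returned position for wiping.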

Referring to fig. 2, an exemplary embodiment of the present invention provides a vision-and-laser-fused robot control system. The robot is a cleaning robot comprising a sensing device with a camera and a lidar, and further comprising:

a determination module 901 for determining the home position of the current cleaning robot before the cleaning operation. As in step 201, the robot must first accurately determine its own position, its parking position must not interfere with the athletes' performance (for a basketball court, preferably at least 5 meters away), and existing mature positioning methods such as GPS can be applied.

a sensing module 902 for photographing the competition field rotationally with the sensing device, determining field reference lines from the acquired sensing data, and determining the field type from the shape information of the reference lines. As in step 202, the field is captured over a full circle, the venue type is identified quickly and uniquely from its specific reference lines (such as the free-throw lines, boundary lines and three-point line of a basketball court), and a Hough detection algorithm, or any line-segment identification algorithm, may be used.

a correspondence module 903 for determining the dirt-prone areas of the competition field from the determined field type, based on the preset correspondence between field types and dirt-prone areas. As in step 203, the dirt-prone areas differ between venues and can be detected with a preset model trained on annotated images of actual competition venues.

a control module 904 for preferentially controlling the cleaning robot to clean the dirt-prone areas when a cleaning instruction is received. As in step 204, this suits competitions with short breaks, and a more thorough cleaning can follow if time allows.

The control module 904 is further configured to: control the cleaning robot to acquire image data of the field; determine light-reflection information of the ground from the image data; determine water-stain positions on the ground from the reflection information; and control the cleaning robot to clean the water-stain positions.

The cleaning robot provided by one embodiment of the invention can be used to implement the vision-and-laser-fused robot control method provided by the above embodiments, and may be the cleaning robot described in the embodiment corresponding to fig. 1. Specifically:

The cleaning robot includes a central processing unit (CPU), a system memory comprising a random access memory (RAM) and a read-only memory (ROM), and a system bus connecting the system memory to the CPU. A basic input/output system (I/O system) transfers information between the various devices, and a mass storage device stores the operating system, application programs and other program modules; it may include a computer-readable medium such as a hard disk or a CD-ROM drive. The data acquired by the camera and the lidar are passed together to the central processing unit for data-fusion processing, so that the robot can accurately judge the current environment. The relevant data-fusion techniques belong to the prior art; see, for example, the invention patent applications with Chinese publication No. CN111998772A ("a pixel-level target positioning method based on laser and monocular vision fusion"), No. CN111947647A ("a precision positioning method of a robot with vision and lidar fusion"), and No. CN111928862A ("a method for constructing a semantic map on line by using lidar and vision sensor fusion").
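As a minimal stand-in for the camera/lidar fusion step described above, two noisy estimates of the same quantity can be combined by inverse-variance weighting, so the more precise sensor (typically the lidar for range) dominates. This is a one-dimensional textbook sketch under assumed noise values, not the method of the cited patent applications, which fuse full poses and maps.

```python
def fuse_estimates(cam_est, cam_var, lidar_est, lidar_var):
    """Inverse-variance weighted fusion of two scalar estimates.
    Returns the fused estimate and its (reduced) variance."""
    w_cam = 1.0 / cam_var
    w_lidar = 1.0 / lidar_var
    fused = (w_cam * cam_est + w_lidar * lidar_est) / (w_cam + w_lidar)
    fused_var = 1.0 / (w_cam + w_lidar)
    return fused, fused_var

# Camera estimates 5.2 m with variance 0.04; lidar 5.0 m with variance 0.01.
pos, var = fuse_estimates(5.2, 0.04, 5.0, 0.01)
```

The fused variance is always smaller than either input variance, which is the point of combining the two sensors.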

Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules or other data: RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, as well as CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media are not limited to the foregoing. The system memory and mass storage device described above may be collectively referred to as memory.

The memory further stores one or more programs configured to be executed by one or more processors, the one or more programs containing instructions for implementing the steps of the vision-and-laser-fused robot control method described above.

An embodiment of the present invention further provides a chip in which at least one instruction is stored, the at least one instruction being loaded and executed by a processor to implement the vision-and-laser-fused robot control method provided by the above embodiments.

Optionally, the chip may include: read-only memory (ROM), random access memory (RAM), a solid-state drive (SSD), or an optical disc. The random access memory may include resistive random access memory (ReRAM) and dynamic random access memory (DRAM).

The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.

It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented in hardware, or by a program instructing the relevant hardware; the program may be stored in a chip, and the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.

The invention is not to be considered as limited to the particular embodiments shown and described, but is to be understood that various modifications, equivalents, improvements and the like can be made without departing from the spirit and scope of the invention.
