Multi-sensor map construction method, device and chip based on visual SLAM

Document No. 1462899, published 2020-02-21

Note: this technology, "Multi-sensor map construction method, device and chip based on visual SLAM", was created by 肖刚军 on 2019-10-29. Its main content: the invention relates to a method, a device and a chip for multi-sensor map construction based on visual SLAM. The distance to a target object is obtained from the emission time of a laser radar sensor's initial detection signal and the receiving time of the echo detection signal; the orientation of the target object is obtained from the initial detection signal and the echo detection signal; first point cloud information is acquired from the distance to the target object and the orientation of the target object; the first point cloud information is matched against initial point cloud information on the basis of that distance and orientation; and the first point cloud information is fused with the initial point cloud information according to the matching result to construct a point cloud map. Fusing the newly acquired point cloud information with the existing point cloud information yields a high-precision map, so that the robot can perform autonomous obstacle avoidance and route planning.

1. A method for multi-sensor map construction based on visual SLAM, characterized in that it comprises the following steps:

controlling a laser radar sensor to emit an initial detection signal toward a target object, and recording the emission time of the initial detection signal;

receiving, in real time, an echo detection signal reflected from the target object, and recording the receiving time of the echo detection signal;

obtaining the distance to the target object according to the initial detection signal emission time and the echo detection signal receiving time;

obtaining the orientation of the target object according to the initial detection signal and the echo detection signal;

acquiring first point cloud information according to the distance to the target object and the orientation of the target object;

and performing information matching on the first point cloud information and initial point cloud information based on the distance to the target object and the orientation of the target object, and fusing the first point cloud information with the initial point cloud information according to the matching result to construct a point cloud map.

2. The method for multi-sensor map construction based on visual SLAM of claim 1, wherein the step of acquiring first point cloud information according to the distance to the target object and the orientation of the target object is preceded by the steps of:

controlling a vision acquisition sensor to acquire initial image information of the current position and first image information of the target object;

and extracting image features from the initial image information and the first image information, and obtaining the distance to the target object and the orientation of the target object from the image features.

3. The method for multi-sensor map construction based on visual SLAM of claim 1, wherein the step of acquiring first point cloud information according to the distance to the target object and the orientation of the target object is preceded by the steps of:

controlling a time-of-flight ranging sensor to send an initial laser signal to the target object, and recording the emission time of the initial laser signal;

receiving, in real time, a reflected laser signal from the target object, and recording the receiving time of the reflected laser signal;

obtaining the distance to the target object according to the initial laser signal emission time and the reflected laser signal receiving time;

and obtaining the orientation of the target object according to the initial laser signal and the reflected laser signal.

4. The method for multi-sensor map construction based on visual SLAM of claim 1, wherein the step of acquiring first point cloud information according to the distance to the target object and the orientation of the target object is preceded by the steps of:

controlling an ultrasonic sensor to send an initial ultrasonic signal to the target object, and recording the emission time of the initial ultrasonic signal;

receiving, in real time, a reflected ultrasonic signal from the target object, and recording the receiving time of the reflected ultrasonic signal;

obtaining the distance to the target object according to the initial ultrasonic signal emission time and the reflected ultrasonic signal receiving time;

and obtaining the orientation of the target object according to the initial ultrasonic signal and the reflected ultrasonic signal.

5. The method for multi-sensor map construction based on visual SLAM of claim 1, wherein the step of acquiring first point cloud information according to the distance to the target object and the orientation of the target object is preceded by the steps of:

controlling a triangulation-ranging laser radar to send an initial light spot signal to the target object, and recording the emission time of the initial light spot signal;

receiving, in real time, a first light spot signal reflected from the target object, and recording the receiving time of the first light spot signal;

obtaining the distance to the target object according to the initial light spot signal emission time and the first light spot signal receiving time;

and obtaining the orientation of the target object according to the initial light spot signal and the first light spot signal.

6. The method for multi-sensor map construction based on visual SLAM of claim 1, wherein the step of acquiring first point cloud information according to the distance to the target object and the orientation of the target object is followed by the step of:

preprocessing the acquired first point cloud information.

7. The method for multi-sensor map construction based on visual SLAM of claim 1, wherein the step of performing information matching on the first point cloud information and the initial point cloud information based on the distance to the target object and the orientation of the target object and fusing them according to the matching result to construct a point cloud map is followed by the steps of:

presetting a walking route according to the specific position of the target object on the point cloud map;

and controlling the robot to move to the target object along the preset walking route.

8. The method for multi-sensor map construction based on visual SLAM of claim 7, wherein the step of presetting a walking route according to the specific position of the target object on the point cloud map is followed by the steps of:

receiving a walking obstacle signal;

and adjusting the walking route.

9. A multi-sensor map construction device, characterized in that it comprises: a memory, a processor, and a visual SLAM-based multi-sensor mapping program stored on the memory and executable on the processor, wherein the visual SLAM-based multi-sensor mapping program, when executed by the processor, implements the steps of the method for multi-sensor map construction based on visual SLAM of any one of claims 1 to 8.

10. A chip having stored thereon a visual SLAM-based multi-sensor mapping program which, when executed by a processor, implements the steps of the method for multi-sensor map construction based on visual SLAM of any one of claims 1 to 8.

Technical Field

The invention relates to the technical field of intelligent robots, in particular to a method, a device and a chip for constructing a multi-sensor map based on visual SLAM.

Background

Although a map constructed from feature points generally has a small data volume and is convenient to store, extract and use, it cannot reflect some necessary information about the environment, such as the positions of obstacles. Such a map can therefore basically be used only for localization and does not allow a robot to perform autonomous obstacle avoidance and path planning. In addition, most existing robots use either a visual sensor or a laser sensor alone when constructing a map; the data obtained are limited, and the resulting map precision is low.

Disclosure of Invention

The invention provides a method, a device and a chip for multi-sensor map construction based on visual SLAM (simultaneous localization and mapping), with the main aims of improving the accuracy of the maps a robot constructs, solving the robot localization problem, and enabling autonomous obstacle avoidance and route planning.

In order to achieve the above object, the present invention provides a method for multi-sensor map construction based on visual SLAM, including: controlling a laser radar sensor to emit an initial detection signal toward a target object, and recording the emission time of the initial detection signal; receiving, in real time, an echo detection signal reflected from the target object, and recording its receiving time; obtaining the distance to the target object according to the initial detection signal emission time and the echo detection signal receiving time; obtaining the orientation of the target object according to the initial detection signal and the echo detection signal; acquiring first point cloud information according to the distance to the target object and the orientation of the target object; and performing information matching on the first point cloud information and initial point cloud information based on that distance and orientation, and fusing the two according to the matching result to construct a point cloud map.

Optionally, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: controlling a vision acquisition sensor to acquire initial image information of the current position and first image information of the target object; and extracting image features from the initial image information and the first image information, and obtaining the distance to the target object and the orientation of the target object from the image features.

Optionally, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: controlling a time-of-flight ranging sensor to send an initial laser signal to the target object, and recording the emission time of the initial laser signal; receiving, in real time, a reflected laser signal from the target object, and recording its receiving time; obtaining the distance to the target object according to the initial laser signal emission time and the reflected laser signal receiving time; and obtaining the orientation of the target object according to the initial laser signal and the reflected laser signal.

Optionally, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: controlling an ultrasonic sensor to send an initial ultrasonic signal to the target object, and recording the emission time of the initial ultrasonic signal; receiving, in real time, a reflected ultrasonic signal from the target object, and recording its receiving time; obtaining the distance to the target object according to the initial ultrasonic signal emission time and the reflected ultrasonic signal receiving time; and obtaining the orientation of the target object according to the initial ultrasonic signal and the reflected ultrasonic signal.

Optionally, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: controlling a triangulation-ranging laser radar to send an initial light spot signal to the target object, and recording the emission time of the initial light spot signal; receiving, in real time, a first light spot signal reflected from the target object, and recording its receiving time; obtaining the distance to the target object according to the initial light spot signal emission time and the first light spot signal receiving time; and obtaining the orientation of the target object according to the initial light spot signal and the first light spot signal.

Optionally, after the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: preprocessing the acquired first point cloud information.

Optionally, after the step of performing information matching on the first point cloud information and the initial point cloud information based on the distance to the target object and the orientation of the target object and fusing them according to the matching result to construct a point cloud map, the method further includes: presetting a walking route according to the specific position of the target object on the point cloud map; and controlling the robot to move to the target object along the preset walking route.

Optionally, after the step of presetting the walking route according to the specific position of the target object on the point cloud map, the method further includes: receiving a walking obstacle signal; and adjusting the walking route.

In addition, to achieve the above object, the present invention also provides a multi-sensor map building apparatus, including: a memory, a processor, and a visual SLAM-based multi-sensor mapping program stored on the memory and executable on the processor, the visual SLAM-based multi-sensor mapping program when executed by the processor implementing the steps of the method of visual SLAM-based multi-sensor mapping as described above.

In addition, to achieve the above object, the present application also proposes a chip having a visual SLAM-based multi-sensor mapping program stored thereon, which when executed by a processor implements the steps of the method for visual SLAM-based multi-sensor mapping as described above.

The invention provides a method, a device and a chip for multi-sensor map construction based on visual SLAM. A laser radar sensor is controlled to emit an initial detection signal toward a target object, and the emission time of the initial detection signal is recorded. An echo detection signal reflected from the target object, together with its receiving time, is received in real time. The distance to the target object is obtained from the initial detection signal emission time and the echo detection signal receiving time, and the orientation of the target object is obtained from the initial detection signal and the echo detection signal. First point cloud information is acquired from that distance and orientation, matched against the initial point cloud information on the basis of the same distance and orientation, and fused with it according to the matching result to construct a point cloud map. Because newly acquired point cloud information is fused with the existing point cloud information, the resulting map is highly precise, enabling the robot to perform autonomous obstacle avoidance and route planning.

Drawings

In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic structural diagram of a terminal in the hardware operating environment according to an embodiment of the present invention;

FIG. 2 is a schematic flow chart of a first embodiment of the multi-sensor map construction method of the present invention;

FIG. 3 is a schematic flow chart of a second embodiment of the multi-sensor map construction method of the present invention;

FIG. 4 is a schematic flow chart of a third embodiment of the multi-sensor map construction method of the present invention;

FIG. 5 is a schematic flow chart of a fourth embodiment of the multi-sensor map construction method of the present invention;

FIG. 6 is a schematic flow chart of a fifth embodiment of the multi-sensor map construction method of the present invention;

FIG. 7 is a schematic flow chart of a sixth embodiment of the multi-sensor map construction method of the present invention;

FIG. 8 is a schematic flow chart of a seventh embodiment of the multi-sensor map construction method of the present invention;

FIG. 9 is a schematic diagram of the triangulation ranging method of the present invention.

The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.

Detailed Description

It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

The main solution of the embodiments of the invention is as follows: the distance to a target is obtained; point cloud information derived from that distance is matched against the original point cloud information and fused with it. The resulting map is highly precise, so the robot can perform autonomous obstacle avoidance and route planning.

Although a map constructed from feature points in the prior art has a small data volume and is convenient to store and extract, it cannot reflect some necessary information about the environment, such as the positions of obstacles, so such a map can basically be used only for localization and cannot support autonomous obstacle avoidance and path planning by a robot.

The invention provides a solution: the distance to a target is acquired, and point cloud information derived from that distance is matched against the original point cloud information and fused with it, so that the robot can conveniently perform autonomous obstacle avoidance and route planning.

Fig. 1 is a schematic diagram of the hardware operating environment of a terminal according to an embodiment of the present invention.

As shown in fig. 1, the terminal may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard) or a remote controller, and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface. The memory 1005 may be a non-volatile memory (e.g., disk storage), and may optionally also be a storage device separate from the processor 1001.

Those skilled in the art will appreciate that the configuration of the terminal shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.

As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a multi-sensor map building program.

In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to invoke the multi-sensor mapping program stored in the memory 1005 and perform the steps of the method of visual SLAM-based multi-sensor mapping.

Further, the processor 1001 may invoke a network operation control application stored in the memory 1005 and also perform the steps of the method of visual SLAM-based multi-sensor mapping.

The invention provides a multi-sensor map construction method based on visual SLAM.

Referring to fig. 2, which is a flowchart of a first embodiment of the method for multi-sensor map construction based on visual SLAM according to the present invention.

This embodiment provides a method for multi-sensor map construction based on visual SLAM, which includes: step S100, controlling a laser radar sensor to emit an initial detection signal toward a target object, and recording the emission time of the initial detection signal; step S200, receiving, in real time, an echo detection signal reflected from the target object, and recording its receiving time; step S300, obtaining the distance to the target object according to the initial detection signal emission time and the echo detection signal receiving time; step S400, obtaining the orientation of the target object according to the initial detection signal and the echo detection signal; step S500, acquiring first point cloud information according to the distance to the target object and the orientation of the target object; and step S600, performing information matching on the first point cloud information and initial point cloud information based on that distance and orientation, and fusing the two according to the matching result to construct a point cloud map.

A single point cloud captures the unknown environment only at one moment, so it sees just a part of the environment the sensor is in. Turning point clouds into a map therefore requires the core steps of a SLAM algorithm. In the main framework of a complete SLAM and navigation system, the SLAM core steps are as follows. The first part is preprocessing: the raw lidar data are optimized, and problematic data are removed or filtered out. Next comes matching: the point cloud of the current local environment is located at its corresponding position on the map built so far; the quality of this step directly affects the accuracy of the map the SLAM constructs. The current lidar point cloud is matched and spliced into the original map; without this matching step the constructed map would be cluttered. Once this part is complete, map fusion is performed, i.e., the new round of measurement data is spliced into the original map.
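As a minimal illustration of this preprocess-match-fuse loop, the following Python sketch strings the three steps together. The scan-to-map matcher is deliberately left abstract (any matcher, e.g. ICP, could be plugged in), and the array shapes and range limits are assumptions for the example, not details from the patent:

```python
import numpy as np

def preprocess(scan):
    """Drop invalid returns from a raw (N, 2) array of (range, bearing) pairs.

    The 0.1 m / 12 m limits are illustrative, not from the patent.
    """
    r = scan[:, 0]
    return scan[(r > 0.1) & (r < 12.0)]

def to_points(scan, pose):
    """Project (range, bearing) readings into world-frame (x, y) points."""
    x, y, theta = pose
    angles = scan[:, 1] + theta
    return np.column_stack((x + scan[:, 0] * np.cos(angles),
                            y + scan[:, 0] * np.sin(angles)))

def slam_step(global_map, scan, pose_estimate, match):
    """One round of the preprocess -> match -> fuse loop described above.

    `match` stands in for any scan-to-map matcher (e.g. ICP) that refines
    the pose estimate against the map built so far.
    """
    scan = preprocess(scan)                           # remove problematic data
    pose = match(global_map, scan, pose_estimate)     # locate scan on the map
    new_points = to_points(scan, pose)                # splice into map frame
    return np.vstack((global_map, new_points)), pose  # map fusion
```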

A lidar sensor typically incorporates a receiver. The laser generates and emits a pulse of light that strikes the target, is reflected back, and is finally received by the receiver. The receiver accurately measures the travel time of the pulse from emission to reflection. Because the pulse travels at the speed of light, the receiver always receives the previous reflected pulse before the next pulse is emitted. The travel time can be converted into a distance measurement, and by combining the laser's height, the laser scanning angle, the laser position from GPS and the laser emission direction from the INS, the coordinates X, Y, Z of each ground spot can be accurately calculated. The pulse emission frequency can range from a few pulses per second to tens of thousands of pulses per second; for example, a system emitting ten thousand pulses per second records six hundred thousand points in one minute. In general, the ground spot spacing of a lidar sensor varies from 2 to 4 m.
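The travel-time-to-distance conversion and the projection of one reading into a point of the cloud can be sketched as follows; this is a planar simplification of the X, Y, Z computation described above, with illustrative values:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lidar_distance(t_emit, t_receive):
    """The pulse travels out and back, so d = c * (t_receive - t_emit) / 2."""
    return C * (t_receive - t_emit) / 2.0

def lidar_point(t_emit, t_receive, bearing, sensor_x=0.0, sensor_y=0.0):
    """One point of the cloud from travel time plus scan bearing (planar case)."""
    d = lidar_distance(t_emit, t_receive)
    return (sensor_x + d * math.cos(bearing), sensor_y + d * math.sin(bearing))

# An echo received about 66.7 ns after emission lies roughly 10 m away.
print(lidar_distance(0.0, 66.7e-9))  # ~10.0
```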

The working principle of a lidar is very similar to that of radar, with a laser as the signal source. Pulsed laser light emitted by the laser strikes trees, roads, bridges and buildings on the ground and scatters; part of the light wave is reflected back to the lidar's receiver, and the distance from the lidar to the target point is calculated according to the laser ranging principle. The pulsed laser continuously scans the target object to obtain data for all target points on the target, and after imaging processing an accurate three-dimensional image can be obtained.

This embodiment fuses the acquired point cloud information with the original point cloud information, so that the robot can perform autonomous obstacle avoidance and route planning.

Further, referring to fig. 3, a second embodiment of the present invention is proposed based on the first embodiment. In this embodiment, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: step S700, controlling a vision acquisition sensor to acquire initial image information of the current position and first image information of the target object; and step S800, extracting image features from the initial image information and the first image information, and obtaining the distance to the target object and the orientation of the target object from the image features.
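As a hedged illustration of steps S700 and S800, the sketch below extracts and matches image features with OpenCV's ORB. The patent does not name a feature type or library, so both are assumptions, as are the input file names:

```python
import cv2

# Hypothetical input files; any two overlapping grayscale views will do.
initial_image = cv2.imread("initial_position.png", cv2.IMREAD_GRAYSCALE)
first_image = cv2.imread("target_object.png", cv2.IMREAD_GRAYSCALE)

# Step S800: extract image features from both images (ORB assumed).
orb = cv2.ORB_create(nfeatures=500)
kp_init, desc_init = orb.detectAndCompute(initial_image, None)
kp_first, desc_first = orb.detectAndCompute(first_image, None)

# Match descriptors; a visual SLAM front end would then use the geometry of
# the matched pairs to recover the distance and orientation of the target.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc_init, desc_first), key=lambda m: m.distance)
print(f"{len(matches)} feature matches")
```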

This embodiment likewise fuses the acquired point cloud information with the original point cloud information, so that the robot can perform autonomous obstacle avoidance and route planning.

Further, referring to fig. 4, a third embodiment of the present invention is proposed based on the above embodiments. In this embodiment, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: step S900, controlling a time-of-flight ranging sensor to send an initial laser signal to the target object, and recording the emission time of the initial laser signal; step S1000, receiving, in real time, a reflected laser signal from the target object, and recording its receiving time; step S1100, obtaining the distance to the target object according to the initial laser signal emission time and the reflected laser signal receiving time; and step S1200, obtaining the orientation of the target object according to the initial laser signal and the reflected laser signal.

The time-of-flight ranging sensor is also called a ToF sensor. The ToF ranging method belongs to two-way ranging technology: it measures the distance between nodes mainly by using the flight time of a signal traveling back and forth between two asynchronous transceivers (or reflecting surfaces). With good signal level modulation or in non-line-of-sight environments, ranging based on RSSI (Received Signal Strength Indication) gives fairly good estimates; in line-of-sight environments, ToF-based ranging can make up for the shortcomings of RSSI-based ranging.

The ToF ranging method has two key constraints: first, the sending and receiving devices must remain synchronized; second, the receiving device must know the transmission time of the signal. To achieve clock synchronization, the ToF ranging method uses clock offsets to solve the synchronization problem. Intersil's latest ToF signal-processing IC, the ISL29501, is a typical ToF solution: it works under all lighting conditions and achieves the low power consumption needed for miniaturized and battery-powered applications, because Intersil's patented signal-processor technology uses phase-based ToF to cope with ambient light around the measured object.
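The patent gives no formulas for how the clock offset is handled. The sketch below shows the standard single-sided two-way ranging identity, under which a constant offset between the two clocks cancels out; it illustrates the general technique, not the ISL29501's phase-based method:

```python
C = 299_792_458.0  # propagation speed of the signal, m/s

def two_way_tof_distance(t1, t4, t2, t3):
    """Single-sided two-way ranging.

    t1/t4: send/receive timestamps on the initiator's clock;
    t2/t3: receive/reply timestamps on the responder's clock.
    A constant offset between the two clocks cancels in the difference:
        tof = ((t4 - t1) - (t3 - t2)) / 2
    """
    tof = ((t4 - t1) - (t3 - t2)) / 2.0
    return C * tof
```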

Further, referring to fig. 5, a fourth embodiment of the present invention is proposed based on the above embodiments. In this embodiment, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: step S1300, controlling an ultrasonic sensor to send an initial ultrasonic signal to the target object, and recording the emission time of the initial ultrasonic signal; step S1400, receiving, in real time, a reflected ultrasonic signal from the target object, and recording its receiving time; step S1500, obtaining the distance to the target object according to the initial ultrasonic signal emission time and the reflected ultrasonic signal receiving time; and step S1600, obtaining the orientation of the target object according to the initial ultrasonic signal and the reflected ultrasonic signal.

Ultrasonic sensors mainly use the direct-reflection detection mode: a target in front of the sensor reflects part of the emitted sound wave back to the sensor's receiver, allowing the sensor to detect the object. They can also use the opposed (through-beam) detection mode: a set of opposed ultrasonic sensors comprises a transmitter and a receiver that is kept continuously listening; a target between them blocks the receiver from receiving the emitted sound wave, causing the ultrasonic sensor to generate a switching signal.
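A minimal sketch of direct-reflection ultrasonic ranging follows; the speed of sound is taken as 343 m/s (air at about 20 °C), an assumption rather than a value from the text:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed)

def ultrasonic_distance(t_emit, t_receive):
    """Direct-reflection mode: the wave travels out and back, so halve it."""
    return SPEED_OF_SOUND * (t_receive - t_emit) / 2.0

# An echo received 11.66 ms after emission puts the target about 2 m away.
print(ultrasonic_distance(0.0, 0.01166))  # ~2.0
```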

Further, referring to fig. 6, a fifth embodiment of the present invention is proposed based on the above embodiments. In this embodiment, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: step S1700, controlling a triangulation-ranging laser radar to send an initial light spot signal to the target object, and recording the emission time of the initial light spot signal; step S1800, receiving, in real time, a first light spot signal reflected from the target object, and recording its receiving time; step S1900, obtaining the distance to the target object according to the initial light spot signal emission time and the first light spot signal receiving time; and step S2000, obtaining the orientation of the target object according to the initial light spot signal and the first light spot signal.

The laser emits a beam at a certain angle beta; the beam travels along the laser direction and is reflected by an object at distance d. The reflected light is received by a long, strip-shaped CMOS sensor, the Imager (which can be regarded as a strip camera), after imaging through a pinhole. Let f be the focal length, q the perpendicular distance of the object from the plane, and s the distance between the laser and the focal point. A virtual line drawn through the focal point parallel to the laser direction intersects the Imager at a position that is generally known in advance (fixed once beta is determined); the spot imaged on the Imager by the reflected laser lies at a distance X from that intersection.

From FIG. 9, the triangle formed by q, d and beta is similar to the triangle formed by X and f, so:

f / X = q / s, hence q = f·s / X;

also, since sin(beta) = q / d, we have d = q / sin(beta);

and finally: d = f·s / (X·sin(beta)). Since f, s and beta are known in advance, the only quantity to measure is X; measuring X therefore yields d, the distance of the object from the laser.

As can be seen from fig. 9, the shorter the distance d, the larger X becomes; the larger d, the smaller X. When the Imager measures X, it suffices to compute the center of the received light spot.
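The derivation above reduces to a one-line computation plus a spot-centroid estimate, sketched below with illustrative parameter values (f, s, beta and the pixel intensities are made up for the example):

```python
import math

def triangulation_distance(f, s, beta, x):
    """d = f*s / (x * sin(beta)), from the similar-triangle derivation above."""
    return (f * s) / (x * math.sin(beta))

def spot_center(pixel_intensities):
    """Sub-pixel spot position as the intensity-weighted centroid on the Imager."""
    total = sum(pixel_intensities)
    return sum(i * v for i, v in enumerate(pixel_intensities)) / total

# Illustrative values: f = 4 mm, s = 5 cm, beta = 60 degrees, X = 1 mm.
print(triangulation_distance(0.004, 0.05, math.radians(60), 0.001))  # ~0.23 m
```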

Further, referring to fig. 7, a sixth embodiment of the present invention is proposed based on the above embodiments. In this embodiment, after the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method includes: step S2100, preprocessing the acquired first point cloud information. Preprocessing optimizes the raw lidar data, removing or filtering out problematic data to obtain more accurate point cloud information.
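One plausible form of such preprocessing is a range gate followed by statistical outlier removal, sketched below; the thresholds are illustrative, since the patent only says that problematic data are removed or filtered:

```python
import numpy as np

def preprocess_point_cloud(points, r_min=0.15, r_max=12.0, k=5, std_ratio=2.0):
    """Range-gate obviously bad returns, then drop statistical outliers.

    `points` is an (N, 2) or (N, 3) array; all thresholds are illustrative.
    """
    r = np.linalg.norm(points, axis=1)
    points = points[(r > r_min) & (r < r_max)]
    # Statistical outlier removal: mean distance to the k nearest neighbours
    # (brute force, O(N^2) - fine for a sketch, not for large clouds).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    keep = knn_mean < knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]
```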

Further, referring to fig. 8, a seventh embodiment of the present invention is proposed based on the above embodiments. In this embodiment, after the step of performing information matching on the first point cloud information and the initial point cloud information based on the distance to the target object and the orientation of the target object and fusing them according to the matching result to construct the point cloud map, the method includes: step S2200, presetting a walking route according to the specific position of the target object on the point cloud map; and step S2300, controlling the robot to move to the target object along the preset walking route.

After the point cloud information is fused with the original point cloud information to build the map, a walking route is preset and planned on the map according to the robot's current position and the target position, i.e., the point cloud data along the current walking route are looked up on the established map to find the corresponding position.

This embodiment plans a global path for the robot: it determines the robot's position coordinates in the network map in real time from the set target location and the robot's acquired geographic position information, and plans and determines the urban road navigation path by which the robot reaches the target location in the network map.
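As a stand-in for whatever planner the navigation module actually uses, the sketch below runs a breadth-first search over an occupancy grid to preset a walking route; the grid representation and 4-connectivity are assumptions:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Shortest walking route on an occupancy grid (0 = free, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None if the
    goal is unreachable. Plain breadth-first search, 4-connected.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None

# A 3x3 map with one central obstacle: the route detours around the blocked cell.
print(plan_route([[0, 0, 0], [0, 1, 0], [0, 0, 0]], (0, 0), (2, 2)))
```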

The step of presetting a walking route according to the specific position of the target object on the point cloud map is further followed by: receiving a walking obstacle signal; and adjusting the walking route.

To achieve autonomous obstacle avoidance and path planning, the robot only needs to recognize the obstacle and adjust the preset walking route; this increases the robot's reaction efficiency, strengthens its avoidance capability, and improves the user experience.

Planning the robot's global path specifically involves: flattening the regional three-dimensional scene map produced by the three-dimensional scene reconstruction module into a corresponding regional two-dimensional scene map that presents the two-dimensional plane occupancy of environmental objects in the area where the robot is located; determining the robot's position state information in that regional two-dimensional scene map; and establishing, from the robot's position coordinates in the network map, a correspondence between those coordinates and the regional two-dimensional scene map. Local obstacle-avoidance path optimization is then performed on the robot's urban road navigation path to the target location in the network map, according to the two-dimensional plane occupancy of environmental objects in the robot's area as given by the two-dimensional scene map.

The invention also provides a multi-sensor map construction device, which comprises: a memory, a processor, and a visual SLAM-based multi-sensor mapping program stored on the memory and executable on the processor, the visual SLAM-based multi-sensor mapping program when executed by the processor implementing the steps of the method of visual SLAM-based multi-sensor mapping as described above.

The present application also provides a chip having a visual SLAM-based multi-sensor mapping program stored thereon, where the visual SLAM-based multi-sensor mapping program, when executed by a processor, implements the steps of the method for visual SLAM-based multi-sensor mapping as described above.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article or apparatus. Without further limitation, an element introduced by the phrase "comprising a(n)..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.

The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.

Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be substantially or partially embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a cloud server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.

The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
