Robot positioning method and device and robot

Document No.: 1706931    Publication date: 2019-12-13

Description: This technology, "Robot positioning method and device and robot" (一种机器人定位方法及装置和机器人), was designed and created by 刘宝旭 on 2018-06-05. Its main content is as follows: The application provides a robot positioning method and device, a robot, and a computer-readable storage medium. The robot positioning method comprises: acquiring multi-sensor data of a robot; analyzing the multi-sensor data to select effective sensor data for positioning from the multi-sensor data; and processing the effective sensor data with a selected positioning algorithm to obtain the positioning information of the robot. According to the embodiments of the application, the acquired multi-sensor data are analyzed so that effective sensor data for positioning are selected from them. Because the amount of sensor data used to calculate the positioning information is small, the running speed of the positioning algorithm improves, as do the positioning speed and its real-time performance; in addition, processing the effective sensor data with a selected positioning algorithm helps improve the positioning accuracy.

1. A method of robot positioning, the method comprising:

acquiring multi-sensor data of a robot;

analyzing the multi-sensor data to select effective sensor data for positioning from the multi-sensor data;

and processing the effective sensor data with a selected positioning algorithm to obtain positioning information of the robot.

2. The method of claim 1, wherein said analyzing said multi-sensor data to select effective sensor data for positioning from said multi-sensor data comprises:

analyzing the multi-sensor data to obtain current environment estimation information;

and selecting the effective sensor data from the multi-sensor data according to the current environment estimation information.

3. The method of claim 2, wherein said selecting said effective sensor data from said multi-sensor data according to said current environment estimation information comprises:

selecting the effective sensor data from the multi-sensor data according to the current environment estimation information, and obtaining a weight of each item of effective sensor data according to the current environment estimation information.

4. The method of claim 3, wherein said processing said effective sensor data with a selected positioning algorithm to obtain positioning information of said robot comprises:

processing the effective sensor data with the selected positioning algorithm according to the weight of each item of effective sensor data to obtain the positioning information of the robot.

5. The method of claim 1, wherein the positioning algorithm comprises an extended Kalman filter algorithm, an unscented Kalman filter algorithm, or a direction cosine matrix algorithm.

6. The method of claim 1, wherein the multi-sensor data is collected by at least two of the following sensors:

an inertial measurement component sensor, a GPS positioning sensor, a vision sensor, a lidar sensor, an ultra-wideband (UWB) sensor, and an encoder.

7. A robot positioning device, characterized in that the device comprises:

an acquisition module configured to acquire multi-sensor data of the robot;

an analysis and selection module configured to analyze the multi-sensor data acquired by the acquisition module, so as to select effective sensor data for positioning from the multi-sensor data; and

a positioning module configured to process the effective sensor data selected by the analysis and selection module with a selected positioning algorithm to obtain positioning information of the robot.

8. The device of claim 7, wherein the analysis and selection module comprises:

an analysis sub-module configured to analyze the multi-sensor data to obtain current environment estimation information; and

a selection sub-module configured to select the effective sensor data from the multi-sensor data according to the current environment estimation information obtained by the analysis sub-module.

9. A computer-readable storage medium, characterized in that the storage medium stores a computer program for performing the robot positioning method of any one of claims 1-6.

10. A robot comprising a sensor, a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor implements the robot positioning method of any one of claims 1-6 when executing the computer program.

11. The robot according to claim 10, characterized in that said sensors comprise at least two of the following: an inertial measurement component sensor, a GPS positioning sensor, a vision sensor, a lidar sensor, an ultra-wideband (UWB) sensor, and an encoder.

Technical Field

The present disclosure relates to positioning technologies, and in particular, to a robot positioning method and apparatus, a robot, and a computer-readable storage medium.

Background

With the progress of society and the development of science and technology, positioning technology has advanced qualitatively in its technical means, accuracy, usability, and the like. It has gradually spread from fields such as navigation, aerospace, aviation, surveying and mapping, the military, and natural-disaster prevention into everyday social life, becoming an indispensable application in people's daily lives. Positioning can be divided into indoor positioning and outdoor positioning according to the usage scenario; because the scenarios and requirements differ, the positioning technologies adopted for each also differ.

As intelligent mobile robots become increasingly practical, the requirements on positioning and navigation grow ever higher. To complete tasks such as autonomous driving, unmanned driving, or unmanned delivery, a robot needs to know its precise position in order to control the movement of its chassis.

Disclosure of Invention

In view of the above, the present application provides a robot positioning method and apparatus, a robot, and a computer-readable storage medium.

Specifically, this is achieved through the following technical solutions:

According to a first aspect of embodiments of the present disclosure, there is provided a robot positioning method, the method comprising:

acquiring multi-sensor data of a robot;

analyzing the multi-sensor data to select effective sensor data for positioning from the multi-sensor data;

and processing the effective sensor data with a selected positioning algorithm to obtain positioning information of the robot.

In one embodiment, the analyzing of the multi-sensor data to select effective sensor data for positioning from the multi-sensor data comprises:

analyzing the multi-sensor data to obtain current environment estimation information;

and selecting the effective sensor data from the multi-sensor data according to the current environment estimation information.

In one embodiment, the selecting of the effective sensor data from the multi-sensor data according to the current environment estimation information includes:

selecting the effective sensor data from the multi-sensor data according to the current environment estimation information, and obtaining a weight of each item of effective sensor data according to the current environment estimation information.

In an embodiment, the processing of the effective sensor data with the selected positioning algorithm to obtain the positioning information of the robot includes:

processing the effective sensor data with the selected positioning algorithm according to the weight of each item of effective sensor data to obtain the positioning information of the robot.

In an embodiment, the positioning algorithm includes an extended Kalman filter algorithm, an unscented Kalman filter algorithm, or a direction cosine matrix (DCM) algorithm.

In one embodiment, the multi-sensor data is collected by at least two of the following sensors:

an inertial measurement component sensor, a GPS positioning sensor, a vision sensor, a lidar sensor, an ultra-wideband (UWB) sensor, and an encoder.

According to a second aspect of embodiments of the present disclosure, there is provided a robot positioning device, the device comprising:

an acquisition module configured to acquire multi-sensor data of the robot;

an analysis and selection module configured to analyze the multi-sensor data acquired by the acquisition module, so as to select effective sensor data for positioning from the multi-sensor data; and

a positioning module configured to process the effective sensor data selected by the analysis and selection module with a selected positioning algorithm to obtain positioning information of the robot.

In one embodiment, the analysis and selection module comprises:

an analysis sub-module configured to analyze the multi-sensor data to obtain current environment estimation information; and

a selection sub-module configured to select the effective sensor data from the multi-sensor data according to the current environment estimation information obtained by the analysis sub-module.

According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the above-described robot positioning method.

According to a fourth aspect of the embodiments of the present disclosure, there is provided a robot comprising a sensor, a processor, a memory, and a computer program stored on the memory and executable on the processor, the processor implementing the robot positioning method when executing the computer program.

In one embodiment, the sensors include at least two of the following: an inertial measurement component sensor, a GPS positioning sensor, a vision sensor, a lidar sensor, an ultra-wideband (UWB) sensor, and an encoder.

According to the embodiments of the application, the acquired multi-sensor data are analyzed so that effective sensor data for positioning are selected from them. Because the amount of sensor data used to calculate the positioning information is small, the running speed of the positioning algorithm improves, as do the positioning speed and its real-time performance; in addition, processing the effective sensor data with a selected positioning algorithm helps improve the positioning accuracy.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and, together with the description, serve to explain the principles of the invention.

FIG. 1A is a flow chart illustrating a method for robot positioning in accordance with an exemplary embodiment of the present application;

FIG. 1B is a flow chart illustrating sorting out valid sensor data according to an exemplary embodiment of the present application;

FIG. 1C is a schematic diagram illustrating a process for sorting out valid sensor data according to an exemplary embodiment of the present application;

FIG. 1D is a schematic diagram of another process for sorting out valid sensor data according to an exemplary embodiment of the present application;

FIG. 1E is a schematic diagram of another process for sorting out valid sensor data according to an exemplary embodiment of the present application;

FIG. 1F is a schematic diagram illustrating another process for sorting out valid sensor data according to an exemplary embodiment of the present application;

FIG. 2 is a flow chart of another robot positioning method shown in an exemplary embodiment of the present application;

FIG. 3 is a hardware block diagram of a robot in which a robot positioning device according to an exemplary embodiment of the present application is located;

FIG. 4 is a block diagram of a robotic positioning device shown in an exemplary embodiment of the present application;

FIG. 5 is a block diagram of another robot positioning device shown in an exemplary embodiment of the present application.

Detailed Description

Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.

It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.

FIG. 1A is a flowchart illustrating a robot positioning method according to an exemplary embodiment of the present application. The robot positioning method includes:

Step S101, multi-sensor data of the robot is acquired.

The multi-sensor data can be collected by at least two of the following sensors located on the robot: inertial measurement component sensors, GPS positioning sensors, vision sensors, lidar sensors, ultra-wideband (UWB) sensors, encoders, and the like.
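For concreteness only, the multi-sensor data could be gathered into a simple per-sensor snapshot before analysis. The following Python sketch is illustrative; the class, field names, and dummy values are assumptions and are not part of the original disclosure.

```python
# Minimal sketch of one way to hold a snapshot of multi-sensor data.
# All names and values here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class SensorReading:
    sensor: str                          # "imu", "gps", "vision", "lidar", "encoder", "uwb"
    timestamp: float                     # acquisition time in seconds
    data: Dict[str, Any] = field(default_factory=dict)

def acquire_multi_sensor_data() -> Dict[str, SensorReading]:
    """Return one reading per available sensor (stubbed with dummy values)."""
    return {
        "imu":     SensorReading("imu", 0.0, {"accel": (0.0, 0.0, 9.81), "gyro": (0.0, 0.0, 0.0)}),
        "gps":     SensorReading("gps", 0.0, {"lat": 31.2304, "lon": 121.4737, "num_satellites": 9}),
        "vision":  SensorReading("vision", 0.0, {"mean_brightness": 0.42}),
        "lidar":   SensorReading("lidar", 0.0, {"ranges": [2.1, 2.0, 1.9, 2.3]}),
        "encoder": SensorReading("encoder", 0.0, {"wheel_speeds": [0.51, 0.49, 0.50]}),
    }
```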

Step S102, the multi-sensor data is analyzed, so that effective sensor data used for positioning is selected from the multi-sensor data.

The effective sensor data refers to the sensor data currently used for calculating the positioning information.

In this embodiment, as shown in FIG. 1B, step S102 may include:

Step S1021, the multi-sensor data is analyzed to obtain current environment estimation information.

The environment estimation information may include, but is not limited to, whether the environment is outdoor or indoor, the lighting conditions, ground smoothness information, and the like.

For example, whether the robot is currently outdoors may be determined by analyzing GPS positioning information. As another example, the current lighting conditions, and whether the robot is in an indoor environment, can be obtained by analyzing the visual information collected by the vision sensor. As a further example, the wheel speed and travelled distance are obtained by analyzing the pulse information collected by the encoder, and ground smoothness information can in turn be derived from them. Analyzing the acceleration and gyroscope readings of the inertial measurement unit sensor yields the attitude and rotational angular velocity of the robot, and analyzing the range information collected by the lidar sensor yields the positions of surrounding objects and can also reflect the weather, light intensity, and the like.
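A hedged sketch of step S1021 follows: it derives a few environment estimates (outdoor or indoor, lighting, ground smoothness) from readings shaped like those in the previous sketch. The thresholds and rules are invented for illustration; the embodiment does not prescribe specific values.

```python
# Illustrative environment estimation (step S1021). Thresholds are assumptions.
from statistics import pvariance

def estimate_environment(readings):
    env = {}
    gps = readings.get("gps")
    vision = readings.get("vision")
    encoder = readings.get("encoder")

    # Many visible satellites usually means an unobstructed sky, i.e. outdoors.
    if gps is not None:
        env["outdoor"] = gps.data.get("num_satellites", 0) >= 6
    # Image brightness as a crude proxy for the lighting conditions.
    if vision is not None:
        env["strong_light"] = vision.data.get("mean_brightness", 0.0) > 0.7
    # Low variance of recent wheel speeds is treated here as "smooth ground".
    if encoder is not None:
        speeds = encoder.data.get("wheel_speeds", [])
        env["ground_smooth"] = len(speeds) > 1 and pvariance(speeds) < 1e-3
    return env
```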

In step S1022, effective sensor data is selected from the multi-sensor data according to the current environment estimation information.

For example, as shown in FIG. 1C, suppose the robot includes an inertial measurement component sensor 11, a GPS positioning sensor 12, a vision sensor 13, a lidar sensor 14, and an encoder 15. If analyzing the data collected by these five sensors yields the current environment estimation information that the robot is currently indoors and the ground is rough, the effective sensor data selected from the multi-sensor data is the data of the inertial measurement component sensor 11, the vision sensor 13, the lidar sensor 14, and the encoder 15, because the data of the GPS positioning sensor 12 is not accurate enough in this environment.

Assume the robot then moves from indoors to outdoors, as shown in FIG. 1D. If the current environment estimation information obtained by analyzing the data collected by the five sensors is that the robot is currently outdoors under strong illumination, the effective sensor data selected from the multi-sensor data is the data of the inertial measurement component sensor 11, the GPS positioning sensor 12, and the encoder 15, because the data of the vision sensor 13 and the lidar sensor 14 is not suitable for positioning fusion in this environment.

Assume the robot then moves from outdoors back indoors, as shown in FIG. 1E. If the current environment estimation information obtained by analyzing the data collected by the five sensors is that the robot is currently indoors and the ground is smooth, the effective sensor data selected from the multi-sensor data is the data of the inertial measurement component sensor 11, the vision sensor 13, and the lidar sensor 14, because the data of the GPS positioning sensor 12 and the encoder 15 is not accurate enough in this environment.

Assume the robot then moves from indoors to outdoors again, as shown in FIG. 1F. If the current environment estimation information obtained by analyzing the data collected by the five sensors is that the robot is currently outdoors on a cloudy day, the effective sensor data selected from the multi-sensor data is the data of the inertial measurement component sensor 11, the GPS positioning sensor 12, the vision sensor 13, and the encoder 15, because the data of the lidar sensor 14 is not suitable for positioning fusion in this environment.

It should be noted that the process of selecting effective sensor data is described above using five sensors as an example; in practice, the number of sensors may be adjusted as needed, for example to two, three, or six sensors. For instance, assume the sensors currently in use are the inertial measurement component sensor 11 and the GPS positioning sensor 12. When the current environment estimation information obtained by analyzing the data collected by these two sensors is that the robot is currently outdoors, the effective sensor data selected from the multi-sensor data is the data of the inertial measurement component sensor 11 and the GPS positioning sensor 12; when the current environment estimation information is that the robot is currently indoors, the effective sensor data selected from the multi-sensor data is the data of the inertial measurement component sensor 11 only.

In this embodiment, current environment estimation information is obtained and the effective sensor data is selected from the multi-sensor data according to it, so that the selection of effective sensor data adapts to dynamic environments.
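For concreteness, the selection rules illustrated by FIGS. 1C-1F can be written as a small rule table, as in the sketch below (indoors: drop GPS data, and also drop encoder data if the ground is smooth; outdoors: drop lidar data, and also drop vision data under strong illumination). This is an illustrative assumption, not the only possible implementation of step S1022.

```python
# Illustrative sensor selection (step S1022), mirroring the FIG. 1C-1F examples.
def select_valid_sensors(readings, env):
    """Return the subset of readings treated as effective sensor data."""
    valid = dict(readings)
    if env.get("outdoor", False):
        # Outdoors the lidar is excluded in both the FIG. 1D and FIG. 1F examples.
        valid.pop("lidar", None)
        if env.get("strong_light", False):
            # Strong illumination also excludes the vision sensor (FIG. 1D example).
            valid.pop("vision", None)
    else:
        # GPS is not accurate enough indoors (FIG. 1C / 1E examples).
        valid.pop("gps", None)
        if env.get("ground_smooth", False):
            # Smooth indoor floor: wheel odometry is also excluded (FIG. 1E example).
            valid.pop("encoder", None)
    return valid

# Example: indoors on a rough floor keeps IMU, vision, lidar, and encoder data.
# valid = select_valid_sensors(readings, {"outdoor": False, "ground_smooth": False})
```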

In this embodiment, since the amount of selected effective sensor data is less than or equal to the amount of acquired multi-sensor data, the amount of sensor data subsequently used for calculating the positioning information is small, which helps improve the running speed of the algorithm and the real-time performance of the positioning.

Step S103, the effective sensor data is processed with the selected positioning algorithm to obtain the positioning information of the robot.

The positioning algorithm can be selected according to the amount and complexity of the current effective sensor data. The selected positioning algorithm can include, but is not limited to, an extended Kalman filter (EKF) algorithm, an unscented Kalman filter (UKF) algorithm, a direction cosine matrix (DCM) algorithm, and the like; that is, the current effective sensor data can be processed with the EKF, UKF, or DCM algorithm as needed. Processing the effective sensor data may include, but is not limited to, fusing the effective sensor data for positioning. For example, if the current effective sensor data is only inertial sensor data, a simple DCM algorithm may be used for fusion to reduce the amount of computation and increase the computation speed. In other words, the positioning algorithm used to process the effective sensor data in this embodiment may be selected as needed, so as to reduce the computational load, increase the computation speed, and improve the positioning accuracy.
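A minimal sketch of such an algorithm choice is given below. The mapping from the composition of the effective sensor data to EKF, UKF, or DCM is an assumption made for illustration; the embodiment only states that the choice depends on the amount and complexity of the effective sensor data.

```python
# Illustrative positioning-algorithm selection; the rules are assumptions.
def choose_positioning_algorithm(valid):
    names = set(valid)
    if names == {"imu"}:
        # Inertial data only: a lightweight direction-cosine-matrix update suffices.
        return "DCM"
    if len(names) <= 3:
        # A small fusion problem: an extended Kalman filter keeps computation low.
        return "EKF"
    # Many heterogeneous sensors: spend more computation on an unscented Kalman filter.
    return "UKF"
```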

According to this embodiment, the acquired multi-sensor data are analyzed so that effective sensor data for positioning are selected from them. Because the amount of sensor data used to calculate the positioning information is small, the running speed of the positioning algorithm improves, as do the positioning speed and its real-time performance; in addition, processing the effective sensor data with the selected positioning algorithm helps improve the positioning accuracy.

FIG. 2 is a flowchart illustrating another robot positioning method according to an exemplary embodiment of the present application. As shown in FIG. 2, the method includes:

In step S201, multi-sensor data of the robot is acquired.

In step S202, the multi-sensor data is analyzed to obtain current environment estimation information.

In step S203, effective sensor data is selected from the multi-sensor data according to the current environment estimation information, and a weight of each item of effective sensor data is obtained according to the current environment estimation information.

For example, if the obtained current environment estimation information indicates that the ground is smooth, the weight of the encoder data among the effective sensor data may be lowered; if the GPS signal strength is high, the weight of the GPS sensor data may be raised.
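These two weighting rules could sit on top of default weights, as in the sketch below. The starting weight of 1.0 and the adjustment factors are illustrative assumptions, and the environment flags (e.g. "ground_smooth", "strong_gps_signal") are hypothetical keys, not terms from the original text.

```python
# Illustrative weight assignment (step S203); numeric values are assumptions.
def assign_weights(valid, env):
    weights = {name: 1.0 for name in valid}
    if env.get("ground_smooth", False) and "encoder" in weights:
        weights["encoder"] = 0.5      # smooth floor: trust wheel odometry less
    if env.get("strong_gps_signal", False) and "gps" in weights:
        weights["gps"] = 2.0          # strong GPS signal: trust GPS more
    return weights
```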

In step S204, the effective sensor data is processed with a selected positioning algorithm according to the weight of each item of effective sensor data to obtain the positioning information of the robot.

Processing the effective sensor data may include, but is not limited to, fusing the effective sensor data for positioning.

For example, suppose the current environment estimation information is an unobstructed outdoor environment with strong illumination. The effective sensor data selected from the multi-sensor data according to this information are the data of an inertial measurement component sensor, a GPS sensor, and an encoder, with weights of 1, 2, and 0.5, respectively. The EKF algorithm is then selected as the positioning algorithm according to the amount of selected effective sensor data, and the positioning information of the robot is obtained by processing the effective sensor data with the EKF according to the weights of the inertial measurement component sensor, the GPS sensor, and the encoder.
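To make the role of the weights concrete, the sketch below fuses per-sensor position estimates by letting a higher weight shrink that sensor's assumed measurement variance before an inverse-variance combination; an EKF as mentioned above could use the same scaled covariances in its update step. The estimates, nominal variances, and the weights 1, 2, and 0.5 are purely illustrative.

```python
# Illustrative weighted fusion of per-sensor 2-D position estimates.
import numpy as np

def fuse_positions(estimates, base_vars, weights):
    """estimates: {sensor: (x, y)}; base_vars: nominal measurement variance per sensor."""
    names = list(estimates)
    positions = np.array([estimates[s] for s in names])                # shape (n, 2)
    variances = np.array([base_vars[s] / weights[s] for s in names])   # weight > 1 => more trust
    inv_var = 1.0 / variances
    # Inverse-variance weighted mean, applied per coordinate.
    return (positions * inv_var[:, None]).sum(axis=0) / inv_var.sum()

estimates = {"imu": (10.2, 5.1), "gps": (10.0, 5.0), "encoder": (10.5, 5.3)}
base_vars = {"imu": 0.5, "gps": 1.0, "encoder": 0.8}
weights   = {"imu": 1.0, "gps": 2.0, "encoder": 0.5}
print(fuse_positions(estimates, base_vars, weights))   # fused (x, y) position
```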

In this embodiment, the effective sensor data is processed with the selected positioning algorithm according to the obtained weight of each item of effective sensor data, so that the resulting positioning information is more accurate.

Corresponding to the embodiments of the robot positioning method, the application also provides embodiments of the robot positioning device.

The embodiments of the robot positioning device can be applied to a robot. The device embodiments may be implemented by software, by hardware, or by a combination of hardware and software. FIG. 3 is a hardware structure diagram of a robot in which the robot positioning device according to the present application is located. The robot includes a sensor 300, a processor 310, a memory 320, and a computer program stored in the memory 320 and executable on the processor 310; the processor 310 implements the robot positioning method when executing the computer program. In addition to the processor 310 and the memory 320 shown in FIG. 3, the robot in which the device is located may also include other hardware according to the actual functions of robot positioning, which is not described in detail here.

The sensor 300 may include, but is not limited to, at least two of the following sensors: inertial measurement component sensors, GPS positioning sensors, vision sensors, lidar sensors, UWB sensors, and encoders.

FIG. 4 is a block diagram of a robot positioning device according to an exemplary embodiment of the present application. As shown in FIG. 4, the robot positioning device includes an acquisition module 41, an analysis and selection module 42, and a positioning module 43.

The acquisition module 41 is configured to acquire multi-sensor data of the robot.

The multi-sensor data can be collected by at least two of the following sensors located on the robot: inertial measurement component sensors, GPS positioning sensors, vision sensors, lidar sensors, ultra-wideband (UWB) sensors, encoders, and the like.

The analysis and selection module 42 is configured to analyze the multi-sensor data acquired by the acquisition module 41, so as to select effective sensor data for positioning from the multi-sensor data.

The effective sensor data refers to the sensor data currently used for calculating the positioning information.

In this embodiment, since the amount of selected effective sensor data is less than or equal to the amount of acquired multi-sensor data, the amount of sensor data subsequently used for calculating the positioning information is small, which helps improve the running speed of the algorithm and the real-time performance of the positioning.

The positioning module 43 is configured to process the effective sensor data selected by the analysis and selection module 42 with a selected positioning algorithm to obtain the positioning information of the robot.

The positioning algorithm can be selected according to the amount and complexity of the current effective sensor data. The selected positioning algorithm can include, but is not limited to, an extended Kalman filter (EKF) algorithm, an unscented Kalman filter (UKF) algorithm, a direction cosine matrix (DCM) algorithm, and the like; that is, the current effective sensor data can be processed with the EKF, UKF, or DCM algorithm as needed. Processing the effective sensor data may include, but is not limited to, fusing the effective sensor data for positioning. For example, if the current effective sensor data is only inertial sensor data, a simple DCM algorithm may be used for fusion to reduce the amount of computation and increase the computation speed. In other words, the positioning algorithm used to process the effective sensor data in this embodiment may be selected as needed, so as to reduce the computational load, increase the computation speed, and improve the positioning accuracy.

According to this embodiment, the acquired multi-sensor data are analyzed so that effective sensor data for positioning are selected from them. Because the amount of sensor data used to calculate the positioning information is small, the running speed of the positioning algorithm improves, as do the positioning speed and its real-time performance; in addition, processing the effective sensor data with the selected positioning algorithm helps improve the positioning accuracy.

FIG. 5 is a block diagram of another robot positioning device according to an exemplary embodiment of the present application. As shown in FIG. 5, on the basis of the embodiment shown in FIG. 4, the analysis and selection module 42 may include an analysis sub-module 421 and a selection sub-module 422.

The analysis sub-module 421 is configured to analyze the multi-sensor data to obtain current environment estimation information.

The environment estimation information may include, but is not limited to, whether the environment is outdoor or indoor, the lighting conditions, ground smoothness information, and the like.

For example, whether the robot is currently outdoors may be determined by analyzing GPS positioning information. As another example, the current lighting conditions, and whether the robot is in an indoor environment, can be obtained by analyzing the visual information collected by the vision sensor. As a further example, the wheel speed and travelled distance are obtained by analyzing the pulse information collected by the encoder, and ground smoothness information can in turn be derived from them. Analyzing the acceleration and gyroscope readings of the inertial measurement unit sensor yields the attitude and rotational angular velocity of the robot, and analyzing the range information collected by the lidar sensor yields the positions of surrounding objects and can also reflect the weather, light intensity, and the like.

The selection sub-module 422 is configured to select effective sensor data from the multi-sensor data according to the current environment estimation information obtained by the analysis sub-module 421.

For example, as shown in FIG. 1C, suppose the robot includes an inertial measurement component sensor 11, a GPS positioning sensor 12, a vision sensor 13, a lidar sensor 14, and an encoder 15. If analyzing the data collected by these five sensors yields the current environment estimation information that the robot is currently indoors and the ground is rough, the effective sensor data selected from the multi-sensor data is the data of the inertial measurement component sensor 11, the vision sensor 13, the lidar sensor 14, and the encoder 15, because the data of the GPS positioning sensor 12 is not accurate enough in this environment.

Assume the robot then moves from indoors to outdoors, as shown in FIG. 1D. If the current environment estimation information obtained by analyzing the data collected by the five sensors is that the robot is currently outdoors under strong illumination, the effective sensor data selected from the multi-sensor data is the data of the inertial measurement component sensor 11, the GPS positioning sensor 12, and the encoder 15, because the data of the vision sensor 13 and the lidar sensor 14 is not suitable for positioning fusion in this environment.

Assume the robot then moves from outdoors back indoors, as shown in FIG. 1E. If the current environment estimation information obtained by analyzing the data collected by the five sensors is that the robot is currently indoors and the ground is smooth, the effective sensor data selected from the multi-sensor data is the data of the inertial measurement component sensor 11, the vision sensor 13, and the lidar sensor 14, because the data of the GPS positioning sensor 12 and the encoder 15 is not accurate enough in this environment.

Assume the robot then moves from indoors to outdoors again, as shown in FIG. 1F. If the current environment estimation information obtained by analyzing the data collected by the five sensors is that the robot is currently outdoors on a cloudy day, the effective sensor data selected from the multi-sensor data is the data of the inertial measurement component sensor 11, the GPS positioning sensor 12, the vision sensor 13, and the encoder 15, because the data of the lidar sensor 14 is not suitable for positioning fusion in this environment.

It should be noted that the process of selecting effective sensor data is described above using five sensors as an example; in practice, the number of sensors may be adjusted as needed, for example to two, three, or six sensors. For instance, assume the sensors currently in use are the inertial measurement component sensor 11 and the GPS positioning sensor 12. When the current environment estimation information obtained by analyzing the data collected by these two sensors is that the robot is currently outdoors, the effective sensor data selected from the multi-sensor data is the data of the inertial measurement component sensor 11 and the GPS positioning sensor 12; when the current environment estimation information is that the robot is currently indoors, the effective sensor data selected from the multi-sensor data is the data of the inertial measurement component sensor 11 only.

In this embodiment, current environment estimation information is obtained and the effective sensor data is selected from the multi-sensor data according to it, so that the selection of effective sensor data adapts to dynamic environments.

The implementation of the functions and effects of each unit in the above device is described in detail in the implementation of the corresponding steps of the above method, and is not repeated here.

In an exemplary embodiment, there is also provided a computer-readable storage medium storing a computer program for executing a robot positioning method including:

acquiring multi-sensor data of a robot;

analyzing the multi-sensor data to select effective sensor data for positioning from the multi-sensor data;

and processing the effective sensor data with a selected positioning algorithm to obtain the positioning information of the robot.

The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.

In an exemplary embodiment, there is also provided a robot including sensors, which may include at least two of the following: an inertial measurement component sensor, a GPS positioning sensor, a visual odometer sensor, a lidar sensor, an ultra-wideband (UWB) sensor, and an encoder.

For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the relevant parts of the description of the method embodiments. The device embodiments described above are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the application. A person of ordinary skill in the art can understand and implement it without inventive effort.

Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.

It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.

The present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
