Robot configuration with three-dimensional lidar


Reading note: This technology, "Robot configuration with three-dimensional lidar" (具有三维激光雷达的机器人配置), was designed and created by J.雷姆比斯, J.特兰, V.纳巴特, and E.梅尔 on 2020-05-28. Its main content is as follows: A mobile robotic device (200) includes a mobile base (202) and a mast (210) fixed relative to the mobile base (202). The mast (210) includes a cutout portion. The mobile robotic device (200) also includes a three-dimensional (3D) lidar sensor mounted in the cutout portion of the mast (210) and fixed relative to the mast (210) such that a vertical field of view of the 3D lidar sensor angles downward toward an area in front of the mobile robotic device (200).

1. A mobile robotic device comprising:

a mobile base;

a mast fixed relative to the mobile base, wherein the mast includes a cutout portion; and

a three-dimensional (3D) lidar sensor mounted in the cutout portion of the mast and fixed relative to the mast such that a vertical field of view of the 3D lidar sensor angles downward toward an area in front of the mobile robotic device.

2. The mobile robotic device of claim 1, wherein the 3D lidar sensor is positioned at an angle such that the vertical field of view of the 3D lidar sensor includes a ground surface directly in front of the mobile robotic device.

3. The mobile robotic device of claim 1, wherein a vertical field of view of the 3D lidar sensor is greater than 90 degrees, and wherein the 3D lidar sensor is positioned at an angle such that an upper limit of the vertical field of view of the 3D lidar sensor extends from the 3D lidar sensor at an angle above a horizontal vector directed forward of the mobile robotic device.

4. The mobile robotic device of claim 1, wherein the 3D lidar sensor is mounted such that a vertical axis of the 3D lidar sensor is tilted forward relative to vertical.

5. The mobile robotic device of claim 1, wherein a vertical field of view of the 3D lidar sensor extends from a first angle between 10 degrees and 20 degrees above a horizontal vector directed forward of the mobile robotic device to a second angle between 75 degrees and 85 degrees below the horizontal vector.

6. The mobile robotic device of claim 1, further comprising a control system configured to detect a cliff in front of the mobile base of the mobile robotic device based on sensor data from the 3D lidar sensor.

7. The mobile robotic device of claim 1, further comprising a control system configured to detect one or more obstacles in front of or to the side of the mobile robotic device based on sensor data from the 3D lidar sensor.

8. The mobile robotic device of claim 1, further comprising a control system configured to determine a position of the mobile robotic device based on sensor data from the 3D lidar sensor, wherein the sensor data is indicative of one or more surfaces above and behind the mobile robotic device.

9. The mobile robotic device of claim 8, wherein the control system is configured to determine the position of the mobile robotic device by aligning the sensor data with a voxel grid representation of an environment of the mobile robotic device.

10. The mobile robotic device of claim 1, wherein the 3D lidar sensor is configured to have a horizontal field of view of 360 degrees, and wherein at least 270 degrees of the horizontal field of view of the 3D lidar sensor is unobstructed by the mast based on a shape of the cutout portion of the mast.

11. The mobile robotic device of claim 1, wherein the mast comprises an overhanging mounting point for the 3D lidar sensor, wherein the 3D lidar sensor is mounted below the overhanging mounting point so as to be received within the cutout portion of the mast.

12. The mobile robotic device of claim 1, wherein the mast comprises:

a backing member to which the 3D lidar sensor is mounted; and

two symmetrical housing members attached to either side of the backing member such that the 3D lidar sensor is outside a volume enclosed by the backing member and the two symmetrical housing members.

13. The mobile robotic device of claim 1, wherein the mast is part of a stacking tower located at a front end of the mobile robotic device.

14. The mobile robotic device of claim 13, wherein the stacking tower comprises a revolute joint of a robotic arm located below the mast, wherein the revolute joint is configured to rotate the robotic arm without rotating the mast.

15. The mobile robotic device of claim 1, wherein the mobile base includes a plurality of one-dimensional (1D) sensors directed at a region behind the mobile robotic device.

16. A method, comprising:

receiving sensor data indicative of an environment of a mobile robotic device from a three-dimensional (3D) lidar sensor, wherein the 3D lidar sensor is mounted in a cutout portion of a mast of the mobile robotic device and is fixed relative to the mast such that a vertical field of view of the 3D lidar sensor angles downward toward an area in front of the mobile robotic device; and

controlling the mobile robotic device based on the sensor data.

17. The method of claim 16, wherein the sensor data is indicative of the ground directly in front of a mobile base of the mobile robotic device, wherein the method further comprises:

detecting a cliff in front of the mobile robotic device, wherein controlling the mobile robotic device comprises navigating the mobile base of the mobile robotic device based on the detected cliff.

18. The method of claim 16, wherein the sensor data is indicative of one or more obstacles in front of or to the side of the mobile robotic device, wherein controlling the mobile robotic device comprises avoiding contact with the one or more obstacles.

19. The method of claim 16, wherein the sensor data is indicative of one or more surfaces above and behind the mobile robotic device, wherein the method further comprises:

determining a position of the mobile robotic device relative to the one or more surfaces, wherein control of the mobile robotic device is performed based on the determined position of the mobile robotic device relative to the one or more surfaces.

20. A mast for a mobile robotic device, comprising:

a cutout portion; and

a three-dimensional (3D) lidar sensor mounted in the cutout portion of the mast and fixed relative to the mast such that a vertical field of view of the 3D lidar sensor is angled downward in a direction extending outward from the cutout portion of the mast.

Background

As technology advances, various types of robotic devices are being created to perform various functions that may assist a user. Robotic devices may be used for applications involving material handling, transportation, welding, assembly, and dispensing, among others. Over time, the manner in which these robotic systems operate becomes more intelligent, efficient, and intuitive. As robotic systems become more prevalent in many aspects of modern life, it is desirable for robotic systems to be efficient. Thus, the need for efficient robotic systems has helped open up innovative areas in actuators, movement, sensing technology, and component design and assembly.

Disclosure of Invention

An example mobile robotic device includes a three-dimensional (3D) lidar sensor mounted on a fixed mast of a robot. The position and orientation of the 3D lidar sensor and the resulting field of view may be optimized such that sensor data from the 3D lidar sensor may be used for forward cliff detection, obstacle detection, and robot positioning.

In an embodiment, a mobile robotic device is provided. The mobile robotic device includes a mobile base. The mobile robotic device also includes a mast fixed relative to the mobile base, wherein the mast includes a cutout portion. The mobile robotic device further includes a 3D lidar sensor mounted in the cutout portion of the mast and fixed relative to the mast such that a vertical field of view of the 3D lidar sensor angles downward toward an area in front of the mobile robotic device.

In another embodiment, a method is provided. The method includes receiving sensor data indicative of an environment of a mobile robotic device from a three-dimensional (3D) lidar sensor, wherein the 3D lidar sensor is mounted in a cutout portion of a mast of the mobile robotic device and is fixed relative to the mast such that a vertical field of view of the 3D lidar sensor angles downward toward an area in front of the mobile robotic device. The method also includes controlling the mobile robotic device based on the sensor data.

In an additional embodiment, a mast for a mobile robotic device is provided. The mast includes a cutout portion. The mast further includes a 3D lidar sensor mounted in the cutout portion of the mast and fixed relative to the mast such that a vertical field of view of the 3D lidar sensor is angled downward in a direction extending outward from the cutout portion of the mast.

In another embodiment, a non-transitory computer-readable medium is provided that includes programming instructions executable by at least one processor to cause the at least one processor to perform functions. The functions include receiving sensor data indicative of an environment of a mobile robotic device from a three-dimensional (3D) lidar sensor, wherein the 3D lidar sensor is mounted in a cutout portion of a mast of the mobile robotic device and is fixed relative to the mast such that a vertical field of view of the 3D lidar sensor angles downward toward an area in front of the mobile robotic device. The functions also include controlling the mobile robotic device based on the sensor data.

In another embodiment, a system is provided that includes means for receiving sensor data indicative of an environment of a mobile robotic device from a three-dimensional (3D) lidar sensor, wherein the 3D lidar sensor is mounted in a cutout portion of a mast of the mobile robotic device and is fixed relative to the mast such that a vertical field of view of the 3D lidar sensor angles downward toward an area in front of the mobile robotic device. The system also includes means for controlling the mobile robotic device based on the sensor data.

The foregoing summary is illustrative only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.

Drawings

Fig. 1 shows a configuration of a robot system according to an example embodiment.

Fig. 2 illustrates a mobile robot according to an example embodiment.

Fig. 3 shows an exploded view of a mobile robot according to an example embodiment.

Fig. 4 illustrates a robot arm according to an example embodiment.

Fig. 5A and 5B illustrate a robot mast with a 3D lidar sensor according to an example embodiment.

Fig. 6A, 6B and 6C illustrate detection of a 3D lidar sensor according to an example embodiment.

Fig. 7, 8 and 9 show fields of view of 3D lidar sensors according to example embodiments for different mounting orientations.

Fig. 10 is a block diagram of a method according to an example embodiment.

Detailed Description

Example methods, devices, and systems are described herein. It should be understood that the words "example" and "exemplary" are used herein to mean "serving as an example, instance, or illustration." Any embodiment or feature described herein as an "example" or as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or features, unless stated as such. Other embodiments may be utilized, and other changes may be made, without departing from the scope of the subject matter presented herein.

Accordingly, the example embodiments described herein are not meant to be limiting. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations.

Throughout the description, the articles "a" or "an" are used to introduce elements of example embodiments. Any reference to "a" or "an" means "at least one," and any reference to "the" means "the at least one," unless otherwise indicated herein or otherwise clearly contradicted by context. The use of the conjunction "or" in a list of at least two terms described is intended to mean any listed term or any combination of listed terms.

The use of ordinal numbers such as "first," "second," "third," etc., is to distinguish between elements and not to indicate a particular order of those elements. For the purposes of this specification, the terms "plurality" and "a plurality" mean "two or more" or "more than one."

Furthermore, the features shown in each figure may be used in combination with each other, unless the context suggests otherwise. Thus, the drawings are generally to be regarded as forming an aspect of one or more general embodiments, and it is to be understood that not all illustrated features are required for each embodiment. In the drawings, like reference numerals generally identify like components, unless context dictates otherwise. Furthermore, unless otherwise indicated, the drawings are not to scale and are for illustrative purposes only. Moreover, the drawings are merely representative, and not all of the components are shown. For example, additional structural or constraining components may not be shown.

Furthermore, any enumeration of elements, blocks or steps in the present description or claims is for clarity purposes. Thus, this enumeration should not be interpreted as requiring or implying that such elements, blocks or steps follow a particular arrangement or are performed in a particular order.

I. Overview

Mobile robotic devices may use various sensors to gather information about the environment to assist the robot in operating in the environment. By optimizing the selection of sensors and the location and orientation of the selected sensors on the robot, the overall cost may be reduced while allowing the robot to achieve desired sensor coverage in the area of interest. For non-industrial robots, as well as for certain classes of industrial robots, it may be particularly beneficial from a cost point of view to use a single sensor for a number of different purposes.

In some examples, the robot may be equipped with a three-dimensional (3D) lidar sensor. The 3D lidar sensor measures distance to objects in the environment by illuminating the objects with laser light and measuring the reflected light with one or more sensing elements. Differences in laser return time and/or wavelength may then be used to generate a 3D representation of the environment. Some 3D lidar sensors employ a rapidly rotating mirror that reflects light from a laser into the environment, generating a 3D point cloud of reflections or returns. Such a 3D lidar sensor may therefore have a horizontal field of view of 360 degrees around the vertical rotation axis, but only a fixed angular range defining the vertical field of view. In some examples, the vertical field of view may be slightly greater than 90 degrees (e.g., about 95 degrees). In other examples, the vertical field of view may be significantly greater than 90 degrees, equal to 90 degrees, or less than 90 degrees. To benefit most from the available field of view of the 3D lidar sensor, the 3D lidar sensor may be mounted on the robot in a carefully selected position and orientation.
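
As a non-limiting illustration of the ranging principle described above, the short sketch below converts a round-trip laser return time into a range and then into a 3D point, given the beam's horizontal and vertical angles. The angle convention (azimuth about the vertical rotation axis, elevation within the fixed vertical field of view) and the example numbers are assumptions for illustration only and do not describe any particular sensor.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_return_time(round_trip_seconds):
    """Convert a round-trip laser return time into a range in meters."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def point_from_measurement(round_trip_seconds, azimuth_deg, elevation_deg):
    """Convert one lidar return into a 3D point in the sensor frame.

    azimuth_deg: rotation about the sensor's vertical axis (0-360 degrees).
    elevation_deg: angle within the fixed vertical field of view
                   (positive up, negative down).
    """
    r = range_from_return_time(round_trip_seconds)
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)

# Example: a return after ~33.3 nanoseconds corresponds to a range of roughly 5 m.
print(range_from_return_time(33.3e-9))              # ~5.0
print(point_from_measurement(33.3e-9, 0.0, -45.0))  # a point below and ahead of the sensor
```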

In some examples described herein, the 3D lidar sensor may be mounted in a cutout portion of a mast of the robotic device. The mast may be fixed relative to the mobile base of the robot. The mast may be located between a rotatable perception housing and a rotatable arm joint as part of a stacking tower mounted adjacent the front of the mobile base. The 3D lidar sensor may be fixed in an orientation such that the vertical field of view of the 3D lidar sensor angles downward toward an area in front of the robot. In some examples, the 3D lidar sensor may be mounted such that its vertical axis is tilted forward. The position and orientation of the 3D lidar sensor may be optimized to allow the robot to use depth data from the 3D lidar sensor for a variety of purposes, including front cliff detection, obstacle detection, and robot positioning.

With respect to front cliff detection, the 3D lidar sensor may be positioned at an angle such that its vertical field of view includes the ground directly in front of the robot (e.g., contacting or within a few centimeters of the robot's front bumper). Sensor data from the 3D lidar sensor may therefore be used to detect unexpected height changes in the ground in front of the robot, which may indicate a cliff that the robot should avoid. Including points on the ground directly in front of the robot covers the safety case in which the robot is powered on or activated in a new environment while positioned directly in front of a cliff. The vertical field of view may also include points on the ground farther away from the robot to allow the 3D lidar sensor to detect distant cliffs as well. The maximum speed of the robot may be set based on the distance at which the 3D lidar sensor can reliably detect a cliff in front of the robot. In some examples, in addition to the 3D lidar sensor, one or more other sensors (e.g., cameras) from the robot's perception suite may provide sensor data that can be used to aid cliff detection.
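
As a non-limiting sketch of the kind of height-change check described above (not the claimed control system), ground points in front of the robot can be binned by forward distance, and a bin whose lowest return falls well below the expected floor height, or that produces no ground return at all, can be flagged as a possible cliff. The corridor width, bin size, and thresholds below are assumed values for illustration.

```python
def detect_front_cliff(points, min_range_m=0.2, max_range_m=3.0, bin_m=0.1,
                       floor_z_m=0.0, drop_threshold_m=0.08):
    """Flag possible cliffs from 3D points in a robot-centered frame.

    points: iterable of (x, y, z) in meters, with x forward and z up.
    Returns a list of (near_edge_m, far_edge_m) distance bins flagged as cliffs.
    """
    num_bins = int((max_range_m - min_range_m) / bin_m)
    lowest_z = [None] * num_bins

    for x, y, z in points:
        if x < min_range_m or abs(y) > 0.3:   # keep a narrow corridor straight ahead
            continue
        b = int((x - min_range_m) / bin_m)
        if b >= num_bins:
            continue
        if lowest_z[b] is None or z < lowest_z[b]:
            lowest_z[b] = z

    cliffs = []
    for b, z in enumerate(lowest_z):
        # A bin with no ground return, or with a return far below the floor,
        # suggests a drop-off the mobile base should avoid.
        if z is None or z < floor_z_m - drop_threshold_m:
            near = min_range_m + b * bin_m
            cliffs.append((near, near + bin_m))
    return cliffs

# Example: simulated floor points with a drop-off starting about 1.5 m ahead.
pts = [(d / 100.0, 0.0, 0.0 if d < 150 else -0.3) for d in range(20, 300, 5)]
print(detect_front_cliff(pts))
```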

With regard to obstacle detection, the vertical field of view of the 3D lidar sensor may sweep upward from one extreme direction pointing at the ground directly in front of the robot to a second extreme direction extending above a height level with the top of the robot's perception housing (e.g., at a distance of two meters in front of the robot). More specifically, in some examples, the vertical field of view may extend from a first angle between 10 degrees and 20 degrees above a horizontal vector pointing in front of the mobile robotic device to a second angle between 75 degrees and 85 degrees below the horizontal vector. The 3D lidar sensor can therefore effectively detect obstacles in front of the robot within the robot's own height range.
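
The coverage implied by those angles can be checked with simple trigonometry: given a sensor mounting height and the lower and upper limits of the vertical field of view relative to horizontal, the sketch below computes where the lower edge meets the ground in front of the robot and how high the upper edge reaches at a given forward distance. The 1.0 m sensor height and the specific limit angles are assumed example values, not dimensions of the robot described here.

```python
import math

def ground_hit_distance(sensor_height_m, lower_limit_deg_below_horizontal):
    """Forward distance at which the lower FOV edge intersects the ground."""
    return sensor_height_m / math.tan(math.radians(lower_limit_deg_below_horizontal))

def coverage_height(sensor_height_m, upper_limit_deg_above_horizontal, forward_m):
    """Height above the ground reached by the upper FOV edge at forward_m."""
    return sensor_height_m + forward_m * math.tan(
        math.radians(upper_limit_deg_above_horizontal))

sensor_height = 1.0  # assumed height of the mast-mounted sensor, in meters

# Lower limit of 80 degrees below horizontal (within the 75-85 degree range above):
print(ground_hit_distance(sensor_height, 80.0))   # ~0.18 m in front of the sensor

# Upper limit of 15 degrees above horizontal (within the 10-20 degree range above):
print(coverage_height(sensor_height, 15.0, 2.0))  # ~1.54 m high at 2 m ahead
```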

With respect to robot positioning, by angling the vertical field of view of the 3D lidar sensor down toward the area in front of the robot, the 3D lidar sensor will also capture sensor data indicative of surfaces behind and above the robot. Furthermore, the shape of the cutout portion of the mast may prevent the mast from blocking too much of the 3D lidar sensor's view of the upper hemisphere behind the robot. Depth information about the upper hemisphere of the robot's environment may be used to help determine the position of the robot in the environment. The upper hemisphere may contain mostly static structures (e.g., portions of the ceiling and/or walls) that provide good reference points for robot positioning. In some examples, the robot may maintain a voxel representation of occupied voxels in the environment. Localization based on sensor data from the 3D lidar sensor may then involve voxel matching between the detected voxels and the stored voxel representation.
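
A highly simplified, non-limiting sketch of the voxel-matching idea follows: lidar points are quantized into voxel coordinates, and a candidate pose is scored by how many of its transformed points land in voxels already marked occupied in the stored map. Real localization pipelines are considerably more involved; the voxel size, the planar pose search, and the toy map below are illustrative assumptions.

```python
import math

VOXEL_M = 0.1  # assumed voxel edge length in meters

def voxelize(point):
    return tuple(int(math.floor(c / VOXEL_M)) for c in point)

def transform(point, pose):
    """Apply a planar pose (x, y, yaw in radians) to a 3D point."""
    x, y, z = point
    px, py, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return (px + c * x - s * y, py + s * x + c * y, z)

def score_pose(scan_points, occupied_voxels, pose):
    """Count scan points that fall into occupied voxels of the stored map."""
    return sum(voxelize(transform(p, pose)) in occupied_voxels for p in scan_points)

def best_pose(scan_points, occupied_voxels, candidate_poses):
    return max(candidate_poses,
               key=lambda pose: score_pose(scan_points, occupied_voxels, pose))

# Tiny example: a stored map containing a wall of occupied voxels at x = 2.0 m,
# and a scan of the same wall taken 0.5 m closer to it.
wall = {voxelize((2.0, y * VOXEL_M, 2.5)) for y in range(-10, 11)}
scan = [(1.5, y * VOXEL_M, 2.5) for y in range(-10, 11)]
candidates = [(dx * 0.1, 0.0, 0.0) for dx in range(0, 11)]
print(best_pose(scan, wall, candidates))  # expected near (0.5, 0.0, 0.0)
```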

Selecting the position and orientation of the 3D lidar sensor to optimize coverage of certain areas around the robot may involve accepting blind spots in other areas. For example, by angling the 3D lidar sensor down toward the area in front of the robot, the vertical field of view of the 3D lidar sensor may extend only slightly above horizontal. As a result, the 3D lidar sensor may not be able to detect an area in front of and substantially above the robot. In some examples, this compromise may be acceptable because the robot may be unlikely to encounter obstacles hovering above it. For example, if an operator is standing in front of the robot, the 3D lidar sensor may be sufficient to detect a portion of the operator's body even if the operator is not completely within the field of view of the 3D lidar sensor. Furthermore, in some examples, a separate sensor, such as a camera located in the robot's perception housing, may provide coverage of the blind spot above the field of view of the 3D lidar sensor in front of the robot. Moreover, while the upper hemisphere in front of the robot may not be detected by the 3D lidar sensor, the upper hemisphere behind the robot may be sufficient for robot positioning.

Another blind spot that may result from angling the vertical field of view of the 3D lidar sensor down toward the area in front of the robot is the area on the ground behind the robot. In some examples, a compromise solution may involve detecting that area using a set of one-dimensional (1D) time-of-flight (ToF) sensors located at the rear of the robot's mobile base, as sketched below. Although less accurate than a 3D lidar sensor, these 1D ToF sensors can provide sufficient depth data for the area behind the robot. The robot may generally require more detailed data about the area in front of it, where the robot is more likely to operate by, for example, picking up and manipulating objects.
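
A non-limiting sketch of how readings from such a rear-facing set of 1D ToF sensors might be interpreted is shown below: horizontally aimed sensors flag an obstacle when a return is closer than a clearance threshold, and downward-aimed sensors flag a possible cliff when the measured floor distance is much longer than expected. The sensor arrangement, thresholds, and expected floor distance are assumed values.

```python
def interpret_rear_tof(horizontal_ranges_m, downward_ranges_m,
                       obstacle_clearance_m=0.4,
                       expected_floor_m=0.05, cliff_margin_m=0.05):
    """Return (obstacle_detected, cliff_detected) from rear 1D ToF readings.

    horizontal_ranges_m: readings from sensors aimed horizontally backward.
    downward_ranges_m: readings from sensors aimed at the floor behind the base.
    """
    obstacle = any(r < obstacle_clearance_m for r in horizontal_ranges_m)
    cliff = any(r > expected_floor_m + cliff_margin_m for r in downward_ranges_m)
    return obstacle, cliff

# Example: one close return behind the base, floor readings as expected.
print(interpret_rear_tof([1.2, 0.3, 2.0], [0.05, 0.06, 0.05]))  # (True, False)
```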

In other examples, additional 3D lidar sensors may be mounted on the rear side of the mast to detect obstacles behind the robot. Additional 3D lidar sensors may be mounted in a mast cut-out separate from or the same as the previous 3D lidar sensor. In various examples, the additional 3D lidar sensor may be tilted up, down, or fixed in a vertical direction. In a further example, an additional 3D lidar may instead be mounted on the mobile base (e.g., near the rear end of the mobile device) to detect obstacles behind the robot. In other examples, one or more different types of sensors may also or alternatively be used to detect obstacles behind the robot.

II. Example Robot System

Fig. 1 illustrates an example configuration of a robotic system that may be used in conjunction with embodiments described herein. The robotic system 100 may be configured to operate autonomously, semi-autonomously, or using user-provided indication(s). The robotic system 100 may be implemented in various forms, such as a robotic arm, an industrial robot, or some other arrangement. Some example embodiments relate to a robotic system 100 designed to be low cost at scale and designed to support a variety of tasks. The robotic system 100 may be designed to be capable of operating around people. The robotic system 100 may also be optimized for machine learning. Throughout this description, the robotic system 100 may also be referred to as a robot, a robotic device, a mobile robot, or the like.

As shown in fig. 1, the robotic system 100 may include processor(s) 102, data storage 104, and controller(s) 108, which together may be part of a control system 118. The robotic system 100 may also include sensor(s) 112, power source(s) 114, mechanical components 110, and electrical components 116. Note that the robotic system 100 is shown for illustrative purposes and may include more or fewer components. The various components of the robotic system 100 may be connected in any manner, including via wired or wireless connections. Further, in some examples, the components of the robotic system 100 may be distributed among multiple physical entities rather than a single physical entity. Other example illustrations of the robotic system 100 may also exist.

The processor(s) 102 may operate as one or more general-purpose hardware processors or special-purpose hardware processors (e.g., digital signal processors, application specific integrated circuits, etc.). The processor(s) 102 may be configured to execute computer-readable program instructions 106 and manipulate data 107, both of which are stored in the data storage 104. The processor(s) 102 may also interact directly or indirectly with other components of the robotic system 100, such as the sensor(s) 112, the power source(s) 114, the mechanical component 110, or the electrical component 116.

The data storage device 104 may be one or more types of hardware memory. For example, data storage 104 may include or take the form of one or more computer-readable storage media that are read or accessed by processor(s) 102. One or more computer-readable storage media may include volatile or non-volatile storage components (such as optical, magnetic, organic, or another type of memory or storage) that may be integrated in whole or in part with the processor(s) 102. In some implementations, the data storage 104 may be a single physical device. In other embodiments, data storage 104 may be implemented using two or more physical devices that may communicate with each other via wired or wireless communication. As previously described, data storage device 104 may include computer-readable program instructions 106 and data 107. The data 107 may be any type of data, such as configuration data, sensor data, or diagnostic data.

The controller(s) 108 may include one or more circuits, digital logic units, computer chips, or microprocessors configured to (among other possible tasks) interface between any combination of the mechanical components 110, the sensor(s) 112, the power source(s) 114, the electrical components 116, the control system 118, or a user of the robotic system 100. In some embodiments, the controller 108 may be a dedicated embedded device for performing certain operations with one or more subsystems of the robotic system 100.

The control system 118 may monitor and physically alter the operating conditions of the robotic system 100. In doing so, the control system 118 may act as a link between portions of the robotic system 100, such as a link between the mechanical component 110 or the electrical component 116. In some cases, the control system 118 may act as an interface between the robotic system 100 and another computing device. Further, the control system 118 may serve as an interface between the robotic system 100 and a user. In some cases, the control system 118 may include various components for communicating with the robotic system 100, including joysticks, buttons or ports, and the like. The above-mentioned example interfaces and communications may be implemented via wired or wireless connections, or both. The control system 118 may also perform other operations for the robotic system 100.

During operation, the control system 118 may communicate with other systems of the robotic system 100 via wired or wireless connections, and may also be configured to communicate with one or more users of the robot. As one possible example, the control system 118 may receive input (e.g., from a user or from another robot) indicative of instructions to perform a requested task, such as picking up an object and moving the object from one location to another. Based on the input, the control system 118 may perform operations to cause the robotic system 100 to make a series of movements to perform the requested task. As another example, the control system may receive an input indicating an instruction to move to a requested location. In response, the control system 118 (possibly with the assistance of other components or systems) may determine the direction and speed to move the robotic system 100 through the environment to the requested location.

The operations of the control system 118 may be performed by the processor(s) 102. Alternatively, the operations may be performed by the controller(s) 108 or a combination of the processor(s) 102 and the controller(s) 108. In some embodiments, the control system 118 may reside partially or entirely on a device external to the robotic system 100, and thus may control the robotic system 100 at least partially remotely.

The mechanical components 110 represent the hardware of the robotic system 100 that may cause the robotic system 100 to perform physical operations. As a few examples, the robotic system 100 may include one or more physical components, such as arms, end effectors, a head, a neck, a torso, a base, and wheels. The physical members or other parts of the robotic system 100 may also comprise actuators arranged to move the physical members relative to each other. The robotic system 100 may also include one or more structured bodies for housing the control system 118 or other components, and may also include other types of mechanical components. The particular mechanical components 110 used in a given robot may vary based on the design of the robot, and may also be based on the operations or tasks that the robot may be configured to perform.

In some examples, mechanical component 110 may include one or more removable components. The robotic system 100 may be configured to add or remove such removable components, which may involve assistance from a user or another robot. For example, the robotic system 100 may be configured with removable end effectors or fingers that may be replaced or changed as needed or desired. In some embodiments, the robotic system 100 may include one or more removable or replaceable battery units, control systems, power systems, buffers, or sensors. In some embodiments, other types of removable components may be included.

The robotic system 100 may include sensor(s) 112 arranged to sense aspects of the robotic system 100. Sensor(s) 112 may include one or more force sensors, torque sensors, velocity sensors, acceleration sensors, position sensors, proximity sensors, motion sensors, positioning sensors, load sensors, temperature sensors, touch sensors, depth sensors, ultrasonic ranging sensors, infrared sensors, object sensors, or cameras, among others. In some examples, the robotic system 100 may be configured to receive sensor data from sensors that are physically separate from the robot (e.g., sensors located on other robots or in the environment in which the robot is operating).

The sensor(s) 112 may provide sensor data to the processor(s) 102, possibly by way of data 107, to allow the robotic system 100 to interact with its environment, as well as to monitor the operation of the robotic system 100. The sensor data may be used to evaluate various factors of activation, movement, and deactivation of mechanical component 110 and electrical component 116 by control system 118. For example, sensor(s) 112 may capture data corresponding to the environmental terrain or the location of nearby objects, which may aid in environmental recognition and navigation.

In some examples, sensor(s) 112 may include RADAR (e.g., for long-range object detection, range determination, or velocity determination), LIDAR (e.g., for short-range object detection, range determination, or velocity determination), SONAR (e.g., for underwater object detection, range determination, or velocity determination), a motion capture system (e.g., for motion capture), one or more cameras (e.g., stereo cameras for 3D vision), a Global Positioning System (GPS) transceiver, or other sensors for capturing information about the environment in which the robotic system 100 is operating. The sensor(s) 112 may monitor the environment in real time and detect obstacles, terrain features, weather conditions, temperature, or other aspects of the environment. In another example, the sensor(s) 112 may capture data corresponding to one or more characteristics of a target or identified object, such as a size, shape, contour, structure, or orientation of the object.

Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of a state of the robotic system 100, including sensor(s) 112 that may monitor a state of various components of the robotic system 100. The sensor(s) 112 may measure system activity of the robotic system 100 and receive information based on operation of various features of the robotic system 100, such as operation of extendable arms, end effectors, or other mechanical or electrical features of the robotic system 100. The data provided by the sensor(s) 112 may enable the control system 118 to determine errors in operation and to monitor the overall operation of the components of the robotic system 100.

As an example, the robotic system 100 may use force/torque sensors to measure loads on various components of the robotic system 100. In some embodiments, the robotic system 100 may include one or more force/torque sensors on the arm or end effector to measure the load on actuators moving one or more members of the arm or end effector. In some examples, the robotic system 100 may include force/torque sensors at or near the wrist or end effector, but not at or near other joints of the robotic arm. In a further example, the robotic system 100 may use one or more position sensors to sense the position of the actuators of the robotic system. For example, such position sensors may sense the extended, retracted, positioned, or rotated state of an actuator on the arm or end effector.

As another example, sensor(s) 112 may include one or more velocity or acceleration sensors. For example, the sensor(s) 112 may include an Inertial Measurement Unit (IMU). The IMU may sense velocity and acceleration in a common coordinate system relative to the gravity vector. The velocity and acceleration sensed by the IMU may then be converted into a velocity and acceleration of the robotic system 100 based on the position of the IMU in the robotic system 100 and the kinematics of the robotic system 100.
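
One common way to perform the conversion described above is to rotate the IMU measurements into the robot's base frame using the mounting orientation known from the kinematics and then remove the gravity component from the rotated specific force. The sketch below shows only those steps for an IMU assumed to be mounted level and rotated about the vertical axis; it is a simplified illustration, not the method used by any particular robotic system, and a full solution would also apply a lever-arm correction.

```python
import math

GRAVITY = (0.0, 0.0, -9.81)  # gravity vector in the base frame, m/s^2

def rotation_z(yaw_rad):
    """Rotation matrix for an IMU mounted rotated about the base's vertical axis."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return ((c, -s, 0.0), (s, c, 0.0), (0.0, 0.0, 1.0))

def rotate(matrix, vector):
    return tuple(sum(m * v for m, v in zip(row, vector)) for row in matrix)

def imu_to_base(specific_force_imu, velocity_imu, imu_yaw_rad):
    """Express IMU-frame readings in the robot base frame.

    An IMU at rest reads a specific force of about +9.81 m/s^2 upward; adding
    the gravity vector after rotation recovers the true linear acceleration.
    """
    r = rotation_z(imu_yaw_rad)
    force_base = rotate(r, specific_force_imu)
    accel_base = tuple(f + g for f, g in zip(force_base, GRAVITY))
    velocity_base = rotate(r, velocity_imu)
    return accel_base, velocity_base

# Example: IMU rotated 90 degrees about vertical; its +x axis maps to the base +y axis.
print(imu_to_base((1.0, 0.0, 9.81), (0.5, 0.0, 0.0), math.pi / 2))
```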

The robotic system 100 may include other types of sensors not explicitly discussed herein. Additionally or alternatively, the robotic system may use specific sensors for purposes not enumerated herein.

The robotic system 100 may also include one or more power source(s) 114 configured to provide power to the various components of the robotic system 100. Among other possible power systems, the robotic system 100 may include a hydraulic system, an electrical system, a battery, or other types of power systems. As an example illustration, the robotic system 100 may include one or more batteries configured to provide a charge to components of the robotic system 100. Some of mechanical components 110 or electrical components 116 may each be connected to a different power source, may be powered by the same power source, or may be powered by multiple power sources.

Any type of power source may be used to power the robotic system 100, such as an electric or gasoline engine. Additionally or alternatively, the robotic system 100 may include a hydraulic system configured to provide power to the mechanical component 110 using fluid power. For example, the components of the robotic system 100 may operate based on hydraulic fluid transmitted through the hydraulic system to various hydraulic motors and cylinders. The hydraulic system may transmit hydraulic power by way of pressurized hydraulic fluid through pipes, flexible hoses, or other linkages between components of the robotic system 100. The power source(s) 114 may be charged using various types of charging, such as wired connection to an external power source, wireless charging, combustion, or other examples.

Electrical component 116 may include various mechanisms capable of processing, transmitting, or providing electrical charge or signals. In possible examples, the electrical components 116 may include wires, circuitry, or wireless communication transmitters and receivers to enable operation of the robotic system 100. The electrical components 116 may interact with the mechanical components 110 to enable the robotic system 100 to perform various operations. For example, electrical components 116 may be configured to provide power from power source(s) 114 to various mechanical components 110. Further, the robotic system 100 may include a motor. Other examples of electrical components 116 may also exist.

The robotic system 100 may include a body that may be connected to or house the accessories and components of the robotic system. As such, the structure of the body may vary in examples, and may further depend on the particular operations a given robot may have been designed to perform. For example, a robot developed to carry heavy objects may have a wide body that can hold a load. Similarly, robots designed for operation in tight spaces may have a relatively tall, narrow body. In addition, various types of materials, such as metal or plastic, may be used to develop the body or other components. In other examples, the robot may have a different structure or body made of various types of materials.

The body or other component may include or carry the sensor(s) 112. These sensors may be located at different locations on the robotic system 100, such as the body, head, neck, base, torso, arms, or end effectors, among others.

The robotic system 100 may be configured to carry a load, such as the type of cargo to be transported. In some examples, the load may be placed by the robotic system 100 into a bin or other container attached to the robotic system 100. The load may also represent an external battery or other type of power source (e.g., a solar panel) that the robotic system 100 may utilize. Carrying a load represents one example use for which the robotic system 100 may be configured, but the robotic system 100 may also be configured to perform other operations.

As described above, the robotic system 100 may include various types of accessories, wheels, end effectors, gripping devices, and the like. In some examples, the robotic system 100 may include a mobile base with wheels, pedals, or some other form of movement. Additionally, the robotic system 100 may include a robotic arm or some other form of robotic manipulator. In the case of a mobile base, the base may be considered one of the mechanical components 110 and may include wheels driven by one or more actuators that allow movement of the robotic arm in addition to the rest of the body.

Fig. 2 illustrates a mobile robot according to an example embodiment. Fig. 3 shows an exploded view of a mobile robot according to an example embodiment. More specifically, the robot 200 may include a mobile base 202, a middle portion 204, an arm 206, an end-of-arm system (EOAS) 208, a mast 210, a perception housing 212, and a perception suite 214. The robot 200 may also include a computing box 216 stored within the mobile base 202.

The mobile base 202 includes two drive wheels at the front end of the robot 200 to provide locomotion for the robot 200. The mobile base 202 also includes additional casters (not shown) to facilitate motion of the mobile base 202 over a ground surface. The mobile base 202 may have a modular architecture that allows the computing box 216 to be easily removed. The computing box 216 may serve as a removable control system for the robot 200 (rather than a mechanically integrated control system). After the outer shells are removed, the computing box 216 can be easily removed and/or replaced. The mobile base 202 may also be designed to allow additional modularity. For example, the mobile base 202 may also be designed so that the power system, batteries, and/or external bumpers can all be easily removed and/or replaced.

The middle portion 204 may be attached to the mobile base 202 at the front end of the mobile base 202. The middle portion 204 includes a mounting post that is secured to the mobile base 202. The middle portion 204 also includes rotational joints for the arm 206. More specifically, the middle portion 204 includes the first two degrees of freedom of the arm 206 (a shoulder yaw J0 joint and a shoulder pitch J1 joint). The mounting post and the shoulder yaw J0 joint may form part of a stacking tower at the front of the mobile base 202. The mounting post and the shoulder yaw J0 joint may be coaxial. The length of the mounting post of the middle portion 204 may be chosen to provide the arm 206 with sufficient height to perform manipulation tasks at commonly encountered height levels (e.g., coffee table and countertop heights). The length of the mounting post of the middle portion 204 may also allow the shoulder pitch J1 joint to rotate the arm 206 over the mobile base 202 without contacting the mobile base 202.

The arm 206 may be a 7DOF robotic arm when coupled to the middle portion 204. As described above, the first two DOF of the arm 206 may be included in the middle portion 204. The remaining five DOF may be included in a separate section of the arm 206, as shown in fig. 2 and 3. The arm 206 may be constructed with a one-piece plastic link structure. Separate actuator modules, local motor drivers, and through-hole cabling may be housed within the arm 206.

EOAS 208 may be an end effector at the end of the arm 206. EOAS 208 may allow the robot 200 to manipulate objects in the environment. As shown in fig. 2 and 3, EOAS 208 may be a gripper, such as an under-actuated pinch gripper. The gripper may include one or more contact sensors (such as force/torque sensors) and/or non-contact sensors (such as one or more cameras) to facilitate object detection and gripper control. EOAS 208 may also be a different type of gripper (such as a suction gripper) or a different type of tool (such as a drill bit or a brush). EOAS 208 may also be swappable, or include swappable components such as gripper fingers.

The mast 210 may be a relatively long and narrow component between the shoulder yaw J0 joint of the arm 206 and the perception housing 212. The mast 210 may be part of the stacking tower at the front of the mobile base 202. The mast 210 may be fixed relative to the mobile base 202. The mast 210 may be coaxial with the middle portion 204. The length of the mast 210 may facilitate the perception suite 214 perceiving objects manipulated by the EOAS 208. The mast 210 may have a length such that, when the shoulder pitch J1 joint is rotated vertically upward, the highest point of the bicep of the arm 206 is approximately aligned with the top of the mast 210. The length of the mast 210 may then be sufficient to prevent a collision between the perception housing 212 and the arm 206 when the shoulder pitch J1 joint is rotated vertically upward.

As shown in fig. 2 and 3, the mast 210 may include a 3D lidar sensor configured to collect depth information about the environment. The 3D lidar sensor may be mounted in a cutout portion of the mast 210 and fixed at a downward angle. The lidar position may be optimized for positioning, navigation, and front cliff detection.

The perception housing 212 may include at least one sensor that is part of the perception suite 214. The perception housing 212 may be connected to a pan/tilt control to allow the perception housing 212 to be reoriented (e.g., to view objects being manipulated by the EOAS 208). The perception housing 212 may be part of the stacking tower fixed to the mobile base 202. A rear portion of the perception housing 212 may be coaxial with the mast 210.

The perception suite 214 may include a sensor suite configured to collect sensor data representative of the environment of the robot 200. The perception suite 214 may include an Infrared (IR) assisted stereoscopic depth sensor. The perception suite 214 may also include a wide-angle red-green-blue (RGB) camera for human-computer interaction and contextual information. The perception suite 214 may also include a high resolution RGB camera for object classification. A surface light ring surrounding the perception suite 214 may also be included for improved human-machine interaction and scene lighting.

Fig. 4 illustrates a robotic arm according to an example embodiment. The robotic arm includes 7 DOF: a shoulder yaw J0 joint, a shoulder pitch J1 joint, a bicep roll J2 joint, an elbow pitch J3 joint, a forearm roll J4 joint, a wrist pitch J5 joint, and a wrist roll J6 joint. Each of the joints may be coupled to one or more actuators. The actuators coupled to the joints are operable to cause movement of links down the kinematic chain (as well as movement of any end effector attached to the robotic arm).
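
The phrase "down the kinematic chain" can be made concrete with a small forward-kinematics sketch: each joint contributes a rotation, each link a translation, and composing them in order gives the pose of everything farther along the chain, including the end effector. The joint axes and link lengths below are illustrative assumptions only and do not correspond to the actual geometry of the arm shown in fig. 4.

```python
import math

def rot(axis, angle):
    """4x4 homogeneous rotation about 'x', 'y', or 'z' by angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    m = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    i, j = {'x': (1, 2), 'y': (2, 0), 'z': (0, 1)}[axis]
    m[i][i], m[i][j], m[j][i], m[j][j] = c, -s, s, c
    return m

def trans(x, y, z):
    """4x4 homogeneous translation."""
    return [[1.0, 0.0, 0.0, x], [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z], [0.0, 0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Joint order from the text: shoulder yaw J0, shoulder pitch J1, bicep roll J2,
# elbow pitch J3, forearm roll J4, wrist pitch J5, wrist roll J6.
JOINT_AXES = ['z', 'y', 'x', 'y', 'x', 'y', 'x']  # assumed joint axes
LINK_OFFSETS = [(0.0, 0.0, 0.3), (0.0, 0.0, 0.1), (0.3, 0.0, 0.0), (0.3, 0.0, 0.0),
                (0.2, 0.0, 0.0), (0.1, 0.0, 0.0), (0.05, 0.0, 0.0)]  # assumed link lengths, m

def end_effector_pose(joint_angles):
    """Compose joint rotations and link translations down the kinematic chain."""
    pose = trans(0.0, 0.0, 0.0)  # identity: base frame at the shoulder mount
    for axis, angle, offset in zip(JOINT_AXES, joint_angles, LINK_OFFSETS):
        pose = matmul(pose, rot(axis, angle))
        pose = matmul(pose, trans(*offset))
    return pose

# Example: all joints at zero; the end effector sits at the summed link offsets.
p = end_effector_pose([0.0] * 7)
print([round(p[i][3], 3) for i in range(3)])  # [0.95, 0.0, 0.4]
```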

The shoulder yaw J0 joint allows the robotic arm to rotate toward the front and toward the back of the robot. One beneficial use of this motion is to allow the robot to pick up an object in front of the robot and quickly place the object at the rear of the robot (and vice versa). Another beneficial use of this motion is to quickly move the robotic arm from a stowed configuration behind the robot to an active position in front of the robot (and vice versa).

The shoulder pitch J1 joint allows the robot to raise the robotic arm (e.g., so that the bicep is at the level of the perception suite on the robot) and to lower the robotic arm (e.g., so that the bicep is just above the mobile base). This motion is beneficial in allowing the robot to efficiently perform manipulation operations (e.g., top grasps and side grasps) at different target height levels in the environment. For example, the shoulder pitch J1 joint may be rotated to a vertically upward position to allow the robot to easily manipulate objects on a table in the environment. The shoulder pitch J1 joint may be rotated to a vertically downward position to allow the robot to easily manipulate objects on the ground in the environment.

The bicep roll J2 joint allows the robot to rotate the bicep to move the elbow and forearm relative to the bicep. This motion may be particularly beneficial in allowing the robot's perception suite to clearly observe the EOAS. By rotating the bicep roll J2 joint, the robot may kick out the elbow and forearm to improve the line of sight to an object held in the robot's gripper.

Moving down the kinematic chain, alternating pitch and roll joints (the shoulder pitch J1 joint, bicep roll J2 joint, elbow pitch J3 joint, forearm roll J4 joint, wrist pitch J5 joint, and wrist roll J6 joint) are provided to improve the manipulability of the robotic arm. The axes of the wrist pitch J5 joint, the wrist roll J6 joint, and the forearm roll J4 joint intersect in order to reduce the arm motion needed to reorient objects. A wrist roll J6 joint is provided instead of two pitch joints in the wrist in order to improve object rotation.

In some examples, a robotic arm such as that shown in fig. 4 can operate in a teaching mode. In particular, the teaching mode may be an operational mode of the robotic arm that allows a user to physically interact with the robotic arm and direct the robotic arm to make and record various movements. In the teaching mode, an external force is applied (e.g., by a user) to the robot arm based on teaching input intended to teach the robot how to perform a particular task. The robotic arm may thus obtain data on how to perform a particular task based on instructions and guidance from the user. Such data may relate to various configurations of mechanical components, joint position data, velocity data, acceleration data, torque data, force data, and power data, among others.

During the teaching mode, the user may grasp the EOAS or wrist, in some examples, or any portion of the robotic arm in other examples, and provide the external force by physically moving the robotic arm. In particular, the user may direct the robotic arm to grab onto the object and then move the object from the first position to the second position. When the user guides the robotic arm during the teaching mode, the robot may obtain and record data related to the movement such that the robotic arm may be configured to independently perform tasks at a future time during independent operation (e.g., when the robotic arm is independently operating outside of the teaching mode). In some examples, the external force may also be applied by other entities in the physical workspace, such as by other objects, machines, or robotic systems, and so forth.

Fig. 5A and 5B illustrate a robot mast with a 3D lidar sensor according to an example embodiment. More specifically, fig. 5A illustrates a robot 500, which may be the same as or similar to the robot illustrated and described with reference to fig. 2 and 3. The robot 500 includes a mast 502. The mast 502 includes a cutout portion 504. The 3D lidar sensor 506 is mounted in the cutout portion 504 by being attached below a mounting point 508.

In some examples, the 3D lidar sensor 506 may be configured to have a 360-degree horizontal field of view at a fixed vertical angle. In some examples, the fixed vertical angle may be greater than 90 degrees. In other examples, the fixed vertical angle may be equal to or less than 90 degrees. The horizontal field of view may be defined about a vertical rotation axis of one or more mirrors that reflect light projected by one or more lasers into the environment of the robot 500 to collect depth measurements. Referring to fig. 5A, the vertical axis may pass through the center of the 3D lidar sensor 506 and through the mounting point 508. As shown in fig. 5A, the 3D lidar sensor 506 may be tilted forward. Accordingly, the vertical field of view of the 3D lidar sensor 506 may be angled downward toward the area in front of the robot 500. As an example, the vertical axis of the 3D lidar sensor 506 may be tilted 16 degrees forward of vertical, toward the front of the robot.
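
The effect of such a forward tilt on the vertical field of view can be expressed as a simple shift: tilting the sensor's vertical axis forward by a given angle rotates both edges of the vertical field of view downward toward the ground in front of the robot by that same angle. The sensor-frame limits used below are assumed values chosen only to show the arithmetic.

```python
def tilted_fov_limits(sensor_upper_deg, sensor_lower_deg, forward_tilt_deg):
    """Vertical FOV limits relative to horizontal after tilting the sensor forward.

    sensor_upper_deg / sensor_lower_deg: FOV edges in the untilted sensor frame,
    measured from horizontal (positive up, negative down).
    """
    return (sensor_upper_deg - forward_tilt_deg,
            sensor_lower_deg - forward_tilt_deg)

# Assumed sensor-frame FOV of +31 to -64 degrees (about 95 degrees total);
# a 16-degree forward tilt shifts it to roughly +15 to -80 degrees relative
# to horizontal, within the ranges recited in claim 5.
print(tilted_fov_limits(31.0, -64.0, 16.0))  # (15.0, -80.0)
```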

The cutout portion 504 may allow the 3D lidar sensor 506 to be mounted below the mounting point 508 such that the 3D lidar sensor 506 is contained within the cutout portion 504 when viewed from the top down. The cutout portion 504 may be located between two substantially cylindrical portions of the mast 502. Additionally, at least part of the 3D lidar sensor 506 may be contained between the substantially cylindrical portions without protruding. Advantageously, mounting the 3D lidar sensor 506 within the cutout portion 504 may prevent the 3D lidar sensor 506 from obscuring other sensors in the perception suite of the robot 500. Additionally, the cutout portion 504 may prevent the mast 502 from obscuring too much of the horizontal field of view of the 3D lidar sensor 506. In some examples, at least 270 degrees of the horizontal field of view of the 3D lidar sensor 506 is unobscured by the mast 502 based on the shape of the cutout portion 504. In other examples, the mast 502 and/or the cutout portion 504 may have different shapes or sizes.
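
The statement that at least 270 degrees of the horizontal field of view remain unobscured can be sanity-checked with a small angular-occlusion calculation: if the portion of the mast directly behind the sensor is modeled as a flat surface of some width at some distance behind the rotation axis, its angular shadow is 2*atan((width/2)/distance). The width and distance below are assumed example values, not measurements of the mast shown in the figures.

```python
import math

def unobstructed_horizontal_fov_deg(mast_width_m, mast_distance_m):
    """Degrees of the 360-degree horizontal FOV not shadowed by the mast.

    Models the mast section behind the sensor as a flat surface of width
    mast_width_m located mast_distance_m behind the rotation axis.
    """
    blocked = 2.0 * math.degrees(math.atan((mast_width_m / 2.0) / mast_distance_m))
    return 360.0 - blocked

# Assumed example: a 0.08 m wide backing surface 0.06 m behind the axis shadows
# about 67 degrees, leaving roughly 293 degrees unobstructed (more than 270).
print(unobstructed_horizontal_fov_deg(0.08, 0.06))
```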

Referring to fig. 5B, the mast 502 of the robot 500 may include a backing member 510, with the 3D lidar sensor 506 mounted below the mounting point 508 on the backing member 510. The backing member 510 may house wiring that connects the 3D lidar sensor 506 to the perception housing and/or the middle portion of the robot 500. The backing member 510 may also house other components, such as a printed circuit board. The mast 502 may also include two symmetrical housing members 512 and 514. The two symmetrical housing members 512 and 514 may be attached to either side of the backing member 510 such that the 3D lidar sensor 506 is outside of the volume enclosed by the backing member 510 and the two symmetrical housing members 512 and 514. The backing member 510 and/or the two symmetrical housing members 512 and 514 may be injection molded.

In some examples, the mast 502 shown in fig. 5A and 5B may be part of a stacking tower located at the front end of the mobile base of the robot 500. Above the mast 502, the stacking tower may include a perception housing that can pan and tilt. Below the mast 502, the stacking tower may include a revolute joint of a robotic arm. The revolute joint of the robotic arm may be configured to rotate the robotic arm without rotating the mast, so that the mast remains fixed relative to the mobile base. The stacking tower may be fixed to the mobile base such that the 3D lidar sensor 506 is oriented to detect an area near ground level in front of the mobile base. The mobile base of the robot may also include a set of 1D ToF sensors directed at an area near ground level behind the mobile base. Altogether, such a sensor arrangement may provide a suitable compromise for certain applications, particularly those in which more accurate data is needed about the area in front of the robot than about the area behind the robot.

Fig. 6A, 6B and 6C illustrate detections of a 3D lidar sensor according to an example embodiment. More specifically, fig. 6A shows a zoomed-out angled view, fig. 6B shows a top-down view, and fig. 6C shows a magnified angled view of the detection point cloud of the 3D lidar sensor on a robot 600 in an environment 602. The robot 600 may be the same as or similar to the robots shown in fig. 2, 3, and/or 5A and 5B.

For purposes of illustration, the individual point detections are divided into three categories. The smaller unfilled squares represent point detections on the ground of the environment 602. The larger unfilled squares represent point detections on objects in the environment 602. The filled squares represent point detections on the upper hemisphere (e.g., the ceiling and/or walls) of the environment 602.

Regarding the point detections on the ground, as shown in fig. 6B, for example, the point detections are closest to the robot 600 directly in front of the robot 600, based on the position and orientation of the 3D lidar sensor on the robot 600. To the sides of the robot 600, the point detections are farther from the robot 600. Furthermore, the 3D lidar sensor cannot detect the ground directly behind the robot 600. These compromises allow accurate detection of cliffs in front of the robot, which may be a priority given that the mobile base of the robot 600 generally navigates forward. In some examples, the ground may not be detected within a minimum distance in front of the robot, as shown in fig. 6C, for example. Given the position of the front wheels on the robot 600, this distance may be kept small enough to prevent any risk of the mobile base of the robot 600 driving over a cliff. There may be less need to detect cliffs behind the robot 600; therefore, less costly alternative cliff detection solutions, such as downward-facing 1D ToF sensors, may be used on the robot 600 to detect cliffs behind the robot 600.

With respect to the point detections on objects, as shown in fig. 6B, for example, the positioning of the 3D lidar sensor may allow the robot 600 to detect at least portions of obstacles located in front of and to the sides of the robot 600. As shown in fig. 6A, for example, the vertical field of view of the 3D lidar sensor may only allow the robot 600 to detect points on obstacles up to approximately the height of the top of the perception housing of the robot 600. This compromise may be acceptable because most objects are unlikely to float above the robot without having a portion closer to ground level that can be detected by the 3D lidar sensor on the robot 600. Furthermore, one or more other sensors (e.g., cameras) in the robot's perception suite may also provide coverage of that area. Moreover, there may be less need to detect floating objects outside the safety-critical path that the robot may travel. Similarly, detecting objects behind the robot 600 may also be less important. Therefore, lower-cost alternative object detection solutions, such as 1D ToF sensors arranged horizontally along the rear side of the robot 600, may be used to detect obstacles behind the robot 600.

Regarding the point detections on the upper hemisphere, as shown in fig. 6A, for example, points on the ceiling and/or walls behind and to the sides of the robot 600 may be detected by the 3D lidar sensor. This sensor data may be used to help localize the robot 600 in the environment 602. For example, the localization process may involve aligning the detected points with a voxel grid representation of surfaces in the environment 602. These surfaces of the upper hemisphere may be particularly well suited for robot positioning because they are substantially static and unlikely to change often over time. Additionally, the portions of the upper hemisphere behind and to the sides of the robot 600 may be as effective for positioning as points in the upper hemisphere in front of the robot 600, which may not be detected given the position and orientation of the 3D lidar sensor on the robot 600.

It should be understood that the point clouds shown in fig. 6A, 6B, and 6C are for illustrative purposes. Indeed, the robot 600 may include additional sensors that provide additional point cloud data or other types of sensor data. Additionally, in alternative examples, different arrangements of 3D lidar sensors on the robot may produce different point cloud representations of the environment.

Fig. 7, 8 and 9 show fields of view of a 3D lidar sensor according to example embodiments for different mounting orientations. More specifically, fig. 7, 8 and 9 each illustrate the two blind spots resulting from a particular mounting angle of the 3D lidar sensor on a robotic device. In each figure, the vertical field of view of the 3D lidar sensor covers the area between the two blind spots in the direction directly in front of the robot. For illustration purposes, the vertical field of view of the 3D lidar sensor is represented in each figure as being slightly greater than 90 degrees. In alternative examples, a 3D lidar sensor with a different vertical field of view may be used instead.

Fig. 7 shows a first mounting orientation of the 3D lidar sensor on the robotic device, in which the 3D lidar sensor is tilted up toward the front of the robot. More specifically, robot 700 may include a 3D lidar sensor 702 whose vertical axis is tilted backward from vertical (e.g., at an 18 degree angle). A first blind spot 704 in front of the robot 700 may result from this mounting angle of the 3D lidar sensor 702. Additionally, a second blind spot 706 above and behind the robot 700 may also result from this mounting angle of the 3D lidar sensor 702.

In some applications, blind spot 704 may not allow 3D lidar sensor 702 to be used for front cliff detection, because too large an area of the ground in front of robot 700 is not detectable by 3D lidar sensor 702. At the mounting angle shown in fig. 7, 3D lidar sensor 702 may effectively detect the area in front of and substantially above robot 700. Additionally, based on the location of blind spot 706, this mounting angle may be effective for robot positioning using the portion of the upper hemisphere in front of robot 700. Consequently, in some applications, the mounting angle shown in fig. 7 may be the preferred mounting angle. However, in other applications, it may not be important for 3D lidar sensor 702 to detect the area in front of and substantially above robot 700.

Fig. 8 shows a second mounting orientation of the 3D lidar sensor on the robotic device, where the 3D lidar sensor is vertical. More specifically, robot 800 may include a 3D lidar sensor 802 that has a vertical axis that is perpendicular to the ground. A first blind spot 804 in front of the robot 800 may result from this mounting angle of the 3D lidar sensor 802. In addition, a second blind spot 806 above the robot 800 may also result from this mounting angle of the 3D lidar sensor 802. Although 3D lidar sensor 802 is mounted vertically on robot 800, the vertical field of view of 3D lidar sensor 802 may be angled downward toward the area in front of the robot based on the internal configuration of 3D lidar sensor 802.

In some applications, the mounting angle of 3D lidar sensor 802 shown in fig. 8 may be a preferred mounting angle. However, although blind spot 804 is smaller than blind spot 704, it may not allow 3D lidar sensor 802 to be used for front cliff detection, because too large an area of the ground in front of robot 800 may still not be detectable by 3D lidar sensor 802. Furthermore, blind spot 806 may not allow 3D lidar sensor 802 to be effectively used for robot positioning, because 3D lidar sensor 802 does not detect enough of the upper hemisphere of the environment. At the mounting angle shown in fig. 8, 3D lidar sensor 802 may detect less of the upper hemisphere than 3D lidar sensor 702 in fig. 7.

Fig. 9 illustrates a third mounting orientation of the 3D lidar sensor on the robotic device, where the 3D lidar sensor is angled downward toward the front of the robot. More specifically, robot 900 may include a 3D lidar sensor 902 with a vertical axis that is tilted forward (e.g., 16 degrees) from vertical. A first blind spot 904 in front of robot 900 may result from this mounting angle of 3D lidar sensor 902. In addition, a second blind spot 906 above and in front of robot 900 may also result from this mounting angle of 3D lidar sensor 902.

Blind spot 904 may be small enough (or in some cases nonexistent) to allow 3D lidar sensor 902 to be effectively used for front cliff detection. Furthermore, blind spot 906 may not prevent 3D lidar sensor 902 from being effectively used for robot positioning, because 3D lidar sensor 902 detects a sufficient portion of the upper hemisphere of the environment behind and above robot 900. At the mounting angle shown in fig. 9, the upper limit vector of the vertical field of view of 3D lidar sensor 902 may be angled slightly upward from horizontal. For example, the vector may pass through a height level with the top of the robot's perception housing at a distance of two meters from the robot. In some examples, the vertical field of view may provide sufficient coverage for obstacle detection in front of the robot 900, in addition to front cliff detection and robot positioning. Therefore, in some applications, the mounting angle of 3D lidar sensor 902 shown in fig. 9 may be a preferred mounting angle.
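
The effect of the three mounting angles on the forward ground blind spot can be illustrated with simple geometry. The sensor height and the intrinsic lower field-of-view limit used below are assumed values for illustration only, not figures from this disclosure.

import math

def front_blind_spot(sensor_height_m, tilt_forward_deg, intrinsic_lower_deg=65.0):
    """Horizontal distance from the sensor to the nearest visible ground point
    in front.  intrinsic_lower_deg is the lower FOV limit below the sensor's
    own horizontal plane; tilting the axis forward steepens it in front."""
    effective = intrinsic_lower_deg + tilt_forward_deg
    if effective >= 90.0:
        return 0.0  # the FOV reaches the ground directly below the sensor
    return sensor_height_m / math.tan(math.radians(effective))

for label, tilt in [("tilted back 18 deg (fig. 7)", -18.0),
                    ("vertical (fig. 8)", 0.0),
                    ("tilted forward 16 deg (fig. 9)", 16.0)]:
    print(f"{label}: {front_blind_spot(1.2, tilt):.2f} m")

Under these assumed numbers the forward-tilted mounting of fig. 9 yields the smallest ground blind spot, consistent with the comparison above.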

Fig. 10 is a block diagram of a method according to an example embodiment. In some examples, the method 1000 of fig. 10 may be performed by a control system (such as the control system 118 of the robotic system 100). In a further example, the method 1000 may be performed by one or more processors (such as the processor(s) 102) executing program instructions, such as the program instructions 106, stored in a data storage device (such as the data storage device 104). The performance of method 1000 may involve any of the robots and/or robotic components shown and described with reference to fig. 1-4, 5A-5B, 6A-6C, 7-9, and/or 10. Other robotic devices may also be used in the performance of method 1000. In a further example, some or all of the blocks of method 1000 may be performed by a control system remote from the robotic device. In yet another example, different blocks of method 1000 may be performed by different control systems located on and/or remote from the robotic device.

In block 1010, method 1000 includes receiving sensor data from a 3D lidar sensor indicative of an environment of a mobile robotic device. The 3D lidar sensor may be mounted in a cutout portion of a mast of the mobile robotic device. The 3D lidar sensor may be fixed relative to the mast such that a vertical field of view of the 3D lidar sensor angles downward toward an area in front of the mobile robotic device. In some examples, the vertical axis of the 3D lidar sensor may be tilted forward, toward the front of the robot, relative to vertical. The sensor data may be point cloud data.
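
As a hedged illustration of the kind of point cloud data involved, the conversion below assumes returns are delivered as range, azimuth, and elevation tuples, which is an assumption rather than something specified here.

import numpy as np

def returns_to_points(ranges, azimuths, elevations):
    """Convert (range, azimuth, elevation) lidar returns, in meters and
    radians, to (N, 3) Cartesian points in the sensor frame (x forward)."""
    ranges = np.asarray(ranges)
    az = np.asarray(azimuths)
    el = np.asarray(elevations)
    x = ranges * np.cos(el) * np.cos(az)
    y = ranges * np.cos(el) * np.sin(az)
    z = ranges * np.sin(el)
    return np.column_stack([x, y, z])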

In some examples, the 3D lidar sensor is positioned at an angle such that a vertical field of view of the 3D lidar sensor includes the ground directly in front of the mobile robotic device. For example, the vertical field of view of the 3D lidar sensor may include or be directly aligned with a portion of the front bumper of the mobile base of the robotic device.

In some examples, the vertical field of view of the 3D lidar sensor is greater than 90 degrees, and the 3D lidar sensor is positioned at an angle such that an upper limit of the vertical field of view of the 3D lidar sensor extends from the 3D lidar sensor at an angle above a horizontal vector pointing in front of the mobile robotic device.

In some examples, the vertical field of view of the 3D lidar sensor extends from a first angle between 10 degrees and 20 degrees above a horizontal vector directed forward of the mobile robotic device to a second angle between 75 degrees and 85 degrees below the horizontal vector.
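
A small sketch of how a forward mount tilt maps assumed intrinsic vertical field-of-view limits of the sensor into the angles above and below horizontal described here; the 16-degree tilt and the intrinsic limits are illustrative assumptions.

def world_fov_limits(tilt_forward_deg, intrinsic_upper_deg, intrinsic_lower_deg):
    """Angles above (+) / below (-) the horizontal vector pointing forward.
    Intrinsic limits are measured from the sensor's own horizontal plane."""
    upper = intrinsic_upper_deg - tilt_forward_deg
    lower = -(intrinsic_lower_deg + tilt_forward_deg)
    return upper, lower

upper, lower = world_fov_limits(16.0, 31.0, 64.0)
print(upper, lower)  # 15.0 above horizontal and 80.0 below: a 95-degree field of view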

In block 1020, the method 1000 further includes controlling the mobile robotic device based on the sensor data. Controlling the mobile robotic device may involve using sensor data from the 3D lidar sensor for any combination of front cliff detection, obstacle detection, and robot positioning.

More specifically, the sensor data may be indicative of the ground directly in front of a mobile base of the mobile robotic device, and the method 1000 may also involve detecting a cliff in front of the mobile robotic device. Controlling the mobile robotic device may then involve navigating a mobile base of the mobile robotic device based on the detected cliff. For example, the mobile robotic device may be controlled to stop or change direction to avoid crossing a detected cliff.

The sensor data from the 3D lidar sensor may also indicate one or more obstacles in front of or to the side of the mobile robotic device. In this case, controlling the mobile robotic device based on the sensor data may include avoiding contact with one or more obstacles. For example, the mobile robotic device may be controlled to stop or change direction to avoid encountering a detected obstacle.

The sensor data from the 3D lidar sensor may also be indicative of one or more surfaces above and behind the mobile robotic device, and the method 1000 may further involve determining a position of the mobile robotic device relative to the one or more surfaces. Determining the position of the mobile robotic device may involve aligning the sensor data with a voxel grid representation of the environment of the mobile robotic device. The mobile robotic device may then be controlled based on the determined position of the mobile robotic device relative to the one or more surfaces.
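
Putting blocks 1010 and 1020 together, a high-level, non-authoritative sketch might look as follows, reusing the illustrative functions from the earlier sketches; the sensor, base, voxel_map, and planner interfaces are hypothetical placeholders rather than any real robot API.

def run_method_1000(sensor, base, voxel_map, planner):
    points = sensor.get_point_cloud()       # block 1010: receive 3D lidar data (hypothetical call)

    # Block 1020: control the mobile robotic device based on the sensor data.
    if detect_front_cliff(points):          # cliff directly ahead of the mobile base
        base.stop()                         # stop or change direction to avoid the cliff
        return
    obstacles = obstacle_points(points)     # obstacles in front of or to the side of the robot
    pose, _ = localize(points, voxel_map,   # align the scan with the voxel grid map
                       planner.candidate_poses())
    base.command_velocity(planner.plan(pose, obstacles))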

Examples described herein relate to optimizing the position and orientation of a 3D lidar sensor on a mobile robotic device for forward cliff detection, obstacle detection, and robot positioning using sensor data from the 3D lidar sensor. The sensor data may also be used for other purposes. Furthermore, the position and orientation of the 3D lidar sensor may be adjusted to optimize the sensor data collected for different applications.

III. Conclusion

The present disclosure is not limited to the particular embodiments described in this application, which are intended to be illustrative of various aspects. It will be apparent to those skilled in the art that many modifications and variations can be made without departing from the spirit and scope of the invention. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, will be apparent to those skilled in the art from the foregoing description. Such modifications and variations are intended to fall within the scope of the appended claims.

The foregoing detailed description has described various features and functions of the disclosed systems, devices, and methods with reference to the accompanying drawings. In the drawings, like reference numerals generally identify like components, unless context dictates otherwise. The exemplary embodiments described herein and in the drawings are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are explicitly contemplated herein.

The blocks representing information processing may correspond to circuitry that may be configured to perform particular logical functions of the methods or techniques described herein. Alternatively or additionally, a block representing processing of information may correspond to a module, segment, or portion of program code (including related data). The program code may include one or more instructions executable by a processor to implement specific logical functions or actions in a method or technique. The program code or related data may be stored on any type of computer readable medium, such as a storage device or other storage medium including a diskette or hard drive.

The computer readable medium may also include non-transitory computer readable media such as computer readable media that store data for short periods of time, such as register memory, processor cache, and Random Access Memory (RAM). The computer readable medium may also include a non-transitory computer readable medium that stores program code or data for a longer period of time, such as a secondary or permanent long term memory, e.g., Read Only Memory (ROM), optical or magnetic disk, compact-disc read only memory (CD-ROM). The computer readable medium may also be any other volatile or non-volatile storage system. The computer-readable medium may be considered, for example, a computer-readable storage medium or a tangible storage device.

Further, a block representing one or more transfers of information may correspond to a transfer of information between software or hardware modules in the same physical device. However, other information transfers may occur between software modules or hardware modules in different physical devices.

The particular arrangements shown in the drawings should not be considered limiting. It should be understood that other embodiments may include more or less of each element shown in a given figure. In addition, some of the illustrated elements may be combined or omitted. Furthermore, example embodiments may include elements not shown in the figures.

While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and not limitation, with the true scope being indicated by the following claims.
