Method and system for generating three-dimensional model for robot scene
Reading note: this technology, "Method and system for generating three-dimensional model for robot scene," was designed and created on 2019-06-27 by 张飚, R·博卡, C·莫拉托, C·马蒂内兹, 汪建军, 滕舟, 黄金苗, M·瓦尔斯特罗姆, J. Its main content is as follows: Embodiments of the present disclosure relate to methods and systems for generating three-dimensional models for robotic scenes. The robot is configured to perform a task on an object using a method for generating a 3D model sufficient to determine a collision-free path and identify the object in the industrial scene. The method includes determining a predefined collision-free path and scanning the industrial scene around the robot. Stored images of the industrial scene are retrieved from memory and analyzed to construct a new 3D model. After detecting the object in the new 3D model, the robot may further scan images of the industrial scene while moving along the collision-free path until the object is identified with a predefined level of certainty. The robot may then perform a robotic task on the object.
1. A method, comprising:
determining a predefined collision-free robot path;
moving a robot along the predefined robot path;
scanning an industrial scene with a scanning sensor positioned on the robot while moving along the predefined robot path;
storing scanned images of the industrial scene in a memory;
constructing a 3D model of the industrial scene based on the images stored in the memory;
planning a next collision-free robot path based on the 3D model;
moving the robot along the next collision-free robot path;
scanning the industrial scene with the scanning sensor positioned on the robot while moving along the next robot path;
storing the new scanned image in the memory; and
reconstructing the 3D model of the industrial scene based on the new scanned image.
2. The method of claim 1, further comprising repeating the planning, scanning, storing, and reconstructing steps until a complete 3D industrial scene is constructed.
3. The method of claim 2, further comprising performing a work task with the robot after completing the 3D industrial scene model.
4. The method of claim 1, wherein the scanning sensor is a 3D camera.
5. The method of claim 1, further comprising moving the scanning sensor with respect to the robot when determining the predefined collision-free path.
6. The method of claim 5, wherein the movement of the scanning sensor comprises pan, tilt, rotation, and translation motions with respect to the robot.
7. The method of claim 5, wherein the moving of the scanning sensor comprises moving an arm of the robot while a base of the robot remains stationary.
8. The method of claim 1, further comprising planning the collision-free path with a controller having a collision-free motion planning algorithm.
9. The method of claim 1, wherein the planning of the collision-free path occurs in real-time without off-line computer analysis.
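The iterative loop of claims 1 and 2 (scan along a collision-free path, store the image, reconstruct the 3D model, plan the next path, repeat until the scene is complete) can be sketched as follows. This is a minimal illustration only; all class and function names are hypothetical, and the capture, coverage estimate, and planner are stubs standing in for a real 3D camera, voxel-occupancy check, and collision-free motion planner.

```python
from dataclasses import dataclass, field


@dataclass
class SceneModel:
    """Accumulates scanned images into a (stubbed) 3D scene model."""
    images: list = field(default_factory=list)

    def add_scan(self, image):
        self.images.append(image)

    def coverage(self):
        # Stand-in for a real completeness estimate (e.g. voxel occupancy);
        # here the model is "complete" after five scans.
        return min(1.0, len(self.images) / 5)


def scan(viewpoint):
    # Placeholder for a 3D-camera capture taken at the given viewpoint.
    return {"viewpoint": viewpoint}


def plan_next_path(model, current):
    # Placeholder collision-free planner: simply advance to the next viewpoint.
    return current + 1


def build_scene_model(initial_path=0, coverage_goal=1.0):
    """Repeat scan / store / reconstruct / plan until the model is complete."""
    model = SceneModel()
    path = initial_path
    while model.coverage() < coverage_goal:
        image = scan(path)                   # scan along the current path
        model.add_scan(image)                # store the image in memory
        path = plan_next_path(model, path)   # plan the next collision-free path
    return model
```

In this sketch the loop terminates when the coverage estimate reaches the goal, mirroring claim 2's "repeating ... until a complete 3D industrial scene is constructed."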
10. A method, comprising:
determining a predefined collision-free robot path;
scanning an industrial scene along the predefined collision-free robot path with a scanning sensor positioned on a robot;
storing the scanned image of the industrial scene in a memory;
constructing a 3D model of the industrial scene based on the images stored in memory;
detecting an object within the 3D model of the industrial scene;
moving the robot along the collision-free robot path to generate a next scanning viewpoint of the detected object;
scanning the industrial scene to obtain a new scanned image of the object;
storing the new scanned image in the memory; and
repeating the moving and scanning steps until the object is identified to a threshold level of certainty.
11. The method of claim 10, wherein the scanning comprises capturing an image with a 3D camera.
12. The method of claim 10, further comprising performing a robotic task on the object after the object has been identified to the threshold level of certainty.
13. The method of claim 12, wherein the robotic task comprises grasping the object.
14. The method of claim 10, further comprising panning, tilting, and rotating the scanning sensor to capture images from different vantage points to generate a new 3D model of the industrial scene.
15. The method of claim 14, further comprising planning a next scan path prior to generating the new 3D model of the industrial scene.
16. The method of claim 15, wherein the planning comprises analyzing the new 3D model with a controller having a collision-free motion planner algorithm.
17. The method of claim 16, further comprising determining the next scan path based on results from a collision-free motion planning algorithm.
18. A method, comprising:
determining a predefined collision-free robot path;
scanning an industrial scene near the robot with a scanning sensor;
storing the scanned image of the industrial scene in a memory;
constructing a 3D model of the industrial scene based on the images stored in memory;
detecting an object within the industrial scene;
determining whether the object is identified with sufficient accuracy;
determining whether a robotic task can be performed on the object; and
performing one or more robotic tasks on the object after the object is identified with sufficient certainty.
19. The method of claim 18, further comprising: if the object is not identified with sufficient certainty, generating a next scanning viewpoint and rescanning the industrial scene.
20. The method of claim 18, further comprising: if the 3D model of the industrial scene is insufficient for a gripping analysis, generating a next scanning viewpoint and rescanning the industrial scene.
21. The method of claim 18, further comprising: if the 3D model of the industrial scene is incomplete, generating a next scanning viewpoint and rescanning the industrial scene.
22. The method of claim 18, wherein the scanning of the industrial scene comprises panning, tilting, rotating, and translating the scanning sensor with respect to the robot.
23. The method of claim 18, further comprising planning, by a controller having a collision-free motion planning algorithm, the collision-free path.
24. The method of claim 23, wherein the planning occurs in real-time without off-line computer analysis.
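The branching logic of claims 18–21 (rescan from a new viewpoint whenever identification, model completeness, or the gripping analysis falls short; otherwise perform the robotic task) reduces to a small decision function. This is a hedged sketch with hypothetical names, not the patent's implementation.

```python
def next_action(identified_ok, model_complete, grip_ready):
    """Decision logic sketched in claims 18-21: rescan the scene from a
    new viewpoint until (a) the object is identified with sufficient
    certainty, (b) the 3D model is complete, and (c) the model is
    sufficient for a gripping analysis; only then perform the task."""
    if not (identified_ok and model_complete and grip_ready):
        return "rescan_from_new_viewpoint"
    return "perform_robotic_task"
```

Any single failing condition routes the robot back to viewpoint generation and rescanning, which matches the three separate dependent claims.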
Technical Field
The present application relates generally to modeling industrial scenes by robots, and more particularly, but not exclusively, to building 3D models of industrial scenes using scanning with visual sensors associated with robots.
Background
With the continued development in the field of robotics, more and more attention has been directed to the development of techniques that allow robots to determine collision-free paths and the position of workpieces or other objects in real time. Randomly placed objects within a robot work area or industrial scene may interfere with certain movements of the robot and prevent work tasks from being completed. Some existing systems have various drawbacks with respect to certain applications. Therefore, there is still a need for further contributions in this area of technology.
Disclosure of Invention
One embodiment of the present application is a unique system and method for generating a real-time 3D model of a robotic work area or industrial scene. Other embodiments include apparatuses, systems, devices, hardware, methods, and combinations for generating collision-free paths for robotic operation in an industrial setting. Further embodiments, forms, features, aspects, benefits, and advantages of the present application will become apparent from the description and drawings provided herein.
Drawings
FIG. 1 is a schematic view of a robotic system according to an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic illustration of an industrial robot scene according to an exemplary embodiment of the present disclosure;
FIG. 3 is a flow diagram for a method for generating a scan path for collision free robot motion according to one embodiment of the present disclosure;
FIG. 4 is a flow diagram for a method for identification of objects in an industrial scene;
FIG. 5 is a flow diagram for a method for generating a 3D model sufficient to reduce ambiguity of an object such that a work task may be performed on the object by a robot;
FIG. 6 is a flow chart for a method for improving object recognition and planning a next scan path for collision-free motion so that the robot can perform tasks on detected objects; and
FIGS. 7A and 7B illustrate a flow chart for a method for improving object recognition, improving grasping confidence, and planning a next scan path for collision-free motion.
Detailed Description
For the purposes of promoting an understanding of the principles of the application, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the application is thereby intended. Any alterations and further modifications in the described embodiments, and any further applications of the principles of the application as described herein are contemplated as would normally occur to one skilled in the art to which the application relates.
With the continued development of the field of robotics, increasing attention is being paid to technologies that allow for more tightly coupled human-robot interaction (HRI). Applying HRI techniques helps the robot understand information about its surroundings and allows the operator to understand, or receive feedback about, the level of understanding the robot has reached. An initial understanding of the work environment or industrial scene may be obtained before interaction between the operator and the robot occurs. As the robot's understanding of the scene increases, the level of human interaction may decrease (i.e., the operator does not have to program all information into the robot before operation, which minimizes setup time).
The robot system disclosed herein provides a control method for retrieving useful information from an industrial scene that the robot can match against its stored memory. The control method enables the robot to gather information about, and understand, elements or objects in the scene while optimizing robot motion to perform tasks within the industrial scene. The robot path reacts to changes in the scene and helps the robot understand the surrounding boundaries. The robot's ability to autonomously retrieve information from the scene facilitates object detection and constructive robot motion generation, reduces the time required for the overall discovery process, and minimizes human involvement in setup and robot programming. Industrial robots may use teach boxes and joysticks to "jog the robot" and teach it waypoints, but this can be cumbersome, time-consuming, and dangerous if the operator is very close to the robot as it moves through the industrial scene. 3D vision and implicit programming can improve robot path programming by generating 3D models of the unknown scene around the robot, but this requires time to teach the robot to hold the 3D sensor and scan the industrial scene. It can be difficult to manually generate scan paths that collect sufficient data from scenes and objects without causing robot collisions and/or reachability issues.
The present disclosure includes methods and systems for automatically generating a complete 3D model of an industrial scene for robotic applications, which reduces engineering time and cost compared to manually programmed robots. The method and system automatically generate scan paths that collect sufficient data about the scene and object locations without causing collision or reachability problems. The robot scanning system is operatively connected to a robot path planning algorithm and a 3D object recognition algorithm to provide control inputs to the robot for performing tasks on objects within the industrial scene.
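One possible way to wire together the components this paragraph names (the scanning system, the 3D object recognition algorithm, the path planning algorithm, and the robot task) is sketched below. Every name here is hypothetical; the four callables are stand-ins for the real sensor, recognizer, planner, and robot controller.

```python
def run_pipeline(capture, recognize, plan_view, perform_task, threshold=0.9):
    """Hypothetical orchestration of the scanning/recognition/planning loop:
    scan, try to recognize the object, and if confidence is below the
    threshold ask the planner for the next collision-free scan viewpoint;
    once confident, hand the object to the robot task."""
    images, viewpoint = [], 0
    while True:
        images.append(capture(viewpoint))         # scan the scene
        obj, confidence = recognize(images)       # 3D object recognition
        if obj is not None and confidence >= threshold:
            return perform_task(obj)              # e.g. grasp the object
        viewpoint = plan_view(images, viewpoint)  # collision-free next view
```

A usage sketch with trivial stubs: `run_pipeline(lambda v: {"viewpoint": v}, lambda imgs: ("part", 0.5 * len(imgs)), lambda imgs, v: v + 1, lambda obj: "picked " + obj)` loops until the stub recognizer's confidence reaches the threshold and then returns the task result.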
Referring now to FIG. 1, an exemplary
The
Referring now to FIG. 2, another exemplary
Teaching and training robots to autonomously discover and understand industrial scenarios and perform robotic work tasks (such as extracting randomly arranged objects from bins) is a complex task. Given 2D RGB (red, green, blue) sensor images, the
The control method for defining the automatic robot path starts from an initial viewpoint of the
Referring now to fig. 3, a flow diagram illustrates a
Referring now to FIG. 4, a
Referring now to fig. 5, a
Referring now to fig. 6, a control method of
Referring now to fig. 7A and 7B, a
In one aspect, the present disclosure includes a method comprising: determining a predefined collision-free robot path; moving the robot along a predefined robot path; scanning an industrial scene with a scanning sensor positioned on a robot while moving along a predefined robot path; storing the scanned image of the industrial scene in a memory; constructing a 3D model of the industrial scene based on the images stored in the memory; planning a next collision-free robot path based on the 3D model; moving the robot along the next collision-free robot path; scanning the industrial scene with a scanning sensor positioned on the robot while moving along the next robot path; and storing the new scanned image in a memory; and reconstructing a 3D model of the industrial scene based on the new scanned image.
In a refined aspect, the method further comprises repeating the planning, scanning, storing, and reconstructing steps until a complete 3D industrial scene is constructed; further comprising performing a work task with the robot after completing the 3D industrial scene model; wherein the scanning sensor is a 3D camera; further comprising moving the scanning sensor with respect to the robot when the predefined collision-free path is determined; wherein the movement of the scanning sensor comprises panning, tilting, rotating, and translating movements with respect to the robot; wherein the movement of the scanning sensor comprises moving an arm of the robot while a base of the robot remains stationary; further comprising planning the collision-free path with a controller having a collision-free motion planning algorithm; and wherein the planning of the collision-free path occurs in real time without off-line computer analysis.
Another aspect of the present disclosure includes a method comprising: determining a predefined collision-free robot path; scanning an industrial scene along a predefined collision-free robot path with a scanning sensor positioned on the robot; storing the scanned image of the industrial scene in a memory; constructing a 3D model of the industrial scene based on the images stored in the memory; detecting an object within a 3D model of an industrial scene; moving the robot along the collision-free robot path to generate a next scanning viewpoint of the detected object; scanning an industrial scene to obtain a new scanned image of an object; storing the new scan image in a memory; and repeating the moving and scanning steps until the object is identified to a threshold level of certainty.
In a refinement aspect of the disclosure, the scanning comprises capturing an image with a 3D camera; further comprising performing a robotic task on the object after the object has been identified to the threshold level of certainty; wherein the robotic task comprises grasping the object; further comprising panning, tilting, and rotating the scanning sensor to capture images from different vantage points to generate a new 3D model of the industrial scene; further comprising planning a next scan path before generating the new 3D model of the industrial scene; wherein the planning comprises analyzing the new 3D model with a controller having a collision-free motion planner algorithm; and further comprising determining the next scan path based on results from the collision-free motion planning algorithm.
Another aspect of the disclosure includes a method comprising: determining a predefined collision-free robot path; scanning an industrial scene close to the robot by using a scanning sensor; storing the scanned image of the industrial scene in a memory; constructing a 3D model of the industrial scene based on the images stored in the memory; detecting an object within an industrial scene; determining whether the object is identified with sufficient accuracy; determining whether a robot task can be performed on an object; and performing one or more robotic tasks on the object after the object is identified with sufficient certainty.
In a refinement aspect of the disclosure, the method further comprises: if the object is not identified with sufficient certainty, generating a next scanning viewpoint and rescanning the industrial scene; further comprising: generating a next scanning viewpoint and rescanning the industrial scene if the 3D model of the industrial scene is insufficient for the gripping analysis; further comprising: if the 3D model of the industrial scene is incomplete, generating a next scanning viewpoint and rescanning the industrial scene; wherein the scanning of the industrial scene comprises panning, tilting, rotating, and translating the scanning sensor with respect to the robot; further comprising: planning a collision-free path by a controller having a collision-free motion planning algorithm; and wherein planning occurs in real-time without off-line computer analysis.
While the application has been illustrated and described in detail in the drawings and foregoing description, the same is to be considered as illustrative and not restrictive in character, it being understood that only the preferred embodiment has been shown and described and that all changes and modifications that come within the spirit of the application are desired to be protected. It should be understood that while the use of words such as preferable, preferably, preferred, or more preferred in the description above indicates that the feature so described may be more desirable, it may nevertheless not be necessary, and embodiments lacking the same are contemplated as within the scope of the application, the scope being defined by the claims that follow. In reading the claims, it is intended that when words such as "a," "an," "at least one," or "at least a portion" are used, there is no intention to limit the claim to only one item unless specifically stated to the contrary in the claim. When the language "at least a portion" and/or "a portion" is used, the item can include a portion and/or the entire item unless specifically stated otherwise.
Unless specified or limited otherwise, the terms "mounted," "connected," "supported," and "coupled" and variations thereof are used broadly and encompass both direct and indirect mountings, connections, supports, and couplings. Further, "connected" and "coupled" are not restricted to physical or mechanical connections or couplings.