Depth sensing robot hand-eye camera using structured light

Document No.: 927687 | Published: 2021-03-02

Note: This technology, "Depth sensing robot hand-eye camera using structured light", was created by Matt Simkins on 2019-05-22. Abstract: The disclosed system includes a robot configured to perform a task on a workpiece. A camera having a field of view is operably connected to the robot. A light system is configured to project structured light onto a region of interest having a smaller area within the field of view. A control system is operably coupled to the robot, and the camera is configured to determine a depth of the workpiece relative to a position of the robot using the structured light projected onto the workpiece within the region of interest.

1. A system, comprising:

a robot configured to perform a robot task;

a vision system including a camera operably connected to the robot, the camera operable to capture images within a field of view;

a control system operable to analyze the images and determine a region of interest within the field of view;

a light system configured to project structured light onto the region of interest; and

wherein the control system is configured to determine a depth of a workpiece within the region of interest.

2. The system of claim 1, wherein the region of interest has an area smaller than the field of view of the camera.

3. The system of claim 1, wherein the control system determines depths of a plurality of workpieces within the region of interest.

4. The system of claim 1, wherein the structured light is defined by at least one of a plurality of patterns, shapes, shadings, intensities, colors, wavelengths, or frequencies.

5. The system of claim 1, wherein the vision system comprises one or more 3D cameras.

6. The system of claim 1, wherein the light system comprises one or more laser beams or encoded light projected onto the region of interest.

7. The system of claim 6, further comprising a reflector positioned in a path of at least one of the laser beams.

8. The system of claim 6, further comprising a refractor positioned in a path of at least one of the laser beams.

9. The system of claim 6, further comprising a diffractive element positioned in a path of at least one of the laser beams.

10. The system of claim 1, wherein the control system directs movement of the robot based on a scanned image of a workpiece within the region of interest.

11. The system of claim 1, wherein at least a portion of the structured light is projected from the robot.

12. A method, comprising:

scanning an industrial robot scene with at least one image sensor having a field of view;

storing image data from the image sensor in a memory;

analyzing the image data;

determining a region of interest within the image data, wherein the region of interest has an area that is smaller than an area of the field of view;

projecting structured light onto the region of interest;

determining a depth of an object located within the region of interest based on an analysis of the object illuminated by the structured light;

communicating the depth information to a controller, the controller being operatively coupled to a robot; and

performing a task on the object with the robot.

13. The method of claim 12, wherein the at least one image sensor is a camera.

14. The method of claim 13, wherein the camera is a 3D video camera.

15. The method of claim 12, wherein the projection of structured light comprises a laser beam projection.

16. The method of claim 12, wherein the structured light is projected onto the region of interest in different patterns, shapes, shadings, intensities, colors, wavelengths, and/or frequencies.

17. The method of claim 12, wherein the task comprises gripping the object with a robotic gripper.

18. A system, comprising:

an industrial scene defining a workspace for a robot;

a vision system having a field of view in the industrial scene;

a control system operably coupled to the robot, the control system configured to receive and analyze data communicated from the vision system;

means for determining, with the control system, a region of interest within a portion of the field of view;

a light system configured to direct structured light onto the region of interest; and

means for determining, with the control system, a position and a depth of an object within the region of interest relative to the robot.

19. The system of claim 18, wherein the control system provides operating commands to the robot.

20. The system of claim 18, wherein the light system includes a laser.

21. The system of claim 18, wherein the structured light comprises a variable output comprising at least one of: a light pattern change, a light shape change, a light shading change, a light intensity change, a light color change, a light wavelength change, and/or a light frequency change.

22. The system of claim 18, wherein the vision system comprises a 3D camera.

23. The system of claim 18, wherein the vision system and portions of the light system are mounted on the robot.

24. The system of claim 18, wherein the robot performs work tasks on the object based on controller analysis of the object having structured light projected thereon.

Technical Field

The present application relates generally to a robot hand-eye camera having a field of view, a control system operable to determine a region of interest within the field of view, and a light system for projecting structured light onto an object located within the region of interest.

Background

A robot may be used with a camera system to determine the position of a work object relative to the robot. Typically, the entire field of view, or "scene", is illuminated with one or more light sources to aid in depth sensing by the camera. Some existing systems have various drawbacks with respect to certain applications. Thus, there remains a need for further contributions in this area of technology.

Disclosure of Invention

One embodiment of the present application is a unique system for sensing the position of an object in a robot workspace or industrial scene. Other embodiments include apparatuses, systems, devices, hardware, methods, and combinations thereof for sensing a position of an object relative to a robot using a camera system, wherein structured light is projected on only a portion of a field of view. Further embodiments, forms, features, aspects, benefits, and advantages of the present application will become apparent from the description and drawings provided herein.

Drawings

FIG. 1 is a schematic illustration of a robotic system according to an exemplary embodiment of the present disclosure;

FIG. 2 is a prior art schematic illustration of structured light being projected onto the entire work area or field of view of a camera;

FIG. 3 is a schematic illustration of a region of interest located in a portion of the field of view of the camera as determined by the control system;

FIG. 4 is a schematic illustration of structured light projected onto a region of interest for facilitating interaction of a robot with objects in the region of interest; and

FIG. 5 is a flow chart illustrating a method of operation.

Detailed Description

For the purposes of promoting an understanding of the principles of the application, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the application is thereby intended. Any alterations and further modifications in the described embodiments, and any further applications of the principles of the application as described herein are contemplated as would normally occur to one skilled in the art to which the application relates.

Structured light systems may be used to enable a computerized control system to measure the shape and position of three-dimensional (3D) objects. A structured light system includes a light source and a pattern generator. A camera measures the appearance of, and variations in, the light pattern projected onto an object. The observed phase of a periodic light pattern is related to the topography, or depth, of the illuminated object. Variations in the light pattern may include changes in the shape, shading, intensity, color, wavelength, and/or frequency of the projected light.
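To make the phase-to-depth relationship concrete, the following is a minimal Python sketch of classic four-step phase-shifting decoding. The patent does not prescribe a particular decoding algorithm, so the choice of four 90-degree-shifted sinusoidal patterns and the function name are illustrative assumptions only.

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Recover the wrapped phase from four captures of a sinusoidal
    pattern projected with phase shifts of 0, 90, 180, and 270 degrees.

    Each argument is a 2D image array; the phase at each pixel encodes
    how the projected fringes were displaced by the object's surface,
    which is what relates to depth.
    """
    i0, i1, i2, i3 = (np.asarray(i, dtype=np.float64) for i in (i0, i1, i2, i3))
    # Standard four-step formula: I_k = A + B*cos(phi + k*pi/2),
    # so I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi).
    return np.arctan2(i3 - i1, i0 - i2)
```

The wrapped phase still requires unwrapping and a calibrated phase-to-depth mapping before it yields metric depth; both steps depend on the specific projector-camera geometry.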

As the field of robotics continues to develop, increasing attention is being directed to techniques that allow robots to complete tasks faster and with lower computational requirements. Typically, structured light is projected across the entire field of view of the vision system to assist the robotic system in determining the position and depth of one or more objects within the field of view. Structured light interference can be problematic if the fields of view of multiple stereo cameras overlap. Furthermore, computing depth based on image analysis of the entire field of view is computationally intensive; a 1920 x 1080 frame contains more than two million pixels, whereas a 200 x 200 pixel region of interest contains only 40,000, roughly fifty times fewer. For these reasons, real-time 3D camera applications typically rely on fast but less accurate algorithms, higher power consumption, and more expensive computer systems. The present disclosure provides a method to reduce computation time, reduce the chance of light reflection interference within the vision system, and reduce the likelihood of eye injury by avoiding projecting light over a wide area.

Referring now to FIG. 1, an illustrative robotic system 10 is shown in an exemplary work environment or industrial setting. It should be understood that the robotic system illustrated herein is exemplary in nature, and that variations in the robot and/or the industrial scene are contemplated herein. The robotic system 10 may include a robot 12 having a vision system 36, the vision system 36 having one or more cameras 38. In one form, one or more of the cameras 38 may be mounted on one of the movable arms 16a, 16b of the robot 12. In other forms, one or more cameras 38 may be positioned remote from the robot 12. The control system 14 includes an electronic controller having a CPU, a memory, and an input/output system, the control system 14 being operatively coupled to the robot 12 and the vision system 36. The control system 14 is operable to receive and analyze images captured by the vision system 36, as well as other sensor data, for operation of the robot 12. In some forms, the control system 14 is defined within a portion of the robot 12.

The robot 12 may include a movable base 20 and a plurality of movable portions connected to the movable base. The movable portions may translate or rotate in any desired direction. By way of example and not limitation, the movable portions illustrated by arrows 18, 26, 28, 30, 32, and 34 may be employed by the exemplary robot 12. A bin 40 for holding a workpiece or other object 42 may form at least part of an exemplary industrial setting, the workpiece or other object 42 to be retrieved by and/or worked upon by the robot 12. An end effector (e.g., a gripping or grasping mechanism) 24 may be attached to the movable arm 16a and used to grasp the object 42 and/or perform other work tasks on the object 42 as desired. It should be understood that the term "bin" is exemplary in nature and, as used herein, means, but is not limited to, any container, bin, cassette, tray, or other structure capable of receiving and/or holding a workpiece, component, or other object. Additional components 44 may be associated with the vision system 36. These components 44 may include an optical system, reflector(s), refractor(s), diffractive element(s), beam expander(s), and the like.

Referring now to FIG. 2, a robot scene 48 is shown in accordance with a prior art embodiment, wherein the work bin 40 may be a part, or the entirety, of the industrial robot scene 48. A light source 50, such as a laser or other known illumination source, may project structured light into the industrial robot scene 48 such that the entire field of view 54 of the camera 38 is filled with structured light 52, the structured light 52 being illustrated in the exemplary embodiment as parallel dashed lines. The field of view 54 of the camera 38 may include a portion or the entirety of the industrial robot scene 48; however, analyzing all objects 42 within the full field of view 54 is time consuming and can be challenging in a real-time robotic work process.

Referring now to FIG. 3, a system for reducing the computation time required for the control system 14 to analyze the objects 42 within the field of view 54 is shown. The control system 14 is operable to determine a region of interest 56, illustrated by a box pattern, that covers only a portion of the objects 42 positioned within the entire field of view 54. As shown in FIG. 4, once the region of interest 56 is determined by the control system 14, structured light 58 can be projected from the light source 50 into the region of interest 56. The control system 14 need only analyze one or more objects 42 in the region of interest 56 defined by a portion of the field of view 54. The region of interest 56 illuminated by the structured light 58, illustrated by line segments, may be captured by the camera 38 of the vision system 36. The control system 14 then analyzes the captured image(s) and determines the location and depth of one or more objects 42 within the region of interest 56. In this manner, the computational burden on the control system 14 may be reduced, and the speed at which the robotic system 10 performs work tasks on the object 42 within the region of interest 56 may thus be increased.
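A minimal Python sketch of this idea follows. The helper names and the array-based representation are illustrative assumptions rather than the patent's implementation; the point is only that the depth routine touches no pixels outside the bounding box selected by the control system.

```python
import numpy as np

def depth_in_roi(image, roi, depth_fn):
    """Run a depth estimator only on region-of-interest pixels.

    image:    full field-of-view frame as a 2D (or 3D) array
    roi:      bounding box (x, y, w, h) chosen by the control system
    depth_fn: any per-image depth routine; it only sees the cropped view
    """
    x, y, w, h = roi
    crop = image[y:y + h, x:x + w]
    # Depth is left undefined (NaN) everywhere outside the ROI.
    depth = np.full(image.shape[:2], np.nan)
    depth[y:y + h, x:x + w] = depth_fn(crop)
    return depth
```

Any depth routine, whether phase decoding or disparity matching, can be passed in as depth_fn, so the computational savings apply regardless of the algorithm chosen.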

In one form, the structured light system may project a pattern of light onto the region of interest 56, and the control system 14 may compare certain features of the pattern in the captured image to the locations of those features in a reference image to determine differences (disparities) that may be used to determine the depth at each location. The region of interest 56 may be illuminated by a time-multiplexed sequence of patterns. Typically, two or more patterns are used to reconstruct a 3D image with sufficiently high resolution. For example, with three time-multiplexed patterns per depth frame, obtaining 20 depth frames per second (fps) requires the light projector 50 to project patterns at 3 x 20 = 60 fps or faster. Various light projectors may be used, such as, for example, a laser generator, or a computer-controlled LCD (liquid crystal display), DMD (digital micromirror device), or LED (light emitting diode) projector. In one form, the structured light 58 can be a light pattern of variously encoded parallel bars, and the image can include a plurality of pixels corresponding to the plurality of parallel bars of light. Other forms are contemplated herein.
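For the disparity-based form described above, depth follows from simple triangulation. The sketch below assumes a calibrated pinhole model with a focal length in pixels and a projector-to-camera baseline in meters; these parameters and their names are illustrative, as the patent does not specify a calibration model.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate depth from the shift (disparity) between a pattern
    feature's observed position and its position in the reference
    image: Z = f * b / d under a calibrated pinhole model.
    """
    if disparity_px <= 0.0:
        raise ValueError("feature unmatched or at infinity")
    return focal_length_px * baseline_m / disparity_px

# Example: f = 1400 px, b = 0.10 m, d = 35 px  ->  Z = 4.0 m
print(depth_from_disparity(35.0, 1400.0, 0.10))
```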

Referring now to FIG. 5, a method 100 is disclosed for performing a work task on an object 42 using structured light 58 to assist in determining the position and depth of the object 42. At step 102, the structured light 58 is turned off. At step 104, the control system 14, including the vision system 36, identifies the object 42 and calculates the region of interest 56 within the field of view 54. Then, at step 106, the light source 50 projects structured light 58 onto the region of interest 56. At step 108, the control system 14 calculates the position and depth of the object 42 within the region of interest 56 using only the pixels bounded by the bounding box that defines the region of interest 56. At step 110, the robot 12 performs a robotic task on the object 42 within the region of interest 56. The robotic task may include, but is not limited to, grasping, moving, assembling, or performing other work operations on the object 42. It should be understood that when the terms robot, robotic task, robotic system, and the like are used herein, the system is not limited to a single robot, but may instead include multiple robots operating in an industrial setting.
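The flow of method 100 can be summarized as a short control loop. Every object and method name below is hypothetical scaffolding introduced to make the sequence of FIG. 5 concrete in Python; the patent defines the steps, not these APIs.

```python
def run_work_cycle(vision, projector, control_system, robot):
    """One pass through method 100 of FIG. 5 (hypothetical APIs)."""
    projector.off()                                       # step 102
    frame = vision.capture()
    roi = control_system.find_region_of_interest(frame)   # step 104
    projector.project_structured_light(roi)               # step 106
    lit_frame = vision.capture()
    # Step 108: analyze only the pixels inside the ROI bounding box.
    position, depth = control_system.locate_object(lit_frame, roi)
    robot.perform_task(position, depth)                   # step 110
```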

In one aspect, the present disclosure includes a system comprising: a robot configured to perform a robot task; a vision system including a camera operably connected to the robot, the camera operable to capture images within a field of view; a control system operable to analyze the images and determine a region of interest within the field of view; a light system configured to project structured light onto the region of interest; and wherein the control system is configured to determine a depth of a workpiece within the region of interest.

In refining aspects: the region of interest has an area smaller than the field of view of the camera; the control system determines depths of a plurality of workpieces within the region of interest; the structured light is defined by at least one of a plurality of patterns, shapes, shadings, intensities, colors, wavelengths, or frequencies; the vision system comprises one or more 3D cameras; the light system comprises one or more laser beams projected onto the region of interest; the system further comprises a reflector, a refractor, and/or a diffractive element positioned in a path of at least one of the laser beams; the control system directs movement of the robot based on a scanned image of the workpieces within the region of interest; and at least a portion of the structured light is projected from the robot.

Another aspect of the disclosure includes a method comprising: scanning an industrial robot scene with at least one image sensor having a field of view; storing image data from the image sensor in a memory; analyzing the image data; determining a region of interest within the image data, wherein the region of interest has an area that is smaller than an area of the field of view; projecting structured light onto the region of interest; determining a depth of an object located within the region of interest based on an analysis of the object illuminated by the structured light; communicating the depth information to a controller, the controller being operatively coupled to a robot; and performing a task on the object with the robot.

In refining aspects, the method includes: wherein the at least one image sensor is a camera; wherein the camera is a 3D video camera; wherein the projection of structured light comprises a laser beam projection; wherein the structured light is projected onto the region of interest in different patterns, shapes, shadings, intensities, colors, wavelengths, and/or frequencies; and wherein the task comprises gripping the object with a robotic gripper.

Another aspect of the present disclosure includes a system comprising: an industrial scene defining a workspace for a robot; a vision system having a field of view in the industrial scene; a control system operatively coupled to the robot, the control system configured to receive and analyze data communicated from the vision system; means for determining, with the control system, a region of interest within a portion of the field of view; a light system configured to direct structured light onto the region of interest; and means for determining, with the control system, a position and a depth of an object within the region of interest relative to the robot.

In refining aspects: the control system provides operating commands to the robot; the light system includes a laser; the structured light comprises a variable output comprising at least one of a light pattern change, a light shape change, a light shading change, a light intensity change, a light color change, a light wavelength change, and/or a light frequency change; the vision system comprises a 3D camera; the vision system and portions of the light system are mounted on the robot; and the robot performs work tasks on the object based on controller analysis of the object having structured light projected thereon.

While the application has been illustrated and described in detail in the drawings and foregoing description, the same is to be considered as illustrative and not restrictive in character, it being understood that only the preferred embodiment has been shown and described and that all changes and modifications that come within the spirit of the application are desired to be protected. It should be understood that while the use of words such as preferred, preferably, preferred or more preferred in the description above indicate that the feature so described may be preferred, it nonetheless may not be necessary and embodiments lacking the same may be contemplated as within the scope of the application, the scope being defined by the claims that follow. When words such as "a," "an," "at least one," or "at least a portion" are used in reading the claims, it is not intended that the claims be limited to only one item unless specifically stated to the contrary in the claims. When the language "at least a portion" and/or "a portion" is used, the item can include a portion and/or the entire item unless specifically stated to the contrary.

Unless specified or limited otherwise, the terms "mounted," "connected," "supported," and "coupled" and variations thereof are used broadly and encompass both direct and indirect mountings, connections, supports, and couplings. Further, "connected" and "coupled" are not restricted to physical or mechanical connections or couplings.
