Robot grabbing method, system, device and medium based on 3D vision


Description: This technology, "A robot grabbing method, system, device and medium based on 3D vision", was created by 王城, 王耿, 陈和平 and 席宁 on 2021-08-17. The invention discloses a robot grabbing method, system, device and medium based on 3D vision, wherein the method comprises: for an assembled product comprising n parts, acquiring a point cloud of the assembled product; segmenting the point cloud of the assembled product according to a point cloud cutting algorithm to obtain n sub-point clouds; matching a point cloud model of each part from a preset database according to the sub-point clouds; acquiring the point cloud model of the corresponding part according to a preset part assembly sequence, and acquiring the grabbing point of the part; acquiring the conversion relation between the grabbing point of the part and the grabbing point of the hand-eye calibration model; and acquiring the grabbing pose of the robot gripper according to the conversion relation, and controlling the robot to grab the part according to the grabbing pose. Based on a single robot, the invention realizes the identification and grabbing of a plurality of parts, can assemble a plurality of objects, improves the degree of automation, effectively controls cost, and can be widely applied to the technical field of intelligent robots.

1. A robot grabbing method based on 3D vision is characterized by comprising the following steps:

for an assembled product comprising n parts, acquiring a point cloud of the assembled product;

dividing the point cloud of the assembled product according to a point cloud cutting algorithm to obtain n sub-point clouds;

matching and obtaining a point cloud model of each part from a preset database according to the sub-point cloud;

acquiring a point cloud model of a corresponding part according to a preset part assembly sequence, and acquiring a grabbing point of the part;

acquiring a conversion relation between a grabbing point of a part and a grabbing point of a hand-eye calibration model;

and acquiring the grabbing pose of the robot gripper according to the conversion relation, and controlling the robot to grab the part according to the grabbing pose.

2. The 3D vision-based robot grabbing method according to claim 1, wherein the obtaining of the conversion relationship between the grabbing points of the part and the grabbing points of the hand-eye calibration model comprises:

after the hand-eye calibration, establishing a new three-dimensional coordinate system by taking the grabbing point of the part as the origin;

and carrying out translation operation and/or rotation operation on the new three-dimensional coordinate system so that the origin of the new three-dimensional coordinate system coincides with the grabbing point of the hand-eye calibration model, thereby obtaining the conversion relation between the grabbing point of the part and the grabbing point of the hand-eye calibration model.

3. The 3D vision-based robot grabbing method according to claim 1, wherein the obtaining of the conversion relationship between the grabbing points of the part and the grabbing points of the hand-eye calibration model comprises:

before the hand-eye calibration, establishing a first three-dimensional coordinate system by taking a grabbing point of a hand-eye calibration model as an origin;

establishing a second three-dimensional coordinate system by taking the grabbing point of the part as the origin;

and carrying out translation operation and/or rotation operation on the second three-dimensional coordinate system so that the second three-dimensional coordinate system coincides with the first three-dimensional coordinate system, and generating a homogeneous transformation matrix as the conversion relation.

4. The 3D vision-based robot grabbing method according to claim 1, characterized in that the hand-eye calibration model is calibrated by:

collecting point cloud data of a calibration scene by a 3D camera;

acquiring a point cloud of a calibration object from the point cloud data;

matching the obtained point cloud of the calibration object with a preset point cloud of the calibration object to obtain the pose of the calibration object in a coordinate system of the 3D camera;

realizing hand-eye calibration according to the obtained pose;

wherein the calibration object is fixed at the end of a mechanical arm of the robot, and the 3D camera is installed above the robot.

5. The 3D vision-based robotic grasping method according to claim 4, characterized in that the calibration object is a tee.

6. The 3D vision-based robotic grasping method according to claim 1, further comprising a step of pre-establishing a three-dimensional model of a part, including:

scanning the part to obtain a point cloud model of the part;

and after the point cloud model is marked with the grabbing points of the parts, storing the point cloud model.

7. The 3D vision-based robot grabbing method according to claim 2 or 3, characterized in that the translation of the coordinate points is calculated by the following formula:

p2 = p1 + t, i.e. (Xp2, Yp2, Zp2) = (Xp1 + Xt, Yp1 + Yt, Zp1 + Zt)

wherein p1 represents a first coordinate point; p2 represents a second coordinate point; Xp1, Yp1 and Zp1 are the coordinates of the first coordinate point p1; t is the offset between the first coordinate point p1 and the second coordinate point p2; Xt represents the offset on the X axis, Yt represents the offset on the Y axis, and Zt represents the offset on the Z axis.

8. A robotic grasping system based on 3D vision, comprising:

the point cloud acquisition module is used for acquiring a point cloud of an assembled product comprising n parts;

the point cloud cutting module is used for segmenting the point cloud of the assembled product according to a point cloud cutting algorithm to obtain n sub-point clouds;

the point cloud matching module is used for matching and acquiring a point cloud model of each part from a preset database according to the sub-point cloud;

the grabbing point acquisition module is used for acquiring a point cloud model of the corresponding part according to a preset part assembly sequence and acquiring grabbing points of the part;

the grabbing point conversion module is used for obtaining the conversion relation between the grabbing points of the parts and the grabbing points of the hand-eye calibration model;

and the pose inverse calculation module is used for acquiring the grabbing pose of the robot gripper according to the conversion relation and controlling the robot to grab the part according to the grabbing pose.

9. A robot gripping device based on 3D vision, comprising:

at least one processor;

at least one memory for storing at least one program;

the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method according to any one of claims 1-7.

10. A storage medium having stored therein a processor-executable program, wherein the program, when executed by a processor, is adapted to perform the method according to any one of claims 1-7.

Technical Field

The invention relates to the technical field of intelligent robots, in particular to a robot grabbing method, system, device and medium based on 3D vision.

Background

In recent years, robots have been widely used in fields such as medical treatment, industrial production, environmental monitoring and city management. Different application environments place higher demands on a robot's adaptability and working efficiency, and combining machine vision with the robot greatly improves its efficiency and its ability to grab objects in varied environments.

When a product comprises a plurality of parts, its assembly requires the robot to grab different parts. At present most robot assembly is still performed at fixed point locations, and a single robot usually assembles only a single kind of part, so the flexibility with respect to the assembled object is poor. If a plurality of robots cooperate on a plurality of parts, the occupied space and the power consumption raise the cost, and when an enterprise updates its products the cost of adjusting and refitting the robots also rises greatly. If a single robot is used to assemble different parts, the pose of the robot gripper with respect to each part to be grabbed must be calculated before an effective grabbing pose can be reached. When a plurality of parts are exchanged and assembled, the conventional method cannot acquire the pose of the robot gripper relative to the part to be grabbed, and therefore cannot acquire the corresponding grabbing pose.

Disclosure of Invention

To solve at least one of the technical problems in the prior art to some extent, an object of the present invention is to provide a robot grasping method, system, device and medium based on 3D vision.

The technical scheme adopted by the invention is as follows:

a robot grabbing method based on 3D vision comprises the following steps:

for an assembled product comprising n parts, acquiring a point cloud of the assembled product;

dividing the point cloud of the assembled product according to a point cloud cutting algorithm to obtain n sub-point clouds;

matching and obtaining a point cloud model of each part from a preset database according to the sub-point cloud;

acquiring a point cloud model of a corresponding part according to a preset part assembly sequence, and acquiring a grabbing point of the part;

acquiring a conversion relation between a grabbing point of a part and a grabbing point of a hand-eye calibration model;

and acquiring the grabbing pose of the robot gripper according to the conversion relation, and controlling the robot to grab the part according to the grabbing pose.

Further, the obtaining of the conversion relationship between the grabbing point of the part and the grabbing point of the hand-eye calibration model includes:

after the hand-eye calibration, establishing a new three-dimensional coordinate system by taking the grabbing point of the part as the origin;

and carrying out translation operation and/or rotation operation on the new three-dimensional coordinate system so that the origin of the new three-dimensional coordinate system coincides with the grabbing point of the hand-eye calibration model, thereby obtaining the conversion relation between the grabbing point of the part and the grabbing point of the hand-eye calibration model.

Further, the obtaining of the conversion relationship between the grabbing point of the part and the grabbing point of the hand-eye calibration model includes:

before the hand-eye calibration, establishing a first three-dimensional coordinate system by taking a grabbing point of a hand-eye calibration model as an origin;

establishing a second three-dimensional coordinate system by taking the grabbing point of the part as the origin;

and carrying out translation operation and/or rotation operation on the second three-dimensional coordinate system so that the second three-dimensional coordinate system coincides with the first three-dimensional coordinate system, and generating a homogeneous transformation matrix as the conversion relation.

Further, the hand-eye calibration model is calibrated in the following way:

collecting point cloud data of a calibration scene by a 3D camera;

acquiring a point cloud of a calibration object from the point cloud data;

matching the obtained point cloud of the calibration object with a preset point cloud of the calibration object to obtain the pose of the calibration object in a coordinate system of the 3D camera;

realizing hand-eye calibration according to the obtained pose;

wherein the calibration object is fixed at the end of a mechanical arm of the robot, and the 3D camera is installed above the robot.

Further, the calibration object is a tee pipe fitting.

Further, the method also comprises the step of establishing a three-dimensional model of the part in advance, and the method comprises the following steps:

scanning the part to obtain a point cloud model of the part;

and after the point cloud model is marked with the grabbing points of the parts, storing the point cloud model.

Further, the translation of the coordinate point is calculated using the following formula:

p2 = p1 + t, i.e. (Xp2, Yp2, Zp2) = (Xp1 + Xt, Yp1 + Yt, Zp1 + Zt)

wherein p1 represents a first coordinate point; p2 represents a second coordinate point; Xp1, Yp1 and Zp1 are the coordinates of the first coordinate point p1; t is the offset between the first coordinate point p1 and the second coordinate point p2; Xt represents the offset on the X axis, Yt represents the offset on the Y axis, and Zt represents the offset on the Z axis.

The other technical scheme adopted by the invention is as follows:

a 3D vision-based robotic grasping system, comprising:

the point cloud acquisition module is used for acquiring a point cloud of an assembled product comprising n parts;

the point cloud cutting module is used for segmenting the point cloud of the assembled product according to a point cloud cutting algorithm to obtain n sub-point clouds;

the point cloud matching module is used for matching and acquiring a point cloud model of each part from a preset database according to the sub-point cloud;

the grabbing point acquisition module is used for acquiring a point cloud model of the corresponding part according to a preset part assembly sequence and acquiring grabbing points of the part;

the grabbing point conversion module is used for obtaining the conversion relation between the grabbing points of the parts and the grabbing points of the hand-eye calibration model;

and the pose inverse calculation module is used for acquiring the grabbing pose of the robot gripper according to the conversion relation and controlling the robot to grab the part according to the grabbing pose.

The other technical scheme adopted by the invention is as follows:

A3D vision-based robotic grasping device comprising:

at least one processor;

at least one memory for storing at least one program;

when the at least one program is executed by the at least one processor, the at least one processor implements the method described above.

The other technical scheme adopted by the invention is as follows:

a storage medium having stored therein a processor-executable program for performing the method as described above when executed by a processor.

The invention has the beneficial effects that: the invention is based on a single robot, realizes the identification and the grabbing of a plurality of parts, can assemble a plurality of objects, improves the automation degree and effectively controls the cost.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments are briefly introduced below. It should be understood that the following drawings only illustrate some embodiments of the technical solutions of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.

FIG. 1 is a flowchart illustrating steps of a 3D vision-based robot grasping method according to an embodiment of the present invention;

FIG. 2 is a schematic view of a robotic assembly system in accordance with an embodiment of the present invention;

FIG. 3 is a physical picture of a three-way water pipe according to an embodiment of the present invention;

FIG. 4 is a model diagram of a three-way water pipe according to an embodiment of the present invention;

FIG. 5 is a schematic diagram of the capture point homing in an embodiment of the present invention.

Detailed Description

Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.

In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, front, rear, left, right, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and does not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.

In the description of the present invention, "several" means one or more and "a plurality" means two or more; "greater than", "less than", "exceeding" and the like are understood as excluding the stated number, while "above", "below", "within" and the like are understood as including it. Where "first" and "second" are used, they serve only to distinguish technical features and are not to be understood as indicating or implying relative importance, the number of the indicated technical features, or their precedence.

In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.

When an assembly robot performs assembly using machine vision, camera calibration and hand-eye calibration are first needed to determine the positional relation of the assembled object in the camera coordinate system, the world coordinate system and the robot gripper coordinate system, so that a homogeneous transformation matrix or three-dimensional pose relation relative to the assembly robot is generated and the pose of the object relative to the robot gripper is determined. A grabbing pose is then generated according to the model, motion planning is completed and the assembly task is executed. When the robot faces different objects during assembly, the matched model is selected and replaced, so that the pose of the grabbed object is obtained. However, this approach has the following problem: when a plurality of parts are exchanged and assembled, the pose of the robot gripper relative to the part to be grabbed cannot be obtained, so the corresponding grabbing pose cannot be obtained.

To solve the above problems, the prior art provides various solutions, such as determining the part grabbing pose through multiple force constraints, which addresses how to obtain the assembly pose. However, this brings new problems: the point cloud needs to be processed in advance, which increases the complexity of the algorithm and the grabbing time, and thus the overall cost of the robot assembly system. Besides, most current methods grab parts through deep learning, but their disadvantages are also obvious: the development of a deep learning solution is difficult and its cost is high.

In view of the above problems, the present embodiment provides a robot grabbing method based on 3D vision, which can rapidly grab objects of arbitrary shape in three-dimensional space; in addition, when parts are grabbed and assembled, a plurality of different objects can be matched and the grabbing pose corresponding to each of them can be calculated. It should be emphasized that the robot in this embodiment has a gripper at its end rather than a suction cup, so the grabbing point must be found first; if the object is not grabbed at the grabbing point, it may be knocked and damaged. The method comprises the following steps:

and S1, constructing a point cloud model database of the part.

Scanning a part, constructing a point cloud model of the part, marking a grabbing point of the part in the point cloud model, and storing the point cloud model.

In some embodiments, the point cloud model of the part may be processed secondarily to increase the speed of point cloud processing in the point cloud matching process, wherein the secondary processing includes downsampling and de-triangularization.
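A minimal sketch of such secondary processing, assuming the part model is available as a mesh file named part.ply and using the Open3D library (not named in this patent); the file names and voxel size are illustrative assumptions:

```python
# Sketch only: "de-triangularization" is taken here to mean keeping the mesh vertices
# as a plain point cloud; file names and voxel size are assumptions.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("part.ply")        # scanned part model (assumed file name)
pcd = o3d.geometry.PointCloud(mesh.vertices)        # drop triangles, keep only the points
pcd_down = pcd.voxel_down_sample(voxel_size=0.005)  # downsample to speed up later matching
o3d.io.write_point_cloud("part_down.pcd", pcd_down) # store the lightweight model in the database
```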

S2, acquiring a point cloud of the assembled product for the assembled product comprising n parts.

And S3, segmenting the point clouds of the assembled products according to a point cloud cutting algorithm to obtain n sub-point clouds.

And S4, matching and acquiring the point cloud model of each part from a preset database according to the sub-point cloud.

And S5, acquiring the point cloud model of the corresponding part according to the preset part assembly sequence, and acquiring the grabbing point of the part.

And S6, acquiring a conversion relation between the grabbing point of the part and the grabbing point of the hand-eye calibration model.

And S7, acquiring the grabbing pose of the robot gripper according to the conversion relation, and controlling the robot to grab the part according to the grabbing pose.
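As a rough illustration of how S6 and S7 fit together, the following numpy sketch chains a hand-eye calibration result with a matched grasp-point pose; the matrix names, frame conventions and numeric values are assumptions for illustration, not the patent's notation:

```python
import numpy as np

# Assumed convention: 4x4 homogeneous matrices; T_base_cam comes from hand-eye
# calibration, T_cam_grasp from matching the part's point cloud model (S4-S6).
T_base_cam = np.eye(4)                      # placeholder calibration result
T_cam_grasp = np.eye(4)                     # placeholder matched grasp-point pose
T_cam_grasp[:3, 3] = [0.10, -0.05, 0.40]    # grasp point 40 cm in front of the camera

# S7: grasp pose of the gripper expressed in the robot base frame.
T_base_grasp = T_base_cam @ T_cam_grasp
print(T_base_grasp[:3, 3])                  # position sent to the robot controller
```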

In some alternative embodiments, step S6 is implemented by the following steps A1-A2:

A1, after the hand-eye calibration, establishing a new three-dimensional coordinate system by taking the grabbing point of the part as the origin;

and A2, performing translation operation and/or rotation operation on the new three-dimensional coordinate system so that the origin of the new three-dimensional coordinate system coincides with the grabbing point of the hand-eye calibration model, thereby obtaining the conversion relation between the grabbing point of the part and the grabbing point of the hand-eye calibration model.

In some alternative embodiments, step S6 is implemented by steps B1-B3 as follows:

B1, before the hand-eye calibration, establishing a first three-dimensional coordinate system by taking the grabbing point of the hand-eye calibration model as the origin;

B2, establishing a second three-dimensional coordinate system by taking the grabbing point of the part as the origin;

and B3, performing translation operation and/or rotation operation on the second three-dimensional coordinate system so that the second three-dimensional coordinate system coincides with the first three-dimensional coordinate system, and generating a homogeneous transformation matrix as the conversion relation.
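A small numpy sketch of step B3, composing a homogeneous transformation matrix from a rotation and a translation; the 30° rotation about Z and the offset are illustrative values only:

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous matrix from a 3x3 rotation R and a translation t."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

theta = np.deg2rad(30.0)                           # assumed rotation aligning the two frames
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
t = np.array([0.02, -0.01, 0.0])                   # assumed offset between the two grasp points

H = homogeneous(Rz, t)                             # the conversion relation used later in S7
```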

In some alternative embodiments, the hand-eye calibration model between the robot and the 3D camera is completed by the following steps C1-C4:

C1, collecting point cloud data of a calibration scene by a 3D camera;

C2, acquiring a point cloud of the calibration object from the point cloud data;

C3, matching the obtained point cloud of the calibration object with a preset point cloud of the calibration object to obtain the pose of the calibration object in the coordinate system of the 3D camera;

and C4, realizing hand-eye calibration according to the obtained pose.

The calibration object is fixed at the end of a mechanical arm of the robot, and the 3D camera is installed above the robot. In some alternative embodiments, a tee pipe fitting is used as the calibration object: it is common in daily life (e.g., a three-way water pipe), readily available, and has no discontinuities or vertices, so it is well suited as a calibration object; a physical picture of the tee is shown in fig. 3. Referring to fig. 4, the dimensions of the three-way water pipe are measured with a vernier caliper, and a model diagram is drawn with SolidWorks.
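The pose of the calibration object in the camera coordinate system (steps C1-C3) can be estimated by registering the captured point cloud against the stored model. A hedged Open3D sketch, with assumed file names and an assumed ICP distance threshold; in practice a global registration step would normally provide the initial alignment:

```python
import numpy as np
import open3d as o3d

scene = o3d.io.read_point_cloud("calib_scene.pcd")   # C1: captured calibration scene (assumed file)
tee = o3d.io.read_point_cloud("tee_model.pcd")       # preset point cloud of the tee calibration object

# C2 is assumed done: `scene` has already been cropped to the calibration object.
# C3: point-to-point ICP; arguments are source, target, distance threshold, initial guess, estimator.
result = o3d.pipelines.registration.registration_icp(
    tee, scene, 0.01, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

T_cam_object = result.transformation                 # pose of the calibration object in the camera frame
print(T_cam_object)
```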

The foregoing is explained in detail with reference to specific embodiments below.

The present embodiment proposes a method for grabbing and automatically assembling arbitrary objects, which is applicable to a robot assembly system. The method comprises two parts. The first part is matching: when the initial pose of the object to be assembled changes, the grabbing pose obtained in the next matching changes accordingly, with reference to the pose of the model used in the hand-eye calibration. The second part is the grabbing of arbitrary objects: a background database is created to store the secondarily processed model data. If the assembly robot system needs to switch to different parts, only the corresponding part models need to be selected on the control platform, and the system calculates the corresponding poses from the model data during matching.

In the present embodiment, referring to fig. 2, the robot assembly system includes a PC 1, assembly workpieces (i.e., parts, including a first assembly workpiece 4 and a second assembly workpiece 5 in fig. 2), a YUMI robot 3 and a 3D camera 2. The robot is fixed on a workbench and the 3D camera is fixed above the working range of the robot; the robot is connected to the PC through an RJ45 communication port, and the 3D camera exchanges data and control signals with the PC through a USB 3.0 interface.

A database containing the point cloud models of the parts is created; on the basis of this database, the relation between each part object and the captured point cloud can be obtained. A point cloud model carries attribute information of the part, including the number of points, the size, the part name and the like.
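One possible shape of such a database record (field names are illustrative, not the patent's schema):

```python
from dataclasses import dataclass

@dataclass
class PartRecord:
    name: str             # part name
    num_points: int       # number of points in the stored model
    size_mm: tuple        # overall dimensions of the part
    model_path: str       # path to the stored point cloud model
    grasp_point: tuple    # marked grasp point (x, y, z) in the model frame

tee = PartRecord("tee_pipe", 12000, (40.0, 40.0, 25.0), "models/tee.pcd", (0.0, 0.0, 12.5))
```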

Based on the created database, the model grabbing point homing operation is performed, and the operation is realized by two methods:

the method comprises the following steps: reading a three-dimensional model (namely a point cloud model) of the part, establishing a new three-dimensional coordinate system by taking a grabbing point of the part as an original point, and carrying out three-dimensional translation and rotation on the newly established three-dimensional coordinate system to ensure that the original point of the new coordinate system is superposed with the grabbing point of the hand-eye calibration model. And saving the processed return-to-origin point model file.

Method two: before the hand-eye calibration, a first coordinate system is established by taking the grabbing point of the hand-eye calibration model as the origin, and a second coordinate system is established by taking the grabbing point of the part as the origin; the two coordinate systems are made to coincide by three-dimensional translation and rotation, generating a new model, and the hand-eye calibration is then carried out to generate a calibration file. Before the pose is inversely calculated, the matched model is subjected to the same coordinate transformation as used in the hand-eye calibration, and the processed grabbing-point-homed model file is saved. Referring to FIG. 5: in the first step, a grabbing point O1 on a part P1 (e.g., a tee fitting) is translated to the origin of a first world coordinate system C1. In the second step, the grabbing point O2 of a part P2 is determined, and a second world coordinate system C2 is established by taking O2 as the origin. In the third step, the second world coordinate system C2 is made to coincide with the first world coordinate system through translation and rotation, and the corresponding homogeneous transformation matrix is generated at the same time. In the fourth step, the part P2 is converted into the first world coordinate system C1 through the homogeneous transformation matrix, realizing the homing of the grabbing point.
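The four steps of Fig. 5 amount to applying one homogeneous matrix that sends the grabbing point O2 to the origin of C1. A hedged numpy sketch with made-up grabbing point, rotation and sample points:

```python
import numpy as np

grasp_O2 = np.array([0.05, 0.02, 0.00])        # step 2: grabbing point of part P2 (assumed value)
theta = np.deg2rad(90.0)                       # step 3: rotation aligning C2 with C1 (assumed, about Z)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

H = np.eye(4)                                  # homogeneous transformation C2 -> C1
H[:3, :3] = R
H[:3, 3] = -R @ grasp_O2                       # chosen so that O2 lands exactly on the origin of C1

points_P2 = np.array([[0.05, 0.02, 0.00],      # the grabbing point itself
                      [0.07, 0.02, 0.01]])     # another point on part P2
homog = np.hstack([points_P2, np.ones((2, 1))])
points_in_C1 = (H @ homog.T).T[:, :3]          # step 4: part P2 expressed in C1
print(points_in_C1[0])                         # ~[0, 0, 0]: the grabbing point has been homed
```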

The conversion between three-dimensional coordinate systems can be realized in the following manner.

1) Translation of three-dimensional coordinates:

A point p1 in three-dimensional space is shifted to a point p2 as in formula (1):

p2 = p1 + t, i.e. (Xp2, Yp2, Zp2) = (Xp1 + Xt, Yp1 + Yt, Zp1 + Zt)    (1)

where Xp1, Yp1, Zp1 are the coordinates of the point p1 and t is the offset of p2 relative to p1, a quantity having both magnitude and direction.

2) Rotation of three-dimensional coordinates:

the rotation of the three-dimensional coordinates is mainly combined by rotation in three directions of XYZ to determine the rotation mode of the object.

The relation between the part and the robot gripper determined by the hand-eye calibration file is fixed, i.e. a rigid transformation is generated, so once the pose of the part is known, the pose of the gripper is determined. Moving the coordinate system O2 to the coordinate system O1 is calculated as:

O1 = R·O2 + t    (3)

where R is the rotation and t is the amount of translation between the two points; R and t in formula (3) can be written together in matrix form as H, the homogeneous transformation matrix from O1 to O2.

Through the above formula the rigid transformation is expressed as a homogeneous transformation matrix, which represents, in the coordinate system C2, the calculation that converts the origin O2 to O1, where H combines the rotation R and the translation t.

For example, if a part needs to be rotated about the Y axis and then about the Z axis, its homogeneous transformation matrix can be expressed as formula (6). The homogeneous transformation matrix of formula (2) describes the rotation of the object in space about the X, Y and Z axes, while R in formula (3) represents the rotation that occurs when the object moves from O2 to O1; the homogeneous transformation matrix H can therefore be obtained from formulas (3), (4) and (5) together with the known points. The three-dimensional model is then transformed by the homogeneous transformation matrix to determine the new three-dimensional model.

After the conversion relation between the grabbing point of the part and the grabbing point of the hand-eye calibration model is obtained, the rigid transformation between the part and the gripper is determined, and the relation between the world coordinate system and the camera is also determined.

The change in the pose of a part can be expressed by successive rotations and translations; for example, rotating an object from point P1 to point P2 can be achieved by formula (7):

R = Rx(RotX)·Ry(RotY)·Rz(RotZ)    (7)

RotX denotes the angle of rotation about the X axis, RotY denotes the angle of rotation about the Y axis, RotZ denotes the angle of rotation about the Z axis, and R denotes a rotation matrix.
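A direct numpy transcription of formula (7) with illustrative angles; the multiplication order follows the formula as written (X, then Y, then Z):

```python
import numpy as np

def rot_x(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_y(a):
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [ 0,         1, 0        ],
                     [-np.sin(a), 0, np.cos(a)]])

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0,          0,         1]])

RotX, RotY, RotZ = np.deg2rad([10.0, 20.0, 30.0])   # illustrative rotation angles
R = rot_x(RotX) @ rot_y(RotY) @ rot_z(RotZ)         # composed rotation matrix of formula (7)
```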

After the conversion relation is obtained, the pose of the object is obtained by inverse pose calculation, the pose information is transmitted to the robot, and the robot performs assembly according to the assembly program stored in it.

In summary, compared with the prior art, the method of the present embodiment has the following beneficial effects: it provides a simple and efficient grabbing method that solves the problem of quickly switching the parts a robot assembles; meanwhile, a plurality of objects can be assembled and complex workpieces can be handled, which improves the degree of automation.

The embodiment also provides a robot grasping system based on 3D vision, including:

the point cloud acquisition module is used for acquiring a point cloud of an assembled product comprising n parts;

the point cloud cutting module is used for segmenting the point cloud of the assembled product according to a point cloud cutting algorithm to obtain n sub-point clouds;

the point cloud matching module is used for matching and acquiring a point cloud model of each part from a preset database according to the sub-point cloud;

the grabbing point acquisition module is used for acquiring a point cloud model of the corresponding part according to a preset part assembly sequence and acquiring grabbing points of the part;

the grabbing point conversion module is used for obtaining the conversion relation between the grabbing points of the parts and the grabbing points of the hand-eye calibration model;

and the pose inverse calculation module is used for acquiring the grabbing pose of the robot gripper according to the conversion relation and controlling the robot to grab the part according to the grabbing pose.

The robot grasping system based on 3D vision of the embodiment can execute the robot grasping method based on 3D vision provided by the embodiment of the method of the invention, can execute any combination of the implementation steps of the embodiment of the method, and has corresponding functions and beneficial effects of the method.

The embodiment also provides a robot grabbing device based on 3D vision, comprising:

at least one processor;

at least one memory for storing at least one program;

when the at least one program is executed by the at least one processor, the at least one processor implements the method shown in fig. 1.

The robot gripping device based on 3D vision of the embodiment of the present invention can execute the robot gripping method based on 3D vision provided by the embodiment of the method of the present invention, can execute any combination of the implementation steps of the embodiment of the method, and has corresponding functions and beneficial effects of the method.

The embodiment of the application also discloses a computer program product or a computer program, which comprises computer instructions, and the computer instructions are stored in a computer readable storage medium. The computer instructions may be read by a processor of a computer device from a computer-readable storage medium, and executed by the processor to cause the computer device to perform the method illustrated in fig. 1.

The embodiment also provides a storage medium, which stores an instruction or a program capable of executing the 3D vision-based robot grabbing method provided by the embodiment of the method of the present invention, and when the instruction or the program is executed, the method can be executed by any combination of the embodiments of the method, and the method has corresponding functions and advantages.

In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.

Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the described functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in a separate physical device or software module. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer, given the nature, function, and internal relationship of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is defined by the appended claims and their full scope of equivalents.

The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.

More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.

It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.

In the foregoing description of the specification, reference to the description of "one embodiment/example," "another embodiment/example," or "certain embodiments/examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.

While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
