Mechanical arm positioning guiding and calibrating method based on machine vision cooperation

Document No. 742619, published 2021-04-23.

Reading note: this technique, "Mechanical arm positioning guiding and calibrating method based on machine vision cooperation", was created by 刘蕾, 汪波, 周闯, 任永强, 王光磊, and 屠庆松 on 2020-12-10. Its main content is as follows: the invention discloses a machine-vision-based manipulator positioning and guiding method and a calibration method. The calibration method comprises: grabbing the product with a manipulator so that a chosen feature point of the product lies within the camera's field of view, and having the manipulator move the product nine times in a 3×3 grid pattern with step Δd; taking any one of the nine points as the origin, the actual coordinates of the other eight points are computed. The camera shoots once each time the manipulator moves; the product feature point is captured and its pixel coordinates are computed, finally yielding pixel coordinates for all nine points. A nine-point visual calibration algorithm then yields the coordinate transformation between pixel coordinates and actual coordinates. The calibration process is simple, requires no calibration block, and calibrates only through correspondences between image points. The guiding method can solve the offset of the positioning point from the rotation center computed in the virtual coordinate system, and can thereby guide the manipulator to adjust its grab position on the product.

1. A calibration method for machine-vision-assisted manipulator positioning and guidance, characterized by comprising the following steps:

S11, grab the product with the manipulator so that a chosen feature point of the product lies within the camera's field of view; have the manipulator move the product nine times in a 3×3 grid pattern with step Δd; taking any one of the nine points as the origin, compute the actual coordinates of the other eight points;

S12, shoot once each time the manipulator moves, capture the product feature point and compute its pixel coordinates, finally yielding pixel coordinates for all nine points;

and S13, use a nine-point visual calibration algorithm to compute the coordinate transformation between pixel coordinates and actual coordinates, and output it.

2. A manipulator positioning and guiding method based on machine vision cooperation, characterized by comprising the following steps:

S21, establish a vision system comprising a camera with a lens, a controller, and a light source, such that the camera's field of view covers the whole product;

S22, grab the product at the reference position with the manipulator; have the manipulator rotate the product by a fixed angle multiple times in the product's placement plane, shooting once per rotation, capturing the product feature points, and computing their pixel coordinates;

S23, fit the product feature points from all rotated images to a circle and compute the pixel coordinates of its center; obtain the pixel-to-actual coordinate transformation using the calibration method as claimed in claim 1, and compute the coordinates (Xc, Yc) of the circle center in the virtual coordinate system;

S24, photograph the product at its actual position, capture several product feature points, compute their pixel coordinates, and convert them to coordinates (Xa, Ya) in the virtual coordinate system via the coordinate transformation; and

S25, from the circle-center coordinates (Xc, Yc), compute the offsets ΔX and ΔY and the offset angle ΔAngle of the actual-position product relative to the reference-position product, and provide them to the manipulator control system to adjust the manipulator's grab position.

Technical Field

The invention relates to a positioning, guiding, and calibration method based on the cooperation of machine vision and a manipulator.

Background

With the development of science and technology, users place ever higher demands on the precision and quality of products. Traditional positioning methods can no longer meet these demands, so machine vision positioning technology has been introduced.

Unifying the vision coordinate system with the robot coordinate system is commonly referred to as calibration. The traditional camera calibration method uses structural information of the scene, usually a calibration block of known geometry and high machining precision, as a reference object: the parameters of the camera model are established through correspondences between spatial points and image points, and then extracted by an optimization algorithm.

In the prior art, the calibration process is complex and demands a high-precision calibration block, which is unavailable in many practical applications, making calibration difficult to complete.

Disclosure of Invention

The invention aims to provide a calibration method for machine-vision-assisted manipulator positioning and guidance that requires no calibration block and calibrates only through correspondences between image points.

Another aim of the invention is to provide a positioning and guiding method based on the cooperation of machine vision and a manipulator, which obtains the offset of a positioning point from the rotation center computed in a virtual coordinate system and can thereby guide the manipulator to adjust its grab position on the product.

To this end, the invention provides a calibration method for machine-vision-assisted manipulator positioning and guidance, comprising the following steps: grab the product with a manipulator so that a chosen feature point of the product lies within the camera's field of view, and have the manipulator move the product nine times in a 3×3 grid pattern with step Δd; taking any one of the nine points as the origin, compute the actual coordinates of the other eight points. The camera shoots once each time the manipulator moves; the product feature point is captured and its pixel coordinates are computed, finally yielding pixel coordinates for all nine points. A nine-point visual calibration algorithm then computes, and outputs, the coordinate transformation between pixel coordinates and actual coordinates.

The invention also provides a positioning and guiding method based on the cooperation of machine vision and a manipulator, comprising the following steps: S21, establish a vision system comprising a camera with a lens, a controller, and a light source, such that the camera's field of view covers the whole product; S22, grab the product at the reference position with the manipulator, have the manipulator rotate the product by a fixed angle multiple times in the product's placement plane, shoot once per rotation, capture the product feature points, and compute their pixel coordinates; S23, fit the product feature points from all rotated images to a circle and compute the pixel coordinates of its center; obtain the pixel-to-actual coordinate transformation using the calibration method described above, and compute the coordinates (Xc, Yc) of the circle center in the virtual coordinate system; S24, photograph the product at its actual position, capture several product feature points, compute their pixel coordinates, and convert them to coordinates (Xa, Ya) in the virtual coordinate system via the coordinate transformation; and S25, from the circle-center coordinates (Xc, Yc), compute the offsets ΔX and ΔY and the offset angle ΔAngle of the actual-position product relative to the reference-position product, and provide them to the manipulator control system to adjust the manipulator's grab position.

Compared with the prior art, the invention has the following technical effects:

(1) The calibration process is simple: no calibration block is needed, and calibration is carried out only through correspondences between image points.

(2) The offset of the positioning point can be solved from the rotation center computed in the virtual coordinate system, without knowing the coordinates of the manipulator's rotation center.

In addition to the objects, features, and advantages described above, the present invention has other objects, features, and advantages, which will be described in further detail below with reference to the drawings.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:

FIG. 1 is a flowchart of the calibration method for machine-vision-assisted manipulator positioning and guidance according to a first embodiment of the present invention;

FIG. 2 is a schematic view of a vision system of the calibration method shown in FIG. 1;

FIG. 3 is a schematic diagram of the nine-point actual coordinates of the calibration method shown in FIG. 1;

FIG. 4 is a flowchart of a manipulator positioning and guiding method based on machine vision cooperation according to a second embodiment of the present invention;

FIG. 5 shows the circle fitted to all rotated feature points in the positioning and guiding method of FIG. 4; and

FIG. 6 shows the geometric positional relationship between the actual-position product and the reference-position product in the positioning and guiding method according to the present invention.

Detailed Description

The present invention will be described in detail below with reference to embodiments and the accompanying drawings.

The invention relates to a positioning, guiding and calibrating method of a vision system.

With reference to FIG. 1, the calibration method for machine-vision-assisted manipulator positioning and guidance of the present invention comprises the following steps:

S11, grab the product with the manipulator so that a chosen feature point of the product lies within the camera's field of view; have the manipulator move the product nine times in a 3×3 grid pattern with step Δd; taking any one of the nine points as the origin, compute the actual coordinates of the other eight points;

S12, shoot once each time the manipulator moves, capture the product feature point and compute its pixel coordinates, finally yielding pixel coordinates for all nine points;

and S13, use a nine-point visual calibration algorithm to compute the coordinate transformation between pixel coordinates and actual coordinates, and output it.

The calibration method is described in detail below.

The camera is fixedly mounted, facing either upward (bottom to top) or downward (top to bottom).

As shown in FIG. 2, the manipulator 1 grabs the product 2 so that a chosen feature point of the product lies within the field of view of the camera 3, and the manipulator moves the product nine times in a 3×3 grid pattern with step Δd. Taking any one of the nine points as the origin (0, 0), the coordinates of the other eight points can be calculated.

The camera shoots once each time the manipulator moves; the product feature point is captured and its pixel coordinates are computed, finally yielding the pixel coordinates of the nine points (Xp0, Yp0), (Xp1, Yp1), …, (Xp8, Yp8).

The actual coordinates (the actual coordinate system is a virtual coordinate system) of the nine points (Xr0, Yr0), (Xr1, Yr1), …, (Xr8, Yr8) are (0, 0), (Δd, 0), …, (2Δd, 2Δd), respectively, as shown in FIG. 3.

The calibration relationship (i.e., the coordinate transformation, usually expressed as a calibration matrix) between pixel coordinates and actual coordinates can be obtained using a nine-point visual calibration algorithm, and any pixel coordinate can then be transformed into a point in the virtual coordinate system.
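The patent does not give the algorithm's internals, but a minimal numerical sketch of one common formulation is shown below: the nine pixel/actual point pairs are fitted to an affine map [Xr, Yr] = A·[Xp, Yp] + t by linear least squares. The function names and the affine assumption are illustrative, not from the patent.

```python
import numpy as np

def nine_point_calibration(pixel_pts, actual_pts):
    """pixel_pts, actual_pts: (9, 2) arrays of corresponding points.
    Returns a 2x3 calibration matrix M such that actual = M @ [xp, yp, 1]."""
    P = np.asarray(pixel_pts, dtype=float)
    R = np.asarray(actual_pts, dtype=float)
    G = np.hstack([P, np.ones((len(P), 1))])   # homogeneous pixel coords, (9, 3)
    M, *_ = np.linalg.lstsq(G, R, rcond=None)  # least-squares solution, (3, 2)
    return M.T                                 # calibration matrix, (2, 3)

def pixel_to_actual(M, pt):
    """Map one pixel coordinate into the virtual (actual) coordinate system."""
    x, y = pt
    return M @ np.array([x, y, 1.0])
```

With nine points on a 3×3 grid, the system is well over-determined for the six affine parameters, so the fit also averages out small measurement noise in the captured pixel coordinates.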

This calibration process is simple: no calibration block is needed, and calibration is carried out only through correspondences between image points.

With reference to FIG. 4, the manipulator positioning and guiding method based on machine vision cooperation comprises the following steps:

S21, establish a vision system comprising a camera with a lens, a controller, and a light source, such that the camera's field of view covers the whole product;

S22, grab the product at the reference position with the manipulator; have the manipulator rotate the product by a fixed angle multiple times in the product's placement plane, shooting once per rotation, capturing the product feature points, and computing their pixel coordinates;

S23, fit the product feature points from all rotated images to a circle and compute the pixel coordinates of its center; obtain the pixel-to-actual coordinate transformation using the calibration method described above, and compute the coordinates (Xc, Yc) of the circle center in the virtual coordinate system;

S24, photograph the product at its actual position, capture several product feature points, compute their pixel coordinates, and convert them to coordinates (Xa, Ya) in the virtual coordinate system via the coordinate transformation; and

S25, from the circle-center coordinates (Xc, Yc), compute the offsets ΔX and ΔY and the offset angle ΔAngle of the actual-position product relative to the reference-position product, and provide them to the manipulator control system to adjust the manipulator's grab position.

The positioning and guiding method is described in detail below.

The grab position of the manipulator is fixed as the reference position. The manipulator rotates the product by a fixed angle (e.g., 3°) multiple times (e.g., more than 20 times), keeping the product feature point within the camera's field of view at every step.

The camera shoots once per rotation, the product feature point is captured, and its pixel coordinates are computed; all points obtained after the rotations are fitted to a circle (the more rotations, the more accurate the fitted circle).

The coordinates (Xc, Yc) of the center C in the virtual coordinate system can then be computed using the calibration relationship above (i.e., the coordinate transformation, usually expressed as a calibration matrix), as shown in FIG. 5.
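The patent does not specify the circle-fitting procedure; one standard choice that fits the description is an algebraic (Kåsa-style) least-squares fit, sketched below. The function name is illustrative.

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit (Kåsa method).
    points: (N, 2) array of feature-point coordinates from the rotated
    images. Returns (cx, cy, radius)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Rearranging (x-cx)^2 + (y-cy)^2 = r^2 gives the linear system
    #   2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2.
    G = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(G, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r
```

As the text notes, more rotations give more sample points and hence a more accurate fitted center, since the least-squares solution averages out per-image detection noise.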

By photographing the product at its actual position, capturing several of its feature points, computing their pixel coordinates, and converting them to coordinates (Xa, Ya) in the virtual coordinate system, the actual position of the product can be determined from its geometry. The reference position is fixed and unique, while the actual position is random and depends on the product's upstream production process.

As shown in FIG. 6, from the circle-center coordinates (Xc, Yc), the offsets ΔX, ΔY and the offset angle ΔAngle of the actual-position product relative to the reference-position product can be computed with elementary geometry.
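The "elementary geometry" can be sketched as follows, assuming two corresponding feature points are available in both the reference and actual images (this two-point setup and the function name are assumptions for illustration): the offset angle comes from the rotation of the baseline between the two feature points, and the translation is whatever remains after rotating the reference points about the center (Xc, Yc).

```python
import numpy as np

def pose_offset(ref_pts, act_pts, center):
    """Estimate (dx, dy, dangle) such that rotating the reference-position
    product about `center` by dangle and then translating by (dx, dy)
    yields the actual-position product.
    ref_pts, act_pts: two corresponding feature points each, in virtual
    coordinates; center: rotation center (Xc, Yc)."""
    ref = np.asarray(ref_pts, dtype=float)
    act = np.asarray(act_pts, dtype=float)
    c = np.asarray(center, dtype=float)
    # Offset angle: rotation of the feature-point baseline
    # (may need normalization into (-pi, pi] for large rotations).
    vr = ref[1] - ref[0]
    va = act[1] - act[0]
    dangle = np.arctan2(va[1], va[0]) - np.arctan2(vr[1], vr[0])
    R = np.array([[np.cos(dangle), -np.sin(dangle)],
                  [np.sin(dangle),  np.cos(dangle)]])
    # Translation: where the rotated reference point lands vs. the actual point.
    dx, dy = act[0] - (R @ (ref[0] - c) + c)
    return dx, dy, dangle
```

With more than two feature points, the same quantities could instead be estimated by a least-squares rigid registration; the two-point version is the minimal case.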

In the invention, the actual-position product is regarded as the reference-position product rotated about the circle center (the manipulator's rotation center) and then translated. Therefore, when the manipulator grabs the actual product, the reference grab position, after the same rotation about the rotation center and the same translation, meets the grab requirement for the actual-position product.
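The grab-position adjustment described above amounts to applying the same rotation-about-center-plus-translation to the reference grab point. A small sketch, assuming the offsets (dx, dy) and angle dangle have already been computed (the function name is illustrative):

```python
import numpy as np

def adjust_grab(grab_ref, center, dx, dy, dangle):
    """Rotate the reference grab point about `center` by dangle, then
    translate by (dx, dy), giving the grab point for the actual product."""
    R = np.array([[np.cos(dangle), -np.sin(dangle)],
                  [np.sin(dangle),  np.cos(dangle)]])
    g = np.asarray(grab_ref, dtype=float)
    c = np.asarray(center, dtype=float)
    return R @ (g - c) + c + np.array([dx, dy])
```

Because the transform is expressed entirely in the virtual coordinate system, the manipulator's own rotation-center coordinates never need to be known, which is the point made in the next paragraph.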

With this method, the coordinates of the manipulator's rotation center need not be known in advance: the offset of the positioning point is solved from the rotation center computed in the virtual coordinate system.

The calibration method and the positioning and guiding method according to the invention may be integrated into the manipulator's vision system or into an external electronic device; the electronic device may be a server, a terminal, or another device.

The server may be an independent physical server, a server cluster or distributed system of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN), big data, and artificial intelligence platforms.

The terminal may be a smartphone, tablet computer, notebook computer, desktop computer, smart speaker, smart watch, or the like. The terminal and the server may be connected directly or indirectly by wired or wireless communication.

A computer-readable storage medium (for example a read-only memory (ROM), random access memory (RAM), magnetic disk, or optical disk) stores a calibration program and a positioning guidance program; when executed, the calibration program implements the steps of the calibration method, and the positioning guidance program implements the steps of the positioning and guiding method.

The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
