Welding seam detection positioning method and positioning device of automatic eddy current flaw detection system

Document No.: 133298    Publication date: 2021-10-22    Original language: Chinese

Note: This technology, "Welding seam detection positioning method and positioning device of automatic eddy current flaw detection system", was designed and created by Yang Kai, Liang Bin, Gao Chunliang, Xie Liming, Wang Feng and Wang Yongheng on 2021-07-26.

Abstract: The invention relates to the technical field of eddy current testing, and particularly discloses a welding seam detection and positioning method and a positioning device for an automatic eddy current flaw detection system. The positioning method comprises: moving a mechanical arm carrying a binocular vision camera and an eddy current probe to the vicinity of the welding seam area to be detected according to preset teaching data; acquiring image data of the current position with the binocular vision camera and performing three-dimensional reconstruction to obtain current three-dimensional information; reading preset historical three-dimensional information and registering the current three-dimensional information against the historical three-dimensional information to obtain a transformation matrix; and controlling the mechanical arm to move to the surface of the welding seam area according to the offsets of the transformation matrix in the three spatial directions, and scanning in the direction parallel to the length of the welding seam. Automated nondestructive flaw detection of frame welding seam defects is thereby realized by relying on an industrial robot and three-dimensional visual positioning technology.

1. A welding seam detection positioning method of an automatic eddy current flaw detection system is characterized in that the automatic eddy current flaw detection system comprises a mechanical arm, a binocular vision camera carried on the mechanical arm and an eddy current probe clamped at the tail end of the mechanical arm, and the positioning method comprises the following steps:

moving a mechanical arm carrying a binocular vision camera and an eddy current probe to the vicinity of a welding seam area to be detected according to preset teaching data;

acquiring image data of a current position by a binocular vision camera to carry out three-dimensional reconstruction so as to obtain current three-dimensional information;

reading preset historical three-dimensional information, and registering the current three-dimensional information and the historical three-dimensional information to obtain a transformation matrix;

and controlling the mechanical arm to move to the surface of the welding seam area according to the offset of the transformation matrix in three spatial directions, and scanning according to the direction parallel to the length of the welding seam.

2. The method for detecting and positioning the welding line of the automatic eddy current inspection system according to claim 1, wherein the step of acquiring image data of the current position by a binocular vision camera for three-dimensional reconstruction and obtaining current three-dimensional information comprises the steps of:

obtaining the left image I1 and the right image I2 collected by the binocular vision camera;

defining, for each image, pixel coordinates (x, y), a matching window U centered on (x, y), a horizontal offset dx and a vertical offset dy, and a horizontal pixel distance i and a vertical pixel distance j measured from (x, y);

calculating the sum of absolute phase differences between pixel points of the left image I1 and the right image I2:

CSAD = Σ(i,j∈U) |I1(x+i, y+j) − I2(x+dx+i, y+dy+j)|, where U represents the size of the matching window, (x, y) represents the currently matched pixel point, I1(x+i, y+j) represents the absolute phase value of the pixel point in the reference image within the window at a horizontal distance of i pixels and a vertical distance of j pixels from the current matching point (x, y), I2(x+dx+i, y+dy+j) represents the absolute phase value of the pixel point within the window at a horizontal distance of i pixels and a vertical distance of j pixels from the pixel (x+dx, y+dy) to be matched, and dx and dy represent the horizontal and vertical offsets between the compared pixel points of the left image I1 and the right image I2;

substituting the epipolar constraint dy = 0 between the left image I1 and the right image I2, so that only the horizontal offset dx remains, and outputting the disparity values required for stereo matching, namely the current three-dimensional information.

3. The method of claim 1, wherein the step of reading the preset historical three-dimensional information and registering the current three-dimensional information and the historical three-dimensional information to obtain the transformation matrix comprises:

segmenting the point cloud of the current three-dimensional information with a region-growing point cloud segmentation algorithm to obtain a current segmented point cloud;

registering the current segmented point cloud with the point cloud of the historical three-dimensional information, removing the interference of abnormal points by using an ICP (iterative closest point) registration algorithm, and fusing the registration results of all the point clouds;

projecting each registration result onto the X, Y and Z planes respectively according to a preset projection rule to form binarized results for the three planes;

repeating the above three steps to acquire multiple registration results between the point cloud of the current three-dimensional information and the point cloud of the historical three-dimensional information, and selecting the registration result with the smallest sum of projection areas over the three planes as the registration result from which the final transformation matrix is calculated;

and calculating the difference between the point cloud of the current three-dimensional information and the point cloud of the historical three-dimensional information to obtain the three-dimensional point cloud deviation matrix (x, y, z) between them, namely the transformation matrix.

4. The method for detecting and positioning the welding seam of the automatic eddy current inspection system according to claim 1, wherein before the step of moving the mechanical arm carrying the binocular vision camera and the eddy current probe to the vicinity of the welding seam area to be detected according to preset teaching data, the method further comprises: generating preset teaching data and preset historical three-dimensional data;

the step of generating preset teaching data and preset historical three-dimensional data comprises the following steps:

moving the mechanical arm to the vicinity of the welding seam area to be detected by manual teaching, and acquiring the preset teaching data;

acquiring an image of a welding seam area by using a binocular vision camera in cooperation with structured light, and acquiring original three-dimensional image data, namely preset historical three-dimensional information;

and repeating the previous two steps to obtain preset teaching data when the mechanical arm reaches each welding seam area and preset historical three-dimensional information of each welding seam.

5. The welding seam detection and positioning method of the automatic eddy current inspection system according to claim 4, wherein the step of moving the mechanical arm to the vicinity of the position of the welding seam area to be detected by means of manual teaching to obtain preset teaching data comprises the following steps: obtaining the position relation between a mechanical arm coordinate system and a binocular vision camera coordinate system through hand-eye calibration;

the step of obtaining the position relation between the mechanical arm coordinate system and the binocular vision camera coordinate system through hand-eye calibration comprises the following steps:

taking the current mechanical arm position as the origin, moving the mechanical arm, and photographing with the binocular vision camera to reconstruct an object;

calculating, through three-dimensional registration, the positional transformation between the current position of the object and its position at the origin, and recording it as B;

recording the positional transformation of the mechanical arm before and after the movement as A;

solving the matrix equation AX = XB, where A is the homogeneous transformation matrix of the mechanical arm before and after the movement, B is the homogeneous transformation matrix of the object observed by the binocular vision camera before and after the movement, and X is the hand-eye matrix to be solved; the positional relation of the binocular vision camera relative to the mechanical arm is obtained by solving for X from multiple groups of calibration data.

6. A welding seam detection positioning device of an automatic eddy current flaw detection system, characterized in that the automatic eddy current flaw detection system comprises a mechanical arm, a binocular vision camera mounted on the mechanical arm and an eddy current probe clamped at the end of the mechanical arm, and the positioning device comprises:

the mechanical arm moving module is used for moving the mechanical arm carrying the binocular vision camera and the eddy current probe to the vicinity of the welding seam area to be detected according to the preset teaching data;

the three-dimensional information acquisition module is used for acquiring image data of the current position through the binocular vision camera to carry out three-dimensional reconstruction so as to acquire current three-dimensional information;

the transformation matrix acquisition module is used for reading preset historical three-dimensional information and registering the current three-dimensional information and the historical three-dimensional information to obtain a transformation matrix;

and the mechanical arm control module is used for controlling the mechanical arm to move to the surface of the welding seam area according to the offset of the transformation matrix in three spatial directions and scanning according to the direction parallel to the length of the welding seam.

7. The welding seam detection positioning device of the automatic eddy current inspection system according to claim 6, wherein the three-dimensional information acquisition module acquires image data of a current position through a binocular vision camera for three-dimensional reconstruction, and the step of acquiring current three-dimensional information comprises:

obtaining left image I collected by binocular vision camera1And right picture I2

Defining pixel point coordinates (x, y) suitable for each image, a matching window U, a horizontal offset dx and a vertical offset dy which take (x, y) as a center, and a horizontal distance pixel number i and a vertical distance pixel number j which take (x, y) as a center;

proceed to left drawing I1And right picture I2Calculating the absolute value of the phase difference of the middle pixel points:

CSAD=∑(i,j∈U)|I1(x+i,y+j)-I2(x+dx+i,y+dy+ j) l, where U represents the size of the selection window at the time of matching, (x, y) represents the currently matching pixel point, I1(x + I, y + j) represents the absolute phase value of a pixel point in a window range with the current matching point (x, y) as the center, the horizontal distance of I pixels and the vertical distance of j pixels in the reference image, I2(x + dx + I, y + dy + j) represents the absolute phase value of a pixel point within a window range of j pixels by taking the pixel (x + dx, y + dy) to be matched as the center, the horizontal distance is I pixels, the vertical distance is j pixels, and dx and dy represent the left image I1And right picture I2The offset of the compared pixel points in the horizontal direction and the vertical direction is compared;

substitution into left graph I1And right picture I2The limit constraint dy of (2) is 0, so that the left graph I1And right picture I2If only the offset dx exists in the horizontal direction, the field of view value required in stereo matching, that is, the current three-dimensional information, is output.

8. The weld inspection positioning device of the automated eddy current inspection system according to claim 6, wherein the transformation matrix obtaining module reads preset historical three-dimensional information, and the step of registering the current three-dimensional information and the historical three-dimensional information to obtain the transformation matrix comprises:

segmenting the point cloud of the current three-dimensional information with a region-growing point cloud segmentation algorithm to obtain a current segmented point cloud;

registering the current segmented point cloud with the point cloud of the historical three-dimensional information, removing the interference of abnormal points by using an ICP (iterative closest point) registration algorithm, and fusing the registration results of all the point clouds;

projecting each registration result onto the X, Y and Z planes respectively according to a preset projection rule to form binarized results for the three planes;

repeating the above three steps to acquire multiple registration results between the point cloud of the current three-dimensional information and the point cloud of the historical three-dimensional information, and selecting the registration result with the smallest sum of projection areas over the three planes as the registration result from which the final transformation matrix is calculated;

and calculating the difference between the point cloud of the current three-dimensional information and the point cloud of the historical three-dimensional information to obtain the three-dimensional point cloud deviation matrix (x, y, z) between them, namely the transformation matrix.

9. The weld detection positioning device of the automated eddy current inspection system according to claim 6, further comprising a preset data acquisition module for generating preset teaching data and preset historical three-dimensional data;

the generation of the preset teaching data and the preset historical three-dimensional data by the preset data acquisition module comprises the following steps:

moving the mechanical arm to the vicinity of the welding seam area to be detected by manual teaching, and acquiring the preset teaching data;

acquiring an image of a welding seam area by using a binocular vision camera in cooperation with structured light, and acquiring original three-dimensional image data, namely preset historical three-dimensional information;

and repeating the previous two steps to obtain preset teaching data when the mechanical arm reaches each welding seam area and preset historical three-dimensional information of each welding seam.

10. The welding seam detection positioning device of the automatic eddy current inspection system according to claim 9, wherein the step of moving the robot arm to the vicinity of the position of the welding seam area to be detected by means of manual teaching to obtain preset teaching data comprises: obtaining the position relation between a mechanical arm coordinate system and a binocular vision camera coordinate system through hand-eye calibration;

the step of obtaining the position relation between the mechanical arm coordinate system and the binocular vision camera coordinate system through hand-eye calibration comprises the following steps:

taking the current mechanical arm position as the origin, moving the mechanical arm, and photographing with the binocular vision camera to reconstruct an object;

calculating, through three-dimensional registration, the positional transformation between the current position of the object and its position at the origin, and recording it as B;

recording the positional transformation of the mechanical arm before and after the movement as A;

solving the matrix equation AX = XB, where A is the homogeneous transformation matrix of the mechanical arm before and after the movement, B is the homogeneous transformation matrix of the object observed by the binocular vision camera before and after the movement, and X is the hand-eye matrix to be solved; the positional relation of the binocular vision camera relative to the mechanical arm is obtained by solving for X from multiple groups of calibration data.

Technical Field

The invention relates to the technical field of eddy current testing, in particular to a welding seam testing and positioning method and a welding seam testing and positioning device of an automatic eddy current testing system.

Background

Nondestructive flaw detection of frame weld joints is one of the technologies for health monitoring of frame structures; at present, nondestructive eddy current flaw detection is mainly performed with handheld portable equipment.

In terms of inspection mode, traditional portable manual eddy current inspection cannot be automated, and manual operation introduces positioning deviations, so both inspection efficiency and inspection precision suffer. A robot carrying an eddy current probe sensor that can determine the position of a specific welding seam is therefore an important means of realizing automatic inspection.

In terms of visual positioning technology, the traditional two-dimensional visual positioning approach requires relatively complex preparation. First, automatic calibration of the camera must be completed with a checkerboard. Second, although two-dimensional positioning achieves good accuracy for flat objects on a plane, with consistent positioning error across the camera's entire field of view, the average error is 2-3 mm larger than the theoretical error (and the measured error in testing may be very large). In addition, for a slightly uneven or tilted test piece or structure surface, the unevenness distorts the camera image so that it no longer matches the original reference object and positioning fails; even if the lowest-level features are used during teaching, reliable matching cannot be guaranteed, and two-dimensional positioning is not suitable for three-dimensional target objects. A three-dimensional visual positioning system is therefore required to realize efficient, automatic detection.

Disclosure of Invention

In view of this, because the fixed positions of the frames in the system differ and accurate positioning of the eddy current probe is critical, the present application provides a welding seam detection and positioning method for an automatic eddy current flaw detection system that adapts well to field inspection conditions and automation requirements through visual positioning technology.

In order to solve the technical problems, the technical scheme provided by the invention is a welding seam detection positioning method of an automatic eddy current flaw detection system, wherein the automatic eddy current flaw detection system comprises a mechanical arm, a binocular vision camera carried on the mechanical arm and an eddy current probe clamped at the tail end of the mechanical arm, and the positioning method comprises the following steps:

moving a mechanical arm carrying a binocular vision camera and an eddy current probe to the vicinity of a welding seam area to be detected according to preset teaching data;

acquiring image data of a current position by a binocular vision camera to carry out three-dimensional reconstruction so as to obtain current three-dimensional information;

reading preset historical three-dimensional information, and registering the current three-dimensional information and the historical three-dimensional information to obtain a transformation matrix;

and controlling the mechanical arm to move to the surface of the welding seam area according to the offset of the transformation matrix in three spatial directions, and scanning according to the direction parallel to the length of the welding seam.

Preferably, the step of acquiring image data of the current position by using a binocular vision camera to perform three-dimensional reconstruction to obtain current three-dimensional information includes:

obtaining the left image I1 and the right image I2 collected by the binocular vision camera;

defining, for each image, pixel coordinates (x, y), a matching window U centered on (x, y), a horizontal offset dx and a vertical offset dy, and a horizontal pixel distance i and a vertical pixel distance j measured from (x, y);

calculating the sum of absolute phase differences between pixel points of the left image I1 and the right image I2:

CSAD = Σ(i,j∈U) |I1(x+i, y+j) − I2(x+dx+i, y+dy+j)|, where U represents the size of the matching window, (x, y) represents the currently matched pixel point, I1(x+i, y+j) represents the absolute phase value of the pixel point in the reference image within the window at a horizontal distance of i pixels and a vertical distance of j pixels from the current matching point (x, y), I2(x+dx+i, y+dy+j) represents the absolute phase value of the pixel point within the window at a horizontal distance of i pixels and a vertical distance of j pixels from the pixel (x+dx, y+dy) to be matched, and dx and dy represent the horizontal and vertical offsets between the compared pixel points of the left image I1 and the right image I2;

substituting the epipolar constraint dy = 0 between the left image I1 and the right image I2, so that only the horizontal offset dx remains, and outputting the disparity values required for stereo matching, namely the current three-dimensional information.
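The SAD matching above can be sketched as follows. This is an illustrative reading of the description, not the patented implementation: the function name sad_disparity, the window half-width and the search range are assumptions for the example, and the epipolar constraint dy = 0 restricts the search to horizontal offsets.

```python
import numpy as np

def sad_disparity(left, right, x, y, half=2, max_dx=8):
    """Return the horizontal offset dx minimizing the SAD cost
    C_SAD = sum_{(i,j) in U} |I1(x+i, y+j) - I2(x+dx+i, y+j)|
    over a (2*half+1)^2 window U, with the epipolar constraint dy = 0."""
    best_dx, best_cost = 0, float("inf")
    win_left = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    for dx in range(max_dx + 1):
        win_right = right[y - half:y + half + 1,
                          x + dx - half:x + dx + half + 1].astype(float)
        cost = np.abs(win_left - win_right).sum()
        if cost < best_cost:
            best_cost, best_dx = cost, dx
    return best_dx

# Synthetic check: the right image is the left image shifted so that a
# point at column x in the left appears at column x + 3 in the right.
rng = np.random.default_rng(0)
left = rng.random((32, 32))
right = np.roll(left, 3, axis=1)   # right[y, x + 3] == left[y, x]
print(sad_disparity(left, right, x=16, y=16))  # -> 3
```

In a full reconstruction this per-pixel disparity, together with the calibrated baseline and focal length, yields the depth used as the current three-dimensional information.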

Preferably, the step of reading preset historical three-dimensional information, and registering the current three-dimensional information and the historical three-dimensional information to obtain a transformation matrix includes:

segmenting the point cloud of the current three-dimensional information with a region-growing point cloud segmentation algorithm to obtain a current segmented point cloud;

registering the current segmented point cloud with the point cloud of the historical three-dimensional information, removing the interference of abnormal points by using an ICP (iterative closest point) registration algorithm, and fusing the registration results of all the point clouds;

projecting each registration result onto the X, Y and Z planes respectively according to a preset projection rule to form binarized results for the three planes;

repeating the above three steps to acquire multiple registration results between the point cloud of the current three-dimensional information and the point cloud of the historical three-dimensional information, and selecting the registration result with the smallest sum of projection areas over the three planes as the registration result from which the final transformation matrix is calculated;

and calculating the difference between the point cloud of the current three-dimensional information and the point cloud of the historical three-dimensional information to obtain the three-dimensional point cloud deviation matrix (x, y, z) between them, namely the transformation matrix.
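The projection-area criterion above can be sketched as a minimal illustration under assumed names and grid resolution (the preset projection rule is not detailed in the text): each candidate registration result is binarized on the three coordinate planes by counting occupied grid cells, and the candidate with the smallest total projected area is kept.

```python
import numpy as np

def projection_area(points, cell=0.01):
    """Sum of binarized projection areas of an (N, 3) point cloud on the
    three coordinate planes: occupied grid cells times the cell area."""
    total = 0.0
    for axes in ((0, 1), (1, 2), (0, 2)):  # XY, YZ, XZ projections
        cells = np.unique(np.floor(points[:, axes] / cell).astype(int), axis=0)
        total += len(cells) * cell * cell
    return total

def best_registration(candidates):
    """Pick the registration result whose three-plane projection area sum
    is smallest (a tighter fusion projects to a smaller footprint)."""
    return min(candidates, key=projection_area)

# A well-registered (compact) fused cloud projects to a smaller area
# than a poorly registered (spread-out) one.
rng = np.random.default_rng(1)
tight = rng.random((500, 3)) * 0.05
spread = rng.random((500, 3)) * 1.0
chosen = best_registration([spread, tight])
print(projection_area(tight) < projection_area(spread))  # -> True
```

The intuition behind the selection rule is that a misregistered pair leaves duplicated, offset surfaces in the fused cloud, which occupy more projected cells on at least one plane.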

Preferably, before the step of moving the mechanical arm carrying the binocular vision camera and the eddy current probe to the vicinity of the weld joint area to be detected according to preset teaching data, the method further comprises the following steps: generating preset teaching data and preset historical three-dimensional data;

the step of generating preset teaching data and preset historical three-dimensional data comprises the following steps:

moving the mechanical arm to the vicinity of the welding seam area to be detected by manual teaching, and acquiring the preset teaching data;

acquiring an image of a welding seam area by using a binocular vision camera in cooperation with structured light, and acquiring original three-dimensional image data, namely preset historical three-dimensional information;

and repeating the previous two steps to obtain preset teaching data when the mechanical arm reaches each welding seam area and preset historical three-dimensional information of each welding seam.

Preferably, the step of obtaining preset teaching data by moving the mechanical arm to the position near the welding seam area to be detected in a manual teaching mode comprises: obtaining the position relation between a mechanical arm coordinate system and a binocular vision camera coordinate system through hand-eye calibration;

the step of obtaining the position relation between the mechanical arm coordinate system and the binocular vision camera coordinate system through hand-eye calibration comprises the following steps:

taking the current mechanical arm position as the origin, moving the mechanical arm, and photographing with the binocular vision camera to reconstruct an object;

calculating, through three-dimensional registration, the positional transformation between the current position of the object and its position at the origin, and recording it as B;

recording the positional transformation of the mechanical arm before and after the movement as A;

solving the matrix equation AX = XB, where A is the homogeneous transformation matrix of the mechanical arm before and after the movement, B is the homogeneous transformation matrix of the object observed by the binocular vision camera before and after the movement, and X is the hand-eye matrix to be solved; the positional relation of the binocular vision camera relative to the mechanical arm is obtained by solving for X from multiple groups of calibration data.
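The AX = XB solve can be sketched with a least-squares null-space approach (one standard method; the text does not specify which solver the system uses). Vectorizing AX − XB = 0 gives (I ⊗ A − Bᵀ ⊗ I) vec(X) = 0, and stacking several calibration pairs determines vec(X) up to scale; solve_hand_eye and rigid are hypothetical helper names for this sketch.

```python
import numpy as np

def rigid(axis, angle, t):
    """Hypothetical helper: homogeneous transform from an axis-angle
    rotation (Rodrigues' formula) and a translation t."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    T = np.eye(4)
    T[:3, :3] = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    T[:3, 3] = t
    return T

def solve_hand_eye(pairs):
    """Solve AX = XB for the 4x4 hand-eye matrix X from (A, B) pairs,
    via the null space of the stacked linear system."""
    I = np.eye(4)
    M = np.vstack([np.kron(I, A) - np.kron(B.T, I) for A, B in pairs])
    _, _, Vt = np.linalg.svd(M)
    X = Vt[-1].reshape(4, 4, order="F")  # column-major un-vectorization
    return X / X[3, 3]                   # restore the homogeneous scale

# Synthetic check: with a known X, A = X B X^-1 satisfies AX = XB exactly,
# so X should be recovered from two pairs with non-parallel rotation axes.
X_true = rigid([0, 0, 1], 0.3, [0.1, -0.2, 0.05])
Bs = [rigid([0, 0, 1], 0.7, [0.4, 0.0, 0.2]),
      rigid([1, 0, 0], -0.4, [0.3, 0.3, -0.1])]
pairs = [(X_true @ B @ np.linalg.inv(X_true), B) for B in Bs]
print(np.allclose(solve_hand_eye(pairs), X_true, atol=1e-6))  # -> True
```

Note that, as the claim states, multiple calibration groups are needed: a single pair leaves X underdetermined, and the rotation axes of the pairs must not all be parallel.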

The invention also provides a welding seam detection positioning device of the automatic eddy current inspection system, the automatic eddy current inspection system comprises a mechanical arm, a binocular vision camera carried on the mechanical arm and an eddy current probe clamped at the tail end of the mechanical arm, and the positioning device comprises:

the mechanical arm moving module is used for moving the mechanical arm carrying the binocular vision camera and the eddy current probe to the vicinity of the welding seam area to be detected according to the preset teaching data;

the three-dimensional information acquisition module is used for acquiring image data of the current position through the binocular vision camera to carry out three-dimensional reconstruction so as to acquire current three-dimensional information;

the transformation matrix acquisition module is used for reading preset historical three-dimensional information and registering the current three-dimensional information and the historical three-dimensional information to obtain a transformation matrix;

and the mechanical arm control module is used for controlling the mechanical arm to move to the surface of the welding seam area according to the offset of the transformation matrix in three spatial directions and scanning according to the direction parallel to the length of the welding seam.

Preferably, the three-dimensional information obtaining module acquires image data of a current position through a binocular vision camera to perform three-dimensional reconstruction, and the step of obtaining the current three-dimensional information includes:

obtaining the left image I1 and the right image I2 collected by the binocular vision camera;

defining, for each image, pixel coordinates (x, y), a matching window U centered on (x, y), a horizontal offset dx and a vertical offset dy, and a horizontal pixel distance i and a vertical pixel distance j measured from (x, y);

calculating the sum of absolute phase differences between pixel points of the left image I1 and the right image I2:

CSAD = Σ(i,j∈U) |I1(x+i, y+j) − I2(x+dx+i, y+dy+j)|, where U represents the size of the matching window, (x, y) represents the currently matched pixel point, I1(x+i, y+j) represents the absolute phase value of the pixel point in the reference image within the window at a horizontal distance of i pixels and a vertical distance of j pixels from the current matching point (x, y), I2(x+dx+i, y+dy+j) represents the absolute phase value of the pixel point within the window at a horizontal distance of i pixels and a vertical distance of j pixels from the pixel (x+dx, y+dy) to be matched, and dx and dy represent the horizontal and vertical offsets between the compared pixel points of the left image I1 and the right image I2;

substituting the epipolar constraint dy = 0 between the left image I1 and the right image I2, so that only the horizontal offset dx remains, and outputting the disparity values required for stereo matching, namely the current three-dimensional information.

Preferably, the step of reading the preset historical three-dimensional information by the transformation matrix obtaining module and registering the current three-dimensional information and the historical three-dimensional information to obtain the transformation matrix includes:

segmenting the point cloud of the current three-dimensional information with a region-growing point cloud segmentation algorithm to obtain a current segmented point cloud;

registering the current segmented point cloud with the point cloud of the historical three-dimensional information, removing the interference of abnormal points by using an ICP (iterative closest point) registration algorithm, and fusing the registration results of all the point clouds;

projecting each registration result onto the X, Y and Z planes respectively according to a preset projection rule to form binarized results for the three planes;

repeating the above three steps to acquire multiple registration results between the point cloud of the current three-dimensional information and the point cloud of the historical three-dimensional information, and selecting the registration result with the smallest sum of projection areas over the three planes as the registration result from which the final transformation matrix is calculated;

and calculating the difference between the point cloud of the current three-dimensional information and the point cloud of the historical three-dimensional information to obtain the three-dimensional point cloud deviation matrix (x, y, z) between them, namely the transformation matrix.

Preferably, the welding seam detection positioning device of the automatic eddy current inspection system further comprises a preset data acquisition module for generating preset teaching data and preset historical three-dimensional data;

the step of the preset data acquisition module generating the preset teaching data and the preset historical three-dimensional data includes:

moving the mechanical arm to the position near the welding line area to be detected in a manual teaching mode, and acquiring preset teaching data;

acquiring an image of a welding seam area by using a binocular vision camera in cooperation with structured light, and acquiring original three-dimensional image data, namely preset historical three-dimensional information;

and repeating the previous two steps to obtain preset teaching data when the mechanical arm reaches each welding seam area and preset historical three-dimensional information of each welding seam.

Preferably, the step of obtaining preset teaching data by moving the mechanical arm to the position near the welding seam area to be detected in a manual teaching mode comprises: obtaining the position relation between a mechanical arm coordinate system and a binocular vision camera coordinate system through hand-eye calibration;

the step of obtaining the position relation between the mechanical arm coordinate system and the binocular vision camera coordinate system through hand-eye calibration comprises the following steps:

taking the current mechanical arm position as the origin, moving the mechanical arm, and photographing with the binocular vision camera to reconstruct the object;

calculating the position transformation relation between the current position of the object and the origin through three-dimensional registration, and recording it as B;

recording the position transformation relation of the mechanical arm before and after the movement as A;

solving a matrix equation: AX = XB, where A is the homogeneous matrix of the transformation of the mechanical arm end between the poses before and after the movement, B is the homogeneous matrix of the transformation observed by the binocular vision camera between the same two poses, and X is the hand-eye matrix to be solved; X is solved by calibrating multiple groups of data, which gives the position relation of the binocular vision camera relative to the mechanical arm.

Compared with the prior art, the beneficial effects of the method are detailed as follows: the welding seam detection positioning method of the automatic eddy current flaw detection system comprises the steps of moving a mechanical arm carrying a binocular vision camera and an eddy current probe to be close to a welding seam area to be detected according to preset teaching data; acquiring image data of a current position by a binocular vision camera to carry out three-dimensional reconstruction so as to obtain current three-dimensional information; reading preset historical three-dimensional information, and registering the current three-dimensional information and the historical three-dimensional information to obtain a transformation matrix; and controlling the mechanical arm to move to the surface of the welding seam area according to the offset of the transformation matrix in three spatial directions, and scanning according to the direction parallel to the length of the welding seam, so that automatic nondestructive flaw detection of the frame welding seam defect is realized by depending on an industrial robot and a three-dimensional visual positioning technology.

Drawings

In order to illustrate the embodiments of the present invention more clearly, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained by those skilled in the art without inventive effort.

FIG. 1 is a schematic diagram illustrating a method for positioning a weld in an automated eddy current inspection system according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of another method for positioning a weld in an automated eddy current inspection system according to an embodiment of the present invention;

fig. 3 is a welding seam detection positioning device of an automatic eddy current testing system according to an embodiment of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without any creative work belong to the protection scope of the present invention.

In order to make the technical solutions of the present invention better understood by those skilled in the art, the present invention will be further described in detail with reference to the accompanying drawings and specific embodiments.

The invention aims, relying on the automatic eddy current flaw detection system of the framework, to use three-dimensional vision positioning technology and develop the corresponding algorithms, effectively avoiding the limitations of traditional two-dimensional vision technology, such as the requirement that the shooting direction of the camera lens be perpendicular to the surface of the object to be detected, and realizing the function of positioning a three-dimensional weld seam structure or framework structure.

The invention is characterized in that nondestructive flaw detection of the frame weld defects is realized by depending on an industrial robot and a three-dimensional visual positioning key technology. The visual positioning technology can realize the accurate positioning of the detection object and provide important basis for the detection path and the process correction of the industrial robot. The main principle is that three-dimensional information of a measured object is reconstructed based on a binocular camera (arranged in the fifth axis and the sixth axis of an industrial robot and linked with the sixth axis) and structured light, the information of the object in different time periods and postures is reconstructed in a three-dimensional mode, three-dimensional registration is completed, the offset of the current position relative to the historical position is obtained, and finally the mechanical arm is guided to move, so that position positioning and detection are achieved.

As shown in fig. 1, an embodiment of the present invention provides a method for detecting and positioning a weld of an automatic eddy current inspection system, where the automatic eddy current inspection system includes a robot arm, a binocular vision camera mounted on the robot arm, and an eddy current probe clamped at a tail end of the robot arm, and the method includes:

s11: moving a mechanical arm carrying a binocular vision camera and an eddy current probe to the vicinity of a welding seam area to be detected according to preset teaching data;

s12: acquiring image data of a current position by a binocular vision camera to carry out three-dimensional reconstruction so as to obtain current three-dimensional information;

s13: reading preset historical three-dimensional information, and registering the current three-dimensional information and the historical three-dimensional information to obtain a transformation matrix;

s14: and controlling the mechanical arm to move to the surface of the welding seam area according to the offset of the transformation matrix in three spatial directions, and scanning according to the direction parallel to the length of the welding seam.

Specifically, the core module and the function of the automatic eddy current flaw detection system are that an industrial robot is used for clamping an eddy current probe to realize the detection of a frame welding seam. The main hardware of the whole system is a mechanical arm, a binocular vision camera (carried on the mechanical arm), a clamping eddy current probe (positioned at the tail end of the mechanical arm), a detected framework and the like.

Specifically, the automatic welding line detection of the mechanical arm is carried out on the basis of completing teaching, namely, the accurate positioning is carried out on the basis of a teaching result, wherein historical data are needed to be used, and the whole visual positioning process is as follows:

(1) the tail end of the mechanical arm carries a binocular camera and a clamping probe to move to a to-be-detected area of a welding line according to a preset teaching result, and three-dimensional information acquisition and reconstruction of the current position are completed;

(2) calling out historical three-dimensional information shot in the teaching process by an algorithm, and registering a current result with a historical result;

(3) the purpose of registration is to solve a transformation matrix, accurately correct the teaching result according to coordinates, and control the error within an allowable range;

(4) and guiding the mechanical arm to accurately move to the surface of the welding seam according to the offset of the transformation matrix in three spatial directions, and scanning in a direction parallel to the welding seam.

In S11, the robot arm (including the front-end binocular camera and the eddy current probe) is moved to the approximate position of the detected weld joint region by a manual teaching method, so as to ensure that the field of view captured by the binocular camera is appropriate.

It should be noted that, in S12, the step of acquiring image data of the current position by using the binocular vision camera to perform three-dimensional reconstruction, and obtaining current three-dimensional information includes:

s121: obtaining the left image I1 and the right image I2 collected by the binocular vision camera;

S122: defining pixel point coordinates (x, y) suitable for each image, a matching window U, a horizontal offset dx and a vertical offset dy which take (x, y) as a center, and a horizontal distance pixel number i and a vertical distance pixel number j which take (x, y) as a center;

s123: calculating, for the left image I1 and the right image I2, the sum of the absolute phase differences of the pixels:

CSAD = Σ(i,j∈U) |I1(x+i, y+j) − I2(x+dx+i, y+dy+j)|

where U represents the selection window used during matching, (x, y) represents the currently matched pixel, I1(x+i, y+j) represents the absolute phase value of the pixel in the reference image at a horizontal distance of i pixels and a vertical distance of j pixels from the current matching point (x, y) within the window, I2(x+dx+i, y+dy+j) represents the absolute phase value of the corresponding pixel in the window centered on the pixel (x+dx, y+dy) to be matched, and dx and dy represent the horizontal and vertical offsets between the compared pixels of the left image I1 and the right image I2;

s124: substituting the epipolar constraint dy = 0 for the left image I1 and the right image I2, so that only the horizontal offset dx remains between the two images, and outputting the disparity value required in stereo matching, namely the current three-dimensional information.

Specifically, the binocular camera acquires the original image data of each weld seam area during the teaching process and the current image data during actual detection. The image data are stored in pairs, that is, each shot of the binocular camera at the same position generates a left image and a right image. The core purpose of this step is to match the phases of the two images and finally recover the three-dimensional information of the photographed target.

First, the two images taken by the binocular camera are input: Left and Right (hereinafter defined as I1 and I2, respectively). Secondly, for each image, the pixel coordinates (x, y), the matching window U, the horizontal and vertical offsets dx and dy centered on (x, y), and the numbers of horizontal and vertical distance pixels i and j from (x, y) are defined. Then, after the definitions are completed, the absolute phase difference of the corresponding pixels in the two images is calculated:

CSAD = Σ(i,j∈U) |I1(x+i, y+j) − I2(x+dx+i, y+dy+j)|

where U represents the selection window used during matching, (x, y) represents the currently matched pixel, I1(x+i, y+j) represents the absolute phase value of the pixel in the reference image at a horizontal distance of i pixels and a vertical distance of j pixels from the current matching point (x, y) within the window, and I2(x+dx+i, y+dy+j) represents the absolute phase value of the corresponding pixel in the window centered on the pixel (x+dx, y+dy) to be matched; dx and dy indicate the horizontal and vertical offsets between the compared pixels of the two images. Finally, the epipolar constraint dy = 0 of the left and right images is substituted, so that only the horizontal offset dx remains between the two images; the disparity value required in stereo matching is thus output, and the three-dimensional reconstruction of the target is realized.
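As a concrete illustration of this matching, a brute-force SAD block matcher over rectified grayscale images can be sketched as follows. This is a minimal sketch under our own naming and window size, not the system's actual implementation; it matches raw intensities rather than absolute phase maps, but the cost has the same CSAD form with dy = 0:

```python
import numpy as np

def sad_disparity(left, right, max_disp=32, win=3):
    """Per-pixel disparity minimising the SAD cost over a (2*win+1)^2 window,
    under the epipolar constraint dy = 0 (rectified image pair assumed)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    pad = win
    L = np.pad(left.astype(np.float64), pad, mode='edge')
    R = np.pad(right.astype(np.float64), pad, mode='edge')
    for y in range(h):
        for x in range(w):
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x + 1)):
                # window centred at (x, y) in the left image vs (x - d, y)
                # in the right image; only the horizontal offset d varies
                wl = L[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
                wr = R[y:y + 2 * pad + 1, x - d:x - d + 2 * pad + 1]
                cost = np.abs(wl - wr).sum()   # the CSAD value for this d
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

With the disparity map and the calibrated baseline/focal length, depth (and hence the current three-dimensional information) follows by triangulation; a production matcher would add subpixel refinement and left-right consistency checks.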

It should be noted that the step of reading preset historical three-dimensional information and registering the current three-dimensional information and the historical three-dimensional information to obtain the transformation matrix in S13 includes:

s131: partitioning the point cloud of the current three-dimensional information by using a region growing point cloud partitioning algorithm to obtain a current partitioned point cloud;

s132: registering the current segmentation point cloud with the point cloud of the historical three-dimensional information, removing the interference of abnormal points by using an ICP (Iterative Closest Point) registration algorithm, and fusing the registration results of all the point clouds;

s133: projecting each registration result onto the X, Y and Z planes respectively according to a preset projection rule to form binarization results of the three planes;

s134: acquiring registration results of the point clouds of the current three-dimensional information and the point clouds of the historical three-dimensional information for multiple times according to the three steps, and selecting the registration result with the minimum sum of the projection areas of the three planes as the registration result of the final calculation transformation matrix;

s135: performing a difference calculation on the point cloud of the current three-dimensional information and the point cloud of the historical three-dimensional information to obtain the point cloud three-dimensional deviation matrix (x, y, z) between the two, namely the transformation matrix.
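The projection-and-area criterion of S133 and S134 can be sketched as follows; a minimal illustration under our own assumptions (3 mm grid cells, occupied-cell count as the binarized "area"), not the system's exact projection rule:

```python
import numpy as np

def projection_area(points, cell=3.0):
    """Project a registered point cloud onto the XY, YZ and XZ coordinate
    planes, binarise each projection on a `cell`-sized grid (3 mm here,
    mirroring the stated precision setting), and return the summed occupied
    area of the three planes. A tighter registration overlaps better and
    therefore covers fewer cells, so smaller is better."""
    total = 0.0
    for axes in ((0, 1), (1, 2), (0, 2)):
        # unique occupied grid cells of this plane's binary image
        occupied = np.unique(np.floor(points[:, axes] / cell).astype(np.int64),
                             axis=0)
        total += len(occupied) * cell * cell
    return total
```

Selecting the best of several registration attempts is then just `min(attempts, key=projection_area)` over the fused point clouds.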

Specifically, on the basis of the three-dimensional reconstruction result of S12, the current and historical data are registered; the purpose is to output a transformation matrix and finally guide the mechanical arm, on the basis of the teaching result, to position accurately to the local weld seam and complete the defect scan. Because the three-dimensional point cloud reconstructed in S12 exhibits a layering phenomenon and its point distribution does not correspond one-to-one with that of the historical three-dimensional information, direct overall registration produces a very large error; therefore the idea of layered registration followed by fusion into an overall registration is adopted. The specific steps are as follows: (1) the target point cloud is segmented with a region-growing point cloud segmentation algorithm, the purpose of segmentation being to separate the layers from one another; (2) the current segmented point cloud is registered with the historical point cloud (an ICP (Iterative Closest Point) registration algorithm removes the interference of abnormal points to a certain extent), and the registration results of all the point clouds are fused; (3) each registration result is projected onto the X, Y and Z planes respectively, the projection rule ensuring one projected point per 3 mm cell (set according to the precision requirement of the project), forming binarization results (black-and-white images) of the three planes; the purpose of the binarized projection is to measure the effect and check the coincidence consistency of the registration, since the higher the coincidence of the current and historical point clouds, the better the coincidence of their projections.
(4) Current-to-history registration results are obtained cyclically several times, and the best match is selected as the result for finally calculating the transformation matrix; the idea is to compute, for each result, the sum of the projection areas of the three planes in step (3) and take the minimum. (5) Finally, the deviation matrix (x, y, z) between the current and historical data is calculated and transmitted to the mechanical arm, which adjusts its posture according to the deviation values, moves the probe to the accurate position, and completes the scan in the direction parallel to the weld seam.
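The registration-and-deviation idea of step (5) can be sketched with a stripped-down, translation-only ICP; the function name, the brute-force nearest-neighbour search and the 20 % trimming ratio are our assumptions for illustration, not the full layered registration pipeline described above:

```python
import numpy as np

def icp_translation(src, dst, iters=20, trim=0.8):
    """Minimal point-to-point ICP restricted to translation: each pass matches
    every source point to its nearest neighbour in the target, drops the worst
    (1 - trim) share of matches as abnormal points, and shifts by the mean
    residual of the kept matches."""
    t = np.zeros(3)
    for _ in range(iters):
        moved = src + t
        # brute-force nearest neighbour (O(N*M); fine for a sketch)
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        resid = dst[d2.argmin(axis=1)] - moved
        keep = np.linalg.norm(resid, axis=1).argsort()[: int(trim * len(src))]
        t = t + resid[keep].mean(axis=0)
    return t  # the (x, y, z) deviation handed to the mechanical arm
```

A full implementation would also estimate rotation (e.g. via SVD of the matched-point covariance) and use a k-d tree for the neighbour search.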

It should be noted that, as shown in fig. 2, an embodiment of the present invention further provides a method for detecting and positioning a weld of an automated eddy current inspection system, where on the basis of the foregoing embodiment, before the step of moving a robot arm carrying a binocular vision camera and an eddy current probe to a position near a weld area to be detected according to preset teaching data in S11, the method further includes: s10: generating preset teaching data and preset historical three-dimensional data;

the step of S10 generating preset teaching data and preset historical three-dimensional data includes:

s101: moving the mechanical arm to the position near the welding line area to be detected in a manual teaching mode, and acquiring preset teaching data;

s102: acquiring an image of a welding seam area by using a binocular vision camera in cooperation with structured light, and acquiring original three-dimensional image data, namely preset historical three-dimensional information;

s103: and repeating the previous two steps to obtain preset teaching data when the mechanical arm reaches each welding seam area and preset historical three-dimensional information of each welding seam.

It should be noted that, in S101, the robot arm is moved to the vicinity of the position of the welding seam area to be detected in a manual teaching manner, and the step of acquiring preset teaching data includes: obtaining the position relation between a mechanical arm coordinate system and a binocular vision camera coordinate system through hand-eye calibration; the method comprises the following specific steps:

taking the current mechanical arm position as the origin, moving the mechanical arm, and photographing with the binocular vision camera to reconstruct the object;

calculating the position transformation relation between the current position of the object and the origin through three-dimensional registration, and recording it as B;

recording the position transformation relation of the mechanical arm before and after the movement as A;

solving a matrix equation: AX = XB, where A is the homogeneous matrix of the transformation of the mechanical arm end between the poses before and after the movement, B is the homogeneous matrix of the transformation observed by the binocular vision camera between the same two poses, and X is the hand-eye matrix to be solved; X is solved by calibrating multiple groups of data, which gives the position relation of the binocular vision camera relative to the mechanical arm.

In particular, a hand-eye calibration technique is used here; in general, the relationship between the mechanical arm coordinate system and the camera coordinate system must be obtained through calibration. Hand-eye calibration essentially solves the matrix equation AX = XB, where A is the homogeneous matrix of the transformation of the mechanical arm end coordinate system between two poses, B is the homogeneous matrix of the transformation observed by the camera (monocular or binocular) between the same two poses, and X is the hand-eye matrix to be solved; by collecting multiple motion pairs, X can be solved. The specific working process is as follows: taking the current mechanical arm position as the origin, the mechanical arm is moved and the object is reconstructed by photographing; the position transformation between the current object position and the origin is calculated through three-dimensional registration and recorded as B; at the same time, the position transformation of the mechanical arm before and after the movement is recorded as A; multiple groups of data are calibrated to solve X, giving the final position relation of the camera relative to the end of the mechanical arm.
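The AX = XB equation can be solved, for example, with a Park–Martin-style two-stage approach: first the rotation from the axis-angle vectors of each motion pair, then the translation by linear least squares. The sketch below is one standard way to do this under our own naming; the patent does not specify which solver is used:

```python
import numpy as np

def rot_log(R):
    """Axis-angle vector of a rotation matrix (assumes 0 < theta < pi)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def solve_ax_xb(As, Bs):
    """Solve AX = XB from several 4x4 motion pairs (A_i, B_i)."""
    # Rotation: AX = XB implies alpha_i = R_X beta_i for the axis-angle
    # vectors; fit R_X by orthogonal Procrustes (SVD with det correction).
    alphas = np.array([rot_log(A[:3, :3]) for A in As])
    betas = np.array([rot_log(B[:3, :3]) for B in Bs])
    U, _, Vt = np.linalg.svd(betas.T @ alphas)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    Rx = Vt.T @ D @ U.T
    # Translation: stack (R_Ai - I) t_X = R_X t_Bi - t_Ai, solve least squares.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```

At least two motion pairs with non-parallel rotation axes are required for a unique solution, which is why multiple groups of calibration data are collected.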

After the approximate shooting range of the binocular camera is determined by teaching once in S102, image acquisition is performed on the weld area by using the binocular vision camera (in cooperation with structured light), and original three-dimensional image information is acquired;

in S103, S101 and S102 are repeated until the teaching that the robot arm reaches the weld region and the acquisition of the history data of the three-dimensional information of all the welds are completed.

Compared with the prior art, the method has the following beneficial effects and innovation points: 1. in the process of carrying out eddy current nondestructive inspection on the frame welding seam defects: current and historical three-dimensional information of a welding seam area is shot by using a binocular camera, three-dimensional reconstruction and registration are realized, an industrial robot is accurately guided to replace a flaw detection mode of manually holding an eddy current probe, and automation efficiency is improved; 2. in the process of three-dimensional registration, in order to reduce the larger influence of errors brought by the traditional integral registration method, the layered registration precision can be preferentially ensured by adopting a mode of layered segmentation of three-dimensional point cloud information and local layered registration and fusion, so that the higher quality of the integral registration is achieved, the registration precision is greatly improved, and an important role is played in guiding relevant actions of a mechanical arm.

As shown in fig. 3, an embodiment of the present invention further provides a welding seam detection positioning device of an automatic eddy current inspection system, where the automatic eddy current inspection system includes a robot arm, a binocular vision camera mounted on the robot arm, and an eddy current probe clamped at a distal end of the robot arm, and the positioning device includes:

the mechanical arm moving module 21 is used for moving a mechanical arm carrying a binocular vision camera and an eddy current probe to the vicinity of a welding seam area to be detected according to preset teaching data;

the three-dimensional information acquisition module 22 is used for acquiring image data of the current position through the binocular vision camera to perform three-dimensional reconstruction so as to acquire current three-dimensional information;

the transformation matrix obtaining module 23 is configured to read preset historical three-dimensional information, and perform registration on the current three-dimensional information and the historical three-dimensional information to obtain a transformation matrix;

and the mechanical arm control module 24 is used for controlling the mechanical arm to move to the surface of the welding seam area according to the offset of the transformation matrix in three spatial directions and scanning in a direction parallel to the length of the welding seam.

It should be noted that, the three-dimensional information obtaining module 22 acquires image data of a current position through the binocular vision camera to perform three-dimensional reconstruction, and the step of obtaining current three-dimensional information includes:

obtaining left image I collected by binocular vision camera1And right picture I2

defining, for each image, the pixel coordinates (x, y), a matching window U centered on (x, y), a horizontal offset dx and a vertical offset dy, and the numbers of horizontal and vertical distance pixels i and j from (x, y);

calculating, for the left image I1 and the right image I2, the sum of the absolute phase differences of the pixels:

CSAD = Σ(i,j∈U) |I1(x+i, y+j) − I2(x+dx+i, y+dy+j)|

where U represents the selection window used during matching, (x, y) represents the currently matched pixel, I1(x+i, y+j) represents the absolute phase value of the pixel in the reference image at a horizontal distance of i pixels and a vertical distance of j pixels from the current matching point (x, y) within the window, I2(x+dx+i, y+dy+j) represents the absolute phase value of the corresponding pixel in the window centered on the pixel (x+dx, y+dy) to be matched, and dx and dy represent the horizontal and vertical offsets between the compared pixels of the left image I1 and the right image I2;

substituting the epipolar constraint dy = 0 for the left image I1 and the right image I2, so that only the horizontal offset dx remains between the two images, and outputting the disparity value required in stereo matching, namely the current three-dimensional information.

It should be noted that the step of reading the preset historical three-dimensional information by the transformation matrix obtaining module 23, and registering the current three-dimensional information and the historical three-dimensional information to obtain the transformation matrix includes:

partitioning the point cloud of the current three-dimensional information by using a region growing point cloud partitioning algorithm to obtain a current partitioned point cloud;

registering the current segmentation point cloud with the point cloud of the historical three-dimensional information, removing the interference of abnormal points by using an ICP (Iterative Closest Point) registration algorithm, and fusing the registration results of all the point clouds;

projecting each registration result onto the X, Y and Z planes respectively according to a preset projection rule to form binarization results of the three planes;

acquiring registration results of the point clouds of the current three-dimensional information and the point clouds of the historical three-dimensional information for multiple times according to the three steps, and selecting the registration result with the minimum sum of the projection areas of the three planes as the registration result of the final calculation transformation matrix;

performing a difference calculation on the point cloud of the current three-dimensional information and the point cloud of the historical three-dimensional information to obtain the point cloud three-dimensional deviation matrix (x, y, z) between the two, namely the transformation matrix.

The welding seam detection positioning device of the automatic eddy current inspection system further comprises a preset data acquisition module, which is used for generating preset teaching data and preset historical three-dimensional data;

the step of the preset data acquisition module generating the preset teaching data and the preset historical three-dimensional data includes:

moving the mechanical arm to the position near the welding line area to be detected in a manual teaching mode, and acquiring preset teaching data;

acquiring an image of a welding seam area by using a binocular vision camera in cooperation with structured light, and acquiring original three-dimensional image data, namely preset historical three-dimensional information;

and repeating the previous two steps to obtain preset teaching data when the mechanical arm reaches each welding seam area and preset historical three-dimensional information of each welding seam.

It should be noted that, moving the mechanical arm to the vicinity of the position of the welding seam area to be detected in a manual teaching manner, and acquiring preset teaching data includes: obtaining the position relation between a mechanical arm coordinate system and a binocular vision camera coordinate system through hand-eye calibration;

the step of obtaining the position relation between the mechanical arm coordinate system and the binocular vision camera coordinate system through hand-eye calibration comprises the following steps:

taking the current mechanical arm position as the origin, moving the mechanical arm, and photographing with the binocular vision camera to reconstruct the object;

calculating the position transformation relation between the current position of the object and the origin through three-dimensional registration, and recording it as B;

recording the position transformation relation of the mechanical arm before and after the movement as A;

solving a matrix equation: AX = XB, where A is the homogeneous matrix of the transformation of the mechanical arm end between the poses before and after the movement, B is the homogeneous matrix of the transformation observed by the binocular vision camera between the same two poses, and X is the hand-eye matrix to be solved; X is solved by calibrating multiple groups of data, which gives the position relation of the binocular vision camera relative to the mechanical arm.

For the description of the features in the embodiment corresponding to fig. 3, reference may be made to the related description of the embodiments corresponding to fig. 1 to fig. 2, which is not repeated here.

The above detailed description is provided for the weld joint detection positioning method and the positioning device of the automatic eddy current inspection system according to the embodiment of the present invention. The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.

Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
