Docking ring capture point measurement method and system based on binocular stereo vision

Document No.: 806437  Publication date: 2021-03-26  Language: Chinese

Reading note: This invention, "Docking ring capture point measurement method and system based on binocular stereo vision" (一种基于双目立体视觉的对接环抓捕点测量方法及系统), was designed and created by 杜晓东, 陈磊, 丁健, 倪文成, 孔旭, 危清清, 高升 and 谭启蒙 on 2020-09-24. Its main content is as follows: The invention discloses a method and a system for measuring the capture point of a docking ring based on binocular stereo vision. In the method, a color binocular camera is mounted on and fixedly connected to the end of a robotic arm so that the common field of view of the two cameras covers the feature region around the docking-ring capture point. The two cameras take the inner and outer edges of the docking ring as the main recognition objects: an image-processing algorithm extracts the docking-ring feature information near the capture point, stereo matching then yields the three-dimensional point clouds corresponding to the inner and outer edges of the docking ring, and finally the point-cloud data are used to solve the position and attitude of the capture point relative to the arm end. The invention directly takes typical natural features of a non-cooperative target spacecraft as the recognition objects; it requires neither a cooperative marker installed on the target satellite to assist measurement nor prior information such as the dimensions of the docking-ring features.

1. A docking ring capture point measurement method based on binocular stereo vision is characterized by comprising the following steps:

(1) taking the docking ring of the non-cooperative target as the recognition object, and imaging the docking-ring capture point with a binocular camera at the end of the robotic arm; wherein, within the vision measurement distance range, both the left eye camera and the right eye camera can capture images of the inner and outer edges of the docking ring;

(2) filtering and denoising a left eye original image collected by a left eye camera and a right eye original image collected by a right eye camera;

(3) performing color-model conversion on the filtered and denoised left eye original image and right eye original image obtained in step (2), converting the RGB images into HSV images, and obtaining a left eye V luminance image and a right eye V luminance image from the HSV images;

(4) carrying out epipolar line correction on the left eye V brightness image and the right eye V brightness image obtained in the step (3) to align the rows of the left eye V brightness image and the right eye V brightness image;

(5) carrying out binarization processing on the left eye V brightness image and the right eye V brightness image obtained in the step (4) after the epipolar line correction;

(6) performing parameter retrieval of the inner and outer edges of the docking ring on the binarized left eye V luminance image and right eye V luminance image obtained in step (5), wherein the parameters of the inner and outer edges of the docking ring comprise a left-eye inner-edge arc-segment point set Φ_L, a left-eye outer-edge arc-segment point set Ψ_L, a right-eye inner-edge arc-segment point set Φ_R, and a right-eye outer-edge arc-segment point set Ψ_R;

(7) selecting a subset from the left-eye inner-edge arc-segment point set Φ_L, the left-eye outer-edge arc-segment point set Ψ_L, the right-eye inner-edge arc-segment point set Φ_R and the right-eye outer-edge arc-segment point set Ψ_R, performing three-dimensional reconstruction, and obtaining three-dimensional point-cloud data of the docking-ring edges;

(8) establishing a target coordinate system Σ_T using the three-dimensional point-cloud data obtained in step (7), and calculating the pose parameters of the docking-ring capture point relative to the preset camera mounting coordinate system Σ_C, wherein the pose parameters comprise a rotation matrix ^C R_T and a translation vector ^C T_T;

(9) obtaining the relative relation between the preset camera mounting coordinate system Σ_C and the preset robotic-arm end coordinate system Σ_E, and thereby obtaining the relative pose relation between the robotic-arm end and the docking-ring capture point.

2. The binocular stereo vision based docking ring capture point measurement method of claim 1, wherein in step (4) the correction formulas for epipolar line correction are:

λ_L [u′_L v′_L 1]^T = M′ R_rec R_L^{-1} M_L^{-1} [u_L v_L 1]^T
λ_R [u′_R v′_R 1]^T = M′ R_rec R_R^{-1} M_R^{-1} [u_R v_R 1]^T

wherein [u_L v_L 1]^T and [u_R v_R 1]^T are respectively the homogeneous pixel coordinates of a spatial point in the left eye V luminance image and the right eye V luminance image before correction; [u′_L v′_L 1]^T and [u′_R v′_R 1]^T are respectively the homogeneous pixel coordinates of the spatial point in the corrected left eye V luminance image and right eye V luminance image; M_L and M_R are respectively the intrinsic parameter matrices of the left eye camera and the right eye camera; R_L and R_R are respectively the rotation matrices of the left eye and right eye camera coordinate systems; M′ is the corrected camera intrinsic parameter matrix; R_rec is the corrected camera coordinate-system rotation matrix; and λ_L, λ_R ≠ 0 are constants.

3. The binocular stereo vision based docking ring capture point measurement method of claim 2, wherein the corrected camera coordinate-system rotation matrix R_rec is determined as follows:

the epipole direction is taken as e_1:

e_1 = T_e / ‖T_e‖

wherein T_e is the translation vector between the projection centers of the left eye camera and the right eye camera;

the direction orthogonal to the epipole direction e_1 and the camera principal-axis direction vector T_Z is taken as e_2:

e_2 = T_Z × e_1

and the direction orthogonal to e_1 and e_2 is taken as e_3:

e_3 = e_1 × e_2

The finally obtained corrected camera coordinate-system rotation matrix then has e_1^T, e_2^T and e_3^T as its rows:

R_rec = [e_1^T; e_2^T; e_3^T]

4. The binocular stereo vision based docking ring capture point measurement method of claim 2, wherein in step (6) the retrieval of the inner- and outer-edge parameters of the docking ring from the binarized left eye V luminance image and right eye V luminance image obtained in step (5) comprises the following steps:

performing a connected-domain search in the binarized left eye and right eye V luminance images, determining the region occupied by the docking ring, and eliminating interference present in the images;

performing crossing detection along the radial direction, i.e. the direction traversing the docking-ring annulus, to obtain the inner and outer edges of the capture-point region of the docking ring;

filtering the edges according to the smoothness characteristic of the docking-ring edges;

and performing linear interpolation between consecutive pixels to realize a smooth transition of the edges.

5. The binocular stereo vision based docking ring capture point measurement method of claim 1, wherein: in step (7), the coordinates of the selected subset satisfy the following condition:

wherein v_Li and v_Ri are respectively the image ordinates of a preset feature point P_i in the left eye camera and the right eye camera, and v_Lo and v_Ro are respectively the image ordinates of the image-plane center points of the left eye camera and the right eye camera.

6. The binocular stereo vision based docking ring capture point measurement method of claim 5, wherein: in the step (7), the step of obtaining the three-dimensional point cloud data of the edge of the docking ring comprises the following steps:

according to the imaging projection relations of the left eye camera and the right eye camera:

μ_L [u_Li v_Li 1]^T = H_L [^C X_i ^C Y_i ^C Z_i 1]^T
μ_R [u_Ri v_Ri 1]^T = H_R [^C X_i ^C Y_i ^C Z_i 1]^T

wherein μ_L, μ_R ≠ 0 are constants, H_L is the projective transformation matrix of the left eye camera, and H_R is the projective transformation matrix of the right eye camera;

and simultaneously constructing a matrix equation to obtain:

K ^C P_i = U;

wherein K is the coefficient matrix and U the constant vector obtained by stacking the above projection relations;

solving yields the three-dimensional coordinates of the preset feature point P_i in the camera mounting coordinate system:

^C P_i = (K^T K)^{-1} K^T U;

wherein (u_Li, v_Li) and (u_Ri, v_Ri) are respectively the image coordinates of the preset feature point P_i in the left eye camera and the right eye camera; R_Lj, T_Ln, R_Rj, T_Rn (j = 1, 2, …, 9; n = 1, 2, 3) are the elements of the camera projective transformation matrices; K and U are the coefficient matrices obtained when the equations are combined; ^C P_i is the three-dimensional coordinate vector of the preset feature point P_i in the camera mounting coordinate system; and ^C X_i, ^C Y_i and ^C Z_i are respectively the X, Y and Z coordinates of the preset feature point P_i in the camera mounting coordinate system, i.e. ^C P_i = [^C X_i ^C Y_i ^C Z_i]^T.

7. The binocular stereo vision based docking ring capture point measurement method of claim 6, wherein in step (8) establishing the target coordinate system Σ_T and calculating the pose parameters of the docking-ring capture point relative to the preset camera mounting coordinate system Σ_C comprises the following steps:

the center position of the docking-ring annulus is solved using the three-dimensional space coordinates of the left-eye inner-edge, right-eye inner-edge, left-eye outer-edge and right-eye outer-edge arc-segment point sets:

X_avg = (1/(n_1+n_2)) Σ_{q=1}^{n_1+n_2} X_q,  Y_avg = (1/(n_1+n_2)) Σ_{q=1}^{n_1+n_2} Y_q,  Z_avg = (1/(n_1+n_2)) Σ_{q=1}^{n_1+n_2} Z_q

wherein O_avg = (X_avg, Y_avg, Z_avg) is the center position of the docking-ring annulus, and X_avg, Y_avg and Z_avg are respectively its X, Y and Z coordinates;

for the reconstructed n_1 + n_2 spatial points, a plane Γ is obtained by fitting, the equation of which is:

AX+BY+CZ+D=0

wherein A, B, C, D are the coefficients of the plane Γ equation;

substituting the three-dimensional coordinates of the n_1 + n_2 spatial points into the above formula to construct a matrix equation:

AX-l=0;

in the formula, X = [A B C]^T and l = [−D −D … −D]^T; A is the (n_1+n_2)×3 matrix formed from the three-dimensional coordinates of the spatial points, X is the vector of plane-equation coefficients A, B, C; X_q, Y_q and Z_q (q = 1, 2, …, n_1+n_2) are respectively the X, Y and Z coordinates of the q-th spatial point; and l is the vector formed from the plane-equation coefficient D;

thus the plane-equation coefficient vector, which gives the normal vector of the plane Γ, i.e. of the observed surface of the docking ring, is determined by least squares as:

X = (A^T A)^{-1} A^T l;

take a preset point OavgProjection O on the fitting plane ΓTAs a target coordinate system ∑TOrigin position of (2):

projecting the left-eye outer-edge arc-segment point set and the right-eye outer-edge arc-segment point set onto the plane Γ, and solving the distance d between each projection point and the point O_T; the preset point P_T is taken as the projection point nearest to the point O_T;

taking the direction from O_T to P_T as the X axis of the target coordinate system Σ_T:

n = (P_T − O_T) / ‖P_T − O_T‖

taking the normal vector of the plane Γ as the Z axis of the target coordinate system Σ_T:

a = [A B C]^T / ‖[A B C]^T‖

the Y axis of the target coordinate system Σ_T is then given by:

o = a × n

the rotation matrix of the target coordinate system Σ_T relative to the camera mounting coordinate system Σ_C is:

^C R_T = [n o a];

the translation vector of the target coordinate system Σ_T relative to the camera mounting coordinate system Σ_C is:

^C T_T = [^C X_T ^C Y_T ^C Z_T]^T.

8. The binocular stereo vision based docking ring capture point measurement method of claim 6, wherein in step (9) the relative relation between the preset camera mounting coordinate system Σ_C and the preset robotic-arm end coordinate system Σ_E is:

^E P_i = ^E R_C · ^C P_i + ^E T_C

wherein ^E R_C and ^E T_C are respectively the rotation matrix and translation vector of the camera mounting coordinate system relative to the robotic-arm end coordinate system, and ^C P_i and ^E P_i respectively denote the coordinates of the preset feature point P_i in the camera mounting coordinate system Σ_C and in the robotic-arm end coordinate system Σ_E.

9. The binocular stereo vision based docking ring capture point measurement method of claim 8, wherein in step (9) the relative pose relation between the robotic-arm end and the docking-ring capture point is:

^E R_T = ^E R_C · ^C R_T,  ^E T_T = ^E R_C · ^C T_T + ^E T_C

wherein ^E R_T and ^E T_T are respectively the rotation matrix and translation vector of the target coordinate system Σ_T relative to the robotic-arm end coordinate system Σ_E.

10. A docking ring capture point measurement system based on binocular stereo vision, characterized by comprising:

a first module, configured to take the docking ring of the non-cooperative target as the recognition object and image the docking-ring capture point with a binocular camera at the end of the robotic arm; wherein, within the vision measurement distance range, both the left eye camera and the right eye camera can capture images of the inner and outer edges of the docking ring;

the second module is used for carrying out filtering and denoising processing on a left eye original image acquired by the left eye camera and a right eye original image acquired by the right eye camera;

a third module, configured to perform color-model conversion on the filtered and denoised left eye original image and right eye original image obtained by the second module, convert the RGB images into HSV images, and obtain a left eye V luminance image and a right eye V luminance image from the HSV images;

the fourth module is used for carrying out epipolar line correction on the left eye V brightness image and the right eye V brightness image obtained by the third module so as to align the rows of the left eye V brightness image and the right eye V brightness image;

the fifth module is used for carrying out binarization processing on the left eye V brightness image and the right eye V brightness image obtained by the fourth module after the epipolar line correction;

a sixth module, configured to perform parameter retrieval of the inner and outer edges of the docking ring on the binarized left eye V luminance image and right eye V luminance image obtained by the fifth module, wherein the parameters of the inner and outer edges of the docking ring comprise a left-eye inner-edge arc-segment point set Φ_L, a left-eye outer-edge arc-segment point set Ψ_L, a right-eye inner-edge arc-segment point set Φ_R, and a right-eye outer-edge arc-segment point set Ψ_R;

a seventh module, configured to select a subset from the left-eye inner-edge arc-segment point set Φ_L, the left-eye outer-edge arc-segment point set Ψ_L, the right-eye inner-edge arc-segment point set Φ_R and the right-eye outer-edge arc-segment point set Ψ_R, perform three-dimensional reconstruction, and obtain three-dimensional point-cloud data of the docking-ring edges;

an eighth module, configured to establish a target coordinate system Σ_T using the three-dimensional point-cloud data obtained by the seventh module, and calculate the pose parameters of the docking-ring capture point relative to the preset camera mounting coordinate system Σ_C, wherein the pose parameters comprise a rotation matrix ^C R_T and a translation vector ^C T_T;

and a ninth module, configured to obtain the relative relation between the preset camera mounting coordinate system Σ_C and the preset robotic-arm end coordinate system Σ_E, and thereby obtain the relative pose relation between the robotic-arm end and the docking-ring capture point.

Technical Field

The invention belongs to the technical field of relative measurement of capture points of non-cooperative spacecraft, and particularly relates to a method and a system for measuring the capture point of a docking ring based on binocular stereo vision.

Background

Conventionally, a space target being cooperative means that the target can deliver relative motion state information to the servicing spacecraft, or provide conditions that facilitate operations such as approach and capture. Such spacecraft are typically equipped with feature markers for measurement and for robotic-arm grasping or docking. The spacecraft servicing projects that have been successfully tested on orbit mostly address cooperative targets, such as Japan's ETS-VII and the United States' Orbital Express. In these on-orbit servicing tasks, visual means are usually used to measure the relative pose of the target, and a cooperative marker assisting the visual measurement is in most cases installed on the target. The pose measurement algorithm is typically a PnP algorithm: assuming a calibrated pinhole camera model, an image of N spatial points with known coordinates in the object coordinate system is taken; the pixel coordinates of the N points and the constraint relations among them determine the coordinates of the N points in the camera coordinate system, from which the relative pose between the target and the camera can be calculated.

In contrast, non-cooperative targets are spacecraft that cannot provide relative state information to the servicing spacecraft and about which no information useful for approach and capture is available. At present, most on-orbit satellites were not designed to receive on-orbit servicing, so no cooperative markers assisting rendezvous measurement are installed on them, and they therefore belong to the category of non-cooperative targets. Non-cooperative targets cannot provide effective cooperative information to the servicing spacecraft, which poses significant challenges for relative measurement.

Disclosure of Invention

The technical problem solved by the invention is as follows: the method and system for measuring the capture point of a docking ring based on binocular stereo vision solve the problem that, in the task of capturing a non-cooperative spacecraft bearing a docking ring, it is difficult for a space robotic arm to measure the relative pose of the capture point on the target spacecraft.

The purpose of the invention is achieved by the following technical solution: a docking ring capture point measurement method based on binocular stereo vision comprises the following steps: (1) taking the docking ring of the non-cooperative target as the recognition object, and imaging the docking-ring capture point with a binocular camera at the end of the robotic arm, wherein, within the vision measurement distance range, both the left eye camera and the right eye camera can capture images of the inner and outer edges of the docking ring; (2) filtering and denoising the left eye original image collected by the left eye camera and the right eye original image collected by the right eye camera; (3) performing color-model conversion on the filtered and denoised left eye and right eye original images obtained in step (2), converting the RGB images into HSV images, and obtaining a left eye V luminance image and a right eye V luminance image from the HSV images; (4) performing epipolar line correction on the left eye and right eye V luminance images obtained in step (3) so that their rows are aligned; (5) binarizing the epipolar-corrected left eye and right eye V luminance images obtained in step (4); (6) performing parameter retrieval of the inner and outer edges of the docking ring on the binarized left eye and right eye V luminance images obtained in step (5), wherein the parameters comprise a left-eye inner-edge arc-segment point set Φ_L, a left-eye outer-edge arc-segment point set Ψ_L, a right-eye inner-edge arc-segment point set Φ_R, and a right-eye outer-edge arc-segment point set Ψ_R; (7) selecting a subset from Φ_L, Ψ_L, Φ_R and Ψ_R, performing three-dimensional reconstruction, and obtaining three-dimensional point-cloud data of the docking-ring edges; (8) establishing a target coordinate system Σ_T using the three-dimensional point-cloud data obtained in step (7), and calculating the pose parameters of the docking-ring capture point relative to the preset camera mounting coordinate system Σ_C, the pose parameters comprising a rotation matrix ^C R_T and a translation vector ^C T_T; (9) obtaining the relative relation between the preset camera mounting coordinate system Σ_C and the preset robotic-arm end coordinate system Σ_E, and thereby obtaining the relative pose relation between the robotic-arm end and the docking-ring capture point.
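The preprocessing in steps (2), (3) and (5) can be sketched in a few lines of numpy. This is a minimal illustration, not the patent's implementation: the function names, the toy image and the threshold value of 128 are assumptions (the patent does not specify the thresholding rule), and it exploits the fact that the V channel of HSV is simply the per-pixel maximum over R, G and B.

```python
import numpy as np

def rgb_to_v(rgb):
    """V (value) channel of HSV: the per-pixel maximum over R, G, B,
    so a full HSV conversion is not needed for the luminance image."""
    return rgb.max(axis=2)

def binarize(v, thresh=128):
    """Fixed-threshold binarization of the V luminance image
    (128 is an illustrative value, not taken from the patent)."""
    return (v >= thresh).astype(np.uint8)

# toy 2x2 "color image"
img = np.array([[[200, 10, 10], [10, 10, 10]],
                [[0, 255, 0],   [30, 40, 50]]], dtype=np.uint8)
v = rgb_to_v(img)   # per-pixel max of the three channels
b = binarize(v)     # 1 where the ring is bright, 0 elsewhere
```

In practice a denoising filter (step (2)) would be applied to `img` before the conversion; it is omitted here to keep the sketch minimal.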

In the above docking ring capture point measurement method based on binocular stereo vision, in step (4), the correction formulas for epipolar line correction are:

λ_L [u′_L v′_L 1]^T = M′ R_rec R_L^{-1} M_L^{-1} [u_L v_L 1]^T
λ_R [u′_R v′_R 1]^T = M′ R_rec R_R^{-1} M_R^{-1} [u_R v_R 1]^T

wherein [u_L v_L 1]^T and [u_R v_R 1]^T are respectively the homogeneous pixel coordinates of a spatial point in the left eye V luminance image and the right eye V luminance image before correction; [u′_L v′_L 1]^T and [u′_R v′_R 1]^T are respectively the homogeneous pixel coordinates of the spatial point in the corrected left eye V luminance image and right eye V luminance image; M_L and M_R are respectively the intrinsic parameter matrices of the left eye camera and the right eye camera; R_L and R_R are respectively the rotation matrices of the left eye and right eye camera coordinate systems; M′ is the corrected camera intrinsic parameter matrix; R_rec is the corrected camera coordinate-system rotation matrix; and λ_L, λ_R ≠ 0 are constants.

In the above docking ring capture point measurement method based on binocular stereo vision, the corrected camera coordinate-system rotation matrix R_rec is determined as follows:

the epipole direction is taken as e_1:

e_1 = T_e / ‖T_e‖

wherein T_e is the translation vector between the projection centers of the left eye camera and the right eye camera;

the direction orthogonal to the epipole direction e_1 and the camera principal-axis direction vector T_Z is taken as e_2:

e_2 = T_Z × e_1

and the direction orthogonal to e_1 and e_2 is taken as e_3:

e_3 = e_1 × e_2

The finally obtained corrected camera coordinate-system rotation matrix then has e_1^T, e_2^T and e_3^T as its rows:

R_rec = [e_1^T; e_2^T; e_3^T]
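The construction of R_rec above can be sketched as follows. This is an illustrative numpy sketch (the function name is an assumption); e_2 is additionally normalized here so that the result is an orthonormal rotation matrix even when T_Z is not perpendicular to the baseline.

```python
import numpy as np

def rectifying_rotation(Te, Tz):
    """Rows of R_rec: e1 along the epipole (baseline) direction,
    e2 = Tz x e1 (normalized), e3 = e1 x e2."""
    e1 = Te / np.linalg.norm(Te)
    e2 = np.cross(Tz, e1)
    e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)        # unit length by construction
    return np.vstack([e1, e2, e3])

# baseline along X, principal axis along Z -> R_rec is the identity
R = rectifying_rotation(np.array([1.0, 0.0, 0.0]),
                        np.array([0.0, 0.0, 1.0]))
```

For a general stereo rig the resulting R_rec rotates both camera frames so that their rows align with the baseline, which is what makes the subsequent row-aligned stereo matching possible.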

In the above docking ring capture point measurement method based on binocular stereo vision, in step (6), the retrieval of the inner- and outer-edge parameters of the docking ring from the binarized left eye V luminance image and right eye V luminance image obtained in step (5) comprises the following steps: performing a connected-domain search in the binarized left eye and right eye V luminance images, determining the region occupied by the docking ring, and eliminating interference present in the images; performing crossing detection along the radial direction, i.e. the direction traversing the docking-ring annulus, to obtain the inner and outer edges of the capture-point region of the docking ring; filtering the edges according to the smoothness characteristic of the docking-ring edges; and performing linear interpolation between consecutive pixels to realize a smooth transition of the edges.
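The crossing detection can be illustrated on a single binary scan line. This is a simplified sketch under stated assumptions: the function name and toy scan line are illustrative, and the connected-domain search, edge filtering and interpolation steps are omitted; the idea is only that 0→1 and 1→0 transitions along a line traversing the annulus give candidate edge pixels.

```python
import numpy as np

def edge_transitions(scanline):
    """Indices where a binary scan line traversing the ring annulus
    crosses 0->1 and 1->0; these are candidate edge pixels."""
    d = np.diff(scanline.astype(np.int8))
    rising = np.flatnonzero(d == 1) + 1    # background -> ring
    falling = np.flatnonzero(d == -1) + 1  # ring -> background
    return rising, falling

# a scan line crossing the annulus twice (two ring-wall segments)
line = np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 0])
rise, fall = edge_transitions(line)
```

Repeating this along many radial directions and keeping the transitions that belong to the docking-ring connected domain yields the inner- and outer-edge point sets Φ and Ψ.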

In the method for measuring the docking ring capture point based on binocular stereo vision, in the step (7), the coordinates of the selected subset satisfy the following conditions:

wherein v_Li and v_Ri are respectively the image ordinates of a preset feature point P_i in the left eye camera and the right eye camera, and v_Lo and v_Ro are respectively the image ordinates of the image-plane center points of the left eye camera and the right eye camera.

In the above method for measuring the docking ring capture point based on binocular stereo vision, in the step (7), the step of obtaining the three-dimensional point cloud data of the docking ring edge includes the following steps:

according to the imaging projection relations of the left eye camera and the right eye camera:

μ_L [u_Li v_Li 1]^T = H_L [^C X_i ^C Y_i ^C Z_i 1]^T
μ_R [u_Ri v_Ri 1]^T = H_R [^C X_i ^C Y_i ^C Z_i 1]^T

wherein μ_L, μ_R ≠ 0 are constants, H_L is the projective transformation matrix of the left eye camera, and H_R is the projective transformation matrix of the right eye camera;

and simultaneously constructing a matrix equation to obtain:

K ^C P_i = U;

wherein K is the coefficient matrix and U the constant vector obtained by stacking the above projection relations;

solving yields the three-dimensional coordinates of the preset feature point P_i in the camera mounting coordinate system:

^C P_i = (K^T K)^{-1} K^T U;

wherein (u_Li, v_Li) and (u_Ri, v_Ri) are respectively the image coordinates of the preset feature point P_i in the left eye camera and the right eye camera; R_Lj, T_Ln, R_Rj, T_Rn (j = 1, 2, …, 9; n = 1, 2, 3) are the elements of the camera projective transformation matrices; K and U are the coefficient matrices obtained when the equations are combined; ^C P_i is the three-dimensional coordinate vector of the preset feature point P_i in the camera mounting coordinate system; and ^C X_i, ^C Y_i and ^C Z_i are respectively the X, Y and Z coordinates of the preset feature point P_i in the camera mounting coordinate system, i.e. ^C P_i = [^C X_i ^C Y_i ^C Z_i]^T.
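The least-squares solution ^C P_i = (K^T K)^{-1} K^T U can be sketched with standard linear triangulation. This is an illustrative numpy implementation, not the patent's exact element-wise formulation: it assumes 3×4 projection matrices, and the toy rig (identity intrinsics, right camera one unit along X) is an assumption chosen so the answer can be checked by hand.

```python
import numpy as np

def triangulate(PL, PR, uvL, uvR):
    """Stack both projection relations, eliminate the scale factors
    mu_L, mu_R, and solve K @ P = U in the least-squares sense."""
    rows, rhs = [], []
    for P, (u, v) in ((PL, uvL), (PR, uvR)):
        # mu [u v 1]^T = P [X Y Z 1]^T  =>  (u*p2 - p0) . [X Y Z 1] = 0
        rows.append(u * P[2, :3] - P[0, :3]); rhs.append(P[0, 3] - u * P[2, 3])
        rows.append(v * P[2, :3] - P[1, :3]); rhs.append(P[1, 3] - v * P[2, 3])
    K, U = np.array(rows), np.array(rhs)
    return np.linalg.lstsq(K, U, rcond=None)[0]

# toy rig: left camera at the origin, right camera translated
# one unit along X, both with identity intrinsics
PL = np.hstack([np.eye(3), np.zeros((3, 1))])
PR = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = triangulate(PL, PR, (0.0, 0.0), (-0.5, 0.0))  # a point at depth 2
```

Applying this to every matched edge-pixel pair in the selected subsets of Φ_L, Ψ_L, Φ_R, Ψ_R produces the three-dimensional edge point cloud.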

In the above docking ring capture point measurement method based on binocular stereo vision, in step (8), establishing the target coordinate system Σ_T and calculating the pose parameters of the docking-ring capture point relative to the preset camera mounting coordinate system Σ_C comprises the following steps:

the center position of the docking-ring annulus is solved using the three-dimensional space coordinates of the left-eye inner-edge, right-eye inner-edge, left-eye outer-edge and right-eye outer-edge arc-segment point sets:

X_avg = (1/(n_1+n_2)) Σ_{q=1}^{n_1+n_2} X_q,  Y_avg = (1/(n_1+n_2)) Σ_{q=1}^{n_1+n_2} Y_q,  Z_avg = (1/(n_1+n_2)) Σ_{q=1}^{n_1+n_2} Z_q

wherein O_avg = (X_avg, Y_avg, Z_avg) is the center position of the docking-ring annulus, and X_avg, Y_avg and Z_avg are respectively its X, Y and Z coordinates;

for the reconstructed n_1 + n_2 spatial points, a plane Γ is obtained by fitting, the equation of which is:

AX+BY+CZ+D=0

wherein A, B, C, D are the coefficients of the plane Γ equation;

substituting the three-dimensional coordinates of the n_1 + n_2 spatial points into the above formula to construct a matrix equation:

AX-l=0;

in the formula, X = [A B C]^T and l = [−D −D … −D]^T; A is the (n_1+n_2)×3 matrix formed from the three-dimensional coordinates of the spatial points, X is the vector of plane-equation coefficients A, B, C; X_q, Y_q and Z_q (q = 1, 2, …, n_1+n_2) are respectively the X, Y and Z coordinates of the q-th spatial point; and l is the vector formed from the plane-equation coefficient D;

thus the plane-equation coefficient vector, which gives the normal vector of the plane Γ, i.e. of the observed surface of the docking ring, is determined by least squares as:

X = (A^T A)^{-1} A^T l;

take a preset point OavgProjection O on the fitting plane ΓTAs a target coordinate system ∑TOrigin position of (2):

projecting the left-eye outer-edge arc-segment point set and the right-eye outer-edge arc-segment point set onto the plane Γ, and solving the distance d between each projection point and the point O_T; the preset point P_T is taken as the projection point nearest to the point O_T;

taking the direction from O_T to P_T as the X axis of the target coordinate system Σ_T:

n = (P_T − O_T) / ‖P_T − O_T‖

taking the normal vector of the plane Γ as the Z axis of the target coordinate system Σ_T:

a = [A B C]^T / ‖[A B C]^T‖

the Y axis of the target coordinate system Σ_T is then given by:

o = a × n

the rotation matrix of the target coordinate system Σ_T relative to the camera mounting coordinate system Σ_C is:

^C R_T = [n o a];

the translation vector of the target coordinate system Σ_T relative to the camera mounting coordinate system Σ_C is:

^C T_T = [^C X_T ^C Y_T ^C Z_T]^T.
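The plane fit X = (A^T A)^{-1} A^T l and the assembly of ^C R_T = [n o a] can be sketched as follows. This is an illustrative numpy sketch under stated assumptions: the function names and the toy data are not from the patent, D is fixed to 1 to resolve the scale ambiguity of the plane equation, and the X-axis vector is re-projected into the plane before normalization for numerical robustness.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane AX + BY + CZ + D = 0, solved as A_mat @ X = l
    with D fixed to 1, i.e. X = (A^T A)^-1 A^T l."""
    A_mat = np.asarray(points, dtype=float)
    l = -np.ones(len(A_mat))          # l = [-D ... -D]^T with D = 1
    X, *_ = np.linalg.lstsq(A_mat, l, rcond=None)
    return X                          # plane coefficients (A, B, C)

def target_frame(normal, OT, PT):
    """^C R_T = [n o a]: X axis n from origin O_T toward the nearest
    outer-edge projection P_T, Z axis a along the plane normal,
    o = a x n completing a right-handed frame."""
    a = normal / np.linalg.norm(normal)
    n = PT - OT
    n -= a * (a @ n)                  # keep n inside the plane
    n /= np.linalg.norm(n)
    o = np.cross(a, n)                # Y = Z x X
    return np.column_stack([n, o, a])

pts = [(0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1)]  # points on plane Z = 1
coef = fit_plane(pts)                 # Z = 1 -> 0*X + 0*Y - 1*Z + 1 = 0
R = target_frame(np.array([0.0, 0.0, 1.0]),
                 np.array([0.0, 0.0, 1.0]),
                 np.array([1.0, 0.0, 1.0]))
```

With the toy data the capture-point frame coincides with the camera frame, so `R` is the identity; with real edge point clouds `R` is the rotation part of the measured capture-point pose.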

In the above docking ring capture point measurement method based on binocular stereo vision, in step (9), the relative relation between the preset camera mounting coordinate system Σ_C and the preset robotic-arm end coordinate system Σ_E is:

^E P_i = ^E R_C · ^C P_i + ^E T_C

wherein ^E R_C and ^E T_C are respectively the rotation matrix and translation vector of the camera mounting coordinate system relative to the robotic-arm end coordinate system, and ^C P_i and ^E P_i respectively denote the coordinates of the preset feature point P_i in the camera mounting coordinate system Σ_C and in the robotic-arm end coordinate system Σ_E.

In the above docking ring capture point measurement method based on binocular stereo vision, in step (9), the relative pose relation between the robotic-arm end and the docking-ring capture point is:

^E R_T = ^E R_C · ^C R_T,  ^E T_T = ^E R_C · ^C T_T + ^E T_C

wherein ^E R_T and ^E T_T are respectively the rotation matrix and translation vector of the target coordinate system Σ_T relative to the robotic-arm end coordinate system Σ_E.
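The composition of the hand-eye transform with the camera-frame pose can be sketched as follows. This is illustrative numpy code: the numerical hand-eye values (a 90-degree rotation about Z and a 0.1 offset along X) are assumptions chosen for the example, not values from the patent.

```python
import numpy as np

def end_pose_of_target(ER_C, ET_C, CR_T, CT_T):
    """Compose end<-camera with camera<-target:
    ER_T = ER_C @ CR_T,  ET_T = ER_C @ CT_T + ET_C."""
    return ER_C @ CR_T, ER_C @ CT_T + ET_C

# illustrative hand-eye calibration: camera frame rotated 90 degrees
# about Z and offset 0.1 along X of the arm-end frame
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
ER_T, ET_T = end_pose_of_target(Rz, np.array([0.1, 0.0, 0.0]),
                                np.eye(3), np.array([0.0, 0.0, 2.0]))
```

The hand-eye quantities ^E R_C and ^E T_C are fixed by the camera mounting and are determined once by calibration; only ^C R_T and ^C T_T change from frame to frame during the approach.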

A docking ring capture point measurement system based on binocular stereo vision comprises: a first module, configured to take the docking ring of the non-cooperative target as the recognition object and image the docking-ring capture point with a binocular camera at the end of the robotic arm, wherein, within the vision measurement distance range, both the left eye camera and the right eye camera can capture images of the inner and outer edges of the docking ring; a second module, configured to filter and denoise the left eye original image collected by the left eye camera and the right eye original image collected by the right eye camera; a third module, configured to perform color-model conversion on the filtered and denoised left eye and right eye original images obtained by the second module, convert the RGB images into HSV images, and obtain a left eye V luminance image and a right eye V luminance image from the HSV images; a fourth module, configured to perform epipolar line correction on the left eye and right eye V luminance images obtained by the third module so that their rows are aligned; a fifth module, configured to binarize the epipolar-corrected left eye and right eye V luminance images obtained by the fourth module; a sixth module, configured to perform parameter retrieval of the inner and outer edges of the docking ring on the binarized left eye and right eye V luminance images obtained by the fifth module, the parameters comprising a left-eye inner-edge arc-segment point set Φ_L, a left-eye outer-edge arc-segment point set Ψ_L, a right-eye inner-edge arc-segment point set Φ_R and a right-eye outer-edge arc-segment point set Ψ_R; a seventh module, configured to select a subset from Φ_L, Ψ_L, Φ_R and Ψ_R, perform three-dimensional reconstruction, and obtain three-dimensional point-cloud data of the docking-ring edges; an eighth module, configured to establish a target coordinate system Σ_T using the three-dimensional point-cloud data obtained by the seventh module and calculate the pose parameters of the docking-ring capture point relative to the preset camera mounting coordinate system Σ_C, the pose parameters comprising a rotation matrix ^C R_T and a translation vector ^C T_T; and a ninth module, configured to obtain the relative relation between the preset camera mounting coordinate system Σ_C and the preset robotic-arm end coordinate system Σ_E, and thereby obtain the relative pose relation between the robotic-arm end and the docking-ring capture point.

Compared with the prior art, the invention has the following beneficial effects:

(1) Aiming at the docking ring feature on a non-cooperative target spacecraft, a hand-eye measurement method for mechanical arm capture is provided, which effectively solves the problem of non-cooperative target measurement and does not depend on manually arranged visual markers.

(2) The visual measurement method provided by the invention can be applied at ultra-close range; the whole docking ring is not required to remain visible during the approach measurement, which effectively solves the problem of the few identifiable features available within a limited field of view.

(3) The vision measurement equipment adopted by the invention has the advantages of light weight and low power consumption, and the proposed relative pose estimation method is simple, effective and reliable, making it well suited to the resource-constrained space environment.

Drawings

Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:

fig. 1 is a flowchart of a docking ring capture point measurement method based on binocular stereo vision provided by an embodiment of the present invention;

FIG. 2 is a schematic diagram of a docking ring capture point imaging measurement provided by an embodiment of the present invention;

FIG. 3 is a schematic diagram of establishing a target coordinate system according to an embodiment of the present invention;

Fig. 4 is a schematic diagram of a pose calculation-related coordinate system provided by the embodiment of the invention.

Detailed Description

Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.

The embodiment mainly provides a capture point relative measurement method based on binocular hand-eye vision for an on-orbit control task of a space robot aiming at a typical docking ring structure on a non-cooperative target spacecraft.

The method adopted by this embodiment is as follows: a color binocular camera is mounted at the end of the mechanical arm and fixedly connected with it, and the common field of view of the two cameras covers the feature area of the docking ring capture point. The two cameras take the inner and outer edges of the docking ring as the main identification objects; the docking ring feature information near the capture point is acquired through an image processing algorithm, the three-dimensional space point clouds corresponding to the inner and outer edges of the docking ring are then obtained by stereo matching, and finally the point cloud data are used to calculate the position and attitude of the capture point relative to the end of the mechanical arm. The method directly takes typical natural features of the non-cooperative target spacecraft as identification objects; it requires neither a cooperative marker for auxiliary measurement to be installed on the target satellite nor prior information such as the dimensions of the docking ring features.

As shown in fig. 1, the method for measuring the capture point of the docking ring based on binocular stereo vision mainly comprises the following steps:

(1) taking the docking ring of the non-cooperative target as the identification target, and performing docking ring capture point imaging with the binocular color camera at the end of the mechanical arm, wherein, within the vision measurement distance range, both the left eye camera and the right eye camera can capture images of the inner and outer edges of the docking ring;

(2) filtering and denoising original images acquired by a left eye camera and a right eye camera;

(3) performing color model conversion on the filtered and denoised images obtained in step (2): the RGB images are converted into HSV images, and the V (brightness) channel images are obtained from the HSV representation;
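As an illustration of this step, a minimal NumPy sketch of the V-channel extraction; in the HSV model, V is simply the per-pixel maximum of R, G and B, so the full HSV conversion is not required for this step. The function name and the toy image are hypothetical, not part of the patent:

```python
import numpy as np

def v_channel(rgb: np.ndarray) -> np.ndarray:
    """Extract the HSV V (brightness) channel from an H x W x 3 RGB image.

    In the HSV color model, V = max(R, G, B) per pixel, so the brightness
    image can be obtained without computing H and S.
    """
    return rgb.max(axis=2)

# Example: a 2x2 RGB image.
img = np.array([[[10, 200, 30], [0, 0, 0]],
                [[255, 1, 1], [50, 50, 50]]], dtype=np.uint8)
print(v_channel(img))
```

The binarization of step (5) can then be applied directly to this single-channel brightness image.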

(4) carrying out epipolar line correction on the V brightness image obtained in the step (3) to align the left eye image line and the right eye image line:

the correction formula for epipolar line correction is as follows:

wherein [ u ]L vL 1]T、[uR vR 1]THomogeneous pixel coordinates of the spatial points in the pre-correction left and right eye images, [ u'L v′L 1]T、[u′R v′R 1]TRespectively the homogeneous pixel coordinates, M, of the space points in the corrected left and right eye imagesL、MRThe intrinsic parameter matrix, R, of the left and right eye cameras respectivelyL、RRRespectively as rotation matrix of coordinate system of left and right eye camera, M' as corrected internal parameter matrix of camera, RrecFor the corrected camera coordinate system rotation matrix, λL、λRNot equal to 0 is a constant.
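The correction formula amounts to applying a fixed 3×3 homography to each pixel of one camera. A minimal NumPy sketch under that reading (the function name and toy intrinsics are assumptions, not part of the patent):

```python
import numpy as np

def rectify_pixel(u, v, M, R, M_prime, R_rec):
    """Map a pre-correction pixel (u, v) of one camera to its epipolar-
    rectified coordinates via the homography M' * R_rec * R^T * M^{-1}.

    M, R are that camera's intrinsic and rotation matrices; M_prime and
    R_rec are the shared corrected intrinsics and corrected rotation.
    """
    H = M_prime @ R_rec @ R.T @ np.linalg.inv(M)
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]   # divide out the scale factor lambda

# With identity rotations and equal intrinsics the mapping is the identity.
M = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
print(rectify_pixel(100.0, 50.0, M, np.eye(3), M, np.eye(3)))
```

In practice the homography is applied to the whole image by inverse warping rather than pixel by pixel, but the per-pixel form shows the formula directly.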

(5) Selecting a fixed threshold value to carry out binarization processing on the binocular image obtained in the step (4) after the epipolar line correction;

(6) searching the binarized binocular images obtained in step (5) for the parameters of the inner and outer edges of the docking ring, where these parameters comprise the left eye inner edge arc segment point set Φ_L, the left eye outer edge arc segment point set Ψ_L, the right eye inner edge arc segment point set Φ_R, and the right eye outer edge arc segment point set Ψ_R;

(7) selecting subsets from the left eye inner edge arc segment point set Φ_L, the left eye outer edge arc segment point set Ψ_L, the right eye inner edge arc segment point set Φ_R and the right eye outer edge arc segment point set Ψ_R, carrying out three-dimensional reconstruction, and obtaining a three-dimensional point cloud of the docking ring edges;

wherein the parameter k is taken as 2, i.e. the coordinates of the selected image point subset are restricted to the central region of each image: a point P_i is kept only if its image coordinates lie within 1/k of the image extent of the image-plane center in both views,

wherein (u_Li, v_Li) and (u_Ri, v_Ri) are respectively the image coordinates of point P_i in the left and right eye cameras, and (u_Lo, v_Lo) and (u_Ro, v_Ro) respectively represent the image coordinates of the image-plane center points of the left and right eye cameras.
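The exact inequality is not reproduced in the text, so the following NumPy sketch assumes one plausible reading of the condition: a matched point is kept only if it lies within 1/k of the half-width and half-height of the image center in both views. All names and the toy data are hypothetical:

```python
import numpy as np

def central_subset(pts_l, pts_r, center_l, center_r, size, k=2):
    """Keep only correspondences whose pixels lie in the central region of
    BOTH images (a sketch; the patent only states that k = 2).

    pts_l, pts_r : (n, 2) arrays of matched (u, v) pixel coordinates
    center_l, center_r : image-plane center points (u_o, v_o)
    size : (width, height) of the images
    """
    w, h = size
    lim = np.array([w, h]) / (2.0 * k)       # allowed offset from center
    ok_l = np.all(np.abs(pts_l - center_l) <= lim, axis=1)
    ok_r = np.all(np.abs(pts_r - center_r) <= lim, axis=1)
    keep = ok_l & ok_r
    return pts_l[keep], pts_r[keep]

pts_l = np.array([[320.0, 240.0], [10.0, 10.0]])
pts_r = np.array([[310.0, 240.0], [12.0, 11.0]])
sub_l, sub_r = central_subset(pts_l, pts_r, (320, 240), (320, 240), (640, 480))
print(len(sub_l))
```

Restricting reconstruction to central pixels limits the influence of lens distortion and rectification error at the image borders, which is one natural motivation for such a condition.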

The reconstructed spatial point cloud comprises the inner edge arc segment point set Φ (n_1 spatial points) and the outer edge arc segment point set Ψ (n_2 spatial points).

(8) establishing a target coordinate system Σ_T using the obtained three-dimensional point cloud data, and calculating the pose parameters of the docking ring capture point relative to the camera mounting coordinate system Σ_C, where the pose parameters include the rotation matrix ^C R_T and the translation vector ^C T_T;

(9) obtaining by calibration the relative relationship between the camera mounting coordinate system Σ_C and the mechanical arm end coordinate system Σ_E, and calculating the relative position and attitude between the mechanical arm end and the docking ring capture point.

The invention carries out epipolar line correction on the preprocessed left and right eye images. The corrected camera coordinate system rotation matrix R_rec is determined as follows:

First, the direction e_1 is selected along the epipole, i.e. along the baseline between the two cameras:

e_1 = T / ‖T‖

where T is the translation vector between the left and right camera optical centers.

A direction e_2 orthogonal to both e_1 and the camera principal axis T_Z is then selected:

e_2 = T_Z × e_1

Then e_3 is orthogonal to both e_1 and e_2, giving:

e_3 = e_1 × e_2

The finally obtained corrected camera coordinate system rotation matrix is:

R_rec = [e_1 e_2 e_3]^T

i.e. the rows of R_rec are the unit vectors e_1, e_2 and e_3.
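The construction of e_1, e_2, e_3 and R_rec can be sketched in NumPy as follows. The function name is an assumption, and the vectors are explicitly normalized so that R_rec is a proper rotation matrix:

```python
import numpy as np

def rectification_rotation(T, principal_axis=(0.0, 0.0, 1.0)):
    """Build the corrected rotation matrix R_rec from the stereo baseline.

    e1 points along the baseline T, e2 = T_Z x e1 is orthogonal to both e1
    and the camera principal axis T_Z, and e3 = e1 x e2 completes the
    right-handed frame; R_rec stacks the three unit vectors as its rows.
    """
    T = np.asarray(T, dtype=float)
    e1 = T / np.linalg.norm(T)
    e2 = np.cross(principal_axis, e1)
    e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)
    return np.vstack([e1, e2, e3])

# A purely horizontal baseline already satisfies the rectified geometry,
# so R_rec reduces to the identity.
R = rectification_rotation([0.1, 0.0, 0.0])
print(R)
```

Rotating both camera frames by R_rec (composed with each camera's own rotation, as in the correction formula) makes the epipolar lines horizontal and row-aligned.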

In this embodiment, the process of searching the binarized left and right eye images to obtain the inner and outer edges of the docking ring near the capture point includes:

(1) performing a connected-domain search in the binary image to determine the region of the docking ring and eliminate interference present in the image;

(2) performing penetration detection along the radial direction, i.e. the direction crossing the docking ring, to determine the inner and outer edges of the docking ring capture point region;

(3) filtering the detected edges according to the smoothness of the docking ring edges;

(4) performing linear interpolation between consecutive edge pixels to obtain smooth, continuous edges.
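The connected-domain search and edge filtering are omitted here, but the radial penetration detection of step (2) can be sketched as follows: march outward from an assumed ring center along each ray and record the first and last foreground pixel as the inner- and outer-edge samples. This is a hypothetical minimal implementation demonstrated on a synthetic binary ring:

```python
import numpy as np

def ring_edges_by_penetration(binary, center, n_rays=180, r_max=None):
    """Radial penetration detection: for each of n_rays directions, march
    outward from `center` and take the first foreground pixel as an inner
    edge sample and the last one as an outer edge sample."""
    h, w = binary.shape
    if r_max is None:
        r_max = min(h, w) // 2 - 1
    inner, outer = [], []
    for theta in np.linspace(0.0, 2 * np.pi, n_rays, endpoint=False):
        hits = []
        for r in range(1, r_max):
            u = int(round(center[0] + r * np.cos(theta)))
            v = int(round(center[1] + r * np.sin(theta)))
            if 0 <= u < w and 0 <= v < h and binary[v, u]:
                hits.append((u, v))
        if hits:
            inner.append(hits[0])   # first crossing: inner edge
            outer.append(hits[-1])  # last crossing: outer edge
    return inner, outer

# Synthetic binary ring: inner radius 10, outer radius 15, centered at (32, 32).
yy, xx = np.mgrid[0:64, 0:64]
rr = np.hypot(xx - 32, yy - 32)
ring = (rr >= 10) & (rr <= 15)
inner, outer = ring_edges_by_penetration(ring, (32, 32))
print(len(inner))
```

In the real images only an arc of the ring is visible, so the scan would be restricted to the directions covered by the common field of view.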

The inner and outer edges of the docking ring near the capture point are obtained by feature detection, as shown in fig. 2. In the left eye camera image, the extracted feature image points are divided into two point sets: the inner edge arc segment point set Φ_L and the outer edge arc segment point set Ψ_L. Similarly, an inner edge arc segment point set Φ_R and an outer edge arc segment point set Ψ_R matched with the left eye image are extracted from the right eye camera image.

In this embodiment, three-dimensional reconstruction is performed on the feature points of the inner and outer edges of the docking ring near the capture point to obtain a three-dimensional point cloud of the docking ring edges. According to the selection condition of step (7), image point subsets are taken from the inner edge arc segment point sets Φ_L, Φ_R and the outer edge arc segment point sets Ψ_L, Ψ_R for pose calculation.

For the left and right eye cameras, the imaging projection relations are:

μ_L [u_Li v_Li 1]^T = M_L [^C X_i ^C Y_i ^C Z_i 1]^T
μ_R [u_Ri v_Ri 1]^T = M_R [^C X_i ^C Y_i ^C Z_i 1]^T

wherein μ_L, μ_R ≠ 0 are constants, M_L and M_R are the left and right eye camera projection matrices, and [^C X_i ^C Y_i ^C Z_i 1]^T is the homogeneous coordinate of the feature edge point P_i in the camera mounting coordinate system.

Combining the two projection equations and eliminating μ_L and μ_R yields the matrix equation:

K ^C P_i = U

wherein, writing m^L_j and m^R_j for the j-th rows of M_L and M_R, the four rows of K are the first three components of (u_Li m^L_3 − m^L_1), (v_Li m^L_3 − m^L_2), (u_Ri m^R_3 − m^R_1) and (v_Ri m^R_3 − m^R_2), and the corresponding entries of U are the negatives of their fourth components.

Solving in the least squares sense gives the three-dimensional coordinates of the feature point:

^C P_i = (K^T K)^{-1} K^T U
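The least-squares triangulation ^C P_i = (K^T K)^{-1} K^T U can be sketched in NumPy as follows, building the rows of K and U from the two projection matrices as described above. The toy camera parameters in the example are assumptions:

```python
import numpy as np

def triangulate(uv_l, uv_r, M_l, M_r):
    """Least-squares triangulation of one left/right correspondence.

    From mu * [u v 1]^T = M * [X Y Z 1]^T, eliminating mu gives two linear
    equations per camera of the form (val * m3 - m_r) . [P; 1] = 0.
    Stacking them yields K * P = U, solved as P = (K^T K)^{-1} K^T U.
    """
    K, U = [], []
    for (u, v), M in ((uv_l, M_l), (uv_r, M_r)):
        for val, r in ((u, 0), (v, 1)):
            row = val * M[2] - M[r]
            K.append(row[:3])       # first three components
            U.append(-row[3])       # negative of the fourth component
    K, U = np.array(K), np.array(U)
    return np.linalg.solve(K.T @ K, K.T @ U)

# Two toy pinhole cameras 0.2 m apart along X, both looking down +Z.
f = 500.0
M_l = np.array([[f, 0, 320, 0], [0, f, 240, 0], [0, 0, 1, 0]])
M_r = np.array([[f, 0, 320, -f * 0.2], [0, f, 240, 0], [0, 0, 1, 0]])
P_true = np.array([0.05, -0.02, 2.0])
uv_l = M_l @ np.append(P_true, 1.0)
uv_l = uv_l[:2] / uv_l[2]
uv_r = M_r @ np.append(P_true, 1.0)
uv_r = uv_r[:2] / uv_r[2]
print(triangulate(uv_l, uv_r, M_l, M_r))
```

With noise-free correspondences the system is consistent and the true point is recovered exactly; with real edge detections the normal-equations solve returns the least-squares estimate.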

Through three-dimensional reconstruction, two groups of spatial point clouds are obtained: the inner edge arc segment point set Φ (n_1 spatial points) and the outer edge arc segment point set Ψ (n_2 spatial points).

The invention uses the three-dimensional space coordinates of the inner edge arc segment point set Φ and the outer edge arc segment point set Ψ to calculate the center position of the docking ring belt:

as shown in fig. 3, for the reconstructed n1+n2And fitting the space points to obtain a plane gamma. First, a general plane equation is constructed

AX+BY+CZ+D=0

N is to be1+n2The three-dimensional coordinates of each space point are substituted into a plane equation to obtain:

AX-l=0

in the formulaX=[A B C]T,l=[-D -D ... -D]T

Thus, the normal vector of the fitting plane gamma (namely the observed surface of the butt joint ring) is obtained as

X=(ATA)-1ATl
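A minimal NumPy sketch of the plane fit, fixing D to a nonzero constant since the plane equation is only defined up to scale (note that this parametrization assumes the plane does not pass through the coordinate origin, which holds here because the docking ring is in front of the camera):

```python
import numpy as np

def fit_plane_normal(points, D=-1.0):
    """Least-squares fit of a plane A*X + B*Y + C*Z + D = 0 to an (n, 3)
    point array.  With D fixed, the fit is the linear system Q x = l with
    x = [A B C]^T and l = [-D ... -D]^T, solved as x = (Q^T Q)^{-1} Q^T l."""
    Q = np.asarray(points, dtype=float)
    l = np.full(len(Q), -D)
    x = np.linalg.solve(Q.T @ Q, Q.T @ l)
    return x  # the (unnormalized) plane normal [A B C]

# Points on the plane z = 2, i.e. 0*X + 0*Y + 0.5*Z - 1 = 0.
pts = np.array([[0.0, 0, 2], [1, 0, 2], [0, 1, 2], [1, 1, 2.0]])
n = fit_plane_normal(pts)
print(n / np.linalg.norm(n))
```

The normalized result is the unit normal later used as the Z-axis of the target coordinate system.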

The projection O_T of the point O_avg, the mean of the n_1 + n_2 reconstructed spatial points, onto the fitted plane Γ is taken as the origin position of the target coordinate system Σ_T.

The points of the outer edge arc segment point set Ψ (n_2 spatial points) are projected onto the plane Γ, and the distance d between each projected point and the point O_T is found. The point P_T is set to be the projected point nearest to O_T, i.e. the projected point minimizing d.

The direction from O_T to P_T is then used as the X-axis of the target coordinate system Σ_T:

n = (P_T − O_T) / ‖P_T − O_T‖

The unit normal vector of the plane Γ is used as the Z-axis of the target coordinate system Σ_T:

a = X / ‖X‖

The Y-axis of the target coordinate system Σ_T is then given by:

o = a × n

target coordinate system ΣTRelative to the camera mounting coordinate system ∑CIs a rotation matrix of

CRT=[n o a]

Target coordinate system ΣTRelative to the camera mounting coordinate system ∑CThe translation vector is

CTT=[XT YT ZT]T
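Assembling the target frame from O_T, P_T and the plane normal can be sketched as follows (the function name and toy values are hypothetical):

```python
import numpy as np

def target_frame(O_T, P_T, normal):
    """Assemble the target coordinate frame from the ring-belt geometry:
    n (X-axis) points from the origin O_T toward the nearest projected
    outer-edge point P_T, a (Z-axis) is the unit plane normal, and
    o (Y-axis) = a x n completes the right-handed frame.  Returns the
    rotation matrix C_R_T = [n o a] and the translation vector C_T_T."""
    n = np.asarray(P_T, float) - np.asarray(O_T, float)
    n /= np.linalg.norm(n)
    a = np.asarray(normal, float) / np.linalg.norm(normal)
    o = np.cross(a, n)
    R = np.column_stack([n, o, a])   # columns are the frame axes in camera coords
    t = np.asarray(O_T, float)
    return R, t

R, t = target_frame(O_T=[0, 0, 2], P_T=[1, 0, 2], normal=[0, 0, 1])
print(R)
```

Since P_T is a projection onto Γ, the X-axis lies in the ring plane and is automatically orthogonal to the Z-axis, so the three columns form an orthonormal basis.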

As shown in fig. 4, the relative relationship between the camera mounting coordinate system Σ_C and the mechanical arm end coordinate system Σ_E is obtained by calibration, from which the relationship between the mechanical arm end and the docking ring capture point follows. The relationship between Σ_C and Σ_E is:

^E P_i = ^E R_C · ^C P_i + ^E T_C

wherein ^E R_C and ^E T_C are respectively the rotation matrix and the translation vector of the camera mounting coordinate system relative to the mechanical arm end coordinate system, and ^C P_i and ^E P_i respectively denote the coordinates of a feature point P_i in the camera mounting coordinate system and in the mechanical arm end coordinate system. The relative pose of the mechanical arm end with respect to the docking ring capture point can therefore be obtained as:

^E R_T = ^E R_C · ^C R_T
^E T_T = ^E R_C · ^C T_T + ^E T_C
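Chaining the hand-eye calibration (camera frame in the arm-end frame) with the measured target pose (target frame in the camera frame) can be sketched as follows; the toy values are assumptions:

```python
import numpy as np

def compose_pose(E_R_C, E_T_C, C_R_T, C_T_T):
    """Compose the calibrated camera-to-arm-end transform with the measured
    target-to-camera pose to get the docking ring capture point pose in
    the arm-end frame:
        E_R_T = E_R_C . C_R_T,   E_T_T = E_R_C . C_T_T + E_T_C."""
    E_R_T = E_R_C @ C_R_T
    E_T_T = E_R_C @ C_T_T + E_T_C
    return E_R_T, E_T_T

# Camera mounted 0.1 m ahead of the arm end, axes aligned (toy values);
# target measured 2 m ahead of the camera.
R, t = compose_pose(np.eye(3), np.array([0.0, 0.0, 0.1]),
                    np.eye(3), np.array([0.0, 0.0, 2.0]))
print(t)
```

This is the standard composition of rigid transforms; the visual servo loop would feed (R, t) to the arm controller at each measurement cycle.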

This embodiment also provides a docking ring capture point measurement system based on binocular stereo vision, comprising: a first module, configured to take the docking ring of the non-cooperative target as the identification target and perform docking ring capture point imaging with a binocular camera at the end of the mechanical arm, wherein, within the vision measurement distance range, both the left eye camera and the right eye camera can capture images of the inner and outer edges of the docking ring; a second module, configured to perform filtering and denoising on the left eye original image acquired by the left eye camera and the right eye original image acquired by the right eye camera; a third module, configured to perform color model conversion on the filtered and denoised left eye and right eye original images obtained by the second module, converting the RGB images into HSV images and thereby obtaining a left eye V brightness image and a right eye V brightness image; a fourth module, configured to perform epipolar line correction on the left eye and right eye V brightness images obtained by the third module so as to align their rows; a fifth module, configured to perform binarization on the epipolar-corrected left eye and right eye V brightness images obtained by the fourth module; a sixth module, configured to search the binarized left eye and right eye V brightness images obtained by the fifth module for the parameters of the inner and outer edges of the docking ring, where these parameters comprise the left eye inner edge arc segment point set Φ_L, the left eye outer edge arc segment point set Ψ_L, the right eye inner edge arc segment point set Φ_R, and the right eye outer edge arc segment point set Ψ_R; a seventh module, configured to select subsets from the point sets Φ_L, Ψ_L, Φ_R and Ψ_R, carry out three-dimensional reconstruction, and obtain three-dimensional point cloud data of the docking ring edges; an eighth module, configured to establish a target coordinate system Σ_T using the three-dimensional point cloud data obtained by the seventh module, and to calculate the pose parameters of the docking ring capture point relative to a preset camera mounting coordinate system Σ_C, wherein the pose parameters comprise a rotation matrix ^C R_T and a translation vector ^C T_T; and a ninth module, configured to obtain the relative relationship between the preset camera mounting coordinate system Σ_C and the preset mechanical arm end coordinate system Σ_E, and to calculate the relative pose between the mechanical arm end and the docking ring capture point.

This embodiment provides a hand-eye relative measurement method for a mechanical arm based on binocular vision, aimed at the typical docking ring structure on a non-cooperative target spacecraft. The two cameras take the inner and outer edges of the docking ring as the main identification objects; the docking ring feature information near the capture point is acquired through an image processing algorithm, the three-dimensional space point clouds corresponding to the inner and outer edges of the docking ring are then obtained by stereo matching, and finally the point cloud data are used to calculate the position and attitude of the capture point relative to the end of the mechanical arm. The method directly takes typical natural features of the non-cooperative target spacecraft as identification objects; it requires neither a cooperative marker for auxiliary measurement to be installed on the target satellite nor prior information such as the dimensions of the docking ring features.

Aiming at the docking ring feature on a non-cooperative target spacecraft, a hand-eye measurement method for mechanical arm capture is provided, which effectively solves the problem of non-cooperative target measurement and does not depend on manually arranged visual markers. The visual measurement method provided by the invention can be applied at ultra-close range; the whole docking ring is not required to remain visible during the approach measurement, which effectively solves the problem of the few identifiable features available within a limited field of view. The vision measurement equipment adopted by the invention has the advantages of light weight and low power consumption, and the proposed relative pose estimation method is simple, effective and reliable, making it well suited to the resource-constrained space environment.

Although the present invention has been described with reference to the preferred embodiments, it is not intended to limit the present invention, and those skilled in the art can make variations and modifications to the present invention using the methods and technical contents disclosed above without departing from the spirit and scope of the present invention.
