Three-dimensional target tracking algorithm research based on original CAD model

Document No.: 1954757 · Published: 2021-12-10

Reading note: This invention, "Three-dimensional target tracking algorithm research based on original CAD model", was designed and created by Dong Fei and Ma Yuanyuan on 2021-09-13. Abstract: The invention discloses a three-dimensional target tracking algorithm research based on an original CAD model, comprising the steps of building the VISP platform, building a CAO-format model, three-dimensional target tracking, projected-edge distance calculation, alignment of the three-dimensional model with the target, the KLT target tracking algorithm, and point-line cooperative target tracking. The invention relates to the technical field of CAD models. To address the difficulty that traditional CAD-model-based three-dimensional target tracking algorithms have with real-time tracking of complex models, a tracking method based on a substitute model is proposed. The core of the method is to simplify the original CAD model while preserving key structural information as far as possible; obtuse-angle edges are also retained when they form object boundaries. Experiments show that tracking based on the simplified CAD model achieves real-time, accurate, and efficient tracking of complex objects, thereby providing accurate localization information for real-time registration.

1. A three-dimensional target tracking algorithm research based on an original CAD model is characterized in that the research method comprises the following steps:

s1, establishing VISP: establishing the VISP platform to realize real-time image processing and computer vision tracking, and performing markerless target tracking with a calibrated camera by using CAD-model prior knowledge that provides the 3D position information of the object;

s2, CAO format model establishment: selecting the CAO format and performing CAO-format modeling of regular and irregular shapes respectively; when modeling an irregular shape, an indexed triangular mesh is used as the storage format of the model to realize an effective mesh representation, wherein each triangle in the mesh shares an edge with its adjacent triangles, and the information stored by the triangular mesh comprises vertex, edge, and face information;
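The indexed-mesh storage described in step s2 can be sketched as follows. This is a minimal illustration of the idea (vertices stored once, faces indexing into the vertex list, shared edges recovered from the face indices); the names are illustrative and do not follow any CAO-format specification.

```python
# Minimal sketch of an indexed triangular mesh: vertices, faces, and
# the unique-edge index recovered from the face list.
from itertools import combinations

def build_edge_index(faces):
    """Collect the unique edges of an indexed triangle mesh.

    Each edge is a sorted vertex-index pair; an edge shared by two
    adjacent triangles is stored only once.
    """
    edges = set()
    for tri in faces:
        for a, b in combinations(tri, 2):
            edges.add((min(a, b), max(a, b)))
    return sorted(edges)

# Two triangles sharing the edge (1, 2): a unit square split on its diagonal.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
faces = [(0, 1, 2), (1, 3, 2)]
edges = build_edge_index(faces)
```

The two triangles contribute six edge slots but only five unique edges, because the diagonal (1, 2) is shared — this sharing is exactly what makes the indexed representation compact.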

s3, three-dimensional target tracking: detecting edge features; computing the projected lines of the visible model edges in the image space of the current frame according to the target pose of the previous frame; solving the line-equation parameters from the projections of the model nodes; sampling along the direction of the projected line with a set sampling step; solving the pixel coordinates of the sampling points; and selecting an edge-detection unit according to the pixel coordinates and the line-angle parameter of each sampling point;
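The sampling of step s3 can be illustrated with a small helper that walks along a projected line at a fixed step and reports the line angle used to pick the edge-detection unit. This is an assumed sketch, not part of the VISP API; the function name and parameters are illustrative.

```python
import math

def sample_projected_line(p0, p1, step):
    """Sample pixel coordinates along the projected model edge p0 -> p1
    at a fixed step length, returning the sample points and the line
    angle theta (used to select the edge-detection unit)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)
    n = int(length // step)
    pts = [(round(p0[0] + i * step * math.cos(theta)),
            round(p0[1] + i * step * math.sin(theta))) for i in range(n + 1)]
    return pts, theta

# A horizontal projected edge from (0, 0) to (10, 0), sampled every 2 px.
pts, theta = sample_projected_line((0, 0), (10, 0), 2.0)
```

For this horizontal edge the helper yields six sample points and theta = 0; a real tracker would then search for the image edge at each sample, as described in step s4 and claim 4.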

s4, calculating the distance of the projected edge: the image-space feature change is transformed into Cartesian coordinates, the transformation relation being expressed by the interaction matrix of point-to-line distances; the interaction matrix relates the translational and rotational velocity of the target in three-dimensional space to the rate of change of the image-space coordinates. Given a point P(x_c, y_c, z_c) in the camera coordinate system with corresponding image-plane projection p(x, y), the following can be derived from the projection relation:

The interaction matrix is estimated from the camera intrinsic parameters and depth information. The image-space line parameter equation is x cos θ + y sin θ − ρ = 0, where θ is the angle between the line and the u axis of the image coordinate system and ρ is the distance from the origin to the line. With the plane containing the space line given by A_1 x_c + B_1 y_c + C_1 z_c + D_1 = 0, the projection equation of the line on the focal-length-normalized image plane can be expressed as:

Ax + By + C = 1/z_c (3.2)

where A = −A_1/D_1, B = −B_1/D_1, C = −C_1/D_1. Differentiating the polar line equation gives

ρ′ + (x sin θ − y cos θ)θ′ = x′ cos θ + y′ sin θ (3.3)

From equations (3.1), (3.2), (3.3) and reference [6], it can be obtained that

where λ_ρ = −Aρ cos θ − Bρ sin θ − C and λ_θ = −A sin θ + B cos θ; the interaction matrix of the line is then:

The distance from a point to a line can be determined from the difference between the distances of two parallel lines of equal slope in image coordinates: the distance from the point (x_d, y_d) to the line (ρ, θ) can be expressed as

d = ρ − ρ_d = ρ − (x_d cos θ + y_d sin θ) (3.6)

Differentiating the above formula gives

d′ = ρ′ + (x_d sin θ − y_d cos θ)θ′ = ρ′ + αθ′ (3.7)

where α = x_d sin θ − y_d cos θ; from the line interaction matrix, the point-to-line interaction matrix

L_d = L_ρ + αL_θ is obtained, expressed as

where λ_d = λ_ρ + αλ_θ. The distance between an edge point and the projected line of the corresponding model represents the image feature within each tracking unit; when the target contains multiple tracking units, all image features of the target are fused to construct the interaction matrix of the system, and all visible model projection edges converge to the image edge points. The interaction matrix of the system is expressed as

Ls=[L1 L2…Lm] (3.9)

The interaction matrix of the tracking cell features is represented as

where λ_di = λ_ρi + α_i λ_θi (i = 1, ..., m); the number of rows of the matrix equals the number of feature units sampled on the model projection edges, and the number of columns equals the number of degrees of freedom of the model in Cartesian space. The image-space feature deviation is transformed into the model velocity change using the pseudo-inverse:

where L_s^+ is the pseudo-inverse of the interaction matrix; v_x, v_y, v_z, w_x, w_y, w_z represent the velocities of the model in the translational and rotational directions, and d_1, d_2, ..., d_m are the distances from the target edge points to the corresponding visible model projection edges;
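The pseudo-inverse mapping of step s4 — stacking the per-feature interaction rows L_1 ... L_m into L_s as in (3.9) and recovering the 6-DOF model velocity from the point-to-edge distances — can be sketched numerically. The matrix entries below are random placeholders standing in for real interaction rows; only the algebra is being illustrated.

```python
import numpy as np

# Sketch of the velocity solve: L_s is the m x 6 stacked interaction
# matrix, d the vector of point-to-edge distances, and the model
# velocity (v_x, v_y, v_z, w_x, w_y, w_z) is the least-squares solution
# via the Moore-Penrose pseudo-inverse.
m = 8                                 # number of sampled edge features (illustrative)
rng = np.random.default_rng(0)
L_s = rng.normal(size=(m, 6))         # placeholder interaction rows
d = rng.normal(size=m)                # placeholder distance deviations

v = np.linalg.pinv(L_s) @ d           # 6-vector of translational + rotational velocity
```

Because m > 6 the system is overdetermined, and the pseudo-inverse returns the least-squares velocity, i.e. the residual L_s v − d is orthogonal to the column space of L_s.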

s5, aligning the three-dimensional model with the target: the distance between the model and the target is converged to zero; weighted least squares is used herein, and an objective function representing the deviation between the target edges and the projected model features is created in image space, as follows:
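The displayed form of the robust objective is not reproduced here, but weighted least squares of this kind typically down-weights outlier edge residuals through a robust estimator (the ρ of claim 5). As a hedged sketch only — the text does not name the weight function, so the Tukey biweight and its standard constants are assumptions — the weighting step could look like:

```python
import numpy as np

def tukey_weights(residuals, c=4.685):
    """Tukey-biweight weights for edge residuals (assumed choice; the
    source only says 'weighted least squares' with a robust estimator).
    Residuals are scaled by a MAD-based estimate before weighting."""
    r = np.asarray(residuals, dtype=float)
    scale = np.median(np.abs(r)) / 0.6745 or 1.0   # robust scale (MAD)
    u = r / (c * scale)
    # Inliers get weight (1 - u^2)^2, gross outliers get weight 0.
    return np.where(np.abs(u) < 1.0, (1.0 - u ** 2) ** 2, 0.0)
```

In an iteratively reweighted least-squares loop these weights multiply the rows of the interaction matrix and the distance vector, so that a single gross edge mismatch cannot drag the pose estimate.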

s6, KLT target tracking algorithm: extracting the feature points on the image, then matching the feature points in the next frame to obtain the position of the target in the next frame, wherein the core of the process is to convert the target tracking into the tracking of the feature points:

A. Harris feature point extraction algorithm:

Shifting the image window by [u, v] produces the gray-scale change E(u, v); then

where w(x, y) is the window function, I(x, y) is the image gray level, and I(x + u, y + v) is the gray level after translation; formula (3.16) is simplified by Taylor expansion:

where M is obtained by statistical analysis of the gradients I_x and I_y in the x and y directions for each pixel in the window; the corner response function R of each pixel is then calculated from M:

R = det M − k(trace M)² (3.18)

where det M = λ_1 λ_2 and trace M = λ_1 + λ_2. A point in the matrix R is regarded as a corner when it simultaneously satisfies R(i, j) = det(M) − K·trace²(M) > T and R(i, j) is a local maximum within its region, where K takes values in the range 0.04-0.06 and T is a threshold; R values below the threshold are set to 0 during detection;
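The response of equation (3.18) for a single window can be computed directly from the gradient images; a corner window (strong gradients in two directions, two large eigenvalues) yields R > 0, while an edge window (one dominant direction) yields R < 0. The sketch below accumulates M over one whole window for simplicity; a real detector sums over a sliding window w(x, y).

```python
import numpy as np

def harris_response(Ix, Iy, k=0.04):
    """Corner response R = det(M) - k * trace(M)^2 for one window,
    with M = [[sum Ix^2, sum IxIy], [sum IxIy, sum Iy^2]]."""
    A = np.sum(Ix * Ix)
    B = np.sum(Iy * Iy)
    C = np.sum(Ix * Iy)
    M = np.array([[A, C], [C, B]])
    return np.linalg.det(M) - k * np.trace(M) ** 2
```

With k in the 0.04-0.06 range stated above, thresholding R and keeping local maxima reproduces the corner criterion of the text.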

B. KLT matching algorithm theory:

I(x,y,t+τ)=I(x-Δx,y-Δy,t) (3.19)

Given two images I and J, define d as the displacement that minimizes the SSD (sum of squared intensity differences, here denoted ε):

where W is a given feature window, w(X) is a weighting function, and d = (Δx, Δy) is the motion offset of the point X(x, y). Since d is much smaller than X, J(X + d) can be Taylor-expanded, and combining with equation (3.20) yields d:

Zd=e (3.21)

where g is the first-order coefficient of the Taylor expansion. Newton iteration is performed on each point via equation (3.21); when the iteration result reaches the required precision, tracking of the image point is achieved, i.e., the best match is found. The final solution is:

where d is the translation of the feature-window center and d_k is the result of the k-th Newton iteration; the initial iterate is set to 0. When the target moves slowly, the corresponding inter-frame displacement is small, and the KLT matching algorithm can narrow the matching search range while achieving high matching precision. When the target moves too fast, the inter-frame displacement is large and the reliability of the matched feature-point pairs drops: a small matching window may then miss matches, while a large one degrades matching precision. The solution is hierarchical iteration of the KLT matching algorithm, realized by Gaussian pyramid decomposition. The procedure is to perform an L-layer pyramid decomposition of decreasing resolution on images I and J, obtaining I_l and J_l, l = 0, 1, 2, ..., L, and then:
(1) set l = L and initialize the displacement estimate to 0, n = 0;
(2) let n = n + 1, obtain the layer-l estimate from equation (3.22) on I_l and J_l, then judge whether the iteration count is exhausted or the estimate has reached the required precision; if either condition is met, execute the next step, otherwise return to step (2);
(3) if l = 0, the iteration ends and the final result is obtained; otherwise set l = l − 1 and n = 0, propagate the current estimate to the next layer, and return to step (2).
A small search window is selected during the hierarchical iteration, and iteration proceeds from the top layer down to the bottom layer, which improves point-pair matching precision when the target moves rapidly;
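The core Newton step Z d = e of equation (3.21) can be demonstrated on a synthetic frame pair. The sketch below uses a quadratic-intensity image, for which the spatial gradients are computed almost exactly, so a single step recovers the true displacement to within a small bias; a practical tracker iterates this step and wraps it in the pyramid loop described above. All names here are illustrative.

```python
import numpy as np

# One Newton step of the KLT system Z d = e on a synthetic pair:
# J is I shifted by d_true, and we solve for d from image gradients.
h, w = 16, 16
ys, xs = np.mgrid[0:h, 0:w].astype(float)
d_true = np.array([0.2, -0.1])                       # (dx, dy) ground truth
I = xs ** 2 + ys ** 2                                # frame at time t
J = (xs - d_true[0]) ** 2 + (ys - d_true[1]) ** 2    # shifted frame at t + tau

gy, gx = np.gradient(I, edge_order=2)                # spatial gradients
g = np.stack([gx.ravel(), gy.ravel()], axis=1)       # per-pixel gradient vectors
Z = g.T @ g                                          # 2x2 normal matrix
e = g.T @ (I - J).ravel()                            # right-hand side
d_est = np.linalg.solve(Z, e)                        # Newton step
```

The estimate lands within about 0.002 px of d_true here; the residual bias comes from linearizing the quadratic intensity, which is exactly the error the iterative refinement of d_k removes.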

s7, point-line cooperative target tracking: camera displacement estimation based on texture information is integrated into the edge-feature-based camera pose calculation process.

2. The three-dimensional object tracking algorithm study based on original CAD models according to claim 1, characterized in that: in step S1, the available resources provided by the VISP platform are three model-based unmarked trackers, which are respectively a model tracker based on edge information, a model tracker based on KLT keypoint information, and a target tracker based on point-line coordination.

3. The three-dimensional object tracking algorithm study based on original CAD models according to claim 1, characterized in that: in step S3, the edge detection operator is determined according to the angle of the straight line.

4. The three-dimensional object tracking algorithm study based on original CAD models according to claim 1, characterized in that: in step S3, when applying the edge detection operator, each sampling unit searches along the normal direction of the line within the set pixel search range, and the maximum of the pixel convolution in the normal direction of each detection unit is computed, thereby solving the corresponding pixel coordinate.
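The normal-direction search of claim 4 can be sketched as follows. A plain centered intensity difference stands in for the oriented convolution mask the claim describes, and the function name and frame are illustrative assumptions, not the actual implementation.

```python
import math

def search_edge_along_normal(image, pt, theta, search_range=5):
    """Search along the normal of the projected line at `pt` (within
    +/- search_range pixels) and return the pixel with the strongest
    intensity step across the line."""
    nx, ny = -math.sin(theta), math.cos(theta)       # unit normal of the line
    best, best_score = pt, -1.0
    for k in range(-search_range, search_range + 1):
        x = int(round(pt[0] + k * nx))
        y = int(round(pt[1] + k * ny))
        if not (0 < y < len(image) - 1 and 0 <= x < len(image[0])):
            continue
        score = abs(image[y + 1][x] - image[y - 1][x])   # step across the line
        if score > best_score:
            best, best_score = (x, y), score
    return best

# A synthetic 12x10 frame with a horizontal step edge between rows 5 and 6.
frame = [[0] * 10 for _ in range(6)] + [[10] * 10 for _ in range(6)]
edge_pt = search_edge_along_normal(frame, (4, 3), 0.0)
```

Starting from the sample point (4, 3) on a horizontal projected line (theta = 0), the search walks vertically and locks onto the step edge at row 5, which is the behavior claim 4 describes for each sampling unit.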

5. The three-dimensional object tracking algorithm study based on original CAD models according to claim 1, characterized in that: in step S5, ρ represents the robust estimation parameter, M_i is the projection matrix of the current model, N is the number of edge tracking features, and K is the camera intrinsic parameter matrix.

6. The three-dimensional object tracking algorithm study based on original CAD models according to claim 1, characterized in that: in step S6, the algorithm uses Harris operator to extract feature points and combines with the KLT matching algorithm of optimal estimation to realize tracking, the tracking process includes a reinitialization process of the KLT tracker, and the reinitialization criterion is to compare the ratio of the number of currently tracked feature points to the number of initially detected features.

7. The three-dimensional object tracking algorithm study based on original CAD models according to claim 1, characterized in that: in step S6, the Harris corner detection algorithm is based on the idea of setting a local detection window on the image; when moving the window in any direction produces a large gray-scale change, the central pixel of the region is the detected corner.

8. The three-dimensional object tracking algorithm study based on original CAD models according to claim 1, characterized in that: in step S6B, the KLT tracking algorithm has a core idea that the sum of squares of gray differences of a window to be tracked between video image frames is used as a measurement, specifically, a window W containing feature texture information is given, and then a translation model is used to describe changes of pixel points in the feature window, so that an image frame at time t is I (x, y, t), and an image frame at time t + τ is I (x, y, t + τ).

9. The three-dimensional object tracking algorithm study based on original CAD models according to claim 1, characterized in that: in step S6, determining the optimal position and shape of the target according to the matching obtained keypoint position information and by combining the information of the CAD model, thereby achieving tracking of the three-dimensional target.

10. The three-dimensional object tracking algorithm study based on original CAD models according to claim 1, characterized in that: in step S7, the method of sampling points is implemented by minimizing the non-linear criterion.

Technical Field

The invention relates to the technical field of CAD models, in particular to a three-dimensional target tracking algorithm research based on an original CAD model.

Background

Augmented reality technology is increasingly popular in modern life and is widely applied in fields such as industry, the military, and medicine; in the past two years it has even appeared frequently in Spring Festival Gala broadcasts. It therefore plays an ever more important role in modern life and is continuously changing daily life.

However, in practical application, current CAD-model-based three-dimensional target tracking algorithms cannot accurately track three-dimensional targets in a scene in real time, and tracking a target in a feature-poor environment requires artificially designed fiducial markers, which is cumbersome.

Disclosure of Invention

Aiming at the deficiencies of the prior art, the invention provides a three-dimensional target tracking algorithm research based on an original CAD model, solving the problems that existing CAD-model-based three-dimensional tracking algorithms cannot accurately track three-dimensional targets in a scene in real time, and that tracking targets in feature-poor environments requires cumbersome artificially designed fiducial markers.

In order to achieve the purpose, the invention is realized by the following technical scheme: a three-dimensional target tracking algorithm research based on an original CAD model comprises the following steps:

s1, establishing VISP: establishing the VISP platform to realize real-time image processing and computer vision tracking, and performing markerless target tracking with a calibrated camera by using CAD-model prior knowledge that provides the 3D position information of the object;

s2, CAO format model establishment: selecting the CAO format and performing CAO-format modeling of regular and irregular shapes respectively; when modeling an irregular shape, an indexed triangular mesh is used as the storage format of the model to realize an effective mesh representation, wherein each triangle in the mesh shares an edge with its adjacent triangles, and the information stored by the triangular mesh comprises vertex, edge, and face information;

s3, three-dimensional target tracking: detecting edge features; computing the projected lines of the visible model edges in the image space of the current frame according to the target pose of the previous frame; solving the line-equation parameters from the projections of the model nodes; sampling along the direction of the projected line with a set sampling step; solving the pixel coordinates of the sampling points; and selecting an edge-detection unit according to the pixel coordinates and the line-angle parameter of each sampling point;

s4, calculating the distance of the projected edge: the image-space feature change is transformed into Cartesian coordinates, the transformation relation being expressed by the interaction matrix of point-to-line distances; the interaction matrix relates the translational and rotational velocity of the target in three-dimensional space to the rate of change of the image-space coordinates. Given a point P(x_c, y_c, z_c) in the camera coordinate system with corresponding image-plane projection p(x, y), the following can be derived from the projection relation:

The interaction matrix is estimated from the camera intrinsic parameters and depth information. The image-space line parameter equation is x cos θ + y sin θ − ρ = 0, where θ is the angle between the line and the u axis of the image coordinate system and ρ is the distance from the origin to the line. With the plane containing the space line given by A_1 x_c + B_1 y_c + C_1 z_c + D_1 = 0, the projection equation of the line on the focal-length-normalized image plane can be expressed as:

Ax + By + C = 1/z_c (3.2)

where A = −A_1/D_1, B = −B_1/D_1, C = −C_1/D_1. Differentiating the polar line equation gives

ρ′+(xsinθ-ycosθ)θ′=x′cosθ+y′sinθ (3.3)

From equations (3.1), (3.2), (3.3) and reference [6], it can be obtained that

where λ_ρ = −Aρ cos θ − Bρ sin θ − C and λ_θ = −A sin θ + B cos θ; the interaction matrix of the line is then:

The distance from a point to a line can be determined from the difference between the distances of two parallel lines of equal slope in image coordinates: the distance from the point (x_d, y_d) to the line (ρ, θ) can be expressed as

d = ρ − ρ_d = ρ − (x_d cos θ + y_d sin θ) (3.6)

Differentiating the above formula gives

d′ = ρ′ + (x_d sin θ − y_d cos θ)θ′ = ρ′ + αθ′ (3.7)

where α = x_d sin θ − y_d cos θ; from the line interaction matrix, the point-to-line interaction matrix

L_d = L_ρ + αL_θ is obtained, expressed as

where λ_d = λ_ρ + αλ_θ. The distance between an edge point and the projected line of the corresponding model represents the image feature within each tracking unit; when the target contains multiple tracking units, all image features of the target are fused to construct the interaction matrix of the system, and all visible model projection edges converge to the image edge points. The interaction matrix of the system is expressed as

Ls=[L1 L2...Lm] (3.9)

The interaction matrix of the tracking cell features is represented as

where λ_di = λ_ρi + α_i λ_θi (i = 1, ..., m); the number of rows of the matrix equals the number of feature units sampled on the model projection edges, and the number of columns equals the number of degrees of freedom of the model in Cartesian space. The image-space feature deviation is transformed into the model velocity change using the pseudo-inverse:

where L_s^+ is the pseudo-inverse of the interaction matrix; v_x, v_y, v_z, w_x, w_y, w_z represent the velocities of the model in the translational and rotational directions, and d_1, d_2, ..., d_m are the distances from the target edge points to the corresponding visible model projection edges;

s5, aligning the three-dimensional model with the target: the distance between the model and the target is converged to zero; weighted least squares is used herein, and an objective function representing the deviation between the target edges and the projected model features is created in image space, as follows:

s6, KLT target tracking algorithm: extracting the feature points on the image, then matching the feature points in the next frame to obtain the position of the target in the next frame, wherein the core of the process is to convert the target tracking into the tracking of the feature points:

A. Harris feature point extraction algorithm:

Shifting the image window by [u, v] produces the gray-scale change E(u, v); then

where w(x, y) is the window function, I(x, y) is the image gray level, and I(x + u, y + v) is the gray level after translation; formula (3.16) is simplified by Taylor expansion:

where M is obtained by statistical analysis of the gradients I_x and I_y in the x and y directions for each pixel in the window; the corner response function R of each pixel is then calculated from M:

R = det M − k(trace M)² (3.18)

where det M = λ_1 λ_2 and trace M = λ_1 + λ_2. A point in the matrix R is regarded as a corner when it simultaneously satisfies R(i, j) = det(M) − K·trace²(M) > T and R(i, j) is a local maximum within its region, where K takes values in the range 0.04-0.06 and T is a threshold; R values below the threshold are set to 0 during detection;

B. KLT matching algorithm theory:

I(x,y,t+τ)=I(x-Δx,y-Δy,t) (3.19)

Given two images I and J, define d as the displacement that minimizes the SSD (sum of squared intensity differences, here denoted ε):

where W is a given feature window, w(X) is a weighting function, and d = (Δx, Δy) is the motion offset of the point X(x, y). Since d is much smaller than X, J(X + d) can be Taylor-expanded, and combining with equation (3.20) yields d:

Zd=e (3.21)

where g is the first-order coefficient of the Taylor expansion. Newton iteration is performed on each point via equation (3.21); when the iteration result reaches the required precision, tracking of the image point is achieved, i.e., the best match is found. The final solution is:

where d is the translation of the feature-window center and d_k is the result of the k-th Newton iteration; the initial iterate is set to 0. When the target moves slowly, the corresponding inter-frame displacement is small, and the KLT matching algorithm can narrow the matching search range while achieving high matching precision. When the target moves too fast, the inter-frame displacement is large and the reliability of the matched feature-point pairs drops: a small matching window may then miss matches, while a large one degrades matching precision. The solution is hierarchical iteration of the KLT matching algorithm, realized by Gaussian pyramid decomposition. The procedure is to perform an L-layer pyramid decomposition of decreasing resolution on images I and J, obtaining I_l and J_l, l = 0, 1, 2, ..., L, and then:
(1) set l = L and initialize the displacement estimate to 0, n = 0;
(2) let n = n + 1, obtain the layer-l estimate from equation (3.22) on I_l and J_l, then judge whether the iteration count is exhausted or the estimate has reached the required precision; if either condition is met, execute the next step, otherwise return to step (2);
(3) if l = 0, the iteration ends and the final result is obtained; otherwise set l = l − 1 and n = 0, propagate the current estimate to the next layer, and return to step (2).
A small search window is selected during the hierarchical iteration, and iteration proceeds from the top layer down to the bottom layer, which improves point-pair matching precision when the target moves rapidly;

s7, point-line cooperative target tracking: camera displacement estimation based on texture information is integrated into the edge-feature-based camera pose calculation process.

Preferably, in step S1, the available resources provided by the VISP platform are three model-based unmarked trackers, which are respectively a model tracker based on edge information, a model tracker based on KLT keypoint information, and a target tracker based on point-line coordination.

Preferably, in step S3, the edge detection operator is determined according to an angle of the straight line.

Preferably, in step S3, when applying the edge detection operator, each sampling unit searches along the normal direction of the line within the set pixel search range, and the maximum of the pixel convolution in the normal direction of each detection unit is computed to solve the corresponding pixel coordinate; the next frame solves the edge pixel coordinates by the same method, and the process cycles continuously until tracking ends.

Preferably, in step S5, ρ represents the robust estimation parameter, M_i is the projection matrix of the current model, N is the number of edge tracking features, and K is the camera intrinsic parameter matrix.

Preferably, in step S6, the algorithm uses Harris operator to extract feature points and combines with the KLT matching algorithm of optimal estimation to realize tracking, the tracking process includes a reinitialization process of the KLT tracker, and the reinitialization criterion is to compare the number of currently tracked feature points to the number of initially detected features.

Preferably, in step S6 A, the idea of the Harris corner detection algorithm is to set a local detection window on the image; when moving the window in any direction produces a large gray-scale change, the central pixel of the region is the detected corner.

Preferably, in step S6B, the KLT tracking algorithm has a core idea that a sum of squared gray differences of a window to be tracked between video image frames is used as a metric, specifically, a window W containing feature texture information is given, and then a translation model is used to describe changes of pixel points in the feature window, so that an image frame at time t is I (x, y, t), and an image frame at time t + τ is I (x, y, t + τ).

Preferably, in step S6 B, the optimal position and shape of the target are determined from the matched keypoint position information combined with the information of the CAD model, thereby realizing tracking of the three-dimensional target.

Preferably, in step S7, the method for sampling points is implemented by minimizing a non-linear criterion.

Advantageous effects

The invention provides a three-dimensional target tracking algorithm research based on an original CAD model, which has the following beneficial effects compared with the prior art:

according to the research of the three-dimensional target tracking algorithm based on the original CAD model, the traditional three-dimensional target tracking algorithm based on the CAD model is difficult to process the real-time tracking problem of a complex model, and a tracking method based on a substitute model is provided.

Drawings

FIG. 1 is a flow chart of the algorithm of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Referring to fig. 1, the present invention provides a technical solution: a three-dimensional target tracking algorithm research based on an original CAD model comprises the following steps:

s1, establishing VISP: establishing a VISP platform to realize processing of real-time images and tracking of a computer vision algorithm, and performing unmarked target tracking by using CAD model prior knowledge capable of providing 3D position information of an object while using a calibration camera;

s2, CAO format model establishment: selecting CAO format, respectively carrying out CAO format modeling on the regular graph and the irregular graph, and when modeling the irregular graph, using an index triangular mesh as a storage format for the model to realize effective mesh representation of the model, wherein each triangle in the triangular mesh shares an edge with other adjacent triangles, and the information to be stored by the triangular mesh comprises vertex, edge and face information;

s3, three-dimensional target tracking: detecting edge characteristics, calculating a projection straight line of a model edge in a visible edge in an image space of a current frame according to a target pose of a previous frame, solving equation parameters of a straight line coordinate according to projection of a model node, sampling along the direction of the projection straight line, setting a sampling step length, solving pixel coordinates of sampling points, and selecting an edge detection unit according to the pixel coordinates and straight line angle parameters of each sampling point;

s4, calculating the distance of the projection edge: transforming the image space characteristic change into Cartesian coordinates, wherein the transformation relation is expressed by using an interaction matrix of point-to-straight line distances, the interaction matrix is a relation matrix capable of acquiring the speed of a target in the three-dimensional space translation and rotation direction and the image space coordinate change speed, and a point P (x) in a given camera coordinate system is givenc,yc,zc) And the corresponding plane projection point is p (x, y), and the derivation can be obtained according to the projection relation:

the interactive matrix is estimated by using camera internal parameters and depth information, an image space linear parameter equation x cos theta + y sin theta-rho is 0, in the equation, theta represents an included angle between a straight line and an image coordinate system u axis, rho is a distance from a point to the straight line, and a plane equation A where the space straight line is located is used as the interactive matrix1xc+B1yc+C1zc+D1Thus, the linear projection equation for its corresponding focal length normalized image plane can be expressed as:

Ax + By + C = 1/z_c (3.2)

where A = −A_1/D_1, B = −B_1/D_1, C = −C_1/D_1. Differentiating the polar equation of the line gives

ρ′ + (x sin θ − y cos θ) θ′ = x′ cos θ + y′ sin θ (3.3)

From formulae (3.1), (3.2), (3.3) and reference [6], the interaction matrices of the line parameters ρ and θ can be obtained:

L_ρ = [λ_ρ cos θ, λ_ρ sin θ, −λ_ρ ρ, (1 + ρ²) sin θ, −(1 + ρ²) cos θ, 0] (3.4)
L_θ = [λ_θ cos θ, λ_θ sin θ, −λ_θ ρ, −ρ cos θ, −ρ sin θ, −1] (3.5)

where λ_ρ = −Aρ cos θ − Bρ sin θ − C and λ_θ = −A sin θ + B cos θ.

The distance from a point to a line can be determined as the difference between the ρ parameters of two lines of equal slope; the distance from a point (x_d, y_d) to the line (ρ, θ) can be expressed as

d = ρ − ρ_d = ρ − (x_d cos θ + y_d sin θ) (3.6)

Differentiating the above formula gives

d′ = ρ′ + (x_d sin θ − y_d cos θ) θ′ = ρ′ + α θ′ (3.7)

where α = x_d sin θ − y_d cos θ. From the line interaction matrices, the interaction matrix from a point to a line is obtained as

L_d = L_ρ + α L_θ = [λ_d cos θ, λ_d sin θ, −λ_d ρ, (1 + ρ²) sin θ − αρ cos θ, −(1 + ρ²) cos θ − αρ sin θ, −α] (3.8)

where λ_d = λ_ρ + α λ_θ. The distance between each image edge point and the projection line of the corresponding model edge represents the image feature within each tracking unit; when the target contains multiple tracking units, all image features of the target are fused to construct the interaction matrix of the system so that all visible projected model edges converge to the image edge points, and the interaction matrix of the system is represented as

L_s = [L_1ᵀ, L_2ᵀ, …, L_mᵀ]ᵀ (3.9)

The interaction matrix of the i-th tracking unit feature takes the same form as (3.8), with

λ_di = λ_ρi + α_i λ_θi (i = 1, …, m) (3.10)

The number of rows of the system matrix is the number of feature units sampled on the projected model edges, and the number of columns is the number of degrees of freedom of the model in Cartesian space; the image space feature deviation is transformed into the model velocity using the pseudo-inverse:

[v_x, v_y, v_z, ω_x, ω_y, ω_z]ᵀ = L_s⁺ [d_1, d_2, …, d_m]ᵀ (3.11)

where L_s⁺ is the pseudo-inverse of the interaction matrix, v_x, v_y, v_z, ω_x, ω_y, ω_z represent the velocities of the model in the translational and rotational directions, and d_1, d_2, …, d_m are the distances from the target edge points to the corresponding visible projected model edges;
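The pseudo-inverse solve for the model velocity can be exercised with a small numerical sketch; the stacked interaction matrix L_s and the distances d_i below are random stand-ins, not values from the paper:

```python
import numpy as np

# m edge-sampling feature units, each contributing one 1x6 row to L_s.
m = 8
rng = np.random.default_rng(0)
L_s = rng.standard_normal((m, 6))    # stand-in stacked interaction matrix
d = rng.standard_normal(m)           # stand-in point-to-edge distances

# Moore-Penrose pseudo-inverse maps image-space deviations to the
# 6-DOF velocity [vx, vy, vz, wx, wy, wz] in the least-squares sense.
v = np.linalg.pinv(L_s) @ d

# For the least-squares solution the normal-equation residual vanishes.
grad = L_s.T @ (L_s @ v - d)
```

With m > 6 the system is overdetermined and the pseudo-inverse returns the velocity minimizing the squared distance residual, which is exactly what the tracking update needs.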

s5, aligning the three-dimensional model with the target: this is the process of converging the distance between the model and the target to zero; weighted least squares is used here, creating an objective function in image space that represents the deviation between the target edges and the projected model features, of the form

E = Σ_{i=1}^{N} ρ(d_i(K, M_i))

where d_i is the distance from the i-th target edge point to the model edge projected with the camera intrinsic parameters K and the current model projection matrix M_i, and ρ is the robust estimation function;
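The weighted least-squares alignment can be sketched as iteratively reweighted least squares (IRLS); the Tukey biweight used below is an assumed choice of robust function (the text only names a robust estimation parameter ρ), and all identifiers are illustrative:

```python
import numpy as np

def tukey_weights(r, c=4.6851):
    """Tukey biweight: weight 1 at r = 0, falling to 0 for |r| >= c,
    so outlier edge residuals barely influence the fit."""
    w = np.zeros_like(r, dtype=float)
    inside = np.abs(r) < c
    w[inside] = (1.0 - (r[inside] / c) ** 2) ** 2
    return w

def irls_step(L, d, w):
    """One weighted least-squares step: minimize ||W^(1/2) (L v - d)||^2."""
    sw = np.sqrt(w)
    v, *_ = np.linalg.lstsq(L * sw[:, None], d * sw, rcond=None)
    return v

# Residual 0 keeps full weight, residual 10 (an outlier) is rejected.
weights = tukey_weights(np.array([0.0, 1.0, 10.0]))
```

Alternating weight computation and weighted solves drives the inlier distances toward zero while suppressing spurious edge matches.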

s6, KLT target tracking algorithm: feature points are extracted from the image and then matched in the next frame to obtain the position of the target in the next frame; the core of the process is to convert target tracking into the tracking of feature points:

E. Harris feature point extraction algorithm:

Shifting the image window by [u, v] produces the gray-scale change

E(u, v) = Σ_{x,y} w(x, y) [I(x + u, y + v) − I(x, y)]² (3.16)

where w(x, y) is the window function, I(x, y) is the image gray scale and I(x + u, y + v) is the image gray scale after translation; simplifying formula (3.16) with a Taylor series gives

E(u, v) ≈ [u, v] M [u, v]ᵀ (3.17)

where M = Σ_{x,y} w(x, y) [I_x², I_x I_y; I_x I_y, I_y²]; for each pixel in the window, the gradients I_x and I_y in the x and y directions are statistically analyzed to obtain M, and M is used to calculate the corner response function R of each pixel:

R = det M − k (trace M)² (3.18)

where det M = λ_1 λ_2 and trace M = λ_1 + λ_2; a point (i, j) is regarded as a corner when it simultaneously satisfies R(i, j) = det(M) − k·trace²(M) > T and R(i, j) is a local maximum in its region, where the value range of k is 0.04 to 0.06 and T is a threshold; R values lower than the threshold are set to 0 during detection;
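A minimal numerical sketch of the Harris response (3.16)-(3.18), using a simple 3×3 box window for w(x, y) and omitting the threshold T and non-maximum suppression:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Per-pixel Harris response R = det(M) - k*trace(M)^2 with a 3x3
    box window as w(x, y). Sketch only; real detectors typically use a
    Gaussian window plus thresholding and non-maximum suppression."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):                      # 3x3 neighborhood sum via zero padding
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

# A white square on black: responses are positive at the square's corners
# and negative along its straight edges.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
R = harris_response(img)
```

The sign pattern is the point of the detector: along an edge only one gradient direction is active, so det(M) ≈ 0 and R < 0, while at a corner both directions contribute and R > 0.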

F. KLT matching algorithm theory:

I(x,y,t+τ)=I(x-Δx,y-Δy,t) (3.19)

given two images I and J, define d as the displacement that minimizes the SSD (sum of squared intensity differences, denoted ε here):

ε(d) = ∫∫_W [J(X + d) − I(X)]² w(X) dX (3.20)

where W is the given feature window, w(X) is a weighting function and d = (Δx, Δy) is the motion offset of the point X = (x, y); since d is much smaller than X, J(X + d) can be Taylor expanded, and differentiating formula (3.20) with respect to d and setting it to zero gives:

Zd=e (3.21)

where Z = ∫∫_W g(X) gᵀ(X) w(X) dX, e = ∫∫_W [I(X) − J(X)] g(X) w(X) dX, and g is the first-order Taylor coefficient of the expansion (the image gradient); Newton iteration is carried out on each point through formula (3.21), and when the iteration result reaches the required precision, the tracking of the image point is achieved, i.e. the best match is found; the final solution is

d_k = d_{k−1} + Z⁻¹ e(d_{k−1}), k = 1, 2, … (3.22)

where d is the translation of the center of the feature window and d_k is the value of d obtained by the k-th Newton iteration; the initial iteration value is set to 0 in this paper. When the target moves slowly, the relative inter-frame displacement is small; in this case the KLT matching algorithm achieves high matching precision while reducing the matching search range. When the target moves too fast, however, the inter-frame displacement becomes large and the reliability of the feature point pairs obtained by matching decreases: if the matching window is selected small, matches are missed, and if it is selected large, the matching precision is affected. Image pyramids I^l and J^l, l = 0, 1, 2, …, L, are therefore constructed, and matching proceeds as follows: (1) set l = L and initialize the displacement estimate of the top layer to 0; (2) let n = n + 1 and compute d on I^l and J^l from formula (3.22); if the iteration count is reached or the precision of d already meets the requirement, execute the next step as long as one of the conditions is met, otherwise return to step (2); (3) if l = 0, the iteration ends and the final d is obtained; otherwise the current result, scaled for the doubled resolution of the next layer, is taken as its initial estimate, l = l − 1 and n = 0 are set, and the process returns to step (2). A small search window is selected in the layered iteration process, and iteration continues from the high layer down to the lowest layer, which improves point-pair matching precision when the target moves fast;
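A single-level sketch of the Newton iteration Z d = e of (3.21)-(3.22), restricted to integer-pixel shifts for simplicity; the pyramidal scheme of steps (1)-(3) would run this from the coarsest level down, doubling the estimate between levels. This is an illustration, not the paper's implementation:

```python
import numpy as np

def lk_translation(I, J, x, y, win=9, iters=20):
    """Track the feature window of I centred at (x, y) into J by Newton
    iteration on Z d = e, using integer-pixel window shifts only."""
    h = win // 2
    Iy, Ix = np.gradient(I.astype(float))            # image gradients g(X)
    sl = np.s_[y - h:y + h + 1, x - h:x + h + 1]
    g = np.stack([Ix[sl].ravel(), Iy[sl].ravel()])
    Z = g @ g.T                                      # 2x2 matrix Z
    d = np.zeros(2, dtype=int)
    for _ in range(iters):
        xs, ys = x + d[0], y + d[1]
        diff = (I[sl] - J[ys - h:ys + h + 1, xs - h:xs + h + 1]).ravel()
        e = g @ diff                                 # right-hand side e
        step = np.round(np.linalg.solve(Z, e)).astype(int)
        if not step.any():                           # required precision reached
            break
        d += step
    return d

# Synthetic check: J is I shifted by (2, 1) pixels, tracked at the blob centre.
yy, xx = np.mgrid[0:31, 0:31]
I = np.exp(-((xx - 15.0) ** 2 + (yy - 15.0) ** 2) / 32.0)
J = np.roll(I, (1, 2), axis=(0, 1))                  # J(X) = I(X - (2, 1))
d = lk_translation(I, J, 15, 15)
```

Each Newton step solves the 2×2 linearized system, so the estimate converges in a few iterations when the displacement is small, which is exactly why the pyramid is needed for fast motion.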

s7, point-line cooperative target tracking: camera displacement estimation based on texture information is integrated into the camera pose calculation process based on edge features.

In the embodiment of the present invention, in step S1, the resources provided by the VISP platform are three model-based markerless trackers: a model tracker based on edge information, a model tracker based on KLT key point information, and a target tracker based on point-line cooperation.

In the embodiment of the present invention, in step S3, the edge detection operator is determined according to the angle of the straight line.

In the embodiment of the present invention, in step S3, when applying the edge detection operator, each sampling unit searches along the normal direction of the line within the set pixel search range, and the maximum of the pixel convolution of each detection unit in the normal direction is calculated to solve the corresponding pixel coordinates.

In the embodiment of the present invention, in step S5, ρ represents the robust estimation parameter, M_i is the projection matrix of the current model, N is the number of edge tracking features, and K is the camera intrinsic parameter matrix.

In the embodiment of the invention, in step S6, the algorithm adopts the Harris operator to extract feature points and combines it with the optimally estimated KLT matching algorithm to realize tracking; the tracking process includes a reinitialization of the KLT tracker, whose criterion is based on the ratio of the number of currently tracked feature points to the number of initially detected feature points.

In the embodiment of the present invention, in step S6, the Harris corner detection algorithm is implemented by setting a local detection window on the image; when moving the window in any direction causes a large gray-scale change, the central pixel point of the area is a detected corner.

In the embodiment of the present invention, in step S6, the key idea of the KLT tracking algorithm is to use the sum of squared gray differences of the window to be tracked between video image frames as the measurement; specifically, a window W containing feature texture information is first given, and a translation model is then used to describe the change of the pixel points in the feature window, so that the image frame at time t is I(x, y, t) and the image frame at time t + τ is I(x, y, t + τ).

In the embodiment of the present invention, in step S6, the optimal position and shape of the target are determined according to the keypoint position information obtained by matching, combined with the information of the CAD model, so as to realize the tracking of the three-dimensional target.

In the embodiment of the present invention, in step S7, the method for sampling points is implemented by minimizing the non-linear criterion.

Details not described in this specification are within the common knowledge of those skilled in the art.

It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
