Distance measurement method based on scene with three-dimensional camera or monocular camera sliding on guide rail

Document No.: 1902793    Publication date: 2021-11-30

Note: this technology, "Distance measurement method based on scene with three-dimensional camera or monocular camera sliding on guide rail", was designed and created by 王方聪, 石珞家, 辛纪潼, 查美怡 and 王鹏 on 2021-09-18. Abstract: The invention discloses a distance measurement method based on a scene in which a three-dimensional camera or a monocular camera slides on a guide rail, comprising: acquiring a pair of far and near images on the same optical axis, either by sliding a monocular camera on an optical guide rail or by using a three-dimensional camera; scaling the near image down to obtain a series of continuously scaled images; binarizing the far image and the scaled images, then extracting edges to obtain binarized contour maps of the far and near images; performing a rectangular convolution of each contour map in the series of near-image contour maps, in turn, with the far-image contour map; comparing all convolution values to find the maximum, and reading the relative position of the two matrices at the maximum and the scaled size of the near image at which they coincide; retaining the part of the far image that overlaps the scaled image corresponding to the maximum convolution value, which together with the original near image forms a pair of images of equal information content. The invention provides a more accurate and cheaper camera ranging scheme.

1. A distance measurement method based on a scene in which a three-dimensional camera or a monocular camera slides on a guide rail, characterized by comprising the following steps:

step S1, sliding a monocular camera on an optical guide rail, or using a three-dimensional camera, to acquire a pair of far and near images on the same optical axis;

step S2, down-sampling the near image with a linear interpolation algorithm, scaling it to obtain a series of continuously scaled images;

step S3, binarizing the far image and the scaled images, then performing edge extraction to obtain binarized contour maps of the far and near images;

step S4, sliding each contour map in the series of near-image contour maps over the far-image contour map as a convolution operator, performing one convolution per movement, so that the series of near-image contour maps is subjected in turn to a rectangular convolution with the far-image contour map;

step S5, comparing all convolution values to find the maximum convolution value, and reading the relative position of the two matrices at the maximum as well as the scaled size of the near image at which they coincide;

step S6, retaining the part of the far image that overlaps the scaled image corresponding to the maximum convolution value, which together with the original near image forms a pair of images of equal information content;

step S7, coordinate conversion: converting the pixel rectangular coordinates, whose origin is at the upper-left corner of the image, into polar coordinates whose pole is at the image center, and taking these coordinates as the standard;

step S8, applying the SIFT corner method or the SURF corner method to the far and near images respectively, and outputting the polar coordinates of the corner points in order;

step S9, matching corner points whose ordered polar-coordinate values in the far and near images fall within a certain threshold range;

step S10, classifying the corner points of the contours according to the object to which they belong;

step S11, connecting the corner points of the same object within the same image, obtaining the matching relationship of the line segments from the position information of the matched corner points, and selecting either the length of the longest pair of matched line segments, or the average length of the matched line segments in the far and near images, or the average polar radius of the corner points belonging to the same object in the far image and in the near image respectively;

step S12, substituting the obtained length or average value into the optical relation corresponding to the scene, and solving to obtain the object distance;

and step S13, classifying or judging the corner points according to the object contours, the object distance representing, through the corner-point affiliation, the distance between the object and the lens.

2. The distance measurement method based on a scene in which a three-dimensional camera or a monocular camera slides on a guide rail according to claim 1, wherein in step S12 the scene is: a scene in which the monocular camera slides on the optical guide rail, or a three-dimensional camera scene.

3. The distance measurement method based on a scene in which a three-dimensional camera or a monocular camera slides on a guide rail according to claim 2, wherein the three-dimensional camera is: a small-volume common-virtual-axis portable three-dimensional camera based on the monocular distance measurement principle.

4. The distance measurement method based on a scene in which a three-dimensional camera or a monocular camera slides on a guide rail according to claim 1, wherein in step S4, during one traversal, the product at a pixel position where the two contours coincide is 1, and the product at a non-coinciding position is 0.

5. The distance measurement method based on a scene in which a three-dimensional camera or a monocular camera slides on a guide rail according to claim 1, wherein in step S6, the near image scaled to the coinciding size and the un-scaled near image form a pair of images with the same information content.

6. The distance measurement method based on a scene in which a three-dimensional camera or a monocular camera slides on a guide rail according to claim 1, wherein in step S9, alternatively, the ratio of the nearest distance to the second-nearest distance is used: a threshold is set, corner matching is performed when this ratio is below the threshold, and spurious points are removed.

7. The distance measurement method based on a scene in which a three-dimensional camera or a monocular camera slides on a guide rail according to claim 2, wherein, in the scene in which the monocular camera slides on the optical guide rail,

assuming that the object distance in the first imaging is u and the object distance in the second imaging is u + d, that the length value obtained in the first imaging of the object is h1, and that the length value obtained in the second imaging is h2; since the parameters of the monocular camera remain unchanged between the two imagings, a formula is obtained from the optical imaging relationship:

and the object distance u is calculated.

8. The distance measurement method based on a scene in which a three-dimensional camera or a monocular camera slides on a guide rail according to claim 2, wherein, in the scene of the small-volume common-virtual-axis three-dimensional camera using the monocular distance measurement principle,

assuming that L1 is the distance between the center of the 50% mirror and the upper lens, and L2 is the distance between the center of the total reflection mirror and the lower lens; that the length value in the first lens is d1 and the length value in the second lens is d2; that h is the distance between the optical axis of the first lens and the optical axis of the second lens; and that L'1 is the distance between the object and the first lens; since the first lens and the second lens have the same focal length and the same angle of view θ, a formula is obtained from the optical imaging relationship:

the object distance L'1 is calculated.

Technical Field

The invention relates to the field of distance measurement.

Background

Distance measurement algorithms are widely applied in fields such as industrial inspection, medical treatment, traffic, automatic driving, building design, aerospace, and virtual reality. In scenes such as automatic driving and unmanned aerial vehicle flight, camera ranging has a huge cost advantage over ranging modes such as radar, laser, and lidar.

Traditional camera ranging methods are divided into monocular ranging and binocular ranging. In binocular ranging, the error keeps growing as the distance increases, so long-distance ranging cannot be carried out, and the method is troublesome and difficult to use. Traditional monocular ranging requires calibrating the camera, which is very troublesome and almost inevitably introduces calibration errors. Current ranging methods are therefore problematic and need improvement.

Disclosure of Invention

The invention aims to provide a distance measurement method based on a scene in which a three-dimensional camera or a monocular camera slides on a guide rail, thereby obtaining a more accurate and cheaper camera ranging scheme.

The technical scheme achieving this aim is as follows:

a distance measurement method based on a scene which is slid on a guide rail by a three-dimensional camera or a monocular camera comprises the following steps:

step S1, sliding a monocular camera on an optical guide rail, or using a three-dimensional camera, to acquire a pair of far and near images on the same optical axis;

step S2, down-sampling the near image with a linear interpolation algorithm, scaling it to obtain a series of continuously scaled images;

step S3, binarizing the far image and the scaled images, then performing edge extraction to obtain binarized contour maps of the far and near images;

step S4, sliding each contour map in the series of near-image contour maps over the far-image contour map as a convolution operator, performing one convolution per movement, so that the series of near-image contour maps is subjected in turn to a rectangular convolution with the far-image contour map;

step S5, comparing all convolution values to find the maximum convolution value, and reading the relative position of the two matrices at the maximum as well as the scaled size of the near image at which they coincide;

step S6, retaining the part of the far image that overlaps the scaled image corresponding to the maximum convolution value, which together with the original near image forms a pair of images of equal information content;

step S7, coordinate conversion: converting the pixel rectangular coordinates, whose origin is at the upper-left corner of the image, into polar coordinates whose pole is at the image center, and taking these coordinates as the standard;

step S8, applying the SIFT (Scale-Invariant Feature Transform) corner method or the SURF (Speeded-Up Robust Features) corner method to the far and near images respectively, and outputting the polar coordinates of the corner points in order;

step S9, matching corner points whose ordered polar-coordinate values in the far and near images fall within a certain threshold range;

step S10, classifying the corner points of the contours according to the object to which they belong;

step S11, connecting the corner points of the same object within the same image, obtaining the matching relationship of the line segments from the position information of the matched corner points, and selecting either the length of the longest pair of matched line segments, or the average length of the matched line segments in the far and near images, or the average polar radius of the corner points belonging to the same object in the far image and in the near image respectively;

step S12, substituting the obtained length or average value into the optical relation corresponding to the scene, and solving to obtain the object distance;

and step S13, classifying or judging the corner points according to the object contours, the object distance representing, through the corner-point affiliation, the distance between the object and the lens.

Preferably, in step S12, the scene refers to: a scene in which the monocular camera slides on the optical guide rail, or a three-dimensional camera scene.

Preferably, the three-dimensional camera is: a small-volume common-virtual-axis portable three-dimensional camera based on the monocular distance measurement principle.

Preferably, in step S4, during one traversal, the product at a pixel position where the two contours coincide is 1, and the product at a non-coinciding position is 0.

Preferably, in step S6, the near image scaled to the coinciding size and the un-scaled near image form a pair of images with the same information content.

Preferably, in step S9, alternatively, the ratio of the nearest distance to the second-nearest distance is used: a threshold is set, corner matching is performed when this ratio is below the threshold, and spurious points are removed.

Preferably, in the scene in which the monocular camera slides on the optical guide rail,

assuming that the object distance in the first imaging is u and the object distance in the second imaging is u + d, that the length value obtained in the first imaging of the object is h1, and that the length value obtained in the second imaging is h2; since the parameters of the monocular camera remain unchanged between the two imagings, a formula is obtained from the optical imaging relationship:

and the object distance u is calculated.

Preferably, in the scene of the small-volume common-virtual-axis three-dimensional camera using the monocular distance measurement principle, the near image is captured by the first lens and the far image by the second lens;

assuming that L1 is the distance between the center of the 50% mirror and the upper lens, and L2 is the distance between the center of the total reflection mirror and the lower lens; that the length value in the first lens is d1 and the length value in the second lens is d2; that h is the distance between the optical axis of the first lens and the optical axis of the second lens; and that L'1 is the distance between the object and the first lens; since the first lens and the second lens have the same focal length and the same angle of view θ, a formula is obtained from the optical imaging relationship:

the object distance L'1 is calculated.

The invention has the following beneficial effects: it avoids applying the SIFT corner method directly, so the difference in information content between the front and rear views does not introduce a large number of errors in corner selection. By eliminating the information-content discrepancy between the far and near images, it further reduces the error in corner matching and rejection, and also reduces the algorithmic error when the final parameter is chosen as the mean length or the length of the longest segment. The method offers real-time operation at low cost, small error, and a wide application range; it has a very good cost advantage, overcomes the difficulties of monocular ranging that requires calibration and of binocular ranging of distant objects, and works well both for the small-volume common-virtual-axis portable three-dimensional camera and for the scene in which a monocular camera slides on a guide rail. It measures distance effectively and prepares material for subsequent three-dimensional reconstruction.

Drawings

FIG. 1 is a flow chart of the distance measurement method based on a scene in which a three-dimensional camera or a monocular camera slides on a guide rail according to the present invention;

fig. 2 is a structural diagram of an embodiment of the small-volume common-virtual-axis portable three-dimensional camera according to the invention.

Detailed Description

The invention will be further explained with reference to the drawings.

Referring to fig. 1 and 2, the distance measurement method based on a scene in which a three-dimensional camera or a monocular camera slides on a guide rail according to the present invention includes the following steps:

Step S1, sliding a monocular camera on an optical guide rail, or using a three-dimensional camera, to acquire a pair of far and near images on the same optical axis. The camera parameters are kept unchanged while the far and near images are captured.

Step S2, down-sampling the near image with a linear interpolation algorithm to obtain a series of continuously scaled images, whose scale factors decrease steadily. The interpolation algorithm is one of bilinear interpolation, nearest-neighbor interpolation, or cubic interpolation. The scale factor is adjusted by a constant difference, so that each successive image shrinks by the same amount, or alternatively the scales follow the structure of a Gaussian pyramid.
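As an illustrative sketch of this step (not part of the patent), the constant-difference scale series and bilinear down-sampling can be implemented with NumPy alone; the function names `bilinear_resize` and `scale_series`, the step size 0.1, and the minimum scale 0.5 are assumptions for the example:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Down-sample a 2-D grayscale image by bilinear interpolation."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)   # source row for each output row
    xs = np.linspace(0, in_w - 1, out_w)   # source column for each output column
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, in_h - 1), np.minimum(x0 + 1, in_w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    # Interpolate along x on the two bracketing rows, then along y.
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def scale_series(near, step=0.1, min_scale=0.5):
    """Scale the near image down repeatedly, the scale factor
    decreasing by a constant difference each time."""
    h, w = near.shape
    n_steps = int(round((1.0 - min_scale) / step))
    return [bilinear_resize(near,
                            max(1, round(h * (1.0 - k * step))),
                            max(1, round(w * (1.0 - k * step))))
            for k in range(1, n_steps + 1)]
```

A Gaussian-pyramid variant would instead halve the scale at every level rather than subtracting a constant difference.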

Step S3, binarizing the far image and the scaled images, then performing edge extraction to obtain binarized contour maps of the far and near images. The edge extraction method is one of the Sobel, Canny, or Laplacian operators.
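A minimal sketch of this step (illustrative, not from the patent), using a fixed threshold and a plain-NumPy Sobel gradient; the threshold value 128 and the function names are assumptions, and Canny or Laplacian operators could be substituted:

```python
import numpy as np

def binarize(img, thresh=128):
    """Binarize a grayscale image: 1 at or above the threshold, 0 below."""
    return (img >= thresh).astype(np.uint8)

def sobel_edges(binary):
    """Extract a binary contour map from a binarized image via
    Sobel gradient magnitude: any non-zero gradient is an edge."""
    gx_k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy_k = gx_k.T
    h, w = binary.shape
    padded = np.pad(binary.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):           # correlate with the two 3x3 kernels
        for j in range(3):
            win = padded[i:i + h, j:j + w]
            gx += gx_k[i, j] * win
            gy += gy_k[i, j] * win
    mag = np.hypot(gx, gy)
    return (mag > 0).astype(np.uint8)
```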

Step S4, sliding each contour map in the series of near-image contour maps over the far-image contour map as a convolution operator, performing one convolution per movement, so that the series of near-image contour maps is subjected in turn to a rectangular convolution with the far-image contour map. Each movement of the matrix outputs one value; the output values at the different positions are compared by magnitude, and each value corresponds to the relative position information of the far and near maps. During one traversal, the product at a pixel position where the two contours coincide is 1, and the product at a non-coinciding position is 0. In one convolution, the outputs at all pixel positions are accumulated to give the convolution result.
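The exhaustive rectangular convolution described above can be sketched as follows (an illustration; `best_match` is an assumed name): each scaled near contour slides over the far contour, coinciding edge pixels contribute 1 each, and the largest accumulated sum identifies both the best offset and the best scale:

```python
import numpy as np

def best_match(far_contour, scaled_contours):
    """Slide each scaled near-image contour map over the far-image
    contour map; at every offset, accumulate the products of
    overlapping pixels (1*1 = 1 where edges coincide, 0 elsewhere).
    Return the maximum score, its (row, col) offset, and the scale index."""
    best = (-1, None, None)
    fh, fw = far_contour.shape
    for k, near in enumerate(scaled_contours):
        nh, nw = near.shape
        for r in range(fh - nh + 1):
            for c in range(fw - nw + 1):
                score = int(np.sum(far_contour[r:r + nh, c:c + nw] * near))
                if score > best[0]:
                    best = (score, (r, c), k)
    return best
```

In practice the same score can be computed far faster with FFT-based cross-correlation, but the brute-force loop makes the accumulation rule explicit.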

Step S5, comparing all convolution values to find the maximum convolution value, and reading the relative position of the two matrices at the maximum as well as the scaled size of the near image at which they coincide.

Step S6, retaining the part of the far image that overlaps the scaled image corresponding to the maximum convolution value, which together with the original near image forms a pair of images of equal information content; alternatively, the near image scaled to the coinciding size and the un-scaled near image form the pair of images with the same information content.

Step S7, coordinate conversion: converting the pixel rectangular coordinates, whose origin is at the upper-left corner of the image, into polar coordinates whose pole is at the image center, and taking these coordinates as the standard.
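A small sketch of the coordinate conversion (illustrative; `pixel_to_polar` is an assumed name), mapping pixel coordinates with the origin at the upper-left corner, where y grows downward, to polar coordinates with the pole at the image center:

```python
import math

def pixel_to_polar(x, y, width, height):
    """Convert a pixel's rectangular coordinates (origin at the image's
    upper-left corner, y growing downward) into polar coordinates whose
    pole is the image center."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    dx, dy = x - cx, cy - y          # flip y so angles follow math convention
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)       # radians in (-pi, pi]
    return r, theta
```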

Step S8, applying the SIFT corner method or the SURF corner method to the far and near images respectively, and outputting the polar coordinates of the corner points in order.

Step S9, matching corner points whose ordered polar-coordinate values in the far and near images fall within a certain threshold range; alternatively, using the ratio of the nearest distance to the second-nearest distance: a threshold is set, corner matching is performed when this ratio is below the threshold, and spurious points are removed.
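The nearest/second-nearest ratio test can be sketched in plain Python (illustrative; the descriptor format and the 0.8 threshold are assumptions, the threshold following Lowe's common choice for SIFT matching):

```python
def ratio_match(desc_far, desc_near, ratio=0.8):
    """Match corner descriptors between the far and near images with the
    nearest/second-nearest distance ratio test: keep a match only when
    nearest / second-nearest is below the threshold."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    matches = []
    for i, d_far in enumerate(desc_far):
        # Rank the near-image descriptors by distance to this descriptor.
        ranked = sorted(range(len(desc_near)),
                        key=lambda j: dist(d_far, desc_near[j]))
        if len(ranked) < 2:
            continue
        d_best = dist(d_far, desc_near[ranked[0]])
        d_second = dist(d_far, desc_near[ranked[1]])
        if d_second > 0 and d_best / d_second < ratio:
            matches.append((i, ranked[0]))   # unambiguous: keep the match
    return matches
```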

Step S10, classifying the corner points according to the object to which they belong, using the closed edges of the contours.

Step S11, connecting the corner points of the same object within the same image, obtaining the matching relationship of the line segments from the position information of the matched corner points, and selecting either the length of the longest pair of matched line segments, or the average length of the matched line segments in the far and near images, or the average polar radius of the corner points belonging to the same object in the far image and in the near image respectively.
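Two of the candidate measurements of this step can be sketched as follows (illustrative; it assumes each object's matched corner points are listed in the same polygon order in both images, and the names are made up for the example). The third option, averaging the polar radii of the corner points, follows the same pattern:

```python
import math

def segment_lengths(corners):
    """Lengths of the line segments connecting consecutive corner points
    of one object, treated as a closed polygon."""
    n = len(corners)
    return [math.dist(corners[i], corners[(i + 1) % n]) for i in range(n)]

def pick_length(far_corners, near_corners):
    """From matched corner polygons in the far and near images, return the
    lengths of the longest matched-segment pair, plus the mean segment
    length in each image."""
    far_len = segment_lengths(far_corners)
    near_len = segment_lengths(near_corners)
    k = max(range(len(far_len)), key=lambda i: far_len[i])
    longest_pair = (far_len[k], near_len[k])
    return (longest_pair,
            sum(far_len) / len(far_len),
            sum(near_len) / len(near_len))
```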

Step S12, substituting the obtained length or average value into the optical relation corresponding to the scene, and solving to obtain the object distance. The scene refers to: a scene in which the monocular camera slides on the optical guide rail, or a three-dimensional camera scene. The three-dimensional camera is: a small-volume common-virtual-axis portable three-dimensional camera based on the monocular distance measurement principle.

In the scene in which the monocular camera slides on the optical guide rail,

assuming that the object distance in the first imaging is u and the object distance in the second imaging is u + d, that the length value obtained in the first imaging of the object is h1, and that the length value obtained in the second imaging is h2; since the parameters of the monocular camera remain unchanged between the two imagings, a formula is obtained from the optical imaging relationship:
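The formula referenced here appears as an image in the original patent and is not reproduced in the text. A plausible reconstruction (an assumption, not the patent's own equation), taking the imaged length as inversely proportional to the object distance when the camera parameters are fixed:

```latex
\frac{h_1}{h_2} = \frac{u + d}{u}
\qquad\Longrightarrow\qquad
u = \frac{h_2\, d}{h_1 - h_2}
```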

and the object distance u is calculated. Here d is a quantity that can be read directly on the optical guide rail, and h1 and h2 are both printed or read out; substituting these parameters into the formula gives the object distance u.

In the scene of the small-volume common-virtual-axis three-dimensional camera using the monocular distance measurement principle,

assuming that L1 is the distance between the center of the 50% mirror and the upper lens, and L2 is the distance between the center of the total reflection mirror and the lower lens; that the length value in the first lens is d1 and the length value in the second lens is d2; that h is the distance between the optical axis of the first lens and the optical axis of the second lens; and that L'1 is the distance between the object and the first lens; since the first lens and the second lens have the same focal length and the same angle of view θ, a formula is obtained from the optical imaging relationship:
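The formula here is likewise an image in the original patent and does not appear in the text. A heavily hedged reconstruction (an assumption about the geometry, not the patent's own equation): if the second channel's optical path exceeds the first channel's by Δ = L2 + h − L1, the two channels behave like the sliding-rail scene with baseline Δ, which would give:

```latex
\frac{d_1}{d_2} = \frac{L'_1 + (L_2 + h - L_1)}{L'_1}
\qquad\Longrightarrow\qquad
L'_1 = \frac{d_2\,(L_2 + h - L_1)}{d_1 - d_2}
```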

the object distance L' 1 is calculated. L1 and L2 and h are known quantities, d1 and d2 have been printed or read, and the parameters are substituted into the above equation to obtain the object distance L' 1.

Step S13, classifying or judging the corner points according to the object contours, the object distance representing, through the corner-point affiliation, the distance between the object and the lens. Here the object distance is the distance from the lens to the line formed by the classified corner points.

In fig. 2, the reference numerals denote: first lens 1; second lens 2; beam splitter 3; total reflection mirror 4; target object 5; distance h between the optical axis of the first lens and the optical axis of the second lens 6; first lens optical axis 7; second lens optical axis 8.

The above embodiments are provided only to illustrate the present invention and not to limit it. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention; therefore, all equivalent technical solutions shall also fall within the scope of the present invention, which shall be defined by the claims.
