Monocular water surface target segmentation and monocular distance measurement method of shipborne camera

Document No.: 1873853 · Publication date: 2021-11-23

Reading note: This technology, "Monocular water surface target segmentation and monocular distance measurement method of a shipborne camera" (船载摄像头的单目水面目标分割及单目测距方法), was designed and created by Chen Yaojie (陈姚节), Sun Jishan (孙棘山) and Wang Wei (王薇) on 2021-07-28. Its main content is as follows: The invention discloses a monocular water surface target segmentation and monocular distance measurement method for a shipborne camera, comprising: S1, reading images frame by frame from the video stream captured by the shipborne camera; S2, performing target detection on the entire environment appearing in the visual range of each extracted key frame, and screening out targets; S3, performing two-dimensional bounding-box framing and image segmentation based on the screened targets and the actual application scene, to obtain target information in the current video frame; S4, calculating the sensor coefficient corresponding to the target from the distance data acquired by laser ranging and the target information; S5, calculating the horizontal distance between the target and the monocular camera from the established geometric-relation ranging model, using the sensor coefficient corresponding to the target; S6, establishing a field-angle optimization model from the horizontal distance between the target and the monocular camera, optimizing the field angle, and obtaining the distance between the target and the monocular camera. The method of the invention has high measurement accuracy and strong generalization ability.

1. A monocular water surface target segmentation and monocular distance measurement method of a shipborne camera is characterized by at least comprising the following steps:

S1, reading images frame by frame from the video stream captured by the shipborne camera;

S2, extracting key frames from the images read in step S1, performing target detection on the entire environment appearing in the visual range of each extracted key frame, and screening out targets;

S3, performing two-dimensional bounding-box framing and image segmentation based on the screened targets and the actual application scene, to obtain target information in the current video frame;

S4, calculating the sensor coefficient corresponding to the target from the camera-to-target-bottom distance obtained by laser ranging and the target information obtained in step S3;

S5, establishing a geometric-relation ranging model, and calculating the horizontal distance between the target and the monocular camera from this model, using the sensor coefficient corresponding to the target obtained in step S4;

and S6, establishing a field-angle optimization model from the horizontal distance between the target and the monocular camera obtained in step S5, optimizing the field angle, and obtaining the distance between the target and the monocular camera.

2. The method according to claim 1, wherein in step S1, the mounting position of the shipborne camera is fixed, while its field angle and pitch angle are freely adjustable.

3. The method according to claim 1, wherein in step S2, the image is processed by Faster R-CNN, a target detection algorithm based on a deep convolutional neural network, whose network structure comprises two parts, an RPN and Fast R-CNN, where the RPN predicts candidate regions that may contain a target in the input image, and Fast R-CNN classifies the candidate regions predicted by the RPN.

4. The method according to claim 3, wherein the step of performing target detection in step S2 through Faster R-CNN, the target detection algorithm based on a deep convolutional neural network, comprises:

1) initializing the RPN network parameters with a pre-trained network model, and fine-tuning them by stochastic gradient descent and back-propagation;

2) initializing the Fast R-CNN target detection network parameters with a pre-trained network model, extracting candidate regions with the RPN from step 1), and training the target detection network;

3) re-initializing and fine-tuning the RPN network parameters with the target detection network trained in step 2), to obtain a new RPN network;

4) extracting candidate regions with the RPN from step 3), and fine-tuning the target detection network parameters trained in step 2);

5) repeating steps 3) and 4) until the maximum number of iterations is reached or the network converges.

5. The method according to claim 1, wherein in step S3, an instance segmentation network is used for image segmentation, namely a convolutional neural network algorithm guided by regional covariance.

6. The method of claim 5, wherein the convolutional neural network algorithm guided by regional covariance comprises the steps of:

1) extracting low-level features of the key frame pixel by pixel;

2) constructing the region covariance based on the multi-dimensional feature vectors;

3) constructing a convolutional neural network model with the covariance matrices as training samples;

4) calculating the image saliency based on the local and global contrast principles;

5) framing the salient target and acquiring the target information.

7. The method according to claim 5 or 6, wherein in step S4, a coordinate-system conversion is applied to the target information obtained by the instance segmentation network, and the sensor coefficient corresponding to the target is then calculated in combination with the distance data obtained by laser ranging.

8. The method according to claim 1, wherein in step S5, the process of calculating the horizontal distance between the target and the monocular camera comprises the following steps:

1) acquiring the height H of the shipborne camera above the horizontal plane and the focal length f of the camera;

2) acquiring the pitch angle pitch at which the shipborne camera shoots the target;

3) acquiring the pixel height h of the measured target in the image;

4) calculating the horizontal distance between the shipborne camera and the target by the following formula (1):

d = H / tan(pitch + arctan(k·h / f))    (1)

where k is the sensor coefficient described in step S4.

Technical Field

The invention relates to the technical field of shipborne distance measurement, in particular to a monocular water surface target segmentation and monocular distance measurement method of a shipborne camera.

Background

Pinhole-imaging visual ranging is a basic method for the distance measurement and identification of targets in current images and videos, and offers a new way for ship equipment to process image data from sensor devices. When ship equipment uses a monocular camera to acquire target data and pinhole-imaging vision technology to measure and identify distance, the ranging principle is simple and convenient and the data utilization rate is high, so this has gradually become the mainstream direction of ranging research.

Monocular distance measurement algorithms can be divided into three categories: ranging algorithms based on imaging models, ranging algorithms based on mathematical regression modeling, and ranging algorithms based on geometric derivation. Ranging algorithms based on imaging models must acquire the actual height or width of the measured target in advance, which is unsuitable in practical applications. Ranging algorithms based on mathematical regression modeling build models from large existing data sets; once the data set is replaced, the model no longer applies, so generalization is poor. Compared with these two, ranging algorithms based on geometric derivation need only the intrinsic and extrinsic parameters of the camera, and clearly have better generalization ability and applicability. Stein et al. proposed a basic model of the similar-triangle ranging algorithm and discussed the influence of pixel errors on ranging accuracy. Subsequently, Liu et al. considered on this basis the influence of the camera's pitch angle on ranging; the model has since seen extensive practical application, but deeper theoretical research has not been carried out.

At the present stage, when monocular-vision ranging methods are combined with machine learning and deep learning, a relatively large data set is needed to train the ranging model. The generalization ability of a model trained in this way is often unsatisfactory, and the model's performance is affected by many factors. Under the current technical background, therefore, shipborne monocular-vision ranging methods suffer from low measurement accuracy, complex and tedious procedures, and a lack of good generalization ability.

Disclosure of Invention

In order to overcome the defects of the technology, the invention provides a monocular water surface target segmentation and monocular distance measurement method of a shipborne camera.

The technical scheme adopted by the invention for overcoming the technical problems is as follows:

a monocular water surface target segmentation and monocular distance measurement method of a shipborne camera at least comprises the following steps:

S1, reading images frame by frame from the video stream captured by the shipborne camera;

S2, extracting key frames from the images read in step S1, performing target detection on the entire environment appearing in the visual range of each extracted key frame, and screening out targets;

S3, performing two-dimensional bounding-box framing and image segmentation based on the screened targets and the actual application scene, to obtain target information in the current video frame;

S4, calculating the sensor coefficient corresponding to the target from the camera-to-target-bottom distance obtained by laser ranging and the target information obtained in step S3;

S5, establishing a geometric-relation ranging model, and calculating the horizontal distance between the target and the monocular camera from this model, using the sensor coefficient corresponding to the target obtained in step S4;

and S6, establishing a field-angle optimization model from the horizontal distance between the target and the monocular camera obtained in step S5, optimizing the field angle, and obtaining the distance between the target and the monocular camera.

Further, in step S1, the mounting position of the shipborne camera is fixed, while its field angle and pitch angle are freely adjustable.

Further, in step S2, the image is processed by Faster R-CNN, a target detection algorithm based on a deep convolutional neural network, whose network structure comprises two parts, an RPN and Fast R-CNN, where the RPN predicts candidate regions that may contain a target in the input image, and Fast R-CNN classifies the candidate regions predicted by the RPN.

Further, in step S2, the step of performing target detection by Faster R-CNN, the target detection algorithm based on a deep convolutional neural network, comprises:

1) initializing the RPN network parameters with a pre-trained network model, and fine-tuning them by stochastic gradient descent and back-propagation;

2) initializing the Fast R-CNN target detection network parameters with a pre-trained network model, extracting candidate regions with the RPN from step 1), and training the target detection network;

3) re-initializing and fine-tuning the RPN network parameters with the target detection network trained in step 2), to obtain a new RPN network;

4) extracting candidate regions with the RPN from step 3), and fine-tuning the target detection network parameters trained in step 2);

5) repeating steps 3) and 4) until the maximum number of iterations is reached or the network converges.

Further, in step S3, an instance segmentation network is used for image segmentation, namely a convolutional neural network algorithm guided by regional covariance.

Further, the convolutional neural network algorithm guided by regional covariance comprises the following steps:

1) extracting low-level features of the key frame pixel by pixel;

2) constructing the region covariance based on the multi-dimensional feature vectors;

3) constructing a convolutional neural network model with the covariance matrices as training samples;

4) calculating the image saliency based on the local and global contrast principles;

5) framing the salient target and acquiring the target information.

Further, in step S4, a coordinate-system conversion is applied to the target information acquired by the instance segmentation network, and the sensor coefficient corresponding to the target is calculated in combination with the distance data acquired by laser ranging.

Further, in step S5, the process of calculating the horizontal distance between the target and the monocular camera comprises the following steps:

1) acquiring the height H of the shipborne camera above the horizontal plane and the focal length f of the camera;

2) acquiring the pitch angle pitch at which the shipborne camera shoots the target;

3) acquiring the pixel height h of the measured target in the image;

4) calculating the horizontal distance between the shipborne camera and the target by the following formula (1):

d = H / tan(pitch + arctan(k·h / f))    (1)

where k is the sensor coefficient described in step S4.

The invention has the beneficial effects that:

1. When the method is used for actual ship target ranging, the height or width of the measured object does not need to be known; only the intrinsic and extrinsic parameters of the camera are needed, and the ranging preparation and principle are simple and convenient.

2. When used for ranging, the method offers high measurement accuracy, good real-time performance and strong generalization ability. In addition, it can to a certain extent overcome the measurement errors caused in laser ranging by occlusion from water columns and water mist; it is also little affected by external factors and has a wide application range.

Drawings

Fig. 1 is a schematic flow chart of a monocular water surface target segmentation and monocular distance measurement method of a shipborne camera according to an embodiment of the present invention.

Fig. 2 is a diagram of a geometric distance measurement model according to an embodiment of the present invention.

Fig. 3 shows the different positions obtained by simulating the movement of a target in a lake, according to the embodiment of the present invention.

Fig. 4 shows the absolute errors of ranging measurements at different positions and field angles, according to an embodiment of the present invention.

Detailed Description

In order to facilitate a better understanding of the invention for those skilled in the art, the invention will be described in further detail with reference to the accompanying drawings and specific examples, which are given by way of illustration only and do not limit the scope of the invention.

The embodiment discloses a monocular water surface target segmentation and monocular distance measurement method for a shipborne camera, which at least comprises the following steps as shown in fig. 1:

step S1, reading images by frame from the video stream captured by the onboard camera.

The installation position of the shipborne camera is fixed, but its field angle and pitch angle can be freely adjusted.

Step S2, extracting key frames from the images read in step S1, performing target detection on the entire environment appearing in the visual range of each extracted key frame, and screening out targets.

Specifically, the target detection processes the image with Faster R-CNN, a target detection algorithm based on a deep convolutional neural network, whose network structure comprises two parts, an RPN and Fast R-CNN, where the RPN predicts candidate regions that may contain a target in the input image, and Fast R-CNN classifies the candidate regions predicted by the RPN.

For target detection, the data set chosen is a set of real photographs simulating the movement of a target in a lake. Before implementation, part of the target images in the collected data set are used as the training set and the remainder as the test set. The pre-trained model selected in this embodiment is ResNet50; the RPN is trained end to end in the training phase; the initial learning rate of the Faster R-CNN network is 0.0003, with 20000 iterations.

Specifically, the step of carrying out target detection through a target detection algorithm Faster R-CNN based on a deep convolutional neural network comprises the following steps:

1) Initializing the RPN network parameters with a pre-trained network model, and fine-tuning them by stochastic gradient descent and back-propagation. The purpose of fine-tuning the RPN parameters is to initialize the front convolutional-layer parameters so that the generated candidate regions are biased toward target features.

2) Initializing the Fast R-CNN target detection network parameters with a pre-trained network model, extracting candidate regions with the RPN from step 1), and training the target detection network.

3) Re-initializing and fine-tuning the RPN network parameters with the target detection network trained in step 2). This prunes candidate regions with low correlation to the target features and adjusts the positions of the candidate regions to minimize their deviation from the real target boxes, yielding a new RPN network.

4) Extracting candidate regions with the RPN from step 3), and fine-tuning the target detection network parameters trained in step 2) to strengthen the fit to the real target boxes.

5) And repeating the step 3) and the step 4) until the maximum iteration number is reached or the network converges.
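The alternating training scheme in steps 1)-5) can be sketched schematically as follows; the train_* helpers below are placeholders standing in for real RPN/detector training (an assumption for illustration), not an actual detection framework:

```python
# Schematic sketch of the 4-step alternating training scheme described above.
# train_rpn / train_detector are placeholder assumptions, not a real framework.

def train_rpn(init_params):
    """Fine-tune RPN parameters (placeholder for SGD + back-propagation)."""
    return {"rpn": init_params.get("backbone", "pretrained"), "step": "rpn"}

def train_detector(rpn_params, init_params):
    """Train the Fast R-CNN head on proposals from the given RPN (placeholder)."""
    return {"det": rpn_params["rpn"], "step": "det"}

def alternating_training(max_rounds=2, converged=lambda r: False):
    params = {"backbone": "pretrained"}        # init from a pre-trained model
    rpn = train_rpn(params)                    # step 1: fine-tune the RPN
    det = train_detector(rpn, params)          # step 2: train detector on RPN proposals
    rounds = 0
    while rounds < max_rounds and not converged(rounds):
        rpn = train_rpn({"backbone": det["det"]})  # step 3: re-init RPN from detector
        det = train_detector(rpn, params)          # step 4: fine-tune detector again
        rounds += 1                                # step 5: repeat until convergence
    return rpn, det, rounds
```

The loop mirrors the original scheme: RPN and detector are optimized in turn so that proposals and classification gradually share the same convolutional features.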

And step S3, based on the screened target and the actual application scene, performing two-dimensional bounding box framing and image segmentation to acquire target information in the current video frame.

To handle the case where the target is occluded by water columns and water mist, this embodiment provides an instance segmentation network for image segmentation, namely a convolutional neural network algorithm guided by regional covariance, which is used to segment salient targets occluded by water columns and water mist. Salient object detection is a line of research that simulates the human visual attention mechanism and takes the region of greatest interest to the human eye as the detection object. When a ship sails on a water surface with a uniform background, it is a salient target; after detection, the bounding box of the salient target is returned to obtain the target information.

The convolutional neural network algorithm guided by regional covariance comprises the following steps:

1) extracting low-level features of the key frame pixel by pixel;

2) constructing the region covariance based on the multi-dimensional feature vectors;

3) constructing a convolutional neural network model with the covariance matrices as training samples;

4) calculating the image saliency based on the local and global contrast principles;

5) framing the salient target and acquiring the target information.
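Step 2) above, constructing a region covariance from per-pixel feature vectors, can be sketched as a minimal routine (the feature values below are illustrative assumptions, not real image data):

```python
import numpy as np

def region_covariance(features):
    """Region covariance descriptor: `features` is an (N, d) array holding one
    d-dimensional low-level feature vector per pixel of the region (e.g.
    position, intensity, first-order gradients). Returns the d x d sample
    covariance matrix used as a training sample for the CNN."""
    mu = features.mean(axis=0)
    centered = features - mu
    return centered.T @ centered / (features.shape[0] - 1)

# Toy region: 4 pixels with 3 features each (illustrative values).
feats = np.array([[1.0, 2.0, 0.5],
                  [1.2, 1.8, 0.4],
                  [0.9, 2.1, 0.6],
                  [1.1, 2.0, 0.5]])
C = region_covariance(feats)   # 3 x 3 symmetric positive semi-definite matrix
```

A fixed-size d x d descriptor is produced regardless of the region's pixel count, which is why covariance matrices are convenient training samples here.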

Step S4, calculating the sensor coefficient corresponding to the target from the camera-to-target-bottom distance obtained by laser ranging and the target information obtained in step S3.

In this embodiment, the target position information acquired by the instance segmentation network is required; after a coordinate-system conversion, the sensor coefficient corresponding to the target is calculated in combination with the distance data acquired by laser ranging. The coordinate-system conversion comprises the conversion from the world coordinate system to the camera coordinate system, from the camera coordinate system to the image coordinate system, and from the image coordinate system to the pixel coordinate system.

Specifically, let the coordinates of a feature point of the target in the world coordinate system be (X_w, Y_w, Z_w). The camera coordinate system is also a three-dimensional rectangular coordinate system, defined as (X_C, Y_C, Z_C). Its origin is the optical center of the lens; the X and Y axes are parallel to the two sides of the image plane, and the Z axis is the optical axis of the lens, perpendicular to the image plane. The conversion from the world coordinate system to the camera coordinate system is:

[X_C, Y_C, Z_C]^T = R · [X_w, Y_w, Z_w]^T + t

where R is a 3 × 3 rotation matrix and t is a 3 × 1 translation vector.

The conversion from the camera coordinate system to the image coordinate system goes from 3D to 2D. Let the coordinates of the corresponding feature point in the image coordinate system be (x, y); the origin of the image coordinate system is the intersection of the camera's optical axis with the image plane. The conversion from the camera coordinate system to the image coordinate system is:

x = f · X_C / Z_C,  y = f · Y_C / Z_C

where f is the focal length of the camera.

The pixel coordinate system and the image coordinate system both lie on the imaging plane; they differ only in their origins and measurement units. The origin of the image coordinate system is the intersection of the camera's optical axis with the imaging plane, and the corresponding feature point has coordinates (u, v) in the pixel coordinate system. The conversion from the image coordinate system to the pixel coordinate system is:

u = x / d_x + u_0,  v = y / d_y + v_0

where d_x and d_y are the physical sizes of each pixel along the x and y axes of the image plane, respectively, and (u_0, v_0) are the coordinates of the origin of the image coordinate system in the pixel coordinate system.

In summary, the relationship between the world coordinates and the pixel coordinates of a feature point is:

Z_C · [u, v, 1]^T = [[f/d_x, 0, u_0], [0, f/d_y, v_0], [0, 0, 1]] · [R | t] · [X_w, Y_w, Z_w, 1]^T

According to this relationship between world coordinates and pixel coordinates, the ship position information obtained by the instance segmentation network is used to calculate the height of the ship, and the sensor coefficient corresponding to the target is calculated in combination with the distance data measured by the laser.
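The chain of coordinate conversions described above can be sketched as a short routine; all numeric parameters below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def world_to_pixel(Pw, R, t, f, dx, dy, u0, v0):
    """Chain the three conversions described above:
    world -> camera (rotation R, translation t),
    camera -> image plane (focal length f, perspective division),
    image -> pixel (pixel pitch dx, dy; principal point u0, v0)."""
    Xc, Yc, Zc = R @ Pw + t          # world -> camera coordinates
    x = f * Xc / Zc                  # camera -> image coordinates (metres)
    y = f * Yc / Zc
    u = x / dx + u0                  # image -> pixel coordinates
    v = y / dy + v0
    return u, v

# Illustrative parameters (assumed): camera axes aligned with the world axes,
# world origin 10 m in front of the camera, 8 mm lens, 4 um square pixels.
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])
u, v = world_to_pixel(np.array([1.0, -0.5, 0.0]), R, t,
                      f=0.008, dx=4e-6, dy=4e-6, u0=640.0, v0=360.0)
```

Inverting this chain for points known to lie on the water surface is what allows the pixel height of the target to be related to metric quantities when calibrating the sensor coefficient.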

Step S5, establishing a geometric-relation ranging model, as shown in fig. 2, and calculating the horizontal distance between the target and the monocular camera from this model, using the sensor coefficient corresponding to the target obtained in step S4.

Specifically, the monocular camera is fixed on the deck of the ship; the position of the measured target is A; d is the horizontal distance O'A from the target to the camera; H is the vertical distance OO' of the camera above the horizontal plane; pitch is the pitch angle at which the shipborne camera shoots the target; k is the sensor coefficient; i is the image length of the measured object on the photosensitive element (CCD or CMOS); h is the pixel height of the measured target in the image; and f is the focal length of the camera, with tan(δ/2) = L / (2f), where L is the diagonal length of the CMOS target surface (the camera used has a 1/2-inch CMOS, i.e. L = 8 mm) and δ is the field angle.

From fig. 2, equation 1 can be derived:

tan β = H / d    (equation 1)

It is known that α = pitch and β = α + γ;

then equation 2:

tan γ = i / f    (equation 2)

equation 3:

γ = arctan(i / f)    (equation 3)

equation 4:

β = pitch + arctan(i / f)    (equation 4)

equation 5:

d = H / tan β    (equation 5)

equation 6:

d = H / tan(pitch + arctan(i / f))    (equation 6)

In the above, i is the image length of the measured object on the photosensitive element (CCD or CMOS), whereas what we see is the image length on the display screen. There is a conversion relationship between the two, so a sensor coefficient k is introduced to represent this conversion between the object's image length on the display screen and its image length on the photosensitive element. Replacing i with k × h gives the horizontal distance between the shipborne camera and the target:

d = H / tan(pitch + arctan(k·h / f))    (1)
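The horizontal-distance computation can be sketched in code. The closed form d = H / tan(pitch + arctan(k·h / f)) used here is a reconstruction based on the standard geometric monocular-ranging derivation, and the numeric values are illustrative assumptions:

```python
import math

def horizontal_distance(H, f, pitch, h, k):
    """Horizontal distance d between the shipborne camera and the target:
    the on-sensor image length i = k * h subtends gamma = arctan(i / f)
    relative to the optical axis, and d = H / tan(pitch + gamma).
    (This form is a reconstruction, not verbatim from the original text.)"""
    gamma = math.atan(k * h / f)
    return H / math.tan(pitch + gamma)

# Illustrative values (assumed): camera 5 m above the water line, 8 mm focal
# length, 10-degree pitch, 120 px target height; k = 1.7e-6 is taken from the
# embodiment described later in the text.
d = horizontal_distance(H=5.0, f=0.008, pitch=math.radians(10.0), h=120, k=1.7e-6)
```

With these assumed inputs the result falls in the low-twenties of metres, the same order as the test distances reported for the lake experiment.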

Step S6, establishing a field-angle optimization model for field-angle optimization based on the horizontal distance between the target and the monocular camera obtained in step S5, and obtaining the distance between the target and the monocular camera.

Specifically, the field-angle optimization using the field-angle optimization model includes the following:

In an optical system, the radius of the imaging beam is limited by the aperture of the lens. The field stop of a camera lens is generally located inside the lens, and the field of view is circular, so the actual imaging plane lies close to the imaging plane of the pinhole imaging model, and the imaging sizes on the two planes are proportional.

According to the mathematical trigonometric relations, the relationship among the field angle δ, the field-plane distance h_1 and the ideal imaging focal length h_2 can be expressed in terms of the pitch angle α and the field-angle vertex distance coefficient μ.

Based on the above relation, the relationship between a fixed field angle and the focal length can be expanded to form a field angle-focal length plane; substituting this into the original formula tan(δ/2) = L / (2f) yields an equivalent focal length f*.

Substituting f* into formula (1) above, the horizontal distance between the target and the monocular camera is recalculated as d*, and the distance between the target and the monocular camera is then calculated using the following equation:

X = sqrt(d*^2 + H^2)
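The final step can be sketched as follows. Since the original formula body is not reproduced here, treating the target-camera distance as the hypotenuse formed by the optimized horizontal distance and the camera height is an assumption, and the input values are illustrative:

```python
import math

def target_distance(d_star, H):
    """Distance between target and camera from the optimized horizontal
    distance d_star and the camera height H. Taking the Euclidean hypotenuse
    is an assumed geometric reading; the original formula is not reproduced."""
    return math.sqrt(d_star ** 2 + H ** 2)

# d_star from the recomputed step S5 (illustrative value), camera height 5 m.
X = target_distance(d_star=24.7, H=5.0)
```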

referring to fig. 3, fig. 3 lists 4 graphs of (a), (b), (c) and (d), and it can be seen that the distance values at different positions obtained by using the simulated target motion test are 23.4m, 23.2m, 29.4m and 25.5m, respectively.

In the specific implementation of this embodiment, the sensor coefficient k obtained for the simulated target is 1.7e-6. The distance X between the target and the monocular camera is calculated and compared with the laser-ranging value to obtain the absolute error. The results are shown in fig. 4, which gives three sets of data at distances of 25.2 m, 28.1 m and 29.6 m; the absolute errors of all three distances are within 5% of the laser-ranging values, meeting the application requirements.

Since the installation position of the shipborne monocular camera is fixed, its height H is fixed and its parameters are fixed; the ranging error of the invention therefore does not vary with changes in the external environment.

The method first establishes a ship water-surface ranging model for the actual water-surface ranging scene of a ship carrying a monocular camera. Next, target detection and instance segmentation of the water-surface scene are performed through the shipborne camera to obtain target information. The sensor coefficient of the corresponding target is then calculated from the measured data, and the horizontal distance between the target and the shipborne monocular camera is computed from the sensor coefficient and the geometric-relation ranging model. Finally, the distance between the target and the shipborne camera is further calculated with the field-angle optimization model. Actual measurement results show that the method has high accuracy. During ranging, the pixel height of the target can be acquired from the target box in real time and the distance result displayed on the image in real time, so the method has good real-time performance. In addition, actual ship target ranging with this method requires neither the height nor the width of the measured object, only the intrinsic and extrinsic parameters of the camera; the ranging preparation and principle are simple and convenient, the generalization ability is strong, the ranging accuracy is high, and the influence of external factors is small.

The foregoing merely illustrates the principles and preferred embodiments of the invention and many variations and modifications may be made by those skilled in the art in light of the foregoing description, which are within the scope of the invention.
