Ellipse identification method based on deep learning

Document No. 1738165 · Published 2019-12-20

Reading note: this technique, "Ellipse identification method based on deep learning", was designed and created by 徐静, 陈恳, and 刘炽成 on 2019-08-09. Abstract: The invention provides an ellipse recognition method based on deep learning, belonging to the technical field of pattern recognition. In the deep learning training stage, images containing ellipses are first acquired and their edges labeled to build a training data set; a deep neural network is then built and iteratively trained on this data set to obtain a trained network. In the edge-tracking ellipse-recognition stage, the image to be recognized is input into the trained network, which outputs a predicted edge image; an edge tracking algorithm then identifies the ellipses in the edge image, completing the recognition. The invention uses a deep learning algorithm to predict image edges and an edge tracking algorithm to identify ellipses, thereby realizing ellipse recognition based on deep learning.

1. An ellipse recognition method based on deep learning, characterized by comprising two stages, deep learning training and edge-tracking ellipse recognition, and comprising the following steps:

1) a deep learning training stage; the method comprises the following specific steps:

1-1) acquiring M images containing ellipses as training input images, manually labeling all edge information contained in each training input image, and obtaining corresponding edge images as training target images after labeling is finished; each training input image and the corresponding training target image form a training sample, and all the training samples form a training data set;

1-2) constructing a deep neural network; the network has 14 layers of neurons: the first 7 layers are compression layers, each of which halves the size of its input image in turn; the last 7 layers are generation layers, each of which doubles the size of its input image in turn, so that the output image of the 7th generation layer has the same size as the network input; the network outputs 8 images in total; assuming the input image size is W × W, the 8 output images O_d0, O_d1, O_d2, O_d3, O_u3, O_u2, O_u1, O_u0 have sizes W × W, 0.5W × 0.5W, 0.25W × 0.25W, 0.125W × 0.125W, 0.125W × 0.125W, 0.25W × 0.25W, 0.5W × 0.5W, and W × W, respectively; here O_dj,i denotes the output of a compression layer and O_uj,i the output of a generation layer, where j = 0, 1, 2, 3 is the number of times the image has been halved relative to the original image size;

1-3) carrying out iterative training on the deep neural network by utilizing a training data set to obtain a trained deep neural network; the method comprises the following specific steps:

1-3-1) randomly initializing parameters of the deep neural network established in 1-2), and taking the initial deep neural network as a current neural network;

1-3-2) set i = 1;

1-3-3) predicting an output edge image corresponding to each training input image in the training data set by using the current deep neural network; record the i-th training input image as X_0,i; the 8 output edge images predicted by the deep neural network are O_dj,i and O_uj,i, j = 0, 1, 2, 3;

1-3-4) down-sample the training target image Y_0,i corresponding to the i-th training input image X_0,i by successive halving to obtain Y_j,i, j = 0, 1, 2, 3; if Y_0,i has size W × W, then Y_j,i has size 0.5^j W × 0.5^j W;

1-3-5) construct the multilayer error L_i from the output edge images O_dj,i, O_uj,i and the training target images Y_j,i; its expression is as follows:

1-3-6) update the parameters of the current deep neural network by minimizing L_i with a gradient descent algorithm, obtaining an updated current deep neural network;

1-3-7) let i = i + 1 and return to step 1-3-3), iteratively training the current deep neural network by minimizing L_i with an algorithm such as gradient descent until all training samples have been traversed once; then return to step 1-3-2) for the next round of iterative training, until the average error L_avg = (1/N) Σ_i L_i no longer decreases over 5 consecutive rounds of iterative training, where N is the number of training samples; take the deep neural network with the minimum L_avg as the trained deep neural network;

2) an ellipse is identified by edge tracking; the method comprises the following specific steps:

2-1) arbitrarily obtain an image X_a0 and input it into the deep neural network trained in step 1) to obtain the output image O_a0 of the 7th generation layer of the network;

2-2) perform morphological thinning on the image O_a0 obtained in step 2-1) to obtain an edge image with a width of 1 pixel;

2-3) carrying out edge tracking on the edge image with the width of 1 pixel in the step 2-2); the method comprises the following specific steps:

2-3-1) randomly select an unused pixel in the edge image obtained in step 2-2) as a seed pixel, where the seed pixel must contain no more than two 8-connected neighbors;

2-3-2) initializing a blank curve without any pixel as a current edge curve, selecting any 8-connected neighbor of the seed pixel as a boundary pixel, and adding the boundary pixel into the current edge curve, wherein all the pixels added into the current edge curve are used pixels;

2-3-3) determining boundary pixels:

if the boundary pixel contains more than two or fewer than two 8-connected neighbors, end the edge tracking in the direction of the line connecting the seed pixel and the boundary pixel, and go to step 2-3-5); if the boundary pixel contains exactly two 8-connected neighbors, take the one of the two that is not on the current edge curve as the new boundary pixel, add it to the current edge curve, and go to step 2-3-4);

2-3-4) repeating the step 2-3-3) until the edge tracking of the connecting line direction of the seed pixel and the boundary pixel is finished, and entering the step 2-3-5);

2-3-5) if the seed pixel has only one 8-connected neighbor, take the current edge curve whose edge tracking was completed in step 2-3-4) as the edge curve of the seed pixel and go to step 2-4); if the seed pixel has two 8-connected neighbors, select the other 8-connected neighbor of the seed pixel as a new boundary pixel, repeat steps 2-3-2) to 2-3-4) with a newly initialized current edge curve to complete edge tracking in the direction of the line connecting the seed pixel and the new boundary pixel, and finally splice the two current edge curves completed in steps 2-3-4) and 2-3-5) into the edge curve of the seed pixel;

2-4) randomly selecting a new unused pixel from the edge image in the step 2-2), and repeating the step 2-3) until all the unused pixels obtain corresponding edge curves;

2-5) scanning each edge curve obtained in the step 2-4), and disconnecting the edge curve at a position where the curvature of each edge curve exceeds a set curvature threshold;

2-6) deleting edge curves with straightness exceeding a set straightness threshold from the edge curves processed in the step 2-5);

2-7) for each end point of each edge curve processed in step 2-6), compute the edge curves that can connect to the end point to form a smooth curve, sort the computed edge curves by length in descending order, and take the longest one as the neighbor curve of the end point;

2-8) randomly take an edge curve from the edge curves processed in step 2-6); starting from either end point of this curve, splice on the neighbor curve of that end point, then continue splicing the neighbor curve of the end point not yet used for splicing, and so on until no further splicing is possible; store the selected edge curve and the curve group obtained after each splice, yielding the intermediate-process curve groups and the final stitching-result curve group for this edge curve;

2-9) repeating the steps 2-8) until obtaining an intermediate process curve group and a final splicing result curve group respectively corresponding to all the edge curves;

2-10) calculating the rotation angle of each curve group obtained in the step 2-9), wherein the calculation method is that the rotation angles are accumulated along the curve from one endpoint of the curve group to the other endpoint of the curve group, and the accumulated rotation angles are recorded as the rotation angles of the curve group;

screen all curve groups using their rotation angles: if the rotation angle of a curve group is less than 150 degrees or greater than 400 degrees, the curve group is an illegal combination and is deleted;

2-11) fit an ellipse to each curve group retained after the screening in step 2-10), compute the distance from the pixels of the curve group to the fitted ellipse, and judge: if the fraction of pixels whose distance to the fitted ellipse is less than one pixel falls below a set ratio threshold, delete the ellipse; otherwise keep it;

2-12) for each ellipse retained after step 2-11), compute the ratio of the number of pixels in the corresponding curve group whose distance to the ellipse is less than one pixel to the perimeter of the ellipse, and sort all the ellipses by this ratio in descending order;

2-13) examine the ellipses in the order sorted in step 2-12): if none of the edge curves used by the selected ellipse has been consumed, keep the ellipse and mark the edge curves it uses as consumed; otherwise delete the ellipse; after all ellipses have been examined, the ellipses finally retained are the ellipse recognition result for the image obtained in step 2-1).

Technical Field

The invention belongs to the technical field of pattern recognition, and particularly relates to an ellipse recognition method based on deep learning.

Background

Ellipse identification has become the basis of many computer vision applications, as the ellipse is one of the most common geometric figures. For example, ellipse recognition can be used in surface inspection, camera calibration, object differentiation, eye capture, road traffic sign recognition and classification, and other fields. It is therefore important to identify ellipses from images robustly and stably.

Existing ellipse recognition algorithms fall roughly into two types: those based on the Hough transform and those based on edge tracking. Hough-transform-based ellipse recognition is both inefficient and imprecise, making it difficult to use in an industrial production environment. Existing edge-tracking-based algorithms detect edges from the image gradient and then identify ellipses from those edges; for images taken in industrial environments, heavy gradient noise means edges cannot be detected well from the gradient alone. In addition, none of the existing edge-tracking-based algorithms can reliably recognize small ellipses (ellipses whose semi-major axis is smaller than 10 pixels in the image).

Disclosure of Invention

The invention aims to overcome the defects of the prior art and provides an ellipse recognition method based on deep learning. The invention uses a deep learning algorithm to predict edge images, which overcomes the problem of heavy gradient noise, and applies an edge tracking algorithm to the edge image to identify ellipses, with optimized recognition of small ellipses; it achieves high precision and high recall and has very high application value.

The invention provides an ellipse recognition method based on deep learning, which is characterized by comprising two stages of deep learning training and edge tracking ellipse recognition, and comprises the following steps:

1) a deep learning training stage; the method comprises the following specific steps:

1-1) acquiring M images containing ellipses as training input images, manually labeling all edge information contained in each training input image, and obtaining corresponding edge images as training target images after labeling is finished; each training input image and the corresponding training target image form a training sample, and all the training samples form a training data set;

1-2) constructing a deep neural network; the network has 14 layers of neurons: the first 7 layers are compression layers, each of which halves the size of its input image in turn; the last 7 layers are generation layers, each of which doubles the size of its input image in turn, so that the output image of the 7th generation layer has the same size as the network input; the network outputs 8 images in total; assuming the input image size is W × W, the 8 output images O_d0, O_d1, O_d2, O_d3, O_u3, O_u2, O_u1, O_u0 have sizes W × W, 0.5W × 0.5W, 0.25W × 0.25W, 0.125W × 0.125W, 0.125W × 0.125W, 0.25W × 0.25W, 0.5W × 0.5W, and W × W, respectively; here O_dj,i denotes the output of a compression layer and O_uj,i the output of a generation layer, where j = 0, 1, 2, 3 is the number of times the image has been halved relative to the original image size;

1-3) carrying out iterative training on the deep neural network by utilizing a training data set to obtain a trained deep neural network; the method comprises the following specific steps:

1-3-1) randomly initializing parameters of the deep neural network established in 1-2), and taking the initial deep neural network as a current neural network;

1-3-2) set i = 1;

1-3-3) predicting an output edge image corresponding to each training input image in the training data set by using the current deep neural network; record the i-th training input image as X_0,i; the 8 output edge images predicted by the deep neural network are O_dj,i and O_uj,i, j = 0, 1, 2, 3;

1-3-4) down-sample the training target image Y_0,i corresponding to the i-th training input image X_0,i by successive halving to obtain Y_j,i, j = 0, 1, 2, 3; if Y_0,i has size W × W, then Y_j,i has size 0.5^j W × 0.5^j W;

1-3-5) construct the multilayer error L_i from the output edge images O_dj,i, O_uj,i and the training target images Y_j,i; its expression is as follows:

1-3-6) update the parameters of the current deep neural network by minimizing L_i with a gradient descent algorithm, obtaining an updated current deep neural network;

1-3-7) let i = i + 1 and return to step 1-3-3), iteratively training the current deep neural network by minimizing L_i with an algorithm such as gradient descent until all training samples have been traversed once; then return to step 1-3-2) for the next round of iterative training, until the average error L_avg = (1/N) Σ_i L_i no longer decreases over 5 consecutive rounds of iterative training, where N is the number of training samples; take the deep neural network with the minimum L_avg as the trained deep neural network;

2) an ellipse is identified by edge tracking; the method comprises the following specific steps:

2-1) arbitrarily obtain an image X_a0 and input it into the deep neural network trained in step 1) to obtain the output image O_a0 of the 7th generation layer of the network;

2-2) perform morphological thinning on the image O_a0 obtained in step 2-1) to obtain an edge image with a width of 1 pixel;

2-3) carrying out edge tracking on the edge image with the width of 1 pixel in the step 2-2); the method comprises the following specific steps:

2-3-1) randomly select an unused pixel in the edge image obtained in step 2-2) as a seed pixel, where the seed pixel must contain no more than two 8-connected neighbors;

2-3-2) initializing a blank curve without any pixel as a current edge curve, selecting any 8-connected neighbor of the seed pixel as a boundary pixel, and adding the boundary pixel into the current edge curve, wherein all the pixels added into the current edge curve are used pixels;

2-3-3) determining boundary pixels:

if the boundary pixel contains more than two or fewer than two 8-connected neighbors, end the edge tracking in the direction of the line connecting the seed pixel and the boundary pixel, and go to step 2-3-5); if the boundary pixel contains exactly two 8-connected neighbors, take the one of the two that is not on the current edge curve as the new boundary pixel, add it to the current edge curve, and go to step 2-3-4);

2-3-4) repeating the step 2-3-3) until the edge tracking of the connecting line direction of the seed pixel and the boundary pixel is finished, and entering the step 2-3-5);

2-3-5) if the seed pixel has only one 8-connected neighbor, take the current edge curve whose edge tracking was completed in step 2-3-4) as the edge curve of the seed pixel and go to step 2-4); if the seed pixel has two 8-connected neighbors, select the other 8-connected neighbor of the seed pixel as a new boundary pixel, repeat steps 2-3-2) to 2-3-4) with a newly initialized current edge curve to complete edge tracking in the direction of the line connecting the seed pixel and the new boundary pixel, and finally splice the two current edge curves completed in steps 2-3-4) and 2-3-5) into the edge curve of the seed pixel;

2-4) randomly selecting a new unused pixel from the edge image in the step 2-2), and repeating the step 2-3) until all the unused pixels obtain corresponding edge curves;

2-5) scanning each edge curve obtained in the step 2-4), and disconnecting the edge curve at a position where the curvature of each edge curve exceeds a set curvature threshold;

2-6) deleting edge curves with straightness exceeding a set straightness threshold from the edge curves processed in the step 2-5);

2-7) for each end point of each edge curve processed in step 2-6), compute the edge curves that can connect to the end point to form a smooth curve, sort the computed edge curves by length in descending order, and take the longest one as the neighbor curve of the end point;

2-8) randomly take an edge curve from the edge curves processed in step 2-6); starting from either end point of this curve, splice on the neighbor curve of that end point, then continue splicing the neighbor curve of the end point not yet used for splicing, and so on until no further splicing is possible; store the selected edge curve and the curve group obtained after each splice, yielding the intermediate-process curve groups and the final stitching-result curve group for this edge curve;

2-9) repeating the steps 2-8) until obtaining an intermediate process curve group and a final splicing result curve group respectively corresponding to all the edge curves;

2-10) calculating the rotation angle of each curve group obtained in the step 2-9), wherein the calculation method is that the rotation angles are accumulated along the curve from one endpoint of the curve group to the other endpoint of the curve group, and the accumulated rotation angles are recorded as the rotation angles of the curve group;

screen all curve groups using their rotation angles: if the rotation angle of a curve group is less than 150 degrees or greater than 400 degrees, the curve group is an illegal combination and is deleted;

2-11) fit an ellipse to each curve group retained after the screening in step 2-10), compute the distance from the pixels of the curve group to the fitted ellipse, and judge: if the fraction of pixels whose distance to the fitted ellipse is less than one pixel falls below a set ratio threshold, delete the ellipse; otherwise keep it;

2-12) for each ellipse retained after step 2-11), compute the ratio of the number of pixels in the corresponding curve group whose distance to the ellipse is less than one pixel to the perimeter of the ellipse, and sort all the ellipses by this ratio in descending order;

2-13) examine the ellipses in the order sorted in step 2-12): if none of the edge curves used by the selected ellipse has been consumed, keep the ellipse and mark the edge curves it uses as consumed; otherwise delete the ellipse; after all ellipses have been examined, the ellipses finally retained are the ellipse recognition result for the image obtained in step 2-1).

The invention has the characteristics and beneficial effects that:

By training a deep neural network, the invention can recognize edge images in high-noise environments, and can then identify ellipses from the edge images with an edge tracking algorithm. The method is suitable for ellipse recognition in industrial environments, handles the high noise and the many small ellipses typical of industrial images, and achieves recognition results with high precision and recall, giving it high application value.

Drawings

FIG. 1 is an overall flow diagram of an embodiment of the present invention.

Fig. 2 is a schematic structural diagram of a deep neural network according to an embodiment of the present invention.

Detailed Description

The invention provides an ellipse recognition method based on deep learning, which is further described in detail below with reference to the accompanying drawings and specific embodiments.

The invention provides an ellipse recognition method based on deep learning, which is divided into two stages of deep learning training and edge tracking ellipse recognition, wherein the whole process is shown in figure 1 and comprises the following steps:

1) a deep learning training stage; the method comprises the following specific steps:

1-1) acquiring images containing ellipses as training input images, the more the better; in this embodiment, 81 images containing ellipses collected from the web are used as training input images, with sizes ranging from 200 to 1000 pixels. All edge information (not limited to ellipses) contained in each training input image is manually labeled, and the resulting edge image serves as the training target image; each training input image and its corresponding training target image form a training sample, and all training samples form the training data set;

1-2) constructing a deep neural network; the input of the deep neural network is an image containing an ellipse, and the output is the edge image corresponding to the input image. The network has 14 layers of neurons: the first 7 layers are compression layers, each of which halves the size of its input image in turn; the last 7 layers are generation layers, each of which doubles the size of its input image in turn, so that the output image of the 7th generation layer has the same size as the network input. Every layer except the merging layers (the layers represented by two overlapping rectangles in fig. 2) is connected to an output module, and the network outputs 8 images in total; assuming the input image size is W × W, the 8 output images O_d0, O_d1, O_d2, O_d3, O_u3, O_u2, O_u1, O_u0 have sizes W × W, 0.5W × 0.5W, 0.25W × 0.25W, 0.125W × 0.125W, 0.125W × 0.125W, 0.25W × 0.25W, 0.5W × 0.5W, and W × W, respectively;

fig. 2 shows the structure of the deep neural network according to an embodiment of the present invention, where 1 is the input image and 2 is the output image of each layer. Pool,/2 is a halving pooling layer, Conv is a convolution layer, Conv,/2 is a halving convolution layer, Trans,×2 is a doubling generation layer, and two blocks connected by Same represent the same intermediate variable. X_0,i is the i-th input image; X_1,i, X_2,i, X_3,i are X_0,i reduced to one half, one quarter, and one eighth of its size; Z_dj,i are intermediate variables of the compression layers and Z_uj,i intermediate variables of the generation layers; O_dj,i is the output of a compression layer and O_uj,i the output of a generation layer, where j = 0, 1, 2, 3 is the number of times the image has been halved relative to the original input size;
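The halving/doubling bookkeeping of the output sizes above can be checked with a short sketch; the function name and the shape-only view are illustrative, not part of the patent:

```python
def output_sizes(w: int) -> list[int]:
    """Side lengths of the 8 outputs O_d0..O_d3, O_u3..O_u0 for a W x W
    input: the compression half halves the size three times, and the
    generation half doubles it back to the input size."""
    down = [w // 2 ** j for j in range(4)]   # O_d0, O_d1, O_d2, O_d3
    up = list(reversed(down))                # O_u3, O_u2, O_u1, O_u0
    return down + up

print(output_sizes(256))  # -> [256, 128, 64, 32, 32, 64, 128, 256]
```

The symmetry of the list mirrors the W, 0.5W, 0.25W, 0.125W, 0.125W, 0.25W, 0.5W, W pattern stated in step 1-2).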

1-3) carrying out iterative training on the deep neural network by utilizing a training data set to obtain a trained deep neural network; the method comprises the following specific steps:

1-3-1) randomly initialize the parameters of the deep neural network established in 1-2), or use an established initialization algorithm; take the initial deep neural network as the current neural network;

1-3-2) set i = 1;

1-3-3) predicting an output edge image corresponding to each training input image in the training data set by using the current deep neural network, where each training input image corresponds to 8 output edge images of different sizes. Record the i-th training input image as X_0,i; the 8 output edge images predicted by the deep neural network are O_dj,i and O_uj,i (j = 0, 1, 2, 3), where an image with subscript j has 0.5^j times the original image size;

1-3-4) down-sample the training target image Y_0,i corresponding to the i-th training input image X_0,i by successive halving to obtain Y_j,i (j = 1, 2, 3), where j is the number of halving down-samples; if Y_0,i has size W × W, then Y_j,i has size 0.5^j W × 0.5^j W;
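A minimal sketch of the successive halving down-sampling of a binary target edge image; using 2×2 max-pooling (rather than averaging) so that 1-pixel-wide edges survive the halving is an assumption of this sketch, not something the patent specifies:

```python
import numpy as np

def halve_edges(y: np.ndarray) -> np.ndarray:
    """Downsample a binary edge image by 2 using 2x2 max-pooling,
    so a thin edge is not averaged away (max vs. mean is an assumption)."""
    h, w = y.shape
    y = y[:h - h % 2, :w - w % 2]                 # crop to even size
    return y.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

y0 = np.zeros((8, 8), dtype=np.uint8)
y0[3, :] = 1                                      # a horizontal edge
pyramid = [y0]                                    # Y_0
for _ in range(3):                                # Y_1, Y_2, Y_3
    pyramid.append(halve_edges(pyramid[-1]))
print([p.shape for p in pyramid])  # [(8, 8), (4, 4), (2, 2), (1, 1)]
```

Each level has 0.5^j W sides, matching the sizes stated in step 1-3-4).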

1-3-5) construct the multilayer error L_i from the output edge images O_dj,i, O_uj,i and the training target images Y_j,i; its expression is as follows:

1-3-6) update the parameters of the current deep neural network by minimizing L_i with a gradient descent algorithm, obtaining an updated current deep neural network; batch updating may be configured according to available memory, and in this embodiment the parameters are updated once per image;

1-3-7) let i = i + 1 and return to step 1-3-3), iteratively training the current deep neural network by minimizing L_i with an algorithm such as gradient descent until all training samples have been traversed once; then return to step 1-3-2) for the next round of iterative training, until the average error L_avg = (1/N) Σ_i L_i (where N is the number of training samples in the training data set) no longer decreases over 5 consecutive rounds of iterative training; select the deep neural network with the minimum L_avg as the trained deep neural network;
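The stopping rule of step 1-3-7), stop once L_avg has not improved for 5 consecutive rounds and keep the best network, can be sketched as follows; `run_epoch` is a hypothetical stand-in for one full pass over the N training samples:

```python
def train_until_plateau(run_epoch, patience=5, max_epochs=1000):
    """Run epochs until the average error L_avg has not improved for
    `patience` consecutive epochs; return the best (epoch, L_avg)."""
    best = (None, float("inf"))
    stale = 0
    for epoch in range(max_epochs):
        l_avg = run_epoch(epoch)          # (1/N) * sum_i L_i for this round
        if l_avg < best[1]:
            best, stale = (epoch, l_avg), 0
        else:
            stale += 1
            if stale >= patience:         # 5 rounds without improvement
                break
    return best

# toy error curve: decreases, then plateaus
errors = [1.0, 0.6, 0.4, 0.35, 0.35, 0.36, 0.35, 0.36, 0.37, 0.35, 0.4]
print(train_until_plateau(lambda e: errors[e]))  # -> (3, 0.35)
```

In practice `run_epoch` would perform the per-image gradient updates of step 1-3-6) and the best network's parameters would be checkpointed alongside the best score.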

2) an ellipse is identified by edge tracking; the method comprises the following specific steps:

2-1) arbitrarily obtain an image X_a0 and input it into the deep neural network trained in step 1) to obtain the output image O_a0 of the 7th generation layer of the network; the image X_a0 may be obtained by any means;

2-2) perform morphological thinning on the image O_a0 obtained in step 2-1) to obtain an edge image with a width of 1 pixel;

2-3) carrying out edge tracking on the edge image with the width of 1 pixel in the step 2-2); the method comprises the following specific steps:

2-3-1) randomly select an unused pixel in the edge image obtained in step 2-2) as a seed pixel (in the initial state, all pixels of the edge image are unused), where the seed pixel must contain no more than two 8-connected neighbors; an 8-connected neighbor is a pixel sharing an edge or a corner with the pixel;

2-3-2) initialize a blank curve without any pixel as the current edge curve, select any 8-connected neighbor of the seed pixel as the boundary pixel, and add the boundary pixel to the current edge curve (a curve is composed of pixels; adding means placing the pixel in the curve, i.e. lengthening the curve by one pixel); all pixels added to the current edge curve are considered used;

2-3-3) determining boundary pixels:

if the boundary pixel contains more than two or fewer than two 8-connected neighbors, end the edge tracking in the direction of the line connecting the seed pixel and the boundary pixel, and go to step 2-3-5); if the boundary pixel contains exactly two 8-connected neighbors, take the one of the two that is not on the current edge curve as the new boundary pixel, add it to the current edge curve, and go to step 2-3-4);

2-3-4) repeating the step 2-3-3) until the edge tracking of the connecting line direction of the seed pixel and the boundary pixel is finished, and entering the step 2-3-5);

2-3-5) if the seed pixel has only one 8-connected neighbor, take the current edge curve whose edge tracking was completed in step 2-3-4) as the edge curve of the seed pixel and go to step 2-4); if the seed pixel has two 8-connected neighbors, select the other 8-connected neighbor of the seed pixel as a new boundary pixel, repeat steps 2-3-2) to 2-3-4) with a newly initialized current edge curve to complete edge tracking in the direction of the line connecting the seed pixel and the new boundary pixel, and finally splice the two current edge curves completed in steps 2-3-4) and 2-3-5) into the edge curve of the seed pixel;
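The tracking loop of steps 2-3-1) to 2-3-5) can be sketched as below; the set-of-pixels representation and helper names are illustrative, and closed contours are handled only by a simple loop guard rather than by anything the patent prescribes:

```python
NBRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def neighbors(p, pixels):
    """8-connected neighbors of p present in the edge-pixel set."""
    return [(p[0] + dy, p[1] + dx) for dy, dx in NBRS
            if (p[0] + dy, p[1] + dx) in pixels]

def trace_from_seed(seed, pixels):
    """Walk away from the seed in each direction, following pixels that
    have exactly two 8-neighbors; stop at junctions and end points."""
    def walk(start):
        curve = [start]
        while True:
            nbs = neighbors(curve[-1], pixels)
            if len(nbs) != 2:                 # junction or end point: stop
                return curve
            prev = curve[-2] if len(curve) > 1 else seed
            nxt = nbs[0] if nbs[1] == prev else nbs[1]
            if nxt in curve or nxt == seed:   # guard against closed loops
                return curve
            curve.append(nxt)

    nbs = neighbors(seed, pixels)
    assert len(nbs) <= 2, "seed must have at most two 8-neighbors"
    if len(nbs) == 1:
        return [seed] + walk(nbs[0])
    left, right = walk(nbs[0]), walk(nbs[1])
    return list(reversed(left)) + [seed] + right  # splice the two halves

# a 5-pixel diagonal stroke as (row, col) coordinates
pixels = {(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)}
curve = trace_from_seed((0, 0), pixels)
print(curve)  # -> [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```

The two-direction case of step 2-3-5) corresponds to the `left`/`right` walks being spliced around the seed.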

2-4) randomly selecting a new unused pixel from the edge image in the step 2-2), and repeating the step 2-3) until all the unused pixels obtain corresponding edge curves, so as to obtain a series of edge curves;

2-5) scan each edge curve obtained in step 2-4) and break it wherever its curvature exceeds a set curvature threshold, yielding several edge curves; the specific curvature threshold is chosen according to the application scenario (in this embodiment, twice the average curvature);

2-6) from the edge curves processed in step 2-5), delete those whose straightness exceeds a set straightness threshold; the straightness threshold is adjusted according to the specific application scenario (0.9 in this embodiment);
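One common way to make the straightness test of step 2-6) concrete is the chord-to-arc-length ratio, which is 1.0 for a straight segment and smaller for bent curves; the patent does not define the measure, so this particular choice is an assumption of the sketch:

```python
import math

def straightness(curve):
    """Chord length over arc length: 1.0 for a straight polyline,
    smaller the more the curve bends (this measure is an assumption)."""
    arc = sum(math.dist(a, b) for a, b in zip(curve, curve[1:]))
    return math.dist(curve[0], curve[-1]) / arc if arc else 1.0

line = [(0, 0), (1, 0), (2, 0), (3, 0)]   # straight: filtered out
arc  = [(0, 0), (1, 1), (2, 1), (3, 0)]   # shallow bend: kept
keep = [c for c in (line, arc) if straightness(c) <= 0.9]
print(len(keep))  # -> 1
```

With the embodiment's threshold of 0.9, nearly straight curves (which cannot be arcs of an ellipse) are discarded.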

2-7) for each end point of each edge curve processed in step 2-6), compute the edge curves that can connect to the end point to form a smooth curve, sort the computed edge curves by length in descending order, and take the longest one as the neighbor curve of the end point;

2-8) randomly take an edge curve from the edge curves processed in step 2-6); starting from either end point of this curve, splice on the neighbor curve of that end point, then continue splicing the neighbor curve of the end point not yet used for splicing, and so on until no further splicing is possible; store the selected edge curve and the curve group obtained after each splice, yielding the intermediate-process curve groups and the final stitching-result curve group for this edge curve. For example, if the final stitching result of a certain edge curve is the curve group [P1, P2, P3], then the intermediate groups [P1] and [P1, P2] are also kept, where P1 is the randomly selected edge curve, P2 is the neighbor curve of one of P1's end points, and P3 is the neighbor curve of the end point of P2 not used for the previous splice.

2-9) repeat step 2-8) until every edge curve has its corresponding intermediate-process curve groups and final stitching-result curve group, yielding a series of curve groups;

2-10) calculate the rotation angle of each curve group obtained in step 2-9): accumulate the rotation angles along the curve from one end point of the curve group to the other, and record the accumulated value as the rotation angle of the curve group;

screen all curve groups using their rotation angles: if the rotation angle of a curve group is less than 150 degrees or greater than 400 degrees, the curve group is considered an illegal combination and is deleted;
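The rotation angle of step 2-10) is the turning accumulated along the curve; a sketch over a polyline of sample points, with the 150 to 400 degree screen applied afterwards (the absolute-turning convention is an assumption of this sketch):

```python
import math

def rotation_angle(curve):
    """Accumulated absolute turning along a polyline, in degrees."""
    total = 0.0
    for a, b, c in zip(curve, curve[1:], curve[2:]):
        h1 = math.atan2(b[1] - a[1], b[0] - a[0])
        h2 = math.atan2(c[1] - b[1], c[0] - b[0])
        d = (h2 - h1 + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        total += abs(d)
    return math.degrees(total)

# a regular 12-gon traced once around turns ~360 degrees
poly = [(math.cos(2 * math.pi * k / 12), math.sin(2 * math.pi * k / 12))
        for k in range(14)]
angle = rotation_angle(poly)
legal = 150 <= angle <= 400
print(round(angle), legal)  # -> 360 True
```

A short, nearly straight arc would accumulate well under 150 degrees and be deleted, while a curve group that wraps around more than 400 degrees cannot be a single ellipse boundary.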

2-11) fit an ellipse to each curve group retained after the screening in step 2-10), and compute the distance from each pixel of the curve group to the fitted ellipse: if the fraction of pixels whose distance to the fitted ellipse is less than one pixel falls below a set ratio threshold, delete the ellipse; otherwise keep it (the specific threshold is determined by the actual application scenario; in this embodiment the ratio threshold is 0.5);
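The inlier test of step 2-11) can be sketched as follows; the point-to-ellipse distance is approximated by the nearest of many sampled boundary points, and the axis-aligned parametrization (center cx, cy, semi-axes a, b) is a simplifying assumption, since the patent's fitting step would produce a general rotated ellipse:

```python
import numpy as np

def inlier_ratio(points, cx, cy, a, b, n_samples=720):
    """Fraction of pixels within one pixel of an axis-aligned ellipse;
    distance approximated via n_samples boundary points (an assumption)."""
    t = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    boundary = np.stack([cx + a * np.cos(t), cy + b * np.sin(t)], axis=1)
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - boundary[None, :, :], axis=2).min(axis=1)
    return float((d < 1.0).mean())

# pixels on a circle of radius 10 plus two off-curve outliers
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pix = list(zip(10 * np.cos(t), 10 * np.sin(t))) + [(0, 0), (30, 30)]
r = inlier_ratio(pix, 0, 0, 10, 10)
print(r > 0.5)  # ratio is 40/42, so this ellipse passes the 0.5 threshold
```

With the embodiment's threshold of 0.5, a curve group whose pixels mostly hug the fitted ellipse is kept and the rest are discarded.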

2-12) for each ellipse retained after step 2-11), compute the ratio of the number of pixels in the corresponding curve group whose distance to the ellipse is less than one pixel to the perimeter of the ellipse, and sort all the ellipses by this ratio in descending order;

2-13) examine the ellipses in the order sorted in step 2-12): if none of the edge curves used by the selected ellipse has been consumed, keep the ellipse and mark the edge curves it uses as consumed; otherwise delete the ellipse (i.e. if the edge curves fitting the ellipse have been consumed in whole or in part, the ellipse is deleted); after all ellipses have been examined, the ellipses finally retained are the ellipse recognition result for the image obtained in step 2-1).
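The greedy pass of step 2-13) keeps an ellipse only while its supporting edge curves are still unconsumed; a sketch with hypothetical (ratio, curve-id) candidates, already sorted by ratio as in step 2-12):

```python
def select_ellipses(candidates):
    """Greedy selection: candidates are (ratio, curve_ids) pairs sorted by
    ratio descending; an ellipse is kept only if none of its edge curves
    was consumed by a previously kept ellipse."""
    consumed, kept = set(), []
    for ratio, curves in candidates:
        if consumed.isdisjoint(curves):   # consumed in whole or part -> drop
            kept.append((ratio, curves))
            consumed.update(curves)
    return kept

cands = [(0.95, {1, 2}), (0.90, {2, 3}), (0.80, {4})]
print(select_ellipses(cands))  # -> [(0.95, {1, 2}), (0.8, {4})]
```

The second candidate is dropped because it shares curve 2 with the better-supported first ellipse, which matches the "consumed in whole or in part" rule above.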

The invention utilizes the deep neural network to identify the edge image and utilizes the edge tracking algorithm to identify the ellipse, thereby realizing the ellipse identification based on deep learning.
