Vehicle fire identification method combining YOLOv3 and optical flow method

Document No.: 1891309   Publication date: 2021-11-26

Note: this technique, "A vehicle fire identification method combining YOLOv3 and an optical flow method", was designed and created by Wu Minsi on 2021-08-31. Its main content is as follows: the invention discloses a vehicle fire identification method combining YOLOv3 and an optical flow method, comprising: creating a training sample set that includes a plurality of common targets; training a weight model by configuring the YOLOv3 network model parameters, feeding in the training sample set, and training the YOLOv3 network model offline; performing flame detection on single-frame images by collecting real-time vehicle images, inputting them into the weight model, outputting detection results, and judging whether real flames and interference factors are present; and performing fire identification on the continuous image sequence by computing the optical-flow information of the flame region with the optical flow method, gathering the overall flame movement direction in the suspected region, and judging the authenticity of the fire. The invention addresses the problems of low accuracy and large error in vehicle fire identification.

1. A vehicle fire identification method combining YOLOv3 and an optical flow method, characterized in that the method comprises the following steps:

creating a training sample set such that the sample set includes a plurality of common targets;

training a weight model: configuring the YOLOv3 network model parameters, feeding in the training sample set, training the YOLOv3 network model offline, and generating the weight model;

performing flame detection on single-frame images: collecting real-time vehicle images, inputting them into the weight model, outputting detection results, and judging whether real flames and interference factors are present;

and performing fire identification on the continuous image sequence: computing the optical-flow information of the flame region by the optical flow method, gathering the overall flame movement direction within the suspected region, and judging the authenticity of the fire.

2. The vehicle fire identification method combining YOLOv3 and an optical flow method as claimed in claim 1, wherein the training sample set comprises three types of targets: real flame samples, display samples, and light source samples;

the real flame samples comprise flame images acquired under different lighting, angles, and fire sources in the actual application environment, together with a public network sample set, from which the flame samples are screened;

since display devices are commonly installed in vehicles, playing a fire-safety promotional video is liable to cause false alarms, so samples of real flames shown on a display are collected;

the light source samples cover a plurality of light source types in and around the vehicle, including vehicle lamp light sources, interior lighting light sources, and decorative lamp light sources.

3. The vehicle fire identification method combining YOLOv3 and an optical flow method as claimed in claim 2, wherein the real flame samples, display samples, and light source samples are annotated with the labelImg software to generate annotation files.

4. The vehicle fire identification method combining YOLOv3 and an optical flow method as claimed in claim 1, wherein the method for training the weight model comprises:

obtaining prior boxes at 9 scales by applying a k-means clustering algorithm to the training sample set, so that large, medium, and small targets of different scales can be detected;

preparing for offline training: configuring the YOLOv3 network model parameters and adjusting the number of training classes, the prior boxes, and the number of iterations;

and starting offline training of the YOLOv3 network model with the annotation files and the training sample set to generate the weight model.

5. The vehicle fire identification method combining YOLOv3 and an optical flow method as claimed in claim 4, wherein the weight model is iterated continuously until the error loss converges to a minimum, thereby obtaining an optimal weight model.

6. The vehicle fire identification method combining YOLOv3 and an optical flow method as claimed in claim 1, wherein the flame detection method for a single-frame image comprises the following steps:

acquiring real-time vehicle images through a camera;

inputting the collected real-time images into the trained weight model;

the weight model judging whether real flame, flame shown on a display, or a light source is present in the single-frame image;

and eliminating interference factors and outputting the detection and analysis result.

7. The vehicle fire identification method combining YOLOv3 and an optical flow method as claimed in claim 1, wherein the fire identification method for the continuous image sequence comprises:

accumulating consecutive images in which flame has been detected over multiple frames;

calculating the optical flow field of the input images, then applying threshold segmentation to the optical flow field, performing morphological denoising, and extracting connected components;

gathering optical-flow statistics of the flame region, including movement direction and movement speed;

and judging whether a fire has occurred, given that under the influence of the on-board air conditioning the flame movement direction tends toward a common direction and the flame movement amplitude remains small over a short time.

Technical Field

The invention relates to the technical field of image recognition, and in particular to a vehicle fire identification method combining YOLOv3 and an optical flow method.

Background

In recent years, fire accidents have occurred frequently in railway vehicles, causing serious casualties and economic losses, so early fire warning is extremely important. The fire detectors in wide use today are mainly traditional heat detectors, smoke detectors, and the like. A heat detector senses the ambient temperature through a temperature-sensitive element and judges the occurrence of a fire with a single threshold or several thresholds. A smoke detector measures the concentration of smoke particles in the environment and signals a fire alarm when the concentration exceeds a certain threshold. The algorithms of these detectors are relatively simple, and they suffer from limitations or false alarms in application.

Images reflect real information almost losslessly: a fire can be recognized at an early stage from video images, and corresponding measures can be taken in time to protect people and property. YOLOv3 is a target detection framework in the deep learning field that has attracted attention in target detection applications for its multi-scale feature detection, high detection accuracy, and high speed. Rail vehicles are characterized by dense passenger flow and small spaces; traditional moving-target detection methods are disturbed by passenger flow, cannot accurately extract dynamic targets when a fire occurs, and greatly increase the computational cost of feature extraction. The YOLOv3 network framework achieves target recognition in a single pass and can extract both low-level features (color, texture, etc.) and deep features, giving good detection accuracy. However, in a rail vehicle the detection result of YOLOv3 may be affected by factors such as light-source interference and fire-safety promotional videos, reducing its accuracy.

Disclosure of Invention

Therefore, the invention provides a vehicle fire identification method combining YOLOv3 and an optical flow method, aiming to solve the problems of low accuracy and large error in vehicle fire identification.

To achieve the above purpose, the invention provides the following technical scheme:

The invention discloses a vehicle fire identification method combining YOLOv3 and an optical flow method, which comprises the following steps:

creating a training sample set such that the sample set includes a plurality of common targets;

training a weight model: configuring the YOLOv3 network model parameters, feeding in the training sample set, training the YOLOv3 network model offline, and generating the weight model;

performing flame detection on single-frame images: collecting real-time vehicle images, inputting them into the weight model, outputting detection results, and judging whether real flames and interference factors are present;

and performing fire identification on the continuous image sequence: computing the optical-flow information of the flame region by the optical flow method, gathering the overall flame movement direction within the suspected region, and judging the authenticity of the fire.

Further, three types of targets are included in the training sample set: real flame samples, display samples, and light source samples;

the real flame samples comprise flame images acquired under different lighting, angles, and fire sources in the actual application environment, together with a public network sample set, from which the flame samples are screened;

since display devices are commonly installed in vehicles, playing a fire-safety promotional video is liable to cause false alarms, so samples of real flames shown on a display are collected;

the light source samples cover a plurality of light source types in and around the vehicle, including vehicle lamp light sources, interior lighting light sources, and decorative lamp light sources.

Further, the real flame samples, display samples, and light source samples are annotated with the labelImg software, generating annotation files.

Further, the method for training the weight model comprises the following steps:

obtaining prior boxes at 9 scales by applying a k-means clustering algorithm to the training sample set, so that large, medium, and small targets of different scales can be detected;

preparing for offline training: configuring the YOLOv3 network model parameters and adjusting the number of training classes, the prior boxes, and the number of iterations;

and starting offline training of the YOLOv3 network model with the annotation files and the training sample set to generate the weight model.
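The anchor-clustering step above can be sketched as follows. This is a minimal illustration, not the patented procedure itself: it clusters the annotated box sizes (w, h) with plain k-means using Euclidean distance, whereas YOLOv3's original recipe clusters with a 1 − IoU distance; the synthetic usage below and all parameter values are hypothetical.

```python
import random

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster (w, h) box sizes from the training set into k prior boxes.

    Minimal sketch: plain k-means with Euclidean distance on (w, h).
    YOLOv3's published recipe uses a 1 - IoU distance instead.
    """
    rng = random.Random(seed)
    centers = rng.sample(boxes, k)
    for _ in range(iters):
        # assign every box to its nearest center
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            j = min(range(k),
                    key=lambda c: (w - centers[c][0]) ** 2 + (h - centers[c][1]) ** 2)
            clusters[j].append((w, h))
        # recompute centers as cluster means; keep old center if cluster emptied
        new_centers = []
        for j in range(k):
            if clusters[j]:
                ws = sum(w for w, _ in clusters[j]) / len(clusters[j])
                hs = sum(h for _, h in clusters[j]) / len(clusters[j])
                new_centers.append((ws, hs))
            else:
                new_centers.append(centers[j])
        if new_centers == centers:
            break
        centers = new_centers
    # sort by area: small anchors serve the fine 52x52 scale, large the 13x13
    return sorted(centers, key=lambda c: c[0] * c[1])
```

With 9 clusters the sorted anchors are assigned three per detection scale, smallest to the finest grid.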

Further, the weight model is iterated continuously until the error loss converges to a minimum, thereby obtaining the optimal weight model.

Further, the single-frame image flame detection method comprises the following steps:

acquiring real-time vehicle images through a camera;

inputting the collected real-time images into the trained weight model;

the weight model judging whether real flame, flame shown on a display, or a light source is present in the single-frame image;

and eliminating interference factors and outputting the detection and analysis result.

Further, the method for fire identification of the continuous image sequence comprises the following steps:

accumulating consecutive images in which flame has been detected over multiple frames;

calculating the optical flow field of the input images, then applying threshold segmentation to the optical flow field, performing morphological denoising, and extracting connected components;

gathering optical-flow statistics of the flame region, including movement direction and movement speed;

and judging whether a fire has occurred, given that under the influence of the on-board air conditioning the flame movement direction tends toward a common direction and the flame movement amplitude remains small over a short time.

The invention has the following advantages:

The invention discloses a vehicle fire identification method combining YOLOv3 and an optical flow method. For the specific scenario of a vehicle, three types of training samples are designed: training on the two classes of real flame samples and light source samples suppresses overfitting to flame, and the display flame target reduces the false alarm rate. Deep and shallow static flame features are extracted by the YOLOv3 model, the consecutive images in which flame has been detected over multiple frames are analyzed by the optical flow method, and the dynamic, time-varying characteristics of flame are combined for comprehensive analysis and fire recognition. Flame identification accuracy is thereby improved, and whether a real fire has occurred is judged accurately.

Drawings

To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It should be apparent that the drawings in the following description are merely exemplary, and other drawings can be derived from the provided drawings by those of ordinary skill in the art without inventive effort.

The structures, proportions, and sizes shown in this specification are used only to complement the contents disclosed in the specification for the understanding of those skilled in the art; they do not limit the conditions under which the invention can be implemented and thus carry no substantive technical significance. Any structural modification, change of proportion, or adjustment of size that does not affect the effects achievable by the invention shall still fall within the scope covered by the technical contents disclosed herein.

FIG. 1 is a flow chart of a method for identifying a fire in a vehicle, which combines YOLOv3 and an optical flow method according to an embodiment of the present invention;

FIG. 2 is an original test image for the vehicle fire identification method combining YOLOv3 and an optical flow method according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of a flame recognition result of the vehicle fire identification method combining YOLOv3 and an optical flow method according to an embodiment of the present invention.

Detailed Description

The present invention is described below through particular embodiments; other advantages and effects of the invention will become readily apparent to those skilled in the art from this disclosure. It should be understood that the described embodiments are merely a part of the embodiments of the invention and are not intended to limit the invention to those particular embodiments. All other embodiments obtained by a person skilled in the art from the embodiments herein without creative effort shall fall within the protection scope of the present invention.

Examples

The embodiment discloses a vehicle fire identification method combining YOLOv3 and an optical flow method, which comprises the following steps:

creating a training sample set such that the sample set includes a plurality of common targets;

training a weight model: configuring the YOLOv3 network model parameters, feeding in the training sample set, training the YOLOv3 network model offline, and generating the weight model;

performing flame detection on single-frame images: collecting real-time vehicle images, inputting them into the weight model, outputting detection results, and judging whether real flames and interference factors are present;

and performing fire identification on the continuous image sequence: computing the optical-flow information of the flame region by the optical flow method, gathering the overall flame movement direction within the suspected region, and judging the authenticity of the fire.

Three types of targets are included in the training sample set: real flame samples, display samples, and light source samples;

the real flame samples comprise flame images acquired under different lighting, angles, and fire sources in the actual application environment, together with a public network sample set, from which the flame samples are screened;

since display devices are commonly installed in vehicles, playing a fire-safety promotional video is liable to cause false alarms, so samples of real flames shown on a display are collected;

the light source samples cover a plurality of light source types in and around the vehicle, including vehicle lamp light sources, interior lighting light sources, and decorative lamp light sources.

Three types of training image sample sets are made: fire, tvmonitor, and light, all RGB images of arbitrary size. The tvmonitor and light categories are used to eliminate interference and to reduce the model's false alarm rate and overfitting.

The method for training the weight model comprises the following steps:

Set the YOLOv3 network model parameters: the number of sample classes K = 3, the number of iterations 10000, and the image resolution normalized to 416 × 416 × 3.

(1) An image A is fed into the YOLOv3 network model for feature extraction, yielding feature maps at 3 scales of S × S grids, where S is 13, 26, and 52 respectively.

(2) The feature maps are transformed into tensors of dimensions S × S × [B × (5 + K)], where B denotes the number of candidate prediction boxes per grid cell, 5 covers the predicted center coordinates and width/height offsets (x, y, w, h) together with the prediction-box confidence, and K covers the prediction probabilities over the K classes.
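As a worked check of these head dimensions, the following sketch computes the output tensor shape for each scale; B = 3 candidate boxes per grid cell is an assumption (YOLOv3's customary setting; the text does not fix B), and K = 3 matches this embodiment's three classes.

```python
def yolo_output_shape(S, B=3, K=3):
    """Shape of one YOLOv3 detection head: an S x S grid with B boxes per
    cell, each box predicting (x, y, w, h), a confidence, and K class
    probabilities -- hence B * (5 + K) channels."""
    return (S, S, B * (5 + K))

# the three scales used in this embodiment
shapes = [yolo_output_shape(S) for S in (13, 26, 52)]
```

With B = 3 and K = 3 each head carries 3 × (5 + 3) = 24 channels per grid cell.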

(3) Calculate the loss function $Loss$ of the YOLOv3 network model. It is the weighted sum of three parts: the coordinate error of the prediction-box centers and widths/heights ($loss_{coordinate}$), the confidence error ($loss_{confidence}$), and the classification error ($loss_{classes}$), with $\lambda_{coordinate}$ the coordinate weight coefficient:

$$Loss = \lambda_{coordinate} \cdot loss_{coordinate} + loss_{confidence} + loss_{classes}$$

The coordinate error, comprising the center error and the width-height error, is expressed as a Euclidean distance:

$$loss_{coordinate} = \sum_{i=1}^{S^2} \sum_{j=1}^{B} I_{ij}^{obj} \left[ (x_{ij} - \hat{x}_{ij})^2 + (y_{ij} - \hat{y}_{ij})^2 + (w_{ij} - \hat{w}_{ij})^2 + (h_{ij} - \hat{h}_{ij})^2 \right]$$

where $I_{ij}^{obj}$ indicates whether the $j$-th candidate box in the $i$-th grid cell is responsible for the target: $I_{ij}^{obj} = 1$ if responsible, otherwise $I_{ij}^{obj} = 0$; $(x_{ij}, y_{ij}, w_{ij}, h_{ij})$ are the true target coordinate center and width/height of the $j$-th candidate box in the $i$-th grid cell, and $(\hat{x}_{ij}, \hat{y}_{ij}, \hat{w}_{ij}, \hat{h}_{ij})$ are the corresponding predicted values.

The confidence error is expressed as binary cross-entropy and comprises two parts, for the grid cells that are and are not responsible for targets:

$$loss_{confidence} = -\sum_{i=1}^{S^2} \sum_{j=1}^{B} I_{ij}^{obj} \left[ \bar{C}_{ij} \log C_{ij} + (1 - \bar{C}_{ij}) \log(1 - C_{ij}) \right] - \lambda_{noobj} \sum_{i=1}^{S^2} \sum_{j=1}^{B} I_{ij}^{noobj} \left[ \bar{C}_{ij} \log C_{ij} + (1 - \bar{C}_{ij}) \log(1 - C_{ij}) \right]$$

where $I_{ij}^{noobj}$ indicates that the $j$-th candidate box in the $i$-th grid cell is not responsible for a target, $\lambda_{noobj}$ is the no-target weight coefficient with a value between 0 and 1, $C_{ij}$ is the confidence predicted by the current iteration of the model, and $\bar{C}_{ij}$ is the true confidence.

The classification error likewise adopts binary cross-entropy:

$$loss_{classes} = -\sum_{i=1}^{S^2} \sum_{j=1}^{B} I_{ij}^{obj} \sum_{k=1}^{K} \left[ \bar{P}_{ij}(k) \log P_{ij}(k) + (1 - \bar{P}_{ij}(k)) \log(1 - P_{ij}(k)) \right]$$

where $I_{ij}^{obj}$, as above, indicates that the $j$-th candidate box in the $i$-th grid cell is responsible for the target, $k$ denotes the $k$-th class, $P_{ij}(k)$ is the predicted probability that the $j$-th candidate box in the $i$-th grid cell is responsible for a class-$k$ target, and $\bar{P}_{ij}(k)$ is the corresponding true probability.
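The three loss parts described above can be sketched on toy data as follows. This is an illustrative scalar version, not the patented implementation: the default weights lambda_coord = 5.0 and lambda_noobj = 0.5 are assumed values (the text only says the no-target weight lies between 0 and 1), and the per-box dict layout is invented for clarity.

```python
import math

def bce(p, t, eps=1e-7):
    """Binary cross-entropy of prediction p against target t, with clamping."""
    p = min(max(p, eps), 1 - eps)
    return -(t * math.log(p) + (1 - t) * math.log(1 - p))

def yolo_loss(pred, true, resp, lambda_coord=5.0, lambda_noobj=0.5):
    """Toy three-part loss over one list of candidate boxes.

    pred/true: per-box dicts with 'xywh' (4 floats), 'conf', 'cls' (K probs);
    resp[i] is 1 if box i is responsible for a target, else 0.
    """
    l_coord = l_conf = l_cls = 0.0
    for p, t, r in zip(pred, true, resp):
        if r:
            # coordinate error: squared (Euclidean) distance on (x, y, w, h)
            l_coord += sum((a - b) ** 2 for a, b in zip(p['xywh'], t['xywh']))
            l_conf += bce(p['conf'], t['conf'])
            l_cls += sum(bce(pk, tk) for pk, tk in zip(p['cls'], t['cls']))
        else:
            # boxes not responsible for a target only contribute down-weighted
            # confidence error
            l_conf += lambda_noobj * bce(p['conf'], t['conf'])
    return lambda_coord * l_coord + l_conf + l_cls
```

A perfect prediction drives all three parts toward zero, while any coordinate, confidence, or class mismatch grows the total.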

(4) Update the weights by error back-propagation, letting the error flow backward through the network; this is achieved by differentiating $Loss$. Suppose the output of layer $l$ of the network is $Z^{(l)}$, with $Z^{(l)} = f(W^{(l)})$, where $W^{(l)}$ denotes the weights. The confidence, the classification probabilities, and the coordinate centers $x$ and $y$ are all obtained through the Sigmoid activation function $\sigma(z) = 1/(1 + e^{-z})$, whose derivative is $\sigma'(z) = \sigma(z)(1 - \sigma(z))$. For the binary cross-entropy terms this derivative cancels against the cross-entropy factor, so the confidence weight update gradient is:

$$\frac{\partial Loss}{\partial W_r^{l,con}} = \left( C_r^l - \bar{C}_r^l \right) Z^{(l-1)}$$

where $C_r^l$ denotes the confidence of the $r$-th feature map at layer $l$ of the network, $\bar{C}_r^l$ the corresponding true confidence, and $W_r^{l,con}$ the confidence weight of the $r$-th feature map at layer $l$.

The classification error weight update gradient is:

$$\frac{\partial Loss}{\partial W_r^l(k)} = \left( P_r^l(k) - \bar{P}_r^l(k) \right) Z^{(l-1)}$$

where $P_r^l(k)$ is the predicted probability of a class-$k$ target in the $r$-th feature map at layer $l$ of the network, $\bar{P}_r^l(k)$ the corresponding true probability, and $W_r^l(k)$ the weight of the class-$k$ prediction probability of the $r$-th feature map at layer $l$.

The coordinate error weight update gradient, shown here for the center coordinate $x$ (the terms for $y$, $w$, and $h$ are analogous, the Sigmoid factor applying to the centers):

$$\frac{\partial Loss}{\partial W_r^l} = 2 \lambda_{coordinate} \left( x_r^l - \bar{x}_r^l \right) x_r^l \left( 1 - x_r^l \right) Z^{(l-1)}$$

where $x_r^l, y_r^l, w_r^l, h_r^l$ denote the predicted coordinate center and width-height vector of the target box of the $r$-th feature map at layer $l$ of the network, $\bar{x}_r^l, \bar{y}_r^l, \bar{w}_r^l, \bar{h}_r^l$ the true-value vector, and $W_r^l$ the coordinate weight of the target box of the $r$-th feature map at layer $l$.

Iteration continues until the error loss converges to a minimum, yielding the optimal weight model.

After training of the optimal weight model is completed, single-frame image flame detection is carried out:

acquiring a real-time image of the vehicle through a camera;

The image and the trained weight model are fed into the YOLOv3 network structure to obtain target prediction boxes. If a fire target is detected, it is kept and the next frame is detected; if the two classes tvmonitor and fire are present at the same time, the relative position relationship of the two targets is judged, the measure being the intersection-over-union (IoU) of the two targets:

$$IOU = \frac{|A \cap B|}{|A \cup B|}$$

where $A$ denotes the tvmonitor detection box and $B$ denotes the fire detection box. If the IoU of fire and tvmonitor is greater than 0.9, it is judged that no fire alarm has occurred; otherwise, the current image is kept and the next frame is detected;
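The IoU screening between the fire and tvmonitor boxes can be sketched as follows; the corner-coordinate box convention (x1, y1, x2, y2) and the helper names are illustrative assumptions, while the 0.9 threshold comes from the description above.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def fire_alarm_suppressed(fire_box, tv_box, threshold=0.9):
    """No fire alarm when the fire box essentially coincides with the
    tvmonitor box (flame playing on a screen), per the IoU > 0.9 rule."""
    return iou(fire_box, tv_box) > threshold
```

A fire box overlapping a tvmonitor box almost entirely is treated as on-screen flame; any looser overlap lets the frame proceed to the optical-flow stage.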

In the single-frame flame detection process, the interference factors of the display samples and light source samples are eliminated: when no real flame is present, real-time images are collected again; when real flame is present, whether a display is present is judged; when no display is present, the continuous image sequence is kept; when a display is present, whether the flame lies within the display image is judged; when it does not, the continuous image sequence is kept; when it does, interference is present and real-time images are collected again.
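The single-frame screening just described can be sketched as a small decision function; the function and flag names are hypothetical, and the branching paraphrases the prose above.

```python
def screen_frame(has_flame, has_display, flame_inside_display):
    """Return 'recollect' (grab new real-time images) or 'keep' (retain the
    continuous sequence for optical-flow analysis), per the single-frame
    screening described in the text."""
    if not has_flame:
        return 'recollect'       # no real flame detected in this frame
    if not has_display:
        return 'keep'            # flame with no display present
    if not flame_inside_display:
        return 'keep'            # flame not within the display image
    return 'recollect'           # flame shown on a display: interference
```

Only frames whose flame cannot be explained by a display reach the continuous-sequence optical-flow stage.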

The method for fire identification of the continuous image sequence comprises the following steps:

Consecutive images in which flame has been detected are accumulated over multiple frames; when a fire target has been detected for 5 consecutive frames, the optical flow information between frames is calculated. Taking the previous and current frame images as input, the dense optical flow function cv2.calcOpticalFlowFarneback in the OpenCV library is called; it fits dense optical flow with a quadratic polynomial to obtain the optical flow of each pixel point (the horizontal offset $h$ and the vertical offset $v$). From the optical flow information, the movement direction of each pixel point is computed by trigonometry as $\theta = \arctan(v / h)$ and the movement distance as $d = \sqrt{h^2 + v^2}$. The movement direction is quantized into the 4 directions up, down, left, and right: $\theta$ in (45°, 135°] is up, (135°, 225°] is right, (225°, 315°] is down, and (315°, 360°] together with [0°, 45°] is left. The overall movement direction of the suspected region is then gathered; if the overall trend of the movement directions in the suspected region is consistent and the movement distance $d$ lies in [2, 10], a fire is judged.
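The per-pixel direction binning and the region-level decision can be sketched as follows. This works on a precomputed list of (h, v) offsets; in practice they would come from cv2.calcOpticalFlowFarneback as named above. atan2 is used so the stated arctan(v / h) covers the full 0–360° circle, and the 80% consistency ratio is an assumed illustrative threshold (the text only asks for a "consistent overall trend").

```python
import math

def direction(h, v):
    """Quantize a flow vector into up/right/down/left per the stated
    intervals: (45,135] up, (135,225] right, (225,315] down, rest left."""
    theta = math.degrees(math.atan2(v, h)) % 360
    if 45 < theta <= 135:
        return 'up'
    if 135 < theta <= 225:
        return 'right'
    if 225 < theta <= 315:
        return 'down'
    return 'left'   # (315, 360] and [0, 45]

def judge_fire(flow, min_d=2.0, max_d=10.0, consistency=0.8):
    """flow: list of (h, v) offsets inside the suspected region.
    Fire is judged when the dominant direction covers at least the given
    share of pixels and the mean movement distance lies in [min_d, max_d]."""
    dirs = [direction(h, v) for h, v in flow]
    dominant = max(set(dirs), key=dirs.count)
    consistent = dirs.count(dominant) / len(dirs) >= consistency
    mean_d = sum(math.hypot(h, v) for h, v in flow) / len(flow)
    return consistent and min_d <= mean_d <= max_d
```

Uniform upward flow of moderate magnitude is judged as fire, while scattered directions or excessive displacement (e.g. a waving promotional video) are rejected.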

The embodiment discloses a vehicle fire identification method combining YOLOv3 and an optical flow method. For the specific scenario of a vehicle, three types of training samples are designed: training on the two classes of real flame samples and light source samples suppresses overfitting to flame, and the display flame target reduces the false alarm rate. Deep and shallow static flame features are extracted by the YOLOv3 model, the consecutive images in which flame has been detected over multiple frames are analyzed by the optical flow method, and the dynamic, time-varying characteristics of flame are combined for comprehensive analysis and fire recognition. Flame identification accuracy is thereby improved, and whether a real fire has occurred is judged accurately.

Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to one skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.
