Real-time monitoring and early warning method for airport taxiways

Document No.: 1906399    Publication date: 2021-11-30

Description: This technology, "Real-time monitoring and early warning method for airport taxiways" (一种机场滑行道实时监控预警方法), was designed and created by 王栋欢, 肖洪, 唐轲, 李爽 and 于艾洋 on 2021-08-31. Its main content is as follows: The invention belongs to the technical field of airport traffic control engineering and relates to a real-time monitoring and early warning method for airport taxiways. The specific technical scheme comprises the following steps: acquiring image data of taxiing airplanes at an airport and building a training data set and a label set; building a two-class airplane real-time detection model; presetting detection domain coordinates and presetting warning lines within the detection domain; determining the taxiway to be monitored and setting airplane area thresholds S1 and S2; reading each video frame in sequence, cropping each frame according to the detection domain coordinates, inputting the cropped image into the real-time detection model, and outputting the detection result for each frame; judging the detection result of each frame according to the airplane area thresholds and the coordinate positions of the warning lines, and, if the alarm condition is met, outputting true and sending an alarm signal once. Compared with the prior art, the invention provides a real-time monitoring and early warning method for airport taxiways that can identify the direction of aircraft travel, with high recognition accuracy, strong stability, low equipment cost, and simple installation and maintenance.

1. A real-time monitoring and early warning method for an airport taxiway is characterized by comprising the following steps:

a. acquiring image data of an airport taxiing airplane and establishing a training data set and a label set;

b. establishing a two-class airplane real-time detection model;

c. presetting detection domain coordinates, and presetting an alarm line within the detection domain;

d. determining taxiways to be monitored, and setting airplane area thresholds S1 and S2;

e. reading each video frame in sequence, cropping each frame according to the detection domain coordinates, inputting the cropped image into the real-time detection model, and outputting the detection result for each frame;

f. judging the detection result of each frame according to the airplane area thresholds and the coordinate position of the warning line; if the alarm condition is met, the output is true and an alarm signal is sent once.

2. The real-time monitoring and early warning method for airport taxiways according to claim 1, wherein: in step a, video frame images of laterally taxiing airplanes are collected, the images are cropped, bounding boxes are drawn around the left-going airplanes and the right-going airplanes respectively, the corresponding label names of the left-going and right-going airplanes are entered, and the corresponding label data files are generated in sequence.

3. The real-time monitoring and early warning method for airport taxiways according to claim 2, wherein: each image is flipped horizontally, and the corresponding label data is copied with the label name and label box position modified accordingly.

4. The real-time monitoring and early warning method for airport taxiways according to claim 1, wherein: in step b, the real-time detection model is a NanoDet network framework, and the number of categories output by the heads of the NanoDet network framework is 2.

5. The real-time monitoring and early warning method for airport taxiways according to claim 4, wherein: the model is trained with the standard NanoDet network training method until convergence to obtain a NanoDet taxiing-airplane detection model; the model is loaded with the OpenCV computer vision library, non-maximum suppression is applied, and a NanoDet two-class real-time detection model for left-going and right-going aircraft is established.

6. The real-time monitoring and early warning method for airport taxiways according to claim 1, wherein: in step d, the pixel-size height Hp and width Wp of any taxiing aircraft on the taxiway are determined from the video picture; the pixel sizes of any taxiing aircraft on the taxiways adjacent to that taxiway are determined as H1, W1 and H2, W2 (H1 < H2, W1 < W2).

7. The real-time monitoring and early warning method for airport taxiways according to claim 6, wherein: S1 and S2 are calculated as S1 = ((Hp + H1)/2) × ((Wp + W1)/2) and S2 = ((Hp + H2)/2) × ((Wp + W2)/2).

8. The real-time monitoring and early warning method for airport taxiways according to claim 1, wherein: in step f, the area of a detected airplane is calculated from the detection result as Si = Hi × Wi, the horizontal nose-position coordinate of a left-going airplane is Xnose = Xi − Wi/2, and the nose position of a right-going airplane is Xnose = Xi + Wi/2; if Xnose < L1 and S1 < Si < S2, an airplane-left-going signal is output; if Xnose > L2 and S1 < Si < S2, an airplane-right-going signal is output.

9. The real-time monitoring and early warning method for airport taxiways according to claim 1, wherein: in step f, each time an alarm signal is sent, N frames of video images are passed through without detection.

10. The real-time monitoring and early warning method for airport taxiways according to claim 9, wherein: a tail-in, head-out (first-in, first-out) list of length N is defined to continuously store the detection results of the last N consecutive frames; according to the detection result, if an alarm signal is sent for the current frame, the detection result of the current frame is stored in the list as true, otherwise it is stored as false; whether the list is true (i.e., contains any true element) is then judged, and if so, the video frame is only read and is not input to the model for detection.

Technical Field

The invention belongs to the technical field of airport traffic control engineering, and relates to a real-time monitoring and early warning method for an airport taxiway.

Background

Before takeoff and after landing, an airport aircraft needs to taxi a certain distance from its stand along the taxiways before accelerating down the runway for takeoff, or to taxi back to its stand along the taxiways after landing. During taxiing, an aircraft has to pass through one or more taxiway intersections. Because an airport contains structures such as stands and boarding bridges, airport vehicles approaching an intersection have blind spots in their field of view and cannot see an aircraft taxiing across in front of them, so collisions between vehicles and fast-taxiing aircraft at the intersection can easily occur.

To avoid such accidents, moving vehicles must be told in advance to slow down and stop at intersections so as to give way to taxiing aircraft. At present this kind of control relies mainly on visual observation by traffic controllers, who direct traffic and send signals from a remote location; this approach is subject to considerable human subjectivity, and human observation leaves large blind spots, so it carries a substantial risk of traffic accidents and poses a hidden threat to airport safety and property. Another existing control technology locates and controls the taxiing aircraft in real time through GPS positioning and buried sensors, but it suffers from high cost, difficult deployment and maintenance, and strong external interference with the positioning signals.

Disclosure of Invention

The invention aims to provide a real-time monitoring and early warning method for an airport taxiway so as to solve the above problems.

In order to achieve the purpose of the invention, the technical scheme adopted by the invention is as follows: the method comprises the following steps:

a. acquiring image data of an airport taxiing airplane and establishing a training data set and a label set;

b. establishing a two-class airplane real-time detection model;

c. presetting detection domain coordinates, and presetting an alarm line within the detection domain;

d. determining taxiways to be monitored, and setting airplane area thresholds S1 and S2;

e. reading each video frame in sequence, cropping each frame according to the detection domain coordinates, inputting the cropped image into the real-time detection model, and outputting the detection result for each frame;

f. judging the detection result of each frame according to the airplane area thresholds and the coordinate position of the warning line; if the alarm condition is met, the output is true and an alarm signal is sent once.

Preferably, in step a, video frame images of laterally taxiing airplanes are collected, the images are cropped, bounding boxes are drawn around the left-going airplanes and the right-going airplanes respectively, the corresponding label names of the left-going and right-going airplanes are entered, and the corresponding label data files are generated in sequence.

Preferably, each image is flipped horizontally, and the corresponding label data is copied with the label name and label box position modified accordingly.

Preferably, in step b, the real-time detection model is a NanoDet network framework, and the number of categories output by the heads of the NanoDet network framework is 2.

Preferably, the model is trained with the standard NanoDet network training method until convergence to obtain a NanoDet taxiing-airplane detection model; the model is loaded with the OpenCV computer vision library, non-maximum suppression is applied, and a NanoDet two-class real-time detection model for left-going and right-going aircraft is established.

Preferably, in step d, the pixel-size height Hp and width Wp of any taxiing aircraft on the taxiway are determined from the video picture; the pixel sizes of any taxiing aircraft on the taxiways adjacent to that taxiway are determined as H1, W1 and H2, W2 (H1 < H2, W1 < W2).

Preferably, S1 and S2 are calculated as S1 = ((Hp + H1)/2) × ((Wp + W1)/2) and S2 = ((Hp + H2)/2) × ((Wp + W2)/2).

Preferably, in step f, the area of a detected airplane is calculated as Si = Hi × Wi, the horizontal nose-position coordinate of a left-going airplane is Xnose = Xi − Wi/2, and the nose position of a right-going airplane is Xnose = Xi + Wi/2; if Xnose < L1 and S1 < Si < S2, an airplane-left-going signal is output; if Xnose > L2 and S1 < Si < S2, an airplane-right-going signal is output.

Preferably, in step f, each time an alarm signal is sent, N frames of video images are passed through without detection.

Preferably, a tail-in, head-out (first-in, first-out) list of length N is defined to continuously store the detection results of the last N consecutive frames; according to the detection result, if an alarm signal is sent for the current frame, 1 is stored in the list for the current frame, otherwise 0 is stored; whether the list contains a 1 is then judged, and if so, the video frame is only read and is not input to the model for detection.

The beneficial technical effects of the invention are as follows: the real-time detection model can quickly detect left-going and right-going aircraft in the video picture, solving the problems that existing target detection models have low recognition accuracy and can only recognize an aircraft without recognizing its direction of travel. By selecting a designated area in advance as the region to be detected, the sliced image of that region, rather than the whole video frame, is input into the model, which greatly reduces the amount of data fed into the detection model for each frame and allows the model to run in real time on an embedded CPU.

Therefore, compared with the prior art, the invention provides a real-time monitoring and early warning method for airport taxiways that can identify the direction of aircraft travel, with high recognition accuracy, strong stability, low equipment cost, and simple installation and maintenance.

Drawings

FIG. 1 is a flow chart of the general steps of a real-time monitoring and early warning method for taxiways in an airport;

FIG. 2 is a diagram of a NanoDet two-class real-time detection model framework;

FIG. 3 is a schematic view of the coordinates of the detection area and the alarm line;

FIG. 4 is a flow chart of the detection model detecting each frame of video image;

FIG. 5 is a flow chart of controlling the detection result to send an alarm signal;

FIG. 6 is a flow chart of the method of skipping N frames each time an alarm signal is sent.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. Unless otherwise specified, the technical means used in the examples are conventional means well known to those skilled in the art.

In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, are merely for convenience of describing the invention, and do not indicate or imply that the referred devices or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the invention.

Referring to FIGS. 1-6, the real-time monitoring and early warning method for an airport taxiway disclosed by the invention comprises the following steps:

a. acquiring image data of an airport taxiing airplane and establishing a training data set and a label set;

b. establishing a two-class airplane real-time detection model;

c. presetting detection domain coordinates, and presetting an alarm line within the detection domain;

d. determining taxiways to be monitored, and setting airplane area thresholds S1 and S2;

e. reading each video frame in sequence, cropping each frame according to the detection domain coordinates, inputting the cropped image into the real-time detection model, and outputting the detection result for each frame;

f. judging the detection result of each frame according to the airplane area thresholds and the coordinate position of the warning line; if the alarm condition is met, the output is true and an alarm signal is sent once.

In step a, a taxiway near a certain stand area of the airport is selected, a camera is mounted on a frame at one side of the stand area facing the taxiway, and video of the taxiing aircraft in this area is collected. 100 video frame images containing laterally taxiing aircraft are screened manually, and a 416 × 416 cropping box is used to randomly crop each image, with the requirement that the cropped image still contains the taxiing aircraft. The cropped images are opened in sequence with the LabelImg annotation tool, the left-going and right-going aircraft in each image are framed respectively, the label names "left aircraft" and "right aircraft" are entered, and the corresponding label data files in xml format are generated in sequence. To balance the numbers of left-going and right-going aircraft pictures, each image is flipped horizontally and the corresponding label data is copied with the label name and label box position modified, as shown in the sketch below, finally yielding 200 training pictures and 200 corresponding label files. Flipping the images keeps the numbers of left-going and right-going aircraft images equal, increases the number of training pictures, and improves detection accuracy.
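A minimal sketch of the horizontal-flip augmentation described above, assuming LabelImg's Pascal VOC xml label format; the file names, the label strings "left aircraft"/"right aircraft" and the directory layout are illustrative assumptions rather than part of the original disclosure:

```python
import cv2
import xml.etree.ElementTree as ET

# Hypothetical example paths; adjust to the actual data layout.
IMG_PATH, XML_PATH = "taxi_0001.jpg", "taxi_0001.xml"
SWAP = {"left aircraft": "right aircraft", "right aircraft": "left aircraft"}

# Flip the image horizontally and save it as a new training sample.
img = cv2.imread(IMG_PATH)
h, w = img.shape[:2]                       # for a 416 x 416 crop, w == 416
cv2.imwrite("taxi_0001_flip.jpg", cv2.flip(img, 1))

# Copy the VOC label file: swap the class name and mirror the box x-coordinates.
tree = ET.parse(XML_PATH)
for obj in tree.getroot().iter("object"):
    name = obj.find("name")
    name.text = SWAP.get(name.text, name.text)
    box = obj.find("bndbox")
    xmin, xmax = int(box.find("xmin").text), int(box.find("xmax").text)
    box.find("xmin").text = str(w - xmax)  # mirrored box keeps the same width
    box.find("xmax").text = str(w - xmin)
tree.write("taxi_0001_flip.xml")
```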

In step b, a NanoDet network framework is built in Python, the number of categories output by the last 3 light heads of the NanoDet network is changed to 2, and the input size is changed to 416 × 416, the same as the size of the images cropped in step a. The model is trained with the standard NanoDet network training method until convergence, yielding a NanoDet taxiing-aircraft detection model. The model is then loaded with the OpenCV computer vision library and non-maximum suppression is applied, establishing the NanoDet two-class real-time detection model for left-going and right-going aircraft shown in FIG. 2; a loading sketch is given after this paragraph. On the basis of this two-class real-time detection model, aircraft on any chosen taxiway can be selectively monitored and signals sent in real time simply by setting the judgment conditions on the warning lines and the aircraft area thresholds; this replaces the heavy manual work of traffic staff visually observing traffic and sending warning signals, and greatly reduces the labour and material cost of airport traffic control.
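A minimal sketch of loading the detector with OpenCV and applying non-maximum suppression, under the assumption that the trained NanoDet model has been exported to an ONNX file (a common deployment route, not stated in the original); the file name is a placeholder, and the decoding of NanoDet's raw head outputs into boxes and scores is deliberately not shown:

```python
import cv2

# Load the exported detector (assumed ONNX file name).
net = cv2.dnn.readNetFromONNX("nanodet_taxi.onnx")

def forward_once(roi):
    """Feed one cropped 416x416 region through the network; returns raw head outputs."""
    blob = cv2.dnn.blobFromImage(roi, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    return net.forward()

# Non-maximum suppression on decoded detections.  Decoding NanoDet's raw outputs
# into [x, y, w, h] boxes and scores is model-specific and omitted here; the three
# detections below are illustrative values only.
boxes  = [[350, 282, 128, 76], [355, 284, 126, 74], [600, 300, 120, 70]]
scores = [0.91, 0.87, 0.45]
keep = cv2.dnn.NMSBoxes(boxes, scores, 0.4, 0.5)   # indices of boxes to keep
print(keep)
```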

Generally, a common camera captures 1024 × 768 pixels, and running the detection model on the full image on an embedded CPU cannot meet the real-time requirement, so a specific area must be selected for detection. As shown in FIG. 3, in step c two vertical alarm lines, Line1 and Line2, are preset inside the detection domain. A rectangular area of height 384 and width 512 centred in the video picture is selected as the region to be detected; the top-left vertex A of this rectangular region has coordinates (Xa, Ya) = (256, 192) and the bottom-right vertex B has coordinates (Xb, Yb) = (768, 576). The two vertical warning lines are preset on either side of the vertical centre of the detection region, with the abscissa of the left warning line Line1 being L1 = 384 and that of the right warning line Line2 being L2 = 640; a sketch for overlaying them on a frame is given below. By selecting a designated area in advance as the region to be detected, the sliced image of that region, rather than the whole video frame, is input into the model, which greatly reduces the amount of data fed into the detection model for each frame and allows the model to run in real time on an embedded CPU.
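A minimal sketch, using the coordinates above, of overlaying the detection region and the two warning lines on one captured frame for visual verification; the camera index and window name are assumptions:

```python
import cv2

XA, YA, XB, YB = 256, 192, 768, 576        # detection region vertices A and B
L1, L2 = 384, 640                          # abscissae of warning lines Line1 and Line2

cap = cv2.VideoCapture(0)                  # assumed camera index
ok, frame = cap.read()
if ok:
    # Detection region as a rectangle, warning lines as vertical segments inside it.
    cv2.rectangle(frame, (XA, YA), (XB, YB), (0, 255, 0), 2)
    cv2.line(frame, (L1, YA), (L1, YB), (0, 0, 255), 2)
    cv2.line(frame, (L2, YA), (L2, YB), (0, 0, 255), 2)
    cv2.imshow("detection domain", frame)
    cv2.waitKey(0)
cap.release()
```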

In step d, the taxiway to be monitored is determined according to the monitoring requirement. From the video picture the pixel size of a taxiing aircraft on that taxiway is determined as height Hp = 74 and width Wp = 126, and the pixel sizes of taxiing aircraft on the taxiways adjacent to it on the left and right are determined as height H1 = 66, width W1 = 115 and height H2 = 85, width W2 = 138 respectively (H1 < H2, W1 < W2). The aircraft area thresholds are defined as S1 = ((Hp + H1)/2) × ((Wp + W1)/2) = 8435 and S2 = ((Hp + H2)/2) × ((Wp + W2)/2) = 10494; a worked check is given below.
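A worked check of the two thresholds; the averaging form of the formula is reconstructed from the values 8435 and 10494 used later in the judgment example, so treat it as an inferred rather than quoted expression:

```python
Hp, Wp = 74, 126          # monitored taxiway: aircraft height and width in pixels
H1, W1 = 66, 115          # smaller aircraft pixel size on one adjacent taxiway
H2, W2 = 85, 138          # larger aircraft pixel size on the other adjacent taxiway

S1 = ((Hp + H1) / 2) * ((Wp + W1) / 2)   # lower area threshold
S2 = ((Hp + H2) / 2) * ((Wp + W2) / 2)   # upper area threshold
print(S1, S2)             # 8435.0 10494.0
```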

In step e, each video frame is read in sequence, cropped according to the detection domain coordinates preset in step c and then input into the real-time detection model, and the detection result for each frame is output; the flow of the detection model processing each frame is shown in FIG. 4. Each video frame is captured in turn with OpenCV and the image data is converted into a numpy matrix. Using a Python matrix slicing operation, the data in the x interval [Xa, Xb] = [256, 768] and the y interval [Ya, Yb] = [192, 576] is cut out according to the detection region coordinates A and B defined in step c and input into the two-class (left-going and right-going aircraft) real-time detection model established in step b, and the detection result of each frame is output, namely: if an aircraft is detected, its direction of travel (left or right), the coordinates (Xi, Yi) of its centre PCi, and its width Wi and height Hi are output. For example, if a left-going aircraft is detected in a certain frame, the output may be PCi = (Xi, Yi) = (414, 320), Wi = 128 and Hi = 76. A sketch of this per-frame loop follows.
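A minimal sketch of the per-frame read, crop and detect loop; `detect()` is a hypothetical wrapper around the loaded NanoDet model (here a stub so the loop runs end to end), and the video source name is an assumption:

```python
import cv2

XA, YA, XB, YB = 256, 192, 768, 576        # detection region from step c

def detect(roi):
    """Placeholder for the NanoDet wrapper: should return a list of
    (direction, Xi, Yi, Wi, Hi) tuples, with coordinates expressed in the same
    system as the warning lines so they can be compared directly."""
    return []                              # stub so the loop runs end to end

cap = cv2.VideoCapture("taxiway.mp4")      # assumed video source
while True:
    ok, frame = cap.read()                 # frame arrives as a numpy array
    if not ok:
        break
    roi = frame[YA:YB, XA:XB]              # numpy slice: rows are y, columns are x
    for direction, Xi, Yi, Wi, Hi in detect(roi):
        print(direction, Xi, Yi, Wi, Hi)   # e.g. ('left', 414, 320, 128, 76)
cap.release()
```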

The flow of turning the detection result into an alarm signal is shown in FIG. 5. In step f, the output result of each video frame detected in step e is judged according to the aircraft area thresholds preset in step d and the warning line coordinates preset in step c; if the alarm condition is met, the output is true and an alarm signal is sent once. From the detection result output in step e, the detected aircraft area is Si = Hi × Wi = 76 × 128 = 9728, the horizontal nose coordinate of a left-going aircraft is Xnose = Xi − Wi/2 and that of a right-going aircraft is Xnose = Xi + Wi/2, and the judgment condition is defined as: if a left-going aircraft is detected, Xnose < L1 and S1 < Si < S2, then at the current detection frame the signal "an aircraft is moving left on this taxiway" is output; likewise, if a right-going aircraft is detected, Xnose > L2 and S1 < Si < S2, then at the current detection frame the signal "an aircraft is moving right on this taxiway" is output. For the example detection above, the nose position of the left-going aircraft is Xnose = Xi − Wi/2 = 414 − 128/2 = 350, and the judgment conditions hold: 350 < 384 and 8435 < 9728 < 10494, so for the current frame the signal "an aircraft is moving left on this taxiway" is output. A sketch of this judgment follows.
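A minimal sketch of the per-detection judgment, using the thresholds and warning-line positions from the worked example; the function name and message strings are assumptions:

```python
L1, L2 = 384, 640                 # warning line abscissae from step c
S1, S2 = 8435, 10494              # aircraft area thresholds from step d

def judge(direction, Xi, Yi, Wi, Hi):
    """Return an alarm message if the detection meets the alarm condition, else None."""
    Si = Hi * Wi
    if not (S1 < Si < S2):        # aircraft is not on the monitored taxiway
        return None
    if direction == "left" and (Xi - Wi / 2) < L1:
        return "an aircraft is moving left on this taxiway"
    if direction == "right" and (Xi + Wi / 2) > L2:
        return "an aircraft is moving right on this taxiway"
    return None

# Worked example from the text: a left-going aircraft at (414, 320), 128 x 76 pixels.
print(judge("left", 414, 320, 128, 76))   # nose at 350 < 384, area 9728 in range
```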

Further, each time step f sends an alarm signal, N video frames are passed through without detection. As shown in FIG. 6, a tail-in, head-out (first-in, first-out) list of length N is defined to continuously store the detection results of the last N consecutive frames: if an alarm signal is sent for the current frame, 1 is stored in the list for that frame, otherwise 0 is stored. The list is regarded as true as long as any of its elements is true, and false otherwise. Whether the list is true is judged for each frame, and if it is, the video frame is only read and not input into the model for detection; the frame-skipping flow is shown in the figure. For example, with N = 4, when the list for the current frame becomes [0, 0, 0, 1], frame skipping starts (frames are read only, not detected), so the list for the next frame becomes [0, 0, 1, 0], and skipping continues until, after 4 consecutive frames, the list becomes [0, 0, 0, 0] and detection resumes. If an aircraft satisfying the conditions is then detected, the list becomes [0, 0, 0, 1] again and another 4 frames are skipped; if no aircraft satisfying the conditions is detected, the list stays [0, 0, 0, 0] and detection continues. Collecting results in the list and skipping N frames in this way greatly reduces the total amount of work, increases efficiency, and lightens the running load. A sketch of this frame-skipping mechanism follows.
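A minimal sketch of the length-N tail-in, head-out list using a bounded deque, tracing the N = 4 example above; the helper names are assumptions:

```python
from collections import deque

N = 4
recent = deque([0] * N, maxlen=N)   # tail-in, head-out list of the last N results

def should_skip():
    """The list is 'true' if any stored element is 1: read the frame but skip detection."""
    return any(recent)

def record(alarm_sent):
    """Append 1 if an alarm was sent for this frame, else 0; the head element falls off."""
    recent.append(1 if alarm_sent else 0)

# Walk through the N = 4 example from the text.
record(True)                 # alarm frame            -> [0, 0, 0, 1]
for _ in range(4):           # the next 4 frames are read-only
    assert should_skip()
    record(False)            # -> [0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]
assert not should_skip()     # detection resumes on the 5th frame
```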

The NanoDet two-class real-time detection model built on OpenCV can quickly detect left-going and right-going aircraft in the video picture, solving the problems that existing target detection models have low recognition accuracy and can only recognize an aircraft without recognizing its direction of travel. It also avoids the high risk of manual observation and control and the high cost and difficult maintenance of buried-sensor devices in the prior art, providing a real-time monitoring and early warning method for airport taxiways that is accurate, stable, efficient and low-cost.
