Distributed target monitoring system and method
Reading note: this technology, "Distributed target monitoring system and method", was designed and created by 周飞 (Zhou Fei) and 刘倞 (Liu Jing) on 2019-09-30. Abstract: The invention relates to the field of target identification and detection, and discloses a distributed target monitoring system and method. The system comprises at least one detection alarm device, an image detection device and a remote monitoring terminal; the image detection device is connected to the detection alarm device and the remote monitoring terminal, respectively. An infrared detection unit in the detection alarm device detects thermal infrared information within a sensing range, and a central processing unit controls an alarm unit to issue an alarm signal according to the thermal infrared information detected by the infrared detection unit. An image acquisition unit in the image detection device acquires a target area image and controls the central processing unit according to the target area image, so that the central processing unit controls the alarm unit to issue the alarm signal. A remote control unit in the remote monitoring terminal acquires the target area image, displays it on an image display unit, and controls the central processing unit so that the central processing unit controls the alarm unit to issue the alarm signal.
1. A distributed target monitoring system, characterized by comprising at least one detection alarm device, an image detection device and a remote monitoring terminal, wherein the image detection device is connected to the detection alarm device and the remote monitoring terminal, respectively;
the detection alarm device comprises an infrared detection unit, a central processing unit and an alarm unit, wherein the central processing unit is electrically connected to the infrared detection unit and the alarm unit, respectively;
the infrared detection unit is configured to detect thermal infrared signals within a sensing range, and the central processing unit is configured to receive the thermal infrared signals and control the alarm unit to issue an alarm signal according to the thermal infrared signals;
the image detection device comprises an image acquisition unit and a control unit which are electrically connected, wherein the image acquisition unit is configured to acquire a target area image, and the control unit is configured to receive and process the target area image and to control the central processing unit according to the target area image, so that the central processing unit controls the alarm unit to issue an alarm signal;
the remote monitoring terminal comprises an image display unit and a remote control unit which are electrically connected, wherein the remote control unit is configured to obtain the target area image acquired by the image detection device, display the target area image on the image display unit, and control the central processing unit so that the central processing unit controls the alarm unit to issue an alarm signal.
2. The distributed object monitoring system of claim 1, wherein the infrared detection unit is any one of an infrared sensor, a pyroelectric sensor, and an infrared thermal sensor.
3. The distributed object monitoring system of claim 1 or 2, wherein the alarm unit comprises a light generator for generating a light source and a sound generator for generating sound, the sound generator and the light generator both being electrically connected to the central processing unit.
4. The distributed object monitoring system of claim 3, wherein the detection alarm device further comprises a light sensitive unit;
the photosensitive unit is electrically connected with the central processing unit and is used for transmitting illumination information to the central processing unit so that the central processing unit controls the infrared detection unit and the alarm unit to work according to the illumination information.
5. The distributed object monitoring system of claim 4, wherein the detection alarm device, the image detection device and the remote monitoring terminal are provided with wireless communication units.
6. The distributed object monitoring system of claim 5, further comprising a cloud server;
the cloud server is connected with the image detection device and is configured to receive the target area image sent by the image detection device, the state information of the image detection device, and the state information of the detection alarm device.
7. A distributed target monitoring method applied to an image detection device, characterized by comprising the following steps:
acquiring a target area image;
inputting the target area image into a preset deep neural network model to obtain a first recognition result;
obtaining a second recognition result based on the target area image and a preset reference image;
fusing the first recognition result and the second recognition result to obtain a target object and a target position in the target area image;
acquiring the relation among the target position, the time and the frequency;
and determining the activity track of the target object and/or the activity density information of the target object according to the relation among the target position, the time and the frequency.
8. The method of claim 7, further comprising:
acquiring a sample image and a target object of a target area, and labeling the target object in the sample image to generate labeling information;
and inputting the labeled image into a deep neural network model for training to obtain the preset deep neural network model.
9. The method according to claim 7 or 8, wherein the preset deep neural network model comprises a number of convolutional layers and pooling layers, and the first recognition result comprises a first target object in the target area image, a first probability corresponding to the first target object, and a first target position of the first target object in the target area image.
10. The method according to claim 9, wherein the obtaining a second recognition result based on the target area image and a preset reference image comprises:
acquiring a changed partial image of the target area image relative to the preset reference image, and converting the changed partial image into a grayscale image;
performing noise filtering and threshold segmentation on the grayscale image to obtain a binary image, and obtaining connected regions in the binary image through a connected-region algorithm;
acquiring a potential contour of the target object according to the connected region;
and performing a morphological operation on the potential contour of the target object to obtain a second recognition result, wherein the second recognition result comprises a second target object in the target area image, a second probability corresponding to the second target object, and a second target position of the second target object in the target area image.
11. The method according to claim 10, wherein the obtaining of the target object and the target position in the target area image according to the first recognition result and the second recognition result comprises:
comparing the first probability and the second probability;
if the first probability is greater than the second probability and the first probability is greater than or equal to a preset probability threshold, regarding the first target object as a target object and regarding the first target location as a target location;
if the second probability is greater than the first probability and the second probability is greater than or equal to the preset probability threshold, taking the second target object as the target object and the second target position as the target position;
if the first probability and the second probability are both smaller than the preset probability threshold but the sum of the first probability and the second probability is greater than a preset second probability threshold, regarding the target as a suspected target;
and if the first probability and the second probability are both smaller than the preset probability threshold and the sum of the first probability and the second probability is smaller than the preset second probability threshold, discarding the first recognition result and the second recognition result.
12. A distributed target monitoring method applied to a detection alarm device, characterized by comprising the following steps:
acquiring a thermal infrared signal within a sensing range;
determining the relation among the target position, time and frequency according to the thermal infrared signal;
and determining the activity track of the target object and/or the activity density information of the target object according to the relation among the target position, the time and the frequency.
Technical Field
The invention relates to the field of target identification and detection, in particular to a distributed target monitoring system and a distributed target monitoring method.
Background
Birds at airports and in orchards, and rats in fields, restaurants and barns, are persistent problems for humans. Taking rat infestation as an example, according to a survey by the Food and Agriculture Organization, rats destroy 33 million tons of stored grain every year worldwide, which is equivalent to a year's grain for 300 million people. Food is essential to human life, yet rats are frequently seen in the catering industry. In recent years, major restaurant brands have repeatedly been reported to struggle with kitchen rats, and some restaurants have been shut down by the relevant authorities. In restaurants with messy or merely average environmental conditions, rats are even more common. Rats spread diseases; the common rat-borne diseases include leptospirosis, epidemic hemorrhagic fever, plague, typhus, rat-bite fever, salmonellosis, anthrax, rabies, forest encephalitis, scrub typhus (tsutsugamushi disease) and others, which pose a great threat to human health.
Traditional physical rat-catching products include glue boards and rat cages, and rat-repelling products are mainly acoustic repellers. These products rely on a single principle: glue boards and cages work well the first time, but cannot be used on the same rats again; the effect of sound waves is limited, and rats adapt after prolonged exposure, so they cannot be driven away effectively. Extermination with chemical agents requires professional companies to carry out regular treatments; this approach is passive and cannot control where rats enter, exit or range. In addition, large amounts of rodenticide eventually flow into sewers and surrounding rivers, placing a heavy burden on the environment and negatively affecting the sustainable development of the whole city.
Disclosure of Invention
Therefore, in order to solve the above technical problems, it is necessary to provide a distributed target monitoring system and method that can monitor key targets in real time and raise alarms promptly.
In a first aspect, an embodiment of the present invention provides a distributed target monitoring system, where the system includes at least one detection alarm device, an image detection device, and a remote monitoring terminal, where the image detection device is connected to the detection alarm device and the remote monitoring terminal, respectively;
the detection alarm device comprises an infrared detection unit, a central processing unit and an alarm unit, wherein the central processing unit is electrically connected to the infrared detection unit and the alarm unit, respectively;
the infrared detection unit is configured to detect thermal infrared signals within a sensing range, and the central processing unit is configured to receive the thermal infrared signals and control the alarm unit to issue an alarm signal according to the thermal infrared signals;
the image detection device comprises an image acquisition unit and a control unit which are electrically connected, wherein the image acquisition unit is configured to acquire a target area image, and the control unit is configured to receive and process the target area image and to control the central processing unit according to the target area image, so that the central processing unit controls the alarm unit to issue an alarm signal;
the remote monitoring terminal comprises an image display unit and a remote control unit which are electrically connected, wherein the remote control unit is configured to obtain the target area image acquired by the image detection device, display the target area image on the image display unit, and control the central processing unit so that the central processing unit controls the alarm unit to issue an alarm signal.
In some embodiments, the infrared detection unit is any one of an infrared sensor, a pyroelectric sensor, and an infrared heat sensor.
In some embodiments, the alarm unit comprises a light generator for generating a light source and a sound generator for generating a sound, both the sound generator and the light generator being electrically connected to the central processing unit.
In some embodiments, the detection alarm device further comprises a light sensitive unit;
the photosensitive unit is electrically connected with the central processing unit and is used for transmitting illumination information to the central processing unit so that the central processing unit controls the infrared detection unit and the alarm unit to work according to the illumination information.
In some embodiments, the detection alarm device, the image detection device and the remote monitoring terminal are provided with wireless communication units.
In some embodiments, the system further comprises a cloud server;
the cloud server is connected with the image detection device and is configured to receive the target area image sent by the image detection device, the state information of the image detection device, and the state information of the detection alarm device.
In a second aspect, an embodiment of the present invention further provides a distributed target monitoring method, which is applied to an image detection device, where the method includes:
acquiring a target area image;
inputting the target area image into a preset deep neural network model to obtain a first recognition result;
obtaining a second recognition result based on the target area image and a preset reference image;
fusing the first recognition result and the second recognition result to obtain a target object and a target position in the target area image;
acquiring the relation among the target position, the time and the frequency;
and determining the activity track of the target object and/or the activity density information of the target object according to the relation among the target position, the time and the frequency.
In some embodiments, the method further comprises:
acquiring a sample image and a target object of a target area, and labeling the target object in the sample image to generate labeling information;
and inputting the labeled image into a deep neural network model for training to obtain the preset deep neural network model.
In some embodiments, the preset deep neural network model includes several convolutional layers and pooling layers, and the first recognition result includes a first target object in the target area image, a first probability corresponding to the first target object, and a first target position of the first target object in the target area image.
In some embodiments, the obtaining a second recognition result based on the target area image and a preset reference image includes:
acquiring a changed partial image of the target area image relative to the preset reference image, and converting the changed partial image into a grayscale image;
performing noise filtering and threshold segmentation on the grayscale image to obtain a binary image, and obtaining connected regions in the binary image through a connected-region algorithm;
acquiring a potential contour of the target object according to the connected region;
and performing a morphological operation on the potential contour of the target object to obtain a second recognition result, wherein the second recognition result comprises a second target object in the target area image, a second probability corresponding to the second target object, and a second target position of the second target object in the target area image.
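As a rough sketch of this classical branch (this is not the patent's implementation; the function name, threshold values, and the pure-NumPy connected-component search are illustrative assumptions), the steps above, differencing against the reference image, grayscale conversion, threshold segmentation, and connected-region extraction, can be illustrated as:

```python
import numpy as np

def second_recognition(frame, reference, thresh=30, min_area=5):
    """Background-difference branch: returns one bounding box (x0, y0, x1, y1)
    per connected changed region. Illustrative sketch, not the patent's code."""
    diff = np.abs(frame.astype(int) - reference.astype(int))
    if diff.ndim == 3:                       # color input: reduce to grayscale
        diff = diff.mean(axis=2)
    binary = diff > thresh                   # threshold segmentation -> binary image

    labels = np.zeros(binary.shape, dtype=int)
    label, boxes = 0, []
    for y, x in zip(*np.nonzero(binary)):    # connected regions (4-connectivity)
        if labels[y, x]:
            continue
        label += 1
        stack, pixels = [(y, x)], []
        while stack:
            cy, cx = stack.pop()
            if (0 <= cy < binary.shape[0] and 0 <= cx < binary.shape[1]
                    and binary[cy, cx] and not labels[cy, cx]):
                labels[cy, cx] = label
                pixels.append((cy, cx))
                stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
        if len(pixels) >= min_area:          # size filter as a crude morphological step
            ys, xs = zip(*pixels)
            boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

In practice this stage would more likely be built from OpenCV primitives such as `cv2.threshold`, `cv2.connectedComponentsWithStats` and morphological opening rather than a hand-rolled flood fill.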
In some embodiments, the obtaining of the target object and the target position in the target area image according to the first recognition result and the second recognition result includes:
comparing the first probability and the second probability;
if the first probability is greater than the second probability and the first probability is greater than or equal to a preset probability threshold, regarding the first target object as a target object and regarding the first target location as a target location;
if the second probability is greater than the first probability and the second probability is greater than or equal to the preset probability threshold, taking the second target object as the target object and the second target position as the target position;
if the first probability and the second probability are both smaller than the preset probability threshold but the sum of the first probability and the second probability is greater than a preset second probability threshold, regarding the target as a suspected target;
and if the first probability and the second probability are both smaller than the preset probability threshold and the sum of the first probability and the second probability is smaller than the preset second probability threshold, discarding the first recognition result and the second recognition result.
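The four-branch fusion rule above can be sketched as follows. The threshold values and the choice of which result to keep for a suspected target are illustrative assumptions, not values given in the text:

```python
def fuse(first, second, p_thresh=0.6, sum_thresh=0.9):
    """Fuse the deep-network result with the background-difference result.
    Each input is an (object, probability, position) tuple; thresholds are
    hypothetical placeholders for the patent's preset probability thresholds."""
    obj1, p1, pos1 = first
    obj2, p2, pos2 = second
    if p1 > p2 and p1 >= p_thresh:       # branch 1: first result wins, confident
        return ("target", obj1, pos1)
    if p2 >= p1 and p2 >= p_thresh:      # branch 2: second result wins, confident
        return ("target", obj2, pos2)
    if p1 + p2 > sum_thresh:             # branch 3: neither confident alone,
        best = first if p1 >= p2 else second  # but jointly plausible
        return ("suspected", best[0], best[2])
    return ("discard", None, None)       # branch 4: both weak, drop both
```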
In a third aspect, an embodiment of the present invention further provides a distributed target monitoring method, which is applied to detecting an alarm device, where the method includes:
acquiring a thermal infrared signal within a sensing range;
determining the relation among the target position, time and frequency according to the thermal infrared signal;
and determining the activity track of the target object and/or the activity density information of the target object according to the relation among the target position, the time and the frequency.
Compared with the prior art, the invention has the following beneficial effects: in the distributed target monitoring system of the embodiments of the invention, the infrared detection unit in the detection alarm device detects thermal infrared information within its sensing range, and the central processing unit controls the alarm unit to issue an alarm signal according to the thermal infrared information detected by the infrared detection unit. The detection alarm devices and the image detection device are deployed in a distributed manner and cooperate with each other, so that key targets can be monitored in real time and alarms raised promptly.
Drawings
FIG. 1a is a schematic diagram of the connection of an image detection device, a remote monitoring terminal and a detection alarm device according to an embodiment of the present invention;
FIG. 1b is a schematic diagram of a detection alarm device, a network server and a remote monitoring terminal according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an image detection device wirelessly connected with a remote monitoring terminal and a detection alarm device according to another embodiment of the present invention;
FIG. 3a is a diagram illustrating the hardware structure of a detection alarm device according to an embodiment of the present invention;
FIG. 3b is a diagram of the hardware structure of a detection alarm device according to another embodiment of the present invention;
FIG. 4 is a schematic diagram of an alarm unit of one embodiment of the present invention;
FIG. 5 is a schematic diagram of the hardware structure of a detection alarm device according to another embodiment of the present invention;
FIG. 6 is a diagram illustrating a hardware configuration of an image detection apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a hardware structure of a remote monitoring terminal according to an embodiment of the present invention;
FIG. 8 is a schematic connection diagram of the cloud server, the remote monitoring terminal, the image detection device and the detection alarm device according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a cloud server, a remote monitoring terminal, an image detection device and a detection alarm device according to another embodiment of the present invention, which are wirelessly connected;
FIG. 10 is a flow chart of one embodiment of a distributed object monitoring method of the present invention;
FIG. 11 is a graph of total pest number versus time in one embodiment of the invention;
FIG. 12a is a statistical plot of pest density distribution in one embodiment of the present invention;
FIG. 12b is a schematic illustration of pest density distribution in one embodiment of the present invention;
FIG. 13 is a flow chart of deep neural network model training in one embodiment of the present invention;
FIG. 14 is a flow diagram of data processing using a deep learning network model in one embodiment of the invention;
FIG. 15 is a flow chart of obtaining a second recognition result in one embodiment of the present invention;
FIG. 16 is a detailed flow chart of obtaining a second recognition result in one embodiment of the present invention;
FIG. 17 is a flow chart of determining a target object and a target location in one embodiment of the invention;
FIG. 18 is a flow chart of another embodiment of a distributed object monitoring method of the present invention;
FIG. 19 is a schematic diagram of the distribution of detection alarm devices in one embodiment of the present invention;
FIG. 20 is a schematic structural diagram of one embodiment of a distributed object monitoring apparatus of the present invention;
FIG. 21 is a schematic diagram of a hardware structure of a control unit according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, unless they conflict, the various features of the embodiments of the invention may be combined with each other within the protection scope of the invention. Additionally, although functional blocks are divided in the apparatus schematics and logical sequences are shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the block divisions or flowchart sequences. The terms "first", "second", "third", and the like used in the present invention do not limit the data or the execution order, but merely distinguish identical or similar items having substantially the same function and effect.
Referring to fig. 1 to 9 together, an embodiment of the present invention provides a distributed object monitoring system, including at least one detection alarm device, an image detection device and a remote monitoring terminal, the image detection device being connected to the detection alarm device and the remote monitoring terminal, respectively.
Specifically, fig. 1a exemplarily shows the connection among the image detection device, the remote monitoring terminal and the detection alarm device.
It is understood that in some other embodiments, as shown in fig. 1b, the embodiment of the present invention further provides a distributed object monitoring system, which includes at least one detection alarm device, a network server 50 and a remote monitoring terminal.
Specifically, fig. 1b exemplarily shows that the network server 50 is connected with the detection alarm device and the remote monitoring terminal, respectively.
It is understood that in some other embodiments, as shown in fig. 2, the detection alarm device, the image detection device and the remote monitoring terminal are wirelessly connected.
As shown in fig. 3a, the detection alarm device comprises an infrared detection unit 110, a central processing unit and an alarm unit 130, the central processing unit being electrically connected to the infrared detection unit 110 and the alarm unit 130, respectively.
In the embodiment of the present invention, the infrared detection unit 110 may be an infrared sensor, a pyroelectric sensor, another infrared heat sensor, or the like, and is configured to detect thermal infrared signals within a sensing range. Specifically, the infrared detection unit 110 is mounted at a high position so that its infrared sensing area covers the target activity area, and the infrared system can be configured according to the infrared radiation characteristics of the target to be detected. When a target enters the sensing range, the infrared sensor detects the change in the target's infrared spectrum and sends it to the central processing unit.
It will be appreciated that in other embodiments, the infrared sensor includes an infrared light-emitting diode and an infrared photosensitive diode encapsulated in a plastic housing. In actual use, the infrared light-emitting diode is lit and emits infrared light invisible to the human eye. If there is no target in front of the sensor, the infrared light simply dissipates into space; if a harmful object is in front of the sensor, the infrared light is reflected back onto the adjacent infrared photosensitive diode. When the photosensitive diode receives infrared light, the resistance at its output pin changes, and by judging this change in resistance, a target in front of the sensor can be sensed.
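A minimal sketch of that sensing logic follows; the ADC scale, baseline and margin values are hypothetical, and real firmware would read the photodiode's pin through a comparator or ADC:

```python
def target_present(adc_reading, baseline, margin=50):
    """Sense a target from the change in the photodiode's output: reflected
    infrared light changes the diode's resistance and hence the reading.
    `baseline` is the no-target reading; values here are illustrative."""
    return abs(adc_reading - baseline) > margin
```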
In the embodiment of the present invention, the central processing unit is configured to receive the thermal infrared signal and to control the alarm unit 130 to issue an alarm signal according to the thermal infrared signal.
It is understood that in some other embodiments, as shown in fig. 3b, the
In the embodiment of the present invention, as shown in fig. 4, the alarm unit 130 includes a light generator 131 and a sound generator 132, and both the sound generator 132 and the light generator 131 are electrically connected to the central processing unit.
It is understood that in other embodiments, a plurality of light generators 131 and sound generators 132 may be distributed at the corners of a closed or open space; when the infrared detection unit 110 finds a moving target, the central processing unit controls the light generators 131 and the sound generators 132 to issue alarm signals.
In some other embodiments, the light generator 131 may be replaced by a light strip fixed to the bottom of the surrounding walls with 3M adhesive, similar to a skirting line; when the infrared detection unit 110 finds a moving object, the central processing unit controls the light strip to light up as an alarm.
In some embodiments, as shown in FIG. 5, the detection alarm device further comprises a photosensitive unit electrically connected to the central processing unit.
In the embodiment of the present invention, as shown in fig. 6, the image detection device comprises an image acquisition unit and a control unit which are electrically connected.
It will be appreciated that in other embodiments, the
In the embodiment of the present invention, as shown in fig. 7, the remote monitoring terminal comprises an image display unit and a remote control unit which are electrically connected.
It is understood that in other embodiments, the
In other embodiments, the distributed object monitoring system further comprises a cloud server.
In the embodiment of the present invention, fig. 8 exemplarily shows the connection among the cloud server, the remote monitoring terminal, the image detection device and the detection alarm device.
Specifically, the cloud server is connected with the image detection device and receives the target area image and the state information sent by the image detection device.
In the embodiment of the present invention, fig. 9 exemplarily shows a wireless connection among the cloud server, the remote monitoring terminal, the image detection device and the detection alarm device.
Specifically, the image detection device receives, through its wireless communication unit, information such as the thermal infrared signal and the alarm signal wirelessly transmitted by the detection alarm device, and transmits the thermal infrared signal and the alarm signal, the target area image acquired by the image detection device, the state information of the detection alarm device, and the like to the remote monitoring terminal and the cloud server.
Correspondingly, as shown in fig. 10, an embodiment of the present invention further provides a distributed object monitoring method applied to an image detection device. The method is executed by the control unit in the image detection device and includes:
1010, acquiring a target area image.
In the embodiment of the present invention, the target area is an area of biological or human activity, preferably a pest activity area. The images include video images and still pictures. The target area image captured by the camera is acquired, and the target area image contains a pest or a human body.
1020, inputting the target area image into a preset deep neural network model to obtain a first recognition result.
The first recognition result is obtained by inputting the target area image captured by the camera into the preset deep neural network model, which is obtained by training on a large number of labeled target area images.
1030, obtaining a second recognition result based on the target area image and a preset reference image.
The preset reference image is an image captured in advance that contains no target and serves as a background image. The target area image captured by the image detection device is compared with this background image to obtain the second recognition result.
1040, fusing the first recognition result and the second recognition result to obtain a target object and a target position in the target area image.
In the embodiment of the present invention, the target object is a pest or a human body, and the target position is the position of the pest or human body in the target area image; for example, the position of a minimum bounding rectangle framing the target object may be used. The target object and the target position in the target area image are obtained from the first recognition result and the second recognition result.
1050, acquiring the relation among the target position, time and frequency.
In the embodiment of the invention, the target position can be marked with a coordinate position in an electronic map corresponding to the target area; the coordinate position may be a two-dimensional plane coordinate or a three-dimensional space coordinate. Since the target object may be at different positions at different times, and the number of its activity events differs across time, the device continuously records over 24 hours the positions at which the target object appears and the number of times it appears at each time point, so as to determine the relation among the target position, time and frequency.
1060, determining the activity track of the target object and/or the activity density information of the target object according to the relation among the target position, time and frequency.
In the embodiment of the invention, the camera continuously captures images of the target area, and the control unit analyzes the acquired image information. Connecting the target positions and frequencies of the target object in time sequence into a line forms the activity track of the target object, and statistics over the target positions, time and frequency determine the activity density of the target object. From the activity track and the activity density, the activity area and habits of the target object can be clearly known, for example, the time range in which the target object is most active and the areas in which it prefers to move, so that subsequent control measures can be taken against it.
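A minimal sketch of deriving a track and a density map from timestamped detections; the (time, (x, y)) record layout and the grid-cell positions are assumptions for illustration, not the patent's data format:

```python
from collections import Counter

def activity_summary(detections):
    """Build an activity track (positions ordered by time) and a density
    map (visit counts per map cell) from (time, (x, y)) detection records."""
    ordered = sorted(detections)                    # order records by timestamp
    track = [pos for _, pos in ordered]             # polyline of the activity track
    density = Counter(pos for _, pos in ordered)    # visits per coordinate cell
    return track, density
```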
It will be appreciated that in other embodiments, where the target object is a pest, the activity density information of the target object includes the pest density, peak pest density, average pest density, peak pest number, pest continuous activity time, pest peak continuous activity time, the relation between total pest number and time, and the pest density distribution, among others.
The pest density is the number of pest occurrences per unit time in a unit area:

ρ_i = N_i / (Δt · Δs)

where N_i is the number of pests in the unit time, in units of pests; Δt is the unit time, in a time unit such as minutes or hours; Δs is the unit area, in an area unit such as square meters or square centimeters; ρ_i is the pest density, with a typical unit of pests/(m²·h), which varies at different times.

The peak pest density is the maximum pest density per unit time per unit area:

ρ_max = MAX{ρ_i}

where ρ_i is the pest density monitored in the unit area per unit time at the i-th moment, and ρ_max is the peak pest density over a period of time or within a range of areas.
The average pest density is the average of the pest occurrence density per unit time in a unit area:

ρ̄ = (1/n) Σ_{i=1}^{n} ρ_i

where ρ_i is the pest density monitored in the unit area per unit time at the i-th moment; n is the number of unit time periods within the period; ρ̄ is the average pest density over a period of time or within a range of areas.

The peak pest number is the maximum number of pests present per unit time per unit area:

N_max = MAX{N_i}

where N_i is the number of pests in the unit area at a given moment, in units of pests, and N_max is the peak pest number in the unit area within the time period.
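Under the definitions above, these density statistics can be computed as a minimal sketch; the per-interval counts N_i and the values of Δt and Δs are illustrative assumptions.

```python
def pest_density(counts, dt, ds):
    """rho_i = N_i / (dt * ds) for each unit-time interval."""
    return [n / (dt * ds) for n in counts]

counts = [4, 10, 6]            # N_i: pests seen in each unit-time interval
rho = pest_density(counts, dt=1.0, ds=2.0)  # [2.0, 5.0, 3.0]
rho_max = max(rho)                          # peak pest density
rho_avg = sum(rho) / len(rho)               # average pest density
n_max = max(counts)                         # peak pest number
```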
The pest continuous activity time is the sum of the pest activity times per unit time within the field of view:

T_total = Σ_i T_i

where T_i is the i-th continuous activity time, in a time unit such as minutes or seconds, and T_total is the total activity time. The unit time is generally 24 hours; as long as a pest is seen to be active in the field of view, its activity time is accumulated into the total.

The pest peak continuous activity time is the longest continuous activity time of the pests per unit time within the field of view:

T_max = MAX{T_i}

where T_i is the i-th continuous activity time, in a time unit such as minutes or seconds, and T_max is the peak continuous activity time of the pests within the unit time. The unit time is generally 24 hours.
The total pest-number time is the sum, over the unit time, of the number of pests in the field of view multiplied by their activity time:

NT_total = Σ_{i=1}^{n} N_i · Δt, with n = T / Δt

where Δt is the unit time, in a time unit such as minutes or hours; N_i is the number of pests at a given moment, in units of pests; T is the unit time, generally 24 hours; n is the number of unit time periods within the period; NT_total is the total pest-number time per unit time, in units of pest-hours (只·h), see fig. 11.

The pest density distribution map represents, as a chart, the density of pest occurrences per unit time within the field of view. Each pixel value of the density map represents the number of pest occurrences per unit time at that position in the region, see fig. 12a and 12b.
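The activity-time statistics and the density distribution map above can be sketched as follows; the sample durations, counts, and grid size are illustrative assumptions.

```python
def total_activity_time(durations):
    """T_total = sum of continuous activity times T_i."""
    return sum(durations)

def total_pest_number_time(counts, dt):
    """NT_total = sum_i N_i * dt, in pest-hours when dt is in hours."""
    return sum(n * dt for n in counts)

def density_map(detections, width, height):
    """Each cell counts pest occurrences at that position per unit time,
    matching the pixel values of the density distribution map."""
    grid = [[0] * width for _ in range(height)]
    for x, y in detections:
        grid[y][x] += 1
    return grid

durations = [5, 12, 3]                          # T_i in minutes
t_total = total_activity_time(durations)        # 20
t_max = max(durations)                          # peak continuous activity: 12
nt_total = total_pest_number_time([4, 10, 6], dt=1.0)
grid = density_map([(0, 0), (1, 1), (1, 1)], width=2, height=2)
```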
In the embodiment of the invention, the target area image is acquired through the camera and input into the preset deep neural network model to obtain the first recognition result; the second recognition result is obtained based on the target area image and the preset reference image; the target object and the target position are determined according to the first recognition result and the second recognition result; the relationship among the target position, time and frequency is acquired; and the activity track and/or the activity density of the target object are determined according to that relationship, so that the pest density can be accurately monitored.
In some embodiments, as shown in fig. 13, the method further comprises:
In the embodiment of the present invention, the label of the target object is a target frame in the target area sample image, where the target frame includes information such as the coordinate position and the center position of the target. After a large number of target area sample images are acquired, each sample image needs to be labeled. Specifically, when a candidate target is determined to be a real target, the system completes the labeling automatically; when the candidate target is a suspected target, whether it is the target is judged, and if so, the labeling is completed, otherwise the data is discarded.
And 1320, inputting the marked image into a deep neural network model for training to obtain the preset deep neural network model.
And after all the sample images are labeled, training the model by using the labeled sample images, so as to improve the accuracy of model training and further improve the accuracy of density monitoring. The more sample images, the more situations are covered, and the higher the recognition capability of the deep neural network model is.
In other embodiments, as shown in fig. 14, the preset deep neural network model includes multiple convolutional layers and pooling layers. The obtained target area image is input into the preset deep neural network model, and the input target area image passes through the convolutional layers and pooling layers in sequence to extract features and output the first recognition result, which includes a first target object in the target area image, a first probability corresponding to the first target object, and a first target position of the first target object in the target area image.
It will be appreciated that in other embodiments, the obtained target area image samples may be integrated into a set of image samples, and then the set of image samples may be divided into a training sample set and a testing sample set, wherein the training sample set is used for training the deep neural network model, and the testing sample set is used for testing the trained deep neural network model.
Specifically, each picture in the training sample set is input into the deep neural network model, and the pictures in the training sample set are automatically trained through the deep neural network model to obtain the trained deep neural network model.
Each picture of the test sample set is input into the trained deep neural network model, each input picture is identified by the model to obtain a corresponding recognition result, and the recognition results corresponding to the pictures are integrated into a recognition result set. A test recognition rate is then determined from the number of target objects in the recognition result set and the number of target objects in the test sample set; the test recognition rate measures the recognition capability of the trained deep neural network model. If the test recognition rate reaches the preset threshold, the recognition capability of the trained model meets expectations and the model can be used directly for image recognition. Otherwise, the parameters of the deep neural network model are adjusted and the model is trained again until its recognition rate reaches the preset threshold.
In some embodiments, referring to fig. 15 and fig. 16, the obtaining a second recognition result based on the target area image and a preset reference image includes:
And 1510, acquiring a change partial image relative to the preset reference image in the target area image, and converting the change partial image into a gray image.
Firstly, the obtained target area image is compared with the preset reference image to obtain the changed part of the target area image, and the changed part is converted into a grayscale image. Grayscale conversion removes the color information in the image and reduces the subsequent computation. For example, the grayscale image may be obtained by averaging the values of the three RGB channels at the same pixel position, or by averaging the maximum and minimum luminance values of the RGB channels at the same pixel position; the method of converting the changed partial image into a grayscale image is not limited to these two methods.
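The two grayscale conversion methods mentioned above can be sketched per pixel as follows; the function names are illustrative.

```python
def gray_mean(r, g, b):
    """Grayscale as the average of the three RGB channel values."""
    return (r + g + b) / 3

def gray_minmax(r, g, b):
    """Grayscale as the average of the maximum and minimum RGB values."""
    return (max(r, g, b) + min(r, g, b)) / 2

pixel = (200, 100, 60)
g1 = gray_mean(*pixel)    # 120.0
g2 = gray_minmax(*pixel)  # 130.0
```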
During generation and transmission, an image is subject to interference from various noises, which reduces image quality and affects subsequent processing and analysis. Therefore, noise filtering needs to be performed on the grayscale image; the noise can be filtered by linear filtering, threshold averaging, weighted averaging, template smoothing or other methods, and threshold segmentation then yields a binary image. The pixel points of the filtered binary image are divided into several classes so as to find the target points of interest, and the foreground pixel points that have the same pixel value and are adjacent in position among the target points of interest form a connected region.
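Threshold segmentation and connected-region extraction can be sketched as follows; a simple flood fill with 4-connectivity stands in for whichever connected-region algorithm the implementation uses, and the sample image and threshold are illustrative.

```python
def threshold(gray, t):
    """Threshold segmentation: foreground = pixels brighter than t."""
    return [[1 if v > t else 0 for v in row] for row in gray]

def connected_regions(binary):
    """Label 4-connected foreground regions with a flood fill.
    Returns the label image and the number of regions found."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                n += 1
                stack = [(sy, sx)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y][x] and not labels[y][x]:
                        labels[y][x] = n
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, n

gray = [[10, 200, 10],
        [10, 200, 10],
        [180, 10, 190]]
labels, n = connected_regions(threshold(gray, 128))  # three separate regions
```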
And 1530, acquiring a potential contour of the target object according to the connected region.
The foreground pixel points which have the same pixel value and are adjacent in position in the interested target points form a connected region, and the potential contour of each target can be obtained through the connected region.
In the embodiment of the present invention, in order to identify the potential contour of the target object, morphological operations need to be performed on it. The morphological operations include dilating and filling the closed region: specifically, pixels may be added to the boundary of the target object in the image, and hole filling may be performed on the feature contour map of the target object, so as to obtain a second target object, a second probability corresponding to the second target object, and a second target position of the second target object in the target area image.
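The dilation step can be sketched on a binary mask as follows; the 3x3 cross structuring element is an illustrative choice, and on this ring-shaped example the dilation also closes the one-pixel hole.

```python
def dilate(binary):
    """Morphological dilation with a 3x3 cross: a pixel becomes foreground
    if it or any 4-neighbor is foreground, adding pixels on the boundary
    of the target object."""
    h, w = len(binary), len(binary[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and binary[ny][nx]:
                    out[y][x] = 1
                    break
    return out

ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
filled = dilate(ring)   # the central hole becomes foreground
```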
In some embodiments, as shown in fig. 17, the obtaining the target object and the target position in the target area image according to the first recognition result and the second recognition result includes:
In the embodiment of the invention, the first probability is obtained by identifying the target object through a deep neural network model; the second probability is obtained by image processing of the target object. By comparing the first probability and the second probability, the target object may be determined.
The preset probability threshold may be used as a criterion for evaluating the target object and may be set in advance. If the first probability, the target probability obtained through the deep neural network model recognition, is greater than the second probability, the target probability obtained through the image processing, and the first probability is greater than or equal to the preset probability threshold, the first target object identified by the deep neural network model is taken as the target object and the position of the first target object is taken as the target position. For example, if the preset probability threshold is 60%, the first probability is 70% and the second probability is 40%, then the first probability 70% is greater than the second probability 40% and greater than the preset probability threshold 60%, so the first target object is taken as the target object and the first target position is taken as the target position.
Specifically, if the first probability, which is the target probability obtained through the deep neural network model recognition, is smaller than the second probability, which is the target probability obtained through the image processing, and the second probability is greater than or equal to the preset probability threshold, the second target object obtained through the image processing is taken as the target object, and the position of the second target object is taken as the target position. For example, if the preset probability is 60%, the first probability is 20%, the second probability is 80%, the first probability is 20% smaller than the second probability 80%, and the second probability 80% is greater than the preset probability threshold value 60%, the second target object is taken as the target object, and the second target position is taken as the target position.
Specifically, suppose the preset probability threshold is 60%, the preset second probability threshold is 55%, the first probability is 40%, and the second probability is 18%. It can be seen that both the first probability 40% and the second probability 18% are smaller than the preset probability threshold 60%, but their sum 58% is greater than the preset second probability threshold 55%, so the recognition result is kept as a suspected target.
If both probabilities fall below the thresholds, the first recognition result, namely the first target object, its position, and the first probability obtained through the deep neural network model, is discarded, and the second recognition result, namely the second target object, its position, and the second probability obtained through the image processing, is discarded at the same time. For example, if the preset probability threshold is 60%, the preset second probability threshold is 55%, the probability of the first target object obtained through the deep neural network model recognition is 40%, and the probability of the second target object obtained through the image processing is 10%, then both probabilities are smaller than the preset probability threshold 60% and their sum 50% is smaller than the preset second probability threshold 55%, so both recognition results are discarded.
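The four-way decision rule described above can be sketched as follows; the function name `fuse` is illustrative, and the threshold values follow the worked examples in the text.

```python
def fuse(p1, p2, p_th=0.60, p2_th=0.55):
    """Decide between the DNN result (probability p1) and the
    image-processing result (probability p2)."""
    if p1 > p2 and p1 >= p_th:
        return "first"       # use the first target object and position
    if p1 < p2 and p2 >= p_th:
        return "second"      # use the second target object and position
    if p1 < p_th and p2 < p_th and p1 + p2 > p2_th:
        return "suspected"   # keep as a suspected target
    return "discard"         # drop both recognition results

assert fuse(0.70, 0.40) == "first"      # 70% vs 40%, above threshold
assert fuse(0.20, 0.80) == "second"     # 20% vs 80%, above threshold
assert fuse(0.40, 0.18) == "suspected"  # both low, sum 58% > 55%
assert fuse(0.40, 0.10) == "discard"    # both low, sum 50% < 55%
```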
Correspondingly, an embodiment of the present invention further provides a distributed target monitoring method, as shown in fig. 18 to 19, which is applied to a detection alarm device, where the method is executed by a central processing unit in the detection alarm device, and the method includes:
In the embodiment of the invention, the detection alarm device is placed in a room or at each corner of the room, the central processing unit in the detection alarm device acquires a thermal infrared signal sent by an infrared detection unit, the infrared detection unit can be an infrared sensor or the like, the number of the infrared detection units can be one or more, and reference can be made to the system embodiment.
In the embodiment of the present invention, the thermal infrared signal is a characteristic signal emitted by a target object within the sensing range. Since the target object may be located at different positions at different time points and its activity frequency differs across time points, the infrared detection unit continuously acquires, over 24 hours, the positions at which the target object appears and the number of times it appears at different time points, thereby determining the relationship among the target position, time and frequency.
In the embodiment of the invention, the infrared detection unit continuously performs infrared sensing on the target area, and sends the sensed thermal infrared signal to the central processing unit, the central processing unit connects the target position and the frequency of the target object on the time sequence into a line so as to form the activity track of the target object, and the target position, the time and the frequency are counted so as to determine the activity density of the target object.
It is understood that when the target object is a pest, the activity density information of the target object includes: the pest density at a detection alarm device, the pest density in the detection alarm device area, the average pest density in the detection alarm device area, the pest density distribution map of the detection alarm device area, and the like.
The pest density at a detection alarm device is the number of times the device detects pests per unit time:

ρ = N / T

where N is the total number of pests found in the unit time, in units of pests; T is the unit time, in a time unit such as hours or days; ρ is the pest density, with a typical unit of pests per day.
The pest density in the detection alarm device area is based on the number of times the devices in a unit area find pests per unit time:

ρ_i = N_i / T,  ρ_total = Σ_{i=1}^{n} ρ_i

where N_i is the total number of pests found by the i-th device in the unit time, in units of pests; T is the unit time, in a time unit such as hours or days; ρ_i is the pest density monitored by the i-th device, with a typical unit of pests per day; n is the number of detection alarm devices in the unit area; ρ_total is the pest density in the area over the period, with a typical unit of pests per day.
The average pest density in the detection alarm device area is the average of the pest occurrence density per unit area per unit time:

ρ̄ = (1/n) Σ_{i=1}^{n} ρ_i

where ρ_i = N_i / T is the pest density monitored by the i-th device; N_i is the total number of pests found by the i-th device in the unit time, in units of pests; T is the unit time, in a time unit such as hours or days; n is the number of detection alarm devices in the unit area; ρ̄ is the average pest density over the area and period, with a typical unit of pests per day.

The pest density distribution map of the detection alarm device area shows, as a chart, the density of pest occurrences per unit area per unit time. The value at each sensing location of the density map indicates the number of times a pest has occurred per unit time at that location.
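The device-level and area-level densities above can be sketched as follows; the per-device counts and the unit time are illustrative assumptions.

```python
def device_density(n_found, t):
    """rho = N / T: pests found by one device per unit time."""
    return n_found / t

def area_density(per_device_counts, t):
    """rho_total: sum of per-device densities rho_i = N_i / T over the
    n detection alarm devices in a unit area."""
    return sum(n / t for n in per_device_counts)

def average_area_density(per_device_counts, t):
    """Average of the per-device densities over the n devices."""
    return area_density(per_device_counts, t) / len(per_device_counts)

counts = [6, 2, 4]                 # N_i for each device, per one-day unit time
rho_total = area_density(counts, t=1.0)        # 12.0 pests/day in the area
rho_avg = average_area_density(counts, t=1.0)  # 4.0 pests/day per device
```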
In the embodiment of the invention, the thermal infrared signals in the sensing range are acquired through the detection alarm device, the relation between the target position, the time and the frequency is determined according to the thermal infrared signals, and the activity track of the target object and/or the activity density information of the target object is determined according to the relation between the target position, the time and the frequency, so that the density of harmful organisms can be accurately monitored.
Accordingly, as shown in fig. 20, an embodiment of the present invention further provides a distributed target monitoring apparatus, which includes:
a first acquiring module 2010, configured to acquire a target area image;
the input module 2020, configured to input the target area image into a preset deep neural network model, so as to obtain a first recognition result;
a second obtaining module 2030, configured to obtain a second recognition result based on the target area image and a preset reference image;
a third obtaining module 2040, configured to obtain a target object and a target position in the target area image according to the first recognition result and the second recognition result;
a fourth obtaining module 2050, configured to obtain a relationship between the target position, the time, and the frequency;
a determining module 2060, configured to determine the activity track of the target object and/or the activity density information of the target object according to the relationship between the target position, the time, and the frequency.
According to the distributed target monitoring apparatus provided by the embodiment of the invention, the target area image is acquired through the first acquiring module; the acquired target area image is input into the preset deep neural network model through the input module to obtain the first recognition result; the second acquiring module obtains the second recognition result based on the target area image and the preset reference image; the third acquiring module obtains the target object and the target position in the target area image according to the first recognition result and the second recognition result; the fourth acquiring module obtains the relationship among the target position, time and frequency; and the determining module determines the activity track and/or the activity density of the target object according to the relationship among the target position, time and frequency, so that the pest density can be accurately monitored.
Optionally, in another embodiment of the apparatus, as shown in fig. 20, the apparatus further includes:
the labeling module 2070 obtains a sample image of a target area and a target object, labels the target object in the sample image, and generates labeling information.
The training module 2080 inputs the labeled image into the deep neural network model for training to obtain the preset deep neural network model.
Optionally, in other embodiments of the apparatus, the input module 2020 is specifically configured to:
the preset deep neural network model comprises a plurality of convolution layers and a pooling layer, and the first recognition result comprises a first target object in the target area image, a first probability corresponding to the first target object and a first target position of the first target object in the target area image.
Optionally, in other embodiments of the apparatus, the second obtaining module 2030 is specifically configured to:
acquiring a change partial image relative to the preset reference image in the target area image, and converting the change partial image into a gray image;
performing noise filtering and threshold segmentation on the grayscale image to obtain a binary image, and obtaining connected regions in the binary image through a connected-region algorithm;
acquiring a potential contour of the target object according to the connected region;
and performing morphological operation on the potential contour of the target object to obtain a second identification result, wherein the second identification result comprises a second target object in the target area image, a second probability corresponding to the second target object and a second target position of the second target object in the target area image.
Optionally, in other embodiments of the apparatus, the third obtaining module 2040 is specifically configured to:
comparing the first probability and the second probability;
if the first probability is greater than the second probability and the first probability is greater than or equal to a preset probability threshold, regarding the first target object as a target object and regarding the first target location as a target location;
if the first probability is smaller than the second probability and the second probability is greater than or equal to the preset probability threshold, taking the second target object as the target object and the second target position as the target position;
if the first probability and the second probability are both smaller than the preset probability threshold, but the sum of the first probability and the second probability is greater than the preset second probability threshold, keeping the recognition result as a suspected target;
and if the first probability and the second probability are both smaller than a preset probability threshold value, and the sum of the first probability and the second probability is smaller than a preset second probability threshold value, discarding the first recognition result and the second recognition result.
It should be noted that the distributed target monitoring apparatus can execute the distributed target monitoring method provided by the embodiment of the present invention, and has corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in the embodiment of the distributed object monitoring apparatus, please refer to the distributed object monitoring method provided in the embodiment of the present invention.
Fig. 21 is a schematic diagram of a hardware configuration of the control unit in the image detection apparatus according to the embodiment of the present invention. As shown in fig. 21, the control unit includes one or more processors and a memory, which may be connected through a bus or in other ways. The memory, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the distributed target monitoring method in the embodiments of the present invention. The one or more modules are stored in the memory and, when executed by the one or more processors, perform the distributed target monitoring method in any of the above method embodiments.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a computer readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.