Method for judging whether objects detected in two consecutive frames of a video are the same

Document No.: 1906218    Publication date: 2021-11-30

Reading note: This technology, "Method for judging whether objects detected in two consecutive frames of a video are the same" (判断视频前后两帧图像中所检测到的物体是否相同的方法), was designed and created by 戴阳, 韦波, 王斐, 杨胜龙, 樊伟, 吴祖立 and 范秀梅 on 2021-08-30. Its main content is as follows: the invention relates to a method for judging whether objects detected in two consecutive frames of a video are the same, comprising the following steps: recording a first coordinate set and a first RGB value of the target object detected by a target detection model in the current frame image; recording a second coordinate set and a second RGB value of the target object detected by the target detection model in the next frame image; calculating, from the first coordinate set, the first RGB value, the second coordinate set and the second RGB value, the area ratio, the overlap ratio and the color difference degree of the target object detected in the two frames; and comparing the area ratio, the overlap ratio and the color difference degree with set thresholds, and determining from the comparison result whether the objects are the same. The invention records data about detected objects more accurately and avoids duplicate records.

1. A method for judging whether objects detected in two consecutive frames of a video are the same, characterized by comprising the following steps:

(1) recording a first coordinate set and a first RGB value of a target object detected by a target detection model for a current frame image, wherein the first RGB value is an average value of the RGB values of the target object detected by the current frame image;

(2) recording a second coordinate set and a second RGB value of the target object detected by the target detection model for the next frame of image, wherein the second RGB value is an average value of the RGB values of the target object detected by the next frame of image;

(3) calculating, from the first coordinate set, the first RGB value, the second coordinate set and the second RGB value, the area ratio, the overlap ratio and the color difference degree of the target object detected in the two frames;

(4) comparing the area ratio, the overlap ratio and the color difference degree with set thresholds, and determining from the comparison result whether the objects are the same.

2. The method according to claim 1, wherein in step (3) the area ratio of the target object detected in the two frames is calculated by q_area = min(S0, S1) / max(S0, S1), where q_area is the area ratio of the target object detected in the two frames; S0 denotes the area of the target object detected in the current frame image, S0 = |(x02 - x01) * (y02 - y01)|, with (x01, y01) and (x02, y02) the upper-left and lower-right coordinates in the first coordinate set; and S1 denotes the area of the target object detected in the next frame image, S1 = |(x12 - x11) * (y12 - y11)|, with (x11, y11) and (x12, y12) the upper-left and lower-right coordinates in the second coordinate set.

3. The method according to claim 1, wherein in step (3) the overlap ratio of the target object detected in the two frames is calculated by q_overlap = S_overlap / S0, where q_overlap is the overlap ratio of the target object detected in the two frames; S0 denotes the area of the target object detected in the current frame image, S0 = |(x02 - x01) * (y02 - y01)|, with (x01, y01) and (x02, y02) the upper-left and lower-right coordinates in the first coordinate set; and S_overlap denotes the area of the overlapping portion of the target object detected in the two frames, S_overlap = (min(x02, x12) - max(x01, x11)) * (min(y02, y12) - max(y01, y11)), with (x11, y11) and (x12, y12) the upper-left and lower-right coordinates in the second coordinate set.

4. The method according to claim 3, wherein before the overlap ratio of the target object detected in the two frames is calculated, whether the target objects detected in the two frames overlap is judged by result = (lx <= (sx_front + sx_rear)/2) && (ly <= (sy_front + sy_rear)/2), where each parenthesized comparison takes the value 1 when it holds and 0 otherwise, && denotes logical AND, result = 1 indicates that there is overlap and result = 0 indicates that there is no overlap, lx = |(x01 + x02)/2 - (x11 + x12)/2|, ly = |(y01 + y02)/2 - (y11 + y12)/2|, sx_front = |x01 - x02|, sx_rear = |x11 - x12|, sy_front = |y01 - y02|, and sy_rear = |y11 - y12|.

5. The method according to claim 1, wherein in step (3) the color difference degree of the target object detected in the two frames is calculated from the first RGB value (R0, G0, B0) and the second RGB value (R1, G1, B1).

6. The method according to claim 1, wherein the first RGB value and the second RGB value are obtained as follows: determining the position of the center point of the target object from the coordinates of the target object, taking a rectangular area centered on that point whose length and width are respectively 20% of the length and width of the target object, and calculating the average of the RGB values within that rectangular area.

7. The method according to claim 1, wherein step (4) is specifically: when the area ratio exceeds a first threshold, the overlap ratio exceeds a second threshold and the color difference degree is below a third threshold, determining that the objects are the same.

8. The method according to claim 7, wherein the first threshold is 90%, the second threshold is 50%, and the third threshold is 15%.

Technical Field

The invention relates to the technical field of video detection, and in particular to a method for judging whether objects detected in two consecutive frames of a video are the same.

Background

As deep-learning-based video detection technology has matured, so has the practice of coupling a camera with a deep learning model to record and store data about detected target objects. Long-term unmanned monitoring of target changes within a region can therefore be achieved by embedding a target detection model in the camera.

To obtain the number of targets within a monitored area over a period of time, data about each detected target object must be recorded. When the system counts targets over a period, the same object appearing in different frames must be counted only once, so it is necessary to judge whether the objects detected in two consecutive frames are the same object.

Disclosure of Invention

The technical problem to be solved by the invention is to provide a method for judging whether objects detected in two consecutive frames of a video are the same, so that data about detected objects can be recorded more accurately and duplicate records are avoided.

The technical solution adopted by the invention to solve this problem is a method for judging whether objects detected in two consecutive frames of a video are the same, comprising the following steps:

(1) recording a first coordinate set and a first RGB value of a target object detected by a target detection model for a current frame image, wherein the first RGB value is an average value of the RGB values of the target object detected by the current frame image;

(2) recording a second coordinate set and a second RGB value of the target object detected by the target detection model for the next frame of image, wherein the second RGB value is an average value of the RGB values of the target object detected by the next frame of image;

(3) calculating, from the first coordinate set, the first RGB value, the second coordinate set and the second RGB value, the area ratio, the overlap ratio and the color difference degree of the target object detected in the two frames;

(4) comparing the area ratio, the overlap ratio and the color difference degree with set thresholds, and determining from the comparison result whether the objects are the same.

In step (3), the area ratio of the target object detected in the two frames is calculated by q_area = min(S0, S1) / max(S0, S1), where q_area is the area ratio of the target object detected in the two frames; S0 denotes the area of the target object detected in the current frame image, S0 = |(x02 - x01) * (y02 - y01)|, with (x01, y01) and (x02, y02) the upper-left and lower-right coordinates in the first coordinate set; and S1 denotes the area of the target object detected in the next frame image, S1 = |(x12 - x11) * (y12 - y11)|, with (x11, y11) and (x12, y12) the upper-left and lower-right coordinates in the second coordinate set.

In step (3), the overlap ratio of the target object detected in the two frames is calculated by q_overlap = S_overlap / S0, where q_overlap is the overlap ratio of the target object detected in the two frames; S0 denotes the area of the target object detected in the current frame image, S0 = |(x02 - x01) * (y02 - y01)|, with (x01, y01) and (x02, y02) the upper-left and lower-right coordinates in the first coordinate set; and S_overlap denotes the area of the overlapping portion of the target object detected in the two frames, S_overlap = (min(x02, x12) - max(x01, x11)) * (min(y02, y12) - max(y01, y11)), with (x11, y11) and (x12, y12) the upper-left and lower-right coordinates in the second coordinate set.

Before the overlap ratio of the target object detected in the two frames is calculated, whether the target objects detected in the two frames overlap is judged by result = (lx <= (sx_front + sx_rear)/2) && (ly <= (sy_front + sy_rear)/2), where each parenthesized comparison takes the value 1 when it holds and 0 otherwise, && denotes logical AND, result = 1 indicates that there is overlap and result = 0 indicates that there is no overlap, lx = |(x01 + x02)/2 - (x11 + x12)/2|, ly = |(y01 + y02)/2 - (y11 + y12)/2|, sx_front = |x01 - x02|, sx_rear = |x11 - x12|, sy_front = |y01 - y02|, and sy_rear = |y11 - y12|.

In step (3), the color difference degree of the target object detected in the two frames is calculated from the first RGB value (R0, G0, B0) and the second RGB value (R1, G1, B1).

The first RGB value and the second RGB value are obtained as follows: determining the position of the center point of the target object from the coordinates of the target object, taking a rectangular area centered on that point whose length and width are respectively 20% of the length and width of the target object, and calculating the average of the RGB values within that rectangular area.

Step (4) is specifically: when the area ratio exceeds a first threshold, the overlap ratio exceeds a second threshold and the color difference degree is below a third threshold, determining that the objects are the same.

The first threshold is 90%, the second threshold is 50%, and the third threshold is 15%.

Advantageous effects

By adopting this technical solution, the invention has the following advantages and positive effects over the prior art: the method judges whether two detections are the same object from the area ratio, the overlap ratio and the color difference degree of the objects detected in the two consecutive frames, which improves the accuracy of target counting. The invention also has a small computational load and low hardware requirements.

Drawings

FIG. 1 is a flow chart of an embodiment of the present invention;

FIG. 2 is a diagram illustrating the detection of a target object at a current frame in an embodiment of the present invention;

FIG. 3 is a schematic diagram showing, in a single drawing, the objects detected in the two consecutive frames in an embodiment of the present invention.

Detailed Description

The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention may be made by those skilled in the art after reading the teaching of the present invention, and such equivalents may fall within the scope of the present invention as defined in the appended claims.

The embodiment of the invention relates to a method for judging whether objects detected in two consecutive frames of a video are the same. First, all objects in a frame image are detected by a target detection model, and the upper-left and lower-right coordinates of each detected object, together with the RGB color values of the target object, are recorded. Whether two detections are the same object is then judged by processing the upper-left coordinates, lower-right coordinates and RGB values of the objects in the two frames, according to the following criterion: if the area ratio, the overlap ratio and the color difference degree of the two objects all satisfy the conditions, they represent the same object. As shown in fig. 1, the specific steps are as follows:

(1) Recording a first coordinate set and a first RGB value of the target object detected by the target detection model in the current frame image, wherein the first RGB value is the average of the RGB values of the target object detected in the current frame image, and the first coordinate set comprises the upper-left coordinate (x01, y01) and the lower-right coordinate (x02, y02) of the target object detected in the current frame image. Fig. 2 is a schematic diagram of the target object detected in the current frame, where box A is the detection box of the target object.

(2) Recording a second coordinate set and a second RGB value of the target object detected by the target detection model in the next frame image, wherein the second RGB value is the average of the RGB values of the target object detected in the next frame image, and the second coordinate set comprises the upper-left coordinate (x11, y11) and the lower-right coordinate (x12, y12) of the target object detected in the next frame image. Fig. 3 is a schematic diagram showing the objects detected in the two consecutive frames in the same drawing, where box A is the detection box of the target object in the current frame image and box B is the detection box of the target object in the next frame image.

In this embodiment, the first RGB value and the second RGB value are obtained by the following steps: first, the position of the center point of the target object is determined from its upper-left and lower-right coordinates; then a rectangular area centered on that point, whose length and width are respectively 20% of the length and width of the target object, is taken, and the average of the RGB values within that rectangular area is calculated. The two averages are recorded as (R0, G0, B0) and (R1, G1, B1) respectively.
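This center-patch sampling can be sketched as follows, assuming the frame is available as an H x W x 3 RGB NumPy array; the function name mean_center_rgb and the array layout are illustrative assumptions, not part of the patent:

import numpy as np

def mean_center_rgb(frame: np.ndarray, x1: int, y1: int, x2: int, y2: int) -> tuple:
    # Center of the detection box given by its upper-left (x1, y1)
    # and lower-right (x2, y2) corners.
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    # Rectangle whose width and height are 20% of the box's width and height.
    w, h = abs(x2 - x1) * 0.2, abs(y2 - y1) * 0.2
    x_lo, x_hi = int(cx - w / 2), int(cx + w / 2) + 1
    y_lo, y_hi = int(cy - h / 2), int(cy + h / 2) + 1
    patch = frame[y_lo:y_hi, x_lo:x_hi]      # rows index y, columns index x
    r, g, b = patch.reshape(-1, 3).mean(axis=0)
    return (r, g, b)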

(3) Calculating, from the first coordinate set, the first RGB value, the second coordinate set and the second RGB value, the area ratio, the overlap ratio and the color difference degree of the target object detected in the two frames.

When calculating the area ratio, let S0 denote the area of the target object detected in the current frame image and S1 the area of the target object detected in the next frame image, given by:

S0=|(x02-x01)*(y02-y01)|

S1=|(x12-x11)*(y12-y11)|

To unify the criterion, the area ratio is constrained to always be less than or equal to 1, which gives:

q_area = min(S0, S1) / max(S0, S1)
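A minimal sketch of this area-ratio computation, with boxes given as (x1, y1, x2, y2) tuples of upper-left and lower-right corners (the helper name is illustrative):

def area_ratio(box0: tuple, box1: tuple) -> float:
    # S0 and S1: areas of the detections in the current and next frame.
    s0 = abs((box0[2] - box0[0]) * (box0[3] - box0[1]))
    s1 = abs((box1[2] - box1[0]) * (box1[3] - box1[1]))
    # Dividing the smaller by the larger keeps q_area <= 1, as required.
    return min(s0, s1) / max(s0, s1)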

Before calculating the overlap ratio, it is usually necessary to determine whether there is any overlap at all, which this embodiment judges as follows:

Let lx denote the absolute difference between the X-axis midpoints of the objects detected in the two frames, and ly the absolute difference between their Y-axis midpoints; then:

lx = |(x01 + x02)/2 - (x11 + x12)/2|

ly = |(y01 + y02)/2 - (y11 + y12)/2|

At the same time, let sx_front and sx_rear denote the absolute X-axis extents (widths) of the objects detected in the current and next frames respectively, and sy_front and sy_rear the corresponding absolute Y-axis extents (heights):

sx_front = |x01 - x02|

sx_rear = |x11 - x12|

sy_front = |y01 - y02|

sy_rear = |y11 - y12|

Therefore, the formula for judging whether the two objects overlap is:

result = (lx <= (sx_front + sx_rear)/2) && (ly <= (sy_front + sy_rear)/2)

where each parenthesized comparison takes the value 1 when it holds and 0 otherwise, && denotes logical AND, result = 1 indicates that there is overlap, and result = 0 indicates that there is no overlap.
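This pre-check is the standard center-distance test for axis-aligned boxes; a minimal sketch under the same box convention as above:

def boxes_overlap(box0: tuple, box1: tuple) -> bool:
    # lx, ly: distance between the two box centers on each axis.
    lx = abs((box0[0] + box0[2]) / 2 - (box1[0] + box1[2]) / 2)
    ly = abs((box0[1] + box0[3]) / 2 - (box1[1] + box1[3]) / 2)
    # Widths and heights of the two boxes.
    sx_front, sx_rear = abs(box0[2] - box0[0]), abs(box1[2] - box1[0])
    sy_front, sy_rear = abs(box0[3] - box0[1]), abs(box1[3] - box1[1])
    # Overlap iff, on both axes, the center distance does not exceed
    # half the sum of the extents.
    return lx <= (sx_front + sx_rear) / 2 and ly <= (sy_front + sy_rear) / 2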

In the practical target detection application of this embodiment, where the object speed is low, the time interval between consecutive video frames is short and the target density is low, two detections of the same object always have an overlapping portion, whose area is calculated as:

S_overlap = (min(x02, x12) - max(x01, x11)) * (min(y02, y12) - max(y01, y11))

Therefore, the overlap ratio is calculated by the formula:

q_overlap = S_overlap / S0
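A sketch of the overlap-ratio computation, guarded by the pre-check above (helper names are illustrative):

def overlap_ratio(box0: tuple, box1: tuple) -> float:
    if not boxes_overlap(box0, box1):
        return 0.0
    # Width and height of the intersection rectangle.
    w = min(box0[2], box1[2]) - max(box0[0], box1[0])
    h = min(box0[3], box1[3]) - max(box0[1], box1[1])
    s0 = abs((box0[2] - box0[0]) * (box0[3] - box0[1]))  # current-frame area S0
    return (w * h) / s0  # q_overlap = S_overlap / S0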

When calculating the color difference degree, the difference between the colors of the two objects in the two consecutive frames is defined in terms of the first RGB value (R0, G0, B0) and the second RGB value (R1, G1, B1).
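The exact formula is not reproduced in this text. The sketch below substitutes one plausible definition, the mean absolute per-channel difference normalized by 255 so that it can be compared against a percentage threshold; this definition is an assumption, not the patented formula:

def color_difference(rgb0: tuple, rgb1: tuple) -> float:
    # ASSUMED definition: mean absolute per-channel difference in [0, 1];
    # the patent's own formula is not recoverable from this text.
    return sum(abs(a - b) for a, b in zip(rgb0, rgb1)) / (3 * 255.0)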

(4) and comparing the area ratio, the coincidence ratio and the color difference with a set threshold value, and determining whether the objects are the same according to the comparison result. The method specifically comprises the following steps: and when the area ratio exceeds a first threshold value, the coincidence ratio exceeds a second threshold value and the color difference degree is lower than a third threshold value, determining that the objects are the same. To determine the specific values of the above three thresholds, the present embodiment uses as test data the video of surface garbage of 20 total flowing waters taken by the inland rehshing island canal, the shanghai south-bound town beach, and the hong kong dovetail port. In the detection of 20 segments of video, the present embodiment finally shows that when the set thresholds of the area ratio, the coincidence ratio, and the color difference are greater than 90%, greater than 50%, and less than 15%, respectively, the average accuracy of the determination in 20 segments of video is as high as 95%.

(5) Revising the floating-object count in the statistical data according to whether the two detections are finally determined to be the same object.

The method judges whether two detections are the same object from the area ratio, the overlap ratio and the color difference degree of the objects detected in the two consecutive frames, which improves the accuracy of target counting. The invention also has a small computational load and low hardware requirements.
