Object tracking method based on image

Document No. 156301 · Published 2021-10-26

Note: This technique, "Object tracking method based on image," was created by 康哲源, 蔡政达, and 林彦宇 on 2020-04-24. Abstract: The invention relates to an image-based object tracking method comprising: obtaining, by a host, a monitoring image from a camera device; selecting, by the host according to an object command, one of a plurality of preselected objects as a target object; obtaining, by a terminal device according to a circle-selection command, a detection area in the monitoring image; selecting, by the host according to an item command, one of a plurality of preselected items as a tracking item; executing, by the host, an object tracking procedure based on the target object, the detection area, and the tracking item to generate a tracking result; and outputting the tracking result.

1. An object tracking method based on images, wherein the object tracking method comprises:

obtaining, by a host, a monitoring image from a camera device;

selecting, by the host according to an object command, one of a plurality of preselected objects as a target object;

obtaining, by a terminal device according to a circle-selection command, a detection area in the monitoring image;

selecting, by the host according to an item command, one of a plurality of preselected items as a tracking item;

executing, by the host, an object tracking procedure based on the target object, the detection area, and the tracking item to generate a tracking result; and

outputting the tracking result.

2. The image-based object tracking method of claim 1, wherein the object tracking procedure comprises:

determining whether the monitoring image contains the target object;

when the monitoring image is determined to contain the target object, determining whether the target object is located in the detection area of the monitoring image; and

when the target object is determined to be located in the detection area, generating coordinate information and time information associated with the target object.

3. The image-based object tracking method of claim 2, wherein the tracking item comprises a behavior rule, and after generating the coordinate information and the time information associated with the target object, the object tracking method further comprises:

determining whether a behavior of the target object in the detection area complies with the behavior rule; and

when the behavior of the target object is determined to comply with the behavior rule, recording the coordinate information and the time information of the target object and using them as the tracking result.

4. The image-based object tracking method of claim 2, wherein the tracking item comprises a behavior rule, and after generating the coordinate information and the time information associated with the target object, the object tracking method further comprises:

determining whether a behavior of the target object in the detection area complies with the behavior rule; and

when the behavior of the target object is determined not to comply with the behavior rule, recording the coordinate information and the time information of the target object and using them as the tracking result.

5. The image-based object tracking method according to claim 3, wherein recording the coordinate information and the time information of the target object comprises: counting an event record based on the target object complying with the behavior rule, wherein the event record corresponds to the behavior rule.

6. The image-based object tracking method of claim 1, wherein the object tracking method further comprises: obtaining another detection area, and executing the object tracking procedure based on the target object, the other detection area, and the tracking item to generate another tracking result.

7. The image-based object tracking method of claim 1, wherein the monitoring image obtained by the camera device is associated with a monitoring field, and the preselected items correspond to the monitoring field.

8. The image-based object tracking method of claim 7, wherein before obtaining the monitoring image from the camera device, the object tracking method further comprises: selecting one of a plurality of preselected fields as the monitoring field, wherein each of the preselected fields is provided with a corresponding camera device.

9. The image-based object tracking method of claim 1, wherein after obtaining the detection area, the object tracking method further comprises: displaying the detection area and the monitoring image by the terminal device.

10. The image-based object tracking method according to claim 1, wherein outputting the tracking result comprises: presenting the tracking result by the terminal device.

Technical Field

The present invention relates to an image-based object tracking method, and more particularly, to an image-based object tracking method capable of flexibly selecting a field to be tracked, an object to be tracked, a detection region, and a tracking event.

Background

With increasingly diverse requirements for image monitoring and the growing complexity of monitored fields, a fixed monitoring screen can no longer meet user needs. Intelligent Video Surveillance (IVS) technology has therefore emerged; such a system may, for example, issue a warning notification when it detects an abnormal event.

However, existing intelligent image monitoring can only set a fixed detection area in the monitoring screen and detect a specific event. As user demands become more varied and monitoring needs grow, adjustable settings for the monitoring area and the monitored events are an inevitable trend.

Disclosure of Invention

In view of the above, the present invention provides an image-based object tracking method that satisfies the above-mentioned needs.

An image-based object tracking method according to an embodiment of the present invention includes: obtaining, by a host, a monitoring image from a camera device; selecting, by the host according to an object command, one of a plurality of preselected objects as a target object; obtaining, by a terminal device according to a circle-selection command, a detection area in the monitoring image; selecting, by the host according to an item command, one of a plurality of preselected items as a tracking item; executing, by the host, an object tracking procedure based on the target object, the detection area, and the tracking item to generate a tracking result; and outputting the tracking result.
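The sequence of steps above can be sketched as a minimal Python outline. This is an illustrative skeleton only: the class, field, and function names are not from the patent, and the per-frame detection format is assumed.

```python
from dataclasses import dataclass

@dataclass
class TrackingConfig:
    """Choices made in steps S20-S40; field names are illustrative."""
    target_object: str    # one of the preselected objects, e.g. "car"
    detection_area: list  # points from the circle-selection command
    tracking_item: str    # one of the preselected items, e.g. "dwell_time"

def run_object_tracking(frames, config):
    """Step S50 skeleton: keep only detections of the target class per frame.
    Each frame is assumed to be a list of (label, x, y) detections; a real
    system would obtain these from an object recognizer."""
    results = []
    for t, frame in enumerate(frames):
        for label, x, y in frame:
            if label == config.target_object:
                results.append((t, x, y))  # time + coordinate information
    return results
```

A tracking run over two frames then reduces to filtering each frame's detections against the configured target class.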

In summary, the image-based object tracking method according to one or more embodiments of the present invention allows the field to be monitored, the object to be tracked, the detection area within the field, and the event to be tracked to be selected flexibly, according to usage requirements, within the same monitoring screen. It can also generate a corresponding result signal and/or notification based on the behavior of the target object and a behavior rule, so that the monitoring center can keep track of the status of the monitored field. In addition, the tracking result can be output immediately and displayed together with the monitoring screen, so that monitoring personnel can view it in real time. Furthermore, the method effectively reduces the computational load of the monitoring system.

The foregoing description of the disclosure and the following description of the embodiments are provided to illustrate and explain the spirit and principles of the invention and to provide further explanation of the invention as claimed.

Drawings

FIG. 1 is a system block diagram illustrating one embodiment of implementing the image-based object tracking method of the present invention.

FIG. 2 is a flowchart illustrating an image-based object tracking method according to an embodiment of the invention.

FIG. 3 is a detailed flowchart of step S50 of FIG. 2.

FIG. 4 is another detailed flowchart of step S50 of FIG. 2.

Description of reference numerals:

100 host

200 image pickup device

300 terminal device

Detailed Description

The detailed features and advantages of the present invention are described in the following embodiments in sufficient detail to enable anyone skilled in the art to understand and implement its technical content. The objects and advantages of the invention can be readily understood from the disclosure of this specification, the claims, and the drawings. The following examples illustrate aspects of the invention in further detail but are not intended to limit its scope in any way.

The image-based object tracking method disclosed in one or more embodiments of the present invention is preferably implemented by a host, which may be a monitoring-center host, a server, or another device with computing capability. The monitoring center may be a traffic monitoring center for general roads or expressways, a monitoring center for people flow in an indoor or outdoor space, or a monitoring center for animals in a field or farm; the invention is not limited in this respect.

Referring to fig. 1 and 2 together, wherein fig. 1 is a block diagram illustrating a system for implementing an embodiment of the image-based object tracking method according to the present invention; FIG. 2 is a flowchart illustrating an image-based object tracking method according to an embodiment of the invention.

In detail, the system for implementing the image-based object tracking method of the present invention preferably comprises a host 100, a camera device 200, and a terminal device 300, wherein the host 100 is communicatively connected to the camera device 200 and the terminal device 300.

Step S10: acquiring a monitoring image.

The host 100 obtains the monitoring image from the camera device 200. The monitoring image is preferably a sequence of continuous frames, such as a video stream. The camera device 200 is, for example, a camera installed along a road, in which case the monitoring image shows the road and its surroundings.

Before the monitoring image is acquired by the image capturing device 200, the image-based object tracking method according to the present invention may further include: one of a plurality of preselected fields is selected as a monitoring field, wherein each of the preselected fields is provided with a corresponding camera device.

In other words, the host 100 may receive a command from the terminal device 300 to select, as the monitoring field, one of a plurality of preselected fields in which individual camera devices are installed; the monitoring image acquired by that camera device is associated with the monitoring field. For example, if the monitoring field is a road equipped with the camera device 200, the monitoring image shows the road and its surroundings; if the monitoring field is a parking lot equipped with the camera device 200, the monitoring image shows the parking lot. The invention does not limit which field is monitored.

Step S20: one of the plurality of preselected objects is selected as a target object according to the object command.

For example, the plurality of preselected objects may include cars, buses, trucks, bicycles, motorcycles, pedestrians, and the like. The host 100 may select one or more of the preselected objects as target objects based on the object command received by the terminal device 300.

Step S30: obtaining a detection area in the monitoring image according to the circle-selection command.

The circle-selection command is, for example, input by the user and received by the terminal device 300. If the user clicks two points on the terminal device 300, the terminal device 300 connects them into a line segment and uses that segment as the detection area; if the user clicks three points, the terminal device 300 connects them in sequence into a closed region and uses that region as the detection area. That is, the detection area may be a one-dimensional line segment or a two-dimensional region with an area.
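The two cases above (a line segment from two clicked points, a closed region from three or more) can be sketched with plain geometry. The function names are illustrative, and the ray-casting containment test is a standard technique, not an implementation taken from the patent.

```python
def make_detection_area(points):
    """Two clicked points -> ("segment", pts); three or more -> ("polygon", pts)."""
    if len(points) < 2:
        raise ValueError("need at least two points")
    return ("segment" if len(points) == 2 else "polygon"), list(points)

def point_in_polygon(pt, polygon):
    """Ray-casting test: a point lies inside a closed region if a horizontal
    ray from it crosses the boundary an odd number of times."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

The polygon case is what the later parking-space examples rely on: a detected object's coordinate is tested against the circled region.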

In addition, after the detection area is obtained, the image-based object tracking method may further include presenting the detection area and the monitoring image on the terminal device 300. For example, the terminal device 300 is a computer whose display shows the monitoring image, and the displayed image includes, as the detection area, the line segment or circled region obtained from the circle-selection command.

Step S40: selecting one of a plurality of preselected items as a tracking item according to the item command.

The host 100 may select one or more of the preselected items as tracking items based on the item command received by the terminal device 300. The preselected items correspond to the monitoring field captured by the camera device 200. For example, when the monitoring field is a road, the preselected items may include traffic violations such as running a red light, speeding, illegal parking, and wrong-way driving. The preselected items may also include the number of objects passing through or entering the detection area, object flow rate, object dwell time, object moving speed, and the like. As another example, when the monitoring field includes a zebra crossing, the preselected items may include the number of pedestrians crossing the road along the zebra crossing, the time pedestrians take to cross, and the like.

Step S50: an object tracking procedure is performed based on the target object, the detection area, and the tracking item.

For example, if the target object is a vehicle, the detection area is a roadside parking space, and the tracking item is object dwell time, the host 100 executes the object tracking procedure to obtain the dwell time of each vehicle parked in the space and uses it as the tracking result. The details of the object tracking procedure of step S50 are described in the embodiments of FIG. 3 and FIG. 4 below.
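The parking dwell time in this example can be computed from the enter/leave events the tracking procedure observes. This is a minimal sketch; the event-tuple format and vehicle identifiers are assumptions for illustration, not part of the patent.

```python
def dwell_times(events):
    """Compute per-vehicle dwell time in a parking-space detection area.
    events: (vehicle_id, "enter" | "leave", timestamp) tuples in time order.
    Vehicles still parked (no "leave" yet) are omitted from the result."""
    entered = {}
    durations = {}
    for vid, kind, t in events:
        if kind == "enter":
            entered[vid] = t
        elif kind == "leave" and vid in entered:
            durations[vid] = t - entered.pop(vid)
    return durations
```

The host would feed this from the coordinate/time information generated in step S505 whenever a target object enters or exits the circled region.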

Step S60: outputting the tracking result.

The tracking result is, for example, the dwell time of a vehicle in the parking space. The host 100 may output the tracking result to a database for storage and later review by monitoring personnel; to the terminal device 300 (e.g., a computer) of the monitoring center so that the terminal device 300 can present it; or to a mobile device such as a phone or tablet, which displays it on its screen. The invention does not limit the output target. In other words, the tracking result can be stored in a database or presented on a display in real time, preferably together with the monitoring screen, so that monitoring personnel can view the result and monitor the field status immediately.

In addition, the terminal device 300 may receive multiple circle-selection commands to obtain the detection area and another detection area; that is, the monitoring image may contain one or more detection areas. After obtaining the detection areas, the host 100 may execute the object tracking procedure based on the target object, the other detection area, and the tracking item to generate another tracking result. Continuing the parking-space example, the other detection area may be another parking space in the monitoring image, and the host 100 may execute the object tracking procedure on both detection areas to obtain the dwell time of vehicles in each parking space.

Referring to FIG. 3, which is a detailed flowchart of step S50 of FIG. 2: executing an object tracking procedure based on the target object, the detection area, and the tracking item.

After acquiring the target object, the detection area, and the tracking item, the host 100 executes step S501: determining whether the monitoring image contains the target object.

That is, the monitoring image may contain multiple objects, and the host 100 may perform an image recognition procedure, for example by machine learning, to recognize the objects in the monitoring image and determine whether they include the target object to be tracked. For example, if the monitoring image shows a road and the host 100 recognizes cars, buses, pedestrians, and motorcycles in it, then when the target object is a car, the host 100 determines that the monitoring image contains the target object (a car). Conversely, if the target object is a bicycle, the host 100 determines that the monitoring image does not contain the target object (a bicycle) and executes step S502: end the method.
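Step S501 amounts to a class filter over the recognized objects. The detection format below is an assumption for illustration; a real system would obtain the labels from a trained detector upstream.

```python
def filter_targets(detections, target_label):
    """Keep only detections whose class matches the target object (step S501).
    detections: list of (label, x, y) tuples from an upstream recognizer.
    An empty result corresponds to the "end the method" branch (step S502)."""
    return [d for d in detections if d[0] == target_label]
```

If the filtered list is empty, the procedure ends; otherwise the surviving detections proceed to the detection-area test of step S503.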

When the host 100 determines in step S501 that the monitoring image contains the target object, step S503 is executed: determining whether the target object is located in the detection area of the monitoring image.

That is, the host 100 determines whether the target object is located in the detection area, which may be a line segment or a circled region.

For example, a line-segment detection area may be drawn at an intersection; the host 100 determines that the target object is located in the detection area when the target object passes through the line segment, and that it is not when the target object does not pass through it.

Similarly, a circled-region detection area may be the parking space drawn in the earlier example. When a car is parked in the parking space, the host 100 determines that the car is located in the detection area; conversely, when no target object is in the parking space, the host 100 determines that no vehicle is located in the detection area.

When the host 100 determines in step S503 that the target object is not located in the detection area, the method proceeds to step S502. Otherwise, when the host 100 determines in step S503 that the target object is located in the detection area, the method proceeds to step S505.

Step S505: generating coordinate information and time information associated with the target object.

When the host 100 determines that the target object is located in the detection area, the host 100 further generates coordinate information and time information of the target object. In other words, the host 100 generates coordinate information of the target object in the monitored image and time information corresponding to the coordinate of the target object.

For example, if the detection area is a line segment drawn at an intersection and the tracking item is the object flow, when a vehicle passes through the line segment, the host 100 may generate coordinate information and time information of the vehicle, and the host 100 may further obtain the number of vehicles passing through the intersection in a time period, so as to calculate the vehicle flow of the intersection in the time period.

Similarly, if the detection area is a circled area marked on a road section, and the tracking item is the moving speed of the object, when a vehicle enters and leaves the circled area, the host 100 can generate coordinate information and time information of the vehicle entering and leaving the circled area, and the host 100 can obtain the moving speed of the vehicle on the road section accordingly.
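The moving-speed computation described here follows directly from the entry/exit timestamps once the length of the circled road section is known. The section length is an assumed input, not something the patent specifies how to obtain:

```python
def section_speed_kmh(entry_time, exit_time, section_length_m):
    """Average speed (km/h) over a circled road section of known length,
    from the entry/exit timestamps (seconds) generated in step S505."""
    dt = exit_time - entry_time
    if dt <= 0:
        raise ValueError("exit time must be after entry time")
    return (section_length_m / dt) * 3.6  # m/s -> km/h
```

For example, covering a 100 m section in 10 s corresponds to 36 km/h.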

Accordingly, the host 100 only needs to obtain coordinate and time information for target objects located in the detection area, rather than for all objects in the monitoring image, which reduces its computational load.

Referring to FIG. 4, which is another detailed flowchart of step S50 of FIG. 2.

In other words, step S50 of executing the object tracking procedure based on the target object, the detection area, and the tracking item may instead be implemented as step S50' shown in FIG. 4. FIG. 4 differs from FIG. 3 in that step S50' continues with steps S507 to S509 after step S505. In detail, after generating the coordinate information and the time information of the target object located in the detection area, the host 100 may further execute steps S507 to S509 to record that information according to the behavior of the target object in the detection area, for later reference by monitoring personnel.

In detail, step S507: determining whether the behavior of the target object in the detection area complies with the behavior rule.

For example, suppose the detection area is an intersection and the tracking item includes a behavior rule that a vehicle may pass only when the traffic light is green. After generating the coordinate and time information of the vehicle in step S505, the host 100 further determines whether the vehicle's behavior at the intersection complies with the rule (i.e., whether the vehicle passed on green).

When the host 100 determines in step S507 that the vehicle did not pass on green, the vehicle may have run a yellow or red light, so the host 100 executes step S508: generating a warning notification.

In other words, the warning notification generated by the host 100 may be output to the traffic monitoring center so that monitoring personnel can handle the situation.

Conversely, when the host 100 determines in step S507 that the vehicle passed on green, the vehicle's behavior complies with the rule, so the host 100 executes step S509: recording the coordinate information and the time information of the target object and using them as the tracking result.

In other words, when the tracking item is the vehicle behavior rule and the vehicle passes through the intersection on green (the behavior of the target object in the detection area complies with the rule), the host 100 records the coordinate and time information of the compliant vehicle and uses them as the tracking result. The host 100 may also derive the traffic flow at the intersection from that information and use the flow as the tracking result, then execute step S60 of FIG. 2 to output it.

In addition, the host 100 may store compliant target objects in an event record; for example, it counts an event record based on target objects complying with the behavior rule, where the event record corresponds to that rule. In the example above, the event record is incremented each time the host 100 determines that a vehicle passed through the intersection on green, so the host can count the event record to obtain the number of vehicles complying with the rule at that intersection.
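The per-rule event counting described here can be sketched with a counter keyed by rule name. The observation-tuple format is assumed for illustration:

```python
from collections import Counter

def count_compliant(observations, rule_name):
    """Count the event record for one behavior rule (cf. claim 5).
    observations: (object_id, rule, complies) tuples; format is illustrative."""
    counts = Counter()
    for _, rule, complies in observations:
        if complies:
            counts[rule] += 1  # one event record per compliant target object
    return counts[rule_name]
```

Keeping one counter per rule lets the host report, say, how many vehicles passed the intersection on green over a shift.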

If the determination in step S507 is negative, the host 100 may execute step S509 in addition to step S508. Specifically, when a vehicle runs a red light at the intersection (its behavior does not comply with the behavior rule; step S507: no), the host 100 may, besides generating the warning notification, record the coordinate and time information of the offending vehicle, and may also record its license plate number using license plate recognition for later review by monitoring personnel.
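The combined S507-S509 behavior, where both branches record the vehicle but only the non-compliant branch raises a warning, can be sketched as a single decision function. The record fields and alert message are illustrative, not from the patent:

```python
def apply_behavior_rule(vehicle_id, coord, timestamp, passed_on_green):
    """Steps S507-S509 sketch: compliant vehicles are recorded; violations
    are recorded too and additionally produce a warning notification."""
    record = {"vehicle": vehicle_id, "coord": coord, "time": timestamp}
    alert = None if passed_on_green else "possible red/yellow-light violation"
    return record, alert
```

The caller would forward a non-None alert to the monitoring center (step S508) and append every record to the tracking result (step S509).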

In summary, the image-based object tracking method according to one or more embodiments of the present invention allows the field to be monitored, the object to be tracked, the detection area within the field, and the event to be tracked to be selected flexibly, according to usage requirements, within the same monitoring screen. It can also generate a corresponding result signal and/or notification based on the behavior of the target object and a behavior rule, so that the monitoring center can keep track of the status of the monitored field. In addition, the tracking result can be output immediately and displayed together with the monitoring screen, so that monitoring personnel can view it in real time. Furthermore, the method effectively reduces the computational load of the monitoring system.
