Image monitoring device and method

Document No.: 245214    Publication date: 2021-11-12

Abstract: This technology, "Image monitoring device and method" (影像监控装置与方法), was created by 郑宪君, 黄捷, and 李家昶 on 2020-04-27. The invention provides an image monitoring device and an image monitoring method. The image monitoring device includes an image sensing module and a processor. The image sensing module is configured to acquire a non-visible light dynamic image of a target scene, and the non-visible light dynamic image comprises a plurality of image frames. The processor is configured to perform an operation according to at least one image frame of the non-visible light dynamic image, determine the status of at least one living body in the target scene as one of a plurality of status categories together with at least one status valid block of the non-visible light dynamic image, and set the scene of each pixel of the at least one status valid block as one of a plurality of scene categories according to the status category of the at least one living body.

1. An image monitoring device, comprising:

the image sensing module is configured to obtain a non-visible light dynamic image of a target scene, and the non-visible light dynamic image comprises a plurality of image frames; and

a processor configured to perform:

performing an operation according to at least one image frame of the non-visible light dynamic image to determine a status corresponding to at least one living body in the target scene as one of a plurality of status categories and at least one status valid block of the non-visible light dynamic image; and

setting a scene of each pixel of the at least one status valid block to be one of a plurality of scene categories according to the status category of the at least one living body.

2. The image monitoring device of claim 1, wherein the non-visible light dynamic image is a thermal image, a radio frequency echo image, or an ultrasound image.

3. The image monitoring device as claimed in claim 1, wherein the at least one living body is a human body, and the plurality of status categories include at least one of standing, sitting, lying, climbing, and undefined.

4. The image monitoring device of claim 3, wherein the processor is configured to further perform:

setting the scene of each pixel of the at least one status valid block as a floor when the status category of the at least one living body is determined to be standing;

setting the scene of each pixel of the at least one status valid block as a seat when the status category of the at least one living body is determined to be sitting; and

setting the scene of each pixel of the at least one status valid block as a bed when the status category of the at least one living body is determined to be lying down.

5. The image monitoring device of claim 1, wherein each pixel in the non-visible light dynamic image has a probability distribution of the plurality of scene categories, and the processor is configured to set the scene category with the highest probability among the probability distribution of the plurality of scene categories of each pixel as the monitoring scene of the pixel.

6. The image monitoring device of claim 5, wherein the processor is configured to update the probability distribution of the plurality of scene categories of each pixel in the at least one status valid block according to the at least one status valid block of the at least one image frame and the scene of each pixel of the at least one status valid block.

7. The image monitoring device of claim 1 or 5, wherein the plurality of scene categories include at least one of floor, bed, seat, and undefined categories.

8. The image monitoring device of claim 1, further comprising a memory electrically connected to the processor, wherein the processor is configured to store the non-visible light dynamic image and the scene category corresponding to each pixel in the memory.

9. The image monitoring device of claim 1, wherein the processor is configured to perform operations according to another image frame of the non-visible light dynamic image, determine a status of a monitored living body in the target scene as one of the plurality of status categories and at least one detection valid block corresponding to the monitored living body, determine whether the status of the monitored living body is abnormal according to the at least one detection valid block of the monitored living body, the status of the monitored living body and the scene corresponding to the at least one detection valid block of the monitored living body, and output a warning signal when the status of the monitored living body is abnormal.

10. The image monitoring device of claim 5, wherein the processor is configured to perform operations according to another image frame of the non-visible light dynamic image, determine a status of a monitored living body in the target scene as one of the plurality of status categories and at least one detection valid block corresponding to the monitored living body, determine whether the status of the monitored living body is abnormal according to the at least one detection valid block of the monitored living body, the status of the monitored living body, and the monitoring scene corresponding to the at least one detection valid block of the monitored living body, and output a warning signal when the status of the monitored living body is abnormal.

11. The image monitoring device of claim 1, wherein the at least one living body is a plurality of living bodies, the at least one status valid block is a plurality of status valid blocks, the living bodies respectively correspond to the status valid blocks, and the processor is configured to:

performing an operation according to at least one image frame of the non-visible light dynamic image, and determining the status corresponding to each of the living bodies in the target scene as one of the plurality of status categories and determining the plurality of status valid blocks of the non-visible light dynamic image; and

setting the scene of each pixel of the corresponding status valid block as one of the plurality of scene categories according to the status category of each of the plurality of living bodies.

12. An image monitoring method, comprising:

obtaining a non-visible light dynamic image of a target scene;

performing an operation according to at least one image frame of the non-visible light dynamic image to determine a status corresponding to at least one living body in the target scene as one of a plurality of status categories and at least one status valid block of the non-visible light dynamic image; and

setting a scene of each pixel of the at least one status valid block to be one of a plurality of scene categories according to the status category of the at least one living body.

13. The image monitoring method of claim 12, wherein the non-visible dynamic image is a thermal image, a radio frequency echo image, or an ultrasound image.

14. The image monitoring method of claim 12, wherein the at least one living body is a human body, and the plurality of status categories include at least one of standing, sitting, lying, climbing, and undefined.

15. The image monitoring method of claim 14, further comprising:

setting the scene of each pixel of the at least one status valid block as a floor when the status category of the at least one living body is determined to be standing;

setting the scene of each pixel of the at least one status valid block as a seat when the status category of the at least one living body is determined to be sitting; and

setting the scene of each pixel of the at least one status valid block as a bed when the status category of the at least one living body is determined to be lying down.

16. The image monitoring method of claim 12, wherein each pixel of the non-visible light dynamic image has a probability distribution of the plurality of scene categories, and the image monitoring method further comprises setting the scene category with the highest probability among the probability distribution of the plurality of scene categories of each pixel as the monitoring scene of the pixel.

17. The image monitoring method of claim 16, further comprising updating the probability distribution of the plurality of scene categories of each pixel in the at least one status valid block according to the at least one status valid block of the at least one image frame and the scene of each pixel of the at least one status valid block.

18. The image monitoring method of claim 12 or 16, wherein the scene categories include at least one of floor, bed, seat, and undefined categories.

19. The image monitoring method of claim 12, further comprising storing the non-visible light dynamic image and the scene category corresponding to each pixel in a memory.

20. The image monitoring method of claim 12, further comprising performing an operation according to another image frame of the non-visible light dynamic image, determining a status of a monitored living body in the target scene as one of the plurality of status categories and at least one detection valid block corresponding to the monitored living body, determining whether the status of the monitored living body is abnormal according to the at least one detection valid block of the monitored living body, the status of the monitored living body and the scene corresponding to the at least one detection valid block of the monitored living body, and outputting a warning signal when the status of the monitored living body is abnormal.

21. The image monitoring method of claim 16, further comprising performing an operation according to another image frame of the non-visible light dynamic image, determining a status of a monitored living body in the target scene as one of the plurality of status categories and at least one detection valid block corresponding to the monitored living body, determining whether the status of the monitored living body is abnormal according to the at least one detection valid block of the monitored living body, the status of the monitored living body and the monitoring scene corresponding to the at least one detection valid block of the monitored living body, and outputting a warning signal when the status of the monitored living body is abnormal.

22. The image monitoring method of claim 12, wherein the at least one living body is a plurality of living bodies, the at least one status valid block is a plurality of status valid blocks, the living bodies respectively correspond to the status valid blocks, and the image monitoring method comprises:

performing an operation according to at least one image frame of the non-visible light dynamic image, and determining the status corresponding to each of the living bodies in the target scene as one of the plurality of status categories and determining the plurality of status valid blocks of the non-visible light dynamic image; and

setting the scene of each pixel of the corresponding status valid block as one of the plurality of scene categories according to the status category of each of the plurality of living bodies.

Technical Field

The present invention relates to a monitoring device and method, and more particularly, to an image monitoring device and method.

Background

With the advancement of medical technology, the average human life span has been extended, giving rise to the need for elderly health care. In addition, a considerable proportion of the elderly live at home, while the nursing manpower of institutions and communities is limited, so services around the world are moving toward technology-assisted home care.

The main accidental injuries among the elderly involve getting out of bed in the bedroom, abnormal movements, slippery floors, and the like. Prevention and real-time handling are therefore important requirements for home safety. For example, an elderly person may get up and fall during the night and not be discovered until the following morning, or may be unable to call for outside help because of physical discomfort while in bed. Real-time notification of abnormal activity is therefore an urgent need.

Existing care systems mainly rely on wearable sensing devices or pressure pads. However, a wearable sensor must be worn for long periods, and elderly users are often reluctant to wear it or may remove it themselves. A pressure pad, in turn, covers only a limited area and cannot sense an abnormal fall event at any location or time. On the other hand, current AI (artificial intelligence) recognition technology achieves high accuracy in motion recognition but relies on ordinary visible-light images, in which private content such as the user's facial features, clothing, or body surface is visible; the person being cared for may therefore feel that their privacy is violated, and the willingness to install such a system is low.

Disclosure of Invention

The present invention is directed to an image monitoring device and method that can provide good and effective safety monitoring using images of the person being cared for that contain little privacy-sensitive detail.

An embodiment of the invention provides an image monitoring device, which includes an image sensing module and a processor. The image sensing module is configured to acquire a non-visible light dynamic image of a target scene, and the non-visible light dynamic image comprises a plurality of image frames. The processor is configured to: perform an operation according to at least one image frame of the non-visible light dynamic image, determine the status of at least one living body in the target scene as one of a plurality of status categories together with at least one status valid block of the non-visible light dynamic image, and set the scene of each pixel of the at least one status valid block as one of a plurality of scene categories according to the status category of the at least one living body.

An embodiment of the present invention provides an image monitoring method, including: obtaining a non-visible light dynamic image of a target scene; performing an operation according to at least one image frame of the non-visible light dynamic image to determine the status of at least one living body in the target scene as one of a plurality of status categories and at least one status valid block of the non-visible light dynamic image; and setting the scene of each pixel of the at least one status valid block as one of a plurality of scene categories according to the status category of the at least one living body.

In the image monitoring device and method according to the embodiments of the invention, the living body, its status category, and the status valid block are identified from the non-visible light dynamic image, and the scene of the pixels in the status valid block is set to one of a plurality of scene categories according to the status category of the living body. The image monitoring device and method of the embodiments of the invention can therefore use images of the person being cared for that contain little privacy-sensitive detail to perform good and effective safety monitoring while preserving that person's privacy.

Drawings

Fig. 1 is a schematic view of an image monitoring apparatus according to an embodiment of the invention.

Fig. 2 shows a non-visible light dynamic image obtained by the image monitoring apparatus of fig. 1.

Fig. 3A, 3B, and 3C are distribution diagrams of a monitored scene corresponding to pixels of a target scene at three different times in sequence.

Fig. 4A, 4B, and 4C show the probability distributions of the scene categories of the pixels in the region P1 of fig. 3A, 3B, and 3C, respectively.

Fig. 5 is a schematic view of a non-visible light dynamic image obtained by the image monitoring apparatus according to another embodiment of the invention.

Fig. 6 is a flowchart of an image monitoring method according to an embodiment of the invention.

Fig. 7 is a flowchart illustrating the detailed steps of steps S220 and S230 in fig. 6.

Fig. 8A is a schematic diagram of the contraction of the living body framing block in steps S110 to S114 in fig. 7.

Fig. 8B is a schematic diagram of setting the scene category of the 50-pixel-high area below the living body framing block to the floor in step S120 of fig. 7.

Detailed Description

Reference will now be made in detail to exemplary embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings and the description to refer to the same or like parts.

Fig. 1 is a schematic view of an image monitoring device according to an embodiment of the present invention, and fig. 2 illustrates a non-visible light dynamic image obtained by the image monitoring device of fig. 1. Referring to fig. 1 and fig. 2, the image monitoring device 100 of the present embodiment includes an image sensing module 110 and a processor 120. The image sensing module 110 is configured to acquire a non-visible light dynamic image of a target scene, and the non-visible light dynamic image includes a plurality of image frames (one of the image frames is shown in fig. 2); that is, the non-visible light dynamic image is composed of a plurality of image frames respectively sensed and imaged at different time points. In the present embodiment, the non-visible light dynamic image is a thermal image, and the image sensing module 110 may be a thermal image camera for capturing the thermal image. However, in other embodiments, the non-visible light dynamic image may be a radio frequency (RF) echo image or an ultrasound image, and the image sensing module 110 may accordingly be an RF electromagnetic wave transceiver or an ultrasonic transceiver.

The processor 120 is configured to perform the following steps. First, the processor 120 performs an operation according to at least one image frame (for example, the image frame shown in fig. 2) of the non-visible light dynamic image to determine the status of at least one living body 60 in the target scene as one of a plurality of status categories and at least one status valid block A1 of the non-visible light dynamic image. Then, the scene of each pixel of the at least one status valid block A1 is set as one of a plurality of scene categories according to the status category of the at least one living body 60. For example, the living body 60 is a human body, and the status categories include at least one of standing, sitting, lying, climbing, and undefined; the status category of the living body 60 shown in fig. 2 is lying, for example. Further, the scene categories include, for example, at least one of the floor 52, the bed 54, the seat 56, and an undefined category.

In the embodiment shown in fig. 1 and fig. 2, when the processor 120 determines that the status category of the at least one living body 60 is standing, the processor 120 sets the scene of each pixel of the at least one status valid block A1 as the floor. When the processor 120 determines that the status category of the living body 60 is sitting, the processor 120 sets the scene of each pixel of the status valid block A1 as the seat. When the processor 120 determines that the status category of the at least one living body 60 is lying, the processor 120 sets the scene of each pixel of the at least one status valid block A1 as the bed. Taking the living body 60 shown in fig. 2 as an example, the processor 120 determines that its status category is lying and that the corresponding scene category is the bed, so the processor 120 sets the scene of each pixel in the status valid block A1 as the bed.

Fig. 3A, 3B, and 3C are distribution diagrams of the monitored scene corresponding to the pixels of the target scene at three different times in sequence, and fig. 4A, 4B, and 4C show the probability distributions of the scene categories of the pixels in the region P1 of fig. 3A, 3B, and 3C, respectively. Referring to fig. 3A and fig. 4A, in the present embodiment, each pixel in the non-visible light dynamic image has a probability distribution over the scene categories (as shown in fig. 4A), and the processor 120 is configured to set the scene category with the highest probability in the probability distribution of each pixel as the monitoring scene of that pixel. In the present embodiment, the scene categories in the probability distribution of each pixel include the floor (scene category A in fig. 4A), the bed (scene category B), the seat (scene category C), and an undefined category (scene category D). Furthermore, in the present embodiment, the processor 120 is configured to update the probability distribution of the scene categories of each pixel in the status valid block A1 according to the at least one status valid block A1 of the at least one image frame and the scene of each pixel of the at least one status valid block A1.
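For illustration only, the following minimal sketch (an assumed implementation, not the patent's actual code) keeps a probability distribution over the four scene categories for every pixel and reports the monitoring scene as the category with the highest probability; the image resolution, the update step, and the function names are hypothetical.

```python
import numpy as np

# Hypothetical scene categories: A = floor, B = bed, C = seat, D = undefined.
CATEGORIES = ["A", "B", "C", "D"]
H, W = 120, 160                      # assumed thermal-image resolution

# Per-pixel probability distribution, initialised so that the undefined
# category D has the highest probability everywhere (the preset category).
probs = np.zeros((H, W, len(CATEGORIES)), dtype=np.float32)
probs[..., CATEGORIES.index("D")] = 1.0

def monitored_scene(probs):
    """Return, for each pixel, the index of the scene category with the
    highest probability (the 'monitoring scene' of that pixel)."""
    return probs.argmax(axis=-1)

def update_block(probs, block_mask, category, step=0.1):
    """Raise the probability of `category` for every pixel inside the
    status valid block (block_mask: boolean H x W array) and renormalise,
    so repeated observations of the same status gradually dominate."""
    idx = CATEGORIES.index(category)
    probs[block_mask, idx] += step
    probs[block_mask] /= probs[block_mask].sum(axis=-1, keepdims=True)
    return probs
```

For example, every image frame in which the installer is judged to be standing would call `update_block` with scene category A on the corresponding status valid block, and `monitored_scene` would return category A for those pixels once its probability exceeds the other categories.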

For example, after the image monitoring device 100 is installed, the monitoring scenes of all pixels of the non-visible light dynamic image of the entire target scene are initially preset to scene category D (i.e., the undefined category), which is the default category of every pixel. The installer can then walk on the floor 52, and the processor 120 computes over a plurality of image frames, determines the status category of the living body 60 (i.e., the installer) in each image frame as standing, determines the corresponding status valid block, and updates the probability distribution of the scene categories of the pixels of the status valid block corresponding to the living body 60 (e.g., the left and right regions in fig. 3A) in each image frame according to the status category of the living body 60 in that image frame. In the present embodiment, standing corresponds to scene category A (i.e., the floor 52), so in the probability distribution of the pixels of the status valid block (e.g., the left and right regions in fig. 3A), the probability of scene category A rises and exceeds those of scene categories B, C, and D. The processor 120 therefore sets the monitoring scene of the pixels in the status valid block (e.g., the left and right regions in fig. 3A) to scene category A (as shown in fig. 3A). In contrast, areas where the installer has not walked or stood, for example the center of the target scene, accumulate no scene-category information; their probability distributions are not updated, so their monitoring scene remains the default, namely scene category D.

Next, the installer lies down in the central region of the target scene and remains lying for a period of time. The processor 120 computes over a plurality of image frames during this period, determines the status category of the living body 60 (i.e., the installer) in each image frame as lying and determines the corresponding status valid block, and updates the probability distribution of the scene categories of the pixels of the status valid block corresponding to the living body 60 (i.e., the region near the center of the target scene) in each image frame according to the status category of the living body 60 in that image frame. In the present embodiment, lying corresponds to scene category B (i.e., the category corresponding to the bed), so the probability of scene category B rises in the probability distribution of the pixels of the status valid block (i.e., the region near the center of the target scene). When the probability of scene category B becomes the highest in the probability distribution, the processor 120 sets the monitoring scene of the pixels of the status valid block (i.e., the region near the center of the target scene) to scene category B.

However, at the boundary between the central region and the left and right regions (e.g., region P1), the probability of scene category B may not yet clearly exceed the other categories; that is, as shown in fig. 3B and fig. 4B, the probabilities of scene category A and scene category B for the pixels in region P1 are still similar, so the processor 120 cannot yet determine the monitoring scene of region P1. The installer may then continue to lie down while shifting the body to change or enlarge the lying area, and image frames continue to accumulate for the processor 120 to compute and determine. After some time has elapsed, as shown in fig. 3C and fig. 4C, the probability of scene category B becomes the largest among all scene categories in the probability distribution of the pixels in region P1, and the processor 120 then sets the monitoring scene of all pixels in region P1 to scene category B (i.e., the category corresponding to the bed 54). In this way, although the non-visible light dynamic image (e.g., a thermal image) contains no sensitive details such as faces, clothing, body surfaces, or indoor furnishings, the processor 120 can still determine that the extent of the bed 54 is the extent of scene category B in fig. 3C and use it as a basis for abnormality determination.

In the present embodiment, the image monitoring device 100 further includes a memory 130 electrically connected to the processor 120, and the processor 120 is configured to store the non-visible light dynamic image and the scene category corresponding to each pixel in the memory 130. For example, the processor 120 may store the probability-distribution data of the scene categories shown in fig. 3C in the memory 130, and may also store the monitoring scene of each pixel of the non-visible light dynamic image in the memory 130 to serve as the basis for determining abnormal activity. The memory 130 is, for example, a hard disk, a flash memory, a random access memory, or another suitable memory. The above embodiment builds the monitoring scene of each pixel of the non-visible light dynamic image of the target scene from the activity of the installer; in other embodiments, it may instead be built from the activity of the person being monitored or of other people.

In addition, in the present embodiment, the processor 120 is configured to perform operations according to another image frame of the non-visible light dynamic image, determine the status of a monitored living body (e.g., the person being cared for) in the target scene as one of the status categories together with at least one detection valid block corresponding to the monitored living body, determine whether the status of the monitored living body is abnormal according to the at least one detection valid block of the monitored living body, the status of the monitored living body, and the monitoring scene or scene corresponding to the pixels of the at least one detection valid block of the monitored living body, and output a warning signal when the status of the monitored living body is abnormal, for example by transmitting the warning signal over a local area network to a computer or monitoring system of a local office (e.g., of a residential community), or over the Internet to a monitoring host or computer of a remote monitoring center.

For example, when the processor 120 determines that the status of the monitored living body is lying, that the pixels of the detection valid block of the monitored living body belong to scene category A (i.e., the floor 52), and that this status has persisted beyond a preset time (e.g., 30 minutes), the processor 120 determines that the monitored living body (e.g., the person being cared for) has been lying on the floor 52 for too long and that an abnormal condition exists, so the processor 120 outputs a warning signal to notify nursing or medical staff to check on the person, or to notify personnel at a remote monitoring center to ask others to check. Alternatively, when the processor 120 determines that the status of the monitored living body (e.g., the person being cared for) is lying, that the pixels of the detection valid block belong to scene category B (i.e., the bed 54), and that this status has persisted beyond another preset time (e.g., more than 12 hours), the processor 120 determines that the status of the monitored living body is abnormal, for example that the person is unwell and unable to get up, and outputs a warning signal.
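The abnormality rule described above amounts to a lookup on status, scene, and duration. A minimal sketch of that decision logic, assuming the 30-minute and 12-hour thresholds from the examples (all names and units are illustrative, not the claimed implementation), might look like this:

```python
from datetime import timedelta

# Hypothetical thresholds taken from the examples above.
MAX_LYING_ON_FLOOR = timedelta(minutes=30)
MAX_LYING_ON_BED = timedelta(hours=12)

def is_abnormal(status, scene, duration):
    """Return True when the monitored living body's status is abnormal,
    given its status category, the monitoring scene of its detection
    valid block, and how long the status has persisted."""
    if status == "lying" and scene == "floor" and duration > MAX_LYING_ON_FLOOR:
        return True   # lying on the floor too long, e.g. after a fall
    if status == "lying" and scene == "bed" and duration > MAX_LYING_ON_BED:
        return True   # lying in bed too long, e.g. unable to get up
    return False

# Example: lying on the floor for 45 minutes would trigger the warning signal.
assert is_abnormal("lying", "floor", timedelta(minutes=45))
```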

The detection valid block in the present embodiment is computed and determined in the same way as the status valid block of the previous embodiment, except that the detection valid block is determined according to the status of the monitored living body (e.g., the person being cared for), whereas the status valid block of the previous embodiment is determined according to the status of the living body (e.g., the installer).

In an embodiment, the processor 120 is, for example, a Central Processing Unit (CPU), a microprocessor (microprocessor), a Digital Signal Processor (DSP), a programmable controller, a Programmable Logic Device (PLD), or other similar devices or combinations thereof, which are not limited by the invention. Furthermore, in one embodiment, the functions of the processor 120 may be implemented as a plurality of program codes. The program codes are stored in a memory and executed by the processor 120. Alternatively, in one embodiment, the functions of the processor 120 may be implemented as one or more circuits. The present invention is not limited to the implementation of the functions of the processor 120 in software or hardware.

In addition, in another embodiment, as shown in fig. 5, there may be a plurality of living bodies in the target scene, with a plurality of corresponding status valid blocks. This embodiment is illustrated with two living bodies (a first living body 61 and a second living body 62 in fig. 5); for clarity, the first living body 61 and its corresponding first status valid block B1, and the second living body 62 and its corresponding second status valid block B2, are described below. The processor 120 is configured to perform the following steps. First, the processor 120 performs operations according to at least one image frame of the non-visible light dynamic image, determines the status of the first living body 61 in the target scene as one of the plurality of status categories and the status of the second living body 62 as one of the plurality of status categories, and determines at least one first status valid block B1 of the non-visible light dynamic image corresponding to the status of the first living body 61 and at least one second status valid block B2 corresponding to the status of the second living body 62. Then, the scene of each pixel of the first status valid block B1 is set to one of the plurality of scene categories according to the status category of the first living body 61, and the scene of each pixel of the second status valid block B2 is set to one of the plurality of scene categories according to the status category of the second living body 62. For example, the processor 120 determines the status category of the first living body 61 as lying and determines the corresponding first status valid block B1, while the processor 120 determines the status category of the second living body 62 as standing and determines the corresponding second status valid block B2. Next, the processor 120 updates the probability distribution of the scene categories of the pixels of the first status valid block B1 according to the status category of the first living body 61, and updates the probability distribution of the scene categories of the pixels of the second status valid block B2 according to the status category of the second living body 62. In this embodiment, in the probability distribution of the pixels of the first status valid block B1, the probability of scene category B corresponding to lying (i.e., the category corresponding to the bed) is raised; in the probability distribution of the pixels of the second status valid block B2, the probability of scene category A corresponding to standing (i.e., the floor) is raised. The processor 120 then determines the monitoring scene of each pixel of the non-visible light dynamic image according to the probability distribution of the scene categories of that pixel.

Fig. 6 is a flowchart of an image monitoring method according to an embodiment of the invention. Referring to fig. 1, fig. 2 and fig. 6, the image monitoring method of the present embodiment can be implemented by the image monitoring device 100 and includes the following steps. First, step S210 is executed to obtain a non-visible light dynamic image of a target scene. Then, step S220 is executed to perform an operation according to at least one image frame of the non-visible light dynamic image and determine the status of at least one living body 60 in the target scene as one of a plurality of status categories together with at least one status valid block A1 of the non-visible light dynamic image. Then, step S230 is executed to set the scene of each pixel of the at least one status valid block A1 as one of a plurality of scene categories according to the status category of the at least one living body 60. For other details of the image monitoring method, reference can be made to the operations performed by the image monitoring device 100, which are not repeated here. The operations performed by the image monitoring device 100 and the steps of the image monitoring method of the present embodiment are described in more detail below with reference to fig. 7.

Referring to fig. 7, the detailed steps of steps S220 and S230 are as follows. In the present embodiment, the non-visible light dynamic image is thermal image data. After the non-visible light dynamic image is obtained, the processor 120 performs step S104 to apply a color gamut conversion to at least one image frame of the non-visible light dynamic image, converting the image frame (thermal image data) from a single channel into three-channel color information. The processor 120 then performs step S106 to apply a normalization operation to the output of the color gamut conversion of step S104 so as to enhance the contrast between different temperatures in the image; for example, the normalization may be performed with respect to the highest temperature in a temperature range of interest to highlight the contrast between different temperatures within that range. Next, the processor 120 executes step S108 and applies machine learning to the result of step S106 to compute the status category and area of the heat source, i.e., the living body, in the non-visible light dynamic image, thereby determining the living body in the non-visible light dynamic image.
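As a rough illustration of the preprocessing in steps S104 and S106, the sketch below assumes the thermal frame arrives as a single-channel temperature array; the colormap choice, the temperature range, and the ordering (normalising before the colour conversion, since a colormap expects 8-bit input) are assumptions of this sketch rather than the claimed implementation.

```python
import numpy as np
import cv2  # OpenCV, assumed available for the false-colour conversion

def preprocess_thermal_frame(frame, t_min=20.0, t_max=40.0):
    """Normalise the temperature range of interest to [0, 255] so that
    temperature differences become high-contrast (cf. step S106), then
    convert the single channel into three-channel colour information
    (cf. step S104)."""
    clipped = np.clip(frame, t_min, t_max)
    normalised = ((clipped - t_min) / (t_max - t_min) * 255).astype(np.uint8)
    # Map the single-channel image to three-channel colour information.
    three_channel = cv2.applyColorMap(normalised, cv2.COLORMAP_JET)
    return three_channel
```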

Then, step S110 is executed: the processor 120 operates on the image frame and the information obtained in step S108 to determine a living body framing block corresponding to the living body in the image frame of the non-visible light dynamic image. For example, in the image frame of fig. 8A, a living body framing block A2 corresponding to the living body is determined and then shrunk to a living body framing block A3; that is, the framing block corresponding to the living body is reduced from one that includes the four limbs (living body framing block A2) to one that includes mainly the trunk (living body framing block A3). Step S110 further includes steps S112 and S114. In step S112, the processor accumulates, for each position along the X-axis (horizontal) direction within the living body framing block A2, the total number of living-body pixels in the Y-axis (vertical) direction, and sets the block boundary by expanding left and right from the X coordinate of the maximum accumulated count to where the count falls to 30% of that maximum. In step S114, the processor accumulates, for each position along the Y-axis (vertical) direction within the living body framing block A2, the total number of living-body pixels in the X-axis (horizontal) direction, and sets the block boundary by expanding up and down from the Y coordinate of the maximum accumulated count to where the count falls to 30% of that maximum. After step S110 (including steps S112 and S114) is performed, the living body framing block A2 has been converged to the living body framing block A3, which delimits the living body's own extent (mainly the trunk).
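One possible reading of steps S112 and S114 is a pair of one-dimensional projections of the living-body pixels inside framing block A2, with the shrunken boundary placed where the projected count falls below 30% of its peak. The following sketch follows that reading; the mask format and helper names are assumed, not taken from the patent.

```python
import numpy as np

def expand_from_peak(counts, ratio=0.3):
    """From the coordinate with the maximum accumulated count, expand in
    both directions until the count drops below `ratio` of the maximum;
    return the resulting (lo, hi) boundary indices."""
    peak = int(np.argmax(counts))
    threshold = ratio * counts[peak]
    lo = peak
    while lo > 0 and counts[lo - 1] >= threshold:
        lo -= 1
    hi = peak
    while hi < len(counts) - 1 and counts[hi + 1] >= threshold:
        hi += 1
    return lo, hi

def shrink_framing_block(body_mask, ratio=0.3):
    """Shrink living body framing block A2 (body_mask: boolean array that
    covers A2, True where a heat-source pixel was detected) to the torso
    block A3 by projecting the body pixels onto each axis."""
    col_counts = body_mask.sum(axis=0)   # step S112: totals along Y for each X
    row_counts = body_mask.sum(axis=1)   # step S114: totals along X for each Y
    x_lo, x_hi = expand_from_peak(col_counts, ratio)
    y_lo, y_hi = expand_from_peak(row_counts, ratio)
    return x_lo, x_hi, y_lo, y_hi        # boundaries of block A3
```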

Then, step S116 is executed: the processor 120 performs operations according to the living body framing block A3 and the status category to obtain a status valid block. Step S116 further includes steps S118, S120 and S122. In step S118, the processor 120 determines whether the status category of the living body is standing or lying (or, in other embodiments, standing, sitting, or lying). If the status of the living body is determined to be standing, step S120 is executed: the region with a height of 50 pixels below the living body framing block A3 produced by steps S112 and S114 is taken as the status valid block A4, and the scene category of each pixel of the status valid block A4 is set as the floor 52, as shown in fig. 8B. The invention is not limited to a height of 50 pixels; in other embodiments another number of pixels may be used. If the living body is determined to be lying, step S122 is executed: the living body framing block produced by steps S112 and S114 is taken as the status valid block A1, and its scene category is set as the bed 54, as shown in fig. 2.
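Under the same assumptions, step S116 can be sketched as a small mapping from the torso block A3 and the status category to the status valid block and its scene category (a 50-pixel strip below the block for standing, the block itself for lying); the coordinate convention, function name, and return format are hypothetical.

```python
def status_valid_block(a3, status, strip_height=50, frame_height=120):
    """a3: (x_lo, x_hi, y_lo, y_hi) boundaries of the shrunken framing
    block A3, with y increasing downwards. Returns the status valid block
    boundaries and the scene category to assign to its pixels."""
    x_lo, x_hi, y_lo, y_hi = a3
    if status == "standing":                 # step S120
        top = y_hi + 1
        bottom = min(y_hi + strip_height, frame_height - 1)
        return (x_lo, x_hi, top, bottom), "floor"
    if status == "lying":                    # step S122
        return a3, "bed"
    return None, "undefined"                 # other statuses: no update here
```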

After step S120 or step S122 is executed, step S124 is executed to update the probability distribution of the scene categories of the pixels in the status valid block; step S124 further includes steps S126, S128, S130 and S132. In step S126, the processor 120 determines whether the pixels in the status valid block already carry scene-category information, that is, whether a category other than scene category D (the undefined category) exists. If scene-category information is already stored, step S128 is executed, and the processor 120 raises or lowers the probabilities in the probability distribution of the scene categories according to the scene category assigned to the pixels in the status valid block. Step S130 is then executed to determine whether the scene category with the highest probability in the probability distribution of each pixel has changed. If the scene category with the highest probability has changed, step S132 is executed to update the monitoring scene; for example, the monitoring scene in region P1 is updated from the scene category of the pixels in region P1 in fig. 3B to that in fig. 3C. If the scene category with the highest probability has not changed, the flow returns to step S126. If it is determined in step S126 that no scene-category definition exists yet, step S132 is executed directly to update the monitoring scene.
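Continuing the earlier per-pixel sketch, the control flow of steps S126 to S132 might be expressed roughly as follows, where `probs`, `CATEGORIES`, and `update_block` are the hypothetical structures introduced above and `monitored` holds each pixel's current monitoring-scene index; this is one possible reading of the flowchart, not the claimed implementation.

```python
def update_monitored_scene(probs, monitored, block_mask, category):
    """Steps S124-S132 under the assumptions above: update the probability
    distribution of the pixels in the status valid block and refresh the
    monitoring scene of any pixel whose most probable category changed.
    `monitored` is an (H, W) integer array of scene-category indices."""
    undefined = CATEGORIES.index("D")
    cat_idx = CATEGORIES.index(category)
    has_info = bool((monitored[block_mask] != undefined).any())   # step S126
    if has_info:
        update_block(probs, block_mask, category)                 # step S128
        new_best = probs.argmax(axis=-1)
        changed = (new_best != monitored) & block_mask            # step S130
        monitored[changed] = new_best[changed]                    # step S132
    else:
        monitored[block_mask] = cat_idx                           # step S132 directly
    return probs, monitored
```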

It should be noted that the embodiments of the present disclosure use status categories including at least one of standing, sitting, lying, climbing, and undefined only as an example; in other embodiments, status categories may be added or removed according to the monitoring requirements or monitoring focus. Likewise, scene categories may be added or removed according to the monitoring environment, requirements, or focus. In some embodiments, the scene categories may be the same as the status categories, i.e., the scene of a pixel may be one available for standing, walking, or lying. In another embodiment, the scene categories may further include permitted and prohibited: pixels of blocks in the non-visible light dynamic image where a living body (e.g., the installer) appears are set to the permitted scene category, while the default (i.e., not-yet-updated) pixels of the non-visible light dynamic image are set to the prohibited scene category.

In summary, in the image monitoring device and method according to the embodiments of the invention, the living body, its status category, and the status valid block are identified from the non-visible light dynamic image, and the scene of the pixels in the status valid block is set to one of a plurality of scene categories according to the status category of the living body. The image monitoring device and method can therefore provide good and effective safety monitoring without intruding on the privacy of the person being cared for.

Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Description of the reference numerals

52: floor board

54: bed

56: chair (Ref. now to FIGS)

60: living body

61: first living body

62: second living body

100: image monitoring device

110: image sensing module

120: processor with a memory having a plurality of memory cells

130: memory device

A, B, C, D: Scene categories

A1: state valid block

A2, A3: living body framing block

B1: first state valid block

B2: second state valid block

P1: region(s)

S102 to S132, S210 to S230: Steps
