Security monitoring method, electronic device, air conditioner and computer readable storage medium

Document No.: 191424    Publication date: 2021-11-02

Reading note: This technology, "Security monitoring method, electronic device, air conditioner and computer readable storage medium", was designed and created by 刘红铮, 宋德超 and 陈翀 on 2021-07-19. Its main content: the invention provides a security monitoring method, an electronic device, an air conditioner and a computer readable storage medium, wherein the method comprises the following steps: acquiring first depth data corresponding to a detection area detected by a depth camera; judging whether a target behavior occurs in the detection area according to the first depth data; and if the target behavior occurs in the detection area, executing a preset processing operation. Depth data are collected through the depth camera to avoid exposing indoor environment information and image information of the user, so that the impact on user privacy in case of data leakage is avoided, while target behaviors can still be monitored based on the data.

1. A security monitoring method, the method comprising:

acquiring first depth data corresponding to a detection area detected by a depth camera;

judging whether a target behavior occurs in the detection area according to the first depth data;

and if the target behavior occurs in the detection area, executing preset processing operation.

2. The security monitoring method of claim 1, wherein the step of determining whether the target behavior occurs in the detection area according to the first depth data comprises:

importing the first depth data into a trained identity recognition model;

operating the trained identity recognition model to obtain an identity recognition result;

and if the identity recognition result is that non-family members appear, taking the first depth data corresponding to the identity recognition result as second depth data, and judging whether the target behavior appears in the detection area according to the second depth data by the trained target behavior recognition model.

3. The security monitoring method according to claim 2, wherein the step of judging whether the target behavior occurs in the detection area according to the second depth data by using the trained target behavior recognition model comprises:

importing the second depth data into a trained target behavior recognition model;

running the trained target behavior recognition model to obtain a target behavior recognition result, and judging whether the target behavior recognition result meets a preset target behavior condition;

if the target behavior recognition result meets a preset target behavior condition, the target behavior occurs in the detection area;

and if the target behavior recognition result does not meet the preset target behavior condition, the target behavior does not appear in the detection area.

4. The security monitoring method of claim 3, wherein the step of importing the second depth data into the trained target behavior recognition model comprises:

acquiring second depth data of a first preset number of consecutive detection periods including the current detection period, and selecting a second preset number of frames of third depth data from the second depth data of each detection period respectively;

importing the third depth data of each detection period into the trained target behavior recognition model;

the step of judging whether the target behavior recognition result meets a preset target behavior condition comprises the following steps:

judging whether a target behavior occurs in each detection period according to the identification result corresponding to the third depth data of each detection period;

if the number of detection periods with the target behaviors is larger than or equal to a third preset number, the target behaviors appear in the detection area;

and if the number of the detection periods with the target behaviors is smaller than the third preset number, the target behaviors do not appear in the detection area.

5. The security monitoring method of claim 4, wherein the recognition result includes a target behavior accuracy rate; the step of judging whether the target behavior occurs in each detection period according to the identification result corresponding to the third depth data of each detection period comprises the following steps:

calculating the average value of target behavior accuracy rates corresponding to all third depth data in a single detection period in the target behavior identification result;

judging whether the average value is greater than a preset target accuracy rate or not;

if the average value is greater than the preset target accuracy, target behaviors appear in the detection period;

and if the average value is less than or equal to the preset target accuracy, the target behavior does not appear in the detection period.

6. The security monitoring method of claim 1, wherein the step of performing the predetermined processing operation comprises:

continuously acquiring the position of a target actor in the depth data;

and controlling the depth camera to perform gaze tracking on the target actor according to the position of the target actor.

7. The security monitoring method of claim 1, further comprising:

and when a monitoring stopping instruction sent by a user is received, controlling the depth camera to stop running and hiding the depth camera.

8. The security monitoring method according to claim 7, wherein the step of controlling the depth camera to stop operating and hiding the depth camera when receiving a monitoring stop instruction sent by a user comprises:

continuously acquiring voice data of a user, and performing voice recognition operation on the voice data to obtain a voice recognition result;

matching a control instruction corresponding to the voice recognition result;

and when the control instruction corresponding to the voice recognition result is matched as a monitoring stopping instruction, controlling the depth camera to stop running and hiding the depth camera.

9. An electronic device, comprising:

the first acquisition module is used for acquiring first depth data corresponding to the detection area detected by the depth camera;

the first judging module is used for judging whether the target behavior occurs in the detection area according to the first depth data;

and the first execution module is used for executing preset processing operation if the target behavior occurs in the detection area.

10. An air conditioner, characterized in that the air conditioner comprises a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the security monitoring method according to any one of claims 1 to 8.

11. The air conditioner of claim 10, further comprising a depth camera telescopically disposed on the air conditioner;

when the depth camera extends out, the depth camera is exposed outside the air conditioner; when the depth camera is retracted, the depth camera is hidden in the air conditioner.

12. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the security monitoring method according to any one of claims 1 to 8.

Technical Field

The invention relates to the field of home security, in particular to a security monitoring method, an electronic device, an air conditioner and a computer readable storage medium.

Background

Some existing intelligent air conditioners are provided with cameras, which enables not only temperature regulation but also security monitoring; however, in actual use, the image data acquired by the camera is prone to leakage, which has a great impact on the user.

Disclosure of Invention

The invention mainly aims to provide a security monitoring method, an electronic device, an air conditioner and a computer readable storage medium, and aims to solve the problem in the prior art that image data acquired by a camera used for security monitoring is prone to leakage and affects users.

In order to achieve the above object, the present invention provides a security monitoring method, comprising the steps of:

acquiring first depth data corresponding to a detection area detected by a depth camera;

judging whether a target behavior occurs in the detection area according to the first depth data;

and if the target behavior occurs in the detection area, executing preset processing operation.

Optionally, the step of determining whether a target behavior occurs in the detection area according to the first depth data includes:

importing the first depth data into a trained identity recognition model;

operating the trained identity recognition model to obtain an identity recognition result;

and if the identity recognition result is that non-family members appear, taking the first depth data corresponding to the identity recognition result as second depth data, and judging whether the target behavior appears in the detection area according to the second depth data by the trained target behavior recognition model.

Optionally, the step of judging whether the target behavior occurs in the detection area according to the second depth data by using the trained target behavior recognition model includes:

importing the second depth data into a trained target behavior recognition model;

running the trained target behavior recognition model to obtain a target behavior recognition result, and judging whether the target behavior recognition result meets a preset target behavior condition;

if the target behavior recognition result meets a preset target behavior condition, the target behavior occurs in the detection area;

and if the target behavior recognition result does not meet the preset target behavior condition, the target behavior does not appear in the detection area.

Optionally, the step of importing the second depth data into the trained target behavior recognition model includes:

acquiring second depth data of a first preset number of consecutive detection periods including the current detection period, and selecting a second preset number of frames of third depth data from the second depth data of each detection period respectively;

importing the third depth data of each detection period into the trained target behavior recognition model;

the step of judging whether the target behavior recognition result meets a preset target behavior condition comprises the following steps:

judging whether a target behavior occurs in each detection period according to the identification result corresponding to the third depth data of each detection period;

if the number of detection periods with the target behaviors is larger than or equal to a third preset number, the target behaviors appear in the detection area;

and if the number of the detection periods with the target behaviors is smaller than the third preset number, the target behaviors do not appear in the detection area.

Optionally, the recognition result includes a target behavior accuracy; the step of judging whether the target behavior occurs in each detection period according to the identification result corresponding to the third depth data of each detection period comprises the following steps:

calculating the average value of target behavior accuracy rates corresponding to all third depth data in a single detection period in the target behavior identification result;

judging whether the average value is greater than a preset target accuracy rate or not;

if the average value is greater than the preset target accuracy, target behaviors appear in the detection period;

and if the average value is less than or equal to the preset target accuracy, the target behavior does not appear in the detection period.

Optionally, the step of executing the preset processing operation includes:

continuously acquiring the position of a target actor in the depth data;

and controlling the depth camera to perform gaze tracking on the target actor according to the position of the target actor.

Optionally, the method further comprises:

and when a monitoring stopping instruction sent by a user is received, controlling the depth camera to stop running and hiding the depth camera.

Optionally, when a monitoring stop instruction sent by a user is received, the step of controlling the depth camera to stop operating and hiding the depth camera includes:

continuously acquiring voice data of a user, and performing voice recognition operation on the voice data to obtain a voice recognition result;

matching a control instruction corresponding to the voice recognition result;

and when the control instruction corresponding to the voice recognition result is matched as a monitoring stopping instruction, controlling the depth camera to stop running and hiding the depth camera.

In order to achieve the above object, the present invention also provides an electronic device, including:

the first acquisition module is used for acquiring first depth data corresponding to the detection area detected by the depth camera;

the first judging module is used for judging whether the target behavior occurs in the detection area according to the first depth data;

and the first execution module is used for executing preset processing operation if the target behavior occurs in the detection area.

Optionally, the first determining module includes:

the first execution submodule is used for importing the first depth data into a trained identity recognition model;

the second execution submodule is used for operating the trained identity recognition model to obtain an identity recognition result;

and the third execution sub-module is used for taking the first depth data corresponding to the identity recognition result as second depth data if the identity recognition result indicates that non-family members appear, and judging whether the target behavior appears in the detection area or not by the trained target behavior recognition model according to the second depth data.

Optionally, the third execution submodule includes:

the first execution unit is used for importing the second depth data into a trained target behavior recognition model;

the first judgment unit is used for operating the trained target behavior recognition model to obtain a target behavior recognition result and judging whether the target behavior recognition result meets a preset target behavior condition;

the second execution unit is used for judging that the target behavior occurs in the detection area if the target behavior recognition result meets a preset target behavior condition;

and the third execution unit is used for judging that the target behavior does not appear in the detection area if the target behavior recognition result does not meet the preset target behavior condition.

Optionally, the first execution unit includes:

the first acquisition subunit is used for acquiring second depth data of a first preset number of consecutive detection periods including the current detection period, and for selecting a second preset number of frames of third depth data from the second depth data of each detection period respectively;

the first execution subunit is used for importing the third depth data of each detection period into the trained target behavior recognition model;

the first judgment unit includes:

the first judging subunit is used for judging whether the target behavior occurs in each detection period according to the identification result corresponding to the third depth data of each detection period;

the second execution subunit is used for determining that the target behavior occurs in the detection area if the number of detection periods in which the target behavior occurs is greater than or equal to a third preset number;

and the third execution subunit is used for determining that the target behavior does not occur in the detection area if the number of detection periods in which the target behavior occurs is smaller than the third preset number.

Optionally, the recognition result includes a target behavior accuracy;

the first judging subunit is further configured to calculate an average value of target behavior accuracy rates corresponding to all third depth data in a single detection period in the target behavior recognition result;

the first judging subunit is further configured to judge whether the average value is greater than a preset target accuracy rate;

the first judging subunit is further configured to determine that a target behavior occurs in the detection period if the average value is greater than the preset target accuracy;

the first judging subunit is further configured to determine that no target behavior occurs in the detection period if the average value is less than or equal to the preset target accuracy.

Optionally, the first execution module includes:

the first acquisition submodule is used for continuously acquiring the position of a target actor in the depth data;

and the fourth execution submodule is used for controlling the depth camera to perform gaze tracking on the target actor according to the position of the target actor.

Optionally, the electronic device further comprises:

and the second execution module is used for controlling the depth camera to stop running and hiding the depth camera when receiving a monitoring stopping instruction sent by a user.

Optionally, the second execution module includes:

the third execution submodule is used for continuously acquiring voice data of a user and carrying out voice recognition operation on the voice data to obtain a voice recognition result;

the fourth execution sub-module is used for matching the control instruction corresponding to the voice recognition result;

and the fifth execution submodule is used for controlling the depth camera to stop running and hiding the depth camera when the control instruction corresponding to the voice recognition result is matched as a monitoring stopping instruction.

To achieve the above object, the present invention further provides an air conditioner, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the security monitoring method as described above.

Optionally, the air conditioner further comprises a depth camera, and the depth camera is telescopically arranged on the air conditioner;

when the depth camera extends out, the depth camera is exposed outside the air conditioner; when the depth camera is retracted, the depth camera is hidden in the air conditioner.

To achieve the above object, the present invention further provides a computer-readable storage medium, having a computer program stored thereon, where the computer program is executed by a processor to implement the steps of the security monitoring method as described above.

According to the security monitoring method, the electronic device, the air conditioner and the computer readable storage medium of the present invention, first depth data corresponding to a detection area detected by the depth camera are obtained; whether a target behavior occurs in the detection area is judged according to the first depth data; and if the target behavior occurs in the detection area, a preset processing operation is executed. Depth data are collected through the depth camera to avoid exposing indoor environment information and image information of the user, so that the impact on user privacy in case of data leakage is avoided, while target behaviors can still be monitored based on the data.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.

FIG. 1 is a schematic flow chart of a security monitoring method according to a first embodiment of the present invention;

FIG. 2 is a detailed flowchart of step S20 of the security monitoring method according to the second embodiment of the present invention;

fig. 3 is a schematic block diagram of an air conditioner according to the present invention.

Detailed Description

It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

The invention provides a security monitoring method. Referring to fig. 1, which is a schematic flow chart of a first embodiment of the security monitoring method, the method comprises the following steps:

step S10, acquiring first depth data corresponding to the detection area detected by the depth camera;

the depth camera, i.e. the ToF (Time of Flight) camera, emits a set of laser pulses, such as infrared light, which cannot be seen by human eyes, and reflects back to the camera after encountering an object, and forms a set of distance depth data, i.e. first depth data, by calculating a Time difference or a phase difference from emission to reflection back to the camera, thereby obtaining a three-dimensional spatial model. The detection area is a detectable area of the depth camera.

Step S20, judging whether the target behavior occurs in the detection area according to the first depth data;

the human body and the object in the detection area can be distinguished according to the first depth data, and whether target behaviors occur or not is judged according to the states of the human body and the object. Target behaviors include, but are not limited to, theft, robbery, violence, and the like.

Step S30, if the target behavior occurs in the detection area, perform a preset processing operation.

When the target behavior is detected, the user needs to be reminded in time so as to respond. The preset processing operation includes notifying the user, raising an alarm, deterring the target actor, and the like. Specifically, the contact information of departments such as the public security or fire-fighting departments is preset in the system at the factory, and the user can also configure user equipment information; when the target behavior occurs in the detection area, alarm information is sent to the user equipment and to departments such as the public security or fire-fighting departments respectively. Deterrence can be implemented by playing preset audio: for example, a buzzer alarm audio is preset in the system, or the user records or imports his or her own voice, and when the target behavior occurs in the detection area, the buzzer alarm audio or the user's voice is played.
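
The sketch below illustrates one possible dispatch of this preset processing operation; the contact lists, the audio file name and the injected `send`/`play` callables are assumptions for illustration, not interfaces defined in this document.

```python
# Illustrative sketch of the preset processing operation: notify, alarm, deter.
from dataclasses import dataclass, field

@dataclass
class AlarmConfig:
    user_devices: list[str] = field(default_factory=list)                              # user-configured
    authority_contacts: list[str] = field(default_factory=lambda: ["police", "fire"])  # factory preset
    alarm_audio: str = "buzzer.wav"                                                     # preset or user-recorded

def handle_target_behavior(config: AlarmConfig, send, play) -> None:
    """Notify the user devices and the preset authorities, then play deterrent audio."""
    for device in config.user_devices:
        send(device, "Target behavior detected in the monitored area")
    for contact in config.authority_contacts:
        send(contact, "Alarm: target behavior detected")
    play(config.alarm_audio)
```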

The depth data is collected through the depth camera so as to avoid exposing indoor environment information and image information of a user, so that influence on privacy of the user when data are leaked is avoided, and meanwhile target behaviors can be monitored based on the data.

Further, referring to fig. 2, in a second embodiment of the security monitoring method of the present invention, proposed on the basis of the first embodiment, the step S20 includes the following steps:

step S21, importing the first depth data into the trained identity recognition model;

step S22, operating the trained identity recognition model to obtain an identity recognition result;

and step S23, if the identity recognition result is that non-family members appear, taking the first depth data corresponding to the identity recognition result as second depth data, and judging whether the target behavior appears in the detection area according to the second depth data by the trained target behavior recognition model.

And if the identity recognition result indicates that no non-family member exists, the target behavior does not exist in the detection area.

In this embodiment, the identity recognition model adopts a convolutional neural network model; it can be understood that the type of the identity recognition model can be selected from the existing models according to the actual application scenario, which is not described herein.

Before the identity recognition model is used, it needs to be trained to obtain the trained identity recognition model. Specifically, the posture information of the family members is acquired through the depth camera, and the identity recognition model is trained with the posture information of the family members as training samples. It should be noted that the training samples of the identity recognition model need to include positive samples and negative samples: the posture information of the family members constitutes the positive samples, and the recognition result of the identity recognition model corresponding to a positive sample is that the person is a family member; the recognition result corresponding to a negative sample is that the person is a non-family member. The negative samples may be posture information of other persons preset in the system by the manufacturer, or may be acquired by the system over the network when the identity recognition model needs to be trained; for example, if the manufacturer stores the negative samples on a server, the system fetches them from the server over the network. It can be understood that a loss function needs to be set in the training process of the identity recognition model to correct the recognition logic of the model; the specific type of loss function may be selected from the prior art according to actual needs and is not described here again. The training samples are input into the identity recognition model in sequence, the model is corrected continuously according to the error value obtained from the loss function, and the trained identity recognition model is obtained when training reaches a completion condition; the completion condition includes, but is not limited to, a number of training iterations or a total training error value.
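
As an illustration only, the sketch below implements such a training loop with a small convolutional network standing in for the identity recognition model. The use of PyTorch, the network shape and the 64x64 depth-frame size are assumptions; the document only prescribes positive and negative posture samples, a loss function and a completion condition.

```python
# Minimal sketch of training a CNN identity recognition model on depth frames.
import torch
import torch.nn as nn

class IdentityNet(nn.Module):
    def __init__(self, num_classes: int = 2):  # class 0 = family member, class 1 = non-family member
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 1x64x64 input frames

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def train(model, loader, epochs: int = 10, lr: float = 1e-3):
    """Train until the completion condition (here: a fixed number of epochs)."""
    criterion = nn.CrossEntropyLoss()                    # the loss function that corrects the model
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for depth_frames, labels in loader:              # positive and negative samples
            optimizer.zero_grad()
            loss = criterion(model(depth_frames), labels)
            loss.backward()
            optimizer.step()
    return model
```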

The first depth data are input into the trained identity recognition model for recognition. When the recognition result is that no person is present or that only family members are present, it is considered that no target behavior exists, and no subsequent judgment of the target behavior is needed; when the recognition result indicates a non-family member, a target behavior may possibly occur, so the subsequent judgment of the target behavior is performed for further confirmation.

The second depth data are the depth data of the non-family members contained in the first depth data. Depth data other than those of the non-family members contribute little as a basis for identifying the target behavior; therefore, in order to reduce the amount of calculation without affecting the recognition result, the depth data other than those of the non-family members are removed from the first depth data to obtain the second depth data, and the target behavior is judged according to the second depth data.
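
A minimal sketch of this screening step is given below, under the assumption that an identity recognition result is available per depth frame; the exact data structures are not fixed by the document.

```python
# Keep only the depth frames in which a non-family member was recognized.
def select_second_depth_data(first_depth_data, identity_results):
    return [frame for frame, result in zip(first_depth_data, identity_results)
            if result == "non_family_member"]
```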

In this embodiment, by identifying whether the first depth data include a non-family member, the first depth data can be screened to obtain the second depth data, so that the target behavior is judged according to the second depth data and the amount of calculation is reduced.

Further, in a third embodiment of the security monitoring method of the present invention, proposed on the basis of the second embodiment, the step S23 includes the following steps:

step S231, importing the second depth data into a trained target behavior recognition model;

step S232, operating the trained target behavior recognition model to obtain a target behavior recognition result, and judging whether the target behavior recognition result meets a preset target behavior condition;

step S233, if the target behavior recognition result meets the preset target behavior condition, the target behavior occurs in the detection area;

in step S234, if the target behavior recognition result does not satisfy the preset target behavior condition, no target behavior occurs in the detection area.

In this embodiment, the target behavior recognition model adopts a convolutional neural network model, and it can be understood that the type of the target behavior recognition model can also be selected from existing models according to an actual application scenario, which is not described herein again.

Before the target behavior recognition model is used, it needs to be trained to obtain the trained target behavior recognition model. The training of the target behavior recognition model may be performed before shipment or after the system is started. Specifically, the training samples of the target behavior recognition model need to include positive samples and negative samples: the recognition result corresponding to a positive sample is that the target behavior exists, and the recognition result corresponding to a negative sample is that no target behavior exists. The training samples may be preset in the system by the manufacturer, or may be acquired by the system over the network when the target behavior recognition model needs to be trained; for example, if the manufacturer stores the required training samples on a server, the system fetches them from the server over the network. It can be understood that a loss function also needs to be set in the training process of the target behavior recognition model to correct the recognition logic of the model; the specific type of loss function may be selected from the prior art according to actual needs and is not described here again. The training samples are input into the target behavior recognition model in sequence, the model is corrected continuously according to the error value obtained from the loss function, and the trained target behavior recognition model is obtained when training reaches a completion condition; the completion condition includes, but is not limited to, a number of training iterations or a total training error value.

The second depth data are input into the trained target behavior recognition model for recognition. When the recognition result does not meet the preset target behavior condition, it is considered that no target behavior exists; when the recognition result meets the preset target behavior condition, it is considered that a target behavior may have occurred.
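
For illustration, a sketch of running such a model on one frame of second depth data follows; the softmax confidence is read as the "target behavior accuracy rate" used in the later embodiments. The PyTorch interface mirrors the earlier training sketch and is an assumption.

```python
# Run the trained target behavior recognition model on a single depth frame.
import torch

def target_behavior_accuracy(model: torch.nn.Module, depth_frame: torch.Tensor) -> float:
    """Return the model's confidence that the target behavior is present in one frame."""
    model.eval()
    with torch.no_grad():
        logits = model(depth_frame.unsqueeze(0))   # add a batch dimension
        probs = torch.softmax(logits, dim=1)
    return probs[0, 1].item()                      # class 1 = target behavior present
```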

The embodiment can reasonably identify the target behavior.

Further, in a fourth embodiment of the security monitoring method of the present invention, proposed on the basis of the third embodiment, the step S231 includes the following steps:

step S2311, acquiring second depth data of a first preset number of consecutive detection periods including the current detection period, and selecting a second preset number of frames of third depth data from the second depth data of each detection period respectively;

step S2312, importing the third depth data of each detection period into the trained target behavior recognition model;

the step S232 includes the steps of:

step S2321, judging whether a target behavior occurs in each detection period according to the identification result corresponding to the third depth data of each detection period;

step S2322, if the number of detection cycles with target behaviors is greater than or equal to a third preset number, the target behaviors appear in the detection area;

step S2323, if the number of detection cycles in which the target behavior occurs is smaller than the third preset number, the target behavior does not occur in the detection area.

In order to avoid misjudgment and improve recognition accuracy, this embodiment does not decide whether the target behavior occurs based on a single recognition result of the target behavior recognition model, but makes a comprehensive judgment based on multiple recognition results.

The detection periods are divided by time length; in this embodiment the length of a detection period is 2 minutes and the first preset number is 3. It can be understood that the specific detection period length and the first preset number may be set according to the actual application scenario. It should be noted that the current detection period is the latest period for which all depth data within the period have been acquired. The second depth data of three consecutive detection periods including the current detection period are acquired; since the current detection period is the latest period, the three consecutive detection periods are the three periods ending with the current detection period. If the current period is period X_T, the three consecutive detection periods are X_{T-2}, X_{T-1} and X_T; in the next round of judgment, the three consecutive detection periods collected are X_{T-1}, X_T and X_{T+1}.

The third depth data is frame data selected from the second depth data; the third depth data may be randomly selected from the second depth data, or may be selected from the second depth data according to a certain rule, for example, a frame is selected as the third depth data every preset time; in this embodiment, 10 frames of third depth data are selected from a single detection period.

Whether the target behavior occurs in each detection period is judged according to the third depth data of that period; when the number of detection periods in which the target behavior occurs reaches the third preset number, the target behavior is considered to have occurred. Specifically, the third preset number may be a fixed value (in this embodiment, the third preset number is 2), or may be set according to the total number of detection periods, for example as the smallest natural number greater than half of the total number of detection periods; when the number of detection periods is 3, the third preset number is therefore set to 2.
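
The following sketch ties the example values of this embodiment together (3 consecutive detection periods, 10 sampled frames per period, third preset number 2). The `period_occurred` callback stands for the per-period judgment of the next embodiment (see the sketch there); all names are illustrative assumptions.

```python
# Period-level voting over consecutive detection periods.
import random

FIRST_PRESET_NUMBER = 3     # consecutive detection periods, including the current one
SECOND_PRESET_NUMBER = 10   # frames of third depth data sampled per period
THIRD_PRESET_NUMBER = 2     # periods with target behavior needed to raise an alarm

def sample_third_depth_data(second_depth_data_of_period):
    """Randomly select the second preset number of frames from one detection period."""
    k = min(SECOND_PRESET_NUMBER, len(second_depth_data_of_period))
    return random.sample(second_depth_data_of_period, k)

def target_behavior_in_area(periods, period_occurred) -> bool:
    """True if the target behavior occurred in enough of the most recent periods."""
    recent = periods[-FIRST_PRESET_NUMBER:]
    hits = sum(1 for period in recent
               if period_occurred(sample_third_depth_data(period)))
    return hits >= THIRD_PRESET_NUMBER
```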

The embodiment can reasonably avoid misjudgment of the target behavior.

Further, in a fifth embodiment of the security monitoring method of the present invention, proposed on the basis of the fourth embodiment, the recognition result includes a target behavior accuracy rate, and the step S2321 includes the following steps:

step S23211, calculating an average value of target behavior accuracy rates corresponding to all third depth data in a single detection period in the target behavior identification result;

step S23212, judging whether the average value is greater than a preset target accuracy rate;

step S23213, if the average value is greater than the preset target accuracy, target behaviors appear in the detection period;

step S23214, if the average value is less than or equal to the preset target accuracy, no target behavior occurs in the detection period.

After the target behavior recognition model is run, it outputs a recognition result, which includes the accuracy rate that the target behavior occurs and the accuracy rate that the target behavior does not occur; the higher an accuracy rate, the more likely the corresponding result is.

After the accuracy rates corresponding to all third depth data in a single detection period are acquired, the average accuracy rate of all third depth data in that period, i.e. the average value, is calculated; when the average value is greater than the preset target accuracy, the detection period is considered to contain the target behavior. In this embodiment, the preset target accuracy is 0.6. It can be understood that the specific value of the preset target accuracy may be adjusted according to the actual usage scenario, which is not described here again.
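
A compact sketch of this per-period check, using the 0.6 threshold of this embodiment, is given below; it supplies the `period_occurred` judgment referenced in the earlier voting sketch, with per-frame accuracy rates assumed as plain floats.

```python
# Average the per-frame target behavior accuracy rates of one detection period.
PRESET_TARGET_ACCURACY = 0.6

def period_occurred(frame_accuracies: list[float]) -> bool:
    """True if the average frame accuracy in one detection period exceeds the threshold."""
    if not frame_accuracies:
        return False
    average = sum(frame_accuracies) / len(frame_accuracies)
    return average > PRESET_TARGET_ACCURACY
```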

The embodiment can reasonably identify the target behavior in a single detection period.

Further, in a sixth embodiment of the security monitoring method of the present invention, proposed on the basis of the first embodiment, the step S30 includes the following steps:

step S31, continuously acquiring the position of the target actor in the depth data;

and step S32, controlling the depth camera to perform gaze tracking on the target actor according to the position of the target actor.

When the target behavior is detected, gaze tracking is performed on the target actor in order to better carry out the monitoring operation. Specifically, the depth camera is provided with a focus region, such as the center of its detectable range; the position of the target actor is identified in the current monitoring period, and after the position is identified, the focus region of the depth camera is controlled to move to the position of the target actor so as to perform gaze tracking on the target actor.
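
A minimal sketch of such tracking follows; the pan/tilt interface, the focus-region coordinates and the proportional gain are hypothetical, since the document does not define a camera control API.

```python
# Nudge the camera so the target actor moves towards the focus region (frame centre).
def track_target(camera, target_xy, frame_center=(320, 240), gain=0.1):
    dx = target_xy[0] - frame_center[0]
    dy = target_xy[1] - frame_center[1]
    camera.pan(gain * dx)    # hypothetical pan command
    camera.tilt(gain * dy)   # hypothetical tilt command
```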

By performing gaze tracking on the target actor, this embodiment enables the course of the target behavior to be captured effectively.

Further, in a seventh embodiment of the security monitoring method of the present invention, proposed on the basis of the first embodiment, the method further includes the following steps:

and step S40, when receiving a monitoring stopping instruction sent by a user, controlling the depth camera to stop running and hiding the depth camera.

The step S40 includes the steps of:

step S41, continuously collecting voice data of a user, and carrying out voice recognition operation on the voice data to obtain a voice recognition result;

step S42, matching a control instruction corresponding to the voice recognition result;

and step S43, when the control instruction corresponding to the voice recognition result is matched as a monitoring stopping instruction, controlling the depth camera to stop running and hiding the depth camera.

The monitoring stop instruction is an instruction by which the user controls the depth camera to close. In this embodiment, the user's monitoring stop instruction is obtained by means of voice recognition; the specific voice recognition method can be selected according to the actual application scenario and is not described here. When monitoring needs to be stopped, the user can also send the monitoring stop instruction by means of a remote control, an intelligent terminal device, and the like. When the user does not need to monitor the target behavior, the depth camera can be closed in consideration of the user's privacy.
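
As an illustration, the sketch below matches a recognized utterance to the monitoring stop instruction; the phrase list and the `camera.stop()`/`camera.retract()` calls are assumptions, not APIs from the document.

```python
# Match the speech recognition result to a control instruction and act on it.
STOP_PHRASES = {"stop monitoring", "turn off the camera"}

def dispatch_voice_command(recognized_text: str, camera) -> None:
    """Stop and hide the depth camera when a stop-monitoring instruction is matched."""
    if recognized_text.strip().lower() in STOP_PHRASES:
        camera.stop()       # stop running
        camera.retract()    # hide the camera inside the air conditioner
```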

The depth camera in the embodiment is telescopically arranged on the air conditioner, and can rotate; the depth camera extends out when working and is exposed outside the air conditioner; the depth camera is contracted when being closed and is hidden in the air conditioner.

In the embodiment, when the user does not need to use the monitoring function, the depth camera is closed, and the privacy safety of the user is ensured.

It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.

Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.

The present application further provides an electronic device for implementing the above security monitoring method, where the electronic device includes:

the first acquisition module is used for acquiring first depth data corresponding to the detection area detected by the depth camera;

the first judging module is used for judging whether the target behavior occurs in the detection area according to the first depth data;

and the first execution module is used for executing preset processing operation if the target behavior occurs in the detection area.

It should be noted that the first obtaining module in this embodiment may be configured to execute step S10 in this embodiment, the first determining module in this embodiment may be configured to execute step S20 in this embodiment, and the first executing module in this embodiment may be configured to execute step S30 in this embodiment.

The depth data are collected through the depth camera to avoid exposing indoor environment information and image information of a user, so that influence on privacy of the user when the data are leaked is avoided, and meanwhile target behaviors can still be monitored based on the data.

Further, the first determining module includes:

the first execution submodule is used for importing the first depth data into a trained identity recognition model;

the second execution submodule is used for operating the trained identity recognition model to obtain an identity recognition result;

and the third execution sub-module is used for taking the first depth data corresponding to the identity recognition result as second depth data if the identity recognition result indicates that non-family members appear, and judging whether the target behavior appears in the detection area or not by the trained target behavior recognition model according to the second depth data.

Further, the third execution submodule includes:

the first execution unit is used for importing the second depth data into a trained target behavior recognition model;

the first judgment unit is used for operating the trained target behavior recognition model to obtain a target behavior recognition result and judging whether the target behavior recognition result meets a preset target behavior condition;

the second execution unit is used for judging that the target behavior occurs in the detection area if the target behavior recognition result meets a preset target behavior condition;

and the third execution unit is used for judging that the target behavior does not appear in the detection area if the target behavior recognition result does not meet the preset target behavior condition.

Further, the first execution unit includes:

the first acquisition subunit is used for acquiring second depth data of a first preset number of consecutive detection periods including the current detection period, and for selecting a second preset number of frames of third depth data from the second depth data of each detection period respectively;

the first execution subunit is used for importing the third depth data of each detection period into the trained target behavior recognition model;

the first judgment unit includes:

the first judging subunit is used for judging whether the target behavior occurs in each detection period according to the identification result corresponding to the third depth data of each detection period;

the second execution subunit is used for determining that the target behavior occurs in the detection area if the number of detection periods in which the target behavior occurs is greater than or equal to a third preset number;

and the third execution subunit is used for determining that the target behavior does not occur in the detection area if the number of detection periods in which the target behavior occurs is smaller than the third preset number.

Further, the identification result comprises target behavior accuracy;

the first judging subunit is further configured to calculate an average value of target behavior accuracy rates corresponding to all third depth data in a single detection period in the target behavior recognition result;

the first judging subunit is further configured to judge whether the average value is greater than a preset target accuracy rate;

the first judging subunit is further configured to determine that a target behavior occurs in the detection period if the average value is greater than the preset target accuracy;

the first judging subunit is further configured to determine that no target behavior occurs in the detection period if the average value is less than or equal to the preset target accuracy.

Further, the first execution module includes:

the first acquisition submodule is used for continuously acquiring the position of a target actor in the depth data;

and the fourth execution submodule is used for controlling the depth camera to perform gaze tracking on the target actor according to the position of the target actor.

Further, the electronic device further includes:

and the second execution module is used for controlling the depth camera to stop running and hiding the depth camera when receiving a monitoring stopping instruction sent by a user.

Further, the second execution module includes:

the third execution submodule is used for continuously acquiring voice data of a user and carrying out voice recognition operation on the voice data to obtain a voice recognition result;

the fourth execution sub-module is used for matching the control instruction corresponding to the voice recognition result;

and the fifth execution submodule is used for controlling the depth camera to stop running and hiding the depth camera when the control instruction corresponding to the voice recognition result is matched as a monitoring stopping instruction.

It should be noted here that the above modules correspond to the steps of the method embodiments and share the same examples and application scenarios, but they are not limited to the disclosure of the above embodiments. The modules may be implemented by software as part of the apparatus, or may be implemented by hardware, where the hardware environment includes a network environment.

Referring to fig. 3, the air conditioner may include, in its hardware structure, a communication module 10, a memory 20 and a processor 30. In the air conditioner, the processor 30 is connected to the memory 20 and the communication module 10 respectively, and the memory 20 stores a computer program which, when executed by the processor 30, implements the steps of the above method embodiments.

The communication module 10 may be connected to an external communication device through a network. The communication module 10 may receive a request from an external communication device, and may also send a request, an instruction, and information to the external communication device, where the external communication device may be another air conditioner, a server, or an internet of things device, such as a television.

The memory 20 may be used to store software programs as well as various data. The memory 20 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (for example, determining whether a target behavior occurs in the detection area according to the first depth data), and the like; the storage data area may include a database, and the storage data area may store data or information created according to use of the system, or the like. Further, the memory 20 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.

The processor 30, which is a control center of the air conditioner, connects various parts of the entire air conditioner using various interfaces and lines, and performs various functions of the air conditioner and processes data by operating or executing software programs and/or modules stored in the memory 20 and calling data stored in the memory 20, thereby integrally monitoring the air conditioner. Processor 30 may include one or more processing units; alternatively, the processor 30 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 30.

Furthermore, the air conditioner also comprises a depth camera which is telescopically arranged on the air conditioner;

when the depth camera extends out, the depth camera is exposed outside the air conditioner; when the depth camera is retracted, the depth camera is hidden in the air conditioner.

The user controls the working state of the depth camera through the monitoring stop instruction: the depth camera extends out when working and retracts when closed.

Although not shown in fig. 3, the air conditioner may further include a circuit control module for connecting with a power supply to ensure normal operation of other components. Those skilled in the art will appreciate that the air conditioning configuration shown in fig. 3 does not constitute a limitation of the air conditioner, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.

The invention also proposes a computer-readable storage medium on which a computer program is stored. The computer-readable storage medium may be the Memory 20 in the air conditioner of fig. 3, and may also be at least one of a ROM (Read-Only Memory)/RAM (Random Access Memory), a magnetic disk, and an optical disk, where the computer-readable storage medium includes instructions for enabling a terminal device (which may be a television, an automobile, a mobile phone, a computer, a server, a terminal, or a network device) having a processor to execute the method according to the embodiments of the present invention.

In the present invention, the terms "first", "second", "third", "fourth" and "fifth" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance, and those skilled in the art can understand the specific meanings of the above terms in the present invention according to specific situations.

In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.

Although the embodiment of the present invention has been shown and described, the scope of the present invention is not limited thereto, it should be understood that the above embodiment is illustrative and not to be construed as limiting the present invention, and that those skilled in the art can make changes, modifications and substitutions to the above embodiment within the scope of the present invention, and that these changes, modifications and substitutions should be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
