Information processing method, information processing program, and information processing system
Reading note: This technology, "Information processing method, information processing program, and information processing system," was created by 高柳哲也 (Tetsuya Takayanagi) on 2019-05-21. Its main content is as follows: An information processing method for causing a computer to execute: detecting a person within a predetermined space coughing or sneezing (S6), acquiring an image of the predetermined space captured when the cough or the sneeze is detected (S7), detecting a state of the mouth of the person from the image (S8), generating a control signal for controlling at least one of a wind direction and an air volume of air sent from an airflow generating device that generates an airflow in the predetermined space, based on the recognized state of the mouth of the person (S18), and outputting the generated control signal.
1. An information processing method for causing a computer to execute:
detecting a person within a predetermined space coughing or sneezing,
acquiring an image of the predetermined space captured when the cough or the sneeze is detected,
detecting a state of the mouth of the person from the image,
generating a control signal for controlling at least one of a wind direction and an air volume of air sent from an air flow generating device that generates an air flow in the predetermined space, based on the recognized state of the mouth of the person,
and outputting the generated control signal.
2. The information processing method according to claim 1,
wherein the recognition of the state of the mouth of the person recognizes one of a state in which the mouth of the person is not covered and a state in which the mouth of the person is covered with a hand.
3. The information processing method according to claim 1,
wherein the recognition of the state of the mouth of the person recognizes any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, and a state in which the mouth of the person is covered with a mask.
4. The information processing method according to claim 1,
wherein the recognition of the state of the mouth of the person recognizes any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief or clothes, and a state in which the mouth of the person is covered with a mask.
5. The information processing method according to any one of claims 1 to 4,
wherein an orientation of the face of the person at a time when the cough or the sneeze is detected is further identified from the image,
and the wind direction is made different between a case where the face faces forward and a case where the face faces downward.
6. The information processing method according to any one of claims 1 to 5,
wherein position coordinates of the person are further calculated from the image,
and the control signal is generated based on the recognized state of the mouth of the person and the position coordinates.
7. The information processing method according to claim 6,
wherein the airflow generating device is selected from a plurality of airflow generating devices based on the position coordinates.
8. A program that causes a computer to execute a process, the process comprising:
detecting a person within a predetermined space coughing or sneezing,
acquiring an image of the predetermined space captured when the cough or the sneeze is detected,
detecting a state of the mouth of the person from the image,
generating a control signal for controlling at least one of a wind direction and an air volume of air sent from an air flow generating device that generates an air flow in the predetermined space, based on a state of the mouth,
and outputting the generated control signal.
9. An information processing system comprising:
a camera that photographs a predetermined space;
an air flow generating device which generates air flow in the predetermined space; and
an information processing apparatus,
wherein the information processing apparatus executes the following:
detecting a person within the predetermined space coughing or sneezing,
acquiring an image of the predetermined space captured by the camera when the cough or the sneeze is detected,
detecting a state of the mouth of the person from the image,
generating a control signal for controlling at least one of a wind direction and an air volume of air sent from the airflow generating device based on a state of the mouth,
and outputting the generated control signal.
Technical Field
The present disclosure relates to an information processing method, an information processing program, and an information processing system for controlling an air flow in a predetermined space in which a cough or sneeze is detected.
Background
Many infectious diseases, including influenza, are transmitted from person to person by, for example, contact infection, droplet infection, or airborne infection. In particular, the appearance of an infected person in a facility or the like is likely to lead to collective infection throughout the facility, so countermeasures are urgently needed. For example, in facilities such as nursing facilities where many elderly people live, infectious diseases are likely to become serious, and in the worst case an elderly person infected with an infectious disease may die. In such care facilities, measures against infection are taken at the personal level, for example, caregivers wearing masks and thorough hand hygiene. In addition, regarding influenza, considering that droplet infection or airborne infection is the main infection route, it is important from the viewpoint of infection countermeasures that people are not exposed to the coughing or sneezing of an infected person.
For example,
Further,
As a result, when a person coughs at an initial speed of 10 m/s, the droplets reach a susceptible person 1 m ahead in about 5 seconds, and that person is exposed to the droplets. After that, the droplets spread to the surroundings over several tens of seconds or more.
However, since the ventilation condition in
Disclosure of Invention
However, in the above-described conventional techniques, the risk of infection in a predetermined space where a cough or sneeze is detected cannot be reduced, and further improvement is required.
The present disclosure has been made to solve the above-described problems, and provides a technique capable of reducing the risk of infection in a predetermined space where coughing or sneezing is detected.
An information processing method according to an aspect of the present disclosure causes a computer to execute: detecting a person in a predetermined space coughing or sneezing, acquiring an image of the predetermined space captured when the cough or the sneeze is detected, detecting a state of a mouth of the person from the image, generating a control signal for controlling at least one of a wind direction and an air volume of air sent from an airflow generating device that generates an airflow in the predetermined space, based on the recognized state of the mouth of the person, and outputting the generated control signal.
These general or specific aspects may be realized by an apparatus, a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium, or by any combination of apparatuses, systems, methods, integrated circuits, computer programs, and computer-readable recording media. Examples of the computer-readable recording medium include nonvolatile recording media such as a CD-ROM (Compact Disc-Read Only Memory).
According to the present disclosure, since the locally present droplets can be diffused and the concentration thereof can be made uniform, the risk of infection in a predetermined space where a cough or sneeze is detected can be reduced.
Further advantages and effects of one aspect of the disclosure will become apparent from the specification and the drawings. Such advantages and/or effects are each provided by the features described in several embodiments and in the specification and drawings, but not all of them need necessarily be provided in order to obtain one or more identical advantages and/or effects.
Drawings
Fig. 1 is a diagram showing the configuration of an airflow control system according to Embodiment 1.
Fig. 2 is a diagram for explaining the detection of a cough or a sneeze based on the distance between the face and the hand of the subject person.
Fig. 3 is a diagram for explaining the detection of a cough or a sneeze based on the area of the mouth of the subject person.
Fig. 4 is a diagram showing an example of a time-series change in the area of the mouth of the subject person or the distance between the face and the hand of the subject person in
Fig. 5 is a diagram showing an example of the first airflow control table in a case where the airflow control system includes one airflow generating device and the airflow generating device is an air conditioner.
Fig. 6 is a diagram showing an example of a simulation result of the wind speed distribution in a case where, in a space in which the air conditioner and the air cleaner are arranged, the air cleaner is not driven but the air conditioner is driven and generates an airflow directed 30 degrees below the horizontal direction.
Fig. 7 is a diagram showing an example of a simulation result of the wind speed distribution in a case where, in a space in which the air conditioner and the air cleaner are arranged, the air cleaner is not driven but the air conditioner is driven and generates an airflow directed 90 degrees downward from the horizontal direction.
Fig. 8 is a diagram showing an example of the second airflow control table in a case where the airflow control system includes one airflow generating device and the airflow generating device is an air cleaner.
Fig. 9 is a diagram showing an example of a simulation result of a wind speed distribution in a case where the air conditioner is not driven but the air cleaner is driven and an airflow is generated in an upward direction of 90 degrees from the horizontal direction in a space in which the air conditioner and the air cleaner are arranged.
Fig. 10 is a diagram showing an example of a simulation result of a wind speed distribution in a case where the air conditioner is not driven but the air cleaner is driven and an airflow is generated at 45 degrees upward from the horizontal direction in a space in which the air conditioner and the air cleaner are arranged.
Fig. 11 is a diagram showing an example of the third airflow control table in a case where the airflow control system includes two airflow generating devices, the two airflow generating devices being an air conditioner and an air cleaner, respectively.
Fig. 12 is a
Fig. 13 is a second flowchart for explaining the operation of the airflow control device in
Fig. 14 is a flowchart for explaining the operation of the airflow generation device in
Fig. 15 is a diagram showing the configuration of an airflow control system according to
Fig. 16 is a
Fig. 17 is a second flowchart for explaining the operation of the airflow control device in
Fig. 18 is a diagram showing the configuration of an airflow control system according to
Fig. 19 is a flowchart for explaining the operation of the camera (camera) in
Fig. 20 is a flowchart for explaining the operation of the airflow control device in
Fig. 21 is a diagram showing the configuration of an airflow control system according to
Fig. 22 is a flowchart for explaining the operation of the camera in
Fig. 23 is a diagram showing the configuration of the infection risk evaluation system of the present disclosure.
Fig. 24 is a view showing an example of an infection risk evaluation table stored in an infection risk evaluation table storage unit in the infection risk evaluation system of the present disclosure.
Fig. 25 is a first flowchart for explaining the operation of the infection risk evaluating device of the present disclosure.
Fig. 26 is a second flowchart for explaining the operation of the infection risk evaluating device of the present disclosure.
Detailed Description
(insight underlying the present disclosure)
In the above-described conventional techniques, although a person at risk of infection can be estimated, it is difficult to prevent infection before it occurs. That is, it is difficult to prevent droplet infection or airborne infection resulting from a person being exposed to the cough or sneeze of an infected person.
People cough or sneeze in various states. For example, many people cover a part of the face, such as the nose and mouth, with their hands when coughing or sneezing. A person may also cough or sneeze while wearing a mask. The behavior of the droplets differs depending on the state of the person who coughs or sneezes.
For example, when a person coughs or sneezes with a part of the face covered by the hands, many droplets adhere to the hands without spreading. Droplets or droplet nuclei with small particle diameters leak from the gaps between the hands, but due to the pressure loss caused by the covering hands, their convection velocity is expected to be about the same as the wind velocity in the room. That is, in this case, the droplets are localized around the infected person and can be said to be almost static. In this case, it is important to rapidly diffuse the droplets remaining around the infected person to the surroundings.
In order to solve the above problem, an information processing method according to an aspect of the present disclosure causes a computer to execute: detecting a person in a predetermined space coughing or sneezing, acquiring an image of the predetermined space captured when the cough or the sneeze is detected, detecting a state of a mouth of the person from the image, generating a control signal for controlling at least one of a wind direction and an air volume of air sent from an airflow generating device that generates an airflow in the predetermined space, based on the recognized state of the mouth of the person, and outputting the generated control signal.
According to this configuration, the state of the mouth of the person at the time of coughing or sneezing of the person is recognized from the image acquired when the person is detected in the predetermined space, and the control signal for controlling at least one of the wind direction and the wind volume of the air sent from the air flow generating device that generates the air flow in the predetermined space is generated based on the recognized state of the mouth of the person.
Therefore, by generating an air flow at a local area where droplets are generated by the person's cough or sneeze, the droplets can be diffused and the concentration of the droplets can be made uniform, and thus the risk of infection in a predetermined space where a cough or sneeze is detected can be reduced.
In the above information processing method, the state recognition of the mouth portion of the person may be performed to recognize either a state in which the mouth of the person is not covered or a state in which the mouth of the person is covered with a hand.
According to this configuration, the location where droplets locally remain after the person's cough or sneeze differs between a state in which the mouth of the person is not covered and a state in which the mouth of the person is covered with a hand. Therefore, by determining the position where the airflow is generated according to which of these two states the mouth of the person is in, the locally present droplets can be diffused more reliably.
In the information processing method, the state recognition of the mouth portion of the person may be performed to recognize any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, and a state in which the mouth of the person is covered with a mask.
According to this configuration, the location where droplets are generated by the person's cough or sneeze differs among a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, and a state in which the mouth of the person is covered with a mask. Therefore, by determining the position where the airflow is generated according to which of these three states the mouth of the person is in, the locally present droplets can be diffused more reliably.
In the information processing method, the state recognition of the mouth portion of the person may recognize any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief or clothes, and a state in which the mouth of the person is covered with a mask.
According to this configuration, the location where droplets are generated by the person's cough or sneeze differs among a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief or clothes, and a state in which the mouth of the person is covered with a mask. Therefore, by determining the position where the airflow is generated according to which of these four states the mouth of the person is in, the locally present droplets can be diffused more reliably.
In the above information processing method, an orientation of the face of the person at the time when the cough or the sneeze is detected may further be identified from the image, and the wind direction may be made different between a case where the face faces forward and a case where the face faces downward.
According to this configuration, when a person coughs or sneezes with the face facing forward, the droplets are scattered ahead of the person, whereas when a person coughs or sneezes with the face facing downward, the droplets remain locally in the lower part of the predetermined space. By making the direction of the air sent by the airflow generating device different between these two cases, the airflow can therefore be generated accurately at the location where the droplets are locally present.
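As a sketch, the face-orientation-dependent control above could be expressed as a simple lookup. The concrete angle values are illustrative assumptions only (they mirror the 30- and 90-degree downward airflows simulated in Figs. 6 and 7); the document states only that the wind direction differs between the two cases.

```python
# Hypothetical mapping from the recognized face orientation to a wind-direction
# setting of the airflow generating device. The angles are assumed values for
# illustration, echoing the simulated cases in Figs. 6 and 7.
WIND_DIRECTION_BY_FACE = {
    "forward": 30,   # degrees below horizontal: droplets scatter ahead of the person
    "downward": 90,  # degrees below horizontal: droplets stay low around the person
}

def wind_direction_for(face_orientation: str) -> int:
    """Return the wind direction (degrees below horizontal) for a face orientation."""
    return WIND_DIRECTION_BY_FACE[face_orientation]
```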
In the above information processing method, the position coordinates of the person may be calculated from the image, and the control signal may be generated based on the recognized state of the mouth of the person and the position coordinates.
According to this configuration, the position where the droplet is locally present can be more accurately specified based on the state of the mouth of the person when the cough or sneeze of the person is detected and the position coordinates where the person is located.
In the information processing method, the air flow generator may be selected from a plurality of air flow generators based on the position coordinates.
According to this configuration, a plurality of airflow generating devices are provided, and the airflow generating device to be controlled is selected from among them based on the calculated position coordinates of the person. Therefore, for example, by sending air toward the local area where droplets are present from the airflow generating device closest to the person who coughed or sneezed, the locally present droplets can be diffused more efficiently and quickly.
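The device-selection step described above can be sketched as a nearest-device lookup over the calculated position coordinates. The device list, identifiers, and coordinates below are hypothetical; the document specifies only that a device is selected based on the position coordinates.

```python
import math

# Hypothetical airflow generating devices and their (x, y) positions in the room, in meters.
DEVICES = [
    {"id": "air_conditioner", "pos": (0.0, 0.0)},
    {"id": "air_cleaner", "pos": (4.0, 3.0)},
]

def select_nearest_device(person_pos, devices=DEVICES):
    """Select the airflow generating device closest to the person's position coordinates."""
    return min(devices, key=lambda d: math.dist(d["pos"], person_pos))
```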
Another aspect of the present disclosure relates to a program for causing a computer to execute processing including: detecting a person in a predetermined space coughing or sneezing, acquiring an image of the predetermined space captured when the cough or the sneeze is detected, detecting a state of a mouth of the person from the image, generating a control signal for controlling at least one of a wind direction and an air volume of air sent from an airflow generating device that generates an airflow in the predetermined space, based on the state of the mouth, and outputting the generated control signal.
According to this configuration, the state of the mouth of the person at the time of coughing or sneezing of the person is recognized from the image acquired when the person is detected in the predetermined space, and the control signal for controlling at least one of the wind direction and the wind volume of the air sent from the air flow generating device that generates the air flow in the predetermined space is generated based on the recognized state of the mouth of the person.
Therefore, by generating an air flow at a local area where droplets are generated by the person's cough or sneeze, the droplets can be diffused and the concentration of the droplets can be made uniform, and thus the risk of infection in a predetermined space where a cough or sneeze is detected can be reduced.
An information processing system according to another aspect of the present disclosure includes: a camera that photographs a predetermined space; an air flow generating device which generates air flow in the predetermined space; and an information processing device that detects a person in the predetermined space coughing or sneezing, acquires an image of the predetermined space captured by the camera when the person coughs or sneezing is detected, detects a state of a mouth of the person from the image, generates a control signal for controlling at least one of a wind direction and a wind volume of air sent from the airflow generation device based on the state of the mouth, and outputs the generated control signal.
According to this configuration, the state of the mouth of the person at the time of coughing or sneezing of the person is recognized from the image acquired when the person is detected in the predetermined space, and the control signal for controlling at least one of the wind direction and the wind volume of the air sent from the air flow generating device that generates the air flow in the predetermined space is generated based on the recognized state of the mouth of the person.
Therefore, by generating an air flow at a local area where droplets are generated by the person's cough or sneeze, the droplets can be diffused and the concentration of the droplets can be made uniform, and thus the risk of infection in a predetermined space where a cough or sneeze is detected can be reduced.
Hereinafter, embodiments of the present disclosure will be described with reference to the drawings. The following embodiments are merely examples embodying the present disclosure, and do not limit the technical scope of the present disclosure.
(Embodiment 1)
Fig. 1 is a diagram showing the configuration of an airflow control system according to Embodiment 1.
The
The
The
The
The camera 11 is disposed in a predetermined space, and photographs the predetermined space. The camera 11 acquires an image of the subject person in a predetermined space. The subject person is a person staying in the space in which the
Here, the
The camera 11 is a camera for monitoring a room, is installed on a ceiling or the like so as to be able to detect the target person over a wide range, and continuously acquires a moving image of the room. The camera 11 may further include a rotating unit for sweeping the entire area of the room within a predetermined time. By providing the camera 11 with the rotating unit, the entire indoor space can be imaged with a single camera 11 even in a larger space of 20 tatami mats (one mat corresponds to about 1.62 square meters) or more.
The microphone 12 is disposed in a predetermined space, and collects sound in the predetermined space. The microphone 12 acquires the sound of the subject person in a predetermined space.
In
The processor 13 includes an image processing unit 131, a cough/sneeze detection unit 132, a person state determination unit 133, and a control
The image storage unit 141 stores the image captured by the camera 11. The camera 11 stores an image obtained by imaging a predetermined space in the image storage unit 141.
The image processing unit 131 obtains an image of the predetermined space from the image storage unit 141. The image processing unit 131 performs image processing on the acquired image and extracts human features such as the face, nose, mouth, hands, clothes, the presence or absence of a mask, and the position of the subject person in the room. The image processing unit 131 may use machine learning or deep learning for feature extraction, or may use a widely known feature extractor, such as a Haar-like feature-based detector, for face detection and the like. When extracting the features, the image processing unit 131 detects information such as the center of gravity position and area of each extracted feature, such as the mouth and face, together with the position information of the target person in the room.
The cough/sneeze detection unit 132 detects that a person located in the predetermined space has coughed or sneezed. When the subject person coughs or sneezes, the cough/sneeze detection unit 132 detects the cough or the sneeze.
The cough/sneeze detection unit 132 detects a person's cough or sneeze in the indoor space. The cough/sneeze detection unit 132 detects the cough or the sneeze of a person in the predetermined space using the sound collected by the microphone 12 and the image captured by the camera 11.
For example, the cough/sneeze detection unit 132 determines whether or not the volume of the sound collected by the microphone 12 is equal to or greater than a threshold value. When determining that the volume of the sound collected by the microphone 12 is equal to or greater than the threshold value, the cough/sneeze detection unit 132 determines that a person in the predetermined space has coughed or sneezed. As the threshold value, for example, 70dB (decibel) may be used. Since the detected sound volume varies depending on the distance between the microphone 12 and the person, the cough/sneeze detecting unit 132 may calculate the distance between the microphone 12 and the person from the image and correct the threshold value based on the calculated distance.
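A minimal sketch of this volume check follows, assuming a free-field correction of about 6 dB per doubling of distance. The document states only that the threshold is corrected based on the calculated distance; the exact correction rule and the function name are assumptions.

```python
import math

def exceeds_cough_volume(volume_db, distance_m, base_threshold_db=70.0, ref_distance_m=1.0):
    """Check whether the collected sound volume reaches the cough/sneeze threshold.

    The 70 dB base threshold is corrected for the microphone-to-person distance:
    under a free-field assumption, sound pressure level falls about 6 dB per
    doubling of distance, so the threshold is lowered for distant persons.
    """
    correction_db = 20.0 * math.log10(distance_m / ref_distance_m)
    return volume_db >= base_threshold_db - correction_db
```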
The cough/sneeze detecting unit 132 may perform spectral analysis of the sound collected by the microphone 12, and detect a cough or a sneeze based on the analysis result by an algorithm such as machine learning. In this case, since the detection can be performed using a spectrum pattern unique to coughing or sneezing, the detection accuracy is improved.
The cough/sneeze detection unit 132 detects at least one of a cough and a sneeze of a person in the predetermined space from the image. The camera 11 acquires a moving image. Therefore, the cough/sneeze detection unit 132 can detect the motion pattern of the subject person using the features extracted by the image processing unit 131. For example, immediately before coughing or sneezing, a person performs a characteristic motion such as covering the mouth with a hand or closing the eyes. Therefore, the cough/sneeze detection unit 132 can detect that a person in the predetermined space has coughed or sneezed by detecting such a characteristic motion.
The cough/sneeze detection unit 132 can use an operation pattern detected from the image from the camera 11. For example, the cough/sneeze detection unit 132 may determine the motion immediately before the cough or the sneeze using a classifier that performs machine learning on the characteristic motion.
More simply, the cough/sneeze detection unit 132 may calculate the distance between the center of gravity position of the face and the center of gravity position of the hand extracted from the image, and determine whether or not the distance between the center of gravity position of the face and the center of gravity position of the hand is equal to or less than a threshold value.
Fig. 2 is a diagram for explaining the detection of a cough or a sneeze based on the distance between the face and the hand of the subject person.
The cough/sneeze detection unit 132 determines whether or not the distance between the position of the face of the person included in the image and the position of one hand of the person included in the image is equal to or less than a threshold value, and detects a cough or sneeze when the distance is determined to be equal to or less than the threshold value.
First, the image processing section 131 extracts a face region FR indicating the face of the subject person, a right-hand region RH indicating the right hand of the subject person, and a left-hand region LH indicating the left hand of the subject person from the image G1. At this time, the extracted face region FR, right-hand region RH, and left-hand region LH are rectangular in shape. The image processing unit 131 calculates the center of gravity position of the face region FR, the center of gravity position of the right-hand region RH, and the center of gravity position of the left-hand region LH.
The cough/sneeze detection unit 132 determines whether or not the width fw of the face region FR, the distance r1 between the center of gravity position of the face region FR and the center of gravity position of the right-hand region RH, and the distance r2 between the center of gravity position of the face region FR and the center of gravity position of the left-hand region LH satisfy the following expression (1).
min(r1/fw,r2/fw)<0.5……(1)
In the above equation (1), min() is a function that returns the minimum of its arguments. That is, the cough/sneeze detection unit 132 compares the smaller of r1/fw and r2/fw with 0.5.
When it is determined that the above expression (1) is satisfied, the cough/sneeze detection unit 132 determines that a person in the predetermined space has coughed or sneezed. On the other hand, when it is determined that the above expression (1) is not satisfied, the cough/sneeze detection unit 132 determines that the person in the predetermined space has not coughed and the person in the predetermined space has not sneezed.
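Expression (1) amounts to the following check, a direct transcription of the formula; the function and parameter names are illustrative.

```python
def hand_near_face(fw, r1, r2, threshold=0.5):
    """Expression (1): min(r1/fw, r2/fw) < 0.5.

    fw -- width of the face region FR (pixels)
    r1 -- distance between the centers of gravity of the face region and the right-hand region
    r2 -- distance between the centers of gravity of the face region and the left-hand region
    Returns True when either hand is close enough to the face, normalized by the
    face width, to indicate the motion accompanying a cough or a sneeze.
    """
    return min(r1 / fw, r2 / fw) < threshold
```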
The cough/sneeze detection unit 132 may determine whether or not the area of the mouth extracted from the image is equal to or smaller than a threshold value.
Fig. 3 is a diagram for explaining the detection of a cough or a sneeze based on the area of the mouth of the subject person.
The cough/sneeze detection unit 132 may determine whether or not the area of the mouth of the person included in the image is equal to or smaller than a threshold value, and may detect a cough or sneeze when determining that the area is equal to or smaller than the threshold value.
First, the image processing unit 131 extracts a mouth region MR indicating the mouth of the subject person from the image G2. At this time, the extracted mouth region MR has a rectangular shape. Then, the image processing unit 131 calculates the area S(t) of the mouth region MR.
The cough/sneeze detection unit 132 determines whether or not the area S(t) of the mouth region MR is equal to or less than a threshold value. Specifically, the cough/sneeze detection unit 132 determines whether or not the area S(t) of the mouth region MR and the geometric average S0 of the time-series values of the area of the mouth region MR satisfy the following expression (2).
S(t)/S0 < 0.2 …… (2)
When it is determined that the above expression (2) is satisfied, the cough/sneeze detection unit 132 determines that a person in the predetermined space has coughed or sneezed. On the other hand, when it is determined that the above expression (2) is not satisfied, the cough/sneeze detection unit 132 determines that the person in the predetermined space has not coughed and the person in the predetermined space has not sneezed.
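Expression (2) can be sketched in the same style. This is a minimal illustration; the history of mouth-region areas passed in is a hypothetical stand-in for the time series the unit accumulates, and S0 is computed as the geometric mean of that history.

```python
import math

def detect_cough_by_mouth_area(area_history, current_area, ratio_threshold=0.2):
    """Expression (2): S(t)/S0 < 0.2.

    S0 is the geometric mean of the recent time-series of mouth-region
    areas; a sudden drop of the current area S(t) below 20% of S0
    (the mouth disappearing behind a hand, mask, etc.) is treated as a
    cough or sneeze.
    """
    s0 = math.exp(sum(math.log(a) for a in area_history) / len(area_history))
    return current_area / s0 < ratio_threshold
```

The geometric mean is less sensitive than the arithmetic mean to brief spikes in the area history, which suits a baseline for ratio tests.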
Fig. 4 is a diagram showing an example of a time-series change in the area of the mouth of the subject person or the distance between the face and the hand of the subject person.
As shown in fig. 4, the area S(t) of the mouth of the subject person or the distance r(t) between the face and the hand of the subject person becomes equal to or less than the threshold value at the time when the subject person coughs or sneezes.
In addition, the detection method may be switched according to the state of the subject person. For example, since the mouth of a person wearing a mask is covered with the mask, the detection may be performed using a classifier that has been machine-learned or using the distance between the hand and the face. The memory 14 may store the extracted features or the detected operation pattern.
In addition, when the features of the person are extracted, the detected area of the mouth or the detected distance between the hand and the mouth changes according to the distance between the camera 11 and the person. Therefore, the cough/sneeze detection unit 132 may calculate the area of the mouth or the distance between the hand and the mouth using lengths normalized by, for example, the width of the face. By using such normalized lengths, the cough/sneeze detection unit 132 can determine a cough or a sneeze regardless of the positions of the camera 11 and the subject person. Further, a plurality of grid patterns whose sizes and positions are known may be arranged in the predetermined space, and the image processing unit 131 may perform camera calibration based on the sizes and positions of the grid patterns included in the image. By performing camera calibration, the absolute position of the subject person in the predetermined space can be determined more accurately.
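The scale invariance gained by normalizing with the face width can be demonstrated with a small sketch (function and inputs are hypothetical; the same person imaged at twice the distance appears at half the pixel scale, yet the normalized distance is unchanged):

```python
def normalized_distance(face_center, hand_center, face_width):
    """Hand-to-face distance in units of face widths.

    Because both the pixel distance and the face width scale with the
    camera-to-person distance, their ratio does not, so one threshold
    works at any distance from the camera.
    """
    dx = face_center[0] - hand_center[0]
    dy = face_center[1] - hand_center[1]
    return (dx * dx + dy * dy) ** 0.5 / face_width
```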
In order to improve the accuracy of detecting a cough or a sneeze, the cough/sneeze detection unit 132 may detect that a person located in the predetermined space has coughed or sneezed based on both images and sounds. For example, the cough/sneeze detection unit 132 may detect that the subject person has coughed or sneezed when it is determined that the volume of the sound collected by the microphone 12 is equal to or greater than a threshold value and that the distance between the position of the face of the person included in the image captured by the camera 11 and the position of one hand of the person included in the image is equal to or less than a threshold value. When either sound or images alone are used to detect a cough or a sneeze, erroneous detection is possible; combining images and sound improves the detection accuracy. The memory 14 may store the detection result of the cough or sneeze.
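The AND-combination of the two cues can be sketched as follows. The function name and the default thresholds are hypothetical; in the document the thresholds would be tuned for the microphone 12 and camera 11.

```python
def detect_cough_multimodal(volume, hand_face_distance,
                            volume_threshold=60.0, distance_threshold=0.5):
    """Multimodal detection: a cough/sneeze is reported only when BOTH
    the sound volume is at or above its threshold AND the normalized
    hand-to-face distance is at or below its threshold.

    Requiring both cues suppresses false positives that either cue
    alone would produce (e.g. loud speech, or a hand near the face
    without any sound).
    """
    return volume >= volume_threshold and hand_face_distance <= distance_threshold
```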
The person state determination unit 133 recognizes the state of the mouth of the person at the time of coughing or sneezing from the image acquired when the person is detected to have coughed or sneezed.
The person state determination unit 133 recognizes any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief or clothes (for example, sleeves of a jacket), and a state in which the mouth of the person is covered with a mask. The person state determination unit 133 recognizes the face orientation of the person at the time of coughing or sneezing from the image acquired when the person is detected to have coughed or sneezed. The person state determination unit 133 calculates the position coordinates of the person in the predetermined space from the image acquired when the person is detected to cough or sneeze.
The person state determination unit 133 recognizes the state of the subject person by referring to the image at the time when the cough or sneeze is detected by the cough/sneeze detection unit 132. The state of the mouth of the subject person refers to any one of a state in which a part of the face such as the mouth is covered with a hand at the time of coughing or sneezing, a state in which a part of the face such as the mouth is covered with a handkerchief or a jacket sleeve, a state in which the face is not covered at all, and a state in which a part of the face such as the mouth is covered with a mask.
For example, when the subject person coughs or sneezes with the mouth covered with a hand, large droplets adhere to the hand and therefore contribute little to droplet infection or airborne infection, whereas particles with small particle diameters, such as small droplets or droplet nuclei, tend to leak through the gaps between the fingers. However, since the covering hand presents a high pressure loss, these small particles remain localized around the subject person and are only gradually removed by indoor ventilation.
In addition, when the subject person coughs or sneezes while wearing a mask, most of the droplets are trapped by the filter layer of the mask. However, depending on how the mask is worn, fine particles with a particle diameter of about 0.3 μm, which are difficult for the filter layer to trap, tend to leak through the gaps of the mask.
Therefore, when the subject person coughs or sneezes with the mouth covered with a hand or while wearing a mask, viruses may be locally present around the subject person, and the locally present viruses need to be diffused rapidly in order to prevent airborne infection. Since the position of the subject person can be recognized by image processing, when the subject person coughs or sneezes with the mouth covered with a hand or while wearing a mask, an airflow can be directed toward the recognized position of the subject person.
When the subject person coughs or sneezes without the mouth covered at all, droplets or droplet nuclei are scattered at high speed into the space by the cough airflow. It is statistically known that the initial velocity of a cough is about 10 m/s and that the cough lasts for about 0.5 s, and 10 m/s is in fact commonly used as the initial velocity of a cough in airflow simulations.
Even when the subject person coughs or sneezes without the mouth covered, the direction in which the droplets fly changes depending on whether the face is facing forward or downward. When the subject person coughs or sneezes toward the front with the mouth not covered, the droplets or droplet nuclei reach 1 to 1.5 m ahead within about 5 to 10 seconds and then rapidly decelerate, as described above. In addition, droplets with large particle diameters decelerate less owing to their inertia and therefore reach farther than small droplets. When the subject person coughs or sneezes downward with the mouth not covered, the droplets or droplet nuclei stay at a low position in the room.
Therefore, the person state determination unit 133 determines the face orientation of the subject person. By controlling the airflow according to the face orientation, airborne infection can be prevented efficiently.
In this way, the position where the droplets stay differs depending on the state of the mouth of the person and the orientation of the face of the person when the person coughs or sneezes.
The person state determination unit 133 classifies the state of the mouth of the person into a plurality of patterns by image processing of the images before and after the time when the cough or sneeze of the subject person is detected. For example, the person state determination unit 133 performs the pattern classification based on an algorithm that has been machine-learned. By using a machine-learned algorithm, the patterns can be classified with high accuracy.
Further, as a simple implementation, the person state determination unit 133 may determine the state of the mouth of the person based on an image processing algorithm. As the image processing algorithm, for example, a Haar-like feature detector can be used to detect the face, the mouth, and the hands, and a mask, a handkerchief, and a jacket sleeve can be detected by color extraction. By using such a simple image processing algorithm, the supervised-learning workflow required for machine learning becomes unnecessary, so the system can be installed easily.
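The color-extraction half of this simple approach can be sketched as follows. This is only an illustration under assumptions: the hue values, hue range, and fraction threshold are hypothetical, and the Haar-like detection of the face/mouth region itself (typically done with an off-the-shelf cascade classifier) is omitted.

```python
def covered_by_fabric(hue_values, hue_range, min_fraction=0.5):
    """Simple color extraction over a detected mouth region.

    hue_values: per-pixel hue samples from the mouth region.
    hue_range: (low, high) hue interval expected for the fabric
    (e.g. a mask, handkerchief, or jacket sleeve).
    Returns True when at least min_fraction of the pixels fall in the
    expected fabric hue range, i.e. the mouth is judged covered.
    """
    low, high = hue_range
    in_range = sum(1 for h in hue_values if low <= h <= high)
    return in_range / len(hue_values) >= min_fraction
```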
In this way, the airflow control for suppressing airborne infection is performed after classifying the state of the subject person. In this case, the optimal control method differs depending on the type, number, and positional relationship of the airflow generation devices arranged in the predetermined space.
The control unit 134 determines the control content with reference to an airflow control table stored in the memory 14. The airflow control table associates the state of the mouth of the subject person and the orientation of the face of the subject person with a control content, and is prepared for each configuration of the airflow generation devices arranged in the predetermined space.
Fig. 5 is a diagram showing an example of the 1st airflow control table in the case where the airflow control system includes one airflow generation device and the airflow generation device is an air conditioner. The air conditioner is disposed on a wall surface near the ceiling in the predetermined space and sends air in a direction below the horizontal.
First, as shown in fig. 5, when the mouth is not covered and the face is facing forward, the control content for controlling the wind direction such that the air is sent 1 meter ahead of the face is associated.
That is, when a cough or sneeze is detected in a state where a part of the face such as the mouth is not covered and the face is facing the front, the droplets generated by the subject person reach 1 to 1.5 m ahead of the face of the subject person, in the direction in which the face is facing, within about 5 seconds. The droplets with small particle diameters are then decelerated by air resistance and temporarily remain localized in that area. An airflow is therefore generated toward that area to diffuse the localized droplets.
Therefore, when the mouth is not covered and the face is oriented in the front direction, the control unit 134 determines the control content for controlling the wind direction so that air is sent to a point 1 meter ahead of the face of the subject person.
Fig. 6 is a diagram showing an example of a simulation result of the wind speed distribution in a case where, in a space in which the air conditioner and the air cleaner are arranged, the air cleaner is not driven but the air conditioner is driven and an airflow is generated 30 degrees below the horizontal direction. The wind speed distribution shown in fig. 6 represents the result of a simulation based on CFD (Computational Fluid Dynamics).
Next, as shown in fig. 5, when the mouth is not covered and the face is directed downward, a control content for controlling the wind direction so as to send air 90 degrees downward is associated.
That is, when a part of the face such as the mouth is not covered at the time a cough or sneeze is detected and the face is directed downward, the droplets are locally present at a low position in the room. In this case, a person with a height of 150 cm or more, such as an average adult, is less likely to be infected through the air, whereas a person with a relatively low height, such as an elementary school child, or a person with low resistance is more likely to be infected. Since the air conditioner is usually installed near the ceiling of the room, the wind direction can be controlled to 90 degrees downward, and the droplets locally present at the low position can thereby be diffused.
Therefore, in a state where the mouth is not covered and the face is oriented downward, the control unit 134 determines the control content for controlling the wind direction so that air is sent 90 degrees downward.
Fig. 7 is a diagram showing an example of a simulation result of the wind speed distribution in a case where, in a space in which the air conditioner and the air cleaner are arranged, the air cleaner is not driven but the air conditioner is driven and an airflow is generated 90 degrees downward from the horizontal direction. The wind speed distribution shown in fig. 7 represents the result of a CFD-based simulation.
Next, as shown in fig. 5, in the case where the mouth is covered with the hand, a control content for controlling the wind direction so as to send air in the direction of the subject person is associated.
That is, when a part of the face such as the mouth of the subject person is covered with a hand at the time a cough or sneeze is detected, the scattering of droplets can be suppressed, but droplets are locally present around the subject person. Air is therefore sent in the direction of the subject person to quickly diffuse the locally present droplets.
Therefore, in the case where the mouth is covered with the hand, the control unit 134 determines the control content for controlling the wind direction so that air is sent in the direction of the subject person.
Next, as shown in fig. 5, in a state where the mouth is covered with the handkerchief or the jacket sleeve, a control content for changing the operation mode to the powerful operation is associated.
That is, when a part of the face such as the mouth of the subject person is covered with a handkerchief or a jacket sleeve at the time a cough or sneeze is detected, droplets may adhere to the handkerchief or the jacket sleeve. In this case, although the scattering of droplets can be suppressed, some of the viruses attached to the handkerchief or the jacket sleeve spread into the space. The operation mode is therefore changed to the powerful operation so that the spreading droplets are removed efficiently.
Therefore, in a state where the mouth is covered with the handkerchief or the jacket sleeve, the control unit 134 determines the control content for changing the operation mode to the powerful operation.
Next, as shown in fig. 5, in the case where the mouth is covered with the mask, a control content for controlling the wind direction so as to send air in the direction of the subject person is associated.
That is, when the subject person is wearing a mask at the time a cough or sneeze is detected, many droplets are trapped by the filter layer of the mask, while fine particles with a particle diameter of about 0.3 μm, which are difficult for the filter layer to trap, leak from the mask. Alternatively, when the mask is not worn correctly, fine particles may leak through the gaps of the mask. That is, the leaked droplets are locally present around the subject person. Air is therefore sent in the direction of the subject person to diffuse the locally present droplets.
Therefore, in a state where the mouth is covered with the mask, the control unit 134 determines the control content for controlling the wind direction so that air is sent in the direction of the subject person.
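The 1st airflow control table (Fig. 5) can be encoded as a simple lookup. This is a hypothetical encoding: the state labels and the dictionary/function names are illustrative stand-ins, and the face orientation only matters when the mouth is uncovered, as in the table.

```python
# Hypothetical encoding of the rows of the 1st airflow control table (Fig. 5)
AIRFLOW_TABLE_AIR_CONDITIONER = {
    ("uncovered", "front"): "send air 1 m ahead of the face",
    ("uncovered", "down"): "send air 90 degrees downward",
    ("hand", None): "send air toward the subject person",
    ("handkerchief_or_sleeve", None): "switch to powerful operation",
    ("mask", None): "send air toward the subject person",
}

def lookup_control_content(mouth_state, face_orientation=None):
    """Return the control content for a recognized mouth state.

    Face orientation is consulted only for the uncovered-mouth rows,
    mirroring the structure of the table.
    """
    orientation = face_orientation if mouth_state == "uncovered" else None
    return AIRFLOW_TABLE_AIR_CONDITIONER[(mouth_state, orientation)]
```

A table-driven design like this keeps the recognition logic separate from the control policy, so a different device configuration only swaps the table.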
Fig. 8 is a diagram showing an example of the 2nd airflow control table in the case where the airflow control system includes one airflow generation device and the airflow generation device is an air cleaner. The air cleaner is fixedly placed on the floor in the predetermined space and sends cleaned air from its upper part in a direction above the horizontal.
First, as shown in fig. 8, when the mouth is not covered and the face is facing forward, the control content for controlling the wind direction such that the air is sent 1 meter ahead of the face is associated.
That is, when a part of the face such as the mouth is not covered at the time a cough or sneeze is detected and the face is facing the front, droplets with small particle diameters are locally present 1 to 1.5 m ahead in the direction in which the face of the subject person faces. Air is therefore sent toward that area to diffuse the locally present droplets.
Therefore, when the mouth is not covered and the face is oriented in the front direction, the control unit 134 determines the control content for controlling the wind direction so that air is sent to a point 1 meter ahead of the face of the subject person.
Fig. 9 is a diagram showing an example of a simulation result of the wind speed distribution in a case where, in a space in which the air conditioner and the air cleaner are arranged, the air conditioner is not driven but the air cleaner is driven and an airflow is generated 90 degrees upward from the horizontal direction. Fig. 10 is a diagram showing an example of a simulation result of the wind speed distribution in a case where, in the same space, the air conditioner is not driven but the air cleaner is driven and an airflow is generated 45 degrees upward from the horizontal direction. The wind speed distributions shown in fig. 9 and fig. 10 represent the results of CFD-based simulations.
Next, as shown in fig. 8, in a state where the mouth is not covered and the face is directed downward, a control content for changing the operation mode to the powerful operation is associated.
That is, when a cough or sneeze is detected while a part of the face such as the mouth is not covered and the face is facing downward, the droplets are locally present at a low position in the room. The air cleaner is fixedly placed on the floor in the predetermined space, and in many air cleaners the direction of airflow control by the louvers is limited to horizontal or above the horizontal.
Therefore, in the case where the mouth is not covered and the face is oriented downward, the control unit 134 determines the control content for changing the operation mode of the air cleaner to the powerful operation.
Next, as shown in fig. 8, in the case where the mouth is covered with the hand, a control content for controlling the wind direction so as to send air in the direction of the subject person is associated. As shown in fig. 8, in the case where the mouth is covered with the handkerchief or the jacket sleeve, a control content for changing the operation mode to the powerful operation is associated. As shown in fig. 8, in the case where the mouth is covered with the mask, a control content for controlling the wind direction so as to send air in the direction of the subject person is associated.
Note that, when a part of the face such as the mouth is covered with a hand, a handkerchief, or a jacket sleeve at the time a cough or sneeze is detected, or when the subject person is wearing a mask, the control contents are the same as those in the case where the airflow control system includes one air conditioner, and therefore their description is omitted.
Fig. 11 is a diagram showing an example of the 3rd airflow control table in the case where the airflow control system includes two airflow generation devices, which are an air conditioner and an air cleaner. The air conditioner is disposed on a wall surface near the ceiling in the predetermined space and sends air in a direction below the horizontal. The air cleaner is fixedly placed on the floor in the predetermined space and sends cleaned air from its upper part in a direction above the horizontal.
In this case, the optimal control content is selected from among the control contents of the airflow control tables described so far, in consideration of the distance between the subject person and each airflow generation device in addition to the state of the subject person.
First, as shown in fig. 11, when the mouth is not covered and the face is facing forward, a control content for controlling the wind direction such that air is sent from the airflow generation device closest to the subject person to the front 1 meter of the face direction is associated.
That is, when a cough or sneeze is detected in a state where a part of the face such as the mouth is not covered and the face is facing the front, the air flow generator closest to the subject is selected from the plurality of air flow generators, and the air flow direction is controlled so that the air is sent 1 meter ahead of the front of the face of the subject by the louvers of the selected air flow generator or the like. In this way, air infection can be suppressed at an early stage.
In this case, the control unit 134 specifies the airflow generation device closest to the subject person based on the position of the subject person calculated from the image and the positions of the airflow generation devices in the predetermined space.
Therefore, when the mouth is not covered and the face is facing forward, the control unit 134 determines the control content for controlling the wind direction so that air is sent from the airflow generation device closest to the subject person to a point 1 meter ahead of the face of the subject person.
Next, as shown in fig. 11, in a state where the mouth is not covered and the face is directed downward, a control content for controlling the wind direction so that air is sent downward by 90 degrees from the airflow generation device that is an air conditioning apparatus is associated.
That is, when a part of the face such as the mouth is not covered and the face is directed downward when a cough or sneeze is detected, an air flow generator that is an air conditioner is selected from the plurality of air flow generators, and the air flow is controlled to be directed vertically downward by the louvers of the selected air flow generator. This makes it possible to diffuse droplets locally present at a low position in the room.
Therefore, in a case where the mouth is not covered and the face is directed downward, the control unit 134 determines the control content for controlling the wind direction so that air is sent 90 degrees downward from the airflow generation device that is the air conditioner.
In the case where no air conditioner is included in the plurality of airflow generation devices and all of the plurality of airflow generation devices are air cleaners, the control unit 134 may determine the control content for changing the operation mode of the air cleaner closest to the subject person to the powerful operation, as in the 2nd airflow control table.
Next, as shown in fig. 11, in a case where the mouth is covered with the hand or in a case where the mouth is covered with the mask, a control content for controlling the wind direction so that the air is sent in the direction of the subject person from the airflow generation device closest to the subject person is associated.
That is, when the subject person covers a part of the face such as the mouth with a hand at the time a cough or sneeze is detected, or when the subject person is wearing a mask, the droplets are locally present around the subject person. The airflow generation device closest to the subject person is then selected from the plurality of airflow generation devices, and the wind direction is controlled by the louver of the selected airflow generation device so that air is sent in the direction of the subject person. This makes it possible to quickly diffuse the droplets locally present around the subject person.
Therefore, in a case where the mouth is covered with the hand or in a case where the mouth is covered with the mask, the control unit 134 determines the control content for controlling the wind direction so that air is sent from the airflow generation device closest to the subject person in the direction of the subject person.
Next, as shown in fig. 11, in a state where the mouth is covered with the handkerchief or the jacket sleeve, a control content for changing the operation mode of the airflow generation device closest to the subject person to the powerful operation is associated.
That is, when the subject person covers a part of the face such as the mouth with a handkerchief or a jacket sleeve at the time a cough or sneeze is detected, the operation mode of the airflow generation device closest to the subject person is changed to the powerful operation. This enables the droplets to be removed efficiently.
Therefore, when the mouth is covered with the handkerchief or the jacket sleeve, the control unit 134 determines the control content for changing the operation mode of the airflow generation device closest to the subject person to the powerful operation.
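The nearest-device selection used throughout the 3rd airflow control table can be sketched as follows (a minimal illustration; the device identifiers and 2-D coordinates are hypothetical, and the subject's position is assumed to come from the image processing described earlier):

```python
def select_nearest_device(subject_position, device_positions):
    """Return the id of the airflow generation device nearest the subject.

    subject_position: (x, y) of the subject person in the room.
    device_positions: dict mapping device id -> (x, y).
    Squared distance is sufficient for comparison, so the square root
    is skipped.
    """
    def squared_distance(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    return min(device_positions,
               key=lambda dev: squared_distance(subject_position,
                                                device_positions[dev]))
```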
The subject person may move around in the room, and the airflow generation device closest to the subject person may therefore change over time. In this case, the control unit 134 may track the position of the subject person and update which airflow generation device is closest to the subject person.
Next, the configuration of the airflow generation device will be described. The airflow generation device includes a communication unit 21, a processor 22, a memory 23, an airflow generating unit 24, and an airflow direction changing unit 25.
The communication unit 21 communicates with the airflow control device and receives the control signal.
Further, the communication unit 21 may transmit the position of the airflow generation device in the predetermined space to the airflow control device.
The processor 22 includes an airflow control unit 221. The airflow control unit 221 controls the airflow generating unit 24 and the airflow direction changing unit 25 in accordance with the control signal received by the communication unit 21.
The memory 23 is, for example, a semiconductor memory, and stores various kinds of information. When the operation mode of the airflow generation device is changed in accordance with the control signal, the memory 23 stores the control parameters before the change.
The airflow generating unit 24 is, for example, a fan motor, and sends air into the predetermined space.
The airflow direction changing unit 25 controls the direction of the airflow generated by the airflow generating unit 24. The airflow direction changing unit 25 is, for example, a louver, and changes the direction of the air sent from the airflow generating unit 24 by adjusting the orientation of the louver.
Next, the operation of the airflow control device will be described.
Fig. 12 is a 1st flowchart for explaining the operation of the airflow control device.
First, in step S1, the processor 13 determines whether or not the power supply of the airflow control device is turned on. When it is determined that the power supply is not turned on (no in step S1), the determination process in step S1 is repeated.
When it is determined that the power supply of the airflow control device is turned on (yes in step S1), in step S2, the camera 11 captures an image of the predetermined space, and the captured image is stored in the image storage unit 141.
Next, in step S3, the image processing unit 131 acquires an image from the image storage unit 141.
Next, in step S4, the image processing unit 131 extracts the feature of the subject person from the image. Here, the feature of the subject person refers to, for example, the face, eyes, mouth, right hand, left hand, clothes, and mask of the subject person. Further, the image processing unit 131 also detects the center of gravity position of each feature.
Next, in step S5, the cough/sneeze detection unit 132 acquires a sound from the microphone 12.
Next, in step S6, the cough/sneeze detection unit 132 determines whether or not the subject person in the predetermined space has coughed or sneezed. Here, the cough/sneeze detection unit 132 calculates the 1st distance between the center of gravity position of the face extracted from the image and the center of gravity position of the right hand, and calculates the 2nd distance between the center of gravity position of the face extracted from the image and the center of gravity position of the left hand. The cough/sneeze detection unit 132 determines whether the shorter of the 1st distance and the 2nd distance is equal to or less than a threshold value. When determining that the shorter of the 1st distance and the 2nd distance is equal to or less than the threshold value, the cough/sneeze detection unit 132 determines whether or not the volume of the sound acquired from the microphone 12 is equal to or greater than a threshold value. The cough/sneeze detection unit 132 determines that the subject person in the predetermined space has coughed or sneezed when the shorter of the 1st distance and the 2nd distance is determined to be equal to or less than the threshold value and the sound volume is equal to or greater than the threshold value. When it is determined that the shorter of the 1st distance and the 2nd distance is longer than the threshold value or that the sound volume is smaller than the threshold value, the cough/sneeze detection unit 132 determines that neither a cough nor a sneeze of the subject person in the predetermined space has been detected.
Here, if it is determined that neither a cough nor a sneeze of the subject person in the predetermined space has been detected (no in step S6), the process returns to step S1.
On the other hand, when it is determined that the subject person in the predetermined space has coughed or sneezed (yes in step S6), in step S7, the person state determination unit 133 acquires, from the image storage unit 141, the image at the time when the subject person in the predetermined space was detected to have coughed or sneezed.
Next, in step S8, the person state determination unit 133 identifies the state of the mouth of the subject person at the time the subject person coughed or sneezed. Here, based on the image at the time when the subject person in the predetermined space was detected to have coughed or sneezed, the person state determination unit 133 identifies which of the following states the mouth of the subject person is in: a state in which the mouth is not covered, a state in which the mouth is covered with a hand, a state in which the mouth is covered with a handkerchief, a state in which the mouth is covered with a jacket sleeve, and a state in which the mouth is covered with a mask.
The person state determination unit 133 may recognize the state of the mouth of the subject person from not only the image at the time point when the cough or sneeze is detected but also the images before and after the time point when the cough or sneeze is detected.
Next, in step S9, the person state determination unit 133 recognizes the orientation of the face of the subject person at the time of the cough or sneeze, based on the image at the time when the subject person in the predetermined space was detected to have coughed or sneezed. At this time, the person state determination unit 133 determines whether the face of the subject person was facing forward or downward when the subject person coughed or sneezed.
Next, in step S10, the person state determination unit 133 recognizes the position of the subject person in the predetermined space at the time of the cough or sneeze, based on the image at the time when the subject person in the predetermined space was detected to have coughed or sneezed.
Next, in step S11, the control unit 134 acquires information on the airflow generation devices arranged in the predetermined space.
Next, in step S12, the control unit 134 determines whether or not a plurality of airflow generation devices exist in the predetermined space. When it is determined that a plurality of airflow generation devices do not exist (no in step S12), in step S13, the control unit 134 determines whether or not the type of the airflow generation device is an air conditioner.
Here, when it is determined that the type of the airflow generation device is an air conditioner (yes in step S13), in step S14, the control unit 134 determines the control content with reference to the 1st airflow control table.
On the other hand, when it is determined that the type of the airflow generation device is not an air conditioner, that is, when it is determined that the type of the airflow generation device is an air cleaner (no in step S13), in step S15, the control unit 134 determines the control content with reference to the 2nd airflow control table.
Further, when it is determined in step S12 that a plurality of airflow generation devices exist in the predetermined space (yes in step S12), in step S16, the control unit 134 specifies the positional relationship between the subject person and each of the airflow generation devices.
Next, in step S17, the control unit 134 determines the control content with reference to the 3rd airflow control table.
Next, in step S18, the control unit 134 generates a control signal for controlling at least one of the wind direction and the air volume of the air sent from the airflow generation device, in accordance with the determined control content.
In addition, when the control content for controlling the wind direction so as to send air 90 degrees downward is determined, the control unit 134 generates a control signal for directing the wind direction 90 degrees downward.
When the control content for controlling the wind direction so that air is sent from the airflow generation device closest to the subject person to a point 1 meter ahead of the face is determined, the control unit 134 generates a control signal addressed to the airflow generation device closest to the subject person for directing the wind toward that point.
When the control content for controlling the wind direction so that air is sent from the airflow generation device closest to the subject person in the direction of the subject person is determined, the control unit 134 generates a control signal addressed to the airflow generation device closest to the subject person for directing the wind toward the subject person.
When the control content for changing the operation mode of the airflow generation device closest to the subject person to the powerful operation is determined, the control unit 134 generates a control signal addressed to the airflow generation device closest to the subject person for changing the operation mode to the powerful operation.
Next, in step S19, the communication unit 15 transmits the control signal generated by the control unit 134 to the airflow generation device.
The control signal may include a change duration indicating the time during which the control content of the airflow generation device is changed.
The person state determination unit 133 may identify, based on the image at the time point when the subject person in the predetermined space is detected to cough or sneeze, which of a state in which the mouth of the subject person is not covered, a state in which the mouth of the subject person is covered with the hand, and a state in which the mouth of the subject person is covered with the mask. The person state determination unit 133 may identify, based on the image at the time point when the subject person in the predetermined space is detected to cough or sneeze, which of the state in which the mouth of the subject person is not covered, the state in which the mouth of the subject person is covered with the hand, the state in which the mouth of the subject person is covered with the handkerchief, and the state in which the mouth of the subject person is covered with the mask.
Next, the operation of the airflow generation device will be described.
Fig. 14 is a flowchart for explaining the operation of the airflow generation device.
First, in step S21, the processor 22 determines whether or not the power supply of the airflow generation device is turned on. When it is determined that the power supply is not turned on (no in step S21), the determination process in step S21 is repeated.
When it is determined that the power supply of the airflow generation device is turned on (yes in step S21), in step S22, the airflow control unit 221 determines whether or not the communication unit 21 has received a control signal. If it is determined that no control signal has been received (no in step S22), the determination process in step S22 is repeated.
On the other hand, when determining that the control signal has been received (yes in step S22), in step S23, the airflow control unit 221 stores the current control parameters in the memory 23. The control parameters include, for example, the operation mode, the set temperature, the wind direction, and the air volume.
Next, in step S24, the airflow control unit 221 controls the airflow generated from the airflow generating unit 24 based on the control signal received by the communication unit 21. That is, the airflow control unit 221 instructs the airflow generating unit 24 to send air at the air volume indicated by the control signal, and instructs the airflow direction changing unit 25 to change the wind direction to that indicated by the control signal.
Next, in step S25, the airflow control unit 221 determines whether or not the change duration included in the control signal has elapsed. If it is determined that the change duration has not elapsed (no in step S25), the determination process in step S25 is repeatedly executed.
On the other hand, when determining that the change duration has elapsed (yes in step S25), in step S26, the airflow control unit 221 reads the control parameter stored in the memory 23.
Next, in step S27, the airflow control unit 221 changes the control parameters back to the read control parameters, returning the airflow generation device to the state it was in before the control signal was applied.
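For illustration only, the save/apply/restore flow of steps S23 to S27 can be sketched in Python as follows. The class and field names are assumptions, not part of the disclosure, and the change-duration wait of step S25 is replaced by an explicit call to `restore()`:

```python
from dataclasses import dataclass, replace

@dataclass
class ControlParams:
    mode: str        # operation mode
    set_temp: float  # set temperature
    direction: float # wind direction (degrees, placeholder unit)
    volume: int      # air volume level

class AirflowController:
    """Sketch of steps S23-S27: save parameters, apply signal, restore."""

    def __init__(self, params: ControlParams):
        self.params = params
        self._saved = None

    def apply_control_signal(self, direction: float, volume: int):
        # S23: store the current control parameters
        self._saved = replace(self.params)
        # S24: switch to the wind direction and air volume from the signal
        self.params = replace(self.params, direction=direction, volume=volume)

    def restore(self):
        # S26/S27: after the change duration elapses, read back the saved
        # parameters and return to the previous operating state
        if self._saved is not None:
            self.params = self._saved
            self._saved = None
```

A caller would invoke `apply_control_signal()` on reception of a control signal and `restore()` once the change duration has elapsed.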
In this way, the state of the mouth of the person at the time of the cough or sneeze is recognized from the image acquired when the person is detected to have coughed or sneezed in the predetermined space, and a control signal for controlling at least one of the wind direction and the air volume of the air sent from the airflow generation device that generates an airflow in the predetermined space is generated based on the recognized state of the mouth. By generating an airflow at the local area where droplets are produced by the person's cough or sneeze, the droplets can be diffused and their concentration made uniform, so the risk of infection in the predetermined space where the cough or sneeze was detected can be reduced.
(Embodiment 2)
In Embodiment 2, a cough or a sneeze of the person is detected from the image captured by the camera, without using sound.
Fig. 15 is a diagram showing the configuration of an airflow control system according to Embodiment 2.
The airflow control device 1A controls the airflow in a predetermined space. The airflow control device 1A is disposed on a wall or a ceiling in the predetermined space, and is connected to the airflow generation device so as to be able to communicate with it.
The airflow control device 1A includes a camera 11, a processor 13A, a memory 14, and a communication unit 15.
The processor 13A includes an image processing unit 131, a cough/sneeze detecting unit 132A, a person state determining unit 133, and a control signal generating unit.
The cough/sneeze detection unit 132A detects, from the image, that a person in the predetermined space has coughed or sneezed. In Embodiment 2, the detection is based on the positional relationship between the face and the hands of the person in the image.
That is, the cough/sneeze detecting unit 132A determines whether or not the distance between the position of the face of the person and the position of one of the person's hands in the image is equal to or less than a threshold value, and detects a cough or a sneeze when the distance is equal to or less than the threshold value. More specifically, the cough/sneeze detection unit 132A calculates the 1st distance between the center-of-gravity position of the face extracted from the image and that of the right hand, and the 2nd distance between the center-of-gravity position of the face and that of the left hand. The cough/sneeze detection unit 132A then determines whether the shorter of the 1st distance and the 2nd distance is equal to or less than the threshold value. If so, the cough/sneeze detection unit 132A determines that the subject person in the predetermined space has coughed or sneezed; otherwise, it determines that no cough or sneeze of the subject person has been detected.
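The distance test described above can be sketched as follows. The coordinate format (center-of-gravity positions as `(x, y)` pairs) and the threshold value are placeholder assumptions, not values from the disclosure:

```python
import math

def detect_cough_or_sneeze(face, right_hand, left_hand, threshold=0.25):
    """Return True when the shorter face-to-hand distance is at or below
    the threshold, which the text treats as a cough or sneeze.

    face, right_hand, left_hand: (x, y) center-of-gravity positions
    extracted from the image; threshold is an arbitrary placeholder."""
    d1 = math.dist(face, right_hand)  # 1st distance: face to right hand
    d2 = math.dist(face, left_hand)   # 2nd distance: face to left hand
    return min(d1, d2) <= threshold
```

When neither hand is near the face, the function returns False, corresponding to the case where no cough and no sneeze of the subject person is detected.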
The cough/sneeze detecting unit 132A may instead determine whether or not the area of the mouth of the person visible in the image is equal to or smaller than a threshold value, and detect a cough or a sneeze when the area is equal to or smaller than the threshold value.
Next, the operation of the airflow control device 1A in Embodiment 2 will be described.
Fig. 16 is a 1st flowchart for explaining the operation of the airflow control device in Embodiment 2.
The processing of steps S31 to S34 shown in fig. 16 is the same as the processing of steps S1 to S4 shown in fig. 12, and thus detailed description is omitted.
Next, in step S35, the cough/sneeze detection unit 132A determines whether or not the subject person in the predetermined space has coughed or sneezed. Here, the cough/sneeze detecting unit 132A calculates the 1st distance between the center-of-gravity position of the face extracted from the image and that of the right hand, and the 2nd distance between the center-of-gravity position of the face and that of the left hand. The cough/sneeze detection unit 132A determines whether the shorter of the 1st distance and the 2nd distance is equal to or less than a threshold value. If so, the cough/sneeze detection unit 132A determines that the subject person in the predetermined space has coughed or sneezed; otherwise, it determines that no cough or sneeze of the subject person has been detected.
Here, if it is determined that the subject person in the predetermined space is not detected to cough and the subject person in the predetermined space is not detected to sneeze (no in step S35), the process returns to step S31.
On the other hand, when it is determined that the subject person in the predetermined space has coughed or sneezed (yes in step S35), in step S36, the person state determination unit 133 acquires, from the image storage unit 141, the image captured at the time when the cough or sneeze was detected.
The processing of steps S37 to S48 shown in fig. 17 is the same as the processing of steps S8 to S19 shown in fig. 13, and thus detailed description is omitted.
In this way, the image from the camera 11 that captures the inside of the predetermined space is used to detect that a person in the predetermined space has coughed or sneezed. This simplifies the structure of the airflow control device 1A and reduces its cost.
(Embodiment 3)
In Embodiment 3, the recognition of the state of the person is performed on the camera side, and the airflow control device generates the control signal based on status information received from the camera.
Fig. 18 is a diagram showing the configuration of an airflow control system according to Embodiment 3.
The
The
The
For example, the cough/
The cough/
When the person is detected to cough or sneeze in the predetermined space by the cough/
The
The
The
The
The
The
The
The cough/
The person
The person
The person
The person
The person
The function of the human
The
The
The
The
The
The control
The control
The control
Next, the operation of the camera in Embodiment 3 will be described.
Fig. 19 is a flowchart for explaining the operation of the camera in Embodiment 3.
First, in step S51, the camera determines whether or not its power supply is turned on. If it is determined that the power supply is off (no in step S51), the process ends.
On the other hand, when it is determined that the power of the camera is turned on (yes in step S51), the camera captures an image of the predetermined space in step S52.
Next, in step S53, the camera determines whether or not the cough/sneeze detection signal has been received. If it is determined that the signal has not been received (no in step S53), the process returns to step S51.
On the other hand, when determining that the cough/sneeze detection signal has been received (yes in step S53), in step S54, the person state determination unit acquires the image captured at the time point when the cough or sneeze was detected.
Next, in step S55, the person state determination unit recognizes the state of the mouth of the subject person at the time of the cough or sneeze.
Next, in step S56, the person state determination unit recognizes the orientation of the face of the subject person.
Next, in step S57, the person state determination unit recognizes the position of the subject person in the predetermined space.
Next, in step S58, the camera transmits status information indicating the recognized state of the mouth, the orientation of the face, and the position of the subject person to the airflow control device.
Fig. 20 is a flowchart for explaining the operation of the airflow control device in Embodiment 3.
First, in step S71, the airflow control device determines whether or not its power supply is turned on. If it is determined that the power supply is off (no in step S71), the process ends.
On the other hand, when determining that the power is turned on (yes in step S71), in step S72, the airflow control device determines whether or not the status information has been received from the camera. If it is determined that the status information has not been received (no in step S72), the process returns to step S71.
On the other hand, when determining that the status information has been received (YES at step S72), at step S73,
Further, the processing of steps S74 to S81 shown in fig. 20 is the same as the processing of steps S12 to S19 shown in fig. 13.
In this way, when the person in the predetermined space is detected to have coughed or sneezed, the camera recognizes the state of the subject person and transmits the status information to the airflow control device, which generates the control signal for controlling the airflow.
In
(Embodiment 4)
In Embodiment 4, the camera 3A detects that a person in the predetermined space has coughed or sneezed, recognizes the state of the person, and generates the control signal for the airflow generation device.
Fig. 21 is a diagram showing the configuration of an airflow control system according to Embodiment 4.
The camera 3A is provided on a ceiling or a wall in a predetermined space. The camera 3A and the airflow generation device are connected so as to be able to communicate with each other.
The
The cough/
That is, the cough/
The cough/
The
Next, the operation of the camera 3A in Embodiment 4 will be described.
Fig. 22 is a flowchart for explaining the operation of the camera in Embodiment 4.
First, in step S91, the camera 3A determines whether or not its power supply is turned on. If it is determined that the power supply is off (no in step S91), the process ends.
On the other hand, when it is determined that the power of the camera 3A is turned on (yes in step S91), the camera 3A captures an image of the predetermined space in step S92.
Next, in step S93, the image processing unit acquires the image from the image storage unit.
Next, in step S94, the image processing unit extracts the features of the subject person from the image.
Next, in step S95, the cough/sneeze detection unit determines whether or not the subject person in the predetermined space has coughed or sneezed.
Here, if it is determined that the subject person in the predetermined space is not detected to cough and the subject person in the predetermined space is not detected to sneeze (no in step S95), the process returns to step S91.
On the other hand, when it is determined that the subject person in the predetermined space has coughed or sneezed (yes in step S95), in step S96, the person state determination unit acquires the image captured at the time when the cough or sneeze was detected.
The processing of steps S97 to S100 shown in fig. 22 is the same as the processing of steps S55 to S58 shown in fig. 19, and therefore detailed description is omitted.
In this way, the camera 3A detects that a person in the predetermined space has coughed or sneezed, recognizes the state of the mouth, the orientation of the face, and the position in the predetermined space of the subject person at the time of the cough or sneeze, and generates a control signal for controlling the airflow generated in the predetermined space by the airflow generation device.
(Infection risk evaluation system)
The present disclosure includes an infection risk evaluation system described below. In the explanation of the infection risk evaluation system, the same components as those of the airflow control system described above are assigned the same reference numerals, and detailed explanation thereof is omitted.
Fig. 23 is a diagram showing the configuration of the infection risk evaluation system of the present disclosure. The infection risk evaluation system shown in fig. 23 is an example of an information processing system, and includes an infection risk evaluation device 1C and a terminal device 5.
The infection risk evaluation device 1C is an example of an information processing device, and evaluates the risk of contracting an infectious disease (infection risk). The infection risk evaluation device 1C is disposed on a wall or ceiling in a predetermined space.
The infection risk evaluating apparatus 1C is connected to the terminal apparatus 5 via a network so as to be able to communicate with each other.
The terminal device 5 is, for example, a personal computer, a smartphone, or a tablet computer. The terminal device 5 is used by, for example, a manager or staff of a facility where the subject person is located.
The infection risk evaluating device 1C includes a camera 11, a microphone 12, a processor 13, a memory 14, and a communication unit 15. When coughing or sneezing is detected not by sound but by images, the infection risk evaluation device 1C may not include a microphone.
The infection risk evaluation device 1C does not determine whether or not the subject person is infected with the infectious disease, and regards the subject person who coughs or sneezes as an infected person.
The camera 11 and the microphone 12 may be provided inside the infection risk evaluation device 1C or may be provided outside the infection risk evaluation device 1C. When the camera 11 and the microphone 12 are provided outside the infection risk evaluating apparatus 1C, the infection risk evaluating apparatus 1C is connected to the camera 11 and the microphone 12 so as to be able to communicate with each other by wire or wirelessly.
The processor 13 includes an image processing unit 131, a cough/sneeze detection unit 132, a person state determination unit 133, an infection risk evaluation unit 135, and an evaluation result notification unit 136. The memory 14 is, for example, a semiconductor memory, and includes an image storage unit 141 and an infection risk evaluation table storage unit 144.
The infection risk evaluating apparatus 1C may include a plurality of cameras. This allows a wide area to be captured without scanning with a single camera, and also makes camera calibration easier.
When a person coughs or sneezes, the person may react in various reflexive ways. For example, a person may cough or sneeze with a part of the face such as the nose and mouth covered with a hand, with the mouth not covered at all, with the nose and mouth covered with a handkerchief, with the nose and mouth covered with a jacket sleeve, or with the mouth covered with a mask. The risk of infection in the space afterwards is considered to vary depending on the state of the subject person at the time of the cough or sneeze. For example, when a person coughs or sneezes without covering the mouth at all, droplets or droplet nuclei fly several meters ahead of the person, so the risk of infection in the space thereafter becomes extremely high due to droplet infection or air infection. Further, the droplets or droplet nuclei are considered to attach to or deposit on surrounding furniture and the like after flying into the space, so the risk of infection due to contact infection is also not low.
When a person coughs or sneezes with the nose and mouth covered with a hand, the virus mainly attaches to the hand. If a nearby person or object is then touched with the contaminated hand, the touched person may be infected, and a person who later touches the object may also be infected. Therefore, when a person coughs or sneezes with the mouth covered with a hand, the risk of contact infection increases. The initial velocity of a cough or sneeze is generally 10 m/s or more; that is, the virus is expelled at high speed. Therefore, even when the mouth is covered with a hand, droplets or droplet nuclei may leak through any gaps between the fingers. Thus, when a person coughs or sneezes with the mouth covered with a hand, the risk of infection due to air infection or droplet infection is also not low.
Further, when a person coughs or sneezes with the mouth covered with a handkerchief or a jacket sleeve, the probability of the virus attaching to the hands is very low, and gaps are less likely to occur than when the mouth is covered with a hand. Thus, the risk of infection is lower when the mouth is covered with a handkerchief or a jacket sleeve than when it is covered with a hand. However, when the mouth is covered with a jacket sleeve, the virus attached to the sleeve may be scattered again over time by the movement of the subject person. Therefore, the risk of infection due to air infection is higher when the mouth is covered with a jacket sleeve than when it is covered with a handkerchief.
In addition, when a person coughs or sneezes with the mouth covered with a mask, almost all droplets or droplet nuclei are trapped in the filter layer of the mask, provided the mask is worn correctly. Therefore, it can be said that the risk of infection is not high when the mouth is covered with a mask.
In addition, a person may cough or sneeze with the face down. When a person coughs or sneezes with the face down in this way, droplets or droplet nuclei spread to the lower side of the space, and therefore the risk of infection due to droplet infection is generally reduced.
As described above, the risk of contracting an infectious disease varies depending on the state of the mouth of a person at the time of coughing or sneezing. Further, the risk of infection through which infection route is high also varies depending on the state of the mouth of the person.
The person state determination unit 133 recognizes the state of the mouth of the subject person from the images captured before and after the time point when the cough or sneeze is detected. The state of the mouth of the person can be classified into a plurality of patterns. For example, the state of the mouth of the person includes a state in which the mouth of the person is not covered, a state in which the mouth is covered with a hand, a state in which the mouth is covered with a handkerchief, a state in which the mouth is covered with clothes (for example, a jacket sleeve), and a state in which the mouth is covered with a mask.
The person state determination unit 133 recognizes any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with the hand, a state in which the mouth of the person is covered with the handkerchief, a state in which the mouth of the person is covered with clothes (for example, jacket sleeves), and a state in which the mouth of the person is covered with the mask.
The infection risk evaluation table storage unit 144 stores an infection risk evaluation table in which the state of the mouth of the person is associated with an evaluation value obtained by numerically (quantitatively) quantifying the risk of contracting an infectious disease due to each of a droplet infection, a contact infection, and an air infection.
Fig. 24 is a diagram showing an example of the infection risk evaluation table stored in the infection risk evaluation table storage unit 144.
As shown in fig. 24, in the state where the mouth is not covered, an evaluation value indicating the risk of infection with droplet infection is associated with "3", an evaluation value indicating the risk of infection with contact infection is associated with "2", and an evaluation value indicating the risk of infection with air infection is associated with "3". The evaluation values are represented by numerical values "1" to "3", and the larger the numerical value, the higher the risk of infection.
In addition, with respect to the state in which the mouth is covered with the hand, an evaluation value indicating the risk of infection with droplet infection is associated with "2", an evaluation value indicating the risk of infection with contact infection is associated with "3", and an evaluation value indicating the risk of infection with air infection is associated with "2".
In addition, in the state where the mouth is covered with the handkerchief, the evaluation value indicating the risk of infection with droplet infection is associated with "1", the evaluation value indicating the risk of infection with contact infection is associated with "1", and the evaluation value indicating the risk of infection with air infection is associated with "1".
In addition, with respect to the state in which the mouth is covered with the sleeves of the jacket, "1" is associated with the evaluation value indicating the risk of infection with droplet infection, "1" is associated with the evaluation value indicating the risk of infection with contact infection, and "2" is associated with the evaluation value indicating the risk of infection with air infection.
In addition, in the state where the mouth is covered with the mask, "1" is associated with the evaluation value indicating the risk of infection with droplet infection, "1" is associated with the evaluation value indicating the risk of infection with contact infection, and "1" is associated with the evaluation value indicating the risk of infection with air infection.
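The infection risk evaluation table of Fig. 24 can be represented as a simple lookup. The numeric values are those given above; the state keys and route names are hypothetical labels for illustration:

```python
# Evaluation values from Fig. 24 per mouth state, on a 1-3 scale;
# a larger value means a higher risk of infection by that route.
INFECTION_RISK_TABLE = {
    "uncovered":    {"droplet": 3, "contact": 2, "airborne": 3},
    "hand":         {"droplet": 2, "contact": 3, "airborne": 2},
    "handkerchief": {"droplet": 1, "contact": 1, "airborne": 1},
    "sleeve":       {"droplet": 1, "contact": 1, "airborne": 2},
    "mask":         {"droplet": 1, "contact": 1, "airborne": 1},
}

def risk_values(mouth_state: str) -> dict:
    """Look up the per-route evaluation values for a recognised mouth state."""
    return INFECTION_RISK_TABLE[mouth_state]
```

For example, a cough with the mouth covered by a hand yields the highest value for contact infection, matching the table row described above.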
The infection risk evaluation unit 135 evaluates the risk of contracting an infection in the predetermined space based on the state of the mouth of the person recognized by the person state determination unit 133. The infection risk evaluation unit 135 evaluates the risk of contracting an infectious disease by each of a droplet infection, a contact infection, and an air infection. The infection risk evaluation unit 135 extracts, from the infection risk evaluation table, an evaluation value of each of the droplet infection, the contact infection, and the air infection associated with the state of the mouth of the person recognized by the person state determination unit 133, and accumulates each of the extracted evaluation values for a predetermined time.
The evaluation result notification unit 136 outputs the evaluation result obtained by the infection risk evaluation unit 135 to the communication unit 15. When the integrated value is equal to or greater than the threshold value, the evaluation result notification unit 136 outputs an evaluation result indicating that the risk of contracting an infectious disease is high in the predetermined space to the communication unit 15.
The communication unit 15 transmits an evaluation result indicating that the risk of contracting an infectious disease is high in the predetermined space to the terminal device 5.
The terminal device 5 receives the evaluation result transmitted from the communication unit 15. The terminal device 5 displays the received evaluation result.
Next, the operation of the infection risk evaluating apparatus 1C according to the present embodiment will be described.
Fig. 25 is a 1st flowchart for explaining the operation of the infection risk evaluating device, and fig. 26 is a 2nd flowchart for explaining the operation of the infection risk evaluating device in the present embodiment.
First, in step S101, the processor 13 determines whether or not the power of the infection risk evaluating apparatus 1C is turned on. If it is determined that the power supply of the infection risk evaluating apparatus 1C is off (no in step S101), the process ends.
On the other hand, when it is determined that the infection risk evaluating apparatus 1C is powered on (step S101: YES), the camera 11 takes an image of the predetermined space in step S102. The camera 11 stores the captured image in the image storage unit 141. The camera 11 also stores the moving image in the image storage unit 141.
Next, in step S103, the processor 13 determines whether or not a predetermined time has elapsed. Here, the predetermined time is, for example, 30 minutes. In the present embodiment, whether to notify the evaluation result of the infection risk is determined at predetermined time intervals. If the evaluation result were notified frequently, for example at 1-minute intervals, the notified person might find it bothersome; it is therefore preferable to notify at, for example, 30-minute intervals. This makes it possible to evaluate the risk of contracting an infectious disease in the predetermined space within the predetermined time. The predetermined time may be set by an administrator, for example.
When determining that the predetermined time has not elapsed (no in step S103), the image processing unit 131 acquires an image from the image storage unit 141 in step S104.
Next, in step S105, the image processing unit 131 extracts the feature of the subject person from the image. Here, the feature of the subject person refers to, for example, the face, eyes, mouth, right hand, left hand, clothes, and mask of the subject person. Further, the image processing unit 131 also detects the center of gravity position of each feature.
Next, in step S106, the cough/sneeze detection unit 132 acquires a sound from the microphone 12.
Next, in step S107, the cough/sneeze detection unit 132 determines whether or not the subject person in the predetermined space has coughed or sneezed. Here, the cough/sneeze detection unit 132 calculates the 1st distance between the center-of-gravity position of the face extracted from the image and that of the right hand, and the 2nd distance between the center-of-gravity position of the face and that of the left hand, and determines whether the shorter of the two is equal to or less than a threshold value. When it is, the cough/sneeze detection unit 132 further determines whether or not the volume of the sound acquired from the microphone 12 is equal to or greater than a threshold value. The cough/sneeze detection unit 132 determines that the subject person in the predetermined space has coughed or sneezed only when the shorter distance is equal to or less than the distance threshold and the sound volume is equal to or greater than the volume threshold; otherwise, it determines that no cough or sneeze of the subject person has been detected.
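The combined image-and-sound condition of step S107 can be sketched as below. The coordinate format, both threshold values, and the volume scale are placeholder assumptions, not values from the disclosure:

```python
import math

def detect_cough_or_sneeze_av(face, right_hand, left_hand, volume,
                              dist_threshold=0.25, volume_threshold=60.0):
    """Return True only when BOTH conditions of step S107 hold:
    the shorter face-to-hand distance is at or below the distance
    threshold, and the microphone volume is at or above the volume
    threshold (e.g. in dB; scale is an assumption)."""
    shorter = min(math.dist(face, right_hand), math.dist(face, left_hand))
    return shorter <= dist_threshold and volume >= volume_threshold
```

Requiring both cues reduces false detections compared with using the hand-to-face distance alone, which is the role the microphone plays in this flow.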
Here, if it is determined that the subject person in the predetermined space is not detected to cough and the subject person in the predetermined space is not detected to sneeze (no in step S107), the process returns to step S101.
On the other hand, when it is determined that the subject person in the predetermined space has coughed or sneezed (yes in step S107), in step S108, the person state determination unit 133 acquires, from the image storage unit 141, the image captured at the time when the subject person was detected to have coughed or sneezed.
Next, in step S109, the person state determination unit 133 recognizes the state of the mouth of the subject person at the time of the cough or sneeze. Based on the image at the time point when the cough or sneeze was detected, the person state determination unit 133 identifies which of the following states applies: the mouth of the subject person is not covered, covered with a hand, covered with a handkerchief, covered with a jacket sleeve, or covered with a mask.
The person state determination unit 133 may recognize the state of the mouth of the subject person from not only the image at the time point when the cough or sneeze is detected but also the images before and after the time point when the cough or sneeze is detected.
Next, in step S110, the infection risk evaluating unit 135 acquires an integrated value of the evaluation values stored in the memory 14. The memory 14 stores an integrated value obtained by integrating evaluation values of infection risks due to each of droplet infection, contact infection, and air infection in a predetermined space. The infection risk evaluation unit 135 acquires, from the memory 14, an integrated value of evaluation values of the risk of infection due to each of droplet infection, contact infection, and air infection in a predetermined space.
Next, in step S111, the infection risk evaluation unit 135 reads the infection risk evaluation table from the infection risk evaluation table storage unit 144.
Next, in step S112, the infection risk evaluating unit 135 refers to the infection risk evaluation table and determines an evaluation value of the infection risk due to each of the droplet infection, the contact infection, and the air infection corresponding to the state of the mouth of the target person recognized by the person state determining unit 133.
Next, in step S113, the infection risk evaluating unit 135 adds the determined evaluation values of the infection risks due to each of droplet infection, contact infection, and air infection to the acquired integrated values, and stores the updated integrated values for each route in the memory 14. The integrated values in the memory 14 are thereby updated. Thereafter, the process returns to step S101, and the processing from step S101 onward is performed.
On the other hand, when it is determined in step S103 that the predetermined time has elapsed (yes in step S103), in step S114, the infection risk evaluation unit 135 determines whether or not the total of the integrated values for the respective infection routes is equal to or greater than a threshold value. That is, the infection risk evaluation unit 135 sums the integrated values of the evaluation values of the infection risk due to each of droplet infection, contact infection, and air infection stored in the memory 14, and determines whether or not the total is equal to or greater than the threshold value. If it is determined that the total of the integrated values is less than the threshold value (no in step S114), the process proceeds to step S117.
On the other hand, when it is determined that the total value of the integrated values is equal to or greater than the threshold value (step S114: YES), the evaluation result notification unit 136 outputs an evaluation result indicating that the risk of contracting an infectious disease is high in the predetermined space to the communication unit 15 in step S115.
Next, in step S116, the communication unit 15 transmits the evaluation result indicating that the risk of contracting an infectious disease in the predetermined space is high to the terminal device 5. The terminal device 5 receives the evaluation result transmitted from the infection risk evaluation device 1C and displays it. A manager who confirms the displayed evaluation result learns that the risk of contracting an infectious disease in the predetermined space is high, and can therefore ventilate the space, turn on an air cleaner disposed in the space, or move the people in the space to another place.
Next, in step S117, the infection risk evaluating unit 135 initializes the integrated values of the evaluation values for the respective infection routes stored in the memory 14, and resets the predetermined time. Thereafter, the process returns to step S101, and the processing from step S101 onward is performed.
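The integration and notification decision of steps S110 to S117 can be sketched as a self-contained function, using the per-route evaluation values of Fig. 24. The notification threshold and the state labels are placeholders, not values from the disclosure:

```python
THRESHOLD = 20  # placeholder notification threshold

# Per-route evaluation values (droplet, contact, airborne) from Fig. 24.
RISK_TABLE = {
    "uncovered": (3, 2, 3), "hand": (2, 3, 2),
    "handkerchief": (1, 1, 1), "sleeve": (1, 1, 2), "mask": (1, 1, 1),
}

def evaluate(events):
    """Integrate evaluation values per infection route over a series of
    recognised mouth states (S110-S113), then decide whether to notify
    (S114-S115): notify is True when the sum of the integrated values
    for all routes reaches the threshold. Returns (totals, notify)."""
    totals = [0, 0, 0]  # droplet, contact, airborne integrated values
    for state in events:
        for i, value in enumerate(RISK_TABLE[state]):
            totals[i] += value
    notify = sum(totals) >= THRESHOLD
    return totals, notify
```

After each notification decision the integrated values would be re-initialized, as in step S117, by starting a new call with an empty event list.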
In step S114, the infection risk evaluation unit 135 determines whether or not the total value of the integrated values of the respective infection pathways is equal to or greater than a threshold value, but the present disclosure is not particularly limited thereto, and may determine whether or not at least one of the integrated values of the respective infection pathways is equal to or greater than a threshold value. That is, the infection risk evaluating unit 135 may determine whether or not at least one of the integrated value of the evaluation values of the risk of infection due to droplet infection, the integrated value of the evaluation values of the risk of infection due to contact infection, and the integrated value of the evaluation values of the risk of infection due to air infection is equal to or greater than a threshold value.
The evaluation result notification unit 136 outputs the evaluation result indicating that the risk of contracting an infectious disease is high in the predetermined space to the communication unit 15, but the present disclosure is not particularly limited thereto, and an integrated value of each of a droplet infection, a contact infection, and an air infection may be output to the communication unit 15 as the evaluation result. At this time, the evaluation result notification unit 136 may output the integrated value of each of the droplet infection, the contact infection, and the air infection as the evaluation result to the communication unit 15 when determining that the total value of the integrated values is equal to or greater than the threshold value. In addition, when the predetermined time has elapsed, the evaluation result notification unit 136 may output the integrated value of each of the droplet infection, the contact infection, and the air infection to the communication unit 15 as the evaluation result without determining whether or not the total value of the integrated values is equal to or greater than the threshold value.
In the present disclosure, the evaluation result is transmitted to the terminal device 5 when the predetermined time has elapsed and it is determined that the total value of the integrated values is equal to or greater than the threshold value, but the present disclosure is not particularly limited thereto, and the integrated value of each of the droplet infection, the contact infection, and the air infection may be transmitted to the terminal device 5 each time the integrated value of each of the droplet infection, the contact infection, and the air infection is stored in step S113. In this case, the terminal device 5 can display the integrated value of each of the droplet infection, the contact infection, and the air infection in real time.
In addition, the target person in the predetermined space is not limited to one person, and a plurality of target persons may be present. When a plurality of target persons are present in the predetermined space, it is also possible to detect a cough or a sneeze of each of the plurality of target persons, recognize the state of the mouth of each of the plurality of target persons, determine the evaluation value of the risk of infection due to each of droplet infection, contact infection, and air infection corresponding to the recognized state of the mouth of each target person, and store the integrated value of the evaluation values of the risk of infection due to each of droplet infection, contact infection, and air infection.
The memory 14 may store infected person information in which the face image of the subject person is associated with information indicating whether or not the subject person is infected with an infectious disease. In this case, the infection risk evaluation unit 135 may determine whether or not the subject person is infected with the infectious disease based on the face image of the subject person included in the image information. When it is determined that the subject person is infected with an infectious disease, the infection risk evaluation unit 135 may weight the determined evaluation value. When it is determined that the subject person is not infected with the infectious disease, the infection risk evaluation unit 135 may determine the evaluation value to be 0. The infection risk evaluation apparatus 1C may capture a face image of the subject person in advance, acquire biological information of the subject person from a biosensor, and determine whether or not the subject person is infected with an infectious disease based on the acquired biological information. The infection risk evaluation apparatus 1C may also receive, from a doctor or a manager, input of information on whether or not the subject person is infected with the infectious disease.
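The adjustment of the evaluation value by infection status described above can be sketched as follows. The function name and the weighting factor of 2.0 are invented for illustration; the disclosure only specifies that the value is weighted when the subject is infected and set to 0 when the subject is not.

```python
def adjust_evaluation_value(base_value, is_infected, weight=2.0):
    """Adjust an evaluation value by the subject's known infection status:
    weight it when the subject is infected, zero it when the subject is
    confirmed uninfected, and pass it through when the status is unknown."""
    if is_infected is True:
        return base_value * weight
    if is_infected is False:
        return 0.0
    return base_value  # status unknown (None): use the value as-is
```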
The infection risk evaluation system described above is an example of the following information processing system.
An information processing system includes a camera that captures an image of a predetermined space, and an information processing device that detects a person in the predetermined space coughing or sneezing, acquires an image of the predetermined space captured by the camera when the cough or the sneeze is detected, detects a state of a mouth of the person from the image, evaluates a risk of infection in the predetermined space based on the state of the mouth, and outputs an evaluation result.
In addition, the information processing system can realize the following information processing method.
An information processing method comprising: detecting a cough or a sneeze of a person located in a predetermined space, acquiring an image of the predetermined space captured when the cough or the sneeze is detected, detecting a state of a mouth of the person from the image, evaluating a risk of infection in the predetermined space based on the state of the mouth, and outputting an evaluation result.
According to this configuration, the state of the mouth of the person is detected from the image of the predetermined space captured when the cough or the sneeze is detected, and the risk of infection in the predetermined space is evaluated based on the state of the mouth of the person. When the risk of contracting an infectious disease in the predetermined space is estimated to be high, appropriate measures can be prompted so as to reduce that risk.
In the above information processing method, the state recognition of the mouth portion of the person may be performed to recognize either a state in which the mouth of the person is not covered or a state in which the mouth of the person is covered with a hand.
According to this configuration, the risk of infection is different between a state in which the mouth of the person is not covered and a state in which the mouth of the person is covered with the hand. Therefore, the risk of contracting an infectious disease in the predetermined space can be evaluated more accurately based on whether the state of the mouth of the person is a state in which the mouth of the person is not covered or a state in which the mouth of the person is covered with the hand.
In the information processing method, the state recognition of the mouth portion of the person may be performed to recognize any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, and a state in which the mouth of the person is covered with a mask.
According to this configuration, the risk of infection differs between a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with the hand, and a state in which the mouth of the person is covered with the mask. Therefore, the risk of contracting an infectious disease in the predetermined space can be evaluated more accurately based on which of the state of the mouth of the person is the state in which the mouth of the person is not covered, the state in which the mouth of the person is covered with the hand, and the state in which the mouth of the person is covered with the mask.
In the information processing method, the state recognition of the mouth portion of the person may be performed to recognize any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief, and a state in which the mouth of the person is covered with a mask.
According to this configuration, the risk of infection is different between a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief, and a state in which the mouth of the person is covered with a mask. Therefore, the risk of contracting an infectious disease in the predetermined space can be evaluated more accurately based on which of the state of the mouth portion of the person is the state in which the mouth of the person is not covered, the state in which the mouth of the person is covered with the hand, the state in which the mouth of the person is covered with the handkerchief, and the state in which the mouth of the person is covered with the mask.
In the information processing method, the state recognition of the mouth portion of the person may recognize any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief, a state in which the mouth of the person is covered with clothes, and a state in which the mouth of the person is covered with a mask.
According to this configuration, the risk of infection is different between a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief, a state in which the mouth of the person is covered with clothes, and a state in which the mouth of the person is covered with a mask. Therefore, the risk of contracting an infectious disease in the predetermined space can be evaluated more accurately based on which of the following the state of the mouth of the person is: the state in which the mouth of the person is not covered, the state in which the mouth of the person is covered with the hand, the state in which the mouth of the person is covered with the handkerchief, the state in which the mouth of the person is covered with the clothes, and the state in which the mouth of the person is covered with the mask.
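The five-way mouth-state recognition described above can be sketched as a selection among the enumerated states. The classifier that produces per-state scores from the image is outside the scope of this sketch; the state labels and score dictionary are illustrative assumptions.

```python
# The five mouth states enumerated in the disclosure, in illustrative label form.
MOUTH_STATES = ("not_covered", "covered_with_hand", "covered_with_handkerchief",
                "covered_with_clothes", "covered_with_mask")

def recognize_mouth_state(scores):
    """Pick the most likely of the five mouth states, given per-state confidence
    scores from some image classifier (not shown); missing states score 0."""
    return max(MOUTH_STATES, key=lambda s: scores.get(s, 0.0))
```

Downstream, the recognized label would index the evaluation table that assigns per-pathway risk values to each state.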
In the information processing method, the detection of the cough or the sneeze may be a detection of a cough or a sneeze of a person located in the predetermined space based on the image.
According to this configuration, it is possible to detect, using the image, that a person located in the predetermined space has coughed or sneezed.
In the information processing method, the detection of the cough or the sneeze may be performed by determining whether or not a distance between a position of the face of the person included in the image and a position of one hand of the person included in the image is equal to or less than a threshold value, and detecting the cough or the sneeze when the distance is determined to be equal to or less than the threshold value.
Generally, a person tends to place a hand over the mouth when about to cough or sneeze. Therefore, it is possible to easily detect that the person has coughed or sneezed by determining whether or not the distance between the position of the face of the person included in the image and the position of one hand of the person included in the image is equal to or less than the threshold value.
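The distance-based determination above can be sketched as follows, assuming face and hand positions have already been extracted from the image as 2D pixel coordinates. The function name, coordinate convention, and Euclidean distance metric are assumptions for the sketch.

```python
import math

def is_cough_or_sneeze(face_pos, hand_pos, threshold):
    """Infer a cough/sneeze gesture when the Euclidean distance (in pixels)
    between the detected face position and one detected hand position is at
    or below the threshold, i.e. the hand has been raised to the mouth."""
    dx = face_pos[0] - hand_pos[0]
    dy = face_pos[1] - hand_pos[1]
    return math.hypot(dx, dy) <= threshold
```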
In the information processing method, the detection of the cough or the sneeze may be performed by determining whether or not an area of a mouth of the person included in the image is equal to or smaller than a threshold value, and detecting the cough or the sneeze when the area is determined to be equal to or smaller than the threshold value.
Generally, a person tends to place a hand over the mouth when about to cough or sneeze, which hides part of the mouth from the camera. Therefore, by determining whether or not the visible area of the mouth of the person included in the image is equal to or smaller than the threshold value, it is possible to easily detect that the person has coughed or sneezed.
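The area-based determination can likewise be sketched with a simple threshold comparison, assuming the visible mouth region has already been segmented and measured in pixels. The function name and units are illustrative.

```python
def mouth_is_occluded(mouth_pixel_area, threshold):
    """Infer a cough/sneeze when the visible mouth area (in pixels) shrinks to
    or below the threshold, which occurs when a hand covers the mouth."""
    return mouth_pixel_area <= threshold
```

In practice the threshold would be calibrated per camera setup, since the apparent mouth area depends on distance and pose.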
In the information processing method, the sound obtained by collecting the sound in the predetermined space may be acquired from a microphone provided in the predetermined space, and the detection of the cough or the sneeze may be performed by detecting the cough or the sneeze of the person located in the predetermined space based on the image and the sound.
According to this configuration, the sound collected in the predetermined space is acquired from the microphone provided in the predetermined space, and the cough or the sneeze of the person located in the predetermined space is detected based on both the image and the sound.
Therefore, since the cough or the sneeze can be detected not only from the image but also from the sound, the cough or the sneeze of the person located in the predetermined space can be detected more accurately.
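One simple way to combine the image-based and sound-based cues, as a hedged sketch: each detector emits a confidence score, and a cough or sneeze is reported only when the combined confidence reaches a threshold. The scoring scheme and threshold are assumptions; the disclosure only states that detection is based on the image and the sound.

```python
def detect_cough_or_sneeze(image_score, sound_score, threshold=1.0):
    """Late fusion of two detectors: the image detector (e.g. hand near face)
    and the sound detector (e.g. a cough-like audio event) each emit a
    confidence in [0, 1]; detection fires when their sum reaches the threshold,
    so that agreement between modalities is effectively required."""
    return image_score + sound_score >= threshold
```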
In the information processing method, the risk of contracting the infectious disease may be evaluated separately for each of droplet infection, contact infection, and air infection.
According to this configuration, since the risk of infection with each of droplet infection, contact infection, and air infection can be evaluated, the risk of infection with each of droplet infection, contact infection, and air infection can be estimated for each infection route. In addition, it is possible to implement measures against infectious diseases according to the infection routes of droplet infection, contact infection, and air infection.
In the information processing method, the evaluation of the risk of contracting the infectious disease may extract, from an evaluation table in which the state of the mouth of the person is associated with evaluation values numerically expressing the risk of contracting the infectious disease by each of droplet infection, contact infection, and air infection, the evaluation value of each of the droplet infection, the contact infection, and the air infection associated with the recognized state of the mouth of the person, and accumulate each of the extracted evaluation values; the output of the evaluation result may output the integrated value of each of the droplet infection, the contact infection, and the air infection as the evaluation result.
According to this configuration, the evaluation table associates the state of the mouth of the person with evaluation values numerically expressing the risk of infection due to each of droplet infection, contact infection, and air infection. The evaluation value of each of the droplet infection, the contact infection, and the air infection associated with the recognized state of the mouth of the person is extracted from the evaluation table, the extracted evaluation values are accumulated respectively, and the integrated value of each of the droplet infection, the contact infection, and the air infection is output as the evaluation result.
Therefore, the risk of contracting an infectious disease by each of droplet infection, contact infection, and air infection can be easily estimated using the integrated value of each of droplet infection, contact infection, and air infection.
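The table lookup and accumulation described above can be sketched as follows. The evaluation table below is entirely illustrative: the disclosure specifies its structure (mouth state to per-pathway evaluation values) but not its contents, so the numbers and state labels are invented for the sketch.

```python
# Illustrative evaluation table: each recognized mouth state maps to per-pathway
# evaluation values (the numbers are assumptions, not values from the disclosure).
EVALUATION_TABLE = {
    "not_covered":       {"droplet": 1.0, "contact": 0.2, "air": 0.8},
    "covered_with_hand": {"droplet": 0.4, "contact": 1.0, "air": 0.5},
    "covered_with_mask": {"droplet": 0.1, "contact": 0.1, "air": 0.1},
}

def accumulate(integrated, mouth_state):
    """Extract the per-pathway evaluation values for the recognized mouth state
    and add them to the running integrated values, one sum per pathway."""
    for pathway, value in EVALUATION_TABLE[mouth_state].items():
        integrated[pathway] = integrated.get(pathway, 0.0) + value
    return integrated
```

Each detected cough or sneeze triggers one `accumulate` call, so the integrated values grow with the number and the riskiness of the detected events.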
In the information processing method, the evaluation of the risk of contracting the infectious disease may extract, from an evaluation table in which the state of the mouth of the person is associated with evaluation values numerically expressing the risk of contracting the infectious disease by each of droplet infection, contact infection, and air infection, the evaluation value of each of the droplet infection, the contact infection, and the air infection associated with the recognized state of the mouth of the person, and accumulate each of the extracted evaluation values over a predetermined time; the output of the evaluation result may output, when the integrated value is equal to or greater than a threshold value, an evaluation result indicating that the risk of contracting the infectious disease in the predetermined space is high.
According to this configuration, the evaluation table associates the state of the mouth of the person with evaluation values numerically expressing the risk of infection due to each of droplet infection, contact infection, and air infection. The evaluation value of each of the droplet infection, the contact infection, and the air infection associated with the recognized state of the mouth of the person is extracted from the evaluation table, and each of the extracted evaluation values is accumulated over a predetermined time. When the integrated value is equal to or greater than the threshold value, an evaluation result indicating that the risk of contracting the infectious disease in the predetermined space is high is output.
Therefore, the risk of contracting an infectious disease by each of the droplet infection, the contact infection, and the air infection can be easily estimated using the integrated value of each of the droplet infection, the contact infection, and the air infection over a predetermined time.
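The time-windowed accumulation and threshold check can be sketched as a small evaluator that keeps timestamped evaluation values and sums only those within the predetermined window. The class name, the sliding-window interpretation of "predetermined time", and the single combined total are assumptions made for the sketch.

```python
import time

class WindowedRiskEvaluator:
    """Accumulate evaluation values over a fixed time window and report a
    high-risk result when the window's total reaches the threshold."""

    def __init__(self, window_seconds, threshold):
        self.window = window_seconds
        self.threshold = threshold
        self.events = []  # list of (timestamp, evaluation_value) pairs

    def add(self, value, now=None):
        """Record one evaluation value (e.g. from a detected cough or sneeze)."""
        self.events.append((time.time() if now is None else now, value))

    def is_high_risk(self, now=None):
        """True when the values recorded within the window sum to the threshold or more."""
        t = time.time() if now is None else now
        total = sum(v for ts, v in self.events if t - ts <= self.window)
        return total >= self.threshold
```

The explicit `now` parameter makes the evaluator testable with synthetic timestamps; a deployment would simply omit it and rely on the wall clock.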
The device of the present disclosure has been described above based on the embodiments, but the present disclosure is not limited to the embodiments. Embodiments obtained by implementing various modifications to the present embodiment and embodiments constructed by combining constituent elements in different embodiments may be included in the scope of one or more embodiments of the present disclosure, as long as the embodiments do not depart from the spirit of the present disclosure.
In the above embodiments, each component may be configured by dedicated hardware, or may be realized by executing a software program suitable for each component. Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded in a recording medium such as a hard disk or a semiconductor memory.
A part or all of the functions of the apparatus according to the embodiments of the present disclosure are typically implemented as an LSI (Large Scale Integration), which is an integrated circuit. These may be formed into individual single chips, or may be formed into a single chip including a part or all of them. The integrated circuit is not limited to an LSI, and may be realized by a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacturing, or a reconfigurable processor in which the connection and setting of circuit cells within the LSI can be reconfigured, may also be used.
In addition, a part or all of the functions of the apparatus according to the embodiments of the present disclosure may be realized by executing a program by a processor such as a CPU.
In addition, the numerical values used above are all examples given for specifically explaining the present disclosure, and the present disclosure is not limited to these exemplified values.
The order of execution of the steps shown in the flowcharts is described for the purpose of specifically explaining the present disclosure, and may be other than the above order as long as the same effects can be obtained. Further, a part of the above steps may be executed simultaneously (in parallel) with other steps.
Various modifications of the embodiments of the present disclosure, which are made by changing the embodiments of the present disclosure within the scope that will occur to those skilled in the art, are also included in the present disclosure as long as the modifications do not depart from the spirit of the present disclosure.
Industrial applicability
The information processing method, the information processing program, and the information processing system according to the present disclosure can reduce the risk of infection in a predetermined space, and are useful as an information processing method, an information processing program, and an information processing system that control the airflow in a predetermined space in which a cough or a sneeze is detected.
Description of the reference symbols
1, 1A, 1B airflow control device; 1C infection risk evaluation device; 2 airflow generation device; 3, 3A camera; 4 microphone; 5 terminal device; 11 camera; 12 microphone; 13, 13A, 13B processor; 14, 14B memory; 15, 15B communication unit; 21 communication unit; 22 processor; 23 memory; 24 airflow generation unit; 25 wind direction changing unit; 31 imaging unit; 32, 32A processor; 33 memory; 34, 34A communication unit; 41 sound collection unit; 42 processor; 43 communication unit; 131 image processing unit; 132, 132A cough/sneeze detection unit; 133 person state determination unit; 134 control signal generation unit; 135 infection risk evaluation unit; 136 evaluation result notification unit; 141 image storage unit; 142 device information storage unit; 143 airflow control table storage unit; 144 infection risk evaluation table storage unit; 201 air conditioning equipment; 202 air purifier; 221 airflow control unit; 321 image processing unit; 322 cough/sneeze determination unit; 323 person state determination unit; 324 cough/sneeze detector; 331 image storage unit; 421 cough/sneeze detection unit.