Information processing method, information processing program, and information processing system

Document No.: 1131960  Publication date: 2020-10-02

Reading note: This technique, "Information processing method, information processing program, and information processing system", was designed and created by 高柳哲也 (Takayanagi Tetsuya) on 2019-05-21. Its main content is as follows: An information processing method causes a computer to execute: detecting that a person located in a predetermined space coughs or sneezes (S6); acquiring an image of the predetermined space captured when the cough or the sneeze is detected (S7); detecting the state of the mouth of the person from the image (S8); generating, based on the recognized state of the mouth of the person, a control signal for controlling at least one of the wind direction and the air volume of air sent from an airflow generating device that generates an airflow in the predetermined space (S18); and outputting the generated control signal.

1. An information processing method for causing a computer to execute:

detecting a person within a predetermined space coughing or sneezing,

acquiring an image of the predetermined space captured when the cough or the sneeze is detected,

detecting a state of the mouth of the person from the image,

generating a control signal for controlling at least one of a wind direction and an air volume of air sent from an air flow generating device that generates an air flow in the predetermined space, based on the recognized state of the mouth of the person,

and outputting the generated control signal.

2. The information processing method according to claim 1,

wherein the recognition of the state of the mouth of the person recognizes either a state in which the mouth of the person is not covered or a state in which the mouth of the person is covered with a hand.

3. The information processing method according to claim 1,

wherein the recognition of the state of the mouth of the person recognizes any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, and a state in which the mouth of the person is covered with a mask.

4. The information processing method according to claim 1,

wherein the recognition of the state of the mouth of the person recognizes any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief or clothes, and a state in which the mouth of the person is covered with a mask.

5. The information processing method according to any one of claims 1 to 4,

wherein the orientation of the face of the person at the time when the cough or the sneeze was detected is also identified from the image,

and the wind direction is made different between a case where the face faces forward and a case where the face faces downward.

6. The information processing method according to any one of claims 1 to 5,

wherein the position coordinates of the person are also calculated from the image,

and the control signal is generated based on the recognized state of the mouth of the person and the position coordinates.

7. The information processing method according to claim 6,

wherein the airflow generating device is selected from a plurality of airflow generating devices based on the position coordinates.

8. A program that causes a computer to execute a process, the process comprising:

detecting a person within a predetermined space coughing or sneezing,

acquiring an image of the predetermined space captured when the cough or the sneeze is detected,

detecting a state of the mouth of the person from the image,

generating a control signal for controlling at least one of a wind direction and an air volume of air sent from an air flow generating device that generates an air flow in the predetermined space, based on a state of the mouth,

and outputting the generated control signal.

9. An information processing system is provided with:

a camera that photographs a predetermined space;

an air flow generating device which generates air flow in the predetermined space; and

an information processing device,

wherein the information processing device performs:

detecting a person within the predetermined space coughing or sneezing,

acquiring an image of the predetermined space captured by the camera when the cough or the sneeze is detected,

detecting a state of the mouth of the person from the image,

generating a control signal for controlling at least one of a wind direction and an air volume of air sent from the airflow generating device based on a state of the mouth,

and outputting the generated control signal.

Technical Field

The present disclosure relates to an information processing method, an information processing program, and an information processing system for controlling an air flow in a predetermined space in which a cough or sneeze is detected.

Background

Many infectious diseases, including influenza, are transmitted from person to person by, for example, contact infection, droplet infection, or airborne infection. In particular, the appearance of an infected person in a facility such as an institution is likely to lead to collective infection throughout the facility, so countermeasures are urgently needed. For example, in facilities such as nursing facilities where many elderly people live, infectious diseases are likely to become serious, and in the worst case an elderly person who contracts an infectious disease may die. In care facilities, infection countermeasures are taken at the personal level, such as caregivers wearing masks and practicing thorough hand hygiene. In addition, regarding influenza, considering that droplet infection or airborne infection is the main infection route, it is important from the viewpoint of infection countermeasures that people are not exposed to the coughs or sneezes of an infected person.

For example, Patent Document 1 discloses the following technique: detecting that an infected person has performed an action that generates droplets; determining, when the infected person has performed such an action, whether a person to be examined was located at the place where the action was performed; and outputting identification information of the person to be examined when the person is determined to have been located at that place.

Further, Non-Patent Document 1 discloses the result of a simulation of how droplets scatter when an infected person coughs in an air-conditioned room whose interior is being ventilated.

According to the result, when a person coughs at an initial speed of 10 m/s, the droplets reach a susceptible individual 1 m ahead in about 5 seconds, and that individual is exposed to the droplets. After that, the droplets spread through the surroundings over several tens of seconds or more.

However, since the ventilation condition in Non-Patent Document 1 is set larger than the generally required ventilation amount, the droplet spreading time is estimated to be short. The behavior of the droplets is generally known to be divided into two stages: a 1st stage, within 5 to 10 seconds, in which the droplets are scattered at high speed by the unsteady cough airflow; and a 2nd stage, after the 1st stage, in which the droplets are rapidly decelerated by air resistance and carried by the indoor airflow.

Disclosure of Invention

However, in the above-described conventional techniques, the risk of infection in a predetermined space where a cough or sneeze is detected cannot be reduced, and further improvement is required.

The present disclosure has been made to solve the above-described problems, and provides a technique capable of reducing the risk of infection in a predetermined space where coughing or sneezing is detected.

An information processing method according to an aspect of the present disclosure causes a computer to execute: detecting that a person located in a predetermined space coughs or sneezes; acquiring an image of the predetermined space captured when the cough or the sneeze is detected; detecting a state of the mouth of the person from the image; generating, based on the recognized state of the mouth of the person, a control signal for controlling at least one of a wind direction and an air volume of air sent from an airflow generating device that generates an airflow in the predetermined space; and outputting the generated control signal.

These general or specific aspects may be realized by an apparatus, a system, an integrated circuit, a computer program, or a computer-readable recording medium, or by any combination of an apparatus, a system, a method, an integrated circuit, a computer program, and a computer-readable recording medium. Examples of the computer-readable recording medium include nonvolatile recording media such as a CD-ROM (Compact Disc Read-Only Memory).

According to the present disclosure, since the locally present droplets can be diffused and the concentration thereof can be made uniform, the risk of infection in a predetermined space where a cough or sneeze is detected can be reduced.

Further advantages and effects of one aspect of the present disclosure will become apparent from the description and the accompanying drawings. These advantages and/or effects are provided by the features described in several embodiments and in the specification and drawings, but not all of them need to be provided in order to obtain one or more of these advantages and/or effects.

Drawings

Fig. 1 is a diagram showing the configuration of an airflow control system according to embodiment 1 of the present disclosure.

Fig. 2 is a diagram for explaining the 1st method of detecting a cough or sneeze of the subject person from the image in embodiment 1.

Fig. 3 is a diagram for explaining the 2nd method of detecting a cough or sneeze of the subject person from the image in embodiment 1.

Fig. 4 is a diagram showing an example of a time-series change in the area of the mouth of the subject person or the distance between the face and the hand of the subject person in embodiment 1.

Fig. 5 is a diagram showing an example of the 1st airflow control table in a case where the airflow control system includes one airflow generation device and the airflow generation device is an air conditioner.

Fig. 6 is a diagram showing an example of a simulation result of the wind speed distribution in a case where, in a space in which an air conditioner and an air cleaner are arranged, the air conditioner is driven without driving the air cleaner and an airflow is generated at 30 degrees downward from the horizontal direction.

Fig. 7 is a diagram showing an example of a simulation result of the wind speed distribution in a case where, in a space in which an air conditioner and an air cleaner are arranged, the air conditioner is driven without driving the air cleaner and an airflow is generated at 90 degrees downward from the horizontal direction.

Fig. 8 is a diagram showing an example of the 2nd airflow control table in the case where the airflow control system includes one airflow generating device and the airflow generating device is an air cleaner.

Fig. 9 is a diagram showing an example of a simulation result of the wind speed distribution in a case where, in a space in which an air conditioner and an air cleaner are arranged, the air conditioner is not driven but the air cleaner is driven and an airflow is generated at 90 degrees upward from the horizontal direction.

Fig. 10 is a diagram showing an example of a simulation result of a wind speed distribution in a case where the air conditioner is not driven but the air cleaner is driven and an airflow is generated at 45 degrees upward from the horizontal direction in a space in which the air conditioner and the air cleaner are arranged.

Fig. 11 is a diagram showing an example of the 3rd airflow control table in the case where the airflow control system includes two airflow generation devices, the two airflow generation devices being an air conditioner and an air cleaner, respectively.

Fig. 12 is a 1st flowchart for explaining the operation of the airflow control device in embodiment 1.

Fig. 13 is a 2nd flowchart for explaining the operation of the airflow control device in embodiment 1.

Fig. 14 is a flowchart for explaining the operation of the airflow generation device in embodiment 1.

Fig. 15 is a diagram showing the configuration of an airflow control system according to embodiment 2 of the present disclosure.

Fig. 16 is a 1st flowchart for explaining the operation of the airflow control device in embodiment 2.

Fig. 17 is a 2nd flowchart for explaining the operation of the airflow control device in embodiment 2.

Fig. 18 is a diagram showing the configuration of an airflow control system according to embodiment 3 of the present disclosure.

Fig. 19 is a flowchart for explaining the operation of the camera in embodiment 3.

Fig. 20 is a flowchart for explaining the operation of the airflow control device in embodiment 3.

Fig. 21 is a diagram showing the configuration of an airflow control system according to embodiment 4 of the present disclosure.

Fig. 22 is a flowchart for explaining the operation of the camera in embodiment 4.

Fig. 23 is a diagram showing the configuration of the infection risk evaluation system of the present disclosure.

Fig. 24 is a view showing an example of an infection risk evaluation table stored in an infection risk evaluation table storage unit in the infection risk evaluation system of the present disclosure.

Fig. 25 is a 1st flowchart for explaining the operation of the infection risk evaluating device of the present disclosure.

Fig. 26 is a 2nd flowchart for explaining the operation of the infection risk evaluating device of the present disclosure.

Detailed Description

(insight underlying the present disclosure)

In the above-described conventional technique, although a person at risk of infection can be estimated, it is difficult to prevent that person from becoming infected in the first place. That is, it is difficult to prevent droplet or airborne infection resulting from the person being exposed to the cough or sneeze of an infected person.

People cough or sneeze in various states. For example, many people cover a part of the face, such as the nose and mouth, with their hands when coughing or sneezing. A person may also cough or sneeze while wearing a mask. The behavior of the droplets differs depending on the state of the person who coughs or sneezes.

For example, when a person coughs or sneezes with a part of the face covered by the hands, many droplets adhere to the hands without spreading. Although droplets or droplet nuclei with small particle diameters leak from the gaps between the fingers, their convection velocity is expected to be about the same as the wind velocity in the room because of the pressure loss caused by the covering hands. That is, in this case the droplets are localized around the infected person and can be said to be almost static. It is then important to rapidly diffuse the droplets remaining around the infected person into the surroundings.

In order to solve the above problem, an information processing method according to an aspect of the present disclosure causes a computer to execute: detecting that a person located in a predetermined space coughs or sneezes; acquiring an image of the predetermined space captured when the cough or the sneeze is detected; detecting a state of the mouth of the person from the image; generating, based on the recognized state of the mouth of the person, a control signal for controlling at least one of a wind direction and an air volume of air sent from an airflow generating device that generates an airflow in the predetermined space; and outputting the generated control signal.

According to this configuration, the state of the mouth of the person at the time of coughing or sneezing of the person is recognized from the image acquired when the person is detected in the predetermined space, and the control signal for controlling at least one of the wind direction and the wind volume of the air sent from the air flow generating device that generates the air flow in the predetermined space is generated based on the recognized state of the mouth of the person.

Therefore, by generating an air flow at a local area where droplets are generated by the person's cough or sneeze, the droplets can be diffused and the concentration of the droplets can be made uniform, and thus the risk of infection in a predetermined space where a cough or sneeze is detected can be reduced.

In the above information processing method, the recognition of the state of the mouth of the person may recognize either a state in which the mouth of the person is not covered or a state in which the mouth of the person is covered with a hand.

According to this configuration, the location where droplets are localized by the person's cough or sneeze differs between the state in which the mouth is not covered and the state in which the mouth is covered with a hand. Therefore, by determining the position at which the airflow is generated according to which of these two states the mouth of the person is in, the localized droplets can be diffused more reliably.

In the information processing method, the recognition of the state of the mouth of the person may recognize any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, and a state in which the mouth of the person is covered with a mask.

According to this configuration, the location where droplets are generated by the person's cough or sneeze differs among the state in which the mouth is not covered, the state in which the mouth is covered with a hand, and the state in which the mouth is covered with a mask. Therefore, by determining the position at which the airflow is generated according to which of these three states the mouth of the person is in, the localized droplets can be diffused more reliably.

In the information processing method, the recognition of the state of the mouth of the person may recognize any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief or clothes, and a state in which the mouth of the person is covered with a mask.

According to this configuration, the location where droplets are generated by the person's cough or sneeze differs among the state in which the mouth is not covered, the state in which the mouth is covered with a hand, the state in which the mouth is covered with a handkerchief or clothes, and the state in which the mouth is covered with a mask. Therefore, by determining the position at which the airflow is generated according to which of these four states the mouth of the person is in, the localized droplets can be diffused more reliably.

In the information processing method, the orientation of the face of the person at the time when the cough or the sneeze was detected may further be identified from the image, and the wind direction may be made different between a case where the face faces forward and a case where the face faces downward.

According to this configuration, when a person coughs or sneezes with the face facing forward, the droplets are scattered forward of the person, whereas when the person coughs or sneezes with the face facing downward, the droplets are localized below in the predetermined space. Therefore, by making the direction of the air sent by the airflow generating device different between the case where the face of the person faces forward and the case where it faces downward, the airflow can be generated accurately at the location where the droplets are localized.
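As a sketch, the mapping described above (recognized mouth state and face orientation to a control signal) could be expressed as a simple lookup table. The state names, angles, and volume levels below are illustrative assumptions, not values from the disclosure:

```python
def control_signal(mouth_state, face_orientation):
    """Look up (wind direction in degrees below horizontal, air volume level)
    for a recognized mouth state and face orientation.

    All concrete angles and levels here are illustrative assumptions.
    """
    table = {
        # Face forward: droplets scatter ahead of the person, so angle the flow forward.
        ("uncovered", "forward"): (30, "high"),
        # Face downward: droplets localize below, so blow straight down.
        ("uncovered", "downward"): (90, "high"),
        ("covered_by_hand", "forward"): (30, "medium"),
        ("covered_by_hand", "downward"): (90, "medium"),
        ("covered_by_mask", "forward"): (30, "low"),
        ("covered_by_mask", "downward"): (90, "low"),
    }
    return table[(mouth_state, face_orientation)]

# The wind direction differs between a forward-facing and a downward-facing cough,
# as the configuration above requires.
direction_forward, _ = control_signal("uncovered", "forward")
direction_downward, _ = control_signal("uncovered", "downward")
```

Whatever concrete values are used, the point is that the pair (mouth state, face orientation) fully determines the generated control signal.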

In the above information processing method, the position coordinates of the person may be calculated from the image, and the control signal may be generated based on the recognized state of the mouth of the person and the position coordinates.

According to this configuration, the position where the droplet is locally present can be more accurately specified based on the state of the mouth of the person when the cough or sneeze of the person is detected and the position coordinates where the person is located.

In the information processing method, the airflow generating device may be selected from a plurality of airflow generating devices based on the position coordinates.

According to this configuration, a plurality of airflow generation devices are provided, and the airflow generation device to be controlled is selected from among them based on the calculated position coordinates of the person. Therefore, for example, by sending air to the local area where droplets are present from the airflow generation device closest to the position of the person who coughed or sneezed, the localized droplets can be diffused more efficiently and quickly.
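A minimal sketch of this selection step, assuming 2-D room coordinates in meters and hypothetical device names:

```python
import math

def select_nearest_device(person_xy, device_positions):
    """Return the id of the airflow generating device closest to the person.

    person_xy: (x, y) position of the person in room coordinates (meters).
    device_positions: dict mapping a device id to its (x, y) position.
    """
    return min(
        device_positions,
        key=lambda dev: math.dist(person_xy, device_positions[dev]),
    )

# Example: the person at (3.5, 2.5) is closer to the air cleaner at (4, 3)
# than to the air conditioner at (0, 0), so the air cleaner is selected.
devices = {"air_conditioner": (0.0, 0.0), "air_cleaner": (4.0, 3.0)}
nearest = select_nearest_device((3.5, 2.5), devices)
```

Euclidean distance is the simplest plausible criterion; a real system could also weigh each device's reach and current operating mode.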

Another aspect of the present disclosure relates to a program for causing a computer to execute processing including: detecting that a person located in a predetermined space coughs or sneezes; acquiring an image of the predetermined space captured when the cough or the sneeze is detected; detecting a state of the mouth of the person from the image; generating, based on the state of the mouth, a control signal for controlling at least one of an airflow direction and an airflow volume of air sent from an airflow generating device that generates an airflow in the predetermined space; and outputting the generated control signal.

According to this configuration, the state of the mouth of the person at the time of coughing or sneezing of the person is recognized from the image acquired when the person is detected in the predetermined space, and the control signal for controlling at least one of the wind direction and the wind volume of the air sent from the air flow generating device that generates the air flow in the predetermined space is generated based on the recognized state of the mouth of the person.

Therefore, by generating an air flow at a local area where droplets are generated by the person's cough or sneeze, the droplets can be diffused and the concentration of the droplets can be made uniform, and thus the risk of infection in a predetermined space where a cough or sneeze is detected can be reduced.

An information processing system according to another aspect of the present disclosure includes: a camera that photographs a predetermined space; an air flow generating device which generates air flow in the predetermined space; and an information processing device that detects a person in the predetermined space coughing or sneezing, acquires an image of the predetermined space captured by the camera when the person coughs or sneezing is detected, detects a state of a mouth of the person from the image, generates a control signal for controlling at least one of a wind direction and a wind volume of air sent from the airflow generation device based on the state of the mouth, and outputs the generated control signal.

According to this configuration, the state of the mouth of the person at the time of coughing or sneezing of the person is recognized from the image acquired when the person is detected in the predetermined space, and the control signal for controlling at least one of the wind direction and the wind volume of the air sent from the air flow generating device that generates the air flow in the predetermined space is generated based on the recognized state of the mouth of the person.

Therefore, by generating an air flow at a local area where droplets are generated by the person's cough or sneeze, the droplets can be diffused and the concentration of the droplets can be made uniform, and thus the risk of infection in a predetermined space where a cough or sneeze is detected can be reduced.

Hereinafter, embodiments of the present disclosure will be described with reference to the drawings. The following embodiments are merely examples embodying the present disclosure, and do not limit the technical scope of the present disclosure.

(embodiment mode 1)

Fig. 1 is a diagram showing the configuration of an airflow control system according to embodiment 1 of the present disclosure. The airflow control system shown in fig. 1 is an example of an information processing system, and includes an airflow control device 1 and an airflow generation device 2.

The airflow control device 1 is an example of an information processing device, and controls an airflow in a predetermined space. The airflow control device 1 is disposed on a wall or a ceiling in a predetermined space. The predetermined space may be a space in which a camera or the like can be installed, and may be, for example, a public living room in a care facility or a waiting room in a hospital. The predetermined space may be a relatively narrow space such as an inside of a train.

The airflow generating device 2 generates an airflow in a predetermined space. The airflow generating device 2 is, for example, an air conditioning apparatus having a cooling and/or heating function, an air cleaner having an air cleaning function, or a blower having an air blowing function. The airflow generating device 2 is disposed in a predetermined space. The airflow generating device 2 has a function of changing the wind direction and the air volume.

The airflow control device 1 is connected to the airflow generation device 2 via a network so as to be able to communicate with each other. The network is for example an intranet or the internet.

The airflow control device 1 includes a camera 11, a microphone 12, a processor 13, a memory 14, and a communication unit 15.

The camera 11 is disposed in a predetermined space, and photographs the predetermined space. The camera 11 acquires an image of the subject person in a predetermined space. The subject person is a person staying in the space in which the airflow control device 1 is installed.

Here, the airflow control device 1 regards a subject person who coughs or sneezes as an infected person, without determining whether or not the subject person is actually infected with an infectious disease. When a person contracts an infectious disease, the infectious period and the symptomatic period usually do not coincide. With current technology it is difficult to judge whether a person is infectious before symptoms appear, so a person may only be judged to be infected long after becoming infectious. Therefore, the term "infected person" is used here for an individual who has developed symptoms and whose infectiousness has been confirmed by some means such as a diagnosis by a doctor.

The camera 11 is a camera for monitoring the room, is installed on a ceiling or the like so as to be able to detect the subject person over a wide range, and continuously acquires a moving image of the room. The camera 11 may further include a rotating unit for sweeping the entire area of the room within a predetermined time. By providing the camera 11 with such a rotating unit, the entire indoor space can be imaged by a single camera 11 even in a larger space of 20 tatami mats (1 tatami mat corresponds to approximately 1.62 square meters) or more.

The microphone 12 is disposed in a predetermined space, and collects sound in the predetermined space. The microphone 12 acquires the sound of the subject person in a predetermined space.

In embodiment 1, the camera 11 and the microphone 12 may be provided inside the airflow control device 1 or outside the airflow control device 1. When the camera 11 and the microphone 12 are provided outside the airflow control device 1, the airflow control device 1 is connected to the camera 11 and the microphone 12 so as to be able to communicate with each other by wire or wirelessly.

The processor 13 includes an image processing unit 131, a cough/sneeze detection unit 132, a person state determination unit 133, and a control signal generation unit 134. The memory 14 is, for example, a semiconductor memory, and includes an image storage unit 141, a device information storage unit 142, and an airflow control table storage unit 143.

The image storage unit 141 stores the image captured by the camera 11. The camera 11 stores an image obtained by imaging a predetermined space in the image storage unit 141.

The image processing unit 131 acquires the image of the predetermined space from the image storage unit 141, performs image processing on the acquired image, and extracts human features such as the face, nose, mouth, hands, clothes, the presence or absence of a mask, and the position of the subject person in the room. The image processing unit 131 may use machine learning or deep learning for feature extraction, or may use a widely known feature extractor such as a Haar-like feature extractor for face detection. When extracting the features, the image processing unit 131 detects information such as the centroid position and area of each extracted feature, such as the mouth and the face, together with the position information of the subject person in the room.

The cough/sneeze detection unit 132 detects a cough or sneeze of a person located in the predetermined space. When the subject person coughs or sneezes, the cough/sneeze detection unit 132 detects the cough or the sneeze.

The cough/sneeze detection unit 132 detects the cough or sneeze of a person in the predetermined space using the sound collected by the microphone 12 and the image captured by the camera 11.

For example, the cough/sneeze detection unit 132 determines whether or not the volume of the sound collected by the microphone 12 is equal to or greater than a threshold value. When determining that the volume of the sound collected by the microphone 12 is equal to or greater than the threshold value, the cough/sneeze detection unit 132 determines that a person in the predetermined space has coughed or sneezed. As the threshold value, for example, 70dB (decibel) may be used. Since the detected sound volume varies depending on the distance between the microphone 12 and the person, the cough/sneeze detecting unit 132 may calculate the distance between the microphone 12 and the person from the image and correct the threshold value based on the calculated distance.
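A minimal sketch of this volume-threshold check with the distance-based correction; the inverse-square (free-field, roughly 6 dB per doubling of distance) attenuation model and the 1 m reference distance are assumptions, not values from the source:

```python
import math

SOUND_THRESHOLD_DB = 70.0  # base threshold at the reference distance

def corrected_threshold(distance_m, reference_m=1.0):
    """Adjust the 70 dB threshold for the person-microphone distance.

    Uses free-field attenuation of 20*log10(d/d_ref) dB, i.e. ~6 dB per
    doubling of distance; this model choice is an assumption.
    """
    return SOUND_THRESHOLD_DB - 20.0 * math.log10(distance_m / reference_m)

def is_cough_or_sneeze(volume_db, distance_m):
    """Return True when the measured volume meets the corrected threshold."""
    return volume_db >= corrected_threshold(distance_m)

# A 65 dB sound measured from 2 m away exceeds the distance-corrected
# threshold (about 64 dB), while the same volume from 1 m away does not.
far_hit = is_cough_or_sneeze(65.0, 2.0)
near_miss = is_cough_or_sneeze(65.0, 1.0)
```

In practice such a volume check would be combined with the spectral and image-based checks described next, since loud non-cough sounds would otherwise trigger it.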

The cough/sneeze detection unit 132 may also perform spectral analysis of the sound collected by the microphone 12 and detect a cough or sneeze from the analysis result using an algorithm such as machine learning. In this case, detection can exploit the spectral pattern unique to coughing or sneezing, so the detection accuracy improves.
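One simple spectral feature for such an analysis is the fraction of a frame's energy inside a frequency band. The sketch below is illustrative only: the 1 to 4 kHz band edges are assumptions chosen for demonstration, and a real detector would feed features like this into a trained classifier rather than threshold them directly.

```python
# Illustrative sketch: a single spectral feature for cough detection.
# The 1-4 kHz band is an assumption for illustration; a real detector
# would combine such features with a machine-learned classifier.
import numpy as np

def band_energy_ratio(signal: np.ndarray, sample_rate: float,
                      low_hz: float = 1000.0, high_hz: float = 4000.0) -> float:
    """Fraction of the frame's spectral energy inside [low_hz, high_hz)."""
    windowed = signal * np.hanning(len(signal))       # window to reduce leakage
    power = np.abs(np.fft.rfft(windowed)) ** 2        # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    total = power.sum()
    if total == 0.0:
        return 0.0
    in_band = power[(freqs >= low_hz) & (freqs < high_hz)].sum()
    return float(in_band / total)
```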

The cough/sneeze detection unit 132 also detects at least one of a cough and a sneeze of a person in the predetermined space from the image. Since the camera 11 acquires a moving image, the cough/sneeze detection unit 132 can detect the motion pattern of the subject person using the features extracted by the image processing unit 131. For example, immediately before coughing or sneezing, a person performs characteristic motions such as covering the mouth with a hand or closing the eyes. The cough/sneeze detection unit 132 can therefore detect a cough or sneeze of a person in the predetermined space by detecting these characteristic motions.

The cough/sneeze detection unit 132 can use a motion pattern detected from the image captured by the camera 11. For example, it may identify the motion immediately before a cough or sneeze using a classifier trained by machine learning on the characteristic motions.

More simply, the cough/sneeze detection unit 132 may calculate the distance between the center of gravity position of the face and the center of gravity position of a hand extracted from the image, and determine whether or not that distance is equal to or less than a threshold value.

Fig. 2 is a diagram for explaining the first method of detecting a cough or sneeze of the subject person from the image in embodiment 1.

The cough/sneeze detection unit 132 determines whether or not the distance between the position of the face of the person included in the image and the position of one hand of the person included in the image is equal to or less than a threshold value, and detects a cough or sneeze when the distance is determined to be equal to or less than the threshold value.

First, the image processing unit 131 extracts a face region FR indicating the face of the subject person, a right-hand region RH indicating the right hand, and a left-hand region LH indicating the left hand from the image G1. The extracted face region FR, right-hand region RH, and left-hand region LH are rectangular. The image processing unit 131 then calculates the center of gravity positions of the face region FR, the right-hand region RH, and the left-hand region LH.

The cough/sneeze detection unit 132 determines whether or not the width fw of the face region FR, the distance r1 between the center of gravity position of the face region FR and the center of gravity position of the right-hand region RH, and the distance r2 between the center of gravity position of the face region FR and the center of gravity position of the left-hand region LH satisfy the following expression (1).

min(r1/fw, r2/fw) < 0.5 …… (1)

In the above equation (1), min () is a function that returns the minimum value among the set parameters. That is, the cough/sneeze detecting unit 132 compares the smaller value of r1/fw and r2/fw with 0.5.

When it is determined that the above expression (1) is satisfied, the cough/sneeze detection unit 132 determines that a person in the predetermined space has coughed or sneezed. On the other hand, when it is determined that expression (1) is not satisfied, the cough/sneeze detection unit 132 determines that the person in the predetermined space has neither coughed nor sneezed.
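Expression (1) can be implemented directly from the rectangular regions. The sketch below is one possible reading, assuming a hypothetical `(x, y, width, height)` rectangle convention that is not specified in the text.

```python
# Illustrative implementation of expression (1); the rectangle format
# (x, y, width, height) is a hypothetical convention, not from the text.
import math

def center_of_gravity(rect):
    """Center of gravity of a rectangular region (x, y, width, height)."""
    x, y, w, h = rect
    return (x + w / 2.0, y + h / 2.0)

def detect_by_face_hand_distance(face_rect, right_hand_rect, left_hand_rect,
                                 ratio_threshold=0.5):
    """Expression (1): min(r1/fw, r2/fw) < 0.5, where fw is the face width
    and r1, r2 are the face-to-hand center-of-gravity distances.
    Dividing by fw makes the test independent of the camera distance."""
    fw = face_rect[2]
    face_c = center_of_gravity(face_rect)
    r1 = math.dist(face_c, center_of_gravity(right_hand_rect))
    r2 = math.dist(face_c, center_of_gravity(left_hand_rect))
    return min(r1 / fw, r2 / fw) < ratio_threshold
```

Only the nearer hand matters, which matches the min() in expression (1): covering the mouth with either hand satisfies the test.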

The cough/sneeze detection unit 132 may determine whether or not the area of the mouth extracted from the image is equal to or smaller than a threshold value.

Fig. 3 is a diagram for explaining the second method of detecting a cough or sneeze of the subject person from the image in embodiment 1.

The cough/sneeze detection unit 132 may determine whether or not the area of the mouth of the person included in the image is equal to or smaller than a threshold value, and may detect a cough or sneeze when determining that the area is equal to or smaller than the threshold value.

First, the image processing unit 131 extracts a mouth region MR indicating the mouth of the subject from the image G2. The extracted mouth region MR is rectangular. Then, the image processing unit 131 calculates the area S(t) of the mouth region MR.

The cough/sneeze detection unit 132 determines whether or not the area S(t) of the mouth region MR is equal to or less than a threshold value. Specifically, the cough/sneeze detection unit 132 determines whether or not the area S(t) of the mouth region MR and the geometric mean S0 of the time-series values of the area of the mouth region MR satisfy the following expression (2).

S(t)/S0 < 0.2 …… (2)

When it is determined that the above expression (2) is satisfied, the cough/sneeze detection unit 132 determines that a person in the predetermined space has coughed or sneezed. On the other hand, when it is determined that expression (2) is not satisfied, the cough/sneeze detection unit 132 determines that the person in the predetermined space has neither coughed nor sneezed.
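Expression (2) can be sketched as follows, with S0 computed as the geometric mean over a hypothetical history of recent mouth-region areas; the shape and length of that history are assumptions.

```python
# Illustrative implementation of expression (2); the sliding history of
# mouth-region areas is a hypothetical input, not from the text.
import math

def geometric_mean(values):
    """Geometric mean of a sequence of positive areas."""
    if not values or any(v <= 0 for v in values):
        raise ValueError("values must be positive and non-empty")
    return math.exp(sum(math.log(v) for v in values) / len(values))

def detect_by_mouth_area(current_area, area_history, ratio_threshold=0.2):
    """Expression (2): S(t)/S0 < 0.2, where S0 is the geometric mean of
    the recent time series of mouth-region areas."""
    s0 = geometric_mean(area_history)
    return current_area / s0 < ratio_threshold
```

Using a ratio against the running geometric mean, rather than an absolute area, keeps the test insensitive to how large the mouth appears in the frame.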

Fig. 4 is a diagram showing an example of a time-series change in the area of the mouth of the subject person or the distance between the face and the hand of the subject person in embodiment 1.

As shown in fig. 4, the area S(t) of the mouth of the subject person, or the distance r(t) between the face and the hand of the subject person, becomes equal to or less than the threshold value at time t1. Therefore, the cough/sneeze detection unit 132 detects that the subject has coughed or sneezed at time t1.

In addition, the detection method may be switched according to the state of the subject person. For example, since the mouth of a person wearing a mask is hidden by the mask, detection may instead use a machine-learned classifier or the distance between the hand and the face. The memory 14 may store the extracted features and the detected motion patterns, and the control signal generating unit 134 may refer to this information as necessary.

In addition, when extracting the features of the person, the detected area of the mouth and the detected distance between the hand and the mouth change with the distance between the camera 11 and the person. Therefore, the cough/sneeze detection unit 132 may calculate the area of the mouth or the distance between the hand and the mouth using lengths normalized by, for example, the width of the face. By using normalized lengths, the cough/sneeze detection unit 132 can determine a cough or sneeze regardless of the positions of the camera 11 and the subject person. Further, a plurality of grid patterns of known size and position may be arranged in the predetermined space, and the image processing unit 131 may perform camera calibration based on the size and position of the grid patterns included in the image. Camera calibration allows the absolute position of the subject person in the predetermined space to be determined more accurately.
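The face-width normalization can be sketched as below. The 0.16 m average face width used for the rough metric estimate is an assumption for illustration, not a value from the text.

```python
# Illustrative sketch: normalize image-space lengths by the face width so
# the measurement does not depend on the camera-to-person distance. The
# 0.16 m average face width is an assumption, not a value from the text.
ASSUMED_FACE_WIDTH_M = 0.16

def normalize_by_face_width(length_px: float, face_width_px: float) -> float:
    """Express an image-space length in face-width units."""
    if face_width_px <= 0:
        raise ValueError("face width must be positive")
    return length_px / face_width_px

def estimate_metric_length(length_px: float, face_width_px: float) -> float:
    """Rough metric estimate under the assumed average face width."""
    return normalize_by_face_width(length_px, face_width_px) * ASSUMED_FACE_WIDTH_M
```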

Further, the airflow control device 1 may include a plurality of cameras. This allows a wide area to be captured without scanning by a single camera and makes camera calibration easier.

To improve the accuracy of detecting a cough or sneeze, the cough/sneeze detection unit 132 may detect that a person located in the predetermined space has coughed or sneezed based on both images and sounds. For example, the cough/sneeze detection unit 132 may detect a cough or sneeze when it determines both that the volume of the sound collected by the microphone 12 is equal to or greater than a threshold value and that the distance between the position of the face of the person and the position of one hand of the person in the image captured by the camera 11 is equal to or less than a threshold value. Using sound alone risks erroneous detection, and combining images and sound improves the accuracy of cough or sneeze detection. The memory 14 may store the detection result, and the control signal generation unit 134 may refer to this information as necessary.
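The combined criterion amounts to a logical AND of the two threshold tests. A minimal sketch, reusing the 70 dB and 0.5 example thresholds from earlier:

```python
# Illustrative sketch of the combined criterion: a cough or sneeze is
# reported only when both the audio and the image tests agree. The 70 dB
# and 0.5 thresholds reuse the example values given earlier in the text.
def detect_cough_or_sneeze(volume_db: float, face_hand_ratio: float,
                           volume_threshold_db: float = 70.0,
                           ratio_threshold: float = 0.5) -> bool:
    """AND of the two modalities: loud sound AND hand near the face."""
    loud_enough = volume_db >= volume_threshold_db
    hand_near_face = face_hand_ratio <= ratio_threshold
    return loud_enough and hand_near_face
```

Requiring both modalities suppresses false positives from either one alone, such as a loud door slam (sound only) or a person resting a hand on the chin (image only).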

In embodiment 1, the cough/sneeze detection unit 132 may detect that the subject person coughs or sneezes using the sound collected by the microphone 12 without using an image.

The person state determination unit 133 recognizes the state of the mouth of the person at the time of coughing or sneezing from the image acquired when the person is detected to have coughed or sneezed.

The person state determination unit 133 recognizes any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief or clothes (for example, sleeves of a jacket), and a state in which the mouth of the person is covered with a mask. The person state determination unit 133 recognizes the face orientation of the person at the time of coughing or sneezing from the image acquired when the person is detected to have coughed or sneezed. The person state determination unit 133 calculates the position coordinates of the person in the predetermined space from the image acquired when the person is detected to cough or sneeze.

The person state determination unit 133 recognizes the state of the subject person by referring to the image at the time the cough or sneeze was detected by the cough/sneeze detection unit 132. The state of the mouth of the subject is one of the following: a part of the face such as the mouth is covered with a hand at the time of the cough or sneeze; a part of the face such as the mouth is covered with a handkerchief or a jacket sleeve; the face is not covered at all; or a part of the face such as the mouth is covered with a mask. The control signal generator 134 calculates the airflow control mode of the airflow generation device 2 based on the state of the subject person.

For example, when the subject coughs or sneezes with the mouth covered by a hand, large droplets adhere to the hand and thus contribute little to droplet infection or air infection, while particles with small diameters, such as small droplets or droplet nuclei, tend to leak from the gaps between the fingers. Because the covering hand imposes a high pressure loss, these small particles remain around the subject person and are only gradually removed by indoor ventilation.

In addition, when the subject coughs or sneezes while wearing a mask, the droplets are almost entirely trapped by the filter layer of the mask. However, depending on how the mask is worn, fine particles with diameters of about 0.3 μm (micrometers), which are difficult for the filter layer to trap, tend to leak from gaps in the mask.

Therefore, when the subject coughs or sneezes with the mouth covered by a hand or while wearing a mask, viruses may be localized around the subject, and the localized viruses must be dispersed rapidly to prevent air infection. Since the position of the subject person can be recognized by image processing, in these cases the airflow generation device 2 controls the wind direction so as to send air toward the subject person. This enables rapid dispersal of the locally present viruses. Further, the airflow generation device 2 may control not only the wind direction but also the wind speed; controlling the wind speed based on the positional relationship between the subject person and the airflow generation device 2 enables more efficient airflow control.

When the subject coughs or sneezes without covering the mouth, droplets or droplet nuclei are scattered into the space at high speed by the cough airflow. It is statistically known that the initial velocity of a cough is about 10 m/s and that the cough lasts about 0.5 s; non-patent document 1 also uses 10 m/s as the initial velocity of a cough. In this case, the virus travels 1 to 1.5 m forward in approximately 5 to 10 seconds and then decelerates rapidly due to air resistance. Although it is difficult for the droplets or droplet nuclei to be dispersed by an airflow within the first 5 to 10 seconds after the cough or sneeze, once they have decelerated about 1 m in front of the subject, viruses remain localized there for several tens of seconds or more. Therefore, when the subject coughs or sneezes with the mouth uncovered, the wind direction is controlled so that air is blown toward a point about 1 to 1.5 m in front of the subject, which disperses the decelerated small droplets or droplet nuclei.

Even when the subject coughs or sneezes with the mouth uncovered, the direction in which the droplets fly changes depending on whether the face is facing forward or downward. When the subject coughs or sneezes facing forward with the mouth uncovered, the droplets or droplet nuclei reach 1 to 1.5 m ahead within about 5 to 10 seconds and then decelerate rapidly, as described above. Droplets with large particle sizes decelerate more slowly because of their inertia and reach farther than small droplets. When the subject coughs or sneezes facing downward with the mouth uncovered, the droplets or droplet nuclei stay at a low position in the room.

Therefore, the person state determination unit 133 determines the face orientation of the target person. By controlling the airflow according to the face orientation, air infection can be efficiently prevented. In this case, when there are a plurality of airflow generation devices 2, it is possible to more efficiently prevent air infection by using the airflow generation device 2 closest to the subject person.

In this way, the position where the droplets stay differs depending on the state of the mouth of the person and the orientation of the face of the person when the person coughs or sneezes.

The person state determination unit 133 classifies the state of the mouth of the person into a plurality of patterns by image processing on images before and after the time the cough or sneeze of the subject person was detected. For example, the person state determination unit 133 may perform the pattern classification with a machine-learned algorithm, which enables highly accurate classification.

Further, as a simple implementation, the person state determination unit 133 may determine the state of the mouth of the person with a plain image processing algorithm. For example, a Haar-like extractor can detect the face, mouth, and hands, and color extraction can detect a mask, a handkerchief, or a jacket sleeve. Using such a simple image processing algorithm avoids the supervised-learning workflow that machine learning requires, so the system can be installed easily.
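The simple rule-based classification might look like the following sketch. The boolean inputs stand in for the detector outputs (Haar-like face/mouth/hand detection and color extraction), and the priority order of the rules is an assumption, not something the text specifies.

```python
# Illustrative rule-based classifier for the mouth state. The boolean
# inputs stand in for detector outputs (Haar-like face/mouth/hand
# detection and color extraction); the rule priority is an assumption.
def classify_mouth_state(mouth_visible: bool, hand_near_face: bool,
                         mask_color_found: bool, cloth_color_found: bool) -> str:
    """Return one of the four mouth states named in the text."""
    if mask_color_found:
        return "covered_by_mask"
    if cloth_color_found:
        return "covered_by_cloth"    # handkerchief or jacket sleeve
    if hand_near_face and not mouth_visible:
        return "covered_by_hand"
    return "uncovered"
```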

In this way, the airflow control for suppressing air infection is performed after classifying the state of the subject person. The optimal control method differs depending on the type, number, and positional relationship of the airflow generation devices 2 installed in the room.

The equipment information storage unit 142 stores equipment information that associates the type information of each airflow generation device disposed in the predetermined space with the position information of that device in the predetermined space. The type information indicates whether the airflow generation device is an air conditioning apparatus with a cooling and/or heating function, an air cleaner with an air cleaning function, or a blower with an air blowing function. The position information is represented by, for example, coordinates in the predetermined space. The equipment information also makes it possible to determine how many airflow generation devices exist in the predetermined space.
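The equipment information could be represented, for instance, as a small table of typed, positioned devices. The device names, coordinates, and the nearest-device helper below are illustrative assumptions, not the patent's data format; the helper reflects the earlier remark that the airflow generation device closest to the subject person can be used.

```python
# Illustrative representation of the equipment information; the device
# names, coordinates, and helper functions are assumptions for this sketch.
import math
from dataclasses import dataclass

@dataclass
class AirflowDevice:
    device_type: str   # "air_conditioner", "air_cleaner", or "blower"
    position: tuple    # (x, y, z) coordinates in the predetermined space

DEVICE_TABLE = [
    AirflowDevice("air_conditioner", (0.0, 2.2, 1.5)),  # wall near the ceiling
    AirflowDevice("air_cleaner", (3.5, 0.0, 2.0)),      # on the floor
]

def devices_of_type(table, device_type):
    """All registered devices of the given type."""
    return [d for d in table if d.device_type == device_type]

def nearest_device(table, person_xyz):
    """Device closest to the subject person."""
    return min(table, key=lambda d: math.dist(d.position, person_xyz))
```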

The control signal generating unit 134 generates a control signal for controlling at least one of the wind direction and the air volume of the air sent from the airflow generating device 2 that generates the airflow in the predetermined space, based on the state of the mouth of the person recognized by the person state determining unit 133. The control signal generating unit 134 makes the wind direction of the air sent from the airflow generating device 2 different between the case where the face of the person is facing forward and the case where the face of the person is facing downward. The control signal generator 134 generates a control signal based on the state of the mouth of the person recognized by the person state determiner 133 and the position coordinates calculated by the person state determiner 133.

The airflow control table storage section 143 stores an airflow control table associating the state of the mouth of the person, the orientation of the face of the person, and the control content of the airflow generation device. The airflow control table associates the state of the subject at the time of coughing or sneezing with the control content of the airflow generation device for suppressing air infection in the predetermined space.

The control signal generating unit 134 acquires control contents corresponding to the state of the mouth of the person and the orientation of the face of the person identified by the person state determining unit 133 from the air flow control table stored in the air flow control table storage unit 143, and generates a control signal for controlling the air flow generating device 2 according to the acquired control contents.
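The lookup from recognized state to control content can be sketched as a table keyed on mouth state and face orientation. The key and action names below paraphrase the cases of fig. 5 for a single wall-mounted air conditioner and are not the patent's actual encoding.

```python
# Illustrative encoding of the 1st airflow control table (fig. 5) for a
# single wall-mounted air conditioner. The key and action names paraphrase
# the cases in the text and are not the patent's actual data format.
AIRFLOW_CONTROL_TABLE = {
    ("uncovered", "forward"):   {"action": "direct_wind", "target": "1m_ahead_of_face"},
    ("uncovered", "downward"):  {"action": "direct_wind", "target": "90_deg_down"},
    ("covered_by_hand", None):  {"action": "direct_wind", "target": "person"},
    ("covered_by_cloth", None): {"action": "powerful_operation", "target": None},
    ("covered_by_mask", None):  {"action": "direct_wind", "target": "person"},
}

def generate_control_signal(mouth_state: str, face_orientation: str) -> dict:
    """Look up the control content; only the 'uncovered' state
    distinguishes the face orientation, per fig. 5."""
    key = (mouth_state, face_orientation if mouth_state == "uncovered" else None)
    content = AIRFLOW_CONTROL_TABLE.get(key)
    if content is None:
        raise KeyError(f"no control content for {key}")
    return content
```

Keeping the table as data, separate from the lookup logic, mirrors the text's design: swapping in the 2nd table for an air cleaner only replaces the dictionary, not the code.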

The control signal generation unit 134 outputs the generated control signal to the communication unit 15. The communication unit 15 transmits the control signal generated by the control signal generation unit 134 to the airflow generation device 2.

In embodiment 1, the control content differs depending on the type and number of airflow generation devices. The following describes the airflow control table for each of three cases: the airflow control system includes one airflow generation device, which is an air conditioning apparatus; the airflow control system includes one airflow generation device, which is an air cleaner; and the airflow control system includes two airflow generation devices, an air conditioning apparatus and an air cleaner.

Fig. 5 is a diagram showing an example of the 1st airflow control table in the case where the airflow control system includes one airflow generation device and that device is an air conditioner. The air conditioner is mounted on a wall surface near the ceiling of the predetermined space and sends air downward from the horizontal direction.

First, as shown in fig. 5, when the mouth is not covered and the face is facing forward, the control content for controlling the wind direction such that the air is sent 1 meter ahead of the face is associated.

That is, when a cough or sneeze is detected while a part of the face such as the mouth is not covered and the face is facing forward, the droplets generated by the subject reach 1 to 1.5 m ahead of the face, in the direction the face is pointing, in about 5 seconds. The droplets with small particle diameters then decelerate due to air resistance and temporarily remain localized in that area. The airflow generating device 2 therefore controls the wind direction so that air is sent to a point 1 meter in front of the subject's face, dispersing the localized droplets and suppressing air infection.

Therefore, when the mouth is not covered and the face is facing forward, the control signal generating unit 134 generates a control signal for controlling the wind direction so as to send air to a point 1 meter in front of the subject's face. For example, when the airflow generating device 2 is an air conditioner and the air conditioner includes a louver, the airflow generating device 2 adjusts the angle of the louver so that air is delivered toward that point. This can suppress air infection.

Fig. 6 is a diagram showing an example of a simulation result of the wind speed distribution when, in a space in which the air conditioner and the air cleaner are arranged, the air conditioner is driven while the air cleaner is not, generating an airflow directed 30 degrees below the horizontal. The wind speed distribution shown in fig. 6 is the result of a simulation based on CFD (Computational Fluid Dynamics).

In fig. 6, an air conditioner 201 and an air cleaner 202 are disposed in a 20-tatami-mat space. The air conditioner 201 sends air 30 degrees downward from the horizontal. "COMSOL Multiphysics", commercially available finite element method simulation software, was used for the numerical calculation. As can be seen from fig. 6, by controlling the louver of the air conditioner 201, an airflow can be generated at a desired place in the space.

Next, as shown in fig. 5, when the mouth is not covered and the face is directed downward, a control content for controlling the wind direction so as to send air downward by 90 degrees is associated.

That is, when a part of the face such as the mouth is not covered at the time a cough or sneeze is detected and the face is facing downward, the droplets are localized at a low position in the room. In this case, a person with a height of 150 cm or more, such as an average adult, is less likely to be infected through the air, whereas a person of relatively low height, such as an elementary school child, or a person with low resistance is more likely to be infected. Since an air conditioner is usually installed near the ceiling of a room, its wind direction can be controlled to 90 degrees downward. The airflow generating device 2 therefore controls the wind direction so that air is sent 90 degrees downward from the horizontal, dispersing the droplets localized at a low position in the room and suppressing air infection.

Therefore, in a state where the mouth is not covered and the face is oriented downward, the control signal generating section 134 generates a control signal for controlling the wind direction of the airflow generating device 2 to be vertically downward. For example, in the case where the airflow generating device 2 is an air conditioner and the air conditioner is provided with a louver, the airflow generating device 2 adjusts the angle of the louver so that the wind direction of the air conditioner is controlled vertically downward. This makes it possible to generate a region having a high wind speed near the floor surface in the room, and to efficiently diffuse droplets locally present at a low position in the room.

Fig. 7 is a diagram showing an example of a simulation result of the wind speed distribution when, in a space in which the air conditioner and the air cleaner are arranged, the air conditioner is driven while the air cleaner is not, generating an airflow directed 90 degrees downward from the horizontal. The wind speed distribution shown in fig. 7 is the result of a simulation based on CFD.

In fig. 7, an air conditioner 201 and an air cleaner 202 are disposed in a 20-tatami-mat space. The air conditioner 201 sends air 90 degrees downward from the horizontal. "COMSOL Multiphysics", commercially available finite element method simulation software, was used for the numerical calculation. As can be seen from fig. 7, by sending air vertically downward from the air conditioner 201, a region of high wind speed can be generated at a height of about several tens of centimeters from the floor.

Next, as shown in fig. 5, in the case where the mouth is covered with the hand, a control content for controlling the wind direction so as to send air in the direction of the subject person is associated.

That is, when a part of the face such as the mouth of a subject is covered with a hand when a cough or sneeze is detected, scattering of droplets can be suppressed, but droplets are locally present around the subject. Then, the airflow generating device 2 can quickly diffuse the droplets locally present near the subject person by directing the wind direction toward the subject person, thereby suppressing air infection.

Therefore, in the case where the mouth is covered with the hand, the control signal generating unit 134 generates a control signal for controlling the wind direction so that the air is sent in the direction of the subject person. For example, when the airflow generating device 2 is an air conditioner and the air conditioner includes louvers, the airflow generating device 2 adjusts the angle of the louvers so that the airflow direction is controlled so that the air is sent in the direction of the subject person. This can suppress air infection.

Next, as shown in fig. 5, in a state where the mouth is covered with the handkerchief or the jacket sleeve, a control content for changing the operation mode to the powerful operation is associated.

That is, when a part of the face such as the mouth of the subject is covered with a handkerchief or a jacket sleeve at the time a cough or sneeze is detected, droplets adhere to the handkerchief or sleeve. Although this suppresses the scattering of the droplets, a part of the virus escapes the handkerchief or sleeve and spreads into the space. The airflow generation device 2 then changes the operation mode to the powerful operation for a predetermined time, dispersing the spread virus uniformly and suppressing air infection.

Therefore, in a state where the mouth is covered with a handkerchief or a jacket sleeve, the control signal generating unit 134 generates a control signal for changing the operation mode to the powerful operation. For example, when the airflow generating device 2 is an air conditioner, the airflow generating device 2 increases the speed of the sent air or increases the air volume. This distributes the indoor droplets uniformly and suppresses air infection.

Next, as shown in fig. 5, in the case where the mouth is covered with the mask, a control content for controlling the wind direction so as to send air in the direction of the subject person is associated.

That is, when the mask is worn by the subject when a cough or sneeze is detected, many droplets are trapped on the filter layer of the mask, and fine particles having a particle size of about 0.3 μm, which are difficult to be trapped on the filter layer, leak from the mask. Alternatively, when the mask is not worn correctly, the fine particles may leak from the gap of the mask. That is, the leaked droplets are locally present around the subject person. Then, the airflow generating device 2 can quickly disperse the droplets locally present around the subject person by directing the airflow toward the subject person, thereby suppressing air infection.

Therefore, in a state where the mouth is covered with the mask, the control signal generating unit 134 generates a control signal for controlling the wind direction so that the air is sent in the direction of the subject person. For example, when the airflow generating device 2 is an air conditioner and the air conditioner includes louvers, the airflow generating device 2 adjusts the angle of the louvers so that the airflow direction is controlled so that the air is sent in the direction of the subject person. This can suppress air infection.

Fig. 8 is a diagram showing an example of the 2nd airflow control table in the case where the airflow control system includes one airflow generating device and that device is an air cleaner. The air cleaner is placed on the floor in the space and sends the cleaned air from its upper part in a direction at or above the horizontal.

First, as shown in fig. 8, when the mouth is not covered and the face is facing forward, the control content for controlling the wind direction such that the air is sent 1 meter ahead of the face is associated.

That is, when a part of the face such as the mouth is not covered at the time of detection of a cough or sneeze and the face is facing the front, droplets having small particle diameters are locally present 1 to 1.5m ahead in the direction in which the face of the subject faces. Then, the airflow generating device 2 controls the wind direction so that the air is sent to the front 1 meter of the face of the subject person, thereby diffusing the locally present droplets and suppressing air infection.

Therefore, when the mouth is not covered and the face is oriented in the front direction, the control signal generating unit 134 generates a control signal for controlling the wind direction so as to send air to the front 1 meter of the face of the subject person. For example, when the airflow generating device 2 is an air cleaner and the air cleaner includes louvers, the airflow generating device 2 adjusts the angle of the louvers so that the airflow direction is controlled so that the air is delivered to the front 1 meter ahead of the face of the subject person. This can suppress air infection.

Fig. 9 is a diagram showing an example of a simulation result of the wind speed distribution when, in a space in which the air conditioner and the air cleaner are arranged, the air cleaner is driven while the air conditioner is not, generating an airflow directed 90 degrees upward from the horizontal. Fig. 10 is a diagram showing an example of a simulation result of the wind speed distribution when the air cleaner is driven in the same space, generating an airflow directed 45 degrees upward from the horizontal. The wind speed distributions shown in fig. 9 and 10 are the results of simulations based on CFD.

In fig. 9 and 10, an air conditioner 201 and an air cleaner 202 are disposed in a 20-tatami-mat room. "COMSOL Multiphysics", commercially available finite element method simulation software, is used for the numerical calculation. In fig. 9, the air cleaner 202 sends air vertically upward by controlling the louvers. In fig. 10, the air cleaner 202 controls the louvers to send air upward at 45 degrees from the horizontal direction. As can be seen from fig. 9 and 10, by controlling the wind direction with the louvers of the air cleaner 202, an airflow can be generated at a desired place in the space.

Next, as shown in fig. 8, the state in which the mouth is not covered and the face is directed downward is associated with the control content for changing the operation mode to the powerful operation.

That is, when a cough or sneeze is detected while a part of the face such as the mouth is not covered and the face is facing downward, the droplets are locally present at a low position in the room. The air cleaner is fixedly placed on the floor in the predetermined space, and in many air cleaners the direction of airflow control by the louvers is horizontal or above horizontal.

Therefore, in the case where the mouth is not covered and the face is oriented downward, and the airflow generating device 2 is an air cleaner, the control signal generating section 134 generates a control signal for changing the operation mode to the powerful operation. In the case where the airflow generating device 2 is an air cleaner, the airflow generating device 2 cannot control the direction of the wind vertically downward, and thus the operation mode is changed to the powerful operation. This allows the air flow to circulate throughout the room, and indirectly promotes the dispersion of the droplets. In addition, many air cleaners take in air from the lower portion or side of the main body. Therefore, by changing the operation mode to the powerful operation, more air is taken in from the lower part or the side surface of the air cleaner, and therefore, droplets locally present at a low position in the room can be efficiently collected or diffused.

Next, as shown in fig. 8, the case where the mouth is covered with a hand is associated with the control content for controlling the wind direction so that air is sent in the direction of the subject person. The case where the mouth is covered with a handkerchief or a jacket sleeve is associated with the control content for changing the operation mode to the powerful operation. The case where the mouth is covered with a mask is associated with the control content for controlling the wind direction so that air is sent in the direction of the subject person.

Note that, when a part of the face such as the mouth is covered with a hand, a handkerchief, or a jacket sleeve at the time of detection of a cough or sneeze, or when the subject person wears a mask, the control contents are the same as those in the case where the airflow control system includes one air conditioner, and therefore, the description thereof is omitted.
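The correspondences in fig. 8 can be modeled as a lookup table keyed on the recognized mouth state and face orientation. The key names and control-content strings below are illustrative stand-ins, since the actual table contents are defined by fig. 8:

```python
# Illustrative airflow control table (air cleaner case); keys and values are assumptions
AIRFLOW_CONTROL_TABLE = {
    ("uncovered", "front"): "send air 1 m ahead of the face direction",
    ("uncovered", "down"): "change operation mode to powerful operation",
    ("covered_hand", None): "send air toward the subject",
    ("covered_handkerchief_or_sleeve", None): "change operation mode to powerful operation",
    ("covered_mask", None): "send air toward the subject",
}

def lookup_control(mouth_state, face_orientation):
    """Resolve the control content for a recognized mouth state and face orientation."""
    key = (mouth_state, face_orientation)
    if key in AIRFLOW_CONTROL_TABLE:
        return AIRFLOW_CONTROL_TABLE[key]
    # Entries that do not depend on the face orientation use None as a wildcard
    return AIRFLOW_CONTROL_TABLE.get((mouth_state, None))
```

Swapping in a different table (the 1st, 2nd, or 3rd table) leaves the lookup unchanged, which matches how the later steps select a table before resolving the control content.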

Fig. 11 is a diagram showing an example of the 3rd airflow control table in the case where the airflow control system includes two airflow generation devices, the two airflow generation devices being an air conditioner and an air cleaner. The air conditioner is disposed on a wall surface near the ceiling within the predetermined space and sends air in a direction lower than the horizontal direction. The air cleaner is fixedly placed on the floor in the space and sends the cleaned air from its upper part in a direction higher than the horizontal direction.

In this case, the control content is selected from the options of the airflow control tables shown so far, in consideration of the distance between the subject person and each airflow generation device in addition to the state of the subject person.

First, as shown in fig. 11, the state in which the mouth is not covered and the face is facing the front is associated with the control content for controlling the wind direction so that air is sent from the airflow generation device closest to the subject person to a point 1 meter ahead of the face direction.

That is, when a cough or sneeze is detected in a state where a part of the face such as the mouth is not covered and the face is facing the front, the airflow generation device closest to the subject person is selected from the plurality of airflow generation devices, and the wind direction is controlled so that air is sent to a point 1 meter ahead of the face of the subject person by the louvers or the like of the selected airflow generation device. In this way, airborne infection can be suppressed at an early stage.

In this case, the control signal generating unit 134 generates a control signal based on the state of the mouth of the recognized person and the calculated position coordinates. Further, the control signal generating unit 134 selects an air flow generator to be controlled from among the plurality of air flow generators based on the calculated position coordinates.

Therefore, when the mouth is not covered and the face is facing the front, the control signal generating unit 134 selects the airflow generation device closest to the subject person from the plurality of airflow generation devices, and generates a control signal for controlling the wind direction so that air is sent from the selected airflow generation device to a point 1 meter ahead of the face direction of the subject person. The communication section 15 transmits the control signal to the selected airflow generation device.

Next, as shown in fig. 11, the state in which the mouth is not covered and the face is directed downward is associated with the control content for controlling the wind direction so that air is sent vertically downward (90 degrees below the horizontal) from the airflow generation device that is an air conditioner.

That is, when a part of the face such as the mouth is not covered and the face is directed downward when a cough or sneeze is detected, an air flow generator that is an air conditioner is selected from the plurality of air flow generators, and the air flow is controlled to be directed vertically downward by the louvers of the selected air flow generator. This makes it possible to diffuse droplets locally present at a low position in the room.

Therefore, in a case where the mouth is not covered and the face is directed downward, the control signal generating section 134 selects an airflow generating device that is an air conditioning apparatus from the plurality of airflow generating devices, and generates a control signal for controlling the wind direction of the selected airflow generating device to be vertically downward. The communication section 15 transmits a control signal to the selected airflow generation device.

In the case where there is no air conditioning equipment in the plurality of airflow generation devices and all of the plurality of airflow generation devices are air cleaners, the control signal generator 134 may select an airflow generation device closest to the subject person from the plurality of airflow generation devices and generate a control signal for changing the operation mode of the selected airflow generation device to the powerful operation.

Next, as shown in fig. 11, the case where the mouth is covered with a hand or with a mask is associated with the control content for controlling the wind direction so that air is sent in the direction of the subject person from the airflow generation device closest to the subject person.

That is, when the subject person covers a part of the face such as the mouth with a hand at the time of detection of a cough or sneeze, or when the subject person wears a mask, the droplets are locally present around the subject person. Accordingly, the airflow generation device closest to the subject person is selected from the plurality of airflow generation devices, and the wind direction is controlled by the louvers of the selected airflow generation device so that air is sent in the direction of the subject person. This makes it possible to quickly disperse the droplets locally present around the subject person.

Therefore, in a case where the mouth is covered with the hand or in a case where the mouth is covered with the mask, the control signal generating unit 134 selects an airflow generating device closest to the subject person from among the plurality of airflow generating devices, and generates a control signal for controlling the wind direction so that the air is sent from the selected airflow generating device in the direction of the subject person. The communication section 15 transmits a control signal to the selected airflow generation device.

Next, as shown in fig. 11, the state in which the mouth is covered with a handkerchief or a jacket sleeve is associated with the control content for changing the operation mode of the airflow generation device closest to the subject person to the powerful operation.

That is, when the subject person covers a part of the face such as the mouth with a handkerchief or a jacket sleeve at the time of detection of a cough or sneeze, the operation mode of the airflow generation device closest to the subject person is changed to the powerful operation. This enables efficient removal of the droplets.

Therefore, when the mouth is covered with a handkerchief or a jacket sleeve, the control signal generating unit 134 selects the airflow generation device closest to the subject person from the plurality of airflow generation devices, and generates a control signal for changing the operation mode of the selected airflow generation device to the powerful operation. The communication section 15 transmits the control signal to the selected airflow generation device. For example, the airflow generation device adjusts the wind speed so that the speed of the sent air becomes faster, or adjusts the air volume so that more air is sent.

The subject person may move around in the room, and the airflow generation device closest to the subject person may change over time. In this case, the control signal generating unit 134 may calculate the distance between the subject person and each of the plurality of airflow generation devices at regular intervals, select the airflow generation device closest to the subject person, and change the operation mode of the selected airflow generation device. This makes it possible to efficiently disperse the droplets in accordance with the movement of the subject person.
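The periodic nearest-device selection described above can be sketched as a simple Euclidean-distance search over the registered device positions; the function name and coordinate form are assumptions:

```python
import math

def nearest_device_index(subject_pos, device_positions):
    """Index of the airflow generation device closest to the subject person.

    Intended to be called at regular intervals so that the controlled device
    tracks the subject person as they move around the room.
    """
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return min(range(len(device_positions)),
               key=lambda i: dist(subject_pos, device_positions[i]))
```

The caller would re-run this on a timer and redirect the powerful-operation command whenever the returned index changes.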

The 1st, 2nd, and 3rd airflow control tables described in embodiment 1 are merely examples. The 3rd airflow control table can be used not only for an airflow control system including one air conditioner and one air cleaner, but also for an airflow control system including a plurality of air conditioners and for an airflow control system including a plurality of air cleaners.

Next, the airflow generating device 2 shown in fig. 1 will be explained.

The airflow generating device 2 generates an airflow in a predetermined space. The airflow generating device 2 is, for example, an air conditioner or an air cleaner. The airflow generating device 2 may also be an air curtain or a direct current fan (DC fan) installed indoors so as to generate a specific airflow pattern. By carefully designing the installation position of the airflow generating device 2 in this manner, the airflow control can be performed more easily. Further, the airflow control system may include a plurality of airflow generation devices, whereby more complicated airflow control can be performed.

The airflow generation device 2 includes a communication unit 21, a processor 22, a memory 23, an airflow generation unit 24, and an airflow direction changing unit 25.

The communication unit 21 communicates with the airflow control device 1 and receives a control signal transmitted from the airflow control device 1. The control signal mainly includes an instruction to change the wind direction or the wind volume of the air sent from the airflow generating device 2, but may include an instruction to turn on the power supply of the airflow generating device 2 that is not energized.

Further, the communication unit 21 may transmit the position of the air flow generating device 2 to the air flow control device 1. In this way, not only can the positional relationship between the subject person and the airflow generating device 2 be flexibly used in the calculation of the airflow control, but also, when there are a plurality of airflow generating devices 2, the airflow control can be performed more efficiently by controlling the airflow generating device 2 closest to the subject person.

The processor 22 includes an airflow control unit 221. The airflow control unit 221 controls the airflow generating unit 24 and the airflow direction changing unit 25 in accordance with the control signal received by the communication unit 21.

The memory 23 is, for example, a semiconductor memory, and stores various kinds of information. When the operation mode of the airflow generation device 2 is to be temporarily changed, the airflow control unit 221 stores the control parameters in the operation mode before the change in the memory 23. When the operation mode of the airflow generation device 2 is to be returned to the operation mode before the change, the airflow control unit 221 reads the control parameter before the change stored in the memory 23 and changes the control parameter to the read control parameter.

The airflow generating unit 24 is, for example, a fan motor, and sends air into a predetermined space. In the case where the airflow generating device 2 is an air conditioner, the airflow generating unit 24 may send warm air or cold air generated by the refrigerant into a predetermined space, or may send the taken-in air as it is. When the airflow generating device 2 is an air cleaner, the airflow generating unit 24 sends the cleaned air into a predetermined space.

The airflow direction changing unit 25 controls the direction of the airflow generated by the airflow generating unit 24. The airflow direction changing unit 25 is, for example, a louver, and changes the direction of the air sent from the airflow generating unit 24 by adjusting the orientation of the louver.

Next, the operation of the airflow control device 1 according to embodiment 1 will be described.

Fig. 12 is a 1 st flowchart for explaining the operation of the airflow control device in embodiment 1, and fig. 13 is a 2 nd flowchart for explaining the operation of the airflow control device in embodiment 1.

First, in step S1, the processor 13 determines whether or not the airflow control device 1 is powered on. Here, if it is determined that the power supply of the airflow control device 1 is off (no in step S1), the process ends.

On the other hand, when it is determined that the airflow control device 1 is powered on (yes in step S1), the camera 11 captures an image of the inside of the predetermined space in step S2. The camera 11 stores the captured image in the image storage unit 141. The camera 11 also stores the moving image in the image storage unit 141.

Next, in step S3, the image processing unit 131 acquires an image from the image storage unit 141.

Next, in step S4, the image processing unit 131 extracts the feature of the subject person from the image. Here, the feature of the subject person refers to, for example, the face, eyes, mouth, right hand, left hand, clothes, and mask of the subject person. Further, the image processing unit 131 also detects the center of gravity position of each feature.

Next, in step S5, the cough/sneeze detection unit 132 acquires a sound from the microphone 12.

Next, in step S6, the cough/sneeze detection unit 132 determines whether or not the subject in the predetermined space has coughed or sneezed. Here, the cough/sneeze detection unit 132 calculates the 1 st distance between the center of gravity position of the face extracted from the image and the center of gravity position of the right hand, and calculates the 2 nd distance between the center of gravity position of the face extracted from the image and the center of gravity position of the left hand. The cough/sneeze detector 132 determines whether the shorter of the 1 st distance and the 2 nd distance is equal to or less than a threshold value. When determining that the shorter of the 1 st distance and the 2 nd distance is equal to or less than the threshold value, the cough/sneeze detection unit 132 determines whether or not the volume of the sound acquired from the microphone 12 is equal to or more than the threshold value. The cough/sneeze detection unit 132 determines that the subject person in the predetermined space has coughed or sneezed when the shorter of the 1 st distance and the 2 nd distance is determined to be equal to or less than the threshold and the sound volume is equal to or more than the threshold. When it is determined that the shorter of the 1 st distance and the 2 nd distance is longer than the threshold value or that the sound volume is smaller than the threshold value, the cough/sneeze detection unit 132 determines that the subject person in the predetermined space is not detected coughing and that the subject person in the predetermined space is not detected sneezing.
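The determination in step S6 can be sketched as follows, assuming 2-D centroid coordinates; the threshold values are illustrative assumptions, as the disclosure does not specify them:

```python
import math

def detect_cough_or_sneeze(face_c, right_hand_c, left_hand_c, volume,
                           dist_threshold=0.25, volume_threshold=60.0):
    """Detect a cough or sneeze per step S6: the shorter of the face-to-hand
    distances must be at or below the distance threshold AND the sound volume
    must be at or above the volume threshold. Thresholds are illustrative."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    d1 = dist(face_c, right_hand_c)   # 1st distance: face centroid to right hand
    d2 = dist(face_c, left_hand_c)    # 2nd distance: face centroid to left hand
    return min(d1, d2) <= dist_threshold and volume >= volume_threshold
```

Dropping the `volume` condition yields the image-only variant used by the cough/sneeze detection unit 132A in embodiment 2.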

Here, if it is determined that the subject person in the predetermined space is not detected to cough and the subject person in the predetermined space is not detected to sneeze (no in step S6), the process returns to step S1.

On the other hand, when it is determined that the subject person in the predetermined space has coughed or sneezed (yes in step S6), in step S7, the person state determination unit 133 acquires an image of the image at the time when the subject person in the predetermined space has been detected to have coughed or sneezed from the image storage unit 141.

Next, in step S8, the person state determination unit 133 identifies the state of the mouth of the subject person when the subject person coughs or sneezes. Here, the person state determination unit 133 identifies which state of the mouth of the subject person is one of a state in which the mouth of the subject person is not covered, a state in which the mouth of the subject person is covered with hands, a state in which the mouth of the subject person is covered with handkerchiefs, a state in which the mouth of the subject person is covered with jacket sleeves, and a state in which the mouth of the subject person is covered with a mask, based on an image at a time point when the subject person in the predetermined space is detected to cough or sneeze.

The person state determination unit 133 may recognize the state of the mouth of the subject person from not only the image at the time point when the cough or sneeze is detected but also the images before and after the time point when the cough or sneeze is detected.

Next, in step S9, the person state determination unit 133 recognizes the orientation of the face of the subject person when the subject person coughs or sneezes, based on the image at the time point when the subject person in the predetermined space is detected to cough or sneeze. At this time, the person state determination unit 133 determines whether the face of the subject person faces the front or downward when the subject person coughs or sneezes.

Next, in step S10, the person state determination unit 133 recognizes the position of the subject person in the predetermined space when the subject person coughs or sneezes, based on the image at the time point when the subject person in the predetermined space is detected to cough or sneeze.

Next, in step S11, the control signal generating unit 134 reads the device information from the device information storage unit 142. The device information contains the type information of the airflow generation device 2 present in the predetermined space and the position information of the airflow generation device 2 in the predetermined space. When a plurality of airflow generation devices are present in the predetermined space, the device information includes the type information of each of the plurality of airflow generation devices 2 and the position information of each airflow generation device 2 in the predetermined space.

Next, in step S12, the control signal generating unit 134 determines whether or not a plurality of airflow generation devices are present in the predetermined space based on the device information. Here, when it is determined that a plurality of airflow generation devices are not present in the predetermined space, that is, when it is determined that one airflow generation device is present in the predetermined space (no in step S12), in step S13, control signal generation unit 134 determines whether or not the type of airflow generation device is an air conditioner.

Here, when it is determined that the type of the airflow generation device is an air conditioner (yes in step S13), in step S14, the control signal generating unit 134 reads, from the airflow control table storage section 143, the 1st airflow control table used when the airflow generation device is one air conditioner.

On the other hand, in the case where it is determined that the type of the air flow generating device is not the air conditioning apparatus, that is, in the case where it is determined that the type of the air flow generating device is the air cleaner (no in step S13), in step S15, the control signal generating section 134 reads the 2 nd air flow control table used in the case where the air flow generating device is one air cleaner from the air flow control table storage section 143.

Further, when it is judged in step S12 that a plurality of airflow generation devices exist in the predetermined space (YES in step S12), in step S16, the control signal generation section 134 reads the 3 rd airflow control table used in the case where the airflow generation devices are one air conditioner and one air purifier from the airflow control table storage section 143.

Next, in step S17, the control signal generating unit 134 refers to the 1 st, 2 nd, or 3 rd air flow control table, and determines the control content corresponding to the state of the mouth of the subject person recognized by the person state determining unit 133 and the orientation of the face of the subject person.

Next, in step S18, the control signal generating unit 134 generates a control signal based on the determined control content. For example, when the control content for controlling the wind direction so that air is sent to a point 1 meter ahead of the face direction is determined, the control signal generating unit 134 specifies the position 1 meter ahead of the face direction of the subject person, calculates the wind direction from the position of the airflow generating device 2 to the specified position, and generates a control signal for sending air in the calculated wind direction. When the control content for controlling the wind direction so as to send air in the direction of the subject person is determined, the control signal generating unit 134 calculates the wind direction from the position of the airflow generating device 2 toward the position of the subject person, and generates a control signal for sending air in the calculated wind direction.

In addition, when the control content for controlling the wind direction so as to send the air downward 90 degrees is determined, the control signal generating unit 134 generates a control signal for sending the air downward 90 degrees. When the control content for changing the operation mode to the power operation is determined, the control signal generator 134 generates a control signal for changing the operation mode to the power operation.

When the control content for controlling the wind direction so that air is sent from the airflow generation device closest to the subject person to a point 1 meter ahead of the face direction is determined, the control signal generating unit 134 selects the airflow generation device closest to the subject person from among the plurality of airflow generation devices. At this time, the control signal generating unit 134 calculates the distance between the position of the subject person and each of the plurality of airflow generation devices, and selects the airflow generation device having the shortest calculated distance as the airflow generation device closest to the subject person. The control signal generating unit 134 then specifies the position 1 meter ahead of the face direction of the subject person, calculates the wind direction from the position of the closest airflow generation device to the specified position, and generates a control signal for sending air in the calculated wind direction.

When the control content for controlling the wind direction is determined so that air is sent from the airflow generating device closest to the subject person in the direction of the subject person, the control signal generating unit 134 selects the airflow generating device closest to the subject person from the plurality of airflow generating devices. At this time, the control signal generating unit 134 calculates the distance between the position of the subject person and each of the plurality of airflow generation devices, and selects the airflow generation device having the shortest calculated distance as the airflow generation device closest to the subject person. Then, the control signal generating unit 134 calculates the wind direction from the position of the airflow generating device closest to the subject person toward the position of the subject person, and generates a control signal for sending air in the calculated wind direction.

When the control content for changing the operation mode of the airflow generation device closest to the subject to the powerful operation is determined, the control signal generation unit 134 selects the airflow generation device closest to the subject from the plurality of airflow generation devices. Then, the control signal generating unit 134 generates a control signal for changing the operation mode of the airflow generation device closest to the subject person to the powerful operation.
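The control-signal generation of step S18 for the multi-device case might be sketched as follows; the content identifiers and signal fields are assumptions for illustration, and `face_dir` is taken to be a unit vector so that the target lands 1 meter ahead of the face:

```python
import math

def generate_control_signal(content, subject_pos, face_dir, devices):
    """devices: list of dicts like {"id": ..., "pos": (x, y, z)}.

    Selects the device closest to the subject person and builds a signal
    matching the determined control content (illustrative identifiers).
    """
    nearest = min(devices, key=lambda d: math.dist(subject_pos, d["pos"]))
    if content == "send_1m_ahead_from_nearest":
        # Target the point 1 meter ahead of the face direction
        target = tuple(s + f for s, f in zip(subject_pos, face_dir))
        vec = tuple(t - p for t, p in zip(target, nearest["pos"]))
        return {"device": nearest["id"], "wind_direction": vec}
    if content == "send_toward_subject_from_nearest":
        vec = tuple(s - p for s, p in zip(subject_pos, nearest["pos"]))
        return {"device": nearest["id"], "wind_direction": vec}
    if content == "powerful_operation_nearest":
        return {"device": nearest["id"], "mode": "powerful"}
    raise ValueError("unknown control content: " + content)
```

The communication unit would then transmit the returned signal to the device named in the `"device"` field, matching step S19.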

Next, in step S19, the communication unit 15 transmits the control signal generated by the control signal generation unit 134 to the airflow generation device 2. At this time, when a plurality of airflow generation devices exist in the predetermined space, the communication unit 15 transmits a control signal to an airflow generation device selected from the plurality of airflow generation devices when generating the control signal.

The control signal may include a change duration indicating a time for changing the control content of the airflow generation device 2. The change duration is a time for changing the control parameter of the airflow generation device 2 in accordance with the control signal, and the same change duration may be used for all the control contents, or a table in which the change duration is associated with each control content may be prepared, and the change duration may be determined for each control content.
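The optional per-content change-duration table mentioned above could look like this; the duration values are illustrative assumptions, not taken from the disclosure:

```python
# Change durations (seconds) associated with each control content; values are illustrative
CHANGE_DURATION_S = {
    "send_1m_ahead": 120,
    "send_toward_subject": 120,
    "powerful_operation": 300,
}
DEFAULT_DURATION_S = 180  # fallback when the same duration is used for all contents

def change_duration_for(content):
    """Duration to keep the changed control parameters before restoring them."""
    return CHANGE_DURATION_S.get(content, DEFAULT_DURATION_S)
```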

In embodiment 1, the person state determination unit 133 identifies which of the state of the mouth of the subject person is a state in which the mouth of the subject person is not covered, a state in which the mouth of the subject person is covered with hands, a state in which the mouth of the subject person is covered with handkerchiefs, a state in which the mouth of the subject person is covered with jacket sleeves, and a state in which the mouth of the subject person is covered with a mask, based on the image at the time point when the subject person in the predetermined space is detected to cough or sneeze. The person state determination unit 133 may recognize, from the image at the time point when the subject person in the predetermined space is detected to cough or sneeze, which of the state in which the mouth of the subject person is not covered and the state in which the mouth of the subject person is covered with the hand.

The person state determination unit 133 may identify, based on the image at the time point when the subject person in the predetermined space is detected to cough or sneeze, which of a state in which the mouth of the subject person is not covered, a state in which the mouth of the subject person is covered with the hand, and a state in which the mouth of the subject person is covered with the mask. The person state determination unit 133 may identify, based on the image at the time point when the subject person in the predetermined space is detected to cough or sneeze, which of the state in which the mouth of the subject person is not covered, the state in which the mouth of the subject person is covered with the hand, the state in which the mouth of the subject person is covered with the handkerchief, and the state in which the mouth of the subject person is covered with the mask.

Next, the operation of the airflow generating device 2 in embodiment 1 will be described.

Fig. 14 is a flowchart for explaining the operation of the airflow generation device in embodiment 1.

First, in step S21, the processor 22 determines whether or not the power supply of the airflow generation device 2 is turned on. Here, if it is determined that the power supply of the airflow generation device 2 is off (no in step S21), the process ends.

On the other hand, when determining that the airflow generation device 2 is powered on (yes in step S21), the airflow control unit 221 determines whether or not the communication unit 21 has received the control signal in step S22. Here, when it is determined that the control signal has not been received (no in step S22), the process returns to step S21.

On the other hand, when determining that the control signal has been received (yes in step S22), in step S23, the airflow controller 221 stores the current control parameter in the memory 23. The control parameters include, for example, an operation mode, a set temperature, a wind direction, and a wind volume.

Next, in step S24, the airflow control unit 221 controls the airflow generated from the airflow generating unit 24 based on the control signal received by the communication unit 21. That is, airflow control unit 221 instructs airflow generating unit 24 to send air at the air volume indicated by the control signal, and instructs airflow direction changing unit 25 to change the airflow direction indicated by the control signal.

Next, in step S25, the airflow control unit 221 determines whether or not the change duration included in the control signal has elapsed. If it is determined that the change duration has not elapsed (no in step S25), the determination process in step S25 is repeatedly executed.

On the other hand, when determining that the change duration has elapsed (yes in step S25), in step S26, the airflow control unit 221 reads the control parameter stored in the memory 23.

Next, in step S27, the airflow control unit 221 changes the control parameter to the read control parameter.
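The device-side flow of steps S23 to S27 can be sketched as follows; the class and field names are illustrative, and the `sleep` parameter is injected only so the sketch can be exercised without real waiting:

```python
import time

class AirflowDevice:
    """Minimal sketch of the airflow generation device 2 control loop."""

    def __init__(self):
        # Control parameters per the description: operation mode, wind direction, air volume
        self.params = {"mode": "normal", "wind_direction": "auto", "air_volume": 2}

    def handle_control_signal(self, signal, sleep=time.sleep):
        saved = dict(self.params)              # S23: store the current control parameters
        self.params.update(signal["changes"])  # S24: control the airflow per the signal
        sleep(signal["duration_s"])            # S25: wait until the change duration elapses
        self.params = saved                    # S26-S27: restore the saved parameters
```

This mirrors how the airflow control unit 221 uses the memory 23 to return to the pre-change operation mode once the change duration has elapsed.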

In this way, the state of the mouth of the person at the time of coughing or sneezing of the person is recognized from the image acquired when the person is detected to have coughed or sneezed in the predetermined space, and a control signal for controlling at least one of the wind direction and the air volume of the air sent from the air flow generating device that generates the air flow in the predetermined space is generated based on the recognized state of the mouth of the person. Therefore, by generating an air flow at a local area where droplets are generated by the person's cough or sneeze, the droplets can be diffused and the concentration of the droplets can be made uniform, and thus the risk of infection in a predetermined space where a cough or sneeze is detected can be reduced.

(embodiment mode 2)

In embodiment 1, the airflow control device includes a camera and a microphone and detects a cough or sneeze of the subject person based on the image and the sound, whereas in embodiment 2, the airflow control device includes a camera but no microphone and detects a cough or sneeze of the subject person based on the image rather than the sound.

Fig. 15 is a diagram showing the configuration of an airflow control system according to embodiment 2 of the present disclosure. The airflow control system shown in fig. 15 includes an airflow control device 1A and an airflow generating device 2. In embodiment 2, the same components as those in embodiment 1 are denoted by the same reference numerals, and detailed description thereof is omitted.

The airflow control device 1A controls the airflow in a predetermined space. The airflow control device 1A is disposed on a wall or a ceiling in a predetermined space. The airflow control device 1A is connected to the airflow generation device 2 via a network so as to be able to communicate with each other.

The airflow control device 1A includes a camera 11, a processor 13A, a memory 14, and a communication unit 15.

The processor 13A includes an image processing unit 131, a cough/sneeze detecting unit 132A, a person state determining unit 133, and a control signal generating unit 134. The memory 14 is, for example, a semiconductor memory, and includes an image storage unit 141, a device information storage unit 142, and an airflow control table storage unit 143.

The cough/sneeze detection unit 132A detects coughing or sneezing by a person in the predetermined space. In embodiment 2, the cough/sneeze detection unit 132A detects at least one of a cough and a sneeze of a person in the predetermined space based on the image, without using sound. Note that the method of detecting at least one of a cough and a sneeze of a person present in the predetermined space from the image is the same as that of embodiment 1.

That is, the cough/sneeze detecting unit 132A determines whether or not the distance between the position of the face of the person included in the image and the position of one hand of the person included in the image is equal to or less than a threshold value, and detects at least one of a cough and a sneeze when the distance is determined to be equal to or less than the threshold value. More specifically, the cough/sneeze detection unit 132A calculates the 1 st distance between the center of gravity position of the face extracted from the image and the center of gravity position of the right hand, and calculates the 2 nd distance between the center of gravity position of the face extracted from the image and the center of gravity position of the left hand. The cough/sneeze detection unit 132A determines whether the shorter of the 1 st distance and the 2 nd distance is equal to or less than a threshold value. When the shorter of the 1 st distance and the 2 nd distance is determined to be equal to or less than the threshold value, the cough/sneeze detection unit 132A determines that the subject in the predetermined space has coughed or sneezed. When it is determined that the shorter of the 1 st distance and the 2 nd distance is longer than the threshold value, the cough/sneeze detection unit 132A determines that the subject person in the predetermined space is not detected to cough and the subject person in the predetermined space is not detected to sneeze.

The cough/sneeze detecting unit 132A may determine whether or not the area of the mouth of the person included in the image is equal to or smaller than a threshold value, and may detect a cough or a sneeze when determining that the area is equal to or smaller than the threshold value.
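The face-to-hand distance test described above can be sketched as follows. The centroid coordinates are assumed to have already been extracted by the image processing unit, and the threshold value is an illustrative placeholder, as the specification does not give a concrete number:

```python
import math

def detect_cough_or_sneeze(face_c, right_hand_c, left_hand_c,
                           dist_threshold=0.25):
    """Sketch of the image-based detection: compute the 1st distance
    (face centroid to right-hand centroid) and the 2nd distance (face
    centroid to left-hand centroid), and report a cough or sneeze when
    the shorter of the two is equal to or less than the threshold."""
    d1 = math.dist(face_c, right_hand_c)  # 1st distance
    d2 = math.dist(face_c, left_hand_c)   # 2nd distance
    return min(d1, d2) <= dist_threshold

# Hand raised near the face -> a cough or sneeze is detected.
print(detect_cough_or_sneeze((0.5, 0.4), (0.55, 0.45), (0.9, 0.8)))  # True
# Both hands far from the face -> nothing is detected.
print(detect_cough_or_sneeze((0.5, 0.4), (0.1, 0.9), (0.9, 0.8)))    # False
```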

Next, the operation of the airflow control device 1A in embodiment 2 will be described.

Fig. 16 is a 1 st flowchart for explaining the operation of the airflow control device in embodiment 2, and fig. 17 is a 2 nd flowchart for explaining the operation of the airflow control device in embodiment 2.

The processing of steps S31 to S34 shown in fig. 16 is the same as the processing of steps S1 to S4 shown in fig. 12, and thus detailed description is omitted.

Next, in step S35, the cough/sneeze detection unit 132A determines whether or not the subject person in the predetermined space has coughed or sneezed. Here, the cough/sneeze detecting unit 132A calculates the 1 st distance between the center of gravity position of the face extracted from the image and the center of gravity position of the right hand, and calculates the 2 nd distance between the center of gravity position of the face extracted from the image and the center of gravity position of the left hand. The cough/sneeze detection unit 132A determines whether the shorter of the 1 st distance and the 2 nd distance is equal to or less than a threshold value. When the shorter of the 1 st distance and the 2 nd distance is determined to be equal to or less than the threshold value, the cough/sneeze detection unit 132A determines that the subject in the predetermined space has coughed or sneezed. When it is determined that the shorter of the 1 st distance and the 2 nd distance is longer than the threshold value, the cough/sneeze detection unit 132A determines that the subject person in the predetermined space is not detected to cough and the subject person in the predetermined space is not detected to sneeze.

Here, if it is determined that the subject person in the predetermined space is not detected to cough and the subject person in the predetermined space is not detected to sneeze (no in step S35), the process returns to step S31.

On the other hand, when it is determined that the subject person in the predetermined space has coughed or sneezed (yes in step S35), in step S36, the person state determination unit 133 acquires, from the image storage unit 141, the image captured at the time when the subject person in the predetermined space was detected to have coughed or sneezed.

The processing of steps S37 to S48 shown in fig. 17 is the same as the processing of steps S8 to S19 shown in fig. 13, and thus detailed description is omitted.

In this way, using the image from the camera 11 that captures the image of the inside of the predetermined space, it is possible to detect that the person in the predetermined space coughs or sneezes. This can simplify the structure of the airflow control device 1A, and can reduce the cost of the airflow control device 1A.

(embodiment mode 3)

In embodiment 1, the airflow control device includes a camera and a microphone, but in embodiment 3, the airflow control device includes neither a camera nor a microphone and is instead connected to an external camera and microphone so as to be able to communicate with them.

Fig. 18 is a diagram showing the configuration of an airflow control system according to embodiment 3 of the present disclosure. The airflow control system shown in fig. 18 includes an airflow control device 1B, an airflow generation device 2, a camera 3, and a microphone 4. In embodiment 3, the same components as those in embodiment 1 are denoted by the same reference numerals, and detailed description thereof is omitted.

The microphone 4 is disposed in a predetermined space. The microphone 4 and the camera 3 are connected so as to be able to communicate with each other via a network. The microphone 4 includes a sound collection unit 41, a processor 42, and a communication unit 43.

The sound collection unit 41 collects sound in a predetermined space and outputs the collected sound to the processor 42.

The processor 42 includes a cough/sneeze detection unit 421. The cough/sneeze detection unit 421 detects coughing or sneezing by a person in the predetermined space, using the sound collected by the microphone 4.

For example, the cough/sneeze detection unit 421 determines whether or not the volume of the sound collected by the sound collection unit 41 is equal to or greater than a threshold value. When determining that the volume is equal to or greater than the threshold value, the cough/sneeze detection unit 421 determines that the person in the predetermined space has coughed or sneezed. As the threshold value, for example, 70 dB may be used.

The cough/sneeze detecting unit 421 may perform spectral analysis of the sound collected by the sound collecting unit 41, and detect a cough or a sneeze based on the analysis result by an algorithm such as machine learning. In this case, since the detection can be performed using a spectrum pattern unique to coughing or sneezing, the detection accuracy is improved.
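The volume-threshold check described above can be sketched as follows. The RMS-based dB convention and the threshold value are assumptions for illustration; the specification only states that a volume threshold such as 70 dB may be used, without fixing how the level is computed:

```python
import math

def sound_level_db(samples):
    """Root-mean-square level of normalized audio samples, expressed in
    dB relative to full scale (an assumed convention, not from the
    specification)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12))

def detect_by_volume(samples, threshold_db=-20.0):
    # A cough or sneeze is assumed when the collected sound reaches
    # or exceeds the threshold level.
    return sound_level_db(samples) >= threshold_db

loud = [0.5, -0.5] * 100      # RMS 0.5  -> about -6 dB
quiet = [0.001, -0.001] * 100  # RMS 0.001 -> about -60 dB
print(detect_by_volume(loud), detect_by_volume(quiet))  # True False
```

A spectral-analysis variant, as suggested above, would replace the RMS level with spectrogram features fed to a trained classifier, so that the cough- or sneeze-specific spectrum pattern is exploited rather than loudness alone.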

When the cough/sneeze detection unit 421 detects that a person in the predetermined space has coughed or sneezed, the communication unit 43 transmits to the camera 3 a cough/sneeze detection signal indicating that a cough or sneeze of a person in the predetermined space has been detected.

The camera 3 is provided on a ceiling or a wall in a predetermined space. The camera 3, the airflow control device 1B, and the microphone 4 are connected to be able to communicate with each other via a network. The camera 3 includes an imaging unit 31, a processor 32, a memory 33, and a communication unit 34.

The imaging unit 31 is, for example, an imaging element, and captures an image in a predetermined space and outputs the captured image to the memory 33.

The processor 32 includes an image processing unit 321, a cough/sneeze determination unit 322, and a person state determination unit 323.

The memory 33 is, for example, a semiconductor memory, and includes an image storage unit 331. The image storage unit 331 stores the images of the predetermined space captured by the imaging unit 31.

The image processing unit 321 acquires an image obtained by imaging the predetermined space from the image storage unit 331. The image processing unit 321 performs image processing on the acquired image to extract features of the person such as the face, nose, mouth, hands, clothes, the presence or absence of a mask, and the position of the subject person in the room. The image processing unit 321 may use machine learning or deep learning for feature extraction, or may use a widely known feature extractor, such as a Haar-like feature extractor for face detection or the like.

The image processing unit 321 has the same function as the image processing unit 131 in embodiment 1.

The communication unit 34 receives the cough/sneeze detection signal transmitted from the microphone 4.

The cough/sneeze determination unit 322 determines that the person has coughed or sneezed in the predetermined space when the communication unit 34 receives the cough/sneeze detection signal.

The person state determination unit 323 recognizes the state of the mouth of the person at the time of coughing or sneezing from the image acquired when the person is detected to have coughed or sneezed.

The person state determination unit 323 recognizes the state of the mouth of the subject person from the images of the time before and after the time point when the cough or sneeze is detected. The state of the mouth of the person can be classified into a plurality of patterns. For example, the state of the mouth of the person includes a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief, a state in which the mouth of the person is covered with sleeves of a jacket, and a state in which the mouth of the person is covered with a mask.

The person state determination unit 323 identifies any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with the hand, a state in which the mouth of the person is covered with the handkerchief, a state in which the mouth of the person is covered with the sleeves of the jacket, and a state in which the mouth of the person is covered with the mask.
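The five-way classification above can be sketched as a rule-based mapping from extracted image features to a mouth state. The feature keys and rules below are assumptions for illustration; the specification leaves the recognition method open (machine learning could equally be used):

```python
def classify_mouth_state(features):
    """Illustrative mapping from image features (as might be produced
    by the image processing unit) to one of the five mouth states
    named in the specification. Keys and priority order are assumed."""
    if features.get("mask_detected"):
        return "covered_with_mask"
    if features.get("hand_over_mouth"):
        return "covered_with_hand"
    if features.get("handkerchief_over_mouth"):
        return "covered_with_handkerchief"
    if features.get("sleeve_over_mouth"):
        return "covered_with_sleeve"
    # None of the covering features was detected.
    return "not_covered"

print(classify_mouth_state({"mask_detected": True}))  # covered_with_mask
print(classify_mouth_state({}))                       # not_covered
```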

The person state determination unit 323 recognizes the orientation of the face of the person at the time of coughing or sneezing from the image acquired when the person is detected to have coughed or sneezed.

The person state determination unit 323 recognizes the position of the person in the predetermined space when the person coughs or sneezes from the image acquired when the person coughs or sneezes are detected.

The function of the person state determination unit 323 is the same as that of the person state determination unit 133 in embodiment 1.

The communication unit 34 transmits status information indicating the status of the mouth of the person recognized by the person status determination unit 323, the orientation of the face of the person, and the position of the person in the predetermined space to the airflow control device 1B.

The airflow control device 1B controls the airflow in the predetermined space. The position at which the airflow control device 1B is disposed is not particularly limited. The airflow control device 1B may be a server, for example. The airflow control device 1B is connected to the airflow generation device 2 and the camera 3 via a network so as to be able to communicate with each other.

The airflow control device 1B includes a processor 13B, a memory 14B, and a communication unit 15B.

The processor 13B includes a control signal generation unit 134. The memory 14B is, for example, a semiconductor memory, and includes a device information storage unit 142 and an airflow control table storage unit 143.

The communication unit 15B receives the status information transmitted from the camera 3. The communication unit 15B transmits a control signal to the airflow generation device 2.

The control signal generating unit 134 generates a control signal for controlling at least one of the wind direction and the air volume of the air sent from the air flow generating device 2 that generates the air flow in the predetermined space, based on the state of the mouth of the person included in the state information received by the communication unit 15B. The control signal generating unit 134 makes the wind direction of the air sent from the airflow generating device 2 different between the case where the face of the person is facing forward and the case where the face of the person is facing downward. Further, the control signal generating unit 134 generates a control signal based on the state of the mouth of the person included in the state information received by the communication unit 15B and the position coordinates included in the state information received by the communication unit 15B.

The control signal generating unit 134 acquires control contents corresponding to the state of the mouth of the person and the orientation of the face of the person included in the state information received by the communication unit 15B from the airflow control table stored in the airflow control table storage unit 143, and generates a control signal for controlling the airflow generating device 2 according to the acquired control contents.
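The table lookup described above can be sketched as follows. The table keys, control contents, and numeric values are illustrative placeholders; the actual control contents are held in the airflow control table stored in the airflow control table storage unit 143:

```python
# Hypothetical airflow control table: (mouth state, face orientation)
# -> control contents. Values are placeholders, not from the spec.
AIRFLOW_CONTROL_TABLE = {
    ("not_covered", "forward"):       {"direction_deg": 0,  "volume": 5, "duration_s": 30},
    ("not_covered", "downward"):      {"direction_deg": 60, "volume": 4, "duration_s": 30},
    ("covered_with_hand", "forward"): {"direction_deg": 0,  "volume": 3, "duration_s": 20},
    ("covered_with_mask", "forward"): {"direction_deg": 0,  "volume": 1, "duration_s": 10},
}

def generate_control_signal(mouth_state, face_orientation, position):
    """Look up the control contents for the recognized mouth state and
    face orientation, and attach the person's position coordinates so
    the airflow can target the local area where droplets appeared."""
    control = AIRFLOW_CONTROL_TABLE[(mouth_state, face_orientation)]
    return {**control, "target_position": position}

signal = generate_control_signal("not_covered", "downward", (2.0, 3.5))
print(signal["direction_deg"], signal["volume"])  # 60 4
```

Note how the forward- and downward-facing entries differ, reflecting the requirement that the wind direction change with the orientation of the person's face.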

The control signal generation unit 134 outputs the generated control signal to the communication unit 15B. The communication unit 15B transmits the control signal generated by the control signal generation unit 134 to the airflow generation device 2.

Next, the operation of the airflow control device 1B and the camera 3 in embodiment 3 will be described.

Fig. 19 is a flowchart for explaining the operation of the camera in embodiment 3.

First, in step S51, the processor 32 determines whether or not the power of the camera 3 is turned on. Here, if it is determined that the power of the camera 3 is off (no in step S51), the process ends.

On the other hand, when it is determined that the power of the camera 3 is turned on (YES in step S51), the image pickup unit 31 picks up an image of the predetermined space in step S52. The imaging unit 31 stores the captured image in the image storage unit 331. The imaging unit 31 also stores the moving image in the image storage unit 331.

Next, in step S53, the cough/sneeze determination unit 322 determines whether or not the cough/sneeze detection signal is received by the communication unit 34. A cough/sneeze detection signal is sent by the microphone 4. If it is determined that the cough/sneeze detection signal has not been received (no in step S53), the process returns to step S51.

On the other hand, when determining that the cough/sneeze detection signal has been received (yes in step S53), in step S54, the person state determination unit 323 acquires, from the image storage unit 331, the image captured at the time when the cough or sneeze of the subject person in the predetermined space was detected. The cough/sneeze detection signal includes the time at which the person in the predetermined space was detected to have coughed or sneezed, and each image includes the time at which it was captured. The person state determination unit 323 therefore acquires from the image storage unit 331 the image captured at the time included in the cough/sneeze detection signal.
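Matching the detection time in the signal against the capture times of the stored frames can be sketched as follows; the in-memory list stands in for the image storage unit 331, and picking the frame with the nearest timestamp is an assumed tie-breaking policy:

```python
# Stored frames, each tagged with its capture time (illustrative data).
frames = [
    {"time": 10.0, "image": "frame_a"},
    {"time": 10.5, "image": "frame_b"},
    {"time": 11.0, "image": "frame_c"},
]

def frame_at(detection_time):
    """Return the stored frame whose capture time is closest to the
    time included in the cough/sneeze detection signal."""
    return min(frames, key=lambda f: abs(f["time"] - detection_time))

print(frame_at(10.6)["image"])  # frame_b
```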

Next, in step S55, the person state determination unit 323 identifies the state of the mouth of the subject person at the time of coughing or sneezing. Further, the process of step S55 shown in fig. 19 is the same as the process of step S8 shown in fig. 13.

Next, in step S56, the person state determination unit 323 recognizes the orientation of the face of the subject person at the time of the cough or sneeze, based on the image at the time point when the subject person in the predetermined space was detected to have coughed or sneezed. Further, the process of step S56 shown in fig. 19 is the same as the process of step S9 shown in fig. 13.

Next, in step S57, the person state determination unit 323 recognizes the position of the subject person in the predetermined space at the time of the cough or sneeze, based on the image at the time point when the subject person in the predetermined space was detected to have coughed or sneezed. Further, the process of step S57 shown in fig. 19 is the same as the process of step S10 shown in fig. 13.

Next, in step S58, the communication unit 34 transmits status information indicating the status of the mouth of the subject person, the orientation of the face of the subject person, and the position of the subject person in the predetermined space, which are recognized by the person status determination unit 323, to the airflow control device 1B.

Fig. 20 is a flowchart for explaining the operation of the airflow control device in embodiment 3.

First, in step S71, the processor 13B determines whether or not the power supply of the airflow control device 1B is on. Here, if it is determined that the power supply of the airflow control device 1B is off (no in step S71), the process ends.

On the other hand, when determining that the airflow control device 1B is powered on (yes in step S71), in step S72, the control signal generator 134 determines whether or not the communication unit 15B has received the status information. The status information is sent by the camera 3. Here, when it is determined that the state information has not been received (no in step S72), the process returns to step S71.

On the other hand, when determining that the status information has been received (YES at step S72), at step S73, control signal generator 134 reads the device information from device information storage 142.

Further, the processing of steps S74 to S81 shown in fig. 20 is the same as the processing of steps S12 to S19 shown in fig. 13.

In this way, when the person in the predetermined space is detected to have coughed or sneezed by the microphone 4, the state of the mouth, the orientation of the face, and the position in the predetermined space of the subject person at the time of coughing or sneezing of the subject person are recognized by the camera 3, and the control signal for controlling the air flow in the predetermined space is generated by the air flow control device 1B. Therefore, the configuration of the airflow control device 1B can be further simplified, and the processing load of the airflow control device 1B can be suppressed.

In embodiment 3, the cough/sneeze determination unit 322 determines that the person has coughed or sneezed in the predetermined space when the cough/sneeze detection signal is received by the communication unit 34, but the present disclosure is not limited thereto. The cough/sneeze determining unit 322 may determine whether the person has coughed or sneezed in the predetermined space based on the image and the cough/sneeze detection signal. For example, the coughing/sneezing determining unit 322 may determine that the subject person coughs or sneezes when the communication unit 34 receives the coughing/sneezing detection signal and determines that the distance between the position of the face of the person included in the image captured by the image capturing unit 31 and the position of one hand of the person included in the image is equal to or less than a threshold value.

(embodiment mode 4)

In embodiment 3, the airflow control system includes a microphone and detects a cough or a sneeze of the subject person based on sound, but in embodiment 4, the airflow control system does not include a microphone and detects a cough or a sneeze of the subject person based on an image.

Fig. 21 is a diagram showing the configuration of an airflow control system according to embodiment 4 of the present disclosure. The airflow control system shown in fig. 21 includes an airflow control device 1B, an airflow generation device 2, and a camera 3A. In embodiment 4, the same components as those in embodiment 3 are denoted by the same reference numerals, and detailed description thereof is omitted.

The camera 3A is provided on a ceiling or a wall in a predetermined space. The camera 3A and the airflow control device 1B are connected to be able to communicate with each other via a network. The camera 3A includes an imaging unit 31, a processor 32A, a memory 33, and a communication unit 34A.

The processor 32A includes an image processing unit 321, a person state determination unit 323, and a cough/sneeze detection unit 324.

The cough/sneeze detection unit 324 detects coughing or sneezing by a person in the predetermined space. In embodiment 4, the cough/sneeze detection unit 324 detects, from the image, that a person in the predetermined space has coughed or sneezed. Note that the method of detecting from the image that a person in the predetermined space has coughed or sneezed is the same as that of embodiment 1.

That is, the cough/sneeze detecting unit 324 determines whether or not the distance between the position of the face of the person included in the image and the position of one hand of the person included in the image is equal to or less than a threshold value, and detects a cough or sneeze when the distance is determined to be equal to or less than the threshold value. More specifically, the cough/sneeze detection unit 324 calculates the 1 st distance between the center of gravity position of the face extracted from the image and the center of gravity position of the right hand, and calculates the 2 nd distance between the center of gravity position of the face extracted from the image and the center of gravity position of the left hand. The cough/sneeze detector 324 determines whether the shorter of the 1 st distance and the 2 nd distance is equal to or less than a threshold value. When the shorter of the 1 st distance and the 2 nd distance is determined to be equal to or less than the threshold value, the cough/sneeze detection unit 324 determines that the subject in the predetermined space has coughed or sneezed. When the shorter of the 1 st distance and the 2 nd distance is determined to be longer than the threshold value, the cough/sneeze detection unit 324 determines that the subject person in the predetermined space is not detected to cough and the subject person in the predetermined space is not detected to sneeze.

The cough/sneeze detecting unit 324 may determine whether or not the area of the mouth of the person included in the image is equal to or smaller than a threshold value, and may detect a cough or a sneeze when determining that the area is equal to or smaller than the threshold value.

The communication unit 34A transmits status information indicating the status of the mouth of the person recognized by the person status determination unit 323, the orientation of the face of the person, and the position of the person in the predetermined space to the airflow control device 1B.

Next, the operation of the camera 3A in embodiment 4 will be described.

Fig. 22 is a flowchart for explaining the operation of the camera in embodiment 4.

First, in step S91, the processor 32A determines whether or not the power of the camera 3A is turned on. Here, when it is determined that the power of the camera 3A is off (no in step S91), the process ends.

On the other hand, when it is determined that the power of the camera 3A is turned on (YES in step S91), the image pickup unit 31 picks up an image of the predetermined space in step S92. The imaging unit 31 stores the captured image in the image storage unit 331. The imaging unit 31 also stores the moving image in the image storage unit 331.

Next, in step S93, the image processing unit 321 acquires an image from the image storage unit 331.

Next, in step S94, the image processing unit 321 extracts the feature of the subject person from the image. Here, the feature of the subject person refers to, for example, the face, eyes, mouth, right hand, left hand, clothes, and mask of the subject person. Further, the image processing unit 321 also detects the center of gravity position of each feature.

Next, in step S95, the cough/sneeze detection unit 324 determines whether or not the subject in the predetermined space has coughed or sneezed. Here, the cough/sneeze detecting unit 324 calculates the 1 st distance between the center of gravity position of the face extracted from the image and the center of gravity position of the right hand, and calculates the 2 nd distance between the center of gravity position of the face extracted from the image and the center of gravity position of the left hand. The cough/sneeze detector 324 determines whether the shorter of the 1 st distance and the 2 nd distance is equal to or less than a threshold value. When the shorter of the 1 st distance and the 2 nd distance is determined to be equal to or less than the threshold value, the cough/sneeze detection unit 324 determines that the subject in the predetermined space has coughed or sneezed. When the shorter of the 1 st distance and the 2 nd distance is determined to be longer than the threshold value, the cough/sneeze detection unit 324 determines that the subject person in the predetermined space is not detected to cough and the subject person in the predetermined space is not detected to sneeze.

Here, if it is determined that the subject person in the predetermined space is not detected to cough and the subject person in the predetermined space is not detected to sneeze (no in step S95), the process returns to step S91.

On the other hand, when it is determined that the subject person in the predetermined space has coughed or sneezed (yes in step S95), in step S96, the person state determination unit 323 acquires, from the image storage unit 331, the image captured at the time when the subject person in the predetermined space was detected to have coughed or sneezed.

The processing of steps S97 to S100 shown in fig. 22 is the same as the processing of steps S55 to S58 shown in fig. 19, and therefore detailed description is omitted.

In this way, the camera 3A detects that the person in the predetermined space coughs or sneezes, recognizes the state of the mouth, the orientation of the face, and the position in the predetermined space of the subject person at the time of coughing or sneezing, and generates a control signal for controlling the air flow in the predetermined space by the air flow control device 1B. This can simplify the structure of the airflow control system and suppress the cost of the airflow control system.

(infection risk evaluation System)

The present disclosure includes an infection risk evaluation system described below. In the explanation of the infection risk evaluation system, the same components as those of the airflow control system described above are assigned the same reference numerals, and detailed explanation thereof is omitted.

Fig. 23 is a diagram showing the configuration of the infection risk evaluation system of the present disclosure. The infection risk evaluation system shown in fig. 23 is an example of an information processing system, and includes an infection risk evaluation device 1C and a terminal device 5.

The infection risk evaluation device 1C is an example of an information processing device, and evaluates the risk of contracting an infectious disease (infection risk). The infection risk evaluation device 1C is disposed on a wall or ceiling in a predetermined space.

The infection risk evaluating apparatus 1C is connected to the terminal apparatus 5 via a network so as to be able to communicate with each other.

The terminal device 5 is, for example, a personal computer, a smartphone, or a tablet computer. The terminal device 5 is used by, for example, a manager or staff of a facility where the subject person is located.

The infection risk evaluating device 1C includes a camera 11, a microphone 12, a processor 13, a memory 14, and a communication unit 15. When coughing or sneezing is detected not by sound but by images, the infection risk evaluation device 1C may not include a microphone.

The infection risk evaluation device 1C does not determine whether or not the subject person is actually infected with an infectious disease; it regards any subject person who coughs or sneezes as an infected person.

The camera 11 and the microphone 12 may be provided inside the infection risk evaluation device 1C or may be provided outside the infection risk evaluation device 1C. When the camera 11 and the microphone 12 are provided outside the infection risk evaluating apparatus 1C, the infection risk evaluating apparatus 1C is connected to the camera 11 and the microphone 12 so as to be able to communicate with each other by wire or wirelessly.

The processor 13 includes an image processing unit 131, a cough/sneeze detection unit 132, a person state determination unit 133, an infection risk evaluation unit 135, and an evaluation result notification unit 136. The memory 14 is, for example, a semiconductor memory, and includes an image storage unit 141 and an infection risk evaluation table storage unit 144.

The infection risk evaluation device 1C may include a plurality of cameras. This enables a wide area to be imaged without scanning by a single camera, and also makes camera calibration easier.

When a person coughs or sneezes, the person may perform various actions in a conditioned-reflex manner. For example, a person may cough or sneeze with a part of the face such as the nose and mouth covered with a hand, with the mouth not covered at all, with a part of the face such as the nose and mouth covered with a handkerchief, with a part of the face such as the nose and mouth covered with a jacket sleeve, or with the mouth covered with a mask. The risk of infection in the space thereafter is considered to vary depending on the state of the subject person at the time of the cough or sneeze. For example, when a person coughs or sneezes without any covering of the mouth, droplets or droplet nuclei fly several meters ahead of the person. That is, when a person coughs or sneezes without any covering of the mouth, the risk of infection in the space thereafter becomes extremely high due to droplet infection or airborne infection. Further, since the droplets or droplet nuclei are considered to attach to or deposit on surrounding furniture or the like after flying into the space, the risk of infection due to contact infection is also not low.

When a person coughs or sneezes with the nose and mouth covered with the hands, the virus mainly attaches to the hands. When a nearby person or object is touched with a hand to which the virus is attached, the touched person may be infected with the virus, and a person who later touches that object may also be infected. Therefore, when a person coughs or sneezes with the mouth covered with a hand, the risk of infection due to contact infection increases. The initial velocity of a cough or sneeze is generally 10 m/s or more; that is, the virus flies at high speed. Therefore, even when the mouth is covered with a hand, if there is a gap in the hands, droplets or droplet nuclei may leak from the gap. Therefore, when a person coughs or sneezes with the mouth covered with a hand, the risk of infection due to air infection or droplet infection is not low.

Further, when a person coughs or sneezes with the mouth covered with a handkerchief or a jacket sleeve, the probability of the virus attaching to the hands is very low, and gaps are less likely to occur than when the mouth is covered with the hands. Thus, the risk of infection is lower in the case of covering the mouth with a handkerchief or a jacket sleeve than in the case of covering the mouth with a hand. However, when the mouth is covered with a jacket sleeve, the virus attached to the sleeve may be scattered again over time by the movement of the subject person. Therefore, the risk of infection due to air infection is higher when the mouth is covered with a jacket sleeve than when the mouth is covered with a handkerchief.

In addition, when a person coughs or sneezes with the mouth covered with a mask, almost all droplets or droplet nuclei are trapped in the filter layer of the mask if the mask is worn correctly. Therefore, it can be said that the risk of infection is not high when the mouth is covered with a mask.

In addition, a person may cough or sneeze with the face down. When a person coughs or sneezes with the face down in this way, droplets or droplet nuclei spread to the lower side of the space, and therefore the risk of infection due to droplet infection is generally reduced.

As described above, the risk of contracting an infectious disease varies depending on the state of the mouth of a person at the time of coughing or sneezing. Further, the risk of infection through which infection route is high also varies depending on the state of the mouth of the person.

The person state determination unit 133 recognizes the state of the mouth of the subject person from the images of the time before and after the time point when the cough or sneeze is detected. The state of the mouth of the person can be classified into a plurality of patterns. For example, the state of the mouth of the person includes a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief, a state in which the mouth of the person is covered with clothes (for example, sleeves of a jacket), and a state in which the mouth of the person is covered with a mask.

The person state determination unit 133 recognizes any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with the hand, a state in which the mouth of the person is covered with the handkerchief, a state in which the mouth of the person is covered with clothes (for example, jacket sleeves), and a state in which the mouth of the person is covered with the mask.

The infection risk evaluation table storage unit 144 stores an infection risk evaluation table in which the state of the mouth of the person is associated with evaluation values numerically (quantitatively) expressing the risk of contracting an infectious disease through each of droplet infection, contact infection, and air infection.

Fig. 24 is a diagram showing an example of the infection risk evaluation table stored in the infection risk evaluation table storage unit 144.

As shown in fig. 24, in the state where the mouth is not covered, the evaluation value indicating the risk of infection with droplet infection is associated with "3", the evaluation value indicating the risk of infection with contact infection is associated with "2", and the evaluation value indicating the risk of infection with air infection is associated with "3". The evaluation values are represented by numerical values "1" to "3", and the larger the numerical value, the higher the risk of infection.

In addition, with respect to the state in which the mouth is covered with the hand, an evaluation value indicating the risk of infection with droplet infection is associated with "2", an evaluation value indicating the risk of infection with contact infection is associated with "3", and an evaluation value indicating the risk of infection with air infection is associated with "2".

In addition, in the state where the mouth is covered with the handkerchief, the evaluation value indicating the risk of infection with droplet infection is associated with "1", the evaluation value indicating the risk of infection with contact infection is associated with "1", and the evaluation value indicating the risk of infection with air infection is associated with "1".

In addition, with respect to the state in which the mouth is covered with the sleeves of the jacket, "1" is associated with the evaluation value indicating the risk of infection with droplet infection, "1" is associated with the evaluation value indicating the risk of infection with contact infection, and "2" is associated with the evaluation value indicating the risk of infection with air infection.

In addition, in the state where the mouth is covered with the mask, "1" is associated with the evaluation value indicating the risk of infection with droplet infection, "1" is associated with the evaluation value indicating the risk of infection with contact infection, and "1" is associated with the evaluation value indicating the risk of infection with air infection.
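
The associations described above for Fig. 24 can be written down as a small lookup structure. The following Python sketch is illustrative only; the state labels and route names are hypothetical identifiers, not names used in the disclosure.

```python
# Hypothetical in-memory form of the infection risk evaluation table of Fig. 24.
# Values range from 1 to 3; the larger the value, the higher the infection risk.
INFECTION_RISK_TABLE = {
    "uncovered":    {"droplet": 3, "contact": 2, "air": 3},
    "hand":         {"droplet": 2, "contact": 3, "air": 2},
    "handkerchief": {"droplet": 1, "contact": 1, "air": 1},
    "sleeve":       {"droplet": 1, "contact": 1, "air": 2},
    "mask":         {"droplet": 1, "contact": 1, "air": 1},
}

def lookup(mouth_state: str) -> dict:
    """Return the per-route evaluation values for a recognized mouth state."""
    return INFECTION_RISK_TABLE[mouth_state]
```

For example, a cough with the mouth covered by a hand maps to a high contact-infection value, matching the reasoning that the virus mainly attaches to the hands in that case.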

The infection risk evaluation unit 135 evaluates the risk of contracting an infectious disease in the predetermined space based on the state of the mouth of the person recognized by the person state determination unit 133. The infection risk evaluation unit 135 evaluates the risk of contracting an infectious disease through each of droplet infection, contact infection, and air infection. The infection risk evaluation unit 135 extracts, from the infection risk evaluation table, the evaluation value of each of the droplet infection, the contact infection, and the air infection associated with the state of the mouth of the person recognized by the person state determination unit 133, and accumulates each of the extracted evaluation values over a predetermined time.

The evaluation result notification unit 136 outputs the evaluation result obtained by the infection risk evaluation unit 135 to the communication unit 15. When the integrated value is equal to or greater than the threshold value, the evaluation result notification unit 136 outputs an evaluation result indicating that the risk of contracting an infectious disease is high in the predetermined space to the communication unit 15.

The communication unit 15 transmits an evaluation result indicating that the risk of contracting an infectious disease is high in the predetermined space to the terminal device 5.

The terminal device 5 receives the evaluation result transmitted from the communication unit 15. The terminal device 5 displays the received evaluation result.

Next, the operation of the infection risk evaluating apparatus 1C according to the present embodiment will be described.

Fig. 25 is a 1 st flowchart for explaining the operation of the infection risk evaluating device, and fig. 26 is a 2 nd flowchart for explaining the operation of the infection risk evaluating device in the present embodiment.

First, in step S101, the processor 13 determines whether or not the power of the infection risk evaluating apparatus 1C is turned on. If it is determined that the power supply of the infection risk evaluating apparatus 1C is off (no in step S101), the process ends.

On the other hand, when it is determined that the power of the infection risk evaluating apparatus 1C is on (yes in step S101), the camera 11 captures an image of the predetermined space in step S102. The camera 11 stores the captured image in the image storage unit 141; when a moving image is captured, the camera 11 likewise stores the moving image in the image storage unit 141.

Next, in step S103, the processor 13 determines whether or not a predetermined time has elapsed. Here, the predetermined time is, for example, 30 minutes. In the present embodiment, whether or not to notify the evaluation result of the infectious disease risk is determined at predetermined time intervals. If the evaluation result is notified frequently, for example at 1 minute intervals, the notified person may find it bothersome; therefore, it is preferable to perform the notification at, for example, 30 minute intervals. This makes it possible to evaluate the risk of contracting an infectious disease in the predetermined space over the predetermined time. The predetermined time may be set by an administrator, for example.

When determining that the predetermined time has not elapsed (no in step S103), the image processing unit 131 acquires an image from the image storage unit 141 in step S104.

Next, in step S105, the image processing unit 131 extracts features of the subject person from the image. Here, the features of the subject person are, for example, the face, eyes, mouth, right hand, left hand, clothes, and mask of the subject person. The image processing unit 131 also detects the center-of-gravity position of each feature.

Next, in step S106, the cough/sneeze detection unit 132 acquires a sound from the microphone 12.

Next, in step S107, the cough/sneeze detection unit 132 determines whether or not the subject person in the predetermined space has coughed or sneezed. Here, the cough/sneeze detection unit 132 calculates the 1st distance between the center-of-gravity position of the face extracted from the image and the center-of-gravity position of the right hand, and the 2nd distance between the center-of-gravity position of the face and the center-of-gravity position of the left hand. The cough/sneeze detection unit 132 determines whether the shorter of the 1st distance and the 2nd distance is equal to or less than a threshold value. When determining that the shorter of the two distances is equal to or less than the threshold value, the cough/sneeze detection unit 132 further determines whether or not the volume of the sound acquired from the microphone 12 is equal to or more than a threshold value. The cough/sneeze detection unit 132 determines that the subject person in the predetermined space has coughed or sneezed when the shorter of the 1st distance and the 2nd distance is equal to or less than the distance threshold and the sound volume is equal to or more than the volume threshold. When the shorter of the 1st distance and the 2nd distance is longer than the distance threshold, or when the sound volume is smaller than the volume threshold, the cough/sneeze detection unit 132 determines that neither a cough nor a sneeze of the subject person in the predetermined space has been detected.
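
The decision in step S107 combines a face-hand distance criterion with a sound volume criterion. A minimal Python sketch of that combined check follows; the threshold values (here in arbitrary pixel and volume units) and the tuple-based centroid representation are assumptions for illustration.

```python
import math

def detect_cough_or_sneeze(face, right_hand, left_hand, volume,
                           dist_threshold=50.0, volume_threshold=60.0):
    """Mirror of the step S107 decision: a cough or sneeze is detected when
    the hand nearer to the face is within the distance threshold AND the
    microphone volume reaches the volume threshold.

    face, right_hand, left_hand: (x, y) center-of-gravity positions.
    """
    d1 = math.hypot(face[0] - right_hand[0], face[1] - right_hand[1])  # 1st distance
    d2 = math.hypot(face[0] - left_hand[0], face[1] - left_hand[1])    # 2nd distance
    return min(d1, d2) <= dist_threshold and volume >= volume_threshold
```

Requiring both criteria keeps, for example, a loud voice without a hand motion, or a hand motion without sound, from being misclassified as a cough or sneeze.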

Here, if it is determined that neither a cough nor a sneeze of the subject person in the predetermined space has been detected (no in step S107), the process returns to step S101.

On the other hand, when it is determined that the subject person in the predetermined space has coughed or sneezed (yes in step S107), in step S108, the person state determination unit 133 acquires, from the image storage unit 141, the image at the time when the cough or sneeze of the subject person in the predetermined space was detected.

Next, in step S109, the person state determination unit 133 recognizes the state of the mouth of the subject person at the time of the cough or sneeze. Here, based on the image at the time point when the cough or sneeze of the subject person in the predetermined space was detected, the person state determination unit 133 identifies which of the following states the mouth of the subject person is in: a state in which the mouth is not covered, a state in which the mouth is covered with a hand, a state in which the mouth is covered with a handkerchief, a state in which the mouth is covered with a jacket sleeve, or a state in which the mouth is covered with a mask.

The person state determination unit 133 may recognize the state of the mouth of the subject person from not only the image at the time point when the cough or sneeze is detected but also the images before and after the time point when the cough or sneeze is detected.
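
As a simple illustration of how such a recognition step might be structured, the following Python sketch classifies the mouth state from hypothetical bounding boxes of the features extracted in step S105. The labels, box format, and priority order are assumptions; a practical implementation would typically use a trained image classifier instead of these hand-written rules.

```python
def recognize_mouth_state(mouth_box, covering_boxes):
    """Rule-based sketch of mouth-state recognition.

    mouth_box: (x1, y1, x2, y2) bounding box of the mouth region.
    covering_boxes: dict mapping a detected object label ('mask',
    'handkerchief', 'sleeve', 'hand') to its bounding box; the first
    label (in the assumed priority order) whose box overlaps the mouth
    box determines the state.
    """
    def overlaps(a, b):
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

    for label in ("mask", "handkerchief", "sleeve", "hand"):
        box = covering_boxes.get(label)
        if box is not None and overlaps(mouth_box, box):
            return label
    return "uncovered"
```

Checking the mask first reflects the assumption that a worn mask covers the mouth even if a hand also happens to be raised near the face.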

Next, in step S110, the infection risk evaluating unit 135 acquires an integrated value of the evaluation values stored in the memory 14. The memory 14 stores an integrated value obtained by integrating evaluation values of infection risks due to each of droplet infection, contact infection, and air infection in a predetermined space. The infection risk evaluation unit 135 acquires, from the memory 14, an integrated value of evaluation values of the risk of infection due to each of droplet infection, contact infection, and air infection in a predetermined space.

Next, in step S111, the infection risk evaluation unit 135 reads the infection risk evaluation table from the infection risk evaluation table storage unit 144.

Next, in step S112, the infection risk evaluating unit 135 refers to the infection risk evaluation table and determines an evaluation value of the infection risk due to each of the droplet infection, the contact infection, and the air infection corresponding to the state of the mouth of the target person recognized by the person state determining unit 133.

Next, in step S113, the infection risk evaluating unit 135 adds the determined evaluation values of the infection risk due to each of droplet infection, contact infection, and air infection to the acquired integrated values, and stores the resulting integrated values in the memory 14. The integrated values in the memory 14 are thereby updated. Thereafter, the process returns to step S101, and the processes from step S101 onward are performed.
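
The update in steps S110 to S113 can be sketched in a few lines of Python. The table below is an excerpt of the Fig. 24 values with assumed labels; the `integrated` dict stands in for the integrated values held in the memory 14.

```python
# Excerpt of the Fig. 24 evaluation values (assumed labels; 1-3, higher = riskier).
EVALUATION_TABLE = {
    "uncovered": {"droplet": 3, "contact": 2, "air": 3},
    "hand":      {"droplet": 2, "contact": 3, "air": 2},
    "mask":      {"droplet": 1, "contact": 1, "air": 1},
}

def update_integrated_values(integrated, mouth_state, table=EVALUATION_TABLE):
    """Steps S110-S113 in miniature: add the per-route evaluation values
    for the recognized mouth state to the stored integrated values and
    return the updated dict."""
    row = table[mouth_state]
    for route in ("droplet", "contact", "air"):
        integrated[route] = integrated.get(route, 0) + row[route]
    return integrated
```

Each detected cough or sneeze thus raises the per-route totals by the amount the table assigns to the observed mouth state.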

On the other hand, when it is determined in step S103 that the predetermined time has elapsed (yes in step S103), in step S114, the infection risk evaluation unit 135 determines whether or not the total of the integrated values of the respective infection routes is equal to or greater than a threshold value. That is, the infection risk evaluation unit 135 sums the integrated values of the evaluation values of the infection risk due to each of droplet infection, contact infection, and air infection stored in the memory 14, and determines whether or not the total value is equal to or greater than the threshold value. If it is determined that the total value of the integrated values is less than the threshold value (no in step S114), the process proceeds to step S117.

On the other hand, when it is determined that the total value of the integrated values is equal to or greater than the threshold value (step S114: YES), the evaluation result notification unit 136 outputs an evaluation result indicating that the risk of contracting an infectious disease is high in the predetermined space to the communication unit 15 in step S115.

Next, in step S116, the communication unit 15 transmits the evaluation result indicating that the risk of contracting an infectious disease in the predetermined space is high to the terminal device 5. The terminal device 5 receives the evaluation result transmitted from the infection risk evaluation device 1C and displays the received evaluation result. The manager who has confirmed the evaluation result displayed on the terminal device 5 knows that the risk of contracting an infectious disease in the predetermined space is high, and can therefore ventilate the predetermined space, turn on an air cleaner disposed in the predetermined space, or move the persons in the predetermined space to another place.

Next, in step S117, the infection risk evaluating unit 135 initializes the integrated values of the evaluation values of the respective infection routes stored in the memory 14 and resets the predetermined time. Thereafter, the process returns to step S101, and the processes from step S101 onward are performed.

In step S114, the infection risk evaluation unit 135 determines whether or not the total value of the integrated values of the respective infection routes is equal to or greater than the threshold value, but the present disclosure is not particularly limited thereto; it may instead be determined whether or not at least one of the integrated values of the respective infection routes is equal to or greater than a threshold value. That is, the infection risk evaluating unit 135 may determine whether or not at least one of the integrated value of the evaluation values of the risk of infection due to droplet infection, the integrated value of the evaluation values of the risk of infection due to contact infection, and the integrated value of the evaluation values of the risk of infection due to air infection is equal to or greater than a threshold value.
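
The two decision rules for step S114, the total-based one described first and the per-route variant just mentioned, can be sketched side by side. The threshold values are parameters left open by the disclosure.

```python
def risk_is_high_total(integrated, threshold):
    """Step S114 as first described: compare the total of the per-route
    integrated values against a single threshold."""
    return sum(integrated.values()) >= threshold

def risk_is_high_any_route(integrated, threshold):
    """Variant noted above: flag high risk when any one route's
    integrated value reaches the threshold."""
    return any(v >= threshold for v in integrated.values())
```

The per-route variant can flag, for example, a space where only the contact-infection total is high, which the total-based rule might dilute across routes.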

The evaluation result notification unit 136 outputs the evaluation result indicating that the risk of contracting an infectious disease is high in the predetermined space to the communication unit 15, but the present disclosure is not particularly limited thereto, and an integrated value of each of a droplet infection, a contact infection, and an air infection may be output to the communication unit 15 as the evaluation result. At this time, the evaluation result notification unit 136 may output the integrated value of each of the droplet infection, the contact infection, and the air infection as the evaluation result to the communication unit 15 when determining that the total value of the integrated values is equal to or greater than the threshold value. In addition, when the predetermined time has elapsed, the evaluation result notification unit 136 may output the integrated value of each of the droplet infection, the contact infection, and the air infection to the communication unit 15 as the evaluation result without determining whether or not the total value of the integrated values is equal to or greater than the threshold value.

In the present disclosure, the evaluation result is transmitted to the terminal device 5 when the predetermined time has elapsed and it is determined that the total value of the integrated values is equal to or greater than the threshold value, but the present disclosure is not particularly limited thereto, and the integrated value of each of the droplet infection, the contact infection, and the air infection may be transmitted to the terminal device 5 each time the integrated value of each of the droplet infection, the contact infection, and the air infection is stored in step S113. In this case, the terminal device 5 can display the integrated value of each of the droplet infection, the contact infection, and the air infection in real time.

In addition, the target person in the predetermined space is not limited to one person, and a plurality of target persons may be present. When a plurality of target persons are present in the predetermined space, it is also possible to detect cough or sneeze of each of the plurality of target persons, identify the state of the mouth of each of the plurality of target persons, determine the evaluation value of the risk of infection due to each of droplet infection, contact infection, and air infection corresponding to the state of the mouth of each of the plurality of identified target persons, and store the accumulated value of the evaluation values of the risk of infection due to each of droplet infection, contact infection, and air infection.

The memory 14 may store infected person information in which the face image of the subject person is associated with information indicating whether or not the subject person is infected with an infectious disease. In this case, the infection risk evaluating unit 135 may determine whether or not the subject person is infected with the infectious disease, based on the facial image of the subject person included in the image information. When it is determined that the subject person is infected with an infectious disease, the infection risk evaluation unit 135 may weight the determined evaluation value. When it is determined that the subject person is not infected with the infectious disease, the infection risk evaluation unit 135 may determine the evaluation value to be 0. The infection risk evaluating apparatus 1C may take a face image of the subject person in advance, acquire biological information of the subject person from a biosensor, and determine whether or not the subject person is infected with an infectious disease based on the acquired biological information. The infection risk evaluating apparatus 1C may receive input of information on whether or not the subject person is infected with the infectious disease from a doctor or a manager.
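
The weighting described above can be sketched as follows. The weight factor, the zeroing rule, and the three-way handling of the infection status (infected, not infected, unknown) are all assumptions for illustration; the disclosure does not fix concrete values.

```python
def weighted_evaluation(base_values, is_infected, weight=2.0):
    """Sketch of the infected-person weighting: scale the per-route
    evaluation values when the subject is known to be infected, and zero
    them when the subject is known not to be infected."""
    if is_infected is True:
        return {route: value * weight for route, value in base_values.items()}
    if is_infected is False:
        # A subject known not to be infected contributes no risk.
        return {route: 0 for route in base_values}
    return dict(base_values)  # infection status unknown: leave unchanged
```

Used before the accumulation of step S113, this would make coughs from a confirmed infected person raise the integrated values faster than coughs from others.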

The infection risk evaluation system described above is an example of the following information processing system.

An information processing system includes a camera that captures an image of a predetermined space, and an information processing device that detects a person in the predetermined space coughing or sneezing, acquires an image of the predetermined space captured by the camera when the person coughs or sneezing is detected, detects a state of a mouth of the person from the image, evaluates a risk of infection in the predetermined space based on the state of the mouth, and outputs an evaluation result.

In addition, the information processing system can realize the following information processing method.

An information processing method comprising: detecting a cough or a sneeze of a person located in a predetermined space, acquiring an image of the predetermined space captured when the cough or the sneeze is detected, detecting a state of a mouth of the person from the image, evaluating a risk of infection in the predetermined space based on the state of the mouth, and outputting an evaluation result.

According to the configuration of this information processing method, the state of the mouth of the person is detected from the image of the predetermined space captured when the cough or sneeze is detected, and the risk of infection in the predetermined space is evaluated based on the state of the mouth of the person. In addition, when the risk of infectious disease in the predetermined space is estimated to be high, appropriate measures can be prompted so as to reduce that risk.

In the above information processing method, the state recognition of the mouth portion of the person may be performed to recognize either a state in which the mouth of the person is not covered or a state in which the mouth of the person is covered with a hand.

According to this configuration, the risk of infection is different between a state in which the mouth of the person is not covered and a state in which the mouth of the person is covered with the hand. Therefore, the risk of contracting an infectious disease in the predetermined space can be evaluated more accurately based on whether the state of the mouth of the person is a state in which the mouth of the person is not covered or a state in which the mouth of the person is covered with the hand.

In the information processing method, the state recognition of the mouth portion of the person may be performed to recognize any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, and a state in which the mouth of the person is covered with a mask.

According to this configuration, the risk of infection differs between a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with the hand, and a state in which the mouth of the person is covered with the mask. Therefore, the risk of contracting an infectious disease in the predetermined space can be evaluated more accurately based on which of the state of the mouth of the person is the state in which the mouth of the person is not covered, the state in which the mouth of the person is covered with the hand, and the state in which the mouth of the person is covered with the mask.

In the information processing method, the state recognition of the mouth portion of the person may be performed to recognize any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief, and a state in which the mouth of the person is covered with a mask.

According to this configuration, the risk of infection is different between a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief, and a state in which the mouth of the person is covered with a mask. Therefore, the risk of contracting an infectious disease in the predetermined space can be evaluated more accurately based on which of the state of the mouth portion of the person is the state in which the mouth of the person is not covered, the state in which the mouth of the person is covered with the hand, the state in which the mouth of the person is covered with the handkerchief, and the state in which the mouth of the person is covered with the mask.

In the information processing method, the state recognition of the mouth portion of the person may recognize any one of a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief, a state in which the mouth of the person is covered with clothes, and a state in which the mouth of the person is covered with a mask.

According to this configuration, the risk of infection is different between a state in which the mouth of the person is not covered, a state in which the mouth of the person is covered with a hand, a state in which the mouth of the person is covered with a handkerchief, a state in which the mouth of the person is covered with clothes, and a state in which the mouth of the person is covered with a mask. Therefore, the risk of contracting an infectious disease in the predetermined space can be evaluated more accurately based on which of the state of the mouth portion of the person, the state in which the mouth of the person is not covered, the state in which the mouth of the person is covered with the hand, the state in which the mouth of the person is covered with the handkerchief, the state in which the mouth of the person is covered with the clothes, and the state in which the mouth of the person is covered with the mask.

In the information processing method, the detection of the cough or the sneeze may be a detection of a cough or a sneeze of a person located in the predetermined space based on the image.

According to this configuration, it is possible to detect, using the image, that the person located in the predetermined space has coughed or sneezed.

In the information processing method, the detection of the cough or the sneeze may be performed by determining whether or not a distance between a position of the face of the person included in the image and a position of one hand of the person included in the image is equal to or less than a threshold value, and detecting the cough or the sneeze when the distance is determined to be equal to or less than the threshold value.

Generally, a person tends to bring a hand to the mouth when about to cough or sneeze. Therefore, it is possible to easily detect that the person has coughed or sneezed by determining whether or not the distance between the position of the face of the person included in the image and the position of one hand of the person included in the image is equal to or less than the threshold value.

In the information processing method, the detection of the cough or the sneeze may be performed by determining whether or not an area of a mouth of the person included in the image is equal to or smaller than a threshold value, and detecting the cough or the sneeze when the area is determined to be equal to or smaller than the threshold value.

Generally, a person tends to bring a hand to the mouth when about to cough or sneeze, which hides part of the mouth from the camera. Therefore, by determining whether or not the area of the mouth of the person included in the image is equal to or smaller than the threshold value, it is possible to easily detect that the person has coughed or sneezed.
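
The two image-only criteria described above, the face-hand distance and the visible mouth area, can be sketched as follows. The thresholds are illustrative assumptions in arbitrary pixel units.

```python
def detect_by_face_hand_distance(face, hand, dist_threshold):
    """Image-only criterion: a cough/sneeze is inferred when a hand
    comes within the threshold distance of the face."""
    dx, dy = face[0] - hand[0], face[1] - hand[1]
    return (dx * dx + dy * dy) ** 0.5 <= dist_threshold

def detect_by_mouth_area(mouth_area, area_threshold):
    """Image-only criterion: a cough/sneeze is inferred when the visible
    mouth area shrinks to the threshold or below (mouth hidden by a hand)."""
    return mouth_area <= area_threshold
```

Either criterion alone works without a microphone; combining them, or adding the sound criterion described next, reduces false detections.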

In the information processing method, the sound obtained by collecting the sound in the predetermined space may be acquired from a microphone provided in the predetermined space, and the detection of the cough or the sneeze may be performed by detecting the cough or the sneeze of the person located in the predetermined space based on the image and the sound.

According to this configuration, the sound obtained by collecting the sound in the predetermined space is acquired from the microphone provided in the predetermined space, and the detection of the cough or the sneeze detects a cough or a sneeze of the person located in the predetermined space based on the image and the sound.

Therefore, a cough or a sneeze of the person in the predetermined space can be detected not only from the image but also from the sound, and can therefore be detected more accurately.

In the information processing method, the risk of contracting the infectious disease may be evaluated by evaluating the risk of contracting the infectious disease by each of a droplet infection, a contact infection, and an air infection.

According to this configuration, since the risk of infection with each of droplet infection, contact infection, and air infection can be evaluated, the risk of infection with each of droplet infection, contact infection, and air infection can be estimated for each infection route. In addition, it is possible to implement measures against infectious diseases according to the infection routes of droplet infection, contact infection, and air infection.

In the information processing method, the evaluation of the risk of contracting the infectious disease may extract, from an evaluation table in which the state of the mouth of the person is associated with evaluation values numerically expressing the risk of contracting the infectious disease through each of droplet infection, contact infection, and air infection, the evaluation value of each of the droplet infection, the contact infection, and the air infection associated with the recognized state of the mouth of the person, and accumulate each of the extracted evaluation values, and the output of the evaluation result may output the integrated value of each of the droplet infection, the contact infection, and the air infection as the evaluation result.

According to this configuration, the evaluation table associates the state of the mouth of the person with evaluation values numerically expressing the risk of infection by each of droplet infection, contact infection, and air infection. The evaluation values of the droplet infection, the contact infection, and the air infection associated with the recognized state of the mouth of the person are extracted from the evaluation table. The extracted evaluation values are each accumulated, and the integrated value of each of the droplet infection, the contact infection, and the air infection is output as the evaluation result.

Therefore, the risk of contracting an infectious disease by each of droplet infection, contact infection, and air infection can be easily estimated using the integrated value of each of droplet infection, contact infection, and air infection.
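The table-lookup-and-accumulate step described above can be sketched as follows. The mouth states and the numerical evaluation values in the table are illustrative assumptions; the disclosure specifies the mechanism, not the concrete values.

```python
# Mouth state -> (droplet, contact, air) evaluation values.
# These numbers are hypothetical: e.g. an uncovered mouth is assumed to
# raise droplet risk most, while covering with a hand shifts risk toward
# contact infection (the hand later touches shared surfaces).
EVALUATION_TABLE = {
    "uncovered":       (5, 1, 3),
    "covered_by_hand": (2, 4, 2),
}


def accumulate(mouth_states, totals=None):
    """Accumulate per-route evaluation values over detected events.

    Each entry of `mouth_states` is the mouth state recognized at one
    detected cough or sneeze. Returns the integrated values as
    [droplet, contact, air], which form the evaluation result.
    """
    totals = list(totals) if totals is not None else [0, 0, 0]
    for state in mouth_states:
        values = EVALUATION_TABLE[state]
        totals = [t + v for t, v in zip(totals, values)]
    return totals


print(accumulate(["uncovered", "covered_by_hand"]))  # [7, 5, 5]
```

Keeping one integrated value per route is what allows the risk to be reported separately for droplet, contact, and air infection.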

In the information processing method, the evaluation of the risk of contracting the infectious disease may extract the evaluation values of the droplet infection, the contact infection, and the air infection associated with the recognized state of the mouth of the person from an evaluation table in which the state of the mouth of the person is associated with evaluation values numerically expressing the risk of contracting the infectious disease by each of the droplet infection, the contact infection, and the air infection, and may accumulate each of the extracted evaluation values over a predetermined time; the output of the evaluation result may output an evaluation result indicating that the risk of contracting the infectious disease in the predetermined space is high when the integrated value is equal to or greater than a threshold value.

According to this configuration, the evaluation table associates the state of the mouth of the person with evaluation values numerically expressing the risk of infection by each of droplet infection, contact infection, and air infection. The evaluation values of the droplet infection, the contact infection, and the air infection associated with the recognized state of the mouth of the person are extracted from the evaluation table. Each of the extracted evaluation values is accumulated over a predetermined time. When an integrated value is equal to or greater than the threshold value, an evaluation result indicating that the risk of contracting the infectious disease in the predetermined space is high is output.

Therefore, the risk of contracting an infectious disease by each of droplet infection, contact infection, and air infection can be easily estimated using the integrated value of each route over the predetermined time.
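A minimal sketch of the windowed accumulation and threshold check, tracking a single route for brevity. The window length, threshold, and per-event evaluation values are illustrative assumptions; the disclosure fixes the mechanism (accumulate over a predetermined time, compare with a threshold), not these numbers.

```python
from collections import deque


class WindowedRiskEvaluator:
    """Accumulate evaluation values over a sliding time window."""

    def __init__(self, window_seconds: float = 600.0, threshold: float = 10.0):
        self.window = window_seconds    # the "predetermined time"
        self.threshold = threshold
        self.events = deque()           # (timestamp, evaluation_value)

    def add_event(self, timestamp: float, value: float) -> None:
        """Record one detected cough/sneeze with its evaluation value."""
        self.events.append((timestamp, value))

    def is_high_risk(self, now: float) -> bool:
        """True when the integrated value within the window reaches
        the threshold, i.e. the space is evaluated as high risk."""
        # Discard events older than the predetermined time window.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()
        integrated = sum(v for _, v in self.events)
        return integrated >= self.threshold


ev = WindowedRiskEvaluator(window_seconds=600, threshold=10)
ev.add_event(0, 5)     # cough at t = 0 s
ev.add_event(60, 5)    # sneeze at t = 60 s
print(ev.is_high_risk(now=120))  # 5 + 5 = 10 >= 10 -> True
print(ev.is_high_risk(now=650))  # first event expired, 5 < 10 -> False
```

Because old events fall out of the window, the evaluation naturally returns to low risk once coughing or sneezing stops, which matches the per-time accumulation described above.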

The device of the present disclosure has been described above based on the embodiments, but the present disclosure is not limited to these embodiments. Embodiments obtained by applying various modifications conceived by those skilled in the art to the present embodiments, and embodiments constructed by combining constituent elements of different embodiments, may also be included within the scope of one or more aspects of the present disclosure, as long as they do not depart from the spirit of the present disclosure.

In the above embodiments, each component may be configured by dedicated hardware, or may be realized by executing a software program suitable for each component. Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded in a recording medium such as a hard disk or a semiconductor memory.

A part or all of the functions of the apparatus according to the embodiments of the present disclosure are typically implemented as an LSI (Large Scale Integration), which is an integrated circuit. These functions may be formed into individual chips, or a single chip may include some or all of them. The integrated circuit is not limited to an LSI, and may be realized by a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacturing, or a reconfigurable processor in which the connection and setting of circuit cells within the LSI can be reconfigured, may also be used.

In addition, a part or all of the functions of the apparatus according to the embodiments of the present disclosure may be realized by executing a program by a processor such as a CPU.

In addition, the numerals used above are merely examples for specifically explaining the present disclosure, and the present disclosure is not limited by these exemplary numerals.

The order of execution of the steps shown in the flowcharts is described for the purpose of specifically explaining the present disclosure, and the steps may be executed in an order other than the above as long as the same effects are obtained. Further, some of the above steps may be executed simultaneously (in parallel) with other steps.

Various modifications of the embodiments of the present disclosure, which are made by changing the embodiments of the present disclosure within the scope that will occur to those skilled in the art, are also included in the present disclosure as long as the modifications do not depart from the spirit of the present disclosure.

Industrial applicability

The information processing method, information processing program, and information processing system according to the present disclosure control the airflow in a predetermined space where a cough or a sneeze is detected, and are therefore useful for reducing the risk of infection in such a space.

Description of the reference symbols

1, 1A, 1B airflow control device; 1C infection risk evaluation device; 2 airflow generating device; 3, 3A camera; 4 microphone; 5 terminal device; 11 camera; 12 microphone; 13, 13A, 13B processor; 14, 14B memory; 15, 15B communication unit; 21 communication unit; 22 processor; 23 memory; 24 airflow generating unit; 25 wind direction changing unit; 31 imaging unit; 32, 32A processor; 33 memory; 34, 34A communication unit; 41 sound collecting unit; 42 processor; 43 communication unit; 131 image processing unit; 132, 132A cough/sneeze detection unit; 133 person state determination unit; 134 control signal generating unit; 135 infection risk evaluation unit; 136 evaluation result notification unit; 141 image storage unit; 142 device information storage unit; 143 airflow control table storage unit; 144 infection risk evaluation table storage unit; 201 air conditioning equipment; 202 air purifier; 221 airflow control unit; 321 image processing unit; 322 cough/sneeze determination unit; 323 person state determination unit; 324 cough/sneeze detection unit; 331 image storage unit; 421 cough/sneeze detection unit.
