Information processing apparatus, program, and information processing method

Document No.: 788023 | Publication date: 2021-04-09

Abstract: This technology, "Information processing apparatus, program, and information processing method", was created by 中村雄大 and 内藤正博 on 2019-04-24. It comprises: a skin region detection unit (110) that detects a human skin region from each of a plurality of frames within a predetermined period; a measurement region setting unit (120) that sets a plurality of measurement regions in the skin region; a pulse source signal extraction unit (130) that extracts, from each of the plurality of measurement regions, a pulse source signal representing a change in luminance, thereby obtaining a plurality of pulse source signals; a phase matching degree calculation unit (140) that calculates a plurality of phase matching degrees, each indicating the degree to which the phases of corresponding basis components, among the basis components constituting the respective pulse source signals, coincide with one another; and a pulse estimation unit (150) that specifies the phase matching degree indicating the closest phase agreement and estimates the person's pulse from the basis component corresponding to the specified phase matching degree.

1. An information processing apparatus, characterized in that the information processing apparatus has:

a skin region detection unit that detects a skin region, which is a region including human skin, from each of a plurality of frames representing an image within a predetermined period;

a measurement region setting unit that sets a plurality of measurement regions in the skin region;

a pulse source signal extraction unit that extracts a pulse source signal indicating a change in luminance in the predetermined period from each of the plurality of measurement regions, thereby extracting a plurality of pulse source signals corresponding to each of the plurality of measurement regions;

a phase matching degree calculation unit that calculates a phase matching degree indicating a degree to which phases of each of a plurality of basis components constituting each of the plurality of pulse source signals match each other among the corresponding basis components, thereby calculating a plurality of phase matching degrees corresponding to each of the plurality of pulse source signals; and

a pulse estimation unit that specifies the phase matching degree with the highest degree of matching among the plurality of phase matching degrees, and estimates the pulse of the person from the basis component corresponding to the specified phase matching degree.

2. The information processing apparatus according to claim 1,

the phase matching degree calculation unit selects, from the plurality of pulse source signals, one pair serving as a 1st pulse source signal and a 2nd pulse source signal, calculates, between each of the plurality of basis components constituting the 1st pulse source signal and each of the plurality of basis components constituting the 2nd pulse source signal, a matching degree indicating the degree to which the phases of the corresponding basis components match each other, thereby calculating a plurality of matching degrees respectively corresponding to the plurality of basis components, and sets the plurality of calculated matching degrees as the plurality of phase matching degrees.

3. The information processing apparatus according to claim 1,

the phase matching degree calculation unit selects, from the plurality of pulse source signals, a plurality of pairs each serving as a 1st pulse source signal and a 2nd pulse source signal, calculates, for each of the plurality of pairs, a matching degree indicating the degree to which the phases of the corresponding basis components match each other between each of the plurality of basis components constituting the 1st pulse source signal and each of the plurality of basis components constituting the 2nd pulse source signal, thereby calculating a plurality of matching degrees respectively corresponding to the plurality of basis components, and sums the plurality of matching degrees calculated for the plurality of pairs for each corresponding basis component, thereby calculating the plurality of phase matching degrees.

4. The information processing apparatus according to claim 1,

the phase matching degree calculation unit selects, from the plurality of pulse source signals, a plurality of pairs each serving as a 1st pulse source signal and a 2nd pulse source signal, sets a weight coefficient for each of the plurality of pairs, calculates, for each of the plurality of pairs, a matching degree indicating the degree to which the phases of the corresponding basis components match each other between each of the plurality of basis components constituting the 1st pulse source signal and each of the plurality of basis components constituting the 2nd pulse source signal, thereby calculating a plurality of matching degrees respectively corresponding to the plurality of basis components, weights the plurality of matching degrees calculated for the plurality of pairs using the weight coefficients, and sums them for each corresponding basis component, thereby calculating the plurality of phase matching degrees.

5. The information processing apparatus according to claim 4,

the phase matching degree calculation unit sets the weight coefficient such that the higher the matching degrees among the plurality of matching degrees calculated for each of the plurality of pairs are, the heavier the weight is.

6. The information processing apparatus according to claim 4,

the phase matching degree calculation unit sets the weight coefficient as follows: the longer the distance between the two measurement regions corresponding to each of the plurality of pairs is, the heavier the weight is.

7. The information processing apparatus according to claim 4,

the phase matching degree calculation unit sets the weight coefficient as follows: the weighting is increased as the direction in which the two measurement regions corresponding to each of the plurality of pairs are arranged is closer to the direction in which the person moves.

8. The information processing apparatus according to claim 4,

the phase matching degree calculation unit sets the weight coefficient such that the more similar the patterns of change in size of the two measurement regions corresponding to each of the plurality of pairs are, the heavier the weight is.

9. The information processing apparatus according to claim 4,

the phase matching degree calculation unit sets the weight coefficient such that the more similar the patterns of change in shape of the two measurement regions corresponding to each of the plurality of pairs are, the heavier the weight is.

10. The information processing apparatus according to claim 4,

the phase matching degree calculation unit calculates a representative value of each of the two measurement regions included in each of the plurality of pairs using the plurality of matching degrees calculated from each of the plurality of pairs, and sets the weight coefficient so that the weight is heavier as the representative value is higher.

11. The information processing apparatus according to any one of claims 1 to 10,

the base component is a frequency component of the pulse source signal.

12. The information processing apparatus according to any one of claims 2 to 10,

the base component is a frequency component of the pulse source signal,

the coincidence degree is an absolute value of a phase difference between the frequency component constituting the 1 st pulse source signal and the corresponding frequency component constituting the 2 nd pulse source signal.

13. A program for causing a computer to function as:

a skin region detection unit that detects a skin region, which is a region including human skin, from each of a plurality of frames representing an image within a predetermined period;

a measurement region setting unit that sets a plurality of measurement regions in the skin region;

a pulse source signal extraction unit that extracts a pulse source signal indicating a change in luminance in the predetermined period from each of the plurality of measurement regions, thereby extracting a plurality of pulse source signals corresponding to each of the plurality of measurement regions;

a phase matching degree calculation unit that calculates a phase matching degree indicating a degree to which phases of each of a plurality of basis components constituting each of the plurality of pulse source signals match each other among the corresponding basis components, thereby calculating a plurality of phase matching degrees corresponding to each of the plurality of pulse source signals; and

a pulse estimation unit that specifies the phase matching degree with the highest degree of matching among the plurality of phase matching degrees, and estimates the pulse of the person from the basis component corresponding to the specified phase matching degree.

14. An information processing method, comprising:

detecting a skin area, which is an area including human skin, from each of a plurality of frames representing a video within a predetermined period,

setting a plurality of measurement regions in the skin region,

extracting a pulse wave source signal indicating a change in luminance in the predetermined period from each of the plurality of measurement areas, thereby extracting a plurality of pulse wave source signals corresponding to each of the plurality of measurement areas,

calculating a phase coincidence degree indicating a degree to which phases of respective basis components of a plurality of basis components constituting respective ones of the plurality of pulse source signals coincide with each other at corresponding basis components, thereby calculating a plurality of phase coincidence degrees respectively corresponding to the respective ones of the plurality of pulse source signals,

determining a phase coincidence degree with the highest degree of phase coincidence among the plurality of phase coincidence degrees, and estimating the pulse of the person from a basis component corresponding to the determined phase coincidence degree.

Technical Field

The invention relates to an information processing apparatus, a program, and an information processing method.

Background

In daily life, it is important to manage and maintain people's health. In particular, managing and maintaining the health of a driver who is driving a vehicle is important for preventing accidents. In managing and maintaining the health of the driver, it is effective to constantly acquire vital information such as the heart rate, heart rate variability, respiration rate, and perspiration.

Among such biological information, information related to the heart rate or heart rate variability is often used as an index of the state of the autonomic nervous system, and is important when managing the health of a driver. However, directly acquiring information on the heart rate or heart rate variability requires measuring cardiac activity by, for example, attaching electrocardiogram electrodes to the chest, which imposes a large burden on the driver.

Therefore, instead of measuring cardiac activity directly, there is a method of attaching a contact device such as a pulse oximeter to a fingertip or an earlobe and acquiring a pulse wave from changes in blood vessel volume. However, this method requires the device to be worn on the fingertip or earlobe at all times, which also burdens the driver and is impractical while driving a vehicle.

As a method of estimating the pulse in a non-contact manner without burdening the subject, there is, for example, the method described in non-patent document 1: the face of the subject is imaged by a camera, and the pulse is estimated from slight luminance changes on the facial surface. In non-patent document 1, a plurality of measurement regions are set on the face image of the subject, and the frequency power spectrum of the luminance signal acquired in each measurement region is calculated. The pulse is estimated from the peak frequency of the frequency power spectrum calculated in each region, and the heart rate is estimated from the peak of the frequency power spectrum of the combined pulse signal.

Documents of the prior art

Non-patent document

Non-patent document 1: mayank Kumar, et al, "" DistancePPG "Robust non-contact video signal monitoring using a camera", biological optics express,6(5), "1565-" 1588,2015

Disclosure of Invention

Problems to be solved by the invention

However, the prior art has the problem that the accuracy of pulse estimation decreases when the face of the subject moves. This is because, when the face moves, a component corresponding to the facial movement appears as a peak in the frequency power spectrum and is erroneously detected as the pulse instead of the frequency component that actually corresponds to the pulse.

While a vehicle is being driven, the driver's face can easily move due to vehicle vibration, so it is necessary to estimate the pulse with high accuracy even when the subject's face moves.

Accordingly, an object of one or more aspects of the present invention is to estimate the pulse with high accuracy from the frames of a video even when the face of a person moves.

Means for solving the problems

An information processing apparatus according to 1 aspect of the present invention is an information processing apparatus including: a skin region detection unit that detects a skin region, which is a region including human skin, from each of a plurality of frames representing an image within a predetermined period; a measurement region setting unit that sets a plurality of measurement regions in the skin region; a pulse source signal extraction unit that extracts a pulse source signal indicating a change in luminance in the predetermined period from each of the plurality of measurement regions, thereby extracting a plurality of pulse source signals corresponding to each of the plurality of measurement regions; a phase matching degree calculation unit that calculates a phase matching degree indicating a degree to which phases of each of a plurality of basis components constituting each of the plurality of pulse source signals match each other among the corresponding basis components, thereby calculating a plurality of phase matching degrees corresponding to each of the plurality of pulse source signals; and a pulse estimation unit that specifies a phase matching degree with the highest degree of matching among the plurality of phase matching degrees, and estimates a pulse of the person from a base component corresponding to the specified phase matching degree.

A program according to 1 aspect of the present invention is a program for causing a computer to function as: a skin region detection unit that detects a skin region, which is a region including human skin, from each of a plurality of frames representing an image within a predetermined period; a measurement region setting unit that sets a plurality of measurement regions in the skin region; a pulse source signal extraction unit that extracts a pulse source signal indicating a change in luminance in the predetermined period from each of the plurality of measurement regions, thereby extracting a plurality of pulse source signals corresponding to each of the plurality of measurement regions; a phase matching degree calculation unit that calculates a phase matching degree indicating a degree to which phases of each of a plurality of basis components constituting each of the plurality of pulse source signals match each other among the corresponding basis components, thereby calculating a plurality of phase matching degrees corresponding to each of the plurality of pulse source signals; and a pulse estimation unit that specifies a phase matching degree with the highest degree of matching among the plurality of phase matching degrees, and estimates a pulse of the person from a base component corresponding to the specified phase matching degree.

An information processing method according to one aspect of the present invention includes: detecting a skin area, which is an area including human skin, from each of a plurality of frames representing a video within a predetermined period; setting a plurality of measurement areas in the skin area; extracting, from each of the plurality of measurement areas, a pulse source signal representing a luminance change within the predetermined period, thereby extracting a plurality of pulse source signals respectively corresponding to the plurality of measurement areas; calculating a phase matching degree indicating the degree to which the phases of corresponding basis components, among the plurality of basis components constituting each of the plurality of pulse source signals, coincide with each other, thereby calculating a plurality of phase matching degrees; determining the phase matching degree with the highest degree of phase coincidence among the plurality of phase matching degrees; and estimating the pulse of the person from the basis component corresponding to the determined phase matching degree.

Effects of the invention

According to one or more aspects of the present invention, the pulse can be estimated with high accuracy from the frames of a video even when the face of the person moves.

Drawings

Fig. 1 is a block diagram schematically showing the configuration of a pulse estimation device according to embodiments 1, 2, and 4.

Fig. 2 (a) to (c) are schematic diagrams showing an example of setting a measurement region by facial organ detection.

Fig. 3 is a schematic diagram for explaining a specific setting method of the measurement region.

Fig. 4 is a block diagram schematically showing the configuration of the phase matching degree calculation unit in embodiment 1.

Fig. 5 (a) and (b) are schematic diagrams showing an example of a hardware configuration.

Fig. 6 is a flowchart showing the operation of the pulse estimation device according to embodiment 1.

Fig. 7 is a schematic diagram showing a positional relationship between the face of the subject, the imaging device, and the light source of the ambient light in embodiment 1.

Fig. 8 (a) to (c) are schematic diagrams showing examples of images obtained by imaging the face of the subject by the imaging device in embodiment 1.

Fig. 9 (a) to (d) are graphs showing changes in average luminance values in the measurement region in the case of the face motion of the subject in embodiment 1.

Fig. 10 is a block diagram schematically showing the configuration of the phase matching degree calculation unit in embodiments 2 and 4.

Fig. 11 is a block diagram schematically showing the configuration of a pulse estimation device according to embodiment 3.

Fig. 12 is a block diagram schematically showing the configuration of a phase matching degree calculation unit in embodiment 3.

Fig. 13 is a schematic diagram showing a positional relationship between the face of the subject, the imaging device, and the light source of the ambient light in embodiment 3.

Fig. 14 (a) to (c) are schematic diagrams showing examples of images obtained by imaging the face of the subject by the imaging device in embodiment 3.

Fig. 15 (a) to (f) are graphs showing changes in average luminance values in the measurement region in the case of the face motion of the subject in embodiment 3.

Detailed Description

Embodiment mode 1

Fig. 1 is a block diagram schematically showing the configuration of a pulse estimation device 100, which is the information processing apparatus according to embodiment 1.

The pulse estimation device 100 is a device capable of executing the pulse estimation method, which is the information processing method according to embodiment 1.

As shown in fig. 1, the pulse estimation device 100 includes a skin region detection unit 110, a measurement region setting unit 120, a pulse source signal extraction unit 130, a phase matching degree calculation unit 140, and a pulse estimation unit 150.

First, an outline of the pulse estimation device 100 will be described. The pulse estimation device 100 receives video data composed of a series of frames Im(k) obtained by imaging, at a predetermined frame rate Fr, a space that includes a skin region of the subject. Here, k denotes the frame number assigned to each frame; for example, the frame given at the timing following frame Im(k) is frame Im(k+1). The pulse estimation device 100 then outputs a pulse estimation result P(t) from a series of Tp frames Im(k-Tp+1) to Im(k), where Tp is a specific number of frames. Here, t denotes an output number assigned for each set of Tp frames; for example, the pulse estimation result given at the timing following P(t) is P(t+1).

Here, the frame number k and the output number t are integers of 1 or more, and the number of frames Tp is an integer of 2 or more.

The number of subjects, which are persons included in the image data, may be 1 person or a plurality of persons. For simplicity of explanation, the following description will be given assuming that the number of subjects included in the image data is 1 person.

The frame rate Fr is, for example, preferably 30 frames per second. The image data is, for example, a color image, a grayscale image, or a range image. For simplicity of description, the case where the image data is an 8-bit grayscale image with a width of 640 pixels and a height of 480 pixels will be described below. The number of frames Tp can be any number; for example, it is a number of frames corresponding to 10 seconds, which in the above example is preferably 300 frames.
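As a concrete illustration of this input/output relationship, the following Python sketch buffers the most recent Tp frames and emits one pulse estimation result P(t) once the buffer is full. The values of FR and TP are the example values from the text; estimate_pulse is a placeholder for the processing described in the remainder of this section, not a function defined by the patent.

```python
from collections import deque

FR = 30    # frame rate Fr [frames/s], example value from the text
TP = 300   # number of frames Tp (10 s at 30 fps), example value from the text

frame_buffer = deque(maxlen=TP)  # holds frames Im(k-Tp+1) .. Im(k)

def on_new_frame(frame, estimate_pulse):
    """Append the latest frame Im(k); once Tp frames are buffered,
    run the pulse estimation pipeline and return P(t)."""
    frame_buffer.append(frame)
    if len(frame_buffer) == TP:
        return estimate_pulse(list(frame_buffer))
    return None
```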

Next, each part constituting the pulse estimation device 100 will be described.

The skin region detection unit 110 detects a skin region, which is a region including the skin of the subject, from a frame Im(k) included in the image data supplied as input information from an imaging device described later, and generates skin region information S(k) indicating the detected skin region. The generated skin region information S(k) is supplied to the measurement region setting unit 120.

The skin region in embodiment 1 is a region corresponding to the face of the subject. However, the skin region may be a region other than the face of the subject. For example, the skin region may be a region corresponding to a part of the face such as eyes, eyebrows, a nose, a mouth, a forehead, a cheek, or a chin. The skin region may be a region corresponding to a body part other than the face, such as a head, a shoulder, a hand, a neck, or a foot. In addition, the skin region may be a plurality of regions.

The skin region information S(k) can include information indicating the presence or absence of a detected skin region and the position and size of the detected skin region on the image. Here, the skin region information S(k) is described as information indicating a rectangular region that represents the position and size of the face on the image.

Specifically, when the skin region is a region corresponding to the face of the subject, the skin region information S(k) indicates, for example, whether or not the face of the subject has been detected, the center coordinates Fc = (Fcx, Fcy) of a rectangle surrounding the face, and the width Fcw and height Fch of that rectangle.

For example, the presence or absence of a detected face is set to "1" when the face can be detected, and is set to "0" when the face cannot be detected.

The center coordinates of the rectangle are expressed in the coordinate system of the frame Im(k), with the upper left of the frame Im(k) as the origin, the rightward direction of the frame Im(k) as the positive x-axis direction, and the downward direction of the frame Im(k) as the positive y-axis direction.

Detection of the subject's face can be realized by known means. For example, a cascade-type face detector using Haar-like features can be used to extract a rectangular region surrounding the face of the subject.
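As one possible concretization of this step (an assumption on our part, since the patent does not prescribe a specific implementation), the following Python sketch uses OpenCV's stock Haar cascade to produce the fields of S(k) described above; the cascade file name and detection parameters are illustrative.

```python
import cv2

# Stock OpenCV Haar cascade for frontal faces (illustrative asset choice).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_skin_region(frame_gray):
    """Return S(k) as (detected_flag, center (Fcx, Fcy), width Fcw, height Fch)."""
    faces = face_cascade.detectMultiScale(frame_gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return 0, None, None, None               # "0": no face detected
    x, y, w, h = faces[0]                        # one subject assumed; take the first hit
    return 1, (x + w / 2.0, y + h / 2.0), w, h   # "1": face detected
```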

The measurement region setting unit 120 receives the frame Im(k) and the skin region information S(k), sets, in the image region corresponding to the skin region indicated by the skin region information S(k), a plurality of measurement regions for extracting a pulse signal, and generates measurement region information R(k) indicating the plurality of set measurement regions. The generated measurement region information R(k) is supplied to the pulse source signal extraction unit 130.

The measurement region information R(k) can include information indicating the positions and sizes of Rn (a positive integer) measurement regions on the image. Each measurement region is denoted as a measurement region Ri(k) (i = 1, 2, …, Rn). Here, the measurement region Ri(k) is a quadrilateral, and the position and size of the measurement region Ri(k) are given by the coordinate values of the 4 vertices of the quadrilateral in the image.

Next, an example of using facial organ detection will be described with reference to fig. 2 as an example of a method of setting the measurement region ri (k).

First, the measurement region setting unit 120 detects Ln (a positive integer) landmarks of facial organs (such as the outer canthus, inner canthus, nose, and mouth) shown in fig. 2 (a) or fig. 2 (b) in the skin region sr indicated by the skin region information S(k), and sets L(k) as a vector storing the coordinate values of each landmark. In fig. 2 (a) and fig. 2 (b), the landmarks are indicated by circles.

Facial organ detection can be realized using known means. For example, a model called a Constrained Local Model (CLM) can be used to detect the coordinate values of facial organ landmarks. The number Ln of landmarks is not particularly limited, but is preferably the 66 points shown in fig. 2 (a) or the 29 points shown in fig. 2 (b). A larger number of landmarks stabilizes the detection result but increases the amount of processing, so the number of landmarks is preferably determined according to hardware such as the CPU. In the following, the number Ln of landmarks is described as 66 points.

Next, the measurement region setting unit 120 sets the vertex coordinates of the quadrilateral of each measurement region Ri(k) with reference to the detected landmarks. For example, the measurement region setting unit 120 sets the vertex coordinates of the quadrilaterals shown in fig. 2 (c), thereby setting Rn measurement regions Ri(k). Here, the number of measurement regions Rn is described as 12 as an example.

A specific setting method of the measurement region ri (k) will be described with reference to fig. 3.

Here, a case will be described where the measurement region ri (k) is set in a portion of the skin region sr corresponding to the cheek.

First, the measurement region setting unit 120 selects a landmark A1 on the contour of the face and a landmark A2 on the nose. For example, the measurement region setting unit 120 may select the landmark A2 on the nose and then select the landmark A1 on the contour of the face that is closest to the landmark A2.

Then, the measurement region setting unit 120 sets auxiliary landmarks a1, a2, and a3 so that they divide the line segment between landmark A1 and landmark A2 into four equal parts.

Similarly, the measurement region setting unit 120 selects a landmark B1 on the contour of the face and a landmark B2 on the nose, and sets auxiliary landmarks b1, b2, and b3 so that they divide the line segment between landmark B1 and landmark B2 into four equal parts. The landmark B1 may be selected, for example, from the face-contour landmarks adjacent to landmark A1, and the landmark B2 from the nose landmarks adjacent to landmark A2.

Next, the measurement region setting unit 120 defines a rectangular region surrounded by the auxiliary landmarks a1, b1, b2, and a2 as one measurement region R1. The auxiliary landmarks a1, b1, b2, and a2 are vertex coordinates corresponding to the measurement region R1.

Similarly, the measurement region setting unit 120 defines a rectangular region surrounded by the auxiliary landmarks a2, b2, b3, and a3 as one measurement region R2. The auxiliary landmarks a2, b2, b3, and a3 are vertex coordinates corresponding to the measurement region R2.

The measurement region setting unit 120 performs the same processing on the other parts of the cheeks and the part corresponding to the chin, thereby setting the vertex coordinates of the quadrilaterals of the measurement regions Ri(k).

Then, the measurement region setting unit 120 generates information including the coordinates of the 4 vertices of each measurement region Ri(k) as the measurement region information R(k), and supplies the measurement region information R(k) to the pulse source signal extraction unit 130.
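The quartering construction described above can be written compactly. The following Python sketch is an illustrative reading of it; the landmark coordinates are assumed to come from a CLM-style detector, and the function names are placeholders rather than terms from the patent.

```python
import numpy as np

def quarter_points(p, q):
    """Auxiliary landmarks dividing the segment p-q into four equal parts."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return [p + (q - p) * s for s in (0.25, 0.5, 0.75)]

def cheek_regions(A1, A2, B1, B2):
    """Two quadrilateral measurement regions built from the auxiliary landmarks."""
    a1, a2, a3 = quarter_points(A1, A2)
    b1, b2, b3 = quarter_points(B1, B2)
    R1 = np.array([a1, b1, b2, a2])   # 4 vertex coordinates of measurement region R1
    R2 = np.array([a2, b2, b3, a3])   # 4 vertex coordinates of measurement region R2
    return R1, R2
```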

In the above example, the measurement region setting unit 120 detects the landmark coordinates by CLM, but this is not a limitation. For example, the measurement region setting unit 120 may use a tracking technique such as the Kanade-Lucas-Tomasi (KLT) tracker. Specifically, the measurement region setting unit 120 may detect the landmark coordinates by CLM for the first frame Im(1), track the landmark coordinates with the KLT tracker from the next frame Im(2) onward, and thereby obtain the landmark coordinates for each frame Im(k). Because tracking removes the need to run CLM on every frame Im(k), the amount of processing can be reduced. In this case, since tracking errors accumulate, the measurement region setting unit 120 may perform a reset process, such as re-running CLM once every several frames to reset the landmark coordinate positions.
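A minimal sketch of this tracking variant, assuming OpenCV's pyramidal Lucas-Kanade implementation as the KLT tracker; the reset interval is an illustrative choice, not a value given in the text.

```python
import cv2
import numpy as np

RESET_INTERVAL = 30   # re-run CLM every 30 frames (illustrative value)

def track_landmarks(prev_gray, gray, landmarks):
    """Propagate the landmark coordinates L(k-1) -> L(k) with the KLT tracker."""
    p0 = np.asarray(landmarks, dtype=np.float32).reshape(-1, 1, 2)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    return p1.reshape(-1, 2), status.ravel()
```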

The positions of the measurement regions are not limited to the 12 regions shown in fig. 2 (c). For example, the forehead or the nose region may also be included. The measurement region setting unit 120 may also change the set regions according to the subject. For example, the measurement region setting unit 120 may detect that the subject's bangs hang over the forehead and exclude the forehead region from the measurement regions. If the subject wears glasses with thick rims, the measurement region setting unit 120 may detect the position of the glasses and exclude that region from the measurement regions. If the subject has a beard, the measurement region setting unit 120 may exclude the bearded region from the measurement regions. Furthermore, a measurement region may overlap another measurement region.

Returning to fig. 1, the pulse source signal extraction unit 130 receives the frame Im(k) and the measurement region information R(k), extracts, from each of the plurality of measurement regions Ri(k) indicated by the measurement region information R(k), a pulse source signal indicating the luminance change in the period corresponding to the predetermined number of frames Tp, and generates pulse source signal information W(t) indicating the extracted pulse source signals. The pulse source signal is a signal that serves as the source for pulse estimation. The generated pulse source signal information W(t) is supplied to the phase matching degree calculation unit 140 and the pulse estimation unit 150.

The pulse source signal information W(t) can include information indicating the pulse source signal Wi(t) extracted in each measurement region Ri(k). The pulse source signal Wi(t) is time series data of length Tp, extracted from the past Tp frames Im(k-Tp+1), Im(k-Tp+2), …, Im(k) and the measurement region information R(k-Tp+1), R(k-Tp+2), …, R(k).

In performing this extraction, the pulse source signal extraction unit 130 calculates, for each frame, the luminance feature amount Gi(j) (j = k-Tp+1, k-Tp+2, …, k) of each measurement region Ri(j). The luminance feature amount Gi(j) is a value calculated from the luminance values in the frame Im(j) for each measurement region Ri(j), for example the average or the variance of the luminance values of the pixels included in the measurement region Ri(j). Here, the luminance feature amount Gi(j) is described as the average of the luminance values of the pixels included in the measurement region Ri(j). The signal obtained by arranging the Gi(j) calculated for each frame in time series is the pulse source signal Wi(t); that is, Wi(t) = [Gi(k-Tp+1), Gi(k-Tp+2), …, Gi(k)].
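A minimal Python sketch of this step, assuming quadrilateral measurement regions given as 4 vertex coordinates and grayscale frames; the helper names are placeholders, not terms from the patent.

```python
import cv2
import numpy as np

def luminance_feature(frame_gray, region_vertices):
    """Gi(j): mean luminance of the pixels inside one quadrilateral measurement region."""
    mask = np.zeros(frame_gray.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.asarray(region_vertices, dtype=np.int32), 255)
    return float(cv2.mean(frame_gray, mask=mask)[0])

def pulse_source_signal(frames, regions_per_frame, i):
    """Wi(t) = [Gi(k-Tp+1), ..., Gi(k)] for measurement region i over the Tp buffered frames."""
    return np.array([luminance_feature(frame, regions[i])
                     for frame, regions in zip(frames, regions_per_frame)])
```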

Then, the pulse source signal extraction unit 130 generates, as the pulse source signal information W(t), information obtained by collecting the pulse source signals Wi(t) of the respective measurement regions Ri(k). The generated pulse source signal information W(t) is supplied to the phase matching degree calculation unit 140 and the pulse estimation unit 150.

The pulse source signal Wi(t) contains various noise components in addition to the pulse component and the facial motion component. An example of such a noise component is noise caused by device imperfections of the imaging device described later. To remove these noise components, filtering is preferably performed as preprocessing on the pulse source signal Wi(t).

In the filtering, the pulse source signal Wi(t) is processed using, for example, a low-pass filter, a high-pass filter, or a band-pass filter. In the following description, band-pass filtering is used.

As the band-pass filter, for example, a Butterworth filter can be used. As the cutoff frequencies of the band-pass filter, for example, a lower cutoff frequency of 0.5 Hz and an upper cutoff frequency of 5.0 Hz are preferable.
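A sketch of this preprocessing step with SciPy, using the cutoff frequencies suggested above; the sampling frequency is taken to be the frame rate Fr, and the filter order is an assumption not stated in the text.

```python
from scipy.signal import butter, filtfilt

def bandpass(wi, fs=30.0, low=0.5, high=5.0, order=4):
    """Band-pass filter the pulse source signal Wi(t) to suppress drift and sensor noise."""
    b, a = butter(order, [low / (fs / 2.0), high / (fs / 2.0)], btype="band")
    return filtfilt(b, a, wi)
```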

The type of filtering is not limited to the above band-pass filtering, nor are the cutoff frequencies limited to these values; the filter type and cutoff frequencies may be set according to the state or situation of the subject.

The phase matching degree calculation unit 140 receives the pulse source signal information W(t), calculates phase matching degrees each indicating the degree to which the phases of corresponding basis components, among the plurality of basis components contained in the pulse source signal information W(t), match each other, and generates phase matching degree information C(t) indicating the phase matching degree for each basis component. The phase matching degree information C(t) is supplied to the pulse estimation unit 150. Since the phase is an attribute of a basis component, the degree to which the phases match is a degree to which attributes match; in this sense, the phase matching degree calculation unit 140 can also be called an attribute matching degree calculation unit.

Specifically, the phase matching degree calculation unit 140 selects a pair of two pulse source signals from the plurality of pulse source signals indicated by the pulse source signal information W(t). Here, the two pulse source signals of the selected pair are referred to as the 1st pulse source signal and the 2nd pulse source signal. The phase matching degree calculation unit 140 calculates a plurality of matching degrees indicating the degree to which the phases of corresponding basis components match each other between each of the plurality of basis components constituting the 1st pulse source signal and each of the plurality of basis components constituting the 2nd pulse source signal. The phase matching degree calculation unit 140 then determines the plurality of phase matching degrees from the plurality of calculated matching degrees, where each of the plurality of phase matching degrees corresponds to one of the plurality of basis components.

The phase matching degree calculation unit 140 may select one pair, or may select two or more pairs.

When a plurality of pairs are selected, the phase matching degree calculation unit 140 may determine a plurality of values obtained by summing the plurality of matching degrees calculated in the plurality of pairs for each corresponding basis component as a plurality of phase matching degrees.

The phase matching degree calculation unit 140 may select one pair and determine the plurality of matching degrees calculated in the pair as the plurality of phase matching degrees.

Next, the case where the phase matching degree calculation unit 140 selects a plurality of pairs will be described.

Fig. 4 is a block diagram schematically showing the configuration of the phase matching degree calculation unit 140.

The phase matching degree calculation unit 140 includes a two-region phase matching degree calculation unit 141 and a phase matching degree summation unit 142.

The two-region phase matching degree calculation unit 141 selects, based on the pulse source signal information W(t), a pair of two measurement regions Ru(k) and Rv(k) from among the plurality of measurement regions Ri(k) used to calculate the pulse source signals Wi(t), and thereby selects the pair of two pulse source signals Wu(t) and Wv(t) corresponding to the two measurement regions Ru(k) and Rv(k). The two-region phase matching degree calculation unit 141 then calculates the two-region phase matching degree Cuv(t), which is the degree to which the phases of the basis components match within the selected pair of pulse source signals Wu(t) and Wv(t). Here, u, v = 1, 2, …, Rn and u ≠ v.

At least one pair may be generated, or a plurality of pairs may be generated, and the two-region phase matching degree Cuv(t) is calculated for each pair. The two-region phase matching degree calculation unit 141 supplies information obtained by collecting the generated two-region phase matching degrees Cuv(t) to the phase matching degree summation unit 142 as two-region phase matching degree information N(t).

The operation in which the two-region phase matching degree calculation unit 141 calculates the degree of phase matching of the basis components will now be described in detail.

When calculating the degree to which the phases of the basis components match, the two-region phase matching degree calculation unit 141 first decomposes each pulse source signal Wi(t) into basis components. In the following, the case where frequency components are used as the basis components is described as an example. A basis component is a signal component constituting the pulse source signal Wi(t), such that the pulse source signal can be expressed when the basis components are given as arguments of some function.

First, the two-region phase matching degree calculation unit 141 decomposes the pulse source signal Wi(t) of each measurement region Ri(k) included in the pulse source signal information W(t) into frequency components. For this decomposition, for example, the Fast Fourier Transform (FFT) is used. By the FFT, the pulse source signal Wi(t), which is time series data, can be decomposed into frequency-component data (the magnitude (power) and phase of each frequency component). When the FFT is applied to the pulse source signal Wi(t), the magnitude of each frequency component f is denoted |Fi(f, t)| and its phase ∠Fi(f, t). Because components above the Nyquist frequency (half of the sampling frequency) are folded back, f takes the values 0, Δf, 2×Δf, …, Sr×Δf/2. Here, Δf is determined by the length of the time series, that is, the Tp frames: when the Tp frames span Ts seconds, Δf = 1/Ts.

Next, the two-region phase matching degree calculation unit 141 selects a pair of two measurement regions Ru(k) and Rv(k) from among the plurality of measurement regions Ri(k) used to calculate the pulse source signals Wi(t), and calculates the two-region phase matching degree Cuv(t), which is the degree to which the phases of the basis components (frequency components) match within that pair of measurement regions Ru(k) and Rv(k). The two-region phase matching degree Cuv(t) is calculated, for example, as the absolute value of the difference between the phase ∠Fu(f, t) of measurement region u and the phase ∠Fv(f, t) of measurement region v. By taking the absolute value of the phase difference for each frequency component, the degree of phase matching for each frequency component can be calculated.

In this case, the smaller the absolute value of the phase difference, the higher the degree of phase matching, and the larger the absolute value of the phase difference, the lower the degree of phase matching. The degrees of phase matching calculated for the respective frequency components are arranged to form the two-region phase matching degree Cuv(t).
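A minimal Python sketch of the two-region phase matching degree as defined above; wrapping the phase difference to [0, π] before taking its magnitude is an implementation assumption.

```python
import numpy as np

def two_region_phase_match(wu, wv, fs=30.0):
    """Cuv(t): per-frequency absolute phase difference between two pulse source signals."""
    Fu, Fv = np.fft.rfft(wu), np.fft.rfft(wv)
    freqs = np.fft.rfftfreq(len(wu), d=1.0 / fs)
    dphi = np.angle(Fu) - np.angle(Fv)
    cuv = np.abs(np.angle(np.exp(1j * dphi)))   # wrap to [-pi, pi], then take the magnitude
    return freqs, cuv
```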

Then, the two-region phase matching degree calculation unit 141 collects the two-region phase matching degrees Cuv(t), each given per frequency component, to generate the two-region phase matching degree information N(t).

The two-region phase matching degree calculation unit 141 need not calculate the phase matching degree for all frequency components; it may calculate the phase matching degree only for frequency components satisfying a specific condition. For example, when the power (magnitude) of a frequency component is extremely small, that component can be regarded as noise rather than a pulse component, so the two-region phase matching degree calculation unit 141 may skip calculating the phase matching degree for that frequency component. Alternatively, the two-region phase matching degree calculation unit 141 may assign a constant value to the phase matching degree of that frequency component so that its phase matching degree is artificially low.

The phase matching degree summation unit 142 receives the two-region phase matching degree information N(t), sums the two-region phase matching degrees Cuv(t) for each basis component, and generates the phase matching degree information C(t) indicating the phase matching degree of each basis component across the measurement regions Ri(k). The phase matching degree information C(t) is supplied to the pulse estimation unit 150. For example, the phase matching degree information C(t) is calculated by adding, for each frequency component, the two-region phase matching degrees Cuv(t) included in the two-region phase matching degree information N(t).
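Continuing the previous sketch (and reusing its two_region_phase_match helper), the summation over pairs can look like the following; taking all pairs of regions is an illustrative choice, since the text allows any selection of pairs.

```python
import numpy as np
from itertools import combinations

def phase_match_info(signals, fs=30.0):
    """C(t): per-frequency sum of Cuv(t) over the selected pairs (all pairs here)."""
    freqs, total = None, None
    for u, v in combinations(range(len(signals)), 2):
        freqs, cuv = two_region_phase_match(signals[u], signals[v], fs)  # from the previous sketch
        total = cuv if total is None else total + cuv
    return freqs, total
```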

The method of calculating the phase matching degree information C(t) is not limited to per-component addition; multiplication or other operations may also be used.

When the two-region phase matching degree calculation unit 141 selects only one pair of measurement regions Ru(k) and Rv(k) from among the plurality of measurement regions Ri(k) used to calculate the pulse source signals Wi(t), the phase matching degree summation unit 142 need not be provided, and the two-region phase matching degree calculation unit 141 may supply the two-region phase matching degree information N(t) to the pulse estimation unit 150 as the phase matching degree information C(t).

Returning to fig. 1, the pulse estimation unit 150 estimates the pulse from the pulse source signal information W(t) and the phase matching degree information C(t), and outputs a pulse estimation result P(t), which is pulse information indicating the estimated pulse. The pulse information may be, for example, time series data of the estimated pulse or the heart rate. Here, for simplicity of explanation, the pulse information is assumed to indicate the heart rate (the number of beats per minute).

For example, when the heart rate is output as the pulse estimation result P(t), the pulse estimation unit 150 specifies, from the per-frequency phase matching degree information C(t), the frequency component (basis component) with the highest degree of phase matching, and estimates the pulse from the specified frequency component. Specifically, the pulse estimation unit 150 regards the frequency component with the highest degree of phase matching as corresponding to the pulse, and outputs the frequency of the frequency component corresponding to the pulse as the heart rate.
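A sketch of this final step under the conventions used above: because C(t) was built from absolute phase differences, the "highest degree of phase matching" corresponds to the smallest summed value, and restricting the search to the 0.5-5.0 Hz band mirrors the band-pass range (an assumption, not a requirement of the text).

```python
import numpy as np

def estimate_heart_rate(freqs, c_t, band=(0.5, 5.0)):
    """P(t): heart rate in beats per minute from the frequency whose phases agree best.
    Since C(t) sums absolute phase differences, 'best agreement' is the minimum value."""
    in_band = np.flatnonzero((freqs >= band[0]) & (freqs <= band[1]))
    best = in_band[np.argmin(c_t[in_band])]
    return freqs[best] * 60.0
```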

For example, as shown in fig. 5 (a), a part or all of the skin region detection unit 110, the measurement region setting unit 120, the pulse source signal extraction unit 130, the phase matching degree calculation unit 140, and the pulse estimation unit 150 described above may be constituted by a memory 1 and a processor 2, such as a CPU (Central Processing Unit), that executes a program stored in the memory 1. Such a program may be provided via a network or recorded on a recording medium; that is, it may be provided, for example, as a program product.

As shown in fig. 5 (b), for example, a part or all of the skin region detection unit 110, the measurement region setting unit 120, the pulse source signal extraction unit 130, the phase matching degree calculation unit 140, and the pulse estimation unit 150 may be constituted by a processing circuit 3 such as a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an Application Specific Integrated Circuit (ASIC), or a Field Programmable Gate Array (FPGA).

Fig. 6 is a flowchart showing the operation of the pulse estimation device 100 according to embodiment 1.

The operation shown in fig. 6 is performed once every 1 frame of the input captured image, that is, within a 1-frame period.

First, the skin region detection unit 110 detects the skin region of the subject from the frame Im(k) supplied as input information from the imaging device described later, and generates skin region information S(k) indicating the detected skin region (S10). The generated skin region information S(k) is supplied to the measurement region setting unit 120.

Next, the measurement region setting unit 120 receives the frame Im(k) and the skin region information S(k), sets a plurality of measurement regions Ri(k) for extracting pulse signals in the skin region indicated by the skin region information S(k), and generates measurement region information R(k) indicating the set measurement regions Ri(k) (S11). The generated measurement region information R(k) is supplied to the pulse source signal extraction unit 130.

Next, the pulse source signal extraction unit 130 receives the frame Im(k) and the measurement region information R(k), extracts from the luminance values in each measurement region Ri(k) indicated by the measurement region information R(k) the pulse source signal Wi(t) that serves as the source of the pulse, and generates pulse source signal information W(t) indicating the extracted pulse source signals Wi(t) (S12). The generated pulse source signal information W(t) is supplied to the phase matching degree calculation unit 140 and the pulse estimation unit 150.

Next, the phase matching degree calculation unit 140 receives the pulse source signal information W(t), calculates, for the basis components contained in the pulse source signals Wi(t) indicated by the pulse source signal information W(t), the degree to which the phases match between the measurement regions Ri(k), and generates phase matching degree information C(t) indicating the degree of phase matching for each basis component (S13). The phase matching degree information C(t) is supplied to the pulse estimation unit 150.

Next, the pulse estimation unit 150 estimates the pulse from the pulse source signal information W(t) and the phase matching degree information C(t), and outputs a pulse estimation result P(t) indicating the estimated pulse (S14).

Next, the effects of the pulse estimation device 100 according to embodiment 1 will be described with reference to fig. 7 to 9.

Fig. 7 is a schematic diagram showing a positional relationship among the face of the subject, the imaging device 160, and the light source 161 of the ambient light.

As shown in fig. 7, a measurement region A and a measurement region B are set in the skin region on the face of the subject.

Fig. 8 (a) to (c) show examples of images obtained by imaging the face of the subject by the imaging device 160 shown in fig. 7.

The image shown in fig. 8 (a) is an example when the face of the subject is positioned at the center of the imaging device 160. This position is set as a reference position.

The image shown in fig. 8 (b) is an example when the face of the subject is located on the right side of the center of the imaging device 160. This position is set as the right position.

The image shown in fig. 8 (c) is an example in which the face of the subject is positioned on the left side of the center of the imaging device 160. This position is set as the left position.

When the face is at the right position, the measurement region A becomes brighter than at the reference position, and when the face is at the left position, the measurement region A becomes darker than at the reference position.

When the face is at the right position, the measurement area B becomes darker than the reference position, and when the face is at the left position, the measurement area B becomes brighter than the reference position.

Fig. 9 (a) to (d) show changes in average of luminance values in the measurement region A, B when the face of the subject moves in the order of the reference position, right position, reference position, left position, reference position, right position, reference position, and left position.

Fig. 9 (a) shows the change in the average luminance value due to the facial motion component in measurement region A when the face of the subject moves as described above, and fig. 9 (b) shows the change in the average luminance value due to the pulse component in measurement region A for the same movement.

Fig. 9 (c) shows the change in the average luminance value due to the facial motion component in measurement region B when the face of the subject moves as described above, and fig. 9 (d) shows the change in the average luminance value due to the pulse component in measurement region B for the same movement.

As shown in fig. 9 (a) to (d), when the face of the subject moves, the average luminance value in each measurement region changes in accordance with the movement. At this time, the frequency of the luminance change due to the facial motion component takes similar values in the different measurement regions, but the phases differ.

For example, as shown in fig. 9 (a) and (c), measurement region A and measurement region B differ in the timing at which they brighten and darken; in other words, when measurement region A becomes brighter, measurement region B becomes darker. Therefore, when the luminance change is viewed as a signal component, the phases of the frequency components differ.

On the other hand, as shown in fig. 9 (b) and (d), both the frequency and the phase of the luminance change due to the pulse component take similar values in all measurement regions.

That is, the luminance change caused by facial movement and the luminance change caused by the pulse differ in whether their phases differ or agree across measurement regions. Therefore, by comparing the degree to which the phases of the basis components coincide among the measurement regions, the luminance-change component due to facial movement can be distinguished from the luminance-change component due to the pulse.

In particular, by selecting a base component having a high phase matching degree as the pulse component, it is possible to suppress the influence of the facial movement and estimate the pulse with high accuracy.

Since the luminance changes in the measurement regions exhibit the above characteristics, the pulse estimation device 100 according to embodiment 1 estimates the pulse based on the degree of phase matching of the basis components of the pulse source signals extracted from the measurement regions, and can thereby suppress the loss of accuracy caused by facial movement and estimate the pulse with high accuracy.

Further, by generating pairs of two regions from the plurality of measurement regions and calculating and summing the matching degrees of the basis components for each pair, the phase matching degrees of the basis components between the plurality of measurement regions can be calculated, and therefore, the pulse can be estimated with higher accuracy.

In embodiment 1, the description has been given taking the image data as a grayscale image, but the image data is not limited to this. For example, an RGB image may also be used as the image data. The grayscale image may be image data acquired by an imaging device capable of receiving near-infrared light (for example, light having a wavelength of 850nm or 940 nm). In this case, the pulse estimation device 100 according to embodiment 1 can estimate the pulse even at night by illuminating the subject with the illumination device using near-infrared light and capturing an image.

In embodiment 1, as shown in fig. 7, the description has been given assuming that the ambient light is irradiated from above, but the present invention is not limited to this. For example, the ambient light may be emitted from the side.

In embodiment 1, the pulse estimation result P(t) is described as the heart rate, but it is not limited thereto. For example, the pulse estimation unit 150 may regard the component with the highest degree of phase matching in the per-component phase matching degree information C(t) as corresponding to the pulse, and synthesize a pulse waveform by performing an inverse Fourier transform using the data of the corresponding frequency components.

In embodiment 1, the number of subjects included in the image data is 1 person, but the present invention is not limited to this. When 2 or more persons are present, the pulse rate may be estimated for each subject.

Embodiment mode 2

As shown in fig. 1, the pulse estimation device 200 according to embodiment 2 includes a skin region detection unit 110, a measurement region setting unit 120, a pulse source signal extraction unit 130, a phase matching degree calculation unit 240, and a pulse estimation unit 150.

The skin region detection unit 110, the measurement region setting unit 120, the pulse source signal extraction unit 130, and the pulse estimation unit 150 of the pulse estimation device 200 according to embodiment 2 are the same as the skin region detection unit 110, the measurement region setting unit 120, the pulse source signal extraction unit 130, and the pulse estimation unit 150 of the pulse estimation device 100 according to embodiment 1.

The pulse estimation device 200 is a device capable of executing the pulse estimation method, which is the information processing method according to embodiment 2.

The phase matching degree calculation unit 240 in embodiment 2 selects a plurality of pairs of two pulse source signals from the plurality of pulse source signals indicated by the pulse source signal information W(t). Then, as in embodiment 1, the phase matching degree calculation unit 240 calculates, for each of the selected pairs, a plurality of matching degrees indicating the degree to which the phases of the corresponding basis components match. The phase matching degree calculation unit 240 then sets a weight coefficient for each of the plurality of pairs, weights the matching degrees calculated for each pair using its weight coefficient, and determines, as the plurality of phase matching degrees, the values obtained by summing the weighted matching degrees for each corresponding basis component.

Here, the phase matching degree calculation unit 240 can set the weight coefficients so that a pair whose calculated matching degrees indicate a higher degree of phase matching is given a heavier weight.

The phase matching degree calculation unit 240 may set the weight coefficient so that the longer the distance between the two measurement regions ru (k) and rv (k) corresponding to each of the plurality of pairs, the heavier the weight.

Fig. 10 is a block diagram schematically showing the configuration of the phase matching degree calculation unit 240 in embodiment 2.

The phase matching degree calculation unit 240 includes a two-inter-region phase matching degree calculation unit 141, a phase matching degree summation unit 242, and a weight coefficient calculation unit 243.

The two-inter-region phase matching degree calculation unit 141 of the phase matching degree calculation unit 240 in embodiment 2 is the same as the two-inter-region phase matching degree calculation unit 141 of the phase matching degree calculation unit 140 in embodiment 1. However, the two-inter-region phase matching degree calculation unit 141 in embodiment 2 supplies the two-inter-region phase matching degree information n(t) to the phase matching degree summation unit 242 and the weight coefficient calculation unit 243.

The weight coefficient calculation unit 243 receives the inter-region phase matching degree information n (t), calculates a weight coefficient duv (t) for each of the two inter-region phase matching degrees cuv (t) indicated by the inter-region phase matching degree information n (t), and generates weight information d (t) indicating the calculated weight coefficient duv (t). The weight information d (t) is supplied to the phase matching degree totaling unit 242.

The weight information d (t) can include a weight coefficient duv (t) for each of the two inter-region phase matching degrees cuv (t) included in the two inter-region phase matching degree information n (t). The weight coefficient duv (t) takes a value between "0" and "1", for example. For example, when the weight coefficient duv (t) is "0", the weight for the corresponding phase matching degree cuv (t) between the two regions is small, and when the weight coefficient duv (t) is "1", the weight for the corresponding phase matching degree cuv (t) between the two regions is large. When the weight coefficient duv (t) is "0.5", the weight is intermediate between "0" and "1".

The weighting factor duv (t) is determined according to the phase matching degree cuv (t) between two regions, for example. As described above, when the absolute value of the phase difference of the basis component acquired in each measurement region ri (k) is used as the two inter-region phase matching degree cuv (t), each element of the two inter-region phase matching degree cuv (t) becomes smaller as the phase of the basis component matches between the two measurement regions ru (k) and rv (k). Therefore, the smaller the phase matching degree cuv (t) between the two regions, the more basic components whose phases are matched between the two measurement regions ru (k) and rv (k) exist. That is, the smaller the phase matching degree cuv (t) between the two regions is, the larger the corresponding weight coefficient duv (t) is set, and thus the matching degree of the phase between the measurement regions ri (k) can be calculated using the information of the pair of the two measurement regions ru (k) and rv (k) including the pulse component.

Based on the above, for example, it is preferable to determine the corresponding weight coefficient duv (t) based on the minimum value of the phase matching degree cuv (t) between the two regions so that the weight is set to be larger as the phases match.

Next, a method of using the minimum value of the phase matching degree cuv (t) between the two regions will be described as a method of determining the weight coefficient duv (t).

Let cmax be the maximum value that the two-inter-region phase matching degree cuv(t) can take, and let cmin be its minimum value. When the minimum value among the corresponding two-inter-region phase matching degrees cuv(t) is denoted by cuvmin(t), the weight coefficient duv(t) is calculated by the following expression (1).

duv(t) = 1.0 - (cuvmin(t) - cmin) / (cmax - cmin)   (1)

By calculating the weight coefficient duv(t) using the above expression, the weight coefficient duv(t) can be set to be larger as the phases are more closely matched, in other words, as the corresponding two-inter-region phase matching degree cuv(t) contains a smaller minimum value.
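A minimal Python sketch of expression (1) is given below; it assumes that cuv is supplied as an array of absolute phase differences (one per basis component) for a single pair of measurement regions and that cmin and cmax are known in advance. The function and argument names are illustrative.

    import numpy as np

    def weight_from_min_matching(cuv, c_min, c_max):
        # cuv: two-inter-region phase matching degrees (absolute phase differences)
        # for one pair of measurement regions, one value per basis component.
        cuv_min = np.min(cuv)                             # best-matched basis component
        return 1.0 - (cuv_min - c_min) / (c_max - c_min)  # expression (1)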

The weight coefficient calculation unit 243 collects the weight coefficients duv (t) calculated for the phase matching degrees cuv (t) between the two regions, and generates weight information d (t) indicating the weight coefficients duv (t). The weight information d (t) is supplied to the phase matching degree totaling unit 242.

The phase matching degree summing unit 242 receives the phase matching degree information n (t) between the two regions and the weight information d (t), generates phase matching degree information c (t) for each fundamental component between the measurement regions, and supplies the generated phase matching degree information c (t) to the pulse estimating unit 150.

Specifically, the phase matching degree adding unit 242 performs weighted addition of the two inter-region phase matching degrees cuv (t) indicated by the two inter-region phase matching degree information n (t) using the weight coefficient duv (t) indicated by the corresponding weight information d (t) for each basis component.

The weighted addition is performed as shown in the following expression (2).

Σu,v(duv(t)×cuv(t)) (2)

As shown in expression (2), the phase matching degree summing unit 242 multiplies each two-inter-region phase matching degree cuv(t) indicated by the two-inter-region phase matching degree information n(t) by the corresponding weight coefficient duv(t) indicated by the weight information d(t), and sums the results for each basis component, thereby generating the phase matching degree information c(t) indicating the degree to which the phase of each basis component matches between the measurement regions ri(k). The phase matching degree information c(t) is supplied to the pulse estimating unit 150.
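The weighted summation of expression (2) can be sketched in Python as follows, assuming the two-inter-region phase matching degrees are arranged as an array with one row per pair and one column per basis component; the array layout and names are assumptions.

    import numpy as np

    def sum_phase_matching(cuv_by_pair, duv_by_pair):
        # cuv_by_pair: shape (number of pairs, number of basis components).
        # duv_by_pair: shape (number of pairs,), one weight coefficient per pair.
        cuv = np.asarray(cuv_by_pair, dtype=float)
        duv = np.asarray(duv_by_pair, dtype=float)
        # One summed phase matching degree per basis component (expression (2)).
        return np.sum(duv[:, None] * cuv, axis=0)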

As described above, according to the pulse estimation device 200 of embodiment 2, the weight coefficient duv (t) is calculated based on the minimum value of the two inter-region phase matching degrees cuv (t), in other words, the weight coefficient duv (t) is calculated such that the smaller the minimum value of the two inter-region phase matching degrees cuv (t), the larger the weight coefficient duv (t), and thus the phase matching degree between the measurement regions ri (k) can be calculated with emphasis placed on the pair of the measurement regions ri (k) whose phases are more matched. Therefore, embodiment 2 can estimate the pulse with higher accuracy.

In embodiment 2, the weight coefficient duv (t) is calculated from the minimum value of the phase matching degrees cuv (t) between the two regions, but the method of calculating the weight coefficient duv (t) is not limited to this method. For example, the corresponding weight coefficient duv (t) may be determined based on the distance between the two measurement regions ru (k) and rv (k). Regarding the motion component of the face included in the change of the average luminance value, when the distance between the two regions used for the calculation of the phase matching degree cuv (t) between the two regions is large, a difference in phase is likely to occur. Therefore, the weight coefficient calculation unit 243 calculates the distance between the two measurement regions ru (k) and rv (k) from the positions of the measurement regions ri (k) on the image, and sets the weight coefficient duv (t) based on the calculated distance. In this case, the weight coefficient calculation unit 243 sets the weight coefficient duv (t) such that the larger the distance, the larger the value of the weight coefficient duv (t).
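A possible sketch of this distance-based weighting in Python is shown below; the normalization by max_distance (for example, the image diagonal) is an assumption introduced only to keep the weight in the 0-to-1 range used above, and is not described in the embodiment.

    import numpy as np

    def weight_from_distance(pos_u, pos_v, max_distance):
        # pos_u, pos_v: positions of the two measurement regions on the image.
        distance = np.linalg.norm(np.asarray(pos_u, float) - np.asarray(pos_v, float))
        return min(distance / max_distance, 1.0)  # longer distance -> larger weight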

The weight coefficient duv (t) may be calculated by a method other than the above-described method of calculating the minimum value of the phase matching degree cuv (t) between the two regions or the method of calculating the distance between the two measurement regions ru (k) and rv (k), or the weight coefficient duv (t) may be determined by combining a plurality of methods in an integrated manner.

Embodiment 3

Fig. 11 is a block diagram schematically showing the configuration of a pulse wave estimating device 300 as an information processing device according to embodiment 3.

The pulse wave estimation device 300 is a device capable of executing the pulse wave estimation method that is the information processing method according to embodiment 3.

As shown in fig. 11, the pulse wave estimating device 300 includes a skin region detecting unit 110, a measurement region setting unit 120, a pulse wave source signal extracting unit 130, a phase matching degree calculating unit 340, a pulse wave estimating unit 150, and a fluctuation information acquiring unit 370.

The skin region detection unit 110, the measurement region setting unit 120, the pulse source signal extraction unit 130, and the pulse estimation unit 150 of the pulse estimation device 300 according to embodiment 3 are the same as the skin region detection unit 110, the measurement region setting unit 120, the pulse source signal extraction unit 130, and the pulse estimation unit 150 of the pulse estimation device 100 according to embodiment 1.

The fluctuation-information acquiring unit 370 specifies the fluctuation of each measurement region ri (k) based on the measurement-region information r (k), and generates fluctuation information m (t) indicating the specified fluctuation. The variation information m (t) is supplied to the phase matching degree calculation unit 340.

The fluctuation information m (t) may include element information mi (t) indicating the movement, the change in size, the change in shape, and the like of each measurement region ri (k) on the image.

The motion on the image is, for example, a two-dimensional vector representing the difference between the position on the image of the measurement region ri(e) in the e-th frame Im(e) (where e is an integer of 2 or more) and the position on the image of the corresponding measurement region ri(e-1) in the e-1-th frame Im(e-1).

For example, the center-of-gravity positions of the measurement regions ri(e) and ri(e-1) can be used as their positions on the image. In this case, the center of gravity of the 4 vertices constituting each of the measurement regions ri(e) and ri(e-1) may be used as the center-of-gravity position.

The size change is, for example, the difference between the area of the measurement region ri (e) in the e-th frame Im (e) and the area of the corresponding measurement region ri (e-1) in the e-1 th frame Im (e-1).

The change in shape is, for example, a four-dimensional vector indicating the difference between the ratios of the lengths of the 4 sides of the measurement region ri(e) in the e-th frame Im(e) to their total length (4 values, one per side) and the corresponding ratios for the measurement region ri(e-1) in the e-1-th frame Im(e-1).

The element information mi(t) is, for example, time series data of Tp samples of the above information, and is extracted, for example, from the past Tp+1 pieces of measurement region information R(k-Tp), R(k-Tp+1), …, R(k).

For the sake of simplicity, in the following description, the element information mi(t) is time series data consisting of Tp two-dimensional vectors indicating the motion of the center of gravity of each measurement region ri(k), and the variation information m(t) is information obtained by integrating these time series data.

The element information mi(t) does not need to consist of Tp samples, and may be constituted by an arbitrary number of samples.
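For reference, a minimal Python sketch of this simplified element information mi(t) is given below; the corner_history argument (a list of 4x2 corner-coordinate arrays, one per frame, for a single measurement region) is an assumed data layout.

    import numpy as np

    def centroid_motion_series(corner_history, tp):
        # Use the centroid of the 4 corners as the position of the region in each
        # of the last Tp+1 frames and return the Tp frame-to-frame motion vectors.
        centroids = np.array([np.mean(np.asarray(c, float), axis=0)
                              for c in corner_history[-(tp + 1):]])
        return np.diff(centroids, axis=0)  # Tp two-dimensional vectors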

Similarly to embodiment 2, the phase matching degree calculation unit 340 in embodiment 3 also selects a plurality of pairs of two pulse source signals, and calculates a plurality of matching degrees indicating the degree to which the phases match between the corresponding basis components, from each pair. Then, the phase matching degree calculation unit 340 sets a weight coefficient for each of the plurality of pairs, weights the plurality of matching degrees calculated from each of the plurality of pairs using the weight coefficient, and determines a value obtained by summing up the plurality of matching degrees for each corresponding basis component as the plurality of phase matching degrees.

In embodiment 3, the phase matching degree calculation unit 340 can set a weight coefficient, based on the variation information m(t), such that the weight is heavier as the direction in which the two measurement regions ru(k) and rv(k) corresponding to each of the plurality of pairs are arranged is closer to the direction in which the subject moves.

Furthermore, the phase matching degree calculation unit 340 may set a weight coefficient, based on the variation information m(t), such that the weight is heavier as the size change patterns of the two measurement regions ru(k) and rv(k) corresponding to each of the plurality of pairs become less similar.

Furthermore, the phase matching degree calculation unit 340 may set a weight coefficient, based on the variation information m(t), such that the weight is heavier as the shape change patterns of the two measurement regions ru(k) and rv(k) corresponding to each of the plurality of pairs become less similar.

Fig. 12 is a block diagram schematically showing the configuration of the phase matching degree calculation unit 340.

The phase matching degree calculation unit 340 includes a two-inter-region phase matching degree calculation unit 141, a phase matching degree summation unit 242, and a weight coefficient calculation unit 343.

The two-inter-region phase matching degree calculation unit 141 of the phase matching degree calculation unit 340 in embodiment 3 is the same as the two-inter-region phase matching degree calculation unit 141 of the phase matching degree calculation unit 140 in embodiment 1. However, the two-inter-region phase matching degree calculation unit 141 in embodiment 3 supplies the two-inter-region phase matching degree information n(t) to the phase matching degree summation unit 242 and the weight coefficient calculation unit 343.

The phase matching degree counting unit 242 of the phase matching degree calculation unit 340 in embodiment 3 is the same as the phase matching degree counting unit 242 of the phase matching degree calculation unit 240 in embodiment 2. However, the phase matching degree summing unit 242 according to embodiment 3 acquires the weight information d (t) from the weight coefficient calculation unit 343.

The weight coefficient calculation unit 343 receives the inter-region phase matching degree information n (t) and the variation information m (t), calculates a weight coefficient duv (t) for each of the two inter-region phase matching degrees cuv (t) indicated by the inter-region phase matching degree information n (t) using the variation information m (t), and generates weight information d (t) indicating the calculated weight coefficient duv (t). The weight information d (t) is supplied to the phase matching degree totaling unit 242.

A method of calculating the weight coefficient duv (t) from the fluctuation information m (t) will be described with reference to fig. 13 to 14.

Fig. 13 is a schematic diagram showing a positional relationship among the face of the subject, the imaging device 160, and the light source 161 of the ambient light.

As shown in fig. 13, a measurement area a, a measurement area B, and a measurement area C are provided in the skin area of the face of the subject.

Fig. 14 (a) to (c) show examples of images obtained by imaging the face of the subject by the imaging device 160 shown in fig. 13.

The image shown in fig. 14 (a) is an example when the face of the subject is positioned at the center of the imaging device 160. This position is set as a reference position.

The image shown in fig. 14 (b) is an example when the face of the subject is located on the right side of the center of the imaging device 160. This position is set as the right position.

The image shown in fig. 14 (c) is an example in which the face of the subject is positioned on the left side of the center of the imaging device 160. This position is set as the left position.

When the face is at the right position, the measurement area a and the measurement area C are brighter than the reference position, and when the face is at the left position, the measurement area a and the measurement area C are darker than the reference position.

When the face is at the right position, the measurement area B becomes darker than the reference position, and when the face is at the left position, the measurement area B becomes brighter than the reference position.

As shown in (a) to (c) of fig. 14, when the position of the face changes to the left or right, the manner of change in luminance is different in the measurement region having a lateral positional relationship in the image, such as the measurement region a and the measurement region B.

On the other hand, in the measurement region in the vertical positional relationship in the image, such as the measurement region a and the measurement region C, the manner of luminance change is similar.

That is, the manner of luminance change differs between measurement regions arranged along the direction of movement of the face, and is similar between measurement regions arranged perpendicular to the direction of movement of the face.

Fig. 15 (a) to (f) show changes in average of luminance values in the measurement region A, B, C when the face of the subject moves in the order of the reference position, right position, reference position, left position, reference position, right position, reference position, and left position.

Fig. 15 (a) shows the average change in the luminance value of the motion component of the face of the measurement area a when the face of the subject moves as described above, and fig. 15 (b) shows the average change in the luminance value of the pulse component of the face of the measurement area a when the face of the subject moves as described above.

Fig. 15 (c) shows the average change in the luminance value of the motion component of the face of the measurement region B when the face of the subject moves as described above, and fig. 15 (d) shows the average change in the luminance value of the pulse component of the face of the measurement region B when the face of the subject moves as described above.

Fig. 15 (e) shows the average change in the luminance value of the motion component of the face of the measurement area C when the face of the subject moves as described above, and fig. 15 (f) shows the average change in the luminance value of the pulse component of the face of the measurement area C when the face of the subject moves as described above.

When the position of the face changes left and right, as shown in fig. 15 a and 15 c, the phases of the motion components of the face included in the average luminance values are different in the measurement regions (measurement region a and measurement region B) in the lateral positional relationship in the image.

On the other hand, as shown in fig. 15 a and 15 e, in the measurement regions (measurement region a and measurement region C) in the vertical positional relationship in the image, the phases of the motion components of the face included in the average luminance values are close to each other.

Returning to fig. 12, the weight coefficient calculation unit 343 calculates the weight coefficient duv(t) from the variation information m(t) using the above-described characteristics. Specifically, when the pair of measurement regions ru(k), rv(k) is arranged along the direction of the two-dimensional vectors included in the variation information m(t), the weight coefficient duv(t) for the corresponding two-inter-region phase matching degree cuv(t) is set to be large. On the other hand, when the pair of measurement regions ru(k), rv(k) is arranged perpendicular to that direction, the weight coefficient duv(t) is set to be small.

When calculating the weight coefficient duv(t), the weight coefficient calculation unit 343 first specifies the representative vector ms(t) (a two-dimensional vector), which is a representative motion vector derived from the variation information m(t). For example, the weight coefficient calculation unit 343 may specify, as the representative vector ms(t), the two-dimensional vector having the largest magnitude among the two-dimensional vectors included in the variation information m(t).

When calculating the representative vector ms(t), the weight coefficient calculation unit 343 first calculates the average value M_ave(t) (data consisting of Tp two-dimensional vectors) of the element information mi(t) of the Rn measurement regions included in the variation information m(t). Next, the weight coefficient calculation unit 343 selects the two-dimensional vector having the largest vector length from among the two-dimensional vectors included in the average value M_ave(t), and sets it as the selected vector M_max(t). Then, the weight coefficient calculation unit 343 converts the selected vector M_max(t) into a unit vector (a vector having a length of 1) and uses the result as the representative vector ms(t).

Next, the weight coefficient calculation unit 343 calculates a two-dimensional vector puv(t) indicating the relative positional relationship between the measurement regions corresponding to each two-inter-region phase matching degree cuv(t). The two-dimensional vector puv(t) is obtained, for example, by calculating the difference puv_t(t) between the coordinate values of the 2 measurement regions ru(k) and rv(k) on which the two-inter-region phase matching degree cuv(t) is based, and converting it into a unit vector.

Finally, the weight coefficient calculation unit 343 calculates the weight coefficient duv(t) from the two-dimensional vector puv(t) and the representative vector ms(t). The weight coefficient duv(t) is calculated, for example, as the absolute value of the inner product of the two-dimensional vector puv(t) and the representative vector ms(t). Specifically, the weight coefficient duv(t) is calculated by the following expression (3).

duv(t)=|puv(t)·Ms(t)| (3)

In expression (3), "·" denotes the inner product of vectors. For vectors of equal length, the absolute value of the inner product is close to 0 when the 2 vectors are nearly perpendicular, and becomes large when the 2 vectors are nearly parallel. Here, since both the two-dimensional vector puv(t) and the representative vector ms(t) are unit vectors of length 1, their inner product takes a value close to "0" when they are nearly perpendicular and a value close to "1" when they are nearly parallel.

Therefore, according to expression (3), when the measurement regions ru(k) and rv(k) are arranged along the representative vector ms(t), that is, when the representative vector ms(t) and the two-dimensional vector puv(t) are nearly parallel, the weight coefficient duv(t) increases. On the other hand, when they are nearly perpendicular, the weight coefficient duv(t) decreases.
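The calculation from the representative vector ms(t) to expression (3) can be sketched in Python as follows, assuming that the per-frame motion vectors averaged over all measurement regions (M_ave(t)) and the centroid positions of the two regions are available; the function and argument names are illustrative.

    import numpy as np

    def weight_from_motion_direction(pos_u, pos_v, averaged_motion_vectors):
        # averaged_motion_vectors: the Tp two-dimensional vectors of M_ave(t).
        m_max = max(np.asarray(averaged_motion_vectors, float), key=np.linalg.norm)
        norm_m = np.linalg.norm(m_max)
        puv = np.asarray(pos_u, float) - np.asarray(pos_v, float)
        norm_p = np.linalg.norm(puv)
        if norm_m == 0.0 or norm_p == 0.0:
            return 0.0                      # degenerate case: no motion or identical positions
        ms = m_max / norm_m                 # representative unit vector Ms(t)
        puv = puv / norm_p                  # unit vector joining the two regions
        return abs(float(np.dot(puv, ms)))  # expression (3): near 1 if parallel, near 0 if perpendicular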

The weight coefficient calculation unit 343 supplies information obtained by integrating the weight coefficients duv (t) calculated for the phase matching degrees cuv (t) between the two regions to the phase matching degree summation unit 242 as weight information d (t).

As described above, according to the pulse wave estimating apparatus 300 of embodiment 3, since the degree of shift in the phase of the motion component changes according to the positional relationship between the direction of the face motion and the measurement region ri (k), the pulse wave from which the motion component has been further removed can be estimated by calculating the weight coefficient from the motion vector of the measurement region ri (k).

In embodiment 3, the element information mi (t) is a two-dimensional vector representing the motion of the center of gravity of each measurement region ri (k), but the present invention is not limited to this. As described above, the movement of the 4 vertices of the measurement region ri (k) may be used, the change in the size or shape of the measurement region ri (k) may be used, or a combination of these may be used.

For example, the weight coefficient calculation unit 343 may be configured to set the weight coefficient duv(t) to be larger when the size change patterns of the measurement regions ru(k) and rv(k) included in the variation information m(t) are dissimilar, and to be smaller when they are similar. A dissimilar size change means that the measurement regions ru(k) and rv(k) exhibit different motion patterns. Therefore, when the size change patterns are not similar, the phase shift of the motion component between the two regions increases, and it becomes easier to distinguish the pulse component from the motion component.

Whether or not the size change patterns are similar may be determined using a similarity. The similarity is determined using time series data of the size change of each of the measurement regions ru(k) and rv(k). For example, with the size in a certain frame set to "1", the weight coefficient calculation unit 343 determines how the size of the measurement region changes in the subsequent frames, obtaining, for example, time series data such as "1", "0.9", "0.8", and "0.9" from the 1st frame to the 5th frame.

Then, the weight coefficient calculation unit 343 determines such time series data for each measurement region ri(k), and calculates a correlation value of the time series data for the pair of measurement regions ru(k) and rv(k). The correlation value takes a value from "-1" to "1": it becomes "1" when the changes are similar and "-1" when they are not similar. Since the weight coefficient duv(t) should be set such that the smaller the correlation value, the larger the weight coefficient duv(t), and the larger the correlation value, the smaller the weight coefficient duv(t), the weight coefficient calculation unit 343 calculates the weight coefficient duv(t) by, for example, the following expression (4).

duv(t) = 1 - (correlation value of the two measurement regions ru(k) and rv(k))   (4)
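Expression (4) can be sketched in Python as follows, assuming the per-frame sizes of the two measurement regions, normalized as in the example above, are available as two sequences of equal length; the names are illustrative.

    import numpy as np

    def weight_from_size_change(size_series_u, size_series_v):
        # Correlation of the two size-change time series, in the range -1 to 1.
        corr = float(np.corrcoef(size_series_u, size_series_v)[0, 1])
        return 1.0 - corr  # expression (4): dissimilar changes give a larger weight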

The weight coefficient calculation unit 343 may also set the weight coefficient duv(t) to be larger when the shape change patterns of the measurement regions ru(k) and rv(k) included in the variation information m(t) are dissimilar, and to be smaller when they are similar. Whether or not the shape change patterns are similar is determined by a similarity, which is computed using time series data of the four-dimensional vectors indicating the shape changes of the measurement regions ru(k) and rv(k).

For example, the weight coefficient calculation unit 343 calculates a correlation value for each element of the four-dimensional vectors indicating the shape changes of the measurement regions ru(k) and rv(k). Since 4 correlation values are calculated in this case, the weight coefficient calculation unit 343 calculates the weight coefficient duv(t) by the following expression (5), using the average of the 4 calculated correlation values.

duv(t) = 1 - (average correlation value of the two measurement regions ru(k) and rv(k))   (5)

The weight coefficient calculation unit 343 may set, for example, the average of the "weight coefficient calculated from the change in size" and the "weight coefficient calculated from the change in shape" as the final weight coefficient duv (t) for the pair of measurement regions ru (k) and rv (k).

Embodiment 4

As shown in fig. 1, the pulse wave estimating device 400 according to embodiment 4 includes a skin region detecting unit 110, a measurement region setting unit 120, a pulse wave source signal extracting unit 130, a phase matching degree calculating unit 440, and a pulse wave estimating unit 150.

The skin region detection unit 110, the measurement region setting unit 120, the pulse source signal extraction unit 130, and the pulse estimation unit 150 of the pulse estimation device 400 according to embodiment 4 are the same as the skin region detection unit 110, the measurement region setting unit 120, the pulse source signal extraction unit 130, and the pulse estimation unit 150 of the pulse estimation device 100 according to embodiment 1.

The pulse wave estimating apparatus 400 is an apparatus capable of executing the pulse wave estimating method that is the information processing method according to embodiment 4.

The phase matching degree calculation unit 440 according to embodiment 4 selects a plurality of pairs, each consisting of two pulse source signals, from the plurality of pulse source signals indicated by the pulse source signal information w(t). Then, as in embodiment 1, the phase matching degree calculation unit 440 calculates, from each of the selected pairs, a plurality of matching degrees indicating the degree to which the phases match between the corresponding basis components. Then, the phase matching degree calculation unit 440 sets a weight coefficient for each of the plurality of pairs, weights the plurality of matching degrees calculated from each pair using the weight coefficients, and determines the values obtained by summing the weighted matching degrees for each corresponding basis component as the plurality of phase matching degrees.

Here, the phase matching degree calculation unit 440 can set the weight coefficients so that, among the matching degrees calculated from the respective pairs, a matching degree corresponding to a measurement region whose phases match those of the other regions to a high degree is given a heavier weight. Furthermore, the weight coefficient for the matching degree corresponding to a measurement region ri may be set according to the magnitude of the amplitude of the pulse source signal wi(t).

As shown in fig. 10, the phase matching degree calculation unit 440 according to embodiment 4 includes a two-inter-region phase matching degree calculation unit 141, a phase matching degree summation unit 242, and a weight coefficient calculation unit 443.

The two-inter-region phase matching degree calculation unit 141 of the phase matching degree calculation unit 440 in embodiment 4 is the same as the two-inter-region phase matching degree calculation unit 141 of the phase matching degree calculation unit 140 in embodiment 1. However, the two-inter-region phase matching degree calculation unit 141 in embodiment 4 supplies the two-inter-region phase matching degree information n(t) to the phase matching degree summation unit 242 and the weight coefficient calculation unit 443.

The weight coefficient calculation unit 443 receives the inter-region phase matching degree information n (t), calculates a weight coefficient duv (t) for each of the two inter-region phase matching degrees cuv (t) indicated by the inter-region phase matching degree information n (t), and generates weight information d (t) indicating the calculated weight coefficient duv (t). The weight information d (t) is supplied to the phase matching degree totaling unit 242.

The weight information d (t) can include a weight coefficient duv (t) for each of the two inter-region phase matching degrees cuv (t) included in the two inter-region phase matching degree information n (t). The weight coefficient duv (t) takes a value between "0" and "1", for example. For example, when the weight coefficient duv (t) is "0", the weight for the corresponding phase matching degree cuv (t) between the two regions is small, and when the weight coefficient duv (t) is "1", the weight for the corresponding phase matching degree cuv (t) between the two regions is large. When the weight coefficient duv (t) is "0.5", the weight is intermediate between "0" and "1".

The weight coefficient duv(t) is determined, for example, from a representative value eu(t) of the two-inter-region phase matching degrees cui(t) associated with the measurement region ru and a representative value ev(t) of the two-inter-region phase matching degrees cvi(t) associated with the measurement region rv (i = 1, 2, …, Rn). The representative value eu(t) associated with the measurement region ru is, for example, an average value of the two-inter-region phase matching degrees cui(t) associated with the measurement region ru. Specifically, for the two-inter-region phase matching degrees cu1(t), cu2(t), …, cuRn(t), the average of the phase matching degrees of the respective frequency components is calculated and set as the representative value eu(t) of the two-inter-region phase matching degrees associated with the measurement region ru. When the component of the pulse signal included in the pulse source signal wu(t) of the measurement region ru is strong, the two-inter-region phase matching degree cui(t) associated with the measurement region ru increases for any frequency component. On the other hand, when the component of the pulse signal included in the pulse source signal wu(t) of the measurement region ru is weak, the two-inter-region phase matching degree cui(t) associated with the measurement region ru decreases for any frequency component. That is, by setting the weight coefficient du(t) corresponding to the measurement region ru to be large when the representative value eu(t) associated with the measurement region ru is large, and to be small when the representative value eu(t) is small, a measurement region having a strong pulse signal component, in other words, a component whose phase matches those of the other measurement regions, can be emphasized.

The weight coefficient duv(t) is calculated from the representative value eu(t) of the two-inter-region phase matching degrees associated with the measurement region ru and the representative value ev(t) of the two-inter-region phase matching degrees associated with the measurement region rv.

When calculating the weight coefficient duv (t), first, the weight coefficient du (t) for the measurement region ru and the weight coefficient dv (t) for the measurement region rv are calculated. Since the calculation method of the weight coefficient du (t) is the same as that of dv (t), the calculation method of du (t) will be described here.

Let emax be the maximum value of the representative value eu(t) of the two-inter-region phase matching degrees, and let emin be its minimum value; emax and emin are determined in advance. The weight coefficient du(t) is calculated by the following expression (6).

du(t) = 1.0 - (eu(t) - emin) / (emax - emin)   (6)

Similarly, after dv(t) is calculated, the weight coefficient duv(t) is calculated as the minimum value of du(t) and dv(t). That is, duv(t) = min(du(t), dv(t)).

By calculating the weight coefficient duv (t) using the above equation, the weight coefficient associated with the measurement region having a component of the pulse signal that is strong, in other words, a component whose phase matches the other measurement regions can be set to be large.
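A minimal Python sketch of expression (6) and of duv(t) = min(du(t), dv(t)) is given below; it assumes that the two-inter-region phase matching degrees associated with each region are available as flat arrays, that emin and emax are predetermined, and that the simple average is used as the representative value.

    import numpy as np

    def weight_from_representative_values(cu_all, cv_all, e_min, e_max):
        # cu_all / cv_all: two-inter-region phase matching degrees cui(t) / cvi(t)
        # associated with the measurement regions ru and rv.
        eu = float(np.mean(cu_all))                # representative value eu(t)
        ev = float(np.mean(cv_all))                # representative value ev(t)
        du = 1.0 - (eu - e_min) / (e_max - e_min)  # expression (6)
        dv = 1.0 - (ev - e_min) / (e_max - e_min)
        return min(du, dv)                         # duv(t) = min(du(t), dv(t))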

As described above, according to the pulse estimation device 400 of embodiment 4, the phase matching degree calculation unit 440 calculates, for each of the two measurement regions included in each of the plurality of pairs, a representative value from the plurality of matching degrees calculated for that region, and can set the weight coefficient so that the weight is heavier as the representative value is higher. The representative values are, for example, the representative value eu(t) of the two-inter-region phase matching degrees cui(t) associated with the measurement region ru and the representative value ev(t) of the two-inter-region phase matching degrees cvi(t) associated with the measurement region rv. Therefore, embodiment 4 can estimate the pulse with higher accuracy.

The weight coefficient calculation unit 443 integrates the weight coefficients duv (t) calculated for the phase matching degrees cuv (t) between the two regions, and generates weight information d (t) indicating the weight coefficients duv (t). The weight information d (t) is supplied to the phase matching degree totaling unit 242.

The phase matching degree summing unit 242 receives the phase matching degree information n (t) between the two regions and the weight information d (t), generates phase matching degree information c (t) for each fundamental component between the measurement regions, and supplies the generated phase matching degree information c (t) to the pulse estimating unit 150.

The phase matching degree summing unit 242 multiplies the phase matching degree cuv (t) between the two regions indicated by the phase matching degree information n (t) between the two regions by the corresponding weight information d (t), and adds the result for each basis component to generate the phase matching degree information c (t) indicating the matching degree of the phase for each basis component between the measurement regions ri (k). The phase matching degree information c (t) is supplied to the pulse estimating unit 150.

The pulse estimation unit 150 estimates a pulse from the pulse source signal information w (t) and the phase matching degree information c (t), and outputs a pulse estimation result p (t) which is pulse information indicating the estimated pulse. For example, when the heart rate is output as the pulse wave estimation result p (t), the pulse wave estimation unit 150 specifies the frequency component that is the base component having the highest degree of phase matching among the phase matching degree information c (t) for each frequency component, and estimates the pulse wave from the specified frequency component. Specifically, the pulse wave estimating unit 150 sets the frequency component having the highest degree of phase matching to correspond to the pulse wave, and outputs the frequency of the frequency component corresponding to the pulse wave as the heart rate.
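As a rough illustration, the heart-rate output described above might look like the following Python sketch; the freqs_hz argument (the frequency of each basis component) and the conversion to beats per minute are assumptions, and if the phase matching degree is expressed as an absolute phase difference the selection would use argmin instead of argmax.

    import numpy as np

    def estimate_heart_rate(phase_matching, freqs_hz):
        # phase_matching is assumed to be scored so that a larger value means a
        # better phase match among the basis (frequency) components.
        pulse_bin = int(np.argmax(phase_matching))  # component judged to be the pulse
        return float(freqs_hz[pulse_bin]) * 60.0    # Hz -> beats per minute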

In embodiment 4, the representative value eu(t) of the two-inter-region phase matching degrees associated with the measurement region ru is the average value of the two-inter-region phase matching degrees cui(t) associated with the measurement region ru, but the present invention is not limited to this. For example, the median value or the minimum value may be used, or the number of times the phase matching degree exceeds a threshold value, counted over the frequency components, may be used.

In embodiment 4, the degree of weighting for each measurement region is determined only from the two-inter-region phase matching degrees, but the present invention is not limited to this. For example, as described in non-patent document 1, the weight coefficient for each measurement region may be calculated from the difference between the maximum value and the minimum value of the pulse source signal wi(t), or from the SNR (Signal-to-Noise Ratio) in the power spectrum, or these may be combined. As a method of combining, for example, for the measurement region ru, the average of the weight coefficient calculated from the SNR in the power spectrum and the weight coefficient calculated from the two-inter-region phase matching degrees is set as the weight coefficient du(t) for that measurement region.

In embodiment 4, the frequency component with the highest degree of phase matching corresponds to the pulse wave, and the frequency of the frequency component corresponding to the pulse wave is output as the heart rate.

The amplitude of the frequency component corresponding to the pulse varies depending on the brightness (luminance value) of the skin region in the frame im(t), and on the skin color, skin thickness, or blood flow of the subject. However, the influence of the brightness of the skin region in the frame im(t) is large, and the amplitude of the frequency component corresponding to the pulse can therefore be estimated from the brightness of the skin region. For example, thresholds θH(Iave(t)) and θL(Iave(t)) for the amplitude of the frequency components are determined based on the average luminance value Iave(t) of all the measurement regions, and the frequency component having the highest degree of phase matching is specified only from among the frequency components whose amplitudes lie between these two thresholds.

By estimating the heart rate from the frequency components having amplitudes within the predetermined range in this manner, the heart rate can be estimated with higher accuracy.
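The amplitude-gated selection described above could be sketched in Python as follows; theta_l and theta_h are assumed to be caller-supplied functions that map the average skin brightness Iave(t) to the lower and upper amplitude thresholds, and the scoring convention for phase_matching is the same assumption as in the previous sketch.

    import numpy as np

    def pick_pulse_bin(phase_matching, amplitudes, i_ave, theta_l, theta_h):
        lo, hi = theta_l(i_ave), theta_h(i_ave)
        in_range = (np.asarray(amplitudes) >= lo) & (np.asarray(amplitudes) <= hi)
        candidates = np.where(in_range, phase_matching, -np.inf)  # exclude out-of-range bins
        return int(np.argmax(candidates))  # best-matched component within the amplitude range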

Description of the reference symbols

100, 200, 300, 400: a pulse estimation device; 110: a skin region detection unit; 120: a measurement region setting unit; 130: a pulse source signal extraction unit; 140, 240, 340, 440: a phase matching degree calculation unit; 141: a two-inter-region phase matching degree calculation unit; 142, 242: a phase matching degree summing unit; 243, 343, 443: a weight coefficient calculation unit; 150: a pulse estimation unit; 160: an imaging device; 161: a light source; 370: a variation information acquisition unit.
