Evaluation device, evaluation method, and evaluation program
Reading note: This technology, "Evaluation device, evaluation method, and evaluation program", was created by 首藤胜行 (Shudo Katsuyuki) and 鬼头诚 (Kito Makoto) on 2019-03-08. Abstract: The evaluation device of the present invention includes: a display screen; a gaze point detection unit that detects a position of a gaze point of a subject observing the display screen; a display control unit that displays an image including a specific object and a comparison object different from the specific object on the display screen; an area setting unit that sets a specific area corresponding to the specific object and a comparison area corresponding to the comparison object; a determination unit that determines, based on the position of the gaze point, whether or not the gaze point is present in the specific area and the comparison area, respectively, while the image is displayed; a calculation unit that calculates gaze point data based on the determination result of the determination unit; and an evaluation unit that obtains evaluation data of the subject based on the gaze point data.
1. An evaluation device comprising:
a display screen;
a gaze point detection unit that detects a position of a gaze point of a subject observing the display screen;
a display control unit that displays an image including a specific object and a comparison object different from the specific object on the display screen;
an area setting unit that sets a specific area corresponding to the specific object and a comparison area corresponding to the comparison object;
a determination unit that determines, based on the position of the gaze point, whether the gaze point is present in the specific area and the comparison area, respectively, while the image is displayed;
a calculation unit that calculates gaze point data based on the determination result of the determination unit; and
an evaluation unit that obtains evaluation data of the subject based on the gaze point data.
2. The evaluation device according to claim 1,
the display control unit performs a first display operation of displaying the specific object on the display screen, and then performs a second display operation of displaying the specific object and the comparison object on the display screen,
the determination unit determines whether or not the gaze point is present in the specific region and the comparison region, respectively, during a display period in which the second display operation is performed.
3. The evaluation device according to claim 1,
the display control unit performs a first display operation of changing a display mode of the specific object while the specific object and the comparison object are displayed on the display screen, and then performs a second display operation of displaying the specific object and the comparison object on the display screen,
the determination unit determines whether the gaze point is present in the specific region and the comparison region during the display period in which the first display operation or the second display operation is performed, based on the position of the gaze point.
4. The evaluation device according to any one of claims 1 to 3,
the gaze point data includes: arrival time data indicating a time from a start time of the display period until the gaze point first arrives at the specific area; movement number data indicating the number of times the gaze point moves between a plurality of the comparison areas before first reaching the specific area; presence time data indicating a presence time during which the gaze point is present in the specific area or the comparison area during the display period; and final area data indicating the area, among the specific area and the comparison area, in which the gaze point was last present during the display period,
the evaluation unit obtains the evaluation data of the subject based on at least one piece of data included in the gaze point data.
5. The evaluation device according to claim 4,
the evaluation unit obtains the evaluation data by weighting at least one piece of data included in the gaze point data.
6. An evaluation method comprising:
displaying an image on a display screen;
detecting a position of a gaze point of a subject observing the display screen;
displaying the image including a specific object and a comparison object on the display screen, the comparison object being different from the specific object;
setting a specific region corresponding to the specific object and a comparison region corresponding to the comparison object;
determining, based on the position of the gaze point, whether the gaze point is present in the specific region and the comparison region, respectively, during a display period in which the image is displayed on the display screen;
calculating gaze point data for the display period based on the determination result; and
obtaining evaluation data of the subject based on the gaze point data.
7. An evaluation program causing a computer to execute:
displaying an image on a display screen;
detecting a position of a gaze point of a subject observing the display screen;
displaying the image including a specific object and a comparison object on the display screen, the comparison object being different from the specific object;
setting a specific region corresponding to the specific object and a comparison region corresponding to the comparison object;
determining, based on the position of the gaze point, whether the gaze point is present in the specific region and the comparison region, respectively, during a display period in which the image is displayed on the display screen;
calculating gaze point data for the display period based on the determination result; and
obtaining evaluation data of the subject based on the gaze point data.
Technical Field
The present invention relates to an evaluation device, an evaluation method, and an evaluation program.
Background
In recent years, cognitive dysfunction and brain dysfunction such as dementia are said to be increasing, and there is a demand to discover such disorders as early as possible and to quantitatively evaluate the severity of their symptoms. Symptoms of cognitive dysfunction and brain dysfunction are known to affect memory, so subjects are evaluated on the basis of their memory. For example, a device has been proposed that displays a plurality of numbers, has the subject add the numbers to obtain an answer, and checks the answer given by the subject (for example, see patent document 1).
Prior art documents
Patent document
Patent document 1: Japanese Patent Laid-Open No. 2011-083403.
Disclosure of Invention
However, the method of patent document 1 leaves room for improvement in the accuracy of evaluating cognitive dysfunction and brain dysfunction.
The present invention has been made in view of the above problems, and an object thereof is to provide an evaluation device, an evaluation method, and an evaluation program capable of accurately evaluating cognitive dysfunction and brain dysfunction.
The evaluation device according to the present invention includes: a display screen; a gaze point detection unit that detects a position of a gaze point of a subject observing the display screen;
a display control unit that displays an image including a specific object and a comparison object different from the specific object on the display screen; an area setting unit that sets a specific area corresponding to the specific object and a comparison area corresponding to the comparison object; a determination unit that determines, based on the position of the gaze point, whether the gaze point is present in the specific area and the comparison area, respectively, while the image is displayed; a calculation unit that calculates gaze point data based on the determination result of the determination unit; and an evaluation unit that obtains evaluation data of the subject based on the gaze point data.
The evaluation method according to the present invention includes: displaying an image on a display screen; detecting a position of a gaze point of a subject observing the display screen; displaying the image including a specific object and a comparison object different from the specific object on the display screen; setting a specific region corresponding to the specific object and a comparison region corresponding to the comparison object; determining, based on the position of the gaze point, whether the gaze point is present in the specific region and the comparison region, respectively, during a display period in which the image is displayed on the display screen; calculating gaze point data for the display period based on the determination result; and obtaining evaluation data of the subject based on the gaze point data.
The evaluation program according to the present invention causes a computer to execute the following processing: displaying an image on a display screen; detecting a position of a gaze point of a subject observing the display screen; displaying the image including a specific object and a comparison object different from the specific object on the display screen; setting a specific region corresponding to the specific object and a comparison region corresponding to the comparison object; determining, based on the position of the gaze point, whether the gaze point is present in the specific region and the comparison region, respectively, during a display period in which the image is displayed on the display screen; calculating gaze point data for the display period based on the determination result; and obtaining evaluation data of the subject based on the gaze point data.
According to the present invention, it is possible to provide an evaluation device, an evaluation method, and an evaluation program that can evaluate cognitive dysfunction and brain dysfunction with high accuracy.
Drawings
Fig. 1 is a perspective view schematically showing an example of a line-of-sight detection device according to the present embodiment;
fig. 2 is a diagram showing an example of a hardware configuration of the line-of-sight detection device according to the present embodiment;
fig. 3 is a functional block diagram showing an example of the sight line detection device according to the present embodiment;
fig. 4 is a schematic diagram for explaining a method of calculating position data of the corneal curvature center according to the present embodiment;
fig. 5 is a schematic diagram for explaining a method of calculating position data of the corneal curvature center according to the present embodiment;
fig. 6 is a schematic diagram for explaining an example of the calibration process according to the present embodiment;
fig. 7 is a schematic diagram for explaining an example of the gaze point detection process according to the present embodiment;
fig. 8 is a diagram showing one example of instruction information displayed on a display screen;
fig. 9 is a diagram showing one example of a specific object displayed on a display screen;
fig. 10 is a diagram showing one example of instruction information displayed on a display screen;
fig. 11 is a diagram showing one example of a case where a specific object and a plurality of comparison objects are displayed on a display screen;
fig. 12 is a diagram showing another example of a case where instruction information and a specific object are displayed on a display screen;
fig. 13 is a diagram showing another example of a case where a specific object and a plurality of comparison objects are displayed on a display screen;
fig. 14 is a diagram showing another example of a case where instruction information and a specific object are displayed on a display screen;
fig. 15 is a diagram showing another example of a case where a specific object and a plurality of comparison objects are displayed on a display screen;
fig. 16 is a flowchart showing an example of the evaluation method according to the present embodiment;
fig. 17 is a diagram showing an example of a series of images for evaluation displayed on a display screen;
fig. 18 is a diagram showing an example of a series of images for evaluation displayed on a display screen;
fig. 19 is a diagram showing an example of a series of images for evaluation displayed on a display screen;
fig. 20 is a diagram showing an example of a series of images for evaluation displayed on a display screen;
fig. 21 is a diagram showing an example of a series of images for evaluation displayed on a display screen;
fig. 22 is a diagram showing an example of a series of images for evaluation displayed on a display screen;
fig. 23 is a diagram showing an example of a series of images for evaluation displayed on a display screen;
fig. 24 is a flowchart showing a processing flow of an evaluation method of another example;
fig. 25 is a flowchart showing a flow of processing in the memory instruction processing;
fig. 26 is a flowchart showing a flow of processing in the memory processing;
fig. 27 is a flowchart showing a flow of processing in the answer processing;
fig. 28 is a diagram showing an example of a series of images for evaluation displayed on a display screen;
fig. 29 is a diagram showing an example of a series of images for evaluation displayed on a display screen;
fig. 30 is a diagram showing an example of a series of images for evaluation displayed on a display screen.
Detailed Description
Embodiments of an evaluation device, an evaluation method, and an evaluation program according to the present invention will be described below with reference to the drawings. The present invention is not limited to these embodiments. The components in the following embodiments include components that can be replaced by, and are easily conceivable to, those skilled in the art, or components that are substantially the same.
In the following description, a three-dimensional global coordinate system is set to describe the positional relationship of each part. A direction parallel to a first axis of a predetermined plane is defined as the X-axis direction, a direction parallel to a second axis of the predetermined plane orthogonal to the first axis is defined as the Y-axis direction, and a direction parallel to a third axis orthogonal to both the first axis and the second axis is defined as the Z-axis direction. The predetermined plane includes the XY plane.
(Sight line detection device)
Fig. 1 is a perspective view schematically showing an example of a line-of-sight detection device according to the present embodiment.
A frame synchronization signal is output from at least one of the
When the detection light is irradiated onto the
By appropriately setting the relative positions of the
Fig. 2 is a diagram showing an example of the hardware configuration of the line-of-sight detection device according to the present embodiment.
The drive circuit 40 generates a drive signal and outputs the drive signal to the
Fig. 3 is a functional block diagram showing an example of the line-of-sight detection device according to the present embodiment.
The display control unit 202 performs a display operation including a first display operation of displaying the specific object on the display screen and, after the first display operation, a second display operation of displaying the specific object and a plurality of comparison objects different from the specific object on the display screen.
The light source control section 204 controls the light source driving unit 406 to control the operation states of the first light source and the second light source.
The image data acquisition unit 206 acquires image data of the subject's eyeball.
The input data acquisition unit 208 acquires input data generated by operating the input device.
The position detection unit 210 detects position data of the pupil center from the image data of the eyeball.
The curvature center calculating unit 212 calculates position data of the corneal curvature center of the eyeball.
The gaze point detecting unit 214 detects position data of the gaze point of the subject based on the image data of the eyeball.
The area setting unit 216 sets, on the display screen, a specific area corresponding to the specific object and a comparison area corresponding to each comparison object.
The determination unit 218 determines whether or not the gaze point is present in the specific area and the comparison areas based on the position data of the gaze point during the display period in which the second display operation is performed, and outputs determination data. The determination unit 218 determines whether or not the gaze point is present in the specific area and the comparison areas at, for example, a constant time interval. The constant time interval may be, for example, the period of the frame synchronization signal output from the cameras.
Based on the determination data of the determination unit 218, the calculation unit 220 calculates movement passage data (which may be referred to as gaze point data) indicating the passage of movement of the gaze point during the display period. The movement passage data includes: arrival time data indicating the time from the start time of the display period until the gaze point first arrives at the specific region; movement number data indicating the number of times the position of the gaze point moves between the plurality of comparison regions before the gaze point first arrives at the specific region; presence time data indicating the presence time during which the gaze point exists in the specific region or the comparison regions during the display period; and final region data indicating the region, among the specific region and the comparison regions, in which the gaze point was last present during the display period.
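For concreteness, the four components of the movement passage data can be sketched as a small container type. This is an illustrative sketch in Python; the field names are assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MovementPassageData:
    """Gaze point data accumulated over the display period (illustrative field names)."""
    arrival_time: Optional[float]  # seconds until the gaze point first enters the specific region
    move_count: int                # moves between comparison regions before the first arrival
    presence_time: float           # total seconds the gaze point was inside the specific region
    final_region: Optional[str]    # region containing the gaze point when the display period ended
```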
The calculation unit 220 includes a management timer for managing the playback time of the video and a detection timer T1 for detecting the elapsed time since the video was displayed on the display screen.
The evaluation unit 224 obtains evaluation data of the subject based on the movement passage data. The evaluation data is data for evaluating whether or not the subject is able to gaze at the specific object displayed on the display screen.
The storage unit 222 stores the determination data, the movement passage data (presence time data, movement number data, final area data, arrival time data), and the evaluation data. The storage unit 222 also stores an evaluation program for causing a computer to execute: a process of displaying an image; a process of detecting the position of the gaze point of the subject observing the display screen; a process of performing a display operation including a first display operation of displaying a specific object on the display screen and, after the first display operation, a second display operation of displaying the specific object and a plurality of comparison objects different from the specific object on the display screen; a process of setting, on the display screen, a specific area corresponding to the specific object and a comparison area corresponding to each comparison object; a process of determining, based on the position data of the gaze point, whether or not the gaze point is present in the specific area and the comparison areas during the display period in which the second display operation is performed, and outputting determination data; a process of calculating movement passage data indicating the passage of movement of the gaze point during the display period based on the determination data; a process of obtaining evaluation data of the subject based on the movement passage data; and a process of outputting the evaluation data.
The output control unit 226 outputs data to at least one of the display device and the output device.
Next, an outline of the processing of the curvature center calculating unit 212 in the present embodiment will be described. The curvature center calculating unit 212 calculates position data of the corneal curvature center of the eyeball.
First, the example shown in fig. 4 will be explained.
Next, the example shown in fig. 5 will be described. In the present embodiment, two light sources are used.
As described above, even in the case where there are two light sources, the corneal curvature center can be calculated by the same method as in the case where there is a single light source.
The corneal curvature radius is the distance between the corneal surface and the corneal curvature center.
Next, an example of the line-of-sight detection method according to the present embodiment will be described. Fig. 6 is a schematic diagram for explaining an example of the calibration process according to the present embodiment. In the calibration process, a target position 130 is set in order to have the subject fixate on it. The target position 130 is defined in the three-dimensional global coordinate system. In the present embodiment, the target position 130 is set at, for example, the center position of the display screen.
Next, the gaze point detection process will be described. The gaze point detection process is performed after the calibration process. The gaze point detection unit 214 calculates the line-of-sight vector of the subject and the position data of the gaze point based on the image data of the eyeball.
(Evaluation method)
Next, the evaluation method according to the present embodiment will be described. In the evaluation method according to the present embodiment, the line-of-sight detection device described above is used to evaluate a subject.
Fig. 8 is a diagram showing an example of instruction information I1 displayed on the display screen.
After the instruction information I1 is displayed on the display screen, the display control unit 202 displays the specific object on the display screen as the first display operation, as shown in fig. 9.
Fig. 10 is a diagram showing one example of the instruction information I2 displayed on the display screen.
Fig. 11 is a diagram showing an example of a case where a plurality of objects are displayed on the display screen.
The comparison objects M2 to M4 may have shapes similar to the specific object M1 or shapes dissimilar to it. In the example shown in fig. 11, the comparison object M2 combines a trapezoid and a circle, the comparison object M3 combines a square and a circle, and the comparison object M4 combines a circle and a regular hexagon. By displaying a plurality of objects including the specific object M1 and the comparison objects M2 to M4 on the display screen, the display control unit 202 has the subject find the specific object M1 and gaze at it.
In addition, fig. 11 shows an example of the gaze point P that is displayed on the display screen as a measurement result; in practice, the gaze point P is not displayed on the display screen.
During the display period in which the second display operation is performed, the area setting unit 216 sets the specific area a1 corresponding to the specific object M1. The area setting unit 216 also sets comparison areas a2 to a4 corresponding to the comparison objects M2 to M4, respectively. The specific area a1 and the comparison areas a2 to a4 are not displayed on the display screen.
The area setting unit 216 sets the specific area a1 to, for example, a rectangular range including the specific object M1. Similarly, the area setting unit 216 sets the comparison areas a2 to a4 to, for example, rectangular ranges including the comparison objects M2 to M4, respectively. The shapes of the specific area a1 and the comparison areas a2 to a4 are not limited to rectangles, and may be other shapes such as circles, ellipses, and polygons.
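Because the areas are axis-aligned rectangles that are never drawn, the per-sample check performed by the determination unit reduces to a containment test. The following is a minimal sketch; the coordinates, region names, and function names are hypothetical, not taken from the patent.

```python
def in_region(point, rect):
    """Return True if gaze point (x, y) lies inside the rectangle (x0, y0, x1, y1)."""
    x, y = point
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

def classify(point, regions):
    """Return the name of the first region containing the gaze point, or None if outside all."""
    for name, rect in regions.items():
        if in_region(point, rect):
            return name
    return None

# Example layout: a specific area A1 and comparison areas A2-A4 on a 1920x1080 screen.
regions = {
    "A1": (100, 100, 500, 400),
    "A2": (700, 100, 1100, 400),
    "A3": (100, 600, 500, 900),
    "A4": (700, 600, 1100, 900),
}
```

With this layout, a gaze sample at (300, 250) is classified as "A1", while one at (600, 500) falls outside every area and would be ignored by the counters.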
The symptoms of cognitive dysfunction and brain dysfunction are known to affect memory. When the examinee is not a person with cognitive dysfunction or brain dysfunction, he or she can compare the comparison objects M2 to M4 displayed on the display screen with the memorized specific object M1, and gaze at the specific object M1.
Therefore, for example, the subject can be evaluated by performing the following procedure. First, as the first display operation, the specific object M1 is displayed on the display screen so that the subject memorizes it. Then, as the second display operation, the specific object M1 and the comparison objects M2 to M4 are displayed on the display screen, and the subject is instructed to gaze at the memorized specific object M1.
In the second display operation, when the position data of the gaze point P of the subject is detected, the determination unit 218 determines whether or not the gaze point of the subject is present in the specific area a1 and the plurality of comparison areas a2 to a4, and outputs the determination data.
The calculation unit 220 calculates movement passage data indicating the passage of movement of the gaze point P during the display period based on the determination data. The calculation unit 220 calculates the presence time data, the movement number data, the final area data, and the arrival time data as the movement passage data.
The presence time data indicates the presence time during which the gaze point P is present in the specific area a1. In the present embodiment, the presence time in the specific area a1 can be estimated to be longer as the number of times the determination unit 218 determines that the gaze point is present in the specific area a1 is larger. Therefore, the presence time data can be the number of times the determination unit 218 determines that the gaze point is present in the specific area a1. That is, the calculation unit 220 can use a counter value CNTA as the presence time data.
The movement number data indicates the number of times the position of the gaze point P moves between the plurality of comparison areas a2 to a4 before the gaze point P first reaches the specific area a1. Therefore, the calculation unit 220 can count how many times the gaze point P moves between the areas, among the specific area a1 and the comparison areas a2 to a4, and use the count obtained at the time the gaze point P arrives at the specific area a1 as the movement number data.
The final area data indicates the area, among the specific area a1 and the comparison areas a2 to a4, in which the gaze point P was last present during the display period, that is, the area at which the examinee gazed last as the answer. By updating the area in which the gaze point P is present every time the gaze point P is detected, the calculation unit 220 can use the detection result at the time when the display period ends as the final area data.
The arrival time data indicates the time from the start time of the display period until the arrival time at which the gaze point first reaches the specific area a1. Therefore, by measuring the elapsed time from the start of the display period with the timer T1 and reading the measured value of the timer T1 at the moment the gaze point first reaches the specific area a1 (when the flag value is set to 1), the calculation unit 220 can use the detection result of the timer T1 as the arrival time data.
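Putting the four computations above together, the bookkeeping over one display period can be sketched as follows, assuming a fixed sampling interval and one region label per sample. The function name and data representation are illustrative, not the patent's implementation.

```python
def movement_passage(samples, dt, specific="A1"):
    """Compute (arrival_time, move_count, presence_time, final_region) from a
    chronological list of region labels, one per sampling interval of dt seconds.
    A label of None means the gaze point was outside every area for that sample."""
    arrival_time = None   # arrival time data (timer T1 analogue)
    move_count = 0        # movement number data
    presence_time = 0.0   # presence time data
    final_region = None   # final area data
    prev = None
    for i, region in enumerate(samples):
        if region == specific:
            if arrival_time is None:
                arrival_time = i * dt      # first arrival: the flag would flip to 1 here
            presence_time += dt
        elif region is not None and arrival_time is None and prev not in (None, region):
            move_count += 1                # moved between areas before the first arrival
        if region is not None:
            prev = region
            final_region = region          # updated every sample; the last value survives
    return arrival_time, move_count, presence_time, final_region
```

For example, the sample sequence A2, A2, A3, A4, A1, A1 at 0.1-second intervals yields an arrival time of 0.4 s, two moves before arrival, a presence time of 0.2 s, and A1 as the final area.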
In the present embodiment, the evaluation unit 224 obtains evaluation data from the presence time data, the movement number data, the final area data, and the arrival time data.
Here, the data value of the final area data is denoted D1, the data value of the presence time data D2, the data value of the arrival time data D3, and the data value of the movement number data D4. The data value D1 of the final area data is 1 if the final gaze point P of the subject is in the specific area a1 (that is, a correct answer), and 0 if it is not in the specific area a1 (that is, an incorrect answer). The data value D2 of the presence time data is the number of seconds during which the gaze point P was present in the specific area a1; an upper limit shorter than the display period may be set for D2. The data value D3 of the arrival time data is the reciprocal of the arrival time, for example 1/(arrival time) ÷ 10, where 10 is a coefficient chosen so that, with a minimum arrival time of 0.1 second, the arrival time evaluation value is 1 or less. The count value is used directly as the data value D4 of the movement number data; an upper limit may be set for D4 as appropriate.
In this case, the evaluation value ANS is expressed as: ANS = D1·K1 + D2·K2 + D3·K3 + D4·K4, where K1 to K4 are weighting constants. The constants K1 to K4 can be set as appropriate.
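As a worked sketch of this weighted sum: the weights K1 to K4, the caps on D2 and D4, and the function name below are illustrative assumptions, since the patent leaves these values open.

```python
def evaluation_value(final_ok, presence_secs, arrival_secs, move_count,
                     k=(0.5, 0.2, 0.2, 0.1), d2_cap=5.0, d4_cap=10):
    """ANS = D1*K1 + D2*K2 + D3*K3 + D4*K4 with the data values defined in the text."""
    d1 = 1.0 if final_ok else 0.0             # final area data: 1 for a correct final gaze
    d2 = min(presence_secs, d2_cap)           # presence time, capped below the display period
    d3 = 1.0 / max(arrival_secs, 0.1) / 10.0  # reciprocal arrival time, scaled to at most 1
    d4 = min(move_count, d4_cap)              # movement count, optionally capped
    k1, k2, k3, k4 = k
    return d1 * k1 + d2 * k2 + d3 * k3 + d4 * k4
```

With these illustrative weights, a subject whose final gaze lands on the correct area, who dwells there for 2 seconds, arrives after 0.5 seconds, and makes 3 comparison moves scores 0.5 + 0.4 + 0.04 + 0.3 ≈ 1.24.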
The evaluation value ANS shown in the above equation has a large value when the data value D1 of the final area data is 1, when the data value D2 of the presence time data is large, when the data value D3 of the arrival time data is large, and when the data value D4 of the movement number data is large. That is, the evaluation value ANS increases as the final gaze point P exists in the specific area a1, the longer the existence time of the gaze point P in the specific area a1, the shorter the arrival time of the gaze point P at the specific area a1 from the start time of the display period, and the greater the number of movements of the gaze point P in each area.
On the other hand, the evaluation value ANS becomes smaller when the data value D1 of the final area data is 0, when the data value D2 of the presence time data is small, when the data value D3 of the arrival time data is small, and when the data value D4 of the movement number data is small. That is, the evaluation value ANS is smaller as the final gaze point P does not exist in the specific area a1, the presence time of the gaze point P in the specific area a1 is shorter, the arrival time of the gaze point P at the specific area a1 from the start time of the display period is longer, and the number of movements of the gaze point P in each area is smaller.
Therefore, the evaluation unit 224 can determine the evaluation data by determining whether or not the evaluation value ANS is equal to or greater than a predetermined value. For example, when the evaluation value ANS is equal to or greater than a predetermined value, it can be evaluated that the possibility that the subject is a person with cognitive dysfunction or brain dysfunction is low. In addition, when the evaluation value ANS is smaller than the predetermined value, it can be evaluated that the subject is highly likely to be a person with cognitive dysfunction or brain dysfunction.
The evaluation unit 224 may store the evaluation value ANS in the storage unit 222. For example, evaluation values ANS for the same subject may be accumulated and stored, and evaluation may be performed by comparing with past evaluation values. For example, when the evaluation value ANS is higher than a past evaluation value, it can be evaluated that the brain function has improved since the previous evaluation. When the cumulative value of the evaluation value ANS gradually increases, it can be evaluated that the brain function is gradually improving.
The evaluation unit 224 may evaluate the presence time data, the movement number data, the final area data, and the arrival time data independently or in combination. For example, when the gaze point P accidentally reaches the specific area a1 while the subject is looking over the plurality of objects, the data value D4 of the movement number data becomes small. In this case, evaluation can be performed in combination with the data value D2 of the above-described presence time data. For example, even if the number of movements is small, it can be evaluated that the subject was able to gaze at the specific area a1 as the correct answer if the presence time is long. When the number of movements is small and the presence time is also short, it can be evaluated that the gaze point P accidentally passed through the specific area a1.
In addition, when the number of movements is small and the final area is the specific area a1, it can be evaluated that, for example, the subject reached the specific area a1 as the correct answer with little movement of the gaze point. On the other hand, when the number of movements is small and the final area is not the specific area a1, it can be evaluated that, for example, the gaze point P accidentally passed through the specific area a1.
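The combined readings in the two paragraphs above can be expressed as simple rules. The thresholds and returned labels below are hypothetical illustrations, not values from the patent.

```python
def interpret(move_count, presence_secs, final_region, specific="A1",
              few_moves=2, long_presence=1.0):
    """Heuristic reading of a small movement count, mirroring the combinations in the text."""
    few = move_count <= few_moves
    if few and final_region == specific and presence_secs >= long_presence:
        return "reached the correct area deliberately with little gaze movement"
    if few and (presence_secs < long_presence or final_region != specific):
        return "gaze point may have passed through the specific area by chance"
    return "inconclusive without combining further data"
```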
In the present embodiment, when the evaluation unit 224 outputs the evaluation data, the output control unit 226 can output, for example, character data such as "the subject is considered unlikely to be a person with cognitive dysfunction or brain dysfunction" or character data such as "the subject is considered likely to be a person with cognitive dysfunction or brain dysfunction" to the output device.
Fig. 12 is a diagram showing an example of a case where the specific object and the instruction information I3 are simultaneously displayed on the display screen.
Fig. 14 is a diagram showing another example of a case where the specific object and the instruction information I4 are displayed on the display screen.
After the first display operation, in the second display operation, as shown in fig. 15, the display controller 202 can display the specific object M8 and the comparison objects M9 to M11 which are faces of persons different from the specific object M8. The area setting unit 216 can set a specific area a8 corresponding to the specific object M8 and can set comparison areas a9 to a11 corresponding to the comparison objects M9 to M11. As shown in fig. 15, the display controller 202 may simultaneously display the specific object M8, the comparison objects M9 to M11, and the instruction information I5 in the second display operation. In this way, the display control unit 202 may display the instruction information for each of the first display operation and the second display operation. This can further shorten the inspection time.
Next, an example of the evaluation method according to the present embodiment will be described with reference to fig. 16. Fig. 16 is a flowchart showing an example of the evaluation method according to the present embodiment. In the present embodiment, the display control unit 202 starts video playback (step S101). After the waiting time until the evaluation video portion has elapsed, the arithmetic unit 220 resets the timer and the counters, and sets the flag value to 0.
The gaze point detecting unit 214 detects position data of the gaze point of the subject on the display screen at every predetermined sampling period.
If it is determined that the gaze point P is present in the specific area a1 (yes in step S109), the arithmetic unit 220 determines whether or not the flag value is 1, that is, whether or not the gaze point P has already reached the specific area a1 (flag value 1: reached, 0: not reached) (step S110). When the flag value is 1 (yes in step S110), the arithmetic unit 220 skips the following steps S111 to S113 and performs the processing of step S114, which will be described later.
When the flag value is not 1, that is, when the gaze point P has reached the specific area a1 for the first time (no in step S110), the arithmetic unit 220 extracts the measurement result of the timer T1 as the arrival time data (step S111). The arithmetic unit 220 also stores, in the storage unit 222, the movement count data indicating how many times the gaze point P moved between the areas before reaching the specific area a1 (step S112). After that, the arithmetic unit 220 changes the flag value to 1 (step S113).
Next, the calculation unit 220 determines whether or not the final area, which is the area where the gaze point P exists in the latest detection, is the specific area a1 (step S114). When determining that the final area is the specific area a1 (yes in step S114), the arithmetic unit 220 skips the following steps S115 and S116 and performs the processing of step S117 described later. When determining that the final area is not the specific area a1 (no in step S114), the arithmetic unit 220 performs +1 on the cumulative number of times indicating how many times the point of regard P has moved between the areas (step S115), and changes the final area to the specific area a1 (step S116). Further, the arithmetic unit 220 performs +1 on the count value CNTA indicating the presence time data in the specific area a1 (step S117). Thereafter, the arithmetic unit 220 performs the processing of step S130 and subsequent steps, which will be described later.
When it is determined that the gaze point P is not present in the specific area a1 (no in step S109), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area a2 (step S118). If it is determined that the gaze point P is present in the comparison area a2 (yes in step S118), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P was present in the latest detection, is the comparison area a2 (step S119). When determining that the final area is the comparison area a2 (yes in step S119), the arithmetic unit 220 skips the following steps S120 and S121 and performs the processing of step S130, which will be described later. When determining that the final area is not the comparison area a2 (no in step S119), the arithmetic unit 220 performs +1 on the cumulative number of times indicating how many times the gaze point P has moved between the areas (step S120), and changes the final area to the comparison area a2 (step S121). Thereafter, the arithmetic unit 220 performs the processing of step S130 and subsequent steps.
When it is determined that the gaze point P is not present in the comparison area a2 (no in step S118), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area a3 (step S122). If it is determined that the gaze point P is present in the comparison area a3 (yes in step S122), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P was present in the latest detection, is the comparison area a3 (step S123). When determining that the final area is the comparison area a3 (yes in step S123), the arithmetic unit 220 skips the following steps S124 and S125 and performs the processing of step S130, which will be described later. When determining that the final area is not the comparison area a3 (no in step S123), the arithmetic unit 220 performs +1 on the cumulative number of times indicating how many times the gaze point P has moved between the areas (step S124), and changes the final area to the comparison area a3 (step S125). Thereafter, the arithmetic unit 220 performs the processing of step S130 and subsequent steps.
When it is determined that the gaze point P is not present in the comparison area a3 (no in step S122), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area a4 (step S126). If it is determined that the gaze point P is present in the comparison area a4 (yes in step S126), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P was present in the latest detection, is the comparison area a4 (step S127). When determining that the final area is the comparison area a4 (yes in step S127), the arithmetic unit 220 skips the following steps S128 and S129 and performs the processing of step S130, which will be described later. When determining that the final area is not the comparison area a4 (no in step S127), the arithmetic unit 220 performs +1 on the cumulative number of times indicating how many times the gaze point P has moved between the areas (step S128), and changes the final area to the comparison area a4 (step S129). Thereafter, the arithmetic unit 220 performs the processing of step S130 and subsequent steps.
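The repetitive branch structure of steps S109 to S129 can be summarized in a single per-sample update routine. The following is a minimal sketch, assuming one call per sampling period with the name of the area containing the gaze point (or None when it is in no area); all variable and state names are illustrative, not from the patent.

```python
def new_state():
    """Fresh counter state, mirroring the resets before video playback."""
    return {"samples": 0, "flag": False, "arrival_sample": None,
            "moves_before_arrival": None, "moves": 0,
            "final_area": None, "CNTA": 0}

def update_gaze_counters(region, state):
    """One sampling step of the cascade in steps S109-S129.

    region: name of the area containing the gaze point ("A1", "A2", ...,
    or None). state: dict of counters (see new_state).
    """
    state["samples"] += 1
    if region == "A1":
        if not state["flag"]:
            # first arrival at the correct area: record arrival time data
            # and the number of movements so far (steps S111-S113)
            state["arrival_sample"] = state["samples"]
            state["moves_before_arrival"] = state["moves"]
            state["flag"] = True
        if state["final_area"] != "A1":
            # entering A1 from elsewhere counts as one movement (S115-S116)
            state["moves"] += 1
            state["final_area"] = "A1"
        state["CNTA"] += 1  # presence time in the specific area (S117)
    elif region in ("A2", "A3", "A4"):
        if state["final_area"] != region:
            # entering a different comparison area counts as one movement
            state["moves"] += 1
            state["final_area"] = region
    return state
```

Running this once per detected sample reproduces the presence time, movement count, arrival, and final area data that the evaluation value is later computed from.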
Then, the arithmetic unit 220 determines whether or not the video reproduction completion time has been reached based on the detection result of the detection timer T1 (step S130). If the arithmetic unit 220 determines that the video reproduction completion time has not been reached (no in step S130), the processing from step S106 onward is repeated.
When the arithmetic unit 220 determines that the video reproduction completion time has been reached (yes in step S130), the display control unit 202 stops the video reproduction (step S131). After stopping the reproduction of the video, the evaluation unit 224 calculates an evaluation value ANS from the presence time data, the number of movements data, the final area data, and the arrival time data obtained from the above processing results (step S132), and obtains evaluation data from the evaluation value ANS. Then, the output control unit 226 outputs the evaluation data obtained by the evaluation unit 224 (step S133).
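The text does not restate here how the evaluation value ANS combines the four data values. The following is a minimal sketch assuming a simple weighted linear combination; the weights k1 to k4 and the transformation of the arrival time into a score are hypothetical placeholders, not values from the patent.

```python
def evaluation_value(final_is_specific: bool, presence_count: int,
                     arrival_time_s: float, move_count: int,
                     k1: float = 1.0, k2: float = 1.0,
                     k3: float = 1.0, k4: float = 1.0) -> float:
    """Illustrative linear combination of the four gaze-point data values.

    D1: final area data (1 if the last gazed area was the specific area),
    D2: presence time data, D3: arrival time data, D4: movement count data.
    """
    d1 = 1.0 if final_is_specific else 0.0
    d2 = float(presence_count)
    d3 = 1.0 / (1.0 + arrival_time_s)  # earlier arrival scores higher (illustrative)
    d4 = float(move_count)
    return k1 * d1 + k2 * d2 + k3 * d3 + k4 * d4
```

With such a form, accumulating ANS per subject and comparing against past values, as described above, is a straightforward comparison of scalars.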
As described above, the evaluation device according to the present embodiment includes: a gaze point detection unit 214 that detects the position of the gaze point of a subject who observes an image displayed on the display screen; a display control unit 202 that displays, on the display screen, an image including a specific object and comparison objects different from the specific object; an area setting unit 216 that sets a specific area corresponding to the specific object and comparison areas corresponding to the comparison objects; a determination unit that determines, based on the position of the gaze point, whether the gaze point is present in the specific area and the comparison areas while the image is displayed; a calculation unit 220 that calculates gaze point data based on the determination result; and an evaluation unit 224 that obtains evaluation data of the subject from the gaze point data.
In addition, the evaluation method according to the present embodiment includes: detecting the position of the gaze point of a subject who observes an image displayed on the display screen; displaying, on the display screen, an image including a specific object and comparison objects different from the specific object; setting a specific area corresponding to the specific object and comparison areas corresponding to the comparison objects; determining, based on the position of the gaze point, whether the gaze point is present in the specific area and the comparison areas while the image is displayed; calculating gaze point data based on the determination result; and obtaining evaluation data of the subject from the gaze point data.
In addition, the evaluation program according to the present embodiment causes a computer to execute: a process of detecting the position of the gaze point of a subject who observes an image displayed on the display screen; a process of displaying, on the display screen, an image including a specific object and comparison objects different from the specific object; a process of setting a specific area corresponding to the specific object and comparison areas corresponding to the comparison objects; a process of determining, based on the position of the gaze point, whether the gaze point is present in the specific area and the comparison areas while the image is displayed; a process of calculating gaze point data based on the determination result; and a process of obtaining evaluation data of the subject from the gaze point data.
According to the present embodiment, since the evaluation data of the subject can be obtained from the course of movement of the gaze point during the display period, chance factors can be reduced and the memory of the subject can be evaluated with high accuracy. Thus, the evaluation device can evaluate the possibility that the subject has cognitive dysfunction or brain dysfunction with high accuracy.
The technical scope of the present invention is not limited to the above-described embodiments, and appropriate modifications can be made without departing from the gist of the present invention. For example, each of the above embodiments has described, as an example, the case where the evaluation device is used to evaluate the possibility that a subject has cognitive dysfunction or brain dysfunction, but the present invention is not limited to this use.
In the above embodiment, the case where the area setting unit 216 sets the specific area a1 and the comparison areas a2 to a4 in the second display operation has been described as an example, but the invention is not limited thereto. For example, the area setting unit 216 may set a corresponding area corresponding to the specific object M1 displayed on the display screen in the first display operation.
In the above-described embodiment, the case where the display mode of the specific object is in a fixed state in the first display operation has been described as an example, but the present invention is not limited to this. For example, the display control section 202 may change the display form of the specific object in the first display operation.
Figs. 17 to 23 are diagrams showing examples of a series of evaluation images displayed on the display screen. In this example, instruction information I6 for making the subject memorize which of the 5 kinds of food the bear eats is displayed together with the specific object F and the comparison objects F1 to F4.
As shown in fig. 17, the area setting unit 216 sets a specific area a corresponding to the specific object F and sets comparison areas B1 to B4 corresponding to the comparison objects F1 to F4. Further, the area setting unit 216 sets the instruction area C corresponding to the instruction information I6. The area setting unit 216 sets the specific area a to, for example, a rectangular range including the specific object F. Similarly, the area setting unit 216 sets the comparison areas B1 to B4 to rectangular ranges including the comparison objects F1 to F4, respectively. The area setting unit 216 sets the instruction area C to a rectangular range including the instruction information I6. The shapes of the specific area a, the comparison areas B1 to B4, and the instruction area C are not limited to a rectangle, and may be other shapes such as a circle, an ellipse, or a polygon. In this case, the area setting unit 216 sets the specific area a, the comparison areas B1 to B4, and the instruction area C so as not to overlap each other.
Next, in the first display operation, the display control unit 202 displays, on the display screen, an animation in which the bear eats 1 of the 5 kinds of food.
Fig. 18 shows a scene in which the bear picks up an orange and brings it to its mouth. Fig. 19 shows a scene in which the bear puts the orange into its mouth and closes its mouth. Fig. 20 shows a scene in which the orange is no longer visible inside the bear's mouth and the bear is eating it. In this way, the display control unit 202 changes the display mode of the specific object F. By displaying the series of motions of the bear shown in figs. 18 to 20 on the display screen, the subject can be made to memorize which food the bear ate.
As shown in figs. 18 to 20, the area setting unit 216 maintains, from the state shown in fig. 17, the specific area a corresponding to the specific object F and the comparison areas B1 to B4 corresponding to the comparison objects F1 to F4. Further, the area setting unit 216 cancels the setting of the instruction area C. The area setting unit 216 then sets the movement area D to a rectangular range including the trajectory along which the orange moves from when it is picked up until it is put into the bear's mouth. The shapes of the specific area a, the comparison areas B1 to B4, and the movement area D are not limited to a rectangle, and may be other shapes such as a circle, an ellipse, or a polygon. In this case, the area setting unit 216 sets the specific area a, the comparison areas B1 to B4, and the movement area D so as not to overlap each other. When the scene in fig. 20 is displayed after the scene in fig. 19 ends, the area setting unit 216 cancels the setting of the movement area D. That is, the setting of the movement area D is canceled at the predetermined timing at which the orange enters the bear's mouth and becomes invisible as the mouth closes.
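The rectangular, non-overlapping regions managed by the area setting unit 216 can be modeled as axis-aligned rectangles with a containment test and an overlap check. The following is a sketch under that assumption; the class, coordinates, and names are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    """Axis-aligned rectangular gaze region (coordinates are illustrative)."""
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        """True if the gaze point (x, y) lies inside this region."""
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

    def overlaps(self, other: "Region") -> bool:
        """True if two rectangles intersect (used to enforce non-overlap)."""
        return not (self.x1 < other.x0 or other.x1 < self.x0 or
                    self.y1 < other.y0 or other.y1 < self.y0)

def hit_region(regions, x, y):
    """Return the name of the first region containing (x, y), or None."""
    for r in regions:
        if r.contains(x, y):
            return r.name
    return None
```

Because the regions are set so as not to overlap, at most one region ever contains a given gaze sample, so the first-match lookup is unambiguous.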
After the first display operation, in the second display operation, as shown in fig. 21, the display control unit 202 displays the instruction information I7 for making the subject watch which of the 5 kinds of food the bear eats, in a state where the 5 kinds of food are arranged in front of the bear. The area setting unit 216 sets the specific area a corresponding to the specific object F and sets the comparison areas B1 to B4 corresponding to the comparison objects F1 to F4 from the state shown in fig. 20. Further, the area setting unit 216 sets the instruction area E corresponding to the instruction information I7. The shapes of the specific area a, the comparison areas B1 to B4, and the indication area E are not limited to a rectangle, and may be other shapes such as a circle, an ellipse, and a polygon. In this case, the area setting unit 216 sets the specific area a, the comparison areas B1 to B4, and the instruction area E so as not to overlap each other.
After the instruction information I7 is displayed for a predetermined period, as shown in fig. 22, the display control unit 202 deletes the display of the instruction information I7. The area setting unit 216 cancels the setting of the instruction area E in accordance with the timing of deleting the display of the instruction information I7. The display control unit 202 and the area setting unit 216 maintain this state for a predetermined period. That is, the display control unit 202 keeps the specific object F and the comparison objects F1 to F4 displayed on the display screen for the predetermined period.
After the predetermined period of time has elapsed, as shown in fig. 23, the display control unit 202 may display an image indicating a correct answer to the instruction information I7. Fig. 23 shows, as an example, an image in which an area where an orange is placed is surrounded by a frame and a bear looks in the direction of the orange. By displaying the image of fig. 23, the subject can be made to clearly grasp the correct answer. In addition, when displaying an image indicating a correct answer, the area setting unit 216 may cancel the specific area a, the comparison areas B1 to B4, and the instruction area E.
Fig. 24 is a flowchart showing a processing flow of an evaluation method according to another example. As shown in fig. 24, the display control unit 202 displays instruction information I6 for the subject to remember which of the 5 kinds of food the bear has eaten (memory instruction processing: step S201).
Next, as a first display operation, the display control unit 202 displays, on the display screen, an animation in which the bear eats 1 of the 5 kinds of food (memorizing processing: step S202).
Next, as a second display operation, the display control unit 202 displays instruction information I7 for making the subject look at which of the 5 kinds of food the bear has eaten in a state where the 5 kinds of food are arranged in front of the bear (answer processing: step S203).
Next, the display control unit 202 displays an image indicating the correct answer to the instruction information I7 (correct answer display processing: step S204).
Next, the evaluation unit 224 calculates an evaluation value ANS from the presence time data, the movement number data, the final area data, and the arrival time data obtained from the above processing results, and obtains evaluation data from the evaluation value ANS (step S205). Then, the output control unit 226 outputs the evaluation data obtained by the evaluation unit 224 (step S206).
Fig. 25 is a flowchart showing the flow of processing in the memory instruction processing (step S201). As shown in fig. 25, in the memory instruction processing, the display control unit 202 starts the reproduction of the video (step S301). After the waiting time for the video portion has elapsed, the arithmetic unit 220 resets the timer T1 (step S302) and resets the count values CNTC and RRa of the counters (step S303). The timer T1 is a timer for detecting the timing at which the video of the memory instruction processing portion in the present video ends. The counter CNTC is used to measure the count value CNTC indicating the presence time data of the gaze point P in the instruction area C. The counter RRa counts the cumulative number of times RRa indicating how many times the gaze point P moves between the areas during video reproduction.
The gaze point detecting unit 214 detects position data of the gaze point of the subject on the display screen at every predetermined sampling period (step S304).
When it is determined that the gaze point P is present in the specific area a (yes in step S307), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P was present in the latest detection, is the specific area a (step S308). When determining that the final area is the specific area a (yes in step S308), the arithmetic unit 220 skips the following steps S309 and S310 and performs the processing of step S329, which will be described later. When determining that the final area is not the specific area a (no in step S308), the arithmetic unit 220 performs +1 on the cumulative number RRa indicating how many times the gaze point P has moved between the areas (step S309), and changes the final area to the specific area a (step S310). Thereafter, the arithmetic unit 220 performs the processing of step S329 and subsequent steps.
When it is determined that the gaze point P is not present in the specific area a (no in step S307), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B1 (step S311). If it is determined that the gaze point P is present in the comparison area B1 (yes in step S311), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P is present in the latest detection, is the comparison area B1 (step S312). If the arithmetic unit 220 determines that the final area is the comparison area B1 (yes in step S312), it skips step S313 and step S314 below and performs the processing of step S329 described below. When determining that the final area is not the comparison area B1 (no in step S312), the arithmetic unit 220 performs +1 on the cumulative number RRa of times indicating how many times the point of regard P has moved between the areas (step S313), and changes the final area to the comparison area B1 (step S314). Thereafter, the arithmetic unit 220 performs the processing of step S329 and subsequent steps, which will be described later.
When it is determined that the gaze point P is not present in the comparison area B1 (no in step S311), the arithmetic unit 220 determines whether or not the gaze point P is present in the comparison area B2 (step S315). If it is determined that the gaze point P is present in the comparison area B2 (yes in step S315), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P was present in the latest detection, is the comparison area B2 (step S316). If the arithmetic unit 220 determines that the final area is the comparison area B2 (yes in step S316), it skips the following steps S317 and S318 and performs the processing of step S329, which will be described later. When determining that the final area is not the comparison area B2 (no in step S316), the arithmetic unit 220 performs +1 on the cumulative number RRa indicating how many times the gaze point P has moved between the areas (step S317), and changes the final area to the comparison area B2 (step S318). Thereafter, the arithmetic unit 220 performs the processing of step S329 and subsequent steps.
When it is determined that the gaze point P is not present in the comparison area B2 (no in step S315), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B3 (step S319). If it is determined that the gaze point P is present in the comparison area B3 (yes in step S319), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P was present in the latest detection, is the comparison area B3 (step S320). If the arithmetic unit 220 determines that the final area is the comparison area B3 (yes in step S320), it skips the following steps S321 and S322 and performs the processing of step S329, which will be described later. When determining that the final area is not the comparison area B3 (no in step S320), the arithmetic unit 220 performs +1 on the cumulative number RRa indicating how many times the gaze point P has moved between the areas (step S321), and changes the final area to the comparison area B3 (step S322). Thereafter, the arithmetic unit 220 performs the processing of step S329 and subsequent steps.
When it is determined that the gaze point P is not present in the comparison area B3 (no in step S319), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B4 (step S323). If it is determined that the gaze point P is present in the comparison area B4 (yes in step S323), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P was present in the latest detection, is the comparison area B4 (step S324). If the arithmetic unit 220 determines that the final area is the comparison area B4 (yes in step S324), it skips the following steps S325 and S326 and performs the processing of step S329, which will be described later. When determining that the final area is not the comparison area B4 (no in step S324), the arithmetic unit 220 performs +1 on the cumulative number RRa indicating how many times the gaze point P has moved between the areas (step S325), and changes the final area to the comparison area B4 (step S326). Thereafter, the arithmetic unit 220 performs the processing of step S329 and subsequent steps.
When it is determined that the gaze point P is not present in the comparison area B4 (no in step S323), the arithmetic unit 220 determines whether the gaze point P is present in the instruction area C (step S327). If it is determined that the gaze point P is not present in the instruction area C (no in step S327), the processing of step S329 and subsequent steps, described later, is performed. When it is determined that the gaze point P is present in the instruction area C (yes in step S327), the arithmetic unit 220 performs +1 on the count value CNTC indicating the presence time data of the gaze point P in the instruction area C (step S328). Thereafter, the arithmetic unit 220 performs the processing of step S329 and subsequent steps.
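Unlike the answer-processing loop, the loop of steps S304 to S328 only tracks the cumulative movement count RRa over the areas A and B1 to B4 and the presence time CNTC in the instruction area C. This can be condensed into one per-sample routine; the names below are illustrative, not from the patent.

```python
def memory_instruction_step(region, state):
    """One sampling step of the memory instruction loop (steps S304-S328).

    region: name of the area containing the gaze point ("A", "B1"..."B4",
    "C", or None). state: dict holding final_area, RRa, and CNTC.
    """
    if region in ("A", "B1", "B2", "B3", "B4"):
        if state["final_area"] != region:
            # entering a different object area counts as one movement
            state["RRa"] += 1
            state["final_area"] = region
    elif region == "C":
        # dwell on the instruction text accumulates presence time CNTC
        state["CNTC"] += 1
    return state
```

Condensing the five near-identical branches into one table-driven step makes it clear that the flowchart's repetition is purely per-area bookkeeping.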
Then, the arithmetic unit 220 determines whether or not the video reproduction completion time has been reached based on the detection result of the detection timer T1 (step S329). If the arithmetic unit 220 determines that the video reproduction completion time has not been reached (no in step S329), the processing from step S304 onward is repeated.
When the arithmetic unit 220 determines that the video reproduction completion time has been reached (yes in step S329), the display control unit 202 stops the video reproduction (step S330). After the reproduction of the video is stopped, a memory process is performed (step S202).
Fig. 26 is a flowchart showing the flow of processing in the memorizing processing (step S202). As shown in fig. 26, in the memorizing processing, the display control unit 202 starts the reproduction of the video (step S401). After the waiting time for the video portion has elapsed, the arithmetic unit 220 resets the timer T2 (step S402) and resets the count values CNTAa, CNTB1a, CNTB2a, CNTB3a, CNTB4a, and CNTD of the counters (step S403). The timer T2 is a timer for detecting the timing at which the video of the memorizing processing portion in the present video ends. The counter CNTAa is used to measure the count value CNTAa indicating the presence time data of the gaze point P in the specific area a. The counters CNTB1a to CNTB4a measure the count values CNTB1a to CNTB4a indicating the presence time data of the gaze point P in the comparison areas B1 to B4. The counter CNTD is used to measure the count value CNTD indicating the presence time data of the gaze point P in the movement area D.
The gaze point detecting unit 214 detects position data of the gaze point of the subject on the display screen at every predetermined sampling period (step S404).
When it is determined that the gaze point P exists in the specific area a (yes in step S407), the arithmetic unit 220 performs +1 on the count value CNTAa indicating the presence time data of the gaze point P in the specific area a (step S408). Thereafter, the arithmetic unit 220 performs the processing of step S420 and thereafter, which will be described later.
When it is determined that the gaze point P is not present in the specific area a (no in step S407), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B1 (step S409). If it is determined that the gaze point P is present in the comparison area B1 (yes in step S409), the arithmetic unit 220 performs +1 on the count value CNTB1a indicating the presence time data of the gaze point P in the comparison area B1 (step S410). Thereafter, the arithmetic unit 220 performs the processing of step S420 and thereafter, which will be described later.
When it is determined that the gaze point P is not present in the comparison area B1 (no in step S409), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B2 (step S411). If it is determined that the gaze point P is present in the comparison area B2 (yes in step S411), the arithmetic unit 220 performs +1 on the count value CNTB2a indicating the presence time data of the gaze point P in the comparison area B2 (step S412). Thereafter, the arithmetic unit 220 performs the processing of step S420 and subsequent steps, which will be described later.
When it is determined that the gaze point P is not present in the comparison area B2 (no in step S411), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B3 (step S413). If it is determined that the gaze point P is present in the comparison area B3 (yes in step S413), the arithmetic unit 220 performs +1 on the count value CNTB3a indicating the presence time data of the gaze point P in the comparison area B3 (step S414). Thereafter, the arithmetic unit 220 performs the processing of step S420 and subsequent steps, which will be described later.
When it is determined that the gaze point P is not present in the comparison area B3 (no in step S413), the arithmetic unit 220 determines whether or not the gaze point P is present in the comparison area B4 (step S415). If it is determined that the gaze point P is present in the comparison area B4 (yes in step S415), the arithmetic unit 220 performs +1 on the count value CNTB4a indicating the presence time data of the gaze point P in the comparison area B4 (step S416). Thereafter, the arithmetic unit 220 performs the processing of step S420 and subsequent steps, which will be described later.
When it is determined that the gaze point P is not present in the comparison area B4 (no in step S415), the arithmetic unit 220 determines whether or not the value of the timer T2 exceeds a predetermined time t01 (step S417). The predetermined time t01 corresponds to the point at which the orange enters the bear's mouth and becomes invisible as the mouth closes. When the value of the timer T2 exceeds the predetermined time t01 (yes in step S417), the arithmetic unit 220 skips the processing in steps S418 and S419 and performs the processing of step S420 and subsequent steps, which will be described later. If the value of the timer T2 does not exceed the predetermined time t01 (no in step S417), the arithmetic unit 220 determines whether or not the gaze point P is present in the movement area D (step S418). If it is determined that the gaze point P is not present in the movement area D (no in step S418), the processing of step S420 and subsequent steps is performed. When it is determined that the gaze point P is present in the movement area D (yes in step S418), the arithmetic unit 220 performs +1 on the count value CNTD indicating the presence time data of the gaze point P in the movement area D (step S419). Thereafter, the arithmetic unit 220 performs the processing of step S420 and subsequent steps.
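The time-limited counting for the movement area D (steps S417 to S419) can be sketched with a sample-index cutoff standing in for the comparison of timer T2 against t01. Representing the elapsed time as a sample index is an assumption made here to keep the sketch exact; function and parameter names are illustrative.

```python
def count_presence(in_d_samples, cutoff_index):
    """Accumulate CNTD for the movement area D.

    in_d_samples: list of booleans, one per sampling period, True when the
    gaze point was inside area D. cutoff_index: sample index corresponding
    to the predetermined time t01, after which area D is no longer counted
    (the orange has disappeared into the bear's mouth).
    """
    cntd = 0
    for i, in_d in enumerate(in_d_samples):
        if i >= cutoff_index:  # timer T2 exceeded t01: stop counting D
            break
        if in_d:               # steps S418-S419: dwell inside area D
            cntd += 1
    return cntd
```

This mirrors how the region setting itself is canceled at t01: samples after the cutoff simply cannot contribute to CNTD.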
Then, the arithmetic unit 220 determines whether or not the video reproduction completion time has been reached based on the detection result of the detection timer T2 (step S420). When the arithmetic unit 220 determines that the video reproduction completion time has not been reached (no in step S420), the processing from step S404 onward is repeated.
When the arithmetic unit 220 determines that the video reproduction completion time has been reached (yes in step S420), the display control unit 202 stops the video reproduction (step S421). After the reproduction of the video is stopped, the response processing is performed (step S203).
Fig. 27 is a flowchart showing the flow of processing in the answer processing (step S203). As shown in fig. 27, in the answer processing, the display control unit 202 starts the reproduction of the video (step S501). After the waiting time for the video portion has elapsed, the arithmetic unit 220 resets the timer T3 (step S502), resets the count values CNTAb, CNTB1b, CNTB2b, CNTB3b, CNTB4b, CNTE, and RRb of the counters (step S503), and sets the flag value to 0 (step S504). The timer T3 is a timer for detecting the timing at which the video of the answer processing portion in the present video ends. The counter CNTAb is used to measure the count value CNTAb indicating the presence time data of the gaze point P in the specific area a. The counters CNTB1b to CNTB4b measure the count values CNTB1b to CNTB4b indicating the presence time data of the gaze point P in the comparison areas B1 to B4. The counter CNTE is used to measure the count value CNTE indicating the presence time data of the gaze point P in the instruction area E. The counter RRb counts the cumulative number of times RRb indicating how many times the gaze point P moves between the areas.
The gaze point detecting unit 214 detects position data of the gaze point of the subject on the display screen at every predetermined sampling period (step S505).
When it is determined that the gaze point P is present in the specific area a (yes in step S508), the arithmetic unit 220 determines whether or not the flag value is 1, that is, whether or not the gaze point P has already reached the specific area a (flag value 1: reached, 0: not reached) (step S509). If the flag value is 1 (yes in step S509), the arithmetic unit 220 skips the following steps S510 to S512 and performs the processing of step S513 described later.
When the flag value is not 1, that is, when it is the first time that the gaze point P reaches the specific area a (no in step S509), the arithmetic unit 220 extracts the measurement result of the timer T3 as arrival time data (step S510). The calculation unit 220 stores, in the storage unit 222, the movement count data indicating that the gaze point P has moved between the regions several times before reaching the specific region a (step S511). After that, the arithmetic unit 220 changes the flag value to 1 (step S512).
Next, the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P was present in the latest detection, is the specific area a (step S513). When determining that the final area is the specific area a (yes in step S513), the arithmetic unit 220 skips the following steps S514 and S515 and performs the processing of step S516 described later. When determining that the final area is not the specific area a (no in step S513), the arithmetic unit 220 performs +1 on the cumulative number of times indicating how many times the gaze point P has moved between the areas (step S514), and changes the final area to the specific area a (step S515). Further, the arithmetic unit 220 performs +1 on the count value CNTAb indicating the presence time data of the gaze point P in the specific area a (step S516). Thereafter, the arithmetic unit 220 performs the processing of step S540 and subsequent steps, which will be described later.
When it is determined that the gaze point P is not present in the specific area A (no in step S508), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B1 (step S517). If it is determined that the gaze point P is present in the comparison area B1 (yes in step S517), the arithmetic unit 220 determines whether or not the final area, that is, the area where the gaze point P was present in the most recent detection, is the comparison area B1 (step S518). If the arithmetic unit 220 determines that the final area is the comparison area B1 (yes in step S518), it skips the following steps S519 and S520 and performs the processing of step S521 described later. When determining that the final area is not the comparison area B1 (no in step S518), the arithmetic unit 220 adds 1 to the cumulative number indicating how many times the gaze point P has moved between areas (step S519), and changes the final area to the comparison area B1 (step S520). The arithmetic unit 220 then adds 1 to the count value CNTB1b indicating the presence time data of the gaze point P in the comparison area B1 (step S521). Thereafter, the arithmetic unit 220 performs the processing of step S540 and thereafter, which will be described later.
When it is determined that the gaze point P is not present in the comparison area B1 (no in step S517), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B2 (step S522). If it is determined that the gaze point P is present in the comparison area B2 (yes in step S522), the arithmetic unit 220 determines whether or not the final area, that is, the area where the gaze point P was present in the most recent detection, is the comparison area B2 (step S523). If the arithmetic unit 220 determines that the final area is the comparison area B2 (yes in step S523), it skips the following steps S524 and S525 and performs the processing of step S526 described later. When determining that the final area is not the comparison area B2 (no in step S523), the arithmetic unit 220 adds 1 to the cumulative number indicating how many times the gaze point P has moved between areas (step S524), and changes the final area to the comparison area B2 (step S525). The arithmetic unit 220 then adds 1 to the count value CNTB2b indicating the presence time data of the gaze point P in the comparison area B2 (step S526). Thereafter, the arithmetic unit 220 performs the processing of step S540 and thereafter, which will be described later.
When determining that the gaze point P is not present in the comparison area B2 (no in step S522), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B3 (step S527). If it is determined that the gaze point P is present in the comparison area B3 (yes in step S527), the arithmetic unit 220 determines whether or not the final area, that is, the area where the gaze point P was present in the most recent detection, is the comparison area B3 (step S528). If the arithmetic unit 220 determines that the final area is the comparison area B3 (yes in step S528), it skips the following steps S529 and S530 and performs the processing of step S531 described later. When determining that the final area is not the comparison area B3 (no in step S528), the arithmetic unit 220 adds 1 to the cumulative number indicating how many times the gaze point P has moved between areas (step S529), and changes the final area to the comparison area B3 (step S530). The arithmetic unit 220 then adds 1 to the count value CNTB3b indicating the presence time data of the gaze point P in the comparison area B3 (step S531). Thereafter, the arithmetic unit 220 performs the processing of step S540 and thereafter, which will be described later.
When determining that the gaze point P is not present in the comparison area B3 (no in step S527), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B4 (step S532). If it is determined that the gaze point P is present in the comparison area B4 (yes in step S532), the arithmetic unit 220 determines whether or not the final area, that is, the area where the gaze point P was present in the most recent detection, is the comparison area B4 (step S533). If the arithmetic unit 220 determines that the final area is the comparison area B4 (yes in step S533), it skips the following steps S534 and S535 and performs the processing of step S536 described later. When determining that the final area is not the comparison area B4 (no in step S533), the arithmetic unit 220 adds 1 to the cumulative number indicating how many times the gaze point P has moved between areas (step S534), and changes the final area to the comparison area B4 (step S535). Further, the arithmetic unit 220 adds 1 to the count value CNTB4b indicating the presence time data of the gaze point P in the comparison area B4 (step S536). Thereafter, the arithmetic unit 220 performs the processing of step S540 and thereafter, which will be described later.
When determining that the gaze point P is not present in the comparison area B4 (no in step S532), the arithmetic unit 220 determines whether or not the value of the timer T3 exceeds a predetermined time t02 (step S537). The predetermined time t02 is the time at which the deletion instruction information I7 is displayed. When the value of the timer T3 exceeds the predetermined time t02 (yes in step S537), the arithmetic unit 220 skips steps S538 and S539 and performs the processing of step S540 and thereafter. If the value of the timer T3 does not exceed the predetermined time t02 (no in step S537), the arithmetic unit 220 determines whether or not the gaze point P is present in the instruction area E (step S538). When it is determined that the gaze point P is present in the instruction area E (yes in step S538), the arithmetic unit 220 adds 1 to the count value CNTE indicating the presence time data of the gaze point P in the instruction area E (step S539). Thereafter, the arithmetic unit 220 performs the processing of step S540 and thereafter, which will be described later.
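Because steps S517 to S536 repeat one pattern per comparison area, the area dispatch of steps S508 to S539 can be sketched generically (a Python illustration; `dispatch`, `gaze_in`, and the state keys are hypothetical names, and the first-arrival flag handling of steps S509 to S512 for the specific area A is omitted for brevity):

```python
def update_region(state, region):
    """Common bookkeeping (S518-S521 and their B2-B4 analogues):
    count inter-area moves and accumulate presence time for `region`."""
    if state["final_area"] != region:   # last detection was in a different area
        state["move_count"] += 1        # S519 / S524 / S529 / S534
        state["final_area"] = region    # S520 / S525 / S530 / S535
    state["counts"][region] += 1        # S521 / S526 / S531 / S536

def dispatch(state, gaze_in, timer_t3, t02):
    """One sampling cycle: decide which area the gaze point P falls in
    (S508, S517, S522, S527, S532). `gaze_in(region)` is a hypothetical
    hit test returning True when P lies inside that region."""
    for region in ("A", "B1", "B2", "B3", "B4"):
        if gaze_in(region):
            update_region(state, region)
            return
    # P is in none of the object areas: check the instruction area E only
    # while the deletion instruction information I7 is displayed (S537-S539)
    if timer_t3 <= t02 and gaze_in("E"):
        state["counts"]["E"] += 1

# Usage: one cycle with P in comparison area B2, then one with P in
# instruction area E before t02 has elapsed.
state = {"final_area": "A", "move_count": 0,
         "counts": {r: 0 for r in ("A", "B1", "B2", "B3", "B4", "E")}}
dispatch(state, lambda r: r == "B2", timer_t3=1.0, t02=3.0)
dispatch(state, lambda r: r == "E", timer_t3=1.5, t02=3.0)
```

As in the text, time spent in the instruction area E adds to CNTE but does not update the final area or the movement count.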
Then, the arithmetic unit 220 determines whether or not the video reproduction end time has been reached, based on the measurement result of the detection timer T3 (step S540). If the arithmetic unit 220 determines that the video reproduction end time has not been reached (no in step S540), it repeats the processing from step S505 onward.
When the arithmetic unit 220 determines that the video reproduction end time has been reached (yes in step S540), the display control unit 202 stops the reproduction of the video (step S541). After the reproduction of the video is stopped, the evaluation calculation (step S205) and the evaluation value output (step S206) are performed.
In the evaluation calculation, the evaluation value ANS is expressed as:
ANS=K11·RRa+K12·CNTC+K13·CNTAa
+K14·CNTB1a+K15·CNTB2a+K16·CNTB3a
+K17·CNTB4a+K18·CNTD+K19·CNTAb
+K20·CNTB1b+K21·CNTB2b+K22·CNTB3b
+K23·CNTB4b+K24·CNTE+K25·RRb
where K11 to K25 are constants for weighting. The constants K11 to K25 can be set as appropriate.
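Since ANS is a plain weighted sum of the gaze point data, the evaluation calculation of step S205 can be sketched directly (a Python illustration; the data values and the magnitudes of the weights below are arbitrary placeholders chosen only to respect the coefficient signs discussed in the text, not values from the embodiment):

```python
# Gaze point data collected over the two display operations
# (arbitrary placeholder values for illustration).
data = {
    "RRa": 3, "CNTC": 20, "CNTAa": 15,
    "CNTB1a": 5, "CNTB2a": 4, "CNTB3a": 6, "CNTB4a": 5,
    "CNTD": 10, "CNTAb": 40,
    "CNTB1b": 2, "CNTB2b": 1, "CNTB3b": 3, "CNTB4b": 2,
    "CNTE": 8, "RRb": 4,
}
# Weighting constants K11 to K25: negative for correct-answer tendencies,
# positive for incorrect ones, |K19| and |K20..K23| larger than the rest,
# K18 small.  Magnitudes here are assumptions.
K = {
    "RRa": -1.0, "CNTC": -0.2, "CNTAa": -0.5,
    "CNTB1a": 0.3, "CNTB2a": 0.3, "CNTB3a": 0.3, "CNTB4a": 0.3,
    "CNTD": 0.1, "CNTAb": -2.0,
    "CNTB1b": 1.5, "CNTB2b": 1.5, "CNTB3b": 1.5, "CNTB4b": 1.5,
    "CNTE": -0.3, "RRb": 1.0,
}
ans = sum(K[k] * data[k] for k in data)

# Step S206-style decision against a hypothetical predetermined value.
HIGH_RISK_THRESHOLD = 0.0
high_risk = ans >= HIGH_RISK_THRESHOLD
```

With these placeholder inputs the long dwell on the correct orange (CNTAb) dominates, driving ANS well below the threshold.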
The value of RRa becomes higher as the subject tends to confirm the objects more thoroughly. In this case, by setting K11 to a negative coefficient, the higher the value of RRa, the lower the evaluation value ANS becomes.
As for CNTC, the value becomes higher as the subject tends to read the instruction text more reliably. In this case, by setting K12 to a negative coefficient, the higher the value of CNTC, the lower the evaluation value ANS becomes.
As for CNTAa, the value becomes higher as the subject tends to observe more carefully the orange that the bear ate. In this case, by setting K13 to a negative coefficient, the higher the value of CNTAa, the lower the evaluation value ANS becomes.
As for CNTB1a to CNTB4a, the values become higher as the subject tends to observe carefully the foods other than the orange eaten by the bear. In this case, by setting K14 to K17 to positive coefficients, the higher the values of CNTB1a to CNTB4a, the higher the evaluation value ANS becomes.
The value of CNTD becomes higher as the subject tends to confirm the objects more thoroughly. On the other hand, it also becomes high when the subject tends to follow only the moving object. In this case, K18 may be set to a positive coefficient whose value is lower than the other coefficients, for example.
As for CNTAb, the value becomes higher as the subject tends to observe more carefully the orange, which is the correct answer. In this case, by setting K19 to a negative coefficient whose absolute value is larger than the other coefficients, the higher the value of CNTAb, the more greatly the evaluation value ANS decreases.
As for CNTB1b to CNTB4b, the values become higher as the subject tends to observe more carefully a food that is an incorrect answer. In this case, by setting K20 to K23 to positive coefficients whose absolute values are larger than the other coefficients, the higher the values of CNTB1b to CNTB4b, the larger the evaluation value ANS becomes.
As for CNTE, the value becomes higher as the subject tends to confirm the instruction information I7 more carefully. In this case, by setting K24 to a negative coefficient, the higher the value of CNTE, the lower the evaluation value ANS becomes.
The value of RRb becomes higher as the subject tends to hesitate in selecting the correct answer. In this case, by setting K25 to a positive coefficient, the higher the value of RRb, the higher the evaluation value ANS becomes.
The evaluation unit 224 can obtain the evaluation data by determining whether or not the evaluation value ANS is equal to or greater than a predetermined value. For example, when the evaluation value ANS is equal to or greater than the predetermined value, it can be evaluated that the subject is highly likely to be a person with cognitive dysfunction or brain dysfunction. When the evaluation value ANS is smaller than the predetermined value, it can be evaluated that the subject is unlikely to be a person with cognitive dysfunction or brain dysfunction.
The evaluation unit 224 may also calculate the evaluation value of the subject using at least one piece of the above-described gaze point data. For example, if the presence time data CNTAb of the specific area A is equal to or greater than a predetermined value, the evaluation unit 224 can evaluate that the subject is unlikely to be a person with cognitive dysfunction or brain dysfunction. Likewise, if the ratio of the presence time data CNTAb of the specific area A to the presence time data CNTB1b to CNTB4b of the comparison areas B1 to B4 (the ratio of the attention rate of the specific area A to that of the comparison areas B1 to B4) is equal to or greater than a predetermined value, the evaluation unit 224 can evaluate that the subject is unlikely to be a person with cognitive dysfunction or brain dysfunction. Further, if the ratio of the presence time data CNTAb of the specific area A to the total gazing time (the ratio of the gazing time of the specific area A to the total gazing time) is equal to or greater than a predetermined value, the evaluation unit 224 can evaluate that the subject is unlikely to be a person with cognitive dysfunction or brain dysfunction. In addition, the evaluation unit 224 can evaluate that the subject is unlikely to be a person with cognitive dysfunction or brain dysfunction if the final area is the specific area A, and that the subject is highly likely to be such a person if the final area is one of the comparison areas B1 to B4.
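The single-datum evaluations above can be combined into one predicate (a Python sketch; the function name and all threshold values are hypothetical assumptions, not values from the embodiment):

```python
def low_risk(cnt_ab, cnt_b_sum, total_fixation, final_area,
             time_thr=30, ratio_thr=2.0, share_thr=0.4):
    """Return True when the subject is judged unlikely to have cognitive
    dysfunction or brain dysfunction, using any one of the criteria in
    the text.  All thresholds are placeholder assumptions."""
    if cnt_ab >= time_thr:                              # presence time in area A
        return True
    if cnt_b_sum and cnt_ab / cnt_b_sum >= ratio_thr:   # A vs B1-B4 attention ratio
        return True
    if total_fixation and cnt_ab / total_fixation >= share_thr:
        return True                                     # share of total gazing time
    return final_area == "A"                            # last gazed area
```

For example, `low_risk(40, 10, 100, "B2")` passes on presence time alone, while `low_risk(10, 10, 100, "B2")` fails every criterion.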
Fig. 28 to 30 are diagrams showing other examples of a series of evaluation images displayed on the display screen.
As shown in fig. 28, the area setting unit 216 sets a specific area A corresponding to the specific object W and sets comparison areas B1 to B5 corresponding to the comparison objects W1 to W5. Further, the area setting unit 216 sets an instruction area C corresponding to the instruction information I8. The area setting unit 216 sets the specific area A to, for example, a rectangular range corresponding to the specific object W. In fig. 28, since the specific object W is a rectangular window, the specific area A can be set so as to overlap the outline of the window. Similarly, the area setting unit 216 can set the comparison areas B1 to B5 so as to overlap the outlines of the windows of the comparison objects W1 to W5. The area setting unit 216 sets the instruction area C to a rectangular range including the instruction information I8. The shapes of the specific area A, the comparison areas B1 to B5, and the instruction area C are not limited to rectangles, and may be other shapes such as circles, ellipses, and polygons. In any case, the area setting unit 216 sets the specific area A, the comparison areas B1 to B5, and the instruction area C so as not to overlap one another. The display control unit 202 and the area setting unit 216 maintain this state for a predetermined period. That is, the display control unit 202 causes the specific object W and the comparison objects W1 to W5 to be displayed on the display screen for the predetermined period.
After the above display for the predetermined period, as shown in fig. 29, the display control unit 202 erases the image of the person from the window of the specific object W. In this way, the display control unit 202 changes the display mode of the specific object W. Then, as a second display operation, instruction information I9 for making the subject gaze at the window, among the six windows, in which the person had shown his or her face is displayed. As in the state shown in fig. 28, the area setting unit 216 sets the specific area A corresponding to the specific object W and the comparison areas B1 to B5 corresponding to the comparison objects W1 to W5. Further, the area setting unit 216 sets an instruction area E corresponding to the instruction information I9. The shapes of the specific area A, the comparison areas B1 to B5, and the instruction area E are not limited to rectangles, and may be other shapes such as circles, ellipses, and polygons. In any case, the area setting unit 216 sets the specific area A, the comparison areas B1 to B5, and the instruction area E so as not to overlap one another.
After the second display operation is performed for a predetermined period, as shown in fig. 30, the display control unit 202 may display an image indicating the correct answer to the instruction information I9. In fig. 30, as an example, the person again shows his or her face in the window corresponding to the specific object W, and instruction information I10 indicating that this window is the correct answer is displayed. By displaying the image of fig. 30, the subject can clearly grasp the correct answer. When displaying the image indicating the correct answer, the area setting unit 216 may cancel the settings of the specific area A, the comparison areas B1 to B5, and the instruction area E.
As described above, the evaluation device of the above embodiment includes: a display screen; a gaze point detection unit that detects the position of the gaze point of the subject observing the display screen; a display control unit that displays, on the display screen, an image including a specific object and comparison objects different from the specific object; an area setting unit that sets a specific area corresponding to the specific object and comparison areas corresponding to the comparison objects; a determination unit that determines, based on the position of the gaze point, whether the gaze point is present in the specific area and the comparison areas, respectively, while the image is displayed; a calculation unit that calculates gaze point data based on the determination result of the determination unit; and an evaluation unit that obtains evaluation data of the subject based on the gaze point data.
In addition, the evaluation method of the above embodiment includes: detecting the position of the gaze point of the subject observing the display screen; displaying, on the display screen, an image including a specific object and comparison objects different from the specific object; setting a specific area corresponding to the specific object and comparison areas corresponding to the comparison objects; determining, based on the position of the gaze point, whether the gaze point is present in the specific area and the comparison areas, respectively, while the image is displayed; calculating gaze point data based on the determination result; and obtaining evaluation data of the subject based on the gaze point data.
In addition, the evaluation program of the above embodiment causes a computer to execute: a process of detecting the position of the gaze point of the subject observing the display screen; a process of displaying, on the display screen, an image including a specific object and comparison objects different from the specific object; a process of setting a specific area corresponding to the specific object and comparison areas corresponding to the comparison objects; a process of determining, based on the position of the gaze point, whether the gaze point is present in the specific area and the comparison areas, respectively, while the image is displayed; a process of calculating gaze point data based on the determination result; and a process of obtaining evaluation data of the subject based on the gaze point data.
Therefore, both when the display mode of the specific object is not changed in the first display operation and when it is changed, the evaluation data of the subject can be obtained from the movement course of the gaze point during the display period. By diversifying the display mode of the specific object in this way, the influence of chance can be further reduced, and the memory of the subject can be evaluated with high accuracy. Thus, the evaluation of the subject can be performed with high accuracy.
Description of the symbols
A … specific area, B1 to B5 … comparison areas, C, E … instruction areas, D … movement area, P … gaze point, T3 … detection timer, ANS … evaluation value, CNTAa, CNTAb, CNTB1a to CNTB4a, CNTB1b to CNTB4b, CNTC, CNTD, CNTE … count values, I7 to I10 … instruction information, W … specific object, W1 to W5 … comparison objects, 202 … display control unit, 214 … gaze point detection unit, 216 … area setting unit, 218 … determination unit, 220 … arithmetic unit, 222 … storage unit, 224 … evaluation unit