Evaluation device, evaluation method, and evaluation program

Document No.: 1144943; Publication date: 2020-09-11

This technology, "Evaluation device, evaluation method, and evaluation program", was devised by 首藤胜行 and 鬼头诚 on 2019-03-08. Abstract: The evaluation device of the present invention includes: a display screen; a gaze point detection unit that detects the position of a gaze point of a subject observing the display screen; a display control unit that displays, on the display screen, an image including a specific object and a comparison object different from the specific object; an area setting unit that sets a specific area corresponding to the specific object and a comparison area corresponding to the comparison object; a determination unit that determines, based on the position of the gaze point, whether or not the gaze point is present in the specific area and in the comparison area while the image is displayed; a calculation unit that calculates gaze point data based on the determination result of the determination unit; and an evaluation unit that obtains evaluation data of the subject based on the gaze point data.

1. An evaluation device comprising:

a display screen;

a gaze point detection unit that detects a position of a gaze point of a subject observing the display screen;

a display control unit that displays an image including a specific object and a comparison object different from the specific object on the display screen;

an area setting unit that sets a specific area corresponding to the specific object and a comparison area corresponding to the comparison object;

a determination unit configured to determine whether the gaze point is present in the specific region and the comparison region, respectively, while the image is displayed, based on a position of the gaze point;

a calculation unit that calculates gaze point data based on the determination result of the determination unit; and

an evaluation unit configured to obtain evaluation data of the subject based on the gaze point data.

2. The evaluation device according to claim 1,

the display control unit performs a first display operation of displaying the specific object on the display screen, and then performs a second display operation of displaying the specific object and the comparison object on the display screen,

the determination unit determines whether or not the gaze point is present in the specific region and the comparison region, respectively, during a display period in which the second display operation is performed.

3. The evaluation device according to claim 1,

the display control unit performs a first display operation of changing a display mode of the specific object while the specific object and the comparison object are displayed on the display screen, and then performs a second display operation of displaying the specific object and the comparison object on the display screen,

the determination unit determines whether the gaze point is present in the specific region and the comparison region during the display period in which the first display operation or the second display operation is performed, based on the position of the gaze point.

4. The evaluation device according to any one of claims 1 to 3,

the gaze point data includes: arrival time data indicating the time from the start of the display period until the gaze point first reaches the specific region; movement number data indicating the number of times the gaze point moves between the plurality of comparison regions before first reaching the specific region; presence time data indicating the time during which the gaze point is present in the specific region or the comparison region during the display period; and final region data indicating the region, among the specific region and the comparison region, in which the gaze point is last present during the display period,

the evaluation unit obtains evaluation data of the subject based on at least one type of data included in the gaze point data.

5. The evaluation device according to claim 4,

the evaluation unit obtains the evaluation data by weighting at least one type of data included in the gaze point data.

6. An evaluation method comprising:

displaying an image on a display screen;

detecting a position of a gaze point of a subject observing the display screen;

displaying the image including a specific object and a comparison object on the display screen, the comparison object being different from the specific object;

setting a specific region corresponding to the specific object and a comparison region corresponding to the comparison object;

determining, based on the position of the gaze point, whether the gaze point is present in the specific region and the comparison region, respectively, during a display period in which the image is displayed on the display screen;

calculating gaze point data for the display period based on the determination result; and

based on the gaze point data, evaluation data of the subject is obtained.

7. An evaluation program causing a computer to execute:

displaying an image on a display screen;

detecting a position of a gaze point of a subject observing the display screen;

displaying the image including a specific object and a comparison object on the display screen, the comparison object being different from the specific object;

setting a specific region corresponding to the specific object and a comparison region corresponding to the comparison object;

determining, based on the position of the gaze point, whether the gaze point is present in the specific region and the comparison region, respectively, during a display period in which the image is displayed on the display screen;

calculating gaze point data for the display period based on the determination result; and

based on the gaze point data, evaluation data of the subject is obtained.

Technical Field

The present invention relates to an evaluation device, an evaluation method, and an evaluation program.

Background

In recent years, cognitive dysfunction and brain dysfunction such as dementia have been increasing, and there is a need to detect such disorders as early as possible and to quantitatively evaluate the severity of their symptoms. Symptoms of cognitive dysfunction and brain dysfunction are known to affect memory, so a subject can be evaluated on the basis of memory. For example, a device has been proposed that displays a plurality of numbers, has the subject add the numbers and give an answer, and checks the answer given by the subject (see, for example, Patent Document 1).

Prior art documents

Patent document

Patent Document 1: Japanese Patent Laid-Open No. 2011-083403.

Disclosure of Invention

However, the method of Patent Document 1 and the like merely asks the subject to select an answer, so chance plays a role, the answer is difficult to verify, and high evaluation accuracy is difficult to obtain. A way to evaluate cognitive dysfunction and brain dysfunction with high accuracy is therefore required.

The present invention has been made in view of the above problems, and an object thereof is to provide an evaluation device, an evaluation method, and an evaluation program capable of accurately evaluating cognitive dysfunction and brain dysfunction.

The evaluation device according to the present invention includes: a display screen; a gaze point detection unit that detects a position of a gaze point of a subject observing the display screen;

a display control unit that displays, on the display screen, an image including a specific object and a comparison object different from the specific object; an area setting unit that sets a specific area corresponding to the specific object and a comparison area corresponding to the comparison object; a determination unit that determines, based on the position of the gaze point, whether or not the gaze point is present in the specific area and in the comparison area while the image is displayed; a calculation unit that calculates gaze point data based on the determination result of the determination unit; and an evaluation unit that obtains evaluation data of the subject based on the gaze point data.

The evaluation method according to the present invention includes: displaying an image on a display screen; detecting the position of a gaze point of a subject observing the display screen; displaying, on the display screen, the image including a specific object and a comparison object different from the specific object; setting a specific region corresponding to the specific object and a comparison region corresponding to the comparison object; determining, based on the position of the gaze point, whether the gaze point is present in the specific region and the comparison region, respectively, during a display period in which the image is displayed on the display screen; calculating gaze point data for the display period based on the determination result; and obtaining evaluation data of the subject based on the gaze point data.

The evaluation program according to the present invention causes a computer to execute the following processing: displaying an image on a display screen; detecting the position of a gaze point of a subject observing the display screen; displaying, on the display screen, the image including a specific object and a comparison object different from the specific object; setting a specific region corresponding to the specific object and a comparison region corresponding to the comparison object; determining, based on the position of the gaze point, whether the gaze point is present in the specific region and the comparison region, respectively, during a display period in which the image is displayed on the display screen; calculating gaze point data for the display period based on the determination result; and obtaining evaluation data of the subject based on the gaze point data.

According to the present invention, it is possible to provide an evaluation device, an evaluation method, and an evaluation program that can evaluate cognitive dysfunction and brain dysfunction with high accuracy.

Drawings

Fig. 1 is a perspective view schematically showing an example of a line-of-sight detection device according to the present embodiment;

fig. 2 is a diagram showing an example of a hardware configuration of the line-of-sight detection device according to the present embodiment;

fig. 3 is a functional block diagram showing an example of the sight line detection device according to the present embodiment;

fig. 4 is a schematic diagram for explaining a method of calculating the position data of the corneal curvature center according to the present embodiment;

fig. 5 is a schematic diagram for explaining a method of calculating position data of a corneal center of curvature according to the present embodiment;

fig. 6 is a schematic diagram for explaining an example of the calibration process according to the present embodiment;

fig. 7 is a schematic diagram for explaining an example of the gazing point detection processing according to the present embodiment;

fig. 8 is a diagram showing one example of an indication displayed on a display screen;

fig. 9 is a diagram showing one example of a specific object displayed on a display screen;

fig. 10 is a diagram showing one example of an indication displayed on a display screen;

fig. 11 is a diagram showing one example of a case where a specific object and a plurality of comparison objects are displayed on a display screen;

fig. 12 is a diagram showing another example in a case where an instruction and a specific object are displayed on a display screen;

fig. 13 is a diagram showing another example in a case where a specific object and a plurality of comparison objects are displayed on a display screen;

fig. 14 is a diagram showing another example in a case where an instruction and a specific object are displayed on a display screen;

fig. 15 is a diagram showing another example in a case where a specific object and a plurality of comparison objects are displayed on a display screen;

fig. 16 is a flowchart showing an example of the evaluation method according to the present embodiment;

fig. 17 is a diagram showing an example of a series of images for evaluation displayed on a display screen;

fig. 18 is a diagram showing an example of a series of images for evaluation displayed on a display screen;

fig. 19 is a diagram showing an example of a series of images for evaluation displayed on a display screen;

fig. 20 is a diagram showing an example of a series of images for evaluation displayed on a display screen;

fig. 21 is a diagram showing an example of a series of images for evaluation displayed on a display screen;

fig. 22 is a diagram showing an example of a series of images for evaluation displayed on a display screen;

fig. 23 is a diagram showing an example of a series of images for evaluation displayed on a display screen;

fig. 24 is a flowchart showing a processing flow of an evaluation method of another example;

fig. 25 is a flowchart showing a flow of processing in the memory instruction processing;

fig. 26 is a flowchart showing a flow of processing in the memory processing;

fig. 27 is a flowchart showing a flow of processing in the answer processing;

fig. 28 is a diagram showing an example of a series of images for evaluation displayed on a display screen;

fig. 29 is a diagram showing an example of a series of images for evaluation displayed on a display screen;

fig. 30 is a diagram showing an example of a series of images for evaluation displayed on a display screen.

Detailed Description

Embodiments of an evaluation device, an evaluation method, and an evaluation program according to the present invention will be described below with reference to the drawings. The present invention is not limited to these embodiments. The constituent elements in the following embodiments include elements that can be easily replaced by those skilled in the art, or elements that are substantially the same.

In the following description, a three-dimensional global coordinate system is set to describe the positional relationship of each part. A direction parallel to a first axis of a predetermined plane is defined as the X-axis direction, a direction parallel to a second axis of the predetermined plane orthogonal to the first axis is defined as the Y-axis direction, and a direction parallel to a third axis orthogonal to both the first axis and the second axis is defined as the Z-axis direction. The predetermined plane includes the XY plane.

(Sight line detection device)

Fig. 1 is a perspective view schematically showing an example of a line-of-sight detection device 100 according to the first embodiment. The line-of-sight detection device 100 is used as an evaluation device for evaluating cognitive dysfunction and brain dysfunction such as dementia. As shown in fig. 1, the line-of-sight detection device 100 includes a display device 101, a stereo camera device 102, and an illumination device 103.

The display device 101 includes a flat panel display such as a Liquid Crystal Display (LCD) or an organic EL display (OLED). In the present embodiment, the display device 101 has a display screen 101S. The display screen 101S displays an image. In the present embodiment, the display screen 101S displays, for example, an index for evaluating the visual function of the subject. The display screen 101S is substantially parallel to the XY plane. The X-axis direction is the left-right direction of the display screen 101S, the Y-axis direction is the up-down direction of the display screen 101S, and the Z-axis direction is the depth direction orthogonal to the display screen 101S.

The stereo camera device 102 has a first camera 102A and a second camera 102B. The stereo camera device 102 is disposed below the display screen 101S of the display device 101. The first camera 102A and the second camera 102B are arranged in the X-axis direction. The first camera 102A is disposed in the -X direction relative to the second camera 102B. The first camera 102A and the second camera 102B each include an infrared camera having an optical system capable of transmitting near-infrared light with a wavelength of, for example, 850 [nm], and an image pickup element capable of receiving that near-infrared light.

The illumination device 103 includes a first light source 103A and a second light source 103B. The illumination device 103 is disposed below the display screen 101S of the display device 101. The first light source 103A and the second light source 103B are arranged in the X-axis direction. The first light source 103A is disposed in the -X direction relative to the first camera 102A. The second light source 103B is disposed in the +X direction relative to the second camera 102B. The first light source 103A and the second light source 103B each include an LED (Light Emitting Diode) light source, and can emit near-infrared light with a wavelength of, for example, 850 [nm]. The first light source 103A and the second light source 103B may also be disposed between the first camera 102A and the second camera 102B.

The illumination device 103 emits near-infrared light as detection light to illuminate the eyeball 111 of the subject. The stereo camera device 102 captures an image of a part of the eyeball 111 (hereinafter collectively referred to as the "eyeball") with the second camera 102B when the detection light emitted from the first light source 103A irradiates the eyeball 111, and captures an image of the eyeball 111 with the first camera 102A when the detection light emitted from the second light source 103B irradiates the eyeball 111.

A frame synchronization signal is output from at least one of the first camera 102A and the second camera 102B. The first light source 103A and the second light source 103B emit detection light based on the frame synchronization signal. The first camera 102A captures image data of the eyeball 111 when the detection light emitted from the second light source 103B is irradiated onto the eyeball 111. The second camera 102B captures image data of the eyeball 111 when the detection light emitted from the first light source 103A is irradiated onto the eyeball 111.

When the detection light is irradiated onto the eyeball 111, a part of the detection light is reflected by the pupil 112, and the light from the pupil 112 is incident on the stereo camera device 102. When the detection light is irradiated to the eyeball 111, a cornea reflection image 113 that is a virtual image of the cornea is formed on the eyeball 111, and the light from the cornea reflection image 113 is incident on the stereo camera device 102.

By appropriately setting the relative positions of the first camera 102A and the second camera 102B with respect to the first light source 103A and the second light source 103B, the intensity of the light entering the stereo camera device 102 from the pupil 112 becomes low, and the intensity of the light entering the stereo camera device 102 from the corneal reflection image 113 becomes high. That is, the image of the pupil 112 captured by the stereo camera device 102 has low luminance, and the image of the corneal reflection image 113 has high luminance. The stereo camera device 102 can detect the position of the pupil 112 and the position of the corneal reflection image 113 based on the luminance of the captured images.
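For illustration only, the following minimal sketch (not the device's actual implementation) approximates the pupil center as the centroid of the darkest pixels and the corneal-reflection center as the centroid of the brightest pixels in a grayscale eye image; the threshold values and the function name are assumptions made for this example.

```python
import numpy as np

def detect_centers(eye_image: np.ndarray, dark_thresh: int = 40, bright_thresh: int = 230):
    """Estimate pupil and corneal-reflection centers from a grayscale eye image.

    The pupil appears as a low-luminance region and the corneal reflection as a
    high-luminance region, so each center is approximated by the centroid of the
    pixels below/above a threshold.
    """
    dark = eye_image <= dark_thresh      # candidate pupil pixels
    bright = eye_image >= bright_thresh  # candidate corneal-reflection pixels

    def centroid(mask: np.ndarray):
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            return None  # detection failed for this frame
        return float(xs.mean()), float(ys.mean())

    return centroid(dark), centroid(bright)
```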

Fig. 2 is a diagram showing an example of the hardware configuration of the line-of-sight detection device 100 according to the present embodiment. As shown in fig. 2, the line of sight detection apparatus 100 includes a display apparatus 101, a stereoscopic camera apparatus 102, an illumination apparatus 103, a computer system 20, an input/output interface apparatus 30, a drive circuit 40, an output apparatus 50, and an input apparatus 60.

The computer system 20, the drive circuit 40, the output device 50, and the input device 60 perform data communication via the input/output interface device 30. The computer system 20 includes an arithmetic processing device 20A and a storage device 20B. The arithmetic Processing Unit 20A includes a microprocessor such as a CPU (Central Processing Unit). The storage device 20B includes memories or registers such as a ROM (read only memory) and a RAM (random access memory). The arithmetic processing device 20A performs arithmetic processing in accordance with the computer program 20C stored in the storage device 20B.

The drive circuit 40 generates a drive signal and outputs the drive signal to the display device 101, the stereoscopic camera device 102, and the illumination device 103. The drive circuit 40 supplies the image data of the eyeball 111 captured by the stereo camera device 102 to the computer system 20 via the input/output interface device 30.

The output device 50 includes a display device such as a flat panel display. The output device 50 may also include a printing device. The input device 60 generates input data when operated. The input device 60 includes a keyboard or a mouse for a computer system. The input device 60 may also include a touch sensor provided on the display screen of the output device 50 serving as a display device.

In this embodiment, the display device 101 and the computer system 20 are independent devices. In addition, the display device 101 and the computer system 20 may be integrated. For example, when the line of sight detection apparatus 100 includes a tablet-type personal computer, the computer system 20, the input/output interface apparatus 30, the drive circuit 40, and the display apparatus 101 may be mounted on the tablet-type personal computer.

Fig. 3 is a functional block diagram showing an example of the line of sight detecting apparatus 100 according to the present embodiment. As shown in fig. 3, the input/output interface device 30 includes an input/output unit 302. The drive circuit 40 includes: a display device driving unit 402 that generates a driving signal for driving the display device 101 and outputs the driving signal to the display device 101; a first camera input/output section 404A that generates a drive signal for driving the first camera 102A and outputs the drive signal to the first camera 102A; a second camera input/output unit 404B that generates a drive signal for driving the second camera 102B and outputs the drive signal to the second camera 102B; and a light source driving unit 406 that generates a driving signal for driving the first light source 103A and the second light source 103B and outputs the generated driving signal to the first light source 103A and the second light source 103B. In addition, the first camera input/output unit 404A supplies the image data of the eyeball 111 captured by the first camera 102A to the computer system 20 via the input/output unit 302. The second camera input/output unit 404B supplies the image data of the eyeball 111 captured by the second camera 102B to the computer system 20 via the input/output unit 302.

The computer system 20 controls the line-of-sight detection device 100. The computer system 20 includes a display control unit 202, a light source control unit 204, an image data acquisition unit 206, an input data acquisition unit 208, a position detection unit 210, a curvature center calculation unit 212, a gaze point detection unit 214, an area setting unit 216, a determination unit 218, a calculation unit 220, a storage unit 222, an evaluation unit 224, and an output control unit 226. The functions of the computer system 20 are implemented by the arithmetic processing device 20A and the storage device 20B.

The display control unit 202 performs a display operation including a first display operation of displaying the specific object on the display screen 101S and a second display operation of displaying the specific object and a plurality of comparison objects different from the specific object on the display screen 101S after performing the first display operation. The specific object is an object for causing the subject to memorize. The plurality of comparison objects are objects displayed on the display 101S in parallel with the specific object so that the subject finds the specific object. The display control unit 202 may display a display for instructing the subject to remember the specific object displayed in the first display operation on the display screen 101S. Further, the display control unit 202 may display, on the display screen 101S, a display for instructing the subject to view the specific object from among the specific object and the plurality of comparison objects displayed in the second display operation.

The light source control section 204 controls the light source driving unit 406 to control the operation states of the first light source 103A and the second light source 103B. The light source control unit 204 controls the first light source 103A and the second light source 103B so that the first light source 103A and the second light source 103B emit detection light at different timings.

The image data acquisition unit 206 acquires, via the input/output unit 302, the image data of the subject's eyeball 111 captured by the stereo camera device 102, which includes the first camera 102A and the second camera 102B.

The input data acquisition unit 208 acquires input data generated by operating the input device 60 from the input device 60 via the input/output unit 302.

The position detection unit 210 detects position data of the pupil center from the image data of the eyeball 111 acquired by the image data acquisition unit 206. The position detection unit 210 detects position data of the corneal reflection center from the image data of the eyeball 111 acquired by the image data acquisition unit 206. The pupil center is the center of the pupil 112. The corneal reflection center is the center of the corneal reflection image 113. The position detection unit 210 detects position data of the pupil center and position data of the corneal reflection center for each of the left and right eyeballs 111 of the subject.

The curvature center calculating unit 212 calculates position data of the corneal curvature center of the eyeball 111 from the image data of the eyeball 111 acquired by the image data acquiring unit 206.

The gaze point detection unit 214 detects position data of the gaze point of the subject based on the image data of the eyeball 111 acquired by the image data acquisition unit 206. In the present embodiment, the position data of the gaze point is the position data of the intersection of the subject's line-of-sight vector and the display screen 101S of the display device 101, defined in the three-dimensional global coordinate system. The gaze point detection unit 214 detects the line-of-sight vectors of the subject's left and right eyeballs 111 based on the position data of the pupil center and the position data of the corneal curvature center obtained from the image data of the eyeballs 111. After detecting the line-of-sight vector, the gaze point detection unit 214 detects the position data of the gaze point, which indicates the intersection of the line-of-sight vector and the display screen 101S.
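Since the gaze point is defined as the intersection of the line-of-sight vector with the display screen, it can be obtained by a standard ray-plane intersection. The sketch below assumes the screen lies in the plane z = 0 of the global coordinate system described above and that the line-of-sight direction runs from the corneal curvature center through the pupil center; the names and conventions are illustrative, not taken from the device.

```python
import numpy as np

def gaze_point_on_screen(cornea_center: np.ndarray, pupil_center: np.ndarray,
                         screen_z: float = 0.0):
    """Intersect the line-of-sight ray with the display plane z = screen_z.

    The line-of-sight vector is taken as the direction from the corneal
    curvature center toward the pupil center, as described in the text.
    """
    direction = pupil_center - cornea_center
    if abs(direction[2]) < 1e-9:
        return None  # line of sight is parallel to the screen plane
    t = (screen_z - cornea_center[2]) / direction[2]
    return cornea_center + t * direction  # (x, y, screen_z) in global coordinates
```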

The area setting unit 216 sets a specific area corresponding to the specific object and a comparison area corresponding to each comparison object on the display screen 101S of the display device 101 during the display period in which the second display operation is performed.

The determination unit 218 determines whether or not the gaze point is present in the specific area and the comparison area based on the position data of the gaze point during the display period in which the second display operation is performed, and outputs determination data. The determination unit 218 determines whether or not the gaze point is present in the specific area and the comparison area at, for example, constant time intervals. The constant time interval may be, for example, the period of the frame synchronization signal output from the first camera 102A and the second camera 102B (for example, every 20 msec).

Based on the determination data of the determination unit 218, the calculation unit 220 calculates movement passage data (which may also be referred to as gaze point data) indicating the passage of movement of the gaze point during the display period. The movement passage data includes: arrival time data indicating the time from the start of the display period until the gaze point first reaches the specific region; movement number data indicating the number of times the position of the gaze point moves between the plurality of comparison regions before the gaze point first reaches the specific region; presence time data indicating the time during which the gaze point is present in the specific region or the comparison regions during the display period; and final region data indicating the region, among the specific region and the comparison regions, in which the gaze point is last present during the display period.

The calculation unit 220 includes a management timer for managing the playback time of the video and a detection timer T1 for measuring the time elapsed since the video was displayed on the display screen 101S. The calculation unit 220 also has a counter that counts the number of times the gaze point is determined to be present in the specific region.

The evaluation unit 224 obtains evaluation data of the subject based on the movement passage data. The evaluation data is data for evaluating whether or not the subject can gaze at the specific object displayed on the display screen 101S during the display operation.

The storage unit 222 stores the determination data, the movement passage data (presence time data, movement number data, final area data, arrival time data), and the evaluation data. The storage unit 222 also stores an evaluation program for causing a computer to execute: processing of displaying an image; processing of detecting the position of a gaze point of a subject observing the display screen; processing of performing a display operation including a first display operation of displaying a specific object on the display screen and, after the first display operation, a second display operation of displaying the specific object and a plurality of comparison objects different from the specific object on the display screen; processing of setting, on the display screen, a specific area corresponding to the specific object and comparison areas corresponding to the respective comparison objects; processing of determining, based on the position data of the gaze point, whether or not the gaze point is present in the specific area and the comparison areas during the display period in which the second display operation is performed, and outputting determination data; processing of calculating movement passage data indicating the passage of movement of the gaze point during the display period based on the determination data; processing of obtaining evaluation data of the subject based on the movement passage data; and processing of outputting the evaluation data.

The output control unit 226 outputs data to at least one of the display device 101 and the output device 50.

Next, an outline of the processing of the curvature center calculating unit 212 in the present embodiment will be described. The curvature center calculating unit 212 calculates position data of the corneal curvature center of the eyeball 111 based on the image data of the eyeball 111. Fig. 4 and 5 are schematic diagrams for explaining a method of calculating the positional data of the corneal center of curvature 110 according to the present embodiment. Fig. 4 shows an example in which the eyeball 111 is illuminated with one light source 103C. Fig. 5 shows an example in which the eyeball 111 is illuminated by the first light source 103A and the second light source 103B.

First, an example shown in fig. 4 will be explained. The light source 103C is disposed between the first camera 102A and the second camera 102B. The pupil center 112C is the center of the pupil 112. The corneal reflection center 113C is the center of the corneal reflection image 113. In fig. 4, the pupil center 112C represents the pupil center when the eyeball 111 is illuminated by one light source 103C. The corneal reflection center 113C represents a corneal reflection center when the eyeball 111 is illuminated by one light source 103C. The corneal reflection center 113C exists on a straight line connecting the light source 103C and the corneal center of curvature 110. The corneal reflection center 113C is located at a point intermediate between the corneal surface and the corneal center of curvature 110. The corneal radius of curvature 109 is the distance between the corneal surface and the corneal center of curvature 110. The position data of the corneal reflection center 113C is detected by the stereo camera device 102. The corneal center of curvature 110 exists on a straight line connecting the light source 103C and the corneal reflection center 113C. The curvature center calculating unit 212 calculates position data in which the distance from the corneal reflection center 113C on the straight line is a predetermined value as position data of the corneal curvature center 110. The predetermined value is a value predetermined from a normal value of the radius of curvature of the cornea, and is stored in the storage unit 222.

Next, an example shown in fig. 5 will be described. In the present embodiment, the first camera 102A and the second light source 103B, and the second camera 102B and the first light source 103A are disposed at positions that are bilaterally symmetrical with respect to a straight line passing through the middle position of the first camera 102A and the second camera 102B. The virtual light source 103V can be considered to exist at an intermediate position of the first camera 102A and the second camera 102B. The corneal reflection center 121 represents a corneal reflection center in an image of the eyeball 111 captured by the second camera 102B. The corneal reflection center 122 represents a corneal reflection center in an image of the eyeball 111 taken by the first camera 102A. The corneal reflection center 124 represents a corneal reflection center corresponding to the virtual light source 103V. The position data of the corneal reflection center 124 is calculated based on the position data of the corneal reflection center 121 and the position data of the corneal reflection center 122 photographed by the stereo camera device 102. The stereo camera device 102 detects position data of the corneal reflection center 121 and position data of the corneal reflection center 122 in a three-dimensional local coordinate system defined by the stereo camera device 102. The stereoscopic camera device 102 is subjected to camera calibration by a stereo calibration method in advance, and conversion parameters for converting the three-dimensional local coordinate system of the stereoscopic camera device 102 into the three-dimensional global coordinate system are calculated. The transformation parameters are stored in the storage unit 222. The curvature center calculation unit 212 converts the position data of the corneal reflection center 121 and the position data of the corneal reflection center 122 captured by the stereo camera device 102 into position data in the three-dimensional global coordinate system using the conversion parameters. The curvature center calculation unit 212 calculates the position data of the corneal reflection center 124 in the three-dimensional global coordinate system from the position data of the corneal reflection center 121 and the position data of the corneal reflection center 122 defined in the three-dimensional global coordinate system. The corneal center of curvature 110 is located on a line 123 connecting the virtual light source 103V and the corneal reflection center 124. The curvature center calculating unit 212 calculates position data in which the distance from the corneal reflection center 124 on the straight line 123 is a predetermined value as position data of the corneal curvature center 110. The predetermined value is a value predetermined from a normal value of the radius of curvature of the cornea, and is stored in the storage unit 222.
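For illustration, the step of placing the corneal curvature center at a predetermined distance from the corneal reflection center, on the straight line through the virtual light source, could be written as in the following sketch; the direction convention (away from the light source, toward the interior of the eye) and the function name are assumptions made for this example.

```python
import numpy as np

def corneal_curvature_center(virtual_light_source: np.ndarray,
                             corneal_reflection_center: np.ndarray,
                             predetermined_distance: float) -> np.ndarray:
    """Place the corneal curvature center on the line through the virtual light
    source and the corneal reflection center, at a predetermined distance from
    the reflection center (measured away from the light source)."""
    direction = corneal_reflection_center - virtual_light_source
    direction = direction / np.linalg.norm(direction)
    return corneal_reflection_center + predetermined_distance * direction
```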

As described above, even in the case where there are two light sources, the corneal center of curvature 110 is calculated by the same method as in the case where there is only one light source.

The corneal radius of curvature 109 is the distance between the corneal surface and the corneal center of curvature 110. Therefore, by calculating the position data of the corneal surface and the position data of the corneal center of curvature 110, the corneal radius of curvature 109 is calculated.

Next, an example of the line-of-sight detection method according to the present embodiment will be described. Fig. 6 is a schematic diagram for explaining an example of the calibration process according to the present embodiment. In the calibration process, the target position 130 is set so that the subject gazes at it. The target position 130 is defined in the three-dimensional global coordinate system. In the present embodiment, the target position 130 is set, for example, at the center position of the display screen 101S of the display device 101. The target position 130 may also be set at an end position of the display screen 101S. The output control unit 226 displays a target image at the set target position 130. The straight line 131 connects the virtual light source 103V and the corneal reflection center 113C. The straight line 132 connects the target position 130 and the pupil center 112C. The corneal curvature center 110 is the intersection of the straight line 131 and the straight line 132. The curvature center calculation unit 212 can calculate the position data of the corneal curvature center 110 based on the position data of the virtual light source 103V, the position data of the target position 130, the position data of the pupil center 112C, and the position data of the corneal reflection center 113C.
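Because two lines estimated from measurements rarely intersect exactly in three dimensions, the "intersection" of the straight line 131 and the straight line 132 would in practice be computed as the point closest to both lines. The following sketch shows one common way to do this, under the assumption that the intersection is resolved as the midpoint of the shortest connecting segment.

```python
import numpy as np

def closest_point_between_lines(p1, d1, p2, d2):
    """Return the midpoint of the shortest segment connecting two 3-D lines.

    Each line is given by a point p and a direction d. If line 131 (virtual
    light source -> corneal reflection center) and line 132 (target position ->
    pupil center) intersected exactly, this midpoint would be that intersection,
    i.e. the corneal curvature center.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None  # lines are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))
```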

Next, the gaze point detection process will be described. The gaze point detection process is performed after the calibration process. The gaze point detection unit 214 calculates the line-of-sight vector of the subject and the position data of the gaze point based on the image data of the eyeball 111. Fig. 7 is a schematic diagram for explaining an example of the gaze point detection process according to the present embodiment. In fig. 7, the gaze point 165 represents the gaze point obtained from a corneal curvature center calculated using a standard curvature radius value. The gaze point 166 represents the gaze point obtained from a corneal curvature center calculated using the distance 126 obtained by the calibration process. The pupil center 112C represents the pupil center calculated in the calibration process, and the corneal reflection center 113C represents the corneal reflection center calculated in the calibration process. The straight line 173 connects the virtual light source 103V and the corneal reflection center 113C. The corneal curvature center 110 is the position of the corneal curvature center calculated from a standard curvature radius value. The distance 126 is the distance between the pupil center 112C and the corneal curvature center 110 calculated in the calibration process. The corneal curvature center 110H represents the corrected corneal curvature center obtained by correcting the corneal curvature center 110 using the distance 126. The corneal curvature center 110H is determined from the facts that the corneal curvature center 110 lies on the straight line 173 and that the distance between the pupil center 112C and the corneal curvature center 110 is the distance 126. As a result, the line of sight 177 calculated using the standard curvature radius value is corrected to the line of sight 178, and the gaze point on the display screen 101S of the display device 101 is corrected from the gaze point 165 to the gaze point 166.

[ evaluation method ]

Next, the evaluation method according to the present embodiment will be described. In the evaluation method according to the present embodiment, the line-of-sight detection device 100 described above is used to evaluate cognitive dysfunction and brain dysfunction, such as dementia, by examining the visual performance of the subject.

Fig. 8 is a diagram showing an example of the instruction information I1 displayed on the display screen 101S in the evaluation method of the present embodiment. As shown in fig. 8, the display control unit 202 displays, on the display screen 101S, instruction information I1 that instructs the subject to memorize the specific object to be displayed next (M1; see fig. 9).

After the instruction information I1 is displayed on the display 101S, the display control unit 202 displays the specific object on the display 101S as the first display operation. Fig. 9 is a diagram showing one example of the specific object M1 displayed on the display screen 101S. As shown in fig. 9, the display control unit 202 displays, for example, the specific object M1 in which a circular shape and a triangular shape are combined in the first display operation, but the present invention is not limited thereto. The display controller 202 displays the specific object M1 on the display screen 101S for a predetermined time (for example, several seconds) to cause the subject to look at the specific object M1 and memorize the specific object M1.

Fig. 10 is a diagram showing one example of the instruction information I2 displayed on the display screen 101S. As shown in fig. 10, after the first display operation is performed for a predetermined time, the display controller 202 displays instruction information I2 on the display screen 101S, wherein the instruction information I2 is used to instruct the subject to look at the specific object M1 on the screen to be displayed next.

Fig. 11 is a diagram showing an example of a case where a plurality of objects are displayed on the display screen 101S. After the display controller 202 displays the instruction information I2 on the display screen 101S, as a second display operation, the specific object M1 and the plurality of comparison objects M2 to M4 are displayed on the display screen 101S as shown in fig. 11.

The comparison objects M2 to M4 may have a shape similar to the specific object M1 or may have a shape dissimilar to the specific object M1. In the example shown in fig. 11, the comparison object M2 has a shape in which a trapezoid and a circle are combined, the comparison object M3 has a shape in which a square and a circle are combined, and the comparison object M4 has a shape in which a circle and a regular hexagon are combined. The display controller 202 causes the subject to find the specific object M1 and to watch the found specific object M1 by displaying a plurality of objects including the specific object M1 and the comparison objects M2 to M4 on the display screen 101S.

Fig. 11 also shows, for reference, an example of the gaze point P obtained as a measurement result; the gaze point P is not actually displayed on the display screen 101S. The position data of the gaze point is detected, for example, at the period of the frame synchronization signal output from the first camera 102A and the second camera 102B (for example, every 20 msec). The first camera 102A and the second camera 102B capture images synchronously.

During the display period in which the second display operation is performed, the area setting unit 216 sets the specific area a1 corresponding to the specific object M1. The area setting unit 216 sets comparison areas a2 to a4 corresponding to the comparison objects M2 to M4, respectively. The specific region a1 and the comparison regions a2 to a4 are not displayed on the display 101S.

The area setting unit 216 sets the specific area a1, for example, as a rectangular range including the specific object M1. Similarly, the area setting unit 216 sets the comparison areas a2 to a4, for example, as rectangular ranges including the comparison objects M2 to M4, respectively. The shapes of the specific area a1 and the comparison areas a2 to a4 are not limited to rectangles, and may be other shapes such as circles, ellipses, or polygons.
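As a concrete illustration of how rectangular areas might be represented and how the determination unit's containment check could work, the following sketch defines axis-aligned regions and tests which one, if any, contains a detected gaze point; the coordinates and region layout are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned rectangular region on the display screen (pixel coordinates)."""
    name: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

# Illustrative layout: one specific region A1 and comparison regions A2-A4.
regions = [
    Region("A1", 100, 100, 400, 300),
    Region("A2", 500, 100, 800, 300),
    Region("A3", 100, 400, 400, 600),
    Region("A4", 500, 400, 800, 600),
]

def region_of(gaze_x: float, gaze_y: float):
    """Return the name of the region containing the gaze point, or None."""
    for region in regions:
        if region.contains(gaze_x, gaze_y):
            return region.name
    return None
```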

Symptoms of cognitive dysfunction and brain dysfunction are known to affect memory. If the subject does not have cognitive dysfunction or brain dysfunction, in the second display operation the subject looks at the comparison objects M2 to M4 displayed on the display screen 101S one by one, determines that they differ from the specific object M1 memorized in the first display operation, and can finally find and gaze at the specific object M1. On the other hand, if the subject has cognitive dysfunction or brain dysfunction, the subject may be unable to memorize the specific object M1, or may forget it immediately even after memorizing it. In that case, the comparison described above cannot be made, and the subject may be unable to gaze at the specific object M1.

Therefore, the subject can be evaluated by, for example, the following procedure. First, as the first display operation, the specific object M1 is displayed on the display screen 101S and the subject memorizes it. Then, as the second display operation, the specific object M1 and the plurality of comparison objects M2 to M4 are displayed on the display screen 101S, and the subject is instructed to direct the gaze point at the specific object M1. In this case, the subject can be evaluated from viewpoints such as whether the subject looks at the plurality of comparison objects M2 to M4 one by one, whether the subject can finally reach the specific object M1, which is the correct answer, how long the subject takes to reach the specific object M1, and whether the subject can keep gazing at the specific object M1.

In the second display operation, when the position data of the gaze point P of the subject is detected, the determination unit 218 determines whether or not the gaze point of the subject is present in the specific region a1 and the plurality of comparison regions a2 to a4, and outputs the determination data.

The calculation unit 220 calculates movement passage data indicating the passage of movement of the gaze point P during the display period, based on the determination data. The calculation unit 220 calculates the presence time data, the movement number data, the final area data, and the arrival time data as movement passage data.

The presence time data indicates the presence time during which the gaze point P is present in the specific area a1. In the present embodiment, the larger the number of times the determination unit 218 determines that the gaze point P is present in the specific area a1, the longer the presence time of the gaze point P in the specific area a1 can be estimated to be. Therefore, the presence time data can be the number of times the determination unit 218 determines that the gaze point is present in the specific area a1. That is, the calculation unit 220 can use the count value CNTA of the counter as the presence time data.

The movement number data indicates the number of times the position of the gaze point P moves between the plurality of comparison areas a2 to a4 before the gaze point P first reaches the specific area a1. Therefore, the calculation unit 220 can count how many times the gaze point P moves between the specific area a1 and the comparison areas a2 to a4, and can use the count obtained up to the point at which the gaze point P reaches the specific area a1 as the movement number data.

The final area data indicates the area, among the specific area a1 and the comparison areas a2 to a4, in which the gaze point P is last present during the display period, that is, the area at which the subject gazes last as the answer. By updating the area in which the gaze point P is present every time the gaze point P is detected, the calculation unit 220 can use the detection result at the end of the display period as the final area data.

The arrival time data indicates the time from the start of the display period until the gaze point first reaches the specific area a1. Therefore, the calculation unit 220 measures the elapsed time from the start of the display period with the timer T1, sets the flag value to 1 when the gaze point first reaches the specific area a1, and reads the measurement value of the timer T1 at that moment; this measurement value can be used as the arrival time data.
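The bookkeeping described in the preceding paragraphs (presence-time counter CNTA, movement count, final area, and arrival time controlled by a flag) can be summarized as a per-sample update, as in the following sketch. This is a simplified reading of the text, not the device's actual code; the class name, the 20 msec sample interval, and the area labels are assumptions.

```python
from dataclasses import dataclass, field

SAMPLE_INTERVAL = 0.020  # seconds per gaze sample (frame-sync period, assumed 20 msec)

@dataclass
class MovementPassageData:
    cnt_a: int = 0              # samples with the gaze point in the specific area A1 (presence time data)
    move_count: int = 0         # moves between areas before first reaching A1 (movement number data)
    arrival_time: float = None  # seconds from display start until first arrival at A1 (arrival time data)
    final_region: str = None    # area containing the gaze point at the last sample (final area data)
    reached: bool = False       # flag value: True once A1 has been reached
    _last_region: str = field(default=None, repr=False)

    def update(self, region, elapsed):
        """Process one gaze sample; `region` is 'A1'..'A4' or None, `elapsed` is the timer T1 value."""
        if region is None:
            return
        if region == "A1":
            self.cnt_a += 1
            if not self.reached:
                self.arrival_time = elapsed
                self.reached = True
        elif not self.reached and self._last_region not in (None, region):
            self.move_count += 1  # gaze moved to a different area before reaching A1
        self.final_region = region
        self._last_region = region

    @property
    def presence_seconds(self):
        """Presence time in seconds, reconstructed from the counter CNTA."""
        return self.cnt_a * SAMPLE_INTERVAL
```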

In the present embodiment, the evaluation unit 224 obtains evaluation data from the presence time data, the number of times of movement data, the final area data, and the arrival time data.

Here, let the data value of the final area data be D1, the data value of the presence time data be D2, the data value of the arrival time data be D3, and the data value of the movement number data be D4. The data value D1 of the final area data is 1 if the subject's final gaze point P is in the specific area a1 (i.e., the answer is correct) and 0 if it is not in the specific area a1 (i.e., the answer is incorrect). The data value D2 of the presence time data is the number of seconds during which the gaze point P is present in the specific area a1; an upper limit shorter than the display period may be set for D2. The data value D3 of the arrival time data is the reciprocal of the arrival time, for example D3 = (1 / arrival time) / 10, where 10 is a coefficient chosen so that, with a minimum arrival time of 0.1 second, the arrival-time evaluation value is 1 or less. The count value is used directly as the data value D4 of the movement number data; an upper limit may be set for D4 as appropriate.

In this case, the evaluation value ANS is expressed as ANS = D1·K1 + D2·K2 + D3·K3 + D4·K4, where K1 to K4 are weighting constants. The constants K1 to K4 can be set as appropriate.
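A minimal sketch of this evaluation-value computation is shown below; the weighting constants, the upper limits on D2 and D4, and the handling of a specific area that is never reached are illustrative assumptions. The resulting value would then be compared with a predetermined threshold, as described next.

```python
def evaluation_value(final_in_a1, presence_seconds, arrival_seconds, move_count,
                     k1=1.0, k2=1.0, k3=1.0, k4=1.0, d2_max=10.0, d4_max=10.0):
    """Compute ANS = D1*K1 + D2*K2 + D3*K3 + D4*K4 following the definitions above."""
    d1 = 1.0 if final_in_a1 else 0.0                                 # final area data
    d2 = min(presence_seconds, d2_max)                               # presence time data (capped)
    d3 = (1.0 / arrival_seconds) / 10.0 if arrival_seconds else 0.0  # arrival time data
    d4 = min(move_count, d4_max)                                     # movement number data (capped)
    return d1 * k1 + d2 * k2 + d3 * k3 + d4 * k4
```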

The evaluation value ANS given by the above expression becomes large when the data value D1 of the final area data is 1, when the data value D2 of the presence time data is large, when the data value D3 of the arrival time data is large, and when the data value D4 of the movement number data is large. That is, the evaluation value ANS becomes larger when the final gaze point P is in the specific area a1, the presence time of the gaze point P in the specific area a1 is longer, the arrival time from the start of the display period until the gaze point P reaches the specific area a1 is shorter, and the number of movements of the gaze point P between the areas is larger.

Conversely, the evaluation value ANS becomes small when the data value D1 of the final area data is 0, when the data value D2 of the presence time data is small, when the data value D3 of the arrival time data is small, and when the data value D4 of the movement number data is small. That is, the evaluation value ANS becomes smaller when the final gaze point P is not in the specific area a1, the presence time of the gaze point P in the specific area a1 is shorter, the arrival time from the start of the display period until the gaze point P reaches the specific area a1 is longer, and the number of movements of the gaze point P between the areas is smaller.

Therefore, the evaluation unit 224 can determine the evaluation data by determining whether or not the evaluation value ANS is equal to or greater than a predetermined value. For example, when the evaluation value ANS is equal to or greater than a predetermined value, it can be evaluated that the possibility that the subject is a person with cognitive dysfunction or brain dysfunction is low. In addition, when the evaluation value ANS is smaller than the predetermined value, it can be evaluated that the subject is highly likely to be a person with cognitive dysfunction or brain dysfunction.

The evaluation unit 224 may also store the values of the evaluation value ANS in the storage unit 222. For example, evaluation values ANS for the same subject may be accumulated and stored, and the evaluation may be performed by comparison with past evaluation values. For example, when the current evaluation value ANS is higher than a past evaluation value, it can be evaluated that the brain function has improved since the previous evaluation. When the cumulative value of the evaluation values ANS gradually increases, it can be evaluated that the brain function is gradually improving.

The evaluation unit 224 may evaluate the presence time data, the movement number data, the final area data, and the arrival time data independently or in combination. For example, when the gaze point P happens to reach the specific area a1 while the subject is looking around at the many objects, the data value D4 of the movement number data becomes small. In this case, the evaluation may be performed in combination with the data value D2 of the presence time data described above. For example, even if the number of movements is small, it can be evaluated that the subject was able to gaze at the specific area a1, which is the correct answer, if the presence time is long. When the number of movements is small and the presence time is also short, it can be evaluated that the gaze point P merely happened to pass through the specific area a1.

Further, when the number of movements is small and the final area is the specific area a1, it can be evaluated, for example, that the subject reached the specific area a1, which is the correct answer, with few movements of the gaze point. On the other hand, when the number of movements is small and the final area is not the specific area a1, it can be evaluated, for example, that the gaze point P merely happened to pass through the specific area a1.

In the present embodiment, when the evaluation unit 224 outputs the evaluation data, the output control unit 226 can, based on the evaluation data, cause the output device 50 to output, for example, character data such as "the subject is considered unlikely to have cognitive dysfunction or brain dysfunction" or character data such as "the subject is considered likely to have cognitive dysfunction or brain dysfunction". In addition, when the evaluation value ANS of the same subject becomes higher than a past evaluation value ANS, the output control unit 226 can cause the output device 50 to output character data such as "the brain function has improved".

Fig. 12 is a diagram showing an example of a case where a specific object and the instruction information I3 are simultaneously displayed on the display screen 101S. Fig. 13 is a diagram showing another example of a case where a specific object and a plurality of comparison objects are displayed on the display screen 101S. As shown in fig. 12, in the first display operation the display control unit 202 may display the specific object M5 on the display screen 101S and, at the same time, display instruction information I3 instructing the subject to gaze at the same figure as the specific object M5. After the first display operation, the display control unit 202 can display the specific object M5 and the comparison objects M6 and M7 in the second display operation, as shown in fig. 13. In this case, the display control unit 202 may display the specific object M5 and the comparison objects M6 and M7 as figures composed of the same shapes (for example, pentagons). By displaying the specific object M5 and the comparison objects M6 and M7 as similar figures in this way, the pattern recognition function of the subject can be evaluated. The area setting unit 216 can set a specific area a5 corresponding to the specific object M5 and comparison areas a6 and a7 corresponding to the comparison objects M6 and M7. Displaying the specific object M5 and the instruction information I3 on the display screen 101S simultaneously in this way can shorten the examination time.

Fig. 14 is a diagram showing another example of a case where the specific object and the instruction information I4 are displayed on the display screen 101S. Fig. 15 is a diagram showing another example of a case where a specific object and a plurality of comparison objects are displayed on the display screen 101S. As shown in fig. 14, the display control unit 202 can display the face of a person as the specific object M8 in the first display operation. In this case, the display control unit 202 may display the specific object M8 and the instruction information I4 at the same time. As the instruction information I4 shown in fig. 14, content instructing the subject to remember the person shown as the specific object M8 may be used.

After the first display operation, in the second display operation, as shown in fig. 15, the display control unit 202 can display the specific object M8 and the comparison objects M9 to M11, which are faces of persons different from the specific object M8. The area setting unit 216 can set a specific area a8 corresponding to the specific object M8 and comparison areas a9 to a11 corresponding to the comparison objects M9 to M11. As shown in fig. 15, the display control unit 202 may simultaneously display the specific object M8, the comparison objects M9 to M11, and the instruction information I5 in the second display operation. In this way, the display control unit 202 may display instruction information in each of the first display operation and the second display operation. This can further shorten the examination time.

Next, an example of the evaluation method according to the present embodiment will be described with reference to fig. 16. Fig. 16 is a flowchart showing an example of the evaluation method according to the present embodiment. In the present embodiment, the display control unit 202 starts video playback (step S101). After the waiting time until the evaluation video portion is displayed on the display screen 101S has elapsed (step S102), the timer T1 is reset (step S103), the count value CNTA of the counter is reset (step S104), and the flag value is set to 0 (step S105).

The gaze point detecting unit 214 detects position data of the gaze point of the subject on the display screen 101S of the display device 101 at predetermined sampling intervals (for example, 20[ msec ]) in a state where the subject is viewing the video image displayed on the display device 101 (step S106). When the position data is detected (no in step S107), the determination unit 218 determines the region where the gaze point P exists based on the position data (step S108).

If it is determined that the gaze point P is present in the specific area a1 (yes in step S109), the arithmetic unit 220 determines whether or not the flag value is 1, that is, whether or not this is the first time the gaze point P has reached the specific area a1 (1: reached, 0: not reached) (step S110). When the flag value is 1 (yes in step S110), the arithmetic unit 220 skips the following steps S111 to S113 and performs the processing of step S114, which will be described later.

When the flag value is not 1, that is, when the gaze point P has reached the specific area a1 for the first time (no in step S110), the arithmetic unit 220 extracts the measurement result of the timer T1 as the arrival time data (step S111). The calculation unit 220 also stores, in the storage unit 222, the movement number data indicating how many times the gaze point P moved between the areas before first reaching the specific area a1 (step S112). After that, the arithmetic unit 220 changes the flag value to 1 (step S113).

Next, the calculation unit 220 determines whether or not the final area, which is the area where the gaze point P exists in the latest detection, is the specific area a1 (step S114). When determining that the final area is the specific area a1 (yes in step S114), the arithmetic unit 220 skips the following steps S115 and S116 and performs the processing of step S117 described later. When determining that the final area is not the specific area a1 (no in step S114), the arithmetic unit 220 performs +1 on the cumulative number of times indicating how many times the point of regard P has moved between the areas (step S115), and changes the final area to the specific area a1 (step S116). Further, the arithmetic unit 220 performs +1 on the count value CNTA indicating the presence time data in the specific area a1 (step S117). Thereafter, the arithmetic unit 220 performs the processing of step S130 and subsequent steps, which will be described later.

When it is determined that the gaze point P is not present in the specific area a1 (no in step S109), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area a2 (step S118). If it is determined that the gaze point P is present in the comparison area a2 (yes in step S118), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P was present in the latest detection, is the comparison area a2 (step S119). When determining that the final area is the comparison area a2 (yes in step S119), the arithmetic unit 220 skips the following step S120 and step S121 and performs the processing of step S130 described later. When determining that the final area is not the comparison area a2 (no in step S119), the arithmetic unit 220 performs +1 on the cumulative number of times indicating how many times the gaze point P has moved between the areas (step S120), and changes the final area to the comparison area a2 (step S121). Thereafter, the arithmetic unit 220 performs the processing of step S130 and subsequent steps, which will be described later.

When it is determined that the gaze point P is not present in the comparison area a2 (no in step S118), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area a3 (step S122). If it is determined that the gaze point P is present in the comparison area a3 (yes in step S122), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P was present in the latest detection, is the comparison area a3 (step S123). When determining that the final area is the comparison area a3 (yes in step S123), the arithmetic unit 220 skips the following step S124 and step S125 and performs the processing of step S130 described later. When determining that the final area is not the comparison area a3 (no in step S123), the arithmetic unit 220 performs +1 on the cumulative number of times indicating how many times the gaze point P has moved between the areas (step S124), and changes the final area to the comparison area a3 (step S125). Thereafter, the arithmetic unit 220 performs the processing of step S130 and subsequent steps, which will be described later.

When it is determined that the gaze point P is not present in the comparison area a3 (no in step S122), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area a4 (step S126). If it is determined that the gaze point P is present in the comparison area a4 (yes in step S126), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P was present in the latest detection, is the comparison area a4 (step S127). When determining that the final area is the comparison area a4 (yes in step S127), the arithmetic unit 220 skips the following step S128 and step S129 and performs the processing of step S130 described later. When determining that the final area is not the comparison area a4 (no in step S127), the arithmetic unit 220 performs +1 on the cumulative number of times indicating how many times the gaze point P has moved between the areas (step S128), and changes the final area to the comparison area a4 (step S129). Thereafter, the arithmetic unit 220 performs the processing of step S130 and subsequent steps, which will be described later.

Then, the arithmetic unit 220 determines whether or not the video reproduction completion time has been reached based on the detection result of the timer T1 (step S130). If the arithmetic unit 220 determines that the video reproduction completion time has not been reached (no in step S130), the processing from step S106 onward is repeated.

When the arithmetic unit 220 determines that the video reproduction completion time has been reached (yes in step S130), the display control unit 202 stops the video reproduction (step S131). After stopping the reproduction of the video, the evaluation unit 224 calculates an evaluation value ANS from the presence time data, the number of movements data, the final area data, and the arrival time data obtained from the above processing results (step S132), and obtains evaluation data from the evaluation value ANS. Then, the output control unit 226 outputs the evaluation data obtained by the evaluation unit 224 (step S133).
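For illustration only, the per-sample processing of steps S106 to S130 could be organized as in the following Python sketch; the rectangle representation of the areas, the sampling interval, and all names used here are assumptions made for this sketch and not the actual implementation of the evaluation device 100.

```python
# Illustrative sketch of steps S106-S130 (not the actual device code).
# Areas are assumed to be axis-aligned rectangles (left, top, right, bottom).
SAMPLING_INTERVAL = 0.02  # 20 msec, as in the sampling example above


def point_in(area, x, y):
    left, top, right, bottom = area
    return left <= x <= right and top <= y <= bottom


def process_samples(samples, specific_area, comparison_areas):
    """samples: list of (x, y) gaze positions, or None where detection failed."""
    flag = 0                  # becomes 1 once the gaze point first reaches the specific area
    arrival_time = None       # time until the gaze point first reaches the specific area
    moves_before_arrival = 0  # movement number data
    total_moves = 0           # cumulative number of moves between areas
    presence_time = 0.0       # presence time in the specific area (CNTA)
    final_area = None         # area in which the gaze point was seen last

    for i, sample in enumerate(samples):
        if sample is None:
            continue  # position data could not be detected for this sample
        x, y = sample
        if point_in(specific_area, x, y):
            if flag == 0:  # first arrival at the specific area
                arrival_time = i * SAMPLING_INTERVAL
                moves_before_arrival = total_moves
                flag = 1
            if final_area != "specific":
                total_moves += 1
                final_area = "specific"
            presence_time += SAMPLING_INTERVAL
        else:
            for name, area in comparison_areas.items():
                if point_in(area, x, y):
                    if final_area != name:
                        total_moves += 1
                        final_area = name
                    break

    return arrival_time, moves_before_arrival, presence_time, final_area
```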

As described above, the evaluation device according to the present embodiment includes: a gaze point detection unit 214 that detects the position of a gaze point of a subject who observes an image displayed on the display screen 101S; a display control unit 202 that performs a display operation including a first display operation of displaying a specific object M1 on the display screen 101S and a second display operation of displaying a specific object M1 and comparison objects M2 to M4 different from the specific object M1 on the display screen 101S after the first display operation is performed; an area setting unit 216 that sets a specific area a1 corresponding to a specific object M1 and comparison areas a2 to a4 corresponding to comparison objects M2 to M4 on the display screen 101S; a determination unit 218 configured to determine whether or not the gaze point P exists in the specific region a1 and the comparison regions a2 to a4 during the second display operation, based on the position data of the gaze point P; a calculation unit 220 that calculates movement passage data indicating the passage of movement of the gaze point P during the display period, based on the determination result; the evaluation unit 224 obtains evaluation data of the subject based on the movement passage data.

In addition, the evaluation method according to the present embodiment includes: detecting a position of a gaze point of a subject who observes an image displayed on the display screen 101S; performing a display operation including a first display operation of displaying the specific object M1 on the display screen 101S and a second display operation of displaying the specific object M1 and the comparison objects M2 to M4 different from the specific object M1 on the display screen 101S after the first display operation is performed; setting a specific region a1 corresponding to a specific object M1 and comparison regions a2 to a4 corresponding to comparison objects M2 to M4 on the display screen 101S; determining whether the gaze point P exists in the specific region a1 and the comparison regions a2 to a4 during the second display operation, respectively, based on the position data of the gaze point P; based on the determination result, movement passage data indicating the passage of movement of the gaze point P during display is calculated; based on the movement passing data, evaluation data of the subject is obtained.

In addition, the evaluation program according to the present embodiment causes a computer to execute: detecting a position of a gaze point of a subject who observes an image displayed on the display screen 101S; performing a display operation including a first display operation of displaying the specific object M1 on the display screen 101S and a second display operation of displaying the specific object M1 and the comparison objects M2 to M4 different from the specific object M1 on the display screen 101S after the first display operation is performed; setting a specific region a1 corresponding to a specific object M1 and comparison regions a2 to a4 corresponding to comparison objects M2 to M4 on the display screen 101S; determining whether the gaze point P exists in the specific region a1 and the comparison regions a2 to a4 during the second display operation, respectively, based on the position data of the gaze point P; based on the determination result, movement passage data indicating the passage of movement of the gaze point P during display is calculated; based on the movement passing data, evaluation data of the subject is obtained.

According to the present embodiment, since the evaluation data of the subject can be obtained from the movement passage of the gaze point during the display period, it is possible to reduce the chance and to evaluate the memory of the subject with high accuracy. Thus, the evaluation device 100 can evaluate the subject with high accuracy.

In the evaluation device 100 according to the present embodiment, the gaze point data includes at least one of: arrival time data indicating the time from the start time of the display period until the gaze point P first arrives at the specific area A1; movement number data indicating the number of times the gaze point P moves between the plurality of comparison areas before first reaching the specific area A1; presence time data indicating the presence time during which the gaze point P exists in the specific area A1 or the comparison areas during the display period; and final area data indicating the area, among the specific area A1 and the comparison areas, in which the gaze point P exists last in the display time. The evaluation unit 224 obtains the evaluation data of the subject based on at least one of these gaze point data. This enables highly accurate evaluation data to be obtained efficiently.

In the evaluation device 100 according to the present embodiment, the evaluation unit 224 obtains evaluation data by weighting at least one piece of data included in the gazing point data. Thus, by giving priority to each data, more accurate evaluation data can be obtained.

The technical scope of the present invention is not limited to the above-described embodiments, and appropriate modifications can be made within a scope not departing from the gist of the present invention. For example, in each of the above embodiments, the case where the evaluation device 100 is used as an evaluation device for evaluating the possibility of a person with cognitive dysfunction or brain dysfunction has been described as an example, but the present invention is not limited thereto. For example, the evaluation device 100 may be used as an evaluation device for evaluating the memory of a subject who is not a person with cognitive dysfunction or brain dysfunction.

In the above embodiment, the case where the area setting unit 216 sets the specific area a1 and the comparison areas a2 to a4 in the second display operation has been described as an example, but the invention is not limited thereto. For example, the area setting unit 216 may set a corresponding area corresponding to the specific object M1 displayed on the display screen 101S in the first display operation. In this case, the determination unit 218 may determine whether or not the gaze point P of the subject is present in the corresponding area. Based on the determination result of the determination unit 218, the arithmetic unit 220 may then determine whether or not the subject was able to see, and thus memorize, the specific object M1 displayed on the display screen 101S in the first display operation.

In the above-described embodiment, the case where the display mode of the specific object is in a fixed state in the first display operation has been described as an example, but the present invention is not limited to this. For example, the display control section 202 may change the display form of the specific object in the first display operation.

Fig. 17 to 23 are diagrams showing examples of a series of images for evaluation displayed on the display screen 101S. First, as shown in fig. 17, the display control unit 202 displays, on the display screen 101S, an image in which 5 kinds of food are arranged in front of a bear. These 5 kinds of food correspond to a plurality of objects F, F1 to F4. Here, for example, the object F is an orange, the object F1 is a watermelon, the object F2 is a fish, the object F3 is bread, and the object F4 is an apple. In addition, the display control unit 202 displays instruction information I6 instructing the subject to remember which of the 5 kinds of food the bear eats in the image to be displayed next (see fig. 18 to 20). Hereinafter, a case where the bear eats the orange among the 5 kinds of food will be described as an example. In this case, among the objects F, F1 to F4, the object F indicating the orange is the specific object, and the objects F1 to F4 indicating the foods other than the orange are the comparison objects. Hereinafter, the object F may be referred to as the specific object F, and the objects F1 to F4 may be referred to as the comparison objects F1 to F4, respectively.

As shown in fig. 17, the area setting unit 216 sets a specific area a corresponding to the specific object F and sets comparison areas B1 to B4 corresponding to the comparison objects F1 to F4. Further, the area setting unit 216 sets an instruction area C corresponding to the instruction information I6. The area setting unit 216 sets the specific area a to, for example, a rectangular range including the specific object F. Similarly, the area setting unit 216 sets the comparison areas B1 to B4 to rectangular ranges including the comparison objects F1 to F4, respectively. The area setting unit 216 sets the instruction area C to a rectangular range including the instruction information I6. The shapes of the specific area a, the comparison areas B1 to B4, and the instruction area C are not limited to rectangles, and may be other shapes such as circles, ellipses, and polygons. The area setting unit 216 sets the specific area a, the comparison areas B1 to B4, and the instruction area C so as not to overlap each other.
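A minimal sketch of how such non-overlapping rectangular areas might be represented and checked is shown below, assuming each area is a (left, top, right, bottom) tuple in display-screen coordinates; the coordinate values and function names are purely hypothetical and not part of this description.

```python
# Illustrative sketch (assumed representation): each area is a rectangle
# (left, top, right, bottom) in display-screen coordinates.
from typing import Dict, Tuple

Rect = Tuple[float, float, float, float]


def rects_overlap(a: Rect, b: Rect) -> bool:
    """True if two axis-aligned rectangles overlap."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])


def set_areas(areas: Dict[str, Rect]) -> Dict[str, Rect]:
    """Check that the specific, comparison, and instruction areas do not overlap."""
    names = list(areas)
    for i, m in enumerate(names):
        for n in names[i + 1:]:
            if rects_overlap(areas[m], areas[n]):
                raise ValueError(f"areas {m} and {n} overlap")
    return areas


# Hypothetical coordinates for the areas of Fig. 17 (values are made up).
areas = set_areas({
    "A":  (100, 400, 220, 500),   # specific area around the orange
    "B1": (260, 400, 380, 500),   # comparison areas around the other foods
    "B2": (420, 400, 540, 500),
    "B3": (580, 400, 700, 500),
    "B4": (740, 400, 860, 500),
    "C":  (100, 50, 860, 120),    # instruction area around the instruction text
})
```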

Next, in the first display operation, the display control unit 202 displays, on the display screen 101S, an animation in which the bear eats 1 of the 5 kinds of food. Fig. 18 to 20 are diagrams each showing a scene of the animation. As described above, the case where the bear eats the orange is used as an example, so the object F indicating the orange is the specific object F and the objects F1 to F4 are the comparison objects F1 to F4.

Fig. 18 shows a scene in which the bear lifts the orange toward its mouth. Fig. 19 shows a scene in which the bear puts the orange into its mouth and closes its mouth. Fig. 20 shows a scene in which the orange is no longer visible inside the bear's mouth and the bear is eating the orange. In this way, the display control unit 202 changes the display mode of the specific object F. By displaying the series of motions of the bear shown in fig. 18 to 20 on the display screen 101S, the subject is made to remember that the bear ate the orange among the 5 kinds of food.

As shown in fig. 18 to 20, the area setting unit 216 keeps the specific area a corresponding to the specific object F and the comparison areas B1 to B4 corresponding to the comparison objects F1 to F4 set as in the state shown in fig. 17. Further, the area setting unit 216 cancels the setting of the instruction area C. The area setting unit 216 then sets a movement area D to a rectangular range including the trajectory along which the orange moves from when it is lifted up until it is put into the bear's mouth. The shapes of the specific area a, the comparison areas B1 to B4, and the movement area D are not limited to rectangles, and may be other shapes such as circles, ellipses, and polygons. In this case, the area setting unit 216 sets the specific area a, the comparison areas B1 to B4, and the movement area D so as not to overlap each other. When the scene in fig. 20 is displayed after the scene in fig. 19 ends, the area setting unit 216 cancels the setting of the movement area D. That is, the setting of the movement area D is canceled at the predetermined timing when the orange enters the bear's mouth, the mouth closes, and the orange is no longer visible.
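The time-limited movement area D could be handled, for example, as in the following sketch, where t01 is an assumed moment (in seconds of video time) at which the orange disappears into the bear's mouth; the value of t01 and the function names are illustrative assumptions only.

```python
# Illustrative sketch: the movement area D is only counted while it is set,
# i.e. until the assumed time t01 at which the orange is no longer visible.
T01 = 4.0  # hypothetical time in seconds; not a value given in this description


def movement_area_active(video_time):
    """Area D is set from the start of the animation until t01."""
    return video_time < T01


def count_movement_presence(samples, movement_area, sampling_interval=0.02):
    """Accumulate presence time in area D only while the area is set."""
    left, top, right, bottom = movement_area
    presence = 0.0
    for i, sample in enumerate(samples):
        t = i * sampling_interval
        if not movement_area_active(t):
            break  # the setting of the movement area D has been cancelled
        if sample is None:
            continue  # detection failed for this sample
        x, y = sample
        if left <= x <= right and top <= y <= bottom:
            presence += sampling_interval
    return presence
```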

After the first display operation, in the second display operation, as shown in fig. 21, the display control unit 202 displays instruction information I7 instructing the subject to gaze at the food the bear ate among the 5 kinds of food, with the 5 kinds of food arranged in front of the bear. The area setting unit 216 keeps the specific area a corresponding to the specific object F and the comparison areas B1 to B4 corresponding to the comparison objects F1 to F4 set as in the state shown in fig. 20. Further, the area setting unit 216 sets an instruction area E corresponding to the instruction information I7. The shapes of the specific area a, the comparison areas B1 to B4, and the instruction area E are not limited to rectangles, and may be other shapes such as circles, ellipses, and polygons. In this case, the area setting unit 216 sets the specific area a, the comparison areas B1 to B4, and the instruction area E so as not to overlap each other.

After the instruction information I7 has been displayed for a predetermined period, as shown in fig. 22, the display control unit 202 deletes the display of the instruction information I7. The area setting unit 216 cancels the setting of the instruction area E at the timing at which the display of the instruction information I7 is deleted. The display control unit 202 and the area setting unit 216 maintain this state for a predetermined period. That is, the display control unit 202 displays the specific object F and the comparison objects F1 to F4 on the display screen 101S for the predetermined period, and the area setting unit 216 keeps the specific area a corresponding to the specific object F and the comparison areas B1 to B4 corresponding to the comparison objects F1 to F4 set for the predetermined period. During this period, the subject is made to gaze at the specific object F and the comparison objects F1 to F4.

After the predetermined period has elapsed, as shown in fig. 23, the display control unit 202 may display an image indicating the correct answer to the instruction information I7. Fig. 23 shows, as an example, an image in which the area where the orange was placed is surrounded by a frame and the bear looks in the direction of the orange. By displaying the image of fig. 23, the subject can be made to clearly grasp the correct answer. When displaying the image indicating the correct answer, the area setting unit 216 may cancel the settings of the specific area a, the comparison areas B1 to B4, and the instruction area E.

Fig. 24 is a flowchart showing a processing flow of an evaluation method according to another example. As shown in fig. 24, the display control unit 202 displays instruction information I6 for the subject to remember which of the 5 kinds of food the bear has eaten (memory instruction processing: step S201).

Next, as a first display operation, the display control unit 202 displays an animation in which the bear eats 1 of the 5 kinds of food on the display screen 101S, and causes the subject to memorize the animation (memory processing: step S202).

Next, as a second display operation, the display control unit 202 displays instruction information I7 for making the subject look at which of the 5 kinds of food the bear has eaten in a state where the 5 kinds of food are arranged in front of the bear (answer processing: step S203).

Next, the display control unit 202 displays an image indicating the correct answer to the instruction information I7 (correct answer display processing: step S204).

Next, the evaluation unit 224 calculates an evaluation value ANS from the presence time data, the movement number data, the final area data, and the arrival time data obtained from the above processing results, and obtains evaluation data from the evaluation value ANS (step S205). Then, the output control unit 226 outputs the evaluation data obtained by the evaluation unit 224 (step S206).

Fig. 25 is a flowchart showing a flow of processing in the memory instruction processing (step S201). As shown in fig. 25, in the memory instruction processing, the display control unit 202 starts the reproduction of the video (step S301). After the waiting time for the video portion has elapsed, the arithmetic unit 220 resets the timer T1 (step S302) and resets the count values CNTC and RRa of the counters (step S303). The timer T1 is a timer for detecting the timing at which the memory instruction processing portion of the video ends. The counter CNTC measures a count value CNTC indicating the presence time data of the gaze point P in the instruction area C. The counter RRa counts the cumulative number of times RRa indicating how many times the gaze point P moves between the areas during video reproduction.

The gaze point detecting unit 214 detects position data of the gaze point of the subject on the display screen 101S of the display device 101 at predetermined sampling intervals (for example, 20[ msec ]) in a state where the subject is viewing the video image displayed on the display device 101 (step S304). If the position data is not detected (yes in step S305), the processing in and after step S329 to be described later is performed. When the position data is detected (no in step S305), the determination unit 218 determines the region where the gaze point P exists based on the position data (step S306).

When it is determined that the gaze point P is present in the specific area a (yes in step S307), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P was present in the latest detection, is the specific area a (step S308). When determining that the final area is the specific area a (yes in step S308), the arithmetic unit 220 skips the following step S309 and step S310 and performs the processing of step S329 described later. When determining that the final area is not the specific area a (no in step S308), the arithmetic unit 220 performs +1 on the cumulative number RRa of times indicating how many times the gaze point P has moved between the areas (step S309), and changes the final area to the specific area a (step S310). Thereafter, the arithmetic unit 220 performs the processing of step S329 and subsequent steps, which will be described later.

When it is determined that the gaze point P is not present in the specific area a (no in step S307), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B1 (step S311). If it is determined that the gaze point P is present in the comparison area B1 (yes in step S311), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P is present in the latest detection, is the comparison area B1 (step S312). If the arithmetic unit 220 determines that the final area is the comparison area B1 (yes in step S312), it skips step S313 and step S314 below and performs the processing of step S329 described below. When determining that the final area is not the comparison area B1 (no in step S312), the arithmetic unit 220 performs +1 on the cumulative number RRa of times indicating how many times the point of regard P has moved between the areas (step S313), and changes the final area to the comparison area B1 (step S314). Thereafter, the arithmetic unit 220 performs the processing of step S329 and subsequent steps, which will be described later.

When it is determined that the gaze point P is not present in the comparison area B1 (no in step S311), the arithmetic unit 220 determines whether or not the gaze point P is present in the comparison area B2 (step S315). If it is determined that the gaze point P is present in the comparison area B2 (yes in step S315), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P is present in the latest detection, is the comparison area B2 (step S316). If the arithmetic unit 220 determines that the final area is the comparison area B2 (yes in step S316), it skips the following steps S317 and S318 and performs the processing in step S329, which will be described later. When determining that the final area is not the comparison area B2 (no in step S316), the arithmetic unit 220 performs +1 on the cumulative number RRa of times indicating how many times the point of regard P has moved between the areas (step S317), and changes the final area to the comparison area B2 (step S318). Thereafter, the arithmetic unit 220 performs the processing of step S329 and subsequent steps, which will be described later.

When it is determined that the gaze point P is not present in the comparison area B2 (no in step S315), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B3 (step S319). If it is determined that the gaze point P is present in the comparison area B3 (yes in step S319), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P is present in the latest detection, is the comparison area B3 (step S320). If the arithmetic unit 220 determines that the final area is the comparison area B3 (yes in step S320), it skips step S321 and step S322 described below and performs the processing of step S329 described below. When determining that the final area is not the comparison area B3 (no in step S320), the arithmetic unit 220 performs +1 on the cumulative number RRa of times indicating how many times the point of regard P has moved between the areas (step S321), and changes the final area to the comparison area B3 (step S322). Thereafter, the arithmetic unit 220 performs the processing of step S329 and subsequent steps, which will be described later.

When it is determined that the gaze point P is not present in the comparison area B3 (no in step S319), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B4 (step S323). If it is determined that the gaze point P is present in the comparison area B4 (yes in step S323), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P is present in the latest detection, is the comparison area B4 (step S324). If the arithmetic unit 220 determines that the final area is the comparison area B4 (yes in step S324), it skips the following steps S325 and S326 and performs the processing in step S329, which will be described later. When determining that the final area is not the comparison area B4 (no in step S324), the arithmetic unit 220 performs +1 on the cumulative number RRa of times indicating how many times the point of regard P has moved between the areas (step S325), and changes the final area to the comparison area B4 (step S326). Thereafter, the arithmetic unit 220 performs the processing of step S329 and subsequent steps, which will be described later.

When it is determined that the gaze point P is not present in the comparison area B4 (no in step S323), the arithmetic unit 220 determines whether the gaze point P is present in the instruction area C (step S327). If it is determined that the gaze point P is not present in the instruction area C (no in step S327), the processing in and after step S329 to be described later is performed. When it is determined that the gaze point P is present in the instruction area C (yes in step S327), the arithmetic unit 220 performs +1 on the count value CNTC indicating the presence time data of the gaze point P in the instruction area C (step S328). Thereafter, the arithmetic unit 220 performs the processing of step S329 and subsequent steps, which will be described later.

Then, the arithmetic unit 220 determines whether or not the video reproduction completion time has been reached based on the detection result of the timer T1 (step S329). If the arithmetic unit 220 determines that the video reproduction completion time has not been reached (no in step S329), the processing from step S304 onward is repeated.

When the arithmetic unit 220 determines that the video reproduction completion time has been reached (yes in step S329), the display control unit 202 stops the video reproduction (step S330). After the reproduction of the video is stopped, the memory processing is performed (step S202).

Fig. 26 is a flowchart showing the flow of processing in the memory processing (step S202). As shown in fig. 26, in the memory processing, the display control unit 202 starts the reproduction of the video (step S401). After the waiting time for the video portion has elapsed, the arithmetic unit 220 resets the timer T2 (step S402) and resets the count values CNTAa, CNTB1a, CNTB2a, CNTB3a, CNTB4a, and CNTD of the counters (step S403). The timer T2 is a timer for detecting the timing at which the memory processing portion of the video ends. The counter CNTAa measures a count value CNTAa indicating the presence time data of the gaze point P in the specific area a. The counters CNTB1a to CNTB4a measure count values CNTB1a to CNTB4a indicating the presence time data of the gaze point P in the comparison areas B1 to B4. The counter CNTD measures a count value CNTD indicating the presence time data of the gaze point P in the movement area D.

The gaze point detecting unit 214 detects position data of the gaze point of the subject on the display screen 101S of the display device 101 at predetermined sampling intervals (for example, 20[ msec ]) in a state where the subject is viewing the video image displayed on the display device 101 (step S404). If the position data is not detected (yes in step S405), the processing after step S420 is performed. When the position data is detected (no in step S405), the determination unit 218 determines the region where the gaze point P exists based on the position data (step S406).

When it is determined that the gaze point P exists in the specific area a (yes in step S407), the arithmetic unit 220 performs +1 on the count value CNTAa indicating the presence time data of the gaze point P in the specific area a (step S408). Thereafter, the arithmetic unit 220 performs the processing of step S420 and thereafter, which will be described later.

When it is determined that the gaze point P is not present in the specific area a (no in step S407), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B1 (step S409). If it is determined that the gaze point P is present in the comparison area B1 (yes in step S409), the arithmetic unit 220 performs +1 on the count value CNTB1a indicating the presence time data of the gaze point P in the comparison area B1 (step S410). Thereafter, the arithmetic unit 220 performs the processing of step S420 and thereafter, which will be described later.

When it is determined that the gaze point P is not present in the comparison area B1 (no in step S409), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B2 (step S411). If it is determined that the gaze point P is present in the comparison area B2 (yes in step S411), the arithmetic unit 220 performs +1 on the count value CNTB2a indicating the presence time data of the gaze point P in the comparison area B2 (step S412). Thereafter, the arithmetic unit 220 performs the processing of step S420 and thereafter, which will be described later.

When it is determined that the gaze point P is not present in the comparison area B2 (no in step S411), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B3 (step S413). If it is determined that the gaze point P is present in the comparison area B3 (yes in step S413), the arithmetic unit 220 performs +1 on the count value CNTB3a indicating the presence time data of the gaze point P in the comparison area B3 (step S414). Thereafter, the arithmetic unit 220 performs the processing of step S420 and thereafter, which will be described later.

When it is determined that the gaze point P is not present in the comparison area B3 (no in step S413), the arithmetic unit 220 determines whether or not the gaze point P is present in the comparison area B4 (step S415). If it is determined that the gaze point P is present in the comparison area B4 (yes in step S415), the arithmetic unit 220 performs +1 on the count value CNTB4a indicating the presence time data of the gaze point P in the comparison area B4 (step S416). Thereafter, the arithmetic unit 220 performs the processing of step S420 and thereafter, which will be described later.

When it is determined that the gaze point P is not present in the comparison area B4 (no in step S415), the arithmetic unit 220 determines whether or not the value of the timer T2 exceeds a predetermined time t01 (step S417). The predetermined time t01 is the time at which the orange enters the bear's mouth and the mouth closes so that the orange is no longer visible. When the value of the timer T2 exceeds the predetermined time t01 (yes in step S417), the arithmetic unit 220 skips the process in step S418 and performs the processes in and after step S420, which will be described later. If the value of the timer T2 does not exceed the predetermined time t01 (no in step S417), the arithmetic unit 220 determines whether or not the gaze point P is present in the movement area D (step S418). If it is determined that the gaze point P is not present in the movement area D (no in step S418), the processing in and after step S420 is performed. When it is determined that the gaze point P is present in the movement area D (yes in step S418), the arithmetic unit 220 performs +1 on the count value CNTD indicating the presence time data of the gaze point P in the movement area D (step S419). Thereafter, the arithmetic unit 220 performs the processing of step S420 and subsequent steps, which will be described later.

Then, the arithmetic unit 220 determines whether or not the video reproduction completion time has been reached based on the detection result of the timer T2 (step S420). When the arithmetic unit 220 determines that the video reproduction completion time has not been reached (no in step S420), the processing from step S404 onward is repeated.

When the arithmetic unit 220 determines that the video reproduction completion time has been reached (yes in step S420), the display control unit 202 stops the video reproduction (step S421). After the reproduction of the video is stopped, the answer processing is performed (step S203).

Fig. 27 is a flowchart showing the flow of processing in the answer processing (step S203). As shown in fig. 27, in the answer processing, the display control unit 202 starts the reproduction of the video (step S501). After the waiting time for the video portion has elapsed, the arithmetic unit 220 resets the timer T3 (step S502), resets the count values CNTAb, CNTB1b, CNTB2b, CNTB3b, CNTB4b, CNTE, and RRb of the counters (step S503), and sets the flag value to 0 (step S504). The timer T3 is a timer for detecting the timing at which the answer processing portion of the video ends. The counter CNTAb measures a count value CNTAb indicating the presence time data of the gaze point P in the specific area a. The counters CNTB1b to CNTB4b measure count values CNTB1b to CNTB4b indicating the presence time data of the gaze point P in the comparison areas B1 to B4. The counter CNTE measures a count value CNTE indicating the presence time data of the gaze point P in the instruction area E. The counter RRb counts the cumulative number of times RRb indicating how many times the gaze point P moves between the areas.

The gaze point detecting unit 214 detects position data of the gaze point of the subject on the display screen 101S of the display device 101 at predetermined sampling intervals (for example, 20[ msec ]) in a state where the subject is viewing the video image displayed on the display device 101 (step S505). When the position data is detected (no in step S506), the determination unit 218 determines the region where the gaze point P exists based on the position data (step S507).

When it is determined that the gaze point P is present in the specific area a (yes in step S508), the arithmetic unit 220 determines whether or not the flag value is 1, that is, whether or not the gaze point P has reached the specific area a for the first time (1: reached, 0: not reached) (step S509). If the flag value is 1 (yes in step S509), the arithmetic unit 220 skips the following steps S510 to S512 and performs the processing of step S513 described later.

When the flag value is not 1, that is, when the gaze point P has reached the specific area a for the first time (no in step S509), the arithmetic unit 220 extracts the measurement result of the timer T3 as the arrival time data (step S510). The calculation unit 220 also stores, in the storage unit 222, the movement number data indicating how many times the gaze point P moved between the areas before first reaching the specific area a (step S511). After that, the arithmetic unit 220 changes the flag value to 1 (step S512).

Next, the calculation unit 220 determines whether or not the final area, which is the area where the gaze point P was present in the latest detection, is the specific area a (step S513). When determining that the final area is the specific area a (yes in step S513), the arithmetic unit 220 skips the following step S514 and step S515 and performs the processing of step S516 described later. When determining that the final area is not the specific area a (no in step S513), the arithmetic unit 220 performs +1 on the cumulative number of times indicating how many times the gaze point P has moved between the areas (step S514), and changes the final area to the specific area a (step S515). Further, the calculation unit 220 performs +1 on the count value CNTAb indicating the presence time data of the gaze point P in the specific area a (step S516). Thereafter, the arithmetic unit 220 performs the processing of step S540 and subsequent steps, which will be described later.

When it is determined that the gaze point P is not present in the specific area a (no in step S508), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B1 (step S517). If it is determined that the gaze point P is present in the comparison area B1 (yes in step S517), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P is present in the latest detection, is the comparison area B1 (step S518). If the arithmetic unit 220 determines that the final area is the comparison area B1 (yes in step S518), it skips step S519 and step S520 below and performs the processing of step S521 described below. When determining that the final area is not the comparison area B1 (no in step S518), the arithmetic unit 220 performs +1 on the cumulative number of times indicating how many times the point of regard P has moved between the areas (step S519), and changes the final area to the comparison area B1 (step S520). The arithmetic unit 220 performs +1 operation on the count value CNTB1b indicating the presence time data of the point of regard P in the comparison area B1 (step S521). Thereafter, the arithmetic unit 220 performs the processing of step S540 and thereafter, which will be described later.

When it is determined that the gaze point P is not present in the comparison area B1 (no in step S517), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B2 (step S522). If it is determined that the gaze point P is present in the comparison area B2 (yes in step S522), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P is present in the latest detection, is the comparison area B2 (step S523). If the arithmetic unit 220 determines that the final area is the comparison area B2 (yes in step S523), it skips the following steps S524 and S525 and performs the processing of step S526 described later. When determining that the final area is not the comparison area B2 (no in step S523), the arithmetic unit 220 performs +1 on the cumulative number of times indicating how many times the point of regard P has moved between the areas (step S524), and changes the final area to the comparison area B2 (step S525). The computing unit 220 performs +1 operation on the count value CNTB2b indicating the presence time data of the gaze point P in the comparison area B2 (step S526). Thereafter, the arithmetic unit 220 performs the processing of step S540 and thereafter, which will be described later.

When determining that the gaze point P is not present in the comparison area B2 (no in step S522), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B3 (step S527). If it is determined that the gaze point P is present in the comparison area B3 (yes in step S527), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P is present in the latest detection, is the comparison area B3 (step S528). If the arithmetic unit 220 determines that the final area is the comparison area B3 (yes in step S528), it skips the following steps S529 and S530 and performs the processing in step S531 to be described later. When determining that the final area is not the comparison area B3 (no in step S528), the arithmetic unit 220 performs +1 on the cumulative number of times indicating how many times the point of regard P has moved between the areas (step S529), and changes the final area to the comparison area B3 (step S530). The arithmetic unit 220 performs +1 operation on the count value CNTB3b indicating the presence time data of the point of regard P in the comparison area B3 (step S531). Thereafter, the arithmetic unit 220 performs the processing of step S540 and thereafter, which will be described later.

When determining that the gaze point P is not present in the comparison area B3 (no in step S527), the arithmetic unit 220 determines whether the gaze point P is present in the comparison area B4 (step S532). If it is determined that the gaze point P is present in the comparison area B4 (yes in step S532), the arithmetic unit 220 determines whether or not the final area, which is the area where the gaze point P is present in the latest detection, is the comparison area B4 (step S533). If the arithmetic unit 220 determines that the final area is the comparison area B4 (yes in step S533), it skips the following steps S534 and S535 and performs the processing of step S536, which will be described later. When determining that the final area is not the comparison area B4 (no in step S533), the arithmetic unit 220 performs +1 on the cumulative number of times indicating how many times the point of regard P has moved between the areas (step S534), and changes the final area to the comparison area B4 (step S535). Further, the arithmetic unit 220 performs +1 on the count value CNTB4b indicating the presence time data of the point of regard P in the comparison area B4 (step S536). Thereafter, the arithmetic unit 220 performs the processing of step S540 and thereafter, which will be described later.

When determining that the gaze point P is not present in the comparison area B4 (no in step S532), the arithmetic unit 220 determines whether or not the value of the timer T3 exceeds a predetermined time t02 (step S537). The predetermined time t02 is the time at which the display of the instruction information I7 is deleted. When the value of the timer T3 exceeds the predetermined time t02 (yes in step S537), the arithmetic unit 220 skips the process in step S538 and performs the processes in and after step S540. If the value of the timer T3 does not exceed the predetermined time t02 (no in step S537), the arithmetic unit 220 determines whether or not the gaze point P is present in the instruction area E (step S538). When it is determined that the gaze point P is present in the instruction area E (yes in step S538), the arithmetic unit 220 performs +1 on the count value CNTE indicating the presence time data of the gaze point P in the instruction area E (step S539). Thereafter, the arithmetic unit 220 performs the processing of step S540 and subsequent steps, which will be described later.

Then, the arithmetic unit 220 determines whether or not the video reproduction completion time has been reached based on the detection result of the timer T3 (step S540). If the arithmetic unit 220 determines that the video reproduction completion time has not been reached (no in step S540), the processing from step S505 onward is repeated.

When the arithmetic unit 220 determines that the video reproduction end time has been reached (yes in step S540), the display control unit 202 stops the video reproduction (step S541). After the reproduction of the video is stopped, evaluation calculation (step S205) and evaluation value output (step S206) are performed.

In the evaluation calculation, the evaluation value ANS is expressed as:

ANS = K11·RRa + K12·CNTC + K13·CNTAa + K14·CNTB1a + K15·CNTB2a + K16·CNTB3a + K17·CNTB4a + K18·CNTD + K19·CNTAb + K20·CNTB1b + K21·CNTB2b + K22·CNTB3b + K23·CNTB4b + K24·CNTE + K25·RRb

where K11 to K25 are constants for weighting. The constants K11 to K25 can be set as appropriate.
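Purely as an illustration of this weighted sum, the following Python sketch computes ANS from the measured counters; the coefficient values are placeholders chosen only to match the signs discussed below and are not values given in this description.

```python
# Illustrative computation of ANS; coefficient values are placeholders that
# only follow the signs discussed below, not values from this description.
K = {
    "K11": -1.0, "K12": -1.0, "K13": -1.0,
    "K14": 1.0, "K15": 1.0, "K16": 1.0, "K17": 1.0,
    "K18": 0.5, "K19": -3.0,
    "K20": 3.0, "K21": 3.0, "K22": 3.0, "K23": 3.0,
    "K24": -1.0, "K25": 1.0,
}


def evaluation_value(d):
    """d maps counter names (RRa, CNTC, ..., RRb) to their measured values."""
    return (K["K11"] * d["RRa"] + K["K12"] * d["CNTC"] + K["K13"] * d["CNTAa"]
            + K["K14"] * d["CNTB1a"] + K["K15"] * d["CNTB2a"] + K["K16"] * d["CNTB3a"]
            + K["K17"] * d["CNTB4a"] + K["K18"] * d["CNTD"] + K["K19"] * d["CNTAb"]
            + K["K20"] * d["CNTB1b"] + K["K21"] * d["CNTB2b"] + K["K22"] * d["CNTB3b"]
            + K["K23"] * d["CNTB4b"] + K["K24"] * d["CNTE"] + K["K25"] * d["RRb"])
```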

As for RRa, the more thoroughly the subject tends to confirm each object, the higher the value of RRa becomes. In this case, by setting K11 to a negative coefficient, the higher the value of RRa, the lower the value of the evaluation value ANS becomes.

As for CNTC, the more carefully the instruction text tends to be confirmed, the higher the value of CNTC becomes. In this case, by setting K12 to a negative coefficient, the higher the value of CNTC, the lower the value of the evaluation value ANS becomes.

As for CNTAa, the more carefully the orange that the bear eats is observed, the higher the value of CNTAa becomes. In this case, by setting K13 to a negative coefficient, the higher the value of CNTAa, the lower the value of the evaluation value ANS becomes.

As for CNTB1a to CNTB4a, the more reliably the foods other than the orange eaten by the bear tend to be observed, the higher the values of CNTB1a to CNTB4a become. In this case, by setting K14 to K17 to positive coefficients, the higher the values of CNTB1a to CNTB4a, the higher the value of the evaluation value ANS becomes.

As for CNTD, the more thoroughly the moving object tends to be confirmed, the higher the value of CNTD becomes. On the other hand, the value also becomes high when the subject merely tends to watch whatever is moving. In this case, K18 may be set to a positive coefficient whose value is, for example, lower than the other coefficients.

As for CNTAb, the more carefully the subject observes the orange, which is the correct answer, the higher the value of CNTAb becomes. In this case, by setting K19 to a negative coefficient whose absolute value is larger than those of the other coefficients, the higher the value of CNTAb, the more greatly the value of the evaluation value ANS decreases.

As for CNTB1b to CNTB4b, the more carefully an incorrect food is observed, the higher the values of CNTB1b to CNTB4b become. In this case, by setting K20 to K23 to positive coefficients whose absolute values are larger than those of the other coefficients, the higher the values of CNTB1b to CNTB4b, the larger the value of the evaluation value ANS becomes.

As for CNTE, the more carefully the instruction information I7 tends to be confirmed, the higher the value of CNTE becomes. In this case, by setting K24 to a negative coefficient, the higher the value of CNTE, the lower the value of the evaluation value ANS becomes.

As for RRb, the more the subject tends to hesitate in selecting the correct answer, the higher the value of RRb becomes. In this case, by setting K25 to a positive coefficient, the higher the value of RRb, the higher the value of the evaluation value ANS becomes.

The evaluation unit 224 can determine the evaluation data by determining whether or not the evaluation value ANS is equal to or greater than a predetermined value. For example, when the evaluation value ANS is equal to or greater than the predetermined value, it can be evaluated that the subject is highly likely to be a person with cognitive dysfunction or brain dysfunction. When the evaluation value ANS is smaller than the predetermined value, it can be evaluated that the subject is less likely to be a person with cognitive dysfunction or brain dysfunction.

The evaluation unit 224 may also calculate the evaluation value of the subject based on at least one of the above-described gaze point data. For example, if the presence time data CNTAb of the specific area a is equal to or greater than a predetermined value, the evaluation unit 224 can evaluate that the subject is less likely to be a person with cognitive dysfunction or brain dysfunction. Further, if the ratio of the presence time data CNTAb of the specific area a to the presence time data CNTB1b to CNTB4b of the comparison areas B1 to B4 (the ratio of the gaze rates of the specific area a and the comparison areas B1 to B4) is equal to or greater than a predetermined value, the evaluation unit 224 can evaluate that the subject is less likely to be a person with cognitive dysfunction or brain dysfunction. Further, if the ratio of the presence time data CNTAb of the specific area a to the total gaze time (the ratio of the gaze time on the specific area a to the total gaze time) is equal to or greater than a predetermined value, the evaluation unit 224 can evaluate that the subject is less likely to be a person with cognitive dysfunction or brain dysfunction. In addition, the evaluation unit 224 can evaluate that the subject is less likely to be a person with cognitive dysfunction or brain dysfunction if the final area is the specific area a, and that the subject is more likely to be a person with cognitive dysfunction or brain dysfunction if the final area is one of the comparison areas B1 to B4.
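A minimal sketch of these ratio- and final-area-based criteria, with an assumed threshold value, might look as follows; it is illustrative only and not the device's actual decision logic.

```python
# Illustrative sketch of the ratio- and final-area-based criteria above.
# RATIO_THRESHOLD is an assumed value, not one given in this description.
RATIO_THRESHOLD = 0.5


def likely_not_impaired(cntab, cntb, total_gaze_time, final_area):
    """cntab: presence time in the specific area A (CNTAb);
    cntb: presence times in the comparison areas B1 to B4;
    total_gaze_time: total gaze time during the answer processing;
    final_area: name of the area in which the gaze point was seen last."""
    comparison_total = sum(cntb)
    denominator = cntab + comparison_total
    # Ratio of gaze on the specific area versus the comparison areas.
    area_ratio = cntab / denominator if denominator else 0.0
    # Ratio of gaze on the specific area to the total gaze time.
    time_ratio = cntab / total_gaze_time if total_gaze_time else 0.0
    return (area_ratio >= RATIO_THRESHOLD
            or time_ratio >= RATIO_THRESHOLD
            or final_area == "A")
```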

Fig. 28 to 30 are diagrams showing other examples of a series of images for evaluation displayed on the display screen 101S. First, as shown in fig. 28, in the first display operation, the display control unit 202 displays, on the display screen 101S, an image in which a person exposes his or her face in 1 window of a plurality of (for example, 6) windows. These 6 windows correspond to a plurality of objects W, W1 to W5. Further, the display control unit 202 displays instruction information I8 instructing the subject to remember in which of the 6 windows the face is exposed. In this case, among the plurality of objects W, W1 to W5, the window in which the person's face is exposed is the specific object W, and the windows other than that window are the comparison objects W1 to W5.

As shown in fig. 28, the area setting unit 216 sets a specific area a corresponding to the specific object W and sets comparison areas B1 to B5 corresponding to the comparison objects W1 to W5. Further, the area setting unit 216 sets an instruction area C corresponding to the instruction information I8. The area setting unit 216 sets the specific area a to, for example, a rectangular range corresponding to the specific object W. In fig. 28, since the specific object W is a rectangular window, the specific area a can be set so as to overlap the outline of the window. Similarly, the area setting unit 216 can set the comparison areas B1 to B5 so as to overlap the outlines of the windows of the comparison objects W1 to W5, for example. The area setting unit 216 sets the instruction area C to a rectangular range including the instruction information I8. The shapes of the specific area a, the comparison areas B1 to B5, and the instruction area C are not limited to rectangles, and may be other shapes such as circles, ellipses, and polygons. In this case, the area setting unit 216 sets the specific area a, the comparison areas B1 to B5, and the instruction area C so as not to overlap each other. The display control unit 202 and the area setting unit 216 maintain this state for a predetermined period. That is, the display control unit 202 displays the specific object W and the comparison objects W1 to W5 on the display screen 101S for the predetermined period, and the area setting unit 216 keeps the specific area a corresponding to the specific object W and the comparison areas B1 to B5 corresponding to the comparison objects W1 to W5 set for the predetermined period. During this period, the subject is made to gaze at the specific object W and the comparison objects W1 to W5.

After the above display has continued for the predetermined period, as shown in Fig. 29, the display control unit 202 erases the image of the person from the window of the specific object W. In this way, the display control unit 202 changes the display mode of the specific object W. Then, as the second display operation, the display control unit 202 displays instruction information I9 prompting the subject to gaze at the window, among the six windows, in which the person's face had appeared. The area setting unit 216 keeps the specific area A corresponding to the specific object W and the comparison areas B1 to B5 corresponding to the comparison objects W1 to W5 set as in the state shown in Fig. 28. The area setting unit 216 further sets an instruction area E corresponding to the instruction information I9. The shapes of the specific area A, the comparison areas B1 to B5, and the instruction area E are not limited to rectangles, and may be other shapes such as circles, ellipses, and polygons. In this case, the area setting unit 216 sets the specific area A, the comparison areas B1 to B5, and the instruction area E so that they do not overlap one another.

After the second display operation has been performed for a predetermined period, as shown in Fig. 30, the display control unit 202 may display an image indicating the correct answer to the instruction information I9. In Fig. 30, as an example, the person's face is displayed again in the window corresponding to the specific object W, and instruction information I10 indicating that this window is the correct answer is displayed. By displaying the image of Fig. 30, the subject can be made to clearly grasp the correct answer. When displaying the image indicating the correct answer, the area setting unit 216 may cancel the setting of the specific area A, the comparison areas B1 to B5, and the instruction area E.
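The overall sequence of Figs. 28 to 30 can be summarized as a simple timeline. The sketch below is only illustrative; the durations and the show callback are assumptions, and the specification does not prescribe concrete values.

```python
import time

def run_display_sequence(show, memorize_seconds=5.0, answer_seconds=5.0, reveal_seconds=3.0):
    # First display operation: the face is visible in the specific window and
    # instruction information I8 asks the subject to remember its position (Fig. 28).
    show("memorize")
    time.sleep(memorize_seconds)

    # Display mode change: the face is erased and instruction information I9
    # asks the subject to gaze at the window where the face had appeared (Fig. 29).
    show("question")
    time.sleep(answer_seconds)      # gaze point data would be collected during this period

    # Optional reveal: the face reappears together with instruction information I10
    # indicating the correct answer (Fig. 30).
    show("correct_answer")
    time.sleep(reveal_seconds)

run_display_sequence(show=print)    # prints the phase names in order
```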

As described above, the evaluation device 100 according to the above embodiment includes: the display screen 101S; a gaze point detection unit 214 that detects the position of a gaze point of a subject observing the display screen 101S; a display control unit 202 that displays an image including a specific object and a comparison object different from the specific object on the display screen 101S; an area setting unit 216 that sets a specific area corresponding to the specific object and a comparison area corresponding to the comparison object; a determination unit 218 that determines, based on the position of the gaze point, whether the gaze point is present in the specific area and the comparison area while the image is displayed; a calculation unit 220 that calculates gaze point data indicating the course of movement of the gaze point, based on the determination result of the determination unit 218; and an evaluation unit 224 that obtains evaluation data of the subject based on the gaze point data.

In addition, the evaluation method of the above embodiment includes: detecting the position of a gaze point of a subject observing the display screen 101S; displaying an image including a specific object and a comparison object different from the specific object on the display screen 101S; setting a specific area corresponding to the specific object and a comparison area corresponding to the comparison object; determining, based on the position of the gaze point, whether the gaze point is present in the specific area and the comparison area during the display period in which the image is displayed on the display screen; calculating gaze point data indicating the course of movement of the gaze point during the display period, based on the determination result; and obtaining evaluation data of the subject based on the gaze point data.

In addition, the evaluation program of the above embodiment causes a computer to execute processes of: detecting the position of a gaze point of a subject observing the display screen 101S; displaying an image including a specific object and a comparison object different from the specific object on the display screen 101S; setting a specific area corresponding to the specific object and a comparison area corresponding to the comparison object; determining, based on the position of the gaze point, whether the gaze point is present in the specific area and the comparison area during the display period in which the image is displayed on the display screen; calculating gaze point data indicating the course of movement of the gaze point during the display period, based on the determination result; and obtaining evaluation data of the subject based on the gaze point data.
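To make the flow of the method and program concrete, the following is a minimal sketch, under assumed helper names (detect_gaze_point, classify, evaluate), of the loop that collects gaze point data during the display period and then derives the evaluation data. It is not the actual implementation described in the specification.

```python
def run_evaluation(detect_gaze_point, classify, evaluate, n_frames):
    cnta = 0                  # presence time data for the specific area
    cntb = {}                 # presence time data per comparison area
    final_region = None       # last area in which the gaze point was present

    for _ in range(n_frames):             # one iteration per detection cycle
        x, y = detect_gaze_point()        # position of the gaze point on the display screen
        region = classify(x, y)           # "A", "B1", ..., or None (determination step)
        if region == "A":
            cnta += 1
        elif region is not None:
            cntb[region] = cntb.get(region, 0) + 1
        if region is not None:
            final_region = region

    return evaluate(cnta, cntb, final_region)   # evaluation data of the subject
```

The determination and calculation steps correspond to accumulating the counters inside the loop; the evaluation step corresponds to the final call.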

Therefore, both when the display mode of the specific object is not changed in the first display operation and when it is changed, the evaluation data of the subject can be obtained from the course of movement of the gaze point during the display period. By diversifying the display form of the specific object in this way, the influence of chance can be further reduced, and the memory of the subject can be evaluated with high accuracy. Thus, the evaluation device 100 can evaluate the subject with high accuracy.

Description of the symbols

A, A1, A5, A8 … specific region; B1 to B5 … comparison region; C, E … instruction region; I8 to I10 … instruction information; W … specific object; W1 to W5 … comparison object; CNTA … count value; 100 … evaluation device; 101S … display screen; 202 … display control unit; 214 … gaze point detection unit; 216 … area setting unit; 218 … determination unit; 220 … calculation unit; 224 … evaluation unit
