Visual fatigue measuring method and system, storage medium and electronic equipment

Document No.: 1451391    Publication date: 2020-02-21

Note: This technology, "Visual fatigue measuring method and system, storage medium and electronic equipment", was designed and created by Chen Fei and Feng Yong on 2019-10-15. Its main content is as follows: The invention discloses a visual fatigue measuring method and a system thereof, a storage medium and electronic equipment, wherein the method comprises the following steps: acquiring eye movement parameters of the eyes, wherein the eye movement parameters include one or more of fixation frequency and saccade amplitude; and obtaining the fatigue degree of the eyes from the eye movement parameters. Because the fixation frequency and/or the saccade amplitude are adopted as the eye movement parameters to obtain the fatigue degree, the measurement result is accurate and reliable, and the fatigue degree of the user can be accurately determined.

1. An asthenopia measuring method, comprising the steps of:

acquiring eye movement parameters of the eyes; wherein the eye movement parameters include one or more of fixation frequency and saccade amplitude;

and obtaining the fatigue degree of the eyes from the eye movement parameters.

2. The asthenopia measurement method according to claim 1, wherein the acquiring eye movement parameters of the eye comprises:

detecting an eye movement track of the eyes, and acquiring an image of the eyes when the eye movement track is in a browsing state;

obtaining a fixation frequency according to the image of the eye; wherein the fixation frequency is the frequency at which the eyeball is in a static state.

3. The asthenopia measurement method according to claim 1, wherein the acquiring eye movement parameters of the eye comprises:

calibrating and verifying the corresponding relation between the image of the eye and the viewpoint position;

detecting an eye movement track of the eyes, and acquiring an image of the eyes when the eye movement track is in a browsing state;

and determining the saccade amplitude according to the corresponding relation between the eye image and the viewpoint position and the eye image.

4. The asthenopia measurement method according to claim 3, wherein the calibrating and verifying the correspondence between the image of the eye and the viewpoint position comprises:

establishing a corresponding relation between the image of the eye and the viewpoint position by adopting a calibration model;

verifying the corresponding relation between the image of the eye and the viewpoint position by adopting a verification model, and updating the corresponding relation between the image of the eye and the viewpoint position when the gaze error is greater than a preset threshold value;

when the gaze error is less than or equal to a preset threshold, the verification is completed.

5. The asthenopia measurement method according to claim 1, wherein the acquiring eye movement parameters of the eye comprises:

detecting the eye movement track of the eyes, and acquiring images of the eyes by adopting at least 2 cameras when the eye movement track is in a browsing state;

obtaining eye position data from the image of the eye, and determining the saccade amplitude.

6. An asthenopia measuring method according to any one of claims 3 to 5, wherein the saccade amplitude is an eyeball saccade angle.

7. An asthenopia measuring method according to any one of claims 1 to 5, wherein the obtaining of the degree of asthenopia by the eye movement parameter comprises:

and acquiring the benchmark data of the fatigue degree, and acquiring the fatigue degree of the eyes according to the benchmark data and the eye movement parameters.

8. An asthenopia measuring system, comprising: a processor, and a memory coupled to the processor,

the memory stores an asthenopia measurement program which when executed by the processor implements the steps of:

acquiring eye movement parameters of the eyes; wherein the eye movement parameters include one or more of fixation frequency and saccade amplitude;

and obtaining the fatigue degree of the eyes from the eye movement parameters.

9. A storage medium storing an asthenopia measuring program which, when executed, implements the steps of the asthenopia measuring method according to any one of claims 1 to 7.

10. An electronic device characterized by comprising the storage medium of claim 9.

Technical Field

The invention relates to the technical field of visual fatigue measurement, in particular to a visual fatigue measurement method and system, a storage medium and electronic equipment.

Background

With the gradual popularization of video display terminals (VDT for short, mainly various electronic screens such as mobile phones, tablets, desktop computers, televisions and advertising screens) and artificial light sources, the aging of the population, and the increase of study and work pressure, asthenopia, with typical symptoms such as blurred vision, dry eyes, double vision, tearing, eye pain, and aching of the head, neck and shoulders, is becoming more and more common. The eye fatigue state caused by the modern lifestyle not only harms eye health, but also causes myopia and pathological eye changes due to the continuous decline of the accommodative capacity, and reduces the user's learning performance and working efficiency.

Disclosure of Invention

The technical problem to be solved by the present invention is to provide a visual fatigue measurement method and system thereof, aiming at solving the problem of inaccurate visual fatigue measurement in the prior art.

The technical scheme adopted by the invention for solving the technical problem is as follows:

an asthenopia measuring method, comprising the steps of:

acquiring eye movement parameters of the eyes; wherein the eye movement parameters include one or more of fixation frequency and saccade amplitude;

and obtaining the fatigue degree of the eyes from the eye movement parameters.

The asthenopia measuring method, wherein the acquiring of eye movement parameters of the eye comprises:

detecting an eye movement track of the eyes, and acquiring an image of the eyes when the eye movement track is in a browsing state;

obtaining a fixation frequency according to the image of the eye; wherein the fixation frequency is the frequency at which the eyeball is in a static state.

The asthenopia measuring method, wherein the acquiring of eye movement parameters of the eye comprises:

calibrating and verifying the corresponding relation between the image of the eye and the viewpoint position;

detecting an eye movement track of the eyes, and acquiring an image of the eyes when the eye movement track is in a browsing state;

and determining the saccade amplitude according to the corresponding relation between the eye image and the viewpoint position and the eye image.

The method for measuring asthenopia, wherein the calibrating and verifying the corresponding relation between the image of the eye and the viewpoint position comprises the following steps:

establishing a corresponding relation between the image of the eye and the viewpoint position by adopting a calibration model;

verifying the corresponding relation between the image of the eye and the viewpoint position by adopting a verification model, and updating the corresponding relation between the image of the eye and the viewpoint position when the gaze error is greater than a preset threshold value;

when the gaze error is less than or equal to a preset threshold, the verification is completed.

The asthenopia measuring method, wherein the acquiring of eye movement parameters of the eye comprises:

detecting the eye movement track of the eyes, and acquiring the images of the eyes and the position data of the eyes by adopting at least 2 cameras when the eye movement track is in a browsing state;

the saccade amplitude is determined from the image of the eye and the position data of the eye.

The asthenopia measuring method, wherein the saccade amplitude is an eyeball saccade angle value.

The method for measuring asthenopia, wherein the obtaining of the degree of asthenopia by the eye movement parameter comprises:

and acquiring the benchmark data of the fatigue degree, and acquiring the fatigue degree of the eyes according to the benchmark data and the eye movement parameters.

An asthenopia measuring system, comprising: a processor, and a memory coupled to the processor,

the memory stores an asthenopia measurement program which when executed by the processor implements the steps of:

acquiring eye movement parameters of the eyes; wherein the eye movement parameters include one or more of fixation frequency and saccade amplitude;

and obtaining the fatigue degree of the eyes from the eye movement parameters.

A storage medium in which an asthenopia measuring program is stored, which when executed, implements the steps of the asthenopia measuring method as set forth in any one of the above.

An electronic device comprising a storage medium as described above.

Advantageous effects: because the fixation frequency and/or the saccade amplitude are adopted as the eye movement parameters to obtain the fatigue degree, the measurement result is accurate and reliable, and the fatigue degree of the user can be accurately determined.

Drawings

FIG. 1 is a flow chart of the visual fatigue measurement method of the present invention.

Fig. 2 is a schematic view of a screen and a camera in the present invention.

Fig. 3A is an image of the eye corresponding to point A in fig. 2.

Fig. 3B is an image of the eye corresponding to point B in fig. 2.

FIG. 4A is a schematic diagram of a 4-point calibration method according to the present invention.

FIG. 4B is a schematic diagram of a 5-point calibration method according to the present invention.

FIG. 4C is a schematic diagram of a 9-point calibration method according to the present invention.

Fig. 5 is a diagram illustrating a screen reading task in the present invention.

Fig. 6 is a schematic view of eye fixations and saccades in the present invention.

FIG. 7 is a diagram of a matching template for a reading area in accordance with the present invention.

FIG. 8 is a graph of the parameter differences between different asthenopia states in the present invention.

Fig. 9 is a functional block diagram of the visual fatigue measurement system of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

Referring to fig. 1-9, the present invention provides some embodiments of a method for measuring asthenopia.

The inventor has found through experiments that the higher the fixation frequency and the lower the saccade amplitude of the human eyes, the deeper the degree of asthenopia; therefore, asthenopia can be measured through the fixation frequency and the saccade amplitude of the human eyes.

as shown in fig. 1, the visual fatigue measuring method according to the embodiment of the present invention includes the following steps:

step S10, obtaining eye movement parameters of eyes; wherein the eye movement parameters include: gaze frequency, saccade amplitude.

The fixation frequency is the frequency (in Hz) at which the eyeball is stationary; the saccade amplitude is the eyeball saccade angle (in °). The fixation frequency and the saccade amplitude can be obtained in various ways. The fixation frequency is not influenced by the distance between the eyes and the viewpoint position: as long as the viewpoint position is unchanged, the eyeball is considered to be in a static state, and the fixation frequency is the number of such static states per unit time. The saccade amplitude, by contrast, is related to the distance between the human eye and the viewpoint position (or the viewed object): when the eye browses something (for example, reads, searches for, or tracks a viewed object) and sweeps between the same two viewpoint positions, the saccade amplitude is larger at a short viewing distance and smaller at a long viewing distance, as illustrated by the short sketch below.
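The distance dependence can be made concrete with a small geometric example. This is an illustration only, not the patent's implementation: the helper name and the screen-plane coordinate system (viewpoints and eye position in centimetres, the eye a known distance in front of the screen) are assumptions made for the sketch.

```python
import numpy as np

def saccade_angle_deg(p1, p2, eye):
    """Angle (degrees) swept by the eye when the viewpoint jumps from p1 to p2.

    p1, p2 : viewpoint positions on the screen plane, (x, y, 0), in cm.
    eye    : position of the eye in the same coordinate system, in cm.
    """
    v1 = np.asarray(p1, float) - np.asarray(eye, float)  # gaze vector to the first viewpoint
    v2 = np.asarray(p2, float) - np.asarray(eye, float)  # gaze vector to the second viewpoint
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# The same 5 cm jump on the screen gives a larger angle at 30 cm than at 60 cm:
print(saccade_angle_deg((0, 0, 0), (5, 0, 0), (2.5, 0, 30)))  # ~9.5 degrees
print(saccade_angle_deg((0, 0, 0), (5, 0, 0), (2.5, 0, 60)))  # ~4.8 degrees
```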

Experimental data show that, in reading, searching and tracking tasks, an increase of the fixation frequency and a decrease of the saccade amplitude accurately reflect a deepening of the visual fatigue (VF) degree, and the opposite changes reflect a decrease of the visual fatigue degree. That is, the fatigue degree of the user can be obtained by evaluating the eye movement parameters, namely the fixation frequency and/or the saccade amplitude. Of course, other eye movement parameters may also be introduced, such as blink parameters (blink frequency, blink duration, blink amplitude, etc.) and fixation/saccade parameters (fixation duration, saccade rate, etc.). According to the invention, the fatigue degree is obtained by adopting the fixation frequency and/or the saccade amplitude as the eye movement parameters, so the measurement result is accurate and reliable, and the fatigue degree of the user can be accurately determined. In addition, the method has the advantages of being non-contact, simple and quick, having little influence on the user's current working or learning state (even 'insensible' measurement can be realized), and requiring only simple, portable and low-cost measuring equipment.

Step S10 includes:

step S11a, detecting an eye movement trajectory of the eye, and acquiring an image of the eye when the eye movement trajectory is in the browsing state.

Step S12a, obtaining the fixation frequency according to the eye image; wherein the fixation frequency is the frequency at which the eyeball is in a static state.

The fixation frequency is independent of the eye position data; therefore, while the eye images are being acquired, the fixation frequency is not affected by relative movement between the user's head and the screen 1.

Therefore, a single camera is sufficient to acquire the images of the eyes, and the fixation frequency can be obtained from these images.

Of course, before the images of the eyes are acquired, the correspondence between the eye image and the viewpoint position may be calibrated and verified, as in step S11b. After the eye image is obtained, the viewpoint position is looked up from this correspondence, and by analyzing the viewpoint positions the frequency at which the eyeball is in a static state, i.e., the fixation frequency, can be determined. When the viewpoint position stays within a certain range, the eyeball is judged to be in a static state; the fixation frequency is calculated by counting the number of times the eyeball is in a static state within a period of time.
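As a rough illustration of this counting step, the sketch below detects eyeball-stationary episodes from a sequence of timestamped viewpoint positions (such as those produced by the calibrated correspondence) and divides their count by the elapsed time. The radius and minimum-duration values, and the function itself, are hypothetical; the patent does not prescribe a specific detection algorithm.

```python
import numpy as np

def fixation_frequency(viewpoints, timestamps, radius_cm=1.0, min_duration_s=0.1):
    """Fixation frequency (Hz): number of eyeball-stationary episodes per unit time.

    viewpoints     : (N, 2) viewpoint positions on the screen, e.g. in cm.
    timestamps     : (N,) sample times in seconds.
    radius_cm      : samples within this radius of the episode centroid count as "still".
    min_duration_s : stationary episodes shorter than this are ignored as noise.
    """
    viewpoints = np.asarray(viewpoints, float)
    timestamps = np.asarray(timestamps, float)
    fixations, start = 0, 0
    for i in range(1, len(viewpoints) + 1):
        # An episode ends when the next sample leaves the radius around the episode
        # centroid, or when the recording ends.
        if i == len(viewpoints) or np.linalg.norm(
                viewpoints[i] - viewpoints[start:i].mean(axis=0)) > radius_cm:
            if timestamps[i - 1] - timestamps[start] >= min_duration_s:
                fixations += 1  # one eyeball-stationary state (fixation) counted
            start = i
    total_time = timestamps[-1] - timestamps[0]
    return fixations / total_time if total_time > 0 else 0.0
```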

There are two ways to obtain the saccade amplitude. The first is as follows:

step S10 includes:

step S11b, calibrating and verifying the eye image and viewpoint position correspondence.

Specifically, the correspondence between the eye image and the viewpoint position may be calibrated and verified before the asthenopia measurement is performed (or, of course, this step may be omitted). As shown in fig. 2, the eye image here may be captured by video recording with the camera 2, so the acquired eye images form a series of images along the time axis, and the viewpoint position is the position of the eye's viewpoint along the same time axis. When the user keeps the head still relative to the screen 1, different viewpoint positions correspond to different eye images. The eye image of fig. 3A corresponds to the image of the (right) eye taken by the camera 2 of fig. 2 when the user gazes at fixation point A on the screen 1; similarly, the eye image of fig. 3B corresponds to the right-eye image taken when the user gazes at point B on the screen 1.

The camera 2 may be a commercially available high-definition (color) camera, such as the front camera of a mobile phone, the built-in camera of a notebook, or an external camera of a desktop computer; it may also be an infrared camera (considering that lighting in the shooting environment may be poor) or a black-and-white camera (which does not affect the effect of the invention), and it may be integrated in the device or arranged externally. When the invention is used in a reading scene without a VDT, the camera 2 can be separated from the screen 1 and arranged at a fixed position in front of the user's eyes, such as on a desk or a desk lamp.

Step S11b specifically includes:

step S11b1, using the calibration model to establish a correspondence between the eye image and the viewpoint position (hereinafter referred to as "positional relationship model").

Specifically, a calibration operation is performed: when the user is in the normal state of browsing the screen 1, the relative positions of the screen 1 and the eyes are calibrated. As shown in fig. 2, a 1st fixation point (e.g., fixation point A) is presented on the screen 1; after 'staring at' this point, the user actively issues an 'aimed' signal (indicating that the eyes have 'aimed at' this position), the signal including but not limited to pressing a key, touching the screen 1, blinking or issuing a voice command; then the 2nd fixation point is automatically presented at the next position on the screen 1 while the previous fixation point (fixation point A) disappears, and the user likewise stares at this point and issues an 'aimed' signal; the 2nd point is then erased and the 3rd fixation point is presented at another position, and so on until the last fixation point is recognized by the program as aimed at, at which point the process ends.

This step requires the user to keep the head as still as possible relative to the screen 1 during the entire asthenopia measurement to ensure that the measurement results are as accurate as possible. The camera 2 of the screen 1 records the real-time eye images corresponding to the different points as the user aims at them, and a positional relation model is constructed from them by analysis.

As shown in fig. 4 (fig. 4 includes figs. 4A, 4B and 4C), a classical calibration scheme such as 4-point, 5-point or 9-point calibration may be used. The presentation order of the fixation points is random, but every fixation point must be traversed once, and it is recommended that the 1st fixation point be presented at the center of the screen 1. The fixation point shape is recommended to be, but not limited to, a circle, concentric circles, a cross, or a combination of a circle and a cross; the recommended size of the fixation point is adjusted according to the device type (e.g., television, desktop computer, tablet or mobile phone), the resolution and size of the screen 1, the distance between the eyes and the screen 1 and other conditions, so that the fixation point can be seen clearly while being as small as possible. This step may guide the user by voice or on-screen text, and this guidance can be switched on or off in the relevant settings.
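The patent does not fix a particular form of the calibration model. A common choice, sketched below under that assumption, is a second-order polynomial regression from a feature extracted from the eye image (here the pupil-centre coordinates, assumed to be available) to the known screen positions of the calibration points. A 9-point calibration supplies enough samples for the six coefficients per axis; with 4 or 5 points a lower-order (affine) model would be fitted instead. The function names are invented for the example.

```python
import numpy as np

def fit_positional_relation_model(pupil_xy, screen_xy):
    """Fit a 2nd-order polynomial mapping pupil-centre coordinates (taken from the
    eye images recorded while the user aimed at each calibration point) to the known
    viewpoint positions of those points on the screen.

    pupil_xy  : (N, 2) pupil centres, one per calibration point (N >= 6).
    screen_xy : (N, 2) on-screen positions of the calibration points.
    Returns a (6, 2) coefficient matrix usable by predict_viewpoint().
    """
    x, y = np.asarray(pupil_xy, float).T
    # Monomial design matrix: [1, x, y, xy, x^2, y^2]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_xy, float), rcond=None)
    return coeffs

def predict_viewpoint(coeffs, pupil_xy):
    """Map one pupil centre to an estimated viewpoint position on the screen."""
    x, y = float(pupil_xy[0]), float(pupil_xy[1])
    return np.array([1.0, x, y, x * y, x ** 2, y ** 2]) @ coeffs
```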

Step S11b2, verifying the corresponding relation between the image of the eye and the viewpoint position by adopting a verification model, and updating the corresponding relation between the image of the eye and the viewpoint position when the gaze error is greater than a preset threshold value.

Step S11b3, when the gaze error is less than or equal to a preset threshold, the verification is completed.

A verification calibration operation is performed: the 'aiming' operation is repeated several times in a basically similar way to verify whether the accuracy of the previously calibrated model meets the requirement. The program first presents the 1st fixation point on the screen 1, and the user actively issues the 'aimed' signal after 'staring at' this point; then the 2nd fixation point is automatically presented at the next position on the screen 1, and the user 'aims' again without issuing any signal; the program automatically recognizes the point as aimed at once it finds that the user's eyes are relatively still, then erases this point and presents the next fixation point.

During this process, each time the user aims and confirms, the system calculates the user's actual viewpoint position according to the positional relation model, computes its distance from the specified fixation point presented on the screen 1, and records this distance as the gaze error. When the error is smaller than a certain value (namely the preset threshold), the point fixation task is completed successfully (that is, the corresponding relation between the eye image and the viewpoint position, i.e., the positional relation model, is obtained); otherwise, the user is guided to restart the verification, or even the calibration step, and the model is updated. The value of the preset threshold is related to parameters such as the device type (e.g., a television screen, a desktop computer screen, a tablet or a mobile phone screen), the resolution and size of the screen, and the distance between the eyes and the screen 1, and is therefore not a fixed value.
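A minimal sketch of this check, reusing the hypothetical predict_viewpoint() helper from the calibration sketch above: the estimated viewpoint for each verification point is compared with the point actually presented, and the largest distance is taken as the gaze error. The threshold value is device-dependent, as noted above.

```python
import numpy as np

def verify_positional_relation_model(coeffs, pupil_samples, target_points, threshold):
    """Return (passed, gaze_error): whether every verification point was fixated
    within the preset threshold according to the fitted positional relation model.

    pupil_samples : pupil centres recorded while the user aimed at each verification point.
    target_points : the true on-screen positions of those verification points.
    threshold     : preset gaze-error threshold, in the same units as the screen positions.
    """
    errors = [np.linalg.norm(predict_viewpoint(coeffs, p) - np.asarray(t, float))
              for p, t in zip(pupil_samples, target_points)]
    gaze_error = max(errors)
    return gaze_error <= threshold, gaze_error

# passed, err = verify_positional_relation_model(coeffs, samples, targets, threshold=1.5)
# if not passed:
#     pass  # guide the user to repeat verification, or even recalibrate (update the model)
```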

It should be noted that, in steps S11b2 and S11b3, the number of verification points used, where the first verification point is recommended to appear, the shape and size of the verification points, how the verification operation is guided, and similar settings or operations are preferably similar to those of step S11b1.

Measurement should be carried out immediately after verification and calibration, because the verification and calibration in effect capture the current distance between the human eye and the viewed object. If the distance between the eyes and the viewed object changes, verification and calibration must be performed again to re-obtain this distance, so that the saccade amplitude data remain accurate.

And step S12b, detecting the eye movement track of the eyes, and acquiring the images of the eyes when the eye movement track is in a browsing state.

And step S13b, determining the saccade amplitude according to the corresponding relation between the eye image and the viewpoint position and the eye image.

Once the distance between the human eye and the viewed object is determined, the saccade amplitude depends on the eye images: the viewpoint position can be obtained from the eye image through the calibrated and verified correspondence between eye image and viewpoint position, and the saccade amplitude is then obtained from the successive viewpoint positions.

In the second way, step S10 includes:

and step S11c, detecting the eye movement track of the eyes, and acquiring the images of the eyes by adopting at least 2 cameras when the eye movement track is in a browsing state.

When the camera 2 and the software detect that the user is browsing (for example, the software finds that the user is currently using a PDF reader or an office document, and the camera 2 finds that the user's eye movement track conforms to a line-by-line browsing pattern, so the eye movement track is judged to be in a browsing state), the system matches the most suitable one of several built-in grid templates. As shown in figs. 5 and 7, the largest square represents the screen 1, the black dot represents the camera 2, the asterisks represent the text the user is browsing (not provided by the measurement system; for example, the user is browsing a PDF paper), and the grid 3 represents the system's best matching template, chosen so that each small cell contains a small and roughly equal number of characters. The measurement system records the eye movement data of the user browsing the text that falls into the grid 3, without any interference or guidance to the user, and analyzes the current asthenopia state. In this way visual fatigue can be monitored continuously without disturbing the user's ongoing study or work, and it is measured many times, unnoticed, at many moments.

Step S12c, obtaining eye position data from the eye images and determining the saccade amplitude.

Since relative movement of the user's head with respect to the screen 1 affects the saccade amplitude, it is necessary to acquire eye position data and determine the saccade amplitude based on the latest eye position data; that is, once the user's head moves relative to the screen 1, the eyeball saccade angle is determined based on the eye position after the movement. The eye position data here refers to the distance between the center of the saccade interval on the screen 1 and the eyes.

If calibration and verification are omitted, the invention requires at least 2 cameras 2 to collect data (hereinafter a "binocular camera"). When at least 2 cameras 2 are used, they are placed at different positions of the display (on the frame or under the screen 1); 1 binocular camera (essentially 2 cameras) or 1 fast-moving (swinging or sliding) camera can also be used. The binocular camera imitates the spatial positioning ability of a pair of human eyes towards viewed objects and performs binocular visual positioning of the user's eyes. The advantage is that visual fatigue can be monitored 'insensibly': the user does not need to perform calibration or any other operation that would make them aware that their visual fatigue is being measured, and can stay fully absorbed in their work and study tasks while visual fatigue is measured many times, or even monitored continuously for long periods, without noticing.
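The patent does not give the binocular positioning formulas. The sketch below uses the standard stereo relation Z = f·B / disparity as an assumed stand-in: the same eye feature (e.g. the pupil centre) is detected in both camera images, and the two cameras are taken to be calibrated and rectified with a known baseline and focal length.

```python
def eye_distance_from_stereo(u_left, u_right, focal_px, baseline_cm):
    """Estimate the eye-to-camera distance (cm) from a rectified binocular camera pair.

    u_left, u_right : horizontal pixel coordinate of the same eye feature (e.g. the
                      pupil centre) in the left and right images.
    focal_px        : focal length of the cameras in pixels (assumed equal).
    baseline_cm     : distance between the two camera centres, in cm.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("feature must be matched correctly (disparity must be positive)")
    return focal_px * baseline_cm / disparity  # classic stereo relation Z = f * B / d

# e.g. a 6 cm baseline, 1000 px focal length and 150 px disparity place the eye ~40 cm away
```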

In addition, in order to measure the asthenopia state of a single eye more accurately, the invention also allows the user to cover one eye with special glasses, a clip-on blocker or an eye patch and then perform the visual task, so that the asthenopia measurement of the other eye is more accurate. In order to evaluate asthenopia more accurately, it is also recommended in step S200 to perform a classical subjective scale or another objective or task-based measurement at the same time, as benchmark data of the fatigue degree (hereinafter 'benchmark data') for personalized data learning and expression. This is recommended for the first several measurements after the user starts using the invention, or after a long period of non-use.

As shown in fig. 6, the asterisks indicate the browsed text; a circle (○) indicates that the eye is in a fixation state (i.e., the eyeball is in a static state), and the fixation frequency can be obtained from the number of circles; the line between two adjacent circles indicates a rapid saccade of the eye, and the saccade amplitude can be obtained from the line together with the eye position data; the size of a circle indicates the corresponding fixation duration. Together, the circles and the lines between them form the eye movement track of the eye browsing the text.

And step S30, obtaining the fatigue degree of eyes according to the eye movement parameters.

Specifically, benchmark data of the fatigue degree are obtained, and the fatigue degree of the eyes is obtained according to the benchmark data and the eye movement parameters. The fatigue degree of the eyes can be obtained by comparing the eye movement parameters with the benchmark data. The benchmark data can be data from other visual fatigue measurement means (such as a classical subjective scale or other objective measurement methods), or sample data formed from data measured by the user many times; visual tasks such as reading, searching and tracking tasks can be preset, and the sample data can be obtained by completing these visual tasks.

A visual task is presented on the screen 1. It may be text reading (as shown in fig. 5); a random number table in which the user searches for a specific number; a motion picture or video requiring the user's eyes to track a specific moving object; or something like a whack-a-mole game to test the user's eye reaction ability, etc. The camera 2 records video while the user performs the task.

The visual task executed in this step may be a single type on a single page, for example only a text reading task, in which the text content presented on the current screen 1 is read straight through and is not refreshed or paged; it may also be multiple types over multiple pages, for example text reading and random number search performed in random order and presented over several pages. How the test is performed is related to the set test time, the device type, the size of the screen 1 and other factors. When the visual task spans multiple pages, the system intelligently recognizes, according to the positional relation model, that the user has finished the content of a page, and turns the page automatically.

The content of the text browsing task can be derived from browsing preferences set by the user, such as news reports or short learning-oriented texts, or it can be extracted from the content of the window the user is currently browsing, such as the PDF paper the user has just been reading. To increase interest, the search and tracking tasks may be presented as games, such as finding which panda is different in a matrix of pandas, or tracking where a fast-moving tank picks up treasure in a tank-battle game.

The eye images acquired during the visual task are preprocessed: the video segments between the start and the end of the task on each page (screen 1) are extracted and the elapsed time is calculated; computer-vision algorithms are used to compute primary data such as the number of fixations (i.e., eyeball-stationary states) and the amplitude (SA) of each saccade (the corresponding eyeball saccade amplitude is derived from the on-screen saccade distance according to the 'positional relation model'); then the fixation frequency (FF) (count/elapsed time, in Hz) and the average (eyeball) saccade amplitude (the mean of the saccade amplitudes on each page, in rad) are calculated for each page and for all pages, as sketched below. The fixation frequency and saccade amplitude along the eye movement track are shown in fig. 6.
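A compact illustration of this aggregation step; the extraction of the per-page primary data from the video is assumed to have been done already, and the dictionary keys are invented for the example.

```python
def main_parameters(pages):
    """Combine per-page primary data into the main parameter vector MP = (FF, SA).

    pages : one dict per page, each with
            'fixations' - number of eyeball-stationary states detected on the page,
            'saccades'  - list of eyeball saccade amplitudes on the page (rad),
            'elapsed_s' - time spent on the page, in seconds.
    """
    total_fixations = sum(p['fixations'] for p in pages)
    total_time = sum(p['elapsed_s'] for p in pages)
    all_saccades = [a for p in pages for a in p['saccades']]
    ff = total_fixations / total_time           # fixation frequency, Hz
    sa = sum(all_saccades) / len(all_saccades)  # average saccade amplitude, rad
    return ff, sa
```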

In the visual tasks, an increase of the fixation frequency and a decrease of the saccade amplitude accurately reflect a deepening of the visual fatigue (VF) degree, and conversely their reversal reflects a decrease of the visual fatigue degree; see fig. 8 for details (these two parameters are the 'main parameters' of the invention). In addition, an increased blink frequency, longer blink duration, reduced blink amplitude, longer fixation duration, reduced saccade rate and so on, which other researchers have shown to correlate with deepening asthenopia (the opposite changes indicating relief), can also be used to assist in judging the asthenopia state. It should be noted, however, that the blink parameters are less reliable in the VDT asthenopia scenario and their measurement takes a long time. The above is the basic principle by which the invention measures the degree of visual fatigue.

In practical application, the asthenopia state has inherent individual differences such as age, nutrition and psychological state (these factors together form the user's 'feature portrait'), as well as external individual differences such as environment and task. Therefore the asthenopia state and its changes cannot be determined simply from the current measured values of the above parameters and their before-and-after differences. The invention therefore proceeds as follows:

initially, data provided in the "sample database" is employed. For example, assuming that the user U1 measured the main parameter vector MP1 at time T1 ═ FF1, SA1 (i.e., a vector combining gaze frequency and saccade magnitude), the data of the "feature portrait" most similar user in the database is searched, and the MP1 value corresponds to the degree of visual fatigue VF 1. At a time T2 immediately following, the user again measures the primary parameter vector P2, again looking up the database for the degree of visual fatigue VF2 for P2 and the amount of change in visual fatigue Δ VF for P2-P1. Note that the initial data reflects the common situation of users of the same type (the same type of group is in the same type of environment), and is not accurate enough.

Then, the data of other visual fatigue measurement means (such as a classical subjective scale or other objective measurement methods) at times T1 and T2 are combined to give a relatively personalized judgment of the user's visual fatigue state (also mentioned in step 3).

Each measurement of the user is entered into user U1's 'personal database', and the next measurement is combined with the benchmark method to obtain new data. As the number of visual fatigue measurements of user U1 increases, the visual fatigue state and change data given by combining the 'benchmarks' become more accurate. After several iterations, the user data reach a relatively stable, personalized and accurate state (i.e., the visual fatigue state VF corresponding to the main parameters MP measured by the user in a specific scene is relatively stable), and later measurements may omit the benchmark, shortening measurement time and improving user experience. Meanwhile, the user's stabilized personalized data (MP and the corresponding VF) are uploaded to the sample database, expanding the sample data and improving the accuracy of the database.

The initial sample database can be obtained by measuring, in the laboratory, the relationship between the asthenopia states and the main parameters of users with various typical features (such as different ages, mental states, nutritional states, temperature/humidity/lighting environments, various visual tasks, etc.). The database is then continuously and iteratively optimized by adding more users and more measurements, finally forming a large sample database that is strongly personalized, highly stable and highly accurate.

According to the fatigue degree of the eyes, the user can be rated. The rating result is used to show the user's visual fatigue state in real time and to remind the user to rest; a visual fatigue state curve is generated periodically, and suggestions on preventing, measuring and relieving visual fatigue are given to the user according to the results.

By using several cameras to capture the eye movement response of the human eyes while they execute a specific visual task on the screen 1 at different moments, the invention mainly computes the before-and-after changes of the two key parameters, saccade amplitude and fixation frequency, and accurately rates, displays and gives reminders about the current user's visual fatigue state. The invention has the following advantages:

1. The visual fatigue state of VDT scenes and its changes are measured using the fixation frequency and saccade amplitude obtained from eye movement detection as the main parameters. Compared with traditional methods, the measurement result is accurate and reliable, the method is simple, non-contact and quick, it has little influence on the user's current working or learning state ('insensible' measurement), and the measuring equipment is simple, portable and low-cost.

2. In a VDT visual fatigue scene, the positions of the eyes are located in real time by several cameras and the eye movement response during execution of the visual task is captured, so that the user's visual fatigue is measured quickly, objectively and truly 'insensibly'. The method can be conveniently applied to existing VDT equipment for continuous visual fatigue monitoring, with zero interference to the user's normal work and study.

3. The method of taking the data of other subjective, objective or task-based measurement means, such as a subjective scale, as a benchmark and obtaining the user's personalized relationship between eye movement parameters and visual fatigue by successive iteration also has a certain degree of novelty and practical value.

Fig. 8 shows the average result data of a group of 20 college-student volunteers who participated in the asthenopia experiment (performing a random-number search task). A visual fatigue stimulation program was used so that the volunteers' degree of visual fatigue gradually increased over time, and the subjective visual fatigue scores and eye movement parameters were measured in sequence at three moments T1, T2 and T3.

In fig. 8, the abscissa label "T3-T1" denotes the difference between the parameters measured at time T3 and at time T1; similarly, "T2-T1" denotes the parameters at time T2 minus those at time T1; the ordinate is the before-and-after difference of each parameter. The horizontally hatched bars represent the differences in the subjective scores; their positive values indicate that visual fatigue deepens over time, i.e., the visual fatigue degree satisfies T1 < T2 < T3. The vertically hatched bars represent the before-and-after differences in fixation frequency (in Hz), and the solid bars represent the before-and-after differences in saccade amplitude (in rad). It can be seen from fig. 8 that as the degree of asthenopia deepens, the fixation frequency becomes higher and higher and the saccade amplitude becomes lower and lower; the changes of these two parameters reflect the change of the asthenopia state well.

Based on the visual fatigue measuring method according to any one of the embodiments, the present invention further provides a preferred embodiment of the visual fatigue measuring system:

as shown in fig. 9, the visual fatigue measurement system according to the embodiment of the present invention includes: a processor 10, and a memory 20 connected to said processor 10,

the memory 20 stores an asthenopia measuring program which, when executed by the processor 10, implements the steps of:

acquiring eye movement parameters of the eyes; wherein the eye movement parameters include one or more of fixation frequency and saccade amplitude;

the degree of eye fatigue is obtained from the eye movement parameters, as described above.

When the visual fatigue measuring program is executed by the processor 10, the following steps are also realized:

detecting an eye movement track of the eyes, and acquiring an image of the eyes when the eye movement track is in a browsing state;

obtaining a fixation frequency according to the image of the eye; the fixation frequency is a frequency at which the eyeball is in a static state, and is specifically as described above.

When the visual fatigue measuring program is executed by the processor 10, the following steps are also realized:

calibrating and verifying the corresponding relation between the image of the eye and the viewpoint position;

detecting an eye movement track of the eyes, and acquiring an image of the eyes when the eye movement track is in a browsing state;

the saccade magnitude is determined based on the correspondence between the eye image and the viewpoint position and the eye image, as described above.

When the visual fatigue measuring program is executed by the processor 10, the following steps are also realized:

establishing a corresponding relation between the image of the eye and the viewpoint position by adopting a calibration model;

verifying the corresponding relation between the image of the eye and the viewpoint position by adopting a verification model, and updating the corresponding relation between the image of the eye and the viewpoint position when the gaze error is greater than a preset threshold value;

when the gaze error is less than or equal to the preset threshold, the verification is completed, as described above.

When the visual fatigue measuring program is executed by the processor 10, the following steps are also realized: detecting the eye movement track of the eyes, and acquiring images of the eyes by adopting at least 2 cameras when the eye movement track is in a browsing state;

the eye position data is derived from the image of the eye and the saccade magnitude is determined as described above.

The saccade amplitude is an eyeball saccade angle value.

when the visual fatigue measuring program is executed by the processor 10, the following steps are also realized:

and acquiring the benchmark data of the fatigue degree, and acquiring the fatigue degree of the eyes according to the benchmark data and the eye movement parameters.

Based on the visual fatigue measuring method according to any one of the embodiments, the present invention further provides a preferred embodiment of a storage medium:

the storage medium according to an embodiment of the present invention stores a visual fatigue measurement program, and when the visual fatigue measurement program is executed, the steps of the visual fatigue measurement method according to any one of the above embodiments are implemented.

Based on the storage medium, the invention further provides a preferred embodiment of the electronic device:

the electronic device according to an embodiment of the present invention includes the storage medium as described above, which is specifically described above.

In summary, the present invention provides a visual fatigue measurement method and system, a storage medium, and an electronic device, wherein the method includes the steps of: acquiring eye movement parameters of the eyes, wherein the eye movement parameters include one or more of fixation frequency and saccade amplitude; and obtaining the fatigue degree of the eyes from the eye movement parameters. Because the fixation frequency and/or the saccade amplitude are adopted as the eye movement parameters to obtain the fatigue degree, the measurement result is accurate and reliable, and the fatigue degree of the user can be accurately determined. The method also has the advantages of being non-contact, simple and quick, having little influence on the user's current working or learning state (even 'insensible' measurement can be realized), and requiring only simple, portable and low-cost measuring equipment.

It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.
