Spatial perception disorder testing and training system based on a virtual reality visual and auditory pathway
Reading note: This technology, a spatial perception disorder testing and training system based on a virtual reality visual and auditory pathway, was created by Qin Lu, Wang Suogang, Li Weikuan, Liu Luoxi and Zhang Chongyang on 2019-09-30. Abstract: The invention provides a virtual reality visual and auditory pathway-based spatial perception disorder testing and training system, comprising a power supply module, a main control module, and a spatial motion data processing module, a spatial perception data analysis module, a visual and auditory task presentation module, a reference database module and a report generation module connected to the main control module; the spatial motion data processing module is connected with a hand spatial position acquisition module and a foot spatial position acquisition module. Through visual and auditory instructions issued by the system, the hand/foot spatial position acquisition modules measure and collect the operating parameters of the hand and foot in response to the spatial perception tasks, thereby testing the accuracy and fineness of the user's hand and foot movements.
1. A virtual reality visual and auditory pathway-based spatial perception disorder training system is characterized by comprising a power supply module (1), a main control module (2), and a spatial motion data processing module (5), a spatial perception data analysis module (9), a visual and auditory task presenting module (6), a reference database module (7) and a report generation module (8) which are connected with the main control module (2), wherein the spatial motion data processing module (5) is connected with a hand spatial position acquisition module (3) and a foot spatial position acquisition module (4),
the hand space position acquisition module (3) is used for acquiring motion parameters of the hand;
the foot space position acquisition module (4) is used for acquiring the motion parameters of the foot;
the visual and auditory task presentation module (6) is used for presenting immersive visual information and/or auditory information according to the command of the main control module (2);
a reference database module (7) for storing reference data;
the spatial perception data analysis module (9) is used for analyzing the test result according to the corresponding reference data in the reference database module (7);
and the report generating module (8) is used for generating a corresponding test report according to the analysis result of the spatial perception data analysis module (9).
2. The virtual reality visual and auditory pathway-based spatial perception disorder training system according to claim 1, comprising the following test steps:
s1, wearing a visual and auditory task presentation module (6) on a user, and respectively wearing a hand space position acquisition module (3) and a foot space position acquisition module (4) on the hand and the foot of the user;
s2, providing a virtual reality scene, and performing at least one visual hand/foot space motion test and auditory hand/foot space motion test on a user in the virtual reality scene;
and S3, analyzing the test result of the user by a spatial perception data analysis module (9).
3. The virtual reality visual and auditory pathway-based spatial perception disorder training system according to claim 2, wherein in step S2, the visual hand/foot spatial motion test is performed on the user through the following virtual reality scenario:
prompting the user by text to control a target object with the corresponding hand/foot to complete the spatial movement of a designated path, and recording the completed spatial movement path length and the spatial motion control completion time;
the auditory hand/foot spatial motion test is performed on the user through the following virtual reality scenarios:
and prompting the user by voice to control a target object with the corresponding hand/foot to complete the spatial movement specified by the voice, and recording the actual movement path length and the instructed path length.
4. The virtual reality visual and auditory pathway-based spatial perception disorder training system according to claim 3, wherein in step S3, the spatial perception data analysis module (9) analyzes the test result by:
s31, obtaining a visual space motion operation approximation degree through a visual space motion operation approximation degree calculation method, and obtaining an auditory space motion operation approximation degree through an auditory space motion operation approximation degree calculation method;
s32, obtaining a spatial perception comprehensive quotient from the visual spatial motion operation approximation degree and the auditory spatial motion operation approximation degree through a spatial perception comprehensive quotient calculation method;
s33, obtaining a standardized quotient value according to the spatial perception comprehensive quotient value and reference data;
and S34, judging the spatial perception disorder test result of the user according to the standardized quotient.
5. The virtual reality visual and auditory pathway-based spatial perception disorder training system according to claim 4, wherein in step S31, the visual spatial motion operation approximation degree calculation method comprises the following formula:
A = (S1/T1 + S2/T2 + … + Sn/Tn)/n, where
A represents the visual spatial motion operation approximation degree;
S1, S2, …, Sn represent the completed spatial movement path length of each test;
T1, T2, …, Tn represent the spatial motion control completion time of each test;
n represents the number of tests;
the auditory space motion operation approximation calculation method comprises the following formula:
B = (C1/D1 + C2/D2 + … + Cn/Dn)/n, where
B represents the auditory spatial motion operation approximation degree;
C1, C2, …, Cn represent the actual movement path length of each test;
D1, D2, …, Dn represent the instructed path length of each test.
6. The virtual reality visual and auditory pathway-based spatial perception disorder training system according to claim 5, wherein in step S32, the spatial perception comprehensive quotient calculation method comprises the following formula:
E = Ax + By, where
E represents the user's spatial perception comprehensive quotient;
x represents the weight of the visual spatial motion operation approximation degree;
y represents the weight of the auditory spatial motion operation approximation degree;
in step S33, the standardized quotient is obtained by the following formula:
F = 100 + (E - G), where
F represents the user's standardized quotient;
and G represents the reference spatial perception comprehensive quotient in the reference data.
7. The virtual reality visual and auditory pathway-based spatial perception disorder training system according to claim 6, further comprising a spatial perception training scheme generation module (11) and a spatial perception training process control module (12) connected to the main control module (2), wherein,
the spatial perception training scheme generation module (11) is used for providing training tasks of corresponding grades for the user according to the analysis result of the spatial perception data analysis module (9);
and the spatial perception training process control module (12) is used for storing the user's training scheme, recording its progress, and recording the history of completed training sessions.
8. The virtual reality visual and auditory pathway-based spatial perception disorder training system according to claim 7, further comprising the following training steps:
A. providing training tasks of corresponding grades for a user according to the analysis result of the spatial perception data analysis module (9);
B. step S2 is performed in a training manner under the corresponding training task.
9. The virtual reality visual and auditory pathway-based spatial perception disorder training system according to claim 8, wherein the foot spatial position acquisition module (4) and the hand spatial position acquisition module (3) each comprise a wearable ring and a mounting box fixed on the wearable ring, the mounting box is provided with a six-axis/nine-axis sensor, and the six-axis/nine-axis sensor is connected to the main control module (2) in a wired and/or wireless manner.
10. The virtual reality visual-auditory pathway-based spatial perception disorder training system according to claim 9, wherein the visual-auditory task presentation module (6) comprises a virtual reality headset module (61) and a high-fidelity headset (62) respectively connected to the main control module (2), and the high-fidelity headset (62) is integrated on the virtual reality headset module (61).
Technical Field
The invention belongs to the technical field of attention training, and particularly relates to a spatial perception disorder training system based on a virtual reality visual-auditory pathway.
Background
Spatial perception includes distance perception, orientation perception and other aspects; it is a manifestation of the brain's perceptual function, and spatial perception disorder is one form of brain dysfunction. Adolescents form shape perception through eye movement, and form perception of size and distance through linear perspective, aerial perspective, light and shade, motion parallax and the like. Without adequate spatial perception, the brain cannot respond to spatial information sensitively. The disorder has innate factors, but is mainly caused by environmental and human factors. With the development of mobile networks and mobile phone technology in China, many children now enjoy playing with mobile phones and tablets and watching television. The electronic screen is colorful and attractive, and a child can remain motionless in front of it for one or two hours; over time this hinders the development of the child's visual spatial perception and leaves it weak. In addition, China is developing rapidly and competition for talent is fierce; many parents require their young children to learn competitive knowledge such as Tang poetry, painting and mathematics, during which the child's body remains still. As a result, a considerable proportion of children aged six to twelve exhibit problems such as getting lost, frequently writing character radicals incorrectly when writing and reading, reversing numbers, or missing characters. Such problems are likely to lead to poor learning performance: lack of concentration on learning tasks, low efficiency in listening to lectures, low achievement, carelessness, and dragging out homework; over time such children increasingly lack confidence and tend to rely on others.
Therefore, testing for spatial perception disorder helps parents and teachers understand a child's spatial perception level, so that intervention and training can be carried out for children with the disorder, or an adapted education and teaching method can be adopted; this helps adults care better for children and their development.
Existing techniques related to spatial perception disorder are not uncommon, and mainly target training and testing of sensory integration disorder. For example, the utility model patent [application No. CN202324705U] describes a sensory integration training room for children with sensory integration disorder, comprising a series of physical training devices for sensory integration training. However, it requires a large training field, the equipment must be maintained regularly, and a trainer must monitor in real time to ensure the children's safety. The utility model patent [application No. CN203480724U] describes a baton dedicated to children with sensory integration disorder, used for training the large and small muscle groups, the sense of balance and vestibular sensation, and for command training. It can unify sensation and movement to a certain extent, but the training effect cannot be evaluated objectively and quantitatively. Patents CN1506128A and CN1506129A disclose children's sensory integration training devices designed to prevent and treat sensory integration disorder. These can enhance the perception of the child's sensory channels to a certain extent, but the training effect can still only be partially evaluated objectively and quantitatively; moreover, the equipment is mechanical, contains elevated rotating parts, and poses certain safety hazards to children if it is not carefully maintained or is used without a trainer's guidance and monitoring.
Disclosure of Invention
The invention aims to solve the problems and provides a spatial perception disorder training system based on a virtual reality visual-auditory pathway;
in order to achieve the purpose, the invention adopts the following technical scheme:
a virtual reality visual and auditory pathway-based spatial perception disorder training system comprises a power supply module, a main control module, and a spatial motion data processing module, a spatial perception data analysis module, a visual and auditory task presentation module, a reference database module and a report generation module which are connected with the main control module, wherein the spatial motion data processing module is connected with a hand spatial position acquisition module and a foot spatial position acquisition module,
the hand space position acquisition module is used for acquiring motion parameters of the hand;
the foot space position acquisition module is used for acquiring the motion parameters of the foot;
the visual and auditory task presentation module is used for presenting the immersive visual information and/or auditory information according to the command of the main control module;
the reference database module is used for storing reference data;
the spatial perception data analysis module is used for analyzing the test result according to the corresponding reference data in the reference database module;
and the report generating module is used for generating a corresponding test report according to the analysis result of the spatial perception data analysis module.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, the following test steps are included:
s1, wearing a visual and auditory task presentation module on a user, and respectively wearing a hand space position acquisition module and a foot space position acquisition module on the hand and the foot of the user;
s2, providing a virtual reality scene, and performing at least one visual hand/foot space motion test and auditory hand/foot space motion test on a user in the virtual reality scene;
and S3, analyzing the test result of the user by the spatial perception data analysis module.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, in step S2, the visual hand/foot spatial motion test is performed on the user through the following virtual reality scenario:
prompting the user by text to control a target object with the corresponding hand/foot to complete the spatial movement of a designated path, and recording the completed spatial movement path length and the spatial motion control completion time.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, in step S2, the auditory hand/foot spatial motion test is performed on the user through the following virtual reality scenario:
prompting the user by voice to control a target object with the corresponding hand/foot to complete the spatial movement specified by the voice, and recording the actual movement path length and the instructed path length.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, in step S3, the spatial perception data analysis module analyzes the test result by the following method:
s31, obtaining a visual space motion operation approximation degree through a visual space motion operation approximation degree calculation method, and obtaining an auditory space motion operation approximation degree through an auditory space motion operation approximation degree calculation method;
s32, obtaining a spatial perception comprehensive quotient from the visual spatial motion operation approximation degree and the auditory spatial motion operation approximation degree through a spatial perception comprehensive quotient calculation method;
s33, obtaining a standardized quotient value according to the spatial perception comprehensive quotient value and reference data;
and S34, judging the spatial perception disorder test result of the user according to the standardized quotient.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, in step S31, the visual spatial motion operation approximation degree calculation method comprises the following formula:
A = (S1/T1 + S2/T2 + … + Sn/Tn)/n, where
A represents the visual spatial motion operation approximation degree;
S1, S2, …, Sn represent the completed spatial movement path length of each test;
T1, T2, …, Tn represent the spatial motion control completion time of each test;
and n represents the number of tests.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, in step S31, the auditory spatial motion operation approximation degree calculation method comprises the following formula:
B = (C1/D1 + C2/D2 + … + Cn/Dn)/n, where
B represents the auditory spatial motion operation approximation degree;
C1, C2, …, Cn represent the actual movement path length of each test;
D1, D2, …, Dn represent the instructed path length of each test;
and n represents the number of tests.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, in step S32, the spatial perception comprehensive quotient calculation method comprises the following formula:
E = Ax + By, where
E represents the user's spatial perception comprehensive quotient;
x represents the weight of the visual spatial motion operation approximation degree;
y represents the weight of the auditory spatial motion operation approximation degree;
in step S33, the standardized quotient is obtained by the following formula:
F = 100 + (E - G), where
F represents the user's standardized quotient;
and G represents the reference spatial perception comprehensive quotient in the reference data.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, the system further comprises a spatial perception training scheme generation module and a spatial perception training process control module connected to the main control module, wherein,
the spatial perception training scheme generating module is used for providing training tasks of corresponding grades for a user according to the analysis result of the spatial perception data analyzing module;
and the spatial perception training process control module is used for storing the user's training scheme, recording its progress, and recording the history of completed training sessions.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, the system further comprises the following training steps:
A. providing training tasks of corresponding grades for a user according to the analysis result of the spatial perception data analysis module;
B. step S2 is performed in a training manner under the corresponding training task.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, the foot spatial position acquisition module and the hand spatial position acquisition module each comprise a wearable ring and a mounting box fixed on the wearable ring; the mounting box is provided with a six-axis/nine-axis sensor, and the six-axis/nine-axis sensor is connected to the main control module in a wired and/or wireless manner.
In the above virtual reality visual-auditory pathway-based spatial perception disorder training system, the visual and auditory task presentation module includes a virtual reality headset module and a high-fidelity headset respectively connected to the main control module, and the high-fidelity headset is integrated on the virtual reality headset module.
Compared with the prior art, the invention has the following advantages: through visual and auditory instructions issued by the system, the hand/foot spatial position acquisition modules measure and collect the operating parameters of the hand and foot responding to the spatial perception tasks, thereby testing the accuracy and fineness of the user's hand and foot movements; after the test, the system automatically computes various parameters related to spatial perception and compares them with reference data for the same sex and age group, thereby comprehensively testing the user's spatial perception level; it then generates a personalized spatial perception training scheme according to the test result and automatically and intelligently leads the user through training, effectively enhancing the user's spatial perception ability.
Drawings
Fig. 1 is a schematic structural diagram of a spatial perception disorder training system based on a virtual reality visual-auditory pathway according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a visual-auditory task presentation module according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a hand/foot spatial position acquisition module according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the visual hand spatial movement test/training provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of the visual foot spatial movement test/training provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of an auditory hand space movement test/training provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of an auditory foot spatial movement test/training provided by an embodiment of the present invention;
FIG. 8 is a flowchart of a visual-auditory spatial perception test provided by an embodiment of the present invention;
fig. 9 is a flowchart of the audiovisual spatial perception training provided by the embodiment of the invention.
Reference numerals: 1, power supply module; 2, main control module; 3, hand spatial position acquisition module; 4, foot spatial position acquisition module; 5, spatial motion data processing module; 6, visual and auditory task presentation module; 61, virtual reality headset module; 62, high-fidelity headset; 7, reference database module; 8, report generation module; 9, spatial perception data analysis module; 11, spatial perception training scheme generation module; 12, spatial perception training process control module.
Detailed Description
In terms of information-processing mechanisms, humans perceive the world mainly through the visual, auditory, tactile, olfactory and other pathways, and the visual-auditory pathways receive and perceive approximately 94% of information. Vision and hearing are therefore the main human information-processing channels. Related brain-science research holds that visual and auditory functions are not independent: in healthy people they are interrelated, and in form the processing channels divide into visual single-channel processing, auditory single-channel processing, and mixed visual-auditory dual-channel processing; information processing is mainly embodied in these three visual-auditory channel forms. Spatial perception disorder mainly manifests as abnormal visual and auditory information processing, so the tests in this scheme start from these three sensory paths.
Secondly, in terms of the operation-control mechanism, the accuracy of limb spatial movement reflects the level of spatial perception ability, and measuring, comparing and analyzing movement against a preset path in a spatial movement task can reveal spatial perception problems. This scheme uses a fine hand and foot operation control mode to embody the spatial perception level objectively with quantitative parameters.
The following are preferred embodiments of the present invention and are further described with reference to the accompanying drawings, but the present invention is not limited to these embodiments.
As shown in fig. 1, this embodiment provides a spatial perception disorder training system based on a virtual reality visual-auditory pathway, which includes a power supply module 1, a main control module 2, and a spatial motion data processing module 5, a spatial perception data analysis module 9, a visual and auditory task presentation module 6, a reference database module 7 and a report generation module 8 connected to the main control module 2; the spatial motion data processing module 5 is connected with a hand spatial position acquisition module 3 and a foot spatial position acquisition module 4.
The power supply module 1 supplies power to the other modules of the system.
The main control module 2 is the core of the whole system; it can be a desktop computer host, a notebook computer or the like, and mainly completes visual and auditory task flow control, visual and auditory task presentation control, reference database module 7 access control, spatial perception data analysis module 9 control, report generation module 8 control and other operations.
The hand spatial position acquisition module 3 is used for acquiring the motion parameters of the hand;
the foot spatial position acquisition module 4 is used for acquiring the motion parameters of the foot;
the visual and auditory task presentation module 6 is used for presenting immersive visual information and/or auditory information according to the commands of the main control module 2;
and the reference database module 7 is used for storing reference data, including the reference visual/auditory hand spatial motion operation approximation degrees, reference visual/auditory foot spatial motion operation approximation degrees, reference spatial perception comprehensive quotients, quotient standard deviations and the like, compiled from statistics on a normal population. Different sexes and age groups have their own reference data: for example, one statistical data segment per year from age 6 to 18; one segment per two years from age 19 to 24; one segment per five years from age 25 to 50; one segment from age 51 to 60; and one segment from age 61 upward.
The spatial perception data analysis module 9 is used for analyzing the test result according to the corresponding reference data in the reference database module 7;
the spatial motion
And the report generation module 8 is used for presenting the corresponding test report, according to the analysis result of the spatial perception data analysis module 9, in chart and text form, as a Word or PDF document laid out in a certain image-text structure.
Further, this embodiment also includes a spatial perception training scheme generation module 11 and a spatial perception training process control module 12 connected to the main control module 2, wherein
the spatial perception training scheme generation module 11 is used for providing training tasks of corresponding grades for the user according to the analysis result of the spatial perception data analysis module 9;
and the spatial perception training process control module 12 is used for storing the user's training scheme, recording its progress, and recording the history of completed training sessions.
Further, as shown in fig. 2, the visual and auditory task presentation module 6 includes a virtual reality headset module 61 and a high-fidelity headset 62 respectively connected to the main control module 2, with the high-fidelity headset 62 integrated on the virtual reality headset module 61.
Specifically, as shown in fig. 3, the foot spatial position acquisition module 4 and the hand spatial position acquisition module 3 each comprise a wearable ring and a mounting box fixed on the wearable ring; the mounting box is provided with a six-axis/nine-axis sensor connected to the main control module 2 in a wired and/or wireless manner.
Specifically, the testing method of the system when the system is put into use comprises the following steps:
s1, wearing the visual and auditory task presentation module 6 on the user, and respectively wearing the hand spatial position acquisition module 3 and the foot spatial position acquisition module 4 on the hand and the foot of the user;
s2, providing a virtual reality scene, and performing at least one visual hand/foot space motion test and auditory hand/foot space motion test on a user in the virtual reality scene;
and S3, analyzing the test result of the user by the spatial perception data analysis module 9.
Specifically, in step S2, the user is subjected to a visual hand/foot spatial motion test through the following virtual reality scenarios:
prompting the user by text to control a target object with the corresponding hand/foot to complete the spatial movement of a designated path, and recording the completed spatial movement path length and the spatial motion control completion time.
Likewise, in step S2, the user is tested for auditory hand/foot spatial motion through the following virtual reality scenario:
prompting the user by voice to control a target object with the corresponding hand/foot to complete the spatial movement specified by the voice, and recording the actual movement path length and the instructed path length.
Further, in step S3, the spatial perception data analysis module 9 analyzes the test result by:
s31, obtaining a visual space motion operation approximation degree through a visual space motion operation approximation degree calculation method, and obtaining an auditory space motion operation approximation degree through an auditory space motion operation approximation degree calculation method;
s32, obtaining a spatial perception comprehensive quotient from the visual spatial motion operation approximation degree and the auditory spatial motion operation approximation degree through a spatial perception comprehensive quotient calculation method;
s33, obtaining a standardized quotient value according to the spatial perception comprehensive quotient value and reference data;
and S34, judging the spatial perception disorder test result of the user according to the standardized quotient.
Specifically, in step S31, the visual space motion operation approximation calculation method includes the following formula:
A = (S1/T1 + S2/T2 + … + Sn/Tn)/n, where
A represents the visual spatial motion operation approximation degree;
S1, S2, …, Sn represent the completed spatial movement path length of each test;
T1, T2, …, Tn represent the spatial motion control completion time of each test;
and n represents the number of tests.
Likewise, in step S31, the auditory space motion operation approximation calculation method includes the following formula:
B = (C1/D1 + C2/D2 + … + Cn/Dn)/n, where
B represents the auditory spatial motion operation approximation degree;
C1, C2, …, Cn represent the actual movement path length of each test;
D1, D2, …, Dn represent the instructed path length of each test;
and n represents the number of tests.
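The two approximation degrees defined above can be sketched in code as follows. This is a minimal illustration, not part of the specification; the function and parameter names are our own.

```python
def visual_approximation(path_lengths, completion_times):
    """Visual spatial motion operation approximation: A = (S1/T1 + ... + Sn/Tn) / n."""
    n = len(path_lengths)
    return sum(s / t for s, t in zip(path_lengths, completion_times)) / n


def auditory_approximation(actual_paths, instructed_paths):
    """Auditory spatial motion operation approximation: B = (C1/D1 + ... + Cn/Dn) / n."""
    n = len(actual_paths)
    return sum(c / d for c, d in zip(actual_paths, instructed_paths)) / n
```

Each ratio compares one trial's performance against its target (speed of traversal for the visual test, actual versus instructed path length for the auditory test), and the mean over the n trials gives the approximation degree.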
Further, in step S32, the spatial perception synthetic quotient calculation method includes the following formula:
E = Ax + By, where
E represents the user's spatial perception comprehensive quotient;
x represents the weight of the visual spatial motion operation approximation degree;
y represents the weight of the auditory spatial motion operation approximation degree;
and in step S33, the standardized quotient is obtained by the following formula:
F = 100 + (E - G), where
F represents the user's standardized quotient;
and G represents the reference spatial perception comprehensive quotient in the reference data.
Here G is the reference spatial perception comprehensive quotient of the population of the same age group as the user; the weights are determined as the case may be, with the two weights summing to 1; in this embodiment x = 0.5 and y = 0.5.
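The quotient formulas above can be sketched as follows, assuming the equal weights x = y = 0.5 described in this embodiment (the function names are our own, for illustration only):

```python
def comprehensive_quotient(a, b, x=0.5, y=0.5):
    """Spatial perception comprehensive quotient E = Ax + By.

    The two weights must sum to 1; the embodiment uses x = y = 0.5.
    """
    assert abs((x + y) - 1.0) < 1e-9, "weights must sum to 1"
    return a * x + b * y


def standardized_quotient(e, g):
    """Standardized quotient F = 100 + (E - G), where G is the reference
    comprehensive quotient for the user's sex and age group."""
    return 100 + (e - g)
```

A user whose comprehensive quotient E exactly equals the reference G therefore scores F = 100, and each unit of difference shifts the score by one point.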
In step S34, if the user's standardized quotient is 80-89 points, the user's score is below the group average and is marked as poor; 90-109 points is close to the average and marked as normal; 110-119 points is above the average and marked as good; 120-129 points is well above the average and marked as excellent; and 130 points or more is far above the average and marked as superior.
Further, in step S33, the standardized quotient can also be obtained by: standardized quotient = 100 + 15 x (user score - reference mean)/standard deviation. The same bands then apply: 80-89 points is marked as poor, 90-109 as normal, 110-119 as good, 120-129 as excellent, and 130 or more as superior.
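The alternative standardized quotient and the score bands can be sketched as follows. Note that the text does not specify a band below 80 points; treating such scores as "poor" here is our assumption.

```python
def standardized_quotient_z(user_score, reference_mean, standard_deviation):
    """Alternative form: 100 + 15 * (score - mean) / standard deviation."""
    return 100 + 15 * (user_score - reference_mean) / standard_deviation


def grade(quotient):
    """Map a standardized quotient to the score bands given in the text.

    Scores below 80 are unspecified in the text and are also marked
    'poor' here (an assumption of this sketch).
    """
    if quotient >= 130:
        return "superior"
    if quotient >= 120:
        return "excellent"
    if quotient >= 110:
        return "good"
    if quotient >= 90:
        return "normal"
    return "poor"
```

With this scaling, a score one standard deviation above the reference mean maps to a quotient of 115, which falls in the "good" band.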
Further, the system of the present embodiment further includes the following training steps:
A. providing training tasks of corresponding grades for a user according to the analysis result of the spatial perception data analysis module 9;
B. step S2 is performed in a training mode under the corresponding training task, i.e.
S2, providing a virtual reality scene, and performing, according to the training task, at least one visual hand/foot spatial motion training and at least one auditory hand/foot spatial motion training on the user in the virtual reality scene.
Of course, step S1 is also part of the training process; since step S1 has already been performed during testing, it is not repeated here.
In addition, the training process may also include a training mode performing step S3, that is,
S3, the spatial perception data analysis module 9 analyzes the user's training results. As with the test results, the analysis may grade each training result as 'poor', 'normal', 'good', 'excellent' or 'supergroup'. Alternatively, this evaluation step may report the training effect, i.e. the degree of progress of each training result relative to the test result, or the user's spatial perception comprehensive quotient may be given directly for the user to judge.
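The "training effect" variant above can be sketched as a simple difference between each training quotient and the baseline test quotient; the names and values below are illustrative assumptions, not part of the specification.

```python
def progress_degree(test_quotient: float, training_quotient: float) -> float:
    """Progress of one training result relative to the initial test result."""
    return training_quotient - test_quotient

# Hypothetical baseline from the test phase and three training sessions.
baseline = 95.0
sessions = [96.0, 99.0, 104.0]
for i, q in enumerate(sessions, start=1):
    print(f"session {i}: quotient {q}, progress {progress_degree(baseline, q):+.1f}")
```

A positive progress degree indicates the training result exceeds the baseline test result; the sign convention is an assumption for illustration.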
The test procedures for each body part are detailed below:
1) as shown in fig. 4, visual hand space movement testing/training:
The user wears a hand spatial position acquisition module (3) on each hand, and the virtual reality head-mounted display presents the corresponding visual spatial motion task.
2) As shown in fig. 5, visual foot spatial movement testing/training:
The user wears a foot spatial position acquisition module (4) on one foot, and the virtual reality head-mounted display presents the corresponding visual spatial motion task.
3) As shown in fig. 6, auditory hand space movement testing/training:
The user wears a hand spatial position acquisition module (3) on one hand, and the virtual reality head-mounted device issues the corresponding auditory spatial motion instructions.
4) As shown in fig. 7, auditory foot spatial movement testing/training:
The user wears a foot spatial position acquisition module (4) on one foot, and the virtual reality head-mounted device issues the corresponding auditory spatial motion instructions.
As shown in fig. 8, a test round may include a plurality of tests, for example 5 of each type: 5 visual hand spatial motion tests, 5 visual foot spatial motion tests, 5 auditory hand spatial motion tests and 5 auditory foot spatial motion tests, 20 tests in total. A spatial perception disorder assessment may comprise one, two or more rounds.
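The round structure just described can be sketched as a small helper that enumerates one round of 20 tests; the type labels are illustrative, not terms from the specification.

```python
# Four test types from the specification; labels here are illustrative.
TEST_TYPES = ["visual-hand", "visual-foot", "auditory-hand", "auditory-foot"]

def build_round(repetitions: int = 5) -> list[str]:
    """One test round: each test type repeated `repetitions` times (5 x 4 = 20)."""
    return [t for t in TEST_TYPES for _ in range(repetitions)]

one_round = build_round()
print(len(one_round))  # 20 tests per round
```

A full assessment would then concatenate one or more such rounds.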
As shown in fig. 9, in step A, the perceptual space training process divides the pre-stored training tasks into five schemes (100, 80, 60, 40 and 20) according to the analysis result of the spatial perception data analysis module 9, i.e. 'poor', 'normal', 'good', 'excellent' or 'supergroup'. Each scheme comprises 4 training items, with visual and auditory training sub-items arranged in 'visual-auditory' order; each item lasts 10 minutes, with a rest of about 5 minutes between items, for a total training time of about 1 hour. Of course, in practical applications, the grading of the analysis result, the scheme classification, the training items of each session, the duration of each item and the rest time between items can be adjusted as needed.
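The scheme selection and session timing above can be sketched as follows; the grade-to-scheme mapping direction is an assumption (the text pairs five grades with five schemes without stating the order explicitly), and all names are illustrative.

```python
# Assumed pairing of analysis grade to pre-stored training scheme:
# weaker results get the larger scheme. This direction is an assumption.
GRADE_TO_SCHEME = {
    "poor": 100,
    "normal": 80,
    "good": 60,
    "excellent": 40,
    "supergroup": 20,
}

def session_length(items: int = 4, item_min: int = 10, rest_min: int = 5) -> int:
    """Total session time in minutes: training items plus the rests between them."""
    return items * item_min + (items - 1) * rest_min

# 4 items x 10 min + 3 rests x 5 min = 55 min, i.e. about 1 hour as stated.
print(GRADE_TO_SCHEME["normal"], session_length())
```

The 55-minute total is consistent with the "about 1 hour" figure in the text.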
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications, additions or substitutions may be made to the described embodiments by those skilled in the art without departing from the spirit of the invention or the scope of the appended claims.