Vision tracking brain-computer interface detection system

Document No.: 176279    Publication date: 2021-11-02

Reading note: This technology, a vision tracking brain-computer interface detection system, was designed and created by Li Yuanqing, Xiao Jun, Yu Tianyou, Pan Jiahui and Huang Haiyun on 2021-07-08. Its main content is as follows: The invention discloses a brain-computer interface detection system for visual tracking, comprising a high-definition face image acquisition and movement module, an EEG signal acquisition module, a time-frequency feature analysis, extraction and classification module, and a statistical test and evaluation criteria module. The system evaluates the visual tracking process with reference to the behavioral scale: target and non-target stimuli are selected within a brain-computer interface experimental paradigm, the target stimulus key flickers while moving, and the non-target keys remain still. Through the movement and flicker of the target stimulus, the patient is guided to selectively track the target face image and maintain attention until the target key moves from the center back to its initial position. Inter-trial phase coherence (ITPC) is used as the classification feature and the online result is output in real time. The results of 20 online trials are subjected to a statistical test; if a significant level is reached, the patient is considered to have visual tracking, otherwise not. The invention overcomes drawbacks of behavioral scales and empirical testing, such as low sensitivity to changes in behavioral responses and excessive evaluator subjectivity.

1. A visual tracking brain-computer interface detection system, comprising: a high-definition face image acquisition and movement module, an EEG signal acquisition module, a feature analysis extraction and classification module, and a statistical test and evaluation criteria module;

the high-definition face image acquisition and movement module captures the current subject's face image in real time through a high-definition camera mounted on the front of the computer display and crops it to a preset size to serve as a face stimulus key; the stimulus keys are distributed in the four directions, up, down, left and right, within a square region of the display; before each trial, one direction is randomly selected as the target direction, and the face stimulus key of that direction is placed at the center of the display to prompt the patient's selective attention; after the trial starts, the face stimulus key of the target direction serves as the target key and flickers while moving, whereas the face stimulus keys of the other 3 non-target directions flicker in place without moving; during the flicker and movement of the target, the patient's EEG signals are collected in real time by the EEG signal acquisition module, features are extracted and classified by the feature analysis extraction and classification module, and the result is output at the same time;

the EEG signal acquisition module requires each subject to wear an electrode cap laid out according to the extended international 10-20 system, connected in unipolar lead mode with the right-earlobe electrode as the common terminal; the 30 channels Fp1, Fp2, F7, F3, Fz, F4, F8, FT7, FC3, FCz, FC4, FT8, T7, C3, Cz, C4, T8, TP7, CP3, CPz, CP4, TP8, T5, P3, Pz, P4, T6, O1, Oz and O2 are used in the experiment as the EEG channels for acquiring data; GND serves as the ground electrode and A2 as the reference electrode; during data acquisition, to ensure signal quality, the electrode contact impedance of all channels is kept below 5 kΩ; the raw EEG data are band-pass filtered at 0.1-70 Hz, and power-line interference and EEG background noise are removed by a 50 Hz notch filter;

the feature analysis extraction and classification module requires that, after the target key of a single trial has moved from the center back to its initial position, the EEG data be sent to the computer in real time and processed with a phase-coherence classification algorithm, namely the ITPC feature extraction and classification algorithm, and that positive/negative feedback pictures and sounds be played according to the result;

the statistical test and evaluation criteria module evaluates, for each subject, the significance of the classification accuracy, i.e., the number of hits divided by the total number of trials, using the binomial test based on Jeffreys' Beta distribution described below:

in the formula, N represents the number of trials, m is the expected number of hits, a is the accuracy of the expected comparison, λ is the accuracy required to reach significance, and z is the z value of the standard normal distribution; for a one-sided test, i.e., greater than the expected value, the z value is 1.65 at a confidence level of 0.05; for a patient who completes the 20 training trials and 20 online trials, if the accuracy reaches or exceeds the significance level of 45% and significant ERP components or ERSP in a significant frequency band are present at the same time, the subject's visual tracking item is scored 3; the 20 online trials are completed in two runs of 10 trials each, and before each online run, 10 training trials are collected as the training data set for building the classification model.

2. The visual tracking brain-computer interface detection system according to claim 1, wherein the ITPC classification algorithm used in the feature analysis extraction and classification module comprises the following specific steps:

s1, carrying out zero-phase band-pass filtering on EEG data acquired by the EEG signal acquisition module at 0.1Hz-50Hz, and removing ocular artifacts by adopting a regression method;

s2, selecting the channel of the visual related region, at least including P3, Pz, P4 and O1Oz and O2(ii) a Constructing data units of at least 400 targets and 1200 non-targets by using the filtered EEG data of the selected channel, and performing time-frequency analysis based on fast Fourier change on the data units of the 400 targets and the 1200 non-targets from 100ms before stimulation movement to 800ms after stimulation;

s3, time-frequency analysis comprises the steps of calculating phase consistency ITPC between event-related spectrum disturbance ERSP and trial time; the value of ERSP can reflect the degree to which the power of different frequencies in an electroencephalogram signal changes with the onset of stimulation, where ERSP is calculated as follows:

where ERSP (f, t) is the event-related spectral perturbation at frequency f and time t; n is the number of experimental trials; fk(f, t) is the spectral estimate for the kth trial at frequency f and time t; ITPC can be considered a complement to ERSP, revealing electroencephalographic spectral phase consistency, i.e. the degree of phase consistency, between different trials within a selected frequency range and time window, calculated as follows:

where ITPC (f, t) is the phase consistency at frequency f and time t;

s4, classifying by taking ITPC values of different frequency bands as features, and determining target stimulation; constructing a feature vector of a certain frequency band according to the upper, lower, left and right positions, extracting the feature vector from the data of a training data set, training a Support Vector Machine (SVM) or a linear classifier, namely a classification model, wherein the feature vectors corresponding to a target and 3 non-targets are respectively marked as 1 and-1 in the classifier; for each online trial, the trained classification model is applied to 4 feature vectors corresponding to different directions of the upper part, the lower part, the left part and the right part to obtain 4 values, the stimulation direction corresponding to the maximum value in the 4 values is regarded as a target, and the stimulation direction is a moving target key in the trial; this trial responds correctly if the detected orientation coincides with the actual orientation.

3. The visual tracking brain-computer interface detection system according to claim 1, wherein the EEG signal acquisition module requires the patient with a disorder of consciousness to face the display and keep a set distance from it during data acquisition, so that the patient's visual angle is within a set range; an electrode cap is worn on the patient and electrode paste is injected to ensure that every electrode is conducting.

Technical Field

The invention relates to the technical field of brain-computer interfaces, motion visual evoked potentials, face-recognition evoked potentials and oddball-task-related evoked potentials, and in particular to a brain-computer interface detection system for visual tracking.

Background

Electroencephalogram (EEG) signals from the cerebral cortex can be collected with a non-invasive brain-computer interface system and are gradually being applied to the auxiliary diagnosis of neurological diseases, research on brain function, and the like. The brain's response to the external environment is obtained by studying the endogenous rhythmic components generated autonomously in the EEG signal, or the exogenous EEG components evoked by external stimulation, together with the spatial location and interrelation of their sources. Therefore, for patients with disorders of consciousness who severely lack behavioral expression and may have sensory deficits, a multi-modal brain-computer interface system that presents various sensory stimuli and directly detects the stimulus-related EEG components can assist clinical diagnosis and provide more objective and accurate diagnosis and prognosis evaluation.

At present, the clinical evaluation of patients with disorders of consciousness is mainly based on various behavioral scales, but the behavioral expression of patients is easily affected by factors such as motor impairment, sensory deficits, a low level of arousal or fatigability, resulting in a high misdiagnosis rate (37-43%). The Coma Recovery Scale-Revised (CRS-R) is divided into 6 subscales: auditory, visual, motor, oromotor/verbal, communication and arousal; each subscale comprises several items that assess the corresponding functional state of the patient. For example, the visual subscale contains: 0 - no response, 1 - visual startle, 2 - visual fixation, 3 - visual pursuit (tracking), 4 - object localization (reaching) and 5 - object recognition; the item score is assigned by an experienced clinician who first administers a specific stimulation procedure and then rates the patient's behavioral response. The specific evaluation process of the visual tracking item is as follows: at 50 cm in front of the patient, the evaluator holds a circular mirror 10-15 cm in diameter and slowly moves it in the four directions up, down, left and right, observing whether the patient's eyes follow the mirror during the movement; if tracking is observed at least twice in the 4 evaluations across the 4 directions, the patient is considered to have visual tracking behavior, otherwise the 2-point item (visual fixation) is evaluated. Such an assessment may lead to an inaccurate diagnosis because of limited eye movement (oculomotor impairment), poorly visible behavior, and similar factors.

The invention relates to a brain-computer interface detection system designed according to the principles of event-related potentials (ERPs) evoked by motion visual stimulation, face recognition and oddball-task stimulation. The system extracts ITPC (inter-trial phase coherence) features for classification and outputs the result in real time, and at the same time judges whether the patient has a visual tracking response by combining the offline ERP waveform with the ERSP (event-related spectral perturbation) obtained from time-frequency analysis. Brain responses to external stimuli are more sensitive, objective and accurate than behavioral responses. Therefore, a more accurate and objective score may be obtained by using a brain-computer interface to detect visual tracking in patients with disorders of consciousness.

The difficulty of the invention lies in the design and implementation of real-time moving face visual stimuli, the extraction of stimulus-related features from the EEG signals of patients with disorders of consciousness, and the design of the ITPC-based classification algorithm. The moving face stimulus and the corresponding ERP latency are selected according to the characteristics of the visual tracking item of the behavioral scale and of the subjects with disorders of consciousness. A high-definition camera captures the subject's current face image in real time to serve as a 4 cm face stimulus key; the real-time movement of the target face stimulus key is implemented by programming in Microsoft Visual C++; the EEG signal is recorded during the movement; after a single trial ends, the result is output according to the extracted features and played back to the subject as audio-visual feedback. The selection of the ITPC features requires determination of the effective latency and the corresponding frequency band range to ensure the desired detection.

Disclosure of Invention

The invention aims to overcome the defects of the prior art and to provide a visual tracking brain-computer interface detection system that can supplement the classical behavioral and physiological observations of clinical behavioral scales and reduce the clinical misdiagnosis rate caused by patients' lack of behavior or by evaluators' subjective interpretation.

In order to achieve the above purpose, the technical scheme provided by the invention is as follows: a visual tracking brain-computer interface detection system, comprising: a high-definition face image acquisition and movement module, an EEG signal acquisition module, a feature analysis extraction and classification module, and a statistical test and evaluation criteria module;

the high-definition face image acquisition and movement module captures the current subject's face image in real time through a high-definition camera mounted on the front of the computer display and crops it to a 4 cm x 4 cm image to serve as a face stimulus key; the stimulus keys are distributed in the four directions, up, down, left and right, within a square region of the display; before each trial, one direction is randomly selected as the target direction, and the face stimulus key of that direction is placed at the center of the display to prompt the patient's selective attention; after the trial starts, the face stimulus key of the target direction serves as the target key and flickers while moving, whereas the face stimulus keys of the other 3 non-target directions flicker in place without moving; during the flicker and movement of the target, the patient's EEG signals are collected in real time by the EEG signal acquisition module, features are extracted and classified by the feature analysis extraction and classification module, and the result is output at the same time;

the EEG signal acquisition module requires each subject to wear an electrode cap laid out according to the extended international 10-20 system, connected in unipolar lead mode with the right-earlobe electrode as the common terminal; the 30 channels Fp1, Fp2, F7, F3, Fz, F4, F8, FT7, FC3, FCz, FC4, FT8, T7, C3, Cz, C4, T8, TP7, CP3, CPz, CP4, TP8, T5, P3, Pz, P4, T6, O1, Oz and O2 are used in the experiment as the EEG channels for acquiring data; GND serves as the ground electrode and A2 as the reference electrode; during data acquisition, to ensure signal quality, the electrode contact impedance of all channels is kept below 5 kΩ; the raw EEG data are band-pass filtered at 0.1-70 Hz, and power-line interference and EEG background noise are removed by a 50 Hz notch filter;

the feature analysis extraction and classification module requires that, after the target key of a single trial has moved from the center back to its initial position, the EEG data be sent to the computer in real time and processed with a phase-coherence classification algorithm, namely the ITPC feature extraction and classification algorithm, and that positive/negative feedback pictures and sounds be played according to the result;

in the statistical test and evaluation criteria module, the classification accuracy for each subject is the number of hits divided by the total number of trials; to evaluate the significance of the classification accuracy, the binomial test based on Jeffreys' Beta distribution described below is used:

in the formula, N represents the number of trials, m is the expected number of hits, a is the accuracy of the expected comparison, λ is the accuracy required to reach significance, and z is the z value of the standard normal distribution; for a one-sided test, i.e., greater than the expected value, the z value is 1.65 at a confidence level of 0.05; for a patient who completes the 20 training trials and 20 online trials, if the accuracy reaches or exceeds the significance level of 45% and significant ERP components or ERSP in a significant frequency band are present at the same time, the subject's visual tracking item is scored 3; the 20 online trials are completed in two runs of 10 trials each, and before each online run, 10 training trials are collected as the training data set for building the classification model.

Further, in the feature analysis extraction and classification module, an ITPC classification algorithm is adopted, which comprises the following specific steps:

s1, carrying out zero-phase band-pass filtering on EEG data acquired by the EEG signal acquisition module at 0.1Hz-50Hz, and removing ocular artifacts by adopting a regression method;

s2, selecting the channel of the visual related region, at least including P3, Pz, P4 and O1Oz and O2(ii) a Constructing data units of at least 400 targets and 1200 non-targets by using the filtered EEG data of the selected channel, and performing time-frequency analysis based on fast Fourier change on the data units of the 400 targets and the 1200 non-targets from 100ms before stimulation movement to 800ms after stimulation;

s3, time-frequency analysis comprises the steps of calculating phase consistency ITPC between event-related spectrum disturbance ERSP and trial time; the value of ERSP can reflect the degree to which the power of different frequencies in an electroencephalogram signal changes with the onset of stimulation, where ERSP is calculated as follows:

where ERSP (f, t) is the event-related spectral perturbation at frequency f and time t; n is the number of experimental trials; fk(f, t) is the spectral estimate for the kth trial at frequency f and time t; ITPC can be considered a complement to ERSP, revealing electroencephalographic spectral phase consistency, i.e. the degree of phase consistency, between different trials within a selected frequency range and time window, calculated as follows:

where ITPC (f, t) is the phase consistency at frequency f and time t;

s4, classifying by taking ITPC values of different frequency bands as features, and determining target stimulation; constructing a feature vector of a certain frequency band (1-20Hz) according to the upper, lower, left and right positions, extracting the feature vector from the data of a training data set, training a Support Vector Machine (SVM) or a linear classifier, namely a classification model, wherein the feature vectors corresponding to a target and 3 non-targets are respectively marked as 1 and-1 in the classifier; for each online trial, the trained classification model is applied to 4 feature vectors corresponding to different directions of the upper part, the lower part, the left part and the right part to obtain 4 values, the stimulation direction corresponding to the maximum value in the 4 values is regarded as a target, and the stimulation direction is a moving target key in the trial; this trial (trial) response is correct if the detected orientation coincides with the actual orientation.

Further, the EEG signal acquisition module requires that, during data acquisition, the patient with a disorder of consciousness face the display and keep a set distance from it, so that the patient's visual angle is within a set range; an electrode cap is worn on the patient and electrode paste is injected to ensure that every electrode is conducting.

Compared with the prior art, the invention has the following advantages and beneficial effects:

1. According to the visual tracking evaluation of the Coma Recovery Scale, and in combination with moving visual stimuli, face recognition and several ERP components evoked by oddball tasks (such as N170, N200, P200, P300, N400, etc.), the target stimulus and non-target stimuli of the brain-computer interface experimental paradigm are selected; the target stimulus flickers while moving, and the non-targets remain still. Through the movement and flicker of the target stimulus, the patient is guided to selectively track the target face image and keep watching it until the target key moves from the center back to its initial position; ITPC is used as the classification feature and the online result is output in real time; the results of the 20 online trials are subjected to a statistical test, and if a significant level is reached, the patient is considered to have visual tracking, otherwise not.

2. The invention adopts a brain-computer-interface-based detection mode, overcoming drawbacks of behavioral scales and empirical testing such as low sensitivity to changes in behavioral responses and excessive evaluator subjectivity.

3. The invention can be effectively applied to the auxiliary diagnosis of visual tracking and can also be used to predict the rehabilitation outcome of patients with disorders of consciousness.

Drawings

FIG. 1 is a schematic diagram of a single trial. In the figure, after a 7 s experimental cue is played, stimulation starts; one trial comprises 10 repeated stimulations (with the target key selected at random), the whole stimulation process lasts 8 s, and after the stimulation ends, the classification result is output as real-time feedback (4 s).

FIG. 2 is a graph of ERSP and ITPC profiles for different groups of patients.

Detailed Description

The present invention will be further described with reference to the following specific examples.

This embodiment discloses a brain-computer interface detection system for visual tracking, which comprises a high-definition face image acquisition and movement module, an EEG signal acquisition module, a feature analysis extraction and classification module, and a statistical test and evaluation criteria module; the specific details of each functional module are as follows:

a. High-definition face image acquisition and movement module

Combining the behavioral scale evaluation process with the brain-computer interface evaluation paradigm, the subject's current face image is first captured in real time through a high-definition camera mounted on the front of the computer display and cropped by the program into a 4 cm x 4 cm image to serve as a face stimulus key; the stimulus keys are distributed in the four directions, up, down, left and right, within a square region of the display; before each trial, one direction is randomly selected as the target direction, and the face stimulus key of that direction is placed at the center of the display to prompt the patient's selective attention. The single-trial process is shown in FIG. 1: after the trial starts, an experimental cue is played for 7 s to help the subject notice the central face stimulus key; then the face stimulus key of the target direction moves as the target key while flickering, and the face visual stimuli of the other 3 non-target directions flicker in place without moving. After the target key moves from the center back to its initial position, a single stimulation is completed; the whole stimulation process comprises 10 repeated stimulations, and the target key is selected at random by the program. During the flicker and movement of the target, the patient's EEG signals for the 10 stimulations of the single trial are collected in real time by the EEG signal acquisition module, features are extracted and classified by the feature analysis extraction and classification module, and the result is output as feedback (feedback duration 10 s).
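The patent implements the stimulus program in Microsoft Visual C++; purely for illustration, the following minimal Python/OpenCV sketch shows one way the capture-and-crop step could look. The camera index, Haar cascade file, helper name and pixel size of the 4 cm stimulus key are assumptions, not details taken from the text.

import cv2

def capture_face_stimulus(pixel_size=150, camera_index=0):
    # grab one frame from the display-mounted camera (index assumed)
    cam = cv2.VideoCapture(camera_index)
    ok, frame = cam.read()
    cam.release()
    if not ok:
        raise RuntimeError("camera frame could not be read")
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise RuntimeError("no face detected")
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detected face
    crop = frame[y:y + h, x:x + w]
    # resize to a square stimulus key; the pixel count corresponding to 4 cm on
    # screen depends on the monitor, so pixel_size is only a placeholder
    return cv2.resize(crop, (pixel_size, pixel_size))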

b. EEG signal acquisition module

Each subject is required to wear an electrode cap laid out according to the extended international 10-20 system, connected in unipolar lead mode with the right-earlobe electrode as the common terminal. The 30 channels Fp1, Fp2, F7, F3, Fz, F4, F8, FT7, FC3, FCz, FC4, FT8, T7, C3, Cz, C4, T8, TP7, CP3, CPz, CP4, TP8, T5, P3, Pz, P4, T6, O1, Oz and O2 are used in the experiment as the EEG channels for acquiring data. GND serves as the ground electrode and A2 as the reference electrode. During data acquisition, the patient with a disorder of consciousness is first seated about 0.5 m in front of the display so that the visual angle is about 30 degrees; an electrode cap is worn on the patient and electrode paste is injected to ensure that every electrode is conducting. To ensure signal quality, the electrode contact impedance of all channels is kept below 5 kΩ. The raw EEG data are band-pass filtered at 0.1-70 Hz, a 50 Hz notch filter removes power-line interference, and EEG background noise such as head-movement and myoelectric artifacts is removed.
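As an illustration of the acquisition-side pre-processing described above, the sketch below applies the 0.1-70 Hz band-pass and the 50 Hz notch with zero-phase filtering; the sampling rate of 250 Hz and the filter orders are assumptions, since the text does not state them.

import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def preprocess_eeg(raw, fs=250.0):
    # raw: array of shape (n_channels, n_samples)
    b_bp, a_bp = butter(4, [0.1, 70.0], btype="bandpass", fs=fs)
    b_n, a_n = iirnotch(w0=50.0, Q=30.0, fs=fs)
    out = filtfilt(b_bp, a_bp, raw, axis=-1)   # 0.1-70 Hz band-pass, zero-phase
    out = filtfilt(b_n, a_n, out, axis=-1)     # 50 Hz power-line notch
    return out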

c. Feature analysis extraction and classification module

After the target key of a single trial has moved from the center back to its initial position, the EEG data are sent to the computer in real time and immediately processed with the ITPC (inter-trial phase coherence) classification algorithm, and positive/negative feedback pictures and sounds are played according to the result; the ITPC classification algorithm comprises the following specific steps:

s1, carrying out zero-phase band-pass filtering on EEG data acquired by the EEG signal acquisition module at 0.1Hz-50Hz, and removing ocular artifacts by adopting a regression method;

s2, selecting the channel of the visual related region, at least including P3, Pz, P4 and O1Oz and O2(ii) a Constructing data units of at least 400 targets and 1200 non-targets by using the filtered EEG data of the selected channel, and performing time-frequency analysis based on fast Fourier change on the data units of the 400 targets and the 1200 non-targets from 100ms before stimulation movement to 800ms after stimulation;

s3, the time-frequency analysis includes calculating phase consistency between event-related spectral perturbation (ERSP) and trial (ITPC), the value of ERSP may reflect the degree to which power at different frequencies in the electroencephalogram signal changes with stimulation initiation, wherein ERSP is calculated as follows:

where ERSP (f, t) is the event-related spectral perturbation at frequency f and time t; n is the number of experimental trials; fk(f, t) is the spectral estimate for the kth trial at frequency f and time t; ITPC can be considered a complement to ERSP, revealing electroencephalographic spectral phase consistency (i.e., the degree of phase consistency) between different trials within a selected frequency range and time window, calculated as follows:

where ITPC (f, t) is the phase consistency at frequency f and time t.
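The following sketch illustrates how ERSP and ITPC could be computed from short-time FFT spectral estimates F_k(f, t), following the definitions above; the window length, step and Hanning taper are assumptions, as the text does not specify how the spectral estimates are obtained.

import numpy as np

def ersp_itpc(epochs, fs, win_s=0.2, step_s=0.05):
    # epochs: (n_trials, n_samples) for one channel, time-locked to stimulus onset
    nwin = int(win_s * fs)
    starts = np.arange(0, epochs.shape[1] - nwin + 1, int(step_s * fs))
    freqs = np.fft.rfftfreq(nwin, 1.0 / fs)
    taper = np.hanning(nwin)
    # F[k, f, t]: complex spectral estimate of trial k at frequency f and window t
    F = np.stack([
        np.stack([np.fft.rfft(trial[s:s + nwin] * taper) for s in starts], axis=1)
        for trial in epochs
    ])
    ersp = np.mean(np.abs(F) ** 2, axis=0)                  # (1/n) sum |F_k|^2
    itpc = np.abs(np.mean(F / (np.abs(F) + 1e-12), axis=0))  # |(1/n) sum F_k/|F_k||
    return freqs, ersp, itpc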

The distributions of ERSP and ITPC for the different patient groups are shown in FIG. 2. In the normal-subject group, the ERSP values in the low frequency band (below 20 Hz) were very high and the ITPC values in the 300-400 ms time window were also high, i.e., the phase locking was good; the same phenomenon was observed in some patients with disorders of consciousness who later showed visual tracking (the Responsive and Inconsistent groups), but was not observed in patients without visual tracking.

S4, classifying with the ITPC values of different frequency bands as features to determine the target stimulus; feature vectors of a given frequency band (1-20 Hz) are constructed for the up, down, left and right positions. Each patient performed 20 online trials, completed in two runs of 10 trials each. Before each online run, 10 training trials were collected as a training data set for building the classification model. Feature vectors are extracted from the training data set to train a support vector machine (SVM) or a linear classifier, i.e., the classification model, in which the feature vectors corresponding to the target and the 3 non-targets are labeled 1 and -1, respectively; for each online trial, the trained classification model is applied to the 4 feature vectors corresponding to the up, down, left and right directions to obtain 4 values, and the stimulus direction corresponding to the maximum of the 4 values is regarded as the target, i.e., the moving target key of that trial; the response of the trial is correct if the detected direction coincides with the actual direction.
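A minimal scikit-learn sketch of the training and online detection described in S4 is shown below; the use of scikit-learn and the helper names are assumptions for illustration, as the patent only specifies an SVM or linear classifier with labels 1 and -1 and an argmax decision over the four directions.

import numpy as np
from sklearn.svm import SVC

def train_model(train_features, train_labels):
    # train_features: (n_examples, n_features); train_labels: +1 (target) / -1 (non-target)
    model = SVC(kernel="linear")
    model.fit(train_features, train_labels)
    return model

def detect_target(model, direction_features):
    # direction_features: (4, n_features) ITPC feature vectors for up, down, left, right
    scores = model.decision_function(direction_features)
    return int(np.argmax(scores))   # index of the direction detected as the target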

d. Statistical test and evaluation criteria module

For each subject, the classification accuracy is the number of hits divided by the total number of trials; however, the accuracy of patients with disorders of consciousness is generally low, so, to assess the significance of the classification accuracy, the binomial test based on the Jeffreys' Beta distribution described below was used:

where N represents the number of trials, m is the expected number of hits, a is the accuracy of the expected comparison (here 0.25), λ is the accuracy required to reach significance, and z is the z value of the standard normal distribution; for a one-sided test (i.e., greater than the expected value), the z value is 1.65 at a confidence level of 0.05; among the patients who completed the 20 training trials and 20 online trials, an accuracy of 45% or more was regarded as significant, and when significant ITPC was present at the same time, the subject's visual tracking item was scored 3.
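The equation referred to above is not reproduced in this text. The sketch below shows one plausible reading of a binomial test based on Jeffreys' Beta distribution: the accuracy λ = m/N is taken as significant when the lower one-sided Jeffreys credible bound of the hit rate exceeds the chance level a; with N = 20 trials and a = 0.25 it reproduces the 45% threshold quoted above. The function name and this interpretation are assumptions, not taken from the source.

from scipy.stats import beta

def min_significant_accuracy(N=20, a=0.25, alpha=0.05):
    for m in range(N + 1):
        # Jeffreys posterior Beta(m + 1/2, N - m + 1/2) for the true accuracy
        lower = beta.ppf(alpha, m + 0.5, N - m + 0.5)
        if lower > a:
            return m, m / N
    return None

print(min_significant_accuracy())   # -> (9, 0.45): at least 9 hits in 20 trials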

The above-mentioned embodiments are merely preferred embodiments of the present invention, and the scope of the present invention is not limited thereto; variations based on the shape and principle of the present invention shall also fall within the protection scope of the present invention.
