Method and device for analyzing steady-state cognitive response based on sound stimulation sequence

Document No.: 19628 · Publication date: 2021-09-21

Reading note: This technology, "Method and device for analyzing steady-state cognitive response based on sound stimulation sequence", was designed and created by 梁晓琪, 黄淦, 张治国 and 侯绍辉 on 2021-06-17. Its main content is as follows: The invention discloses a steady-state cognitive response analysis method and device based on a sound stimulation sequence. The method comprises: generating, according to a sound model, a test sound sequence and a comparison sound sequence corresponding to sound sequence generation information input by a user; playing the test sound sequence and the comparison sound sequence to a tester and collecting a first brain signal and a second brain signal respectively; analyzing and processing the first brain signal to obtain first analysis information and the second brain signal to obtain second analysis information; and comparing the first analysis information with the second analysis information for differences according to a difference comparison rule to obtain a steady-state comparative analysis result for the brain's steady-state cognitive components. With this method, the first and second brain signals are analyzed and processed separately, and the intuitively represented first and second analysis information are then compared for differences, greatly improving the accuracy with which the brain's steady-state cognitive components are analyzed.

1. A steady-state cognitive response analysis method based on a sound stimulation sequence, characterized by comprising the following steps:

if sound sequence generation information input by a user is received, generating a test sound sequence corresponding to the sound sequence generation information according to a preset sound model;

generating a comparison sound sequence corresponding to the sound sequence generation information according to a preset sound model;

respectively playing the test sound sequence and the comparison sound sequence to a tester, and collecting a first brain signal of the tester listening to the test sound sequence and a second brain signal of the tester listening to the comparison sound sequence;

analyzing and processing the first brain signal according to a preset signal analysis rule to obtain corresponding first analysis information;

analyzing and processing the second brain signal according to the signal analysis rule to obtain corresponding second analysis information;

and performing difference comparison on the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result.

2. The method according to claim 1, wherein the sound sequence generation information includes sounding time, interval time, pitch frequency range and sequence duration, and the generating of the test sound sequence corresponding to the sound sequence generation information according to a preset sound model includes:

repeatedly and randomly acquiring a plurality of frequency values from the pitch frequency range as a corresponding plurality of target frequency values;

acquiring target sounds matched with each target frequency value from the sound model and generating sound fragments with corresponding duration according to the sounding time;

and performing interval combination on the sound segments according to the interval time to obtain a test sound sequence matched with the sequence duration.

3. The method according to claim 2, wherein the sound sequence generation information further comprises a frequency value obtaining rule, and the repeatedly and randomly obtaining a plurality of frequency values from the pitch frequency range as a corresponding plurality of target frequency values comprises:

and repeatedly and randomly acquiring a plurality of frequency values from the pitch frequency range according to the frequency value acquisition rule to serve as a plurality of corresponding target frequency values.

4. The method according to claim 2, wherein the generating the comparative sound sequence corresponding to the sound sequence generation information according to a preset sound model comprises:

randomly obtaining a frequency value from the pitch frequency range as a reference frequency value;

acquiring reference sound matched with the reference frequency value from the sound model and repeatedly generating reference sound segments with corresponding duration according to the sounding time;

and performing interval combination on the reference sound segments according to the interval time to obtain a comparison sound sequence matched with the sequence duration.

5. The method according to claim 1, wherein the signal analysis rule includes a filtering frequency band, reference channel information and an artifact filtering formula, and the analyzing and processing the first brain signal according to a preset signal analysis rule to obtain corresponding first analysis information includes:

performing time domain sampling on the first brain signal to obtain sampling time domain information corresponding to each time domain;

acquiring sampling filtering information corresponding to the filtering frequency band in each sampling time domain information according to the filtering frequency band;

acquiring channel sample information matched with the reference channel information in each piece of sampling filtering information, and performing re-reference transformation on each piece of sampling filtering information to obtain re-reference transformation information corresponding to each piece of sampling filtering information;

and performing artifact filtering on the re-reference transformation information according to the artifact filtering formula to obtain corresponding first analysis information.

6. The method according to claim 5, wherein the acquiring channel sample information matched with the reference channel information in each piece of sampling filtering information, and performing re-reference transformation on each piece of sampling filtering information to obtain re-reference transformation information corresponding to each piece of sampling filtering information, comprises:

acquiring channel sample information matched with the reference channel information in each piece of sampling filtering information;

calculating a sample average value of each channel sample information;

and calculating the difference value between the channel value of each channel in each sampling filtering information and the corresponding sample average value to obtain the re-reference transformation information corresponding to each sampling filtering information.

7. The method according to claim 1, wherein the performing difference comparison on the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result comprises:

acquiring corresponding first acquisition point power from the first analysis information according to the frequency acquisition points in the difference comparison rule;

acquiring corresponding second acquisition point power from the second analysis information according to the frequency acquisition points in the difference comparison rule;

calculating a difference coefficient between the first acquisition point power and the second acquisition point power according to a difference degree calculation formula in the difference comparison rule, and taking the difference coefficient as the steady-state comparison analysis result.

8. A steady-state cognitive response analysis apparatus based on a sound stimulation sequence, characterized in that the apparatus comprises:

the test sound sequence acquisition unit is used for generating a test sound sequence corresponding to sound sequence generation information according to a preset sound model if the sound sequence generation information input by a user is received;

the comparison sound sequence acquisition unit is used for generating a comparison sound sequence corresponding to the sound sequence generation information according to a preset sound model;

the brain signal acquisition unit is used for respectively playing the test sound sequence and the comparison sound sequence to a tester and acquiring a first brain signal of the tester listening to the test sound sequence and a second brain signal of the tester listening to the comparison sound sequence;

the first analysis information acquisition unit is used for analyzing and processing the first brain signal according to a preset signal analysis rule to obtain corresponding first analysis information;

the second analysis information acquisition unit is used for analyzing and processing the second brain signal according to the signal analysis rule to obtain corresponding second analysis information;

and the analysis result acquisition unit is used for performing difference comparison on the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result.

9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steady-state cognitive response analysis method based on a sound stimulation sequence according to any one of claims 1 to 7 when executing the computer program.

10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steady-state cognitive response analysis method based on a sound stimulation sequence according to any one of claims 1 to 7.

Technical Field

The invention relates to the technical field of steady state cognitive response analysis, in particular to a method and a device for analyzing steady state cognitive response based on a sound stimulation sequence.

Background

The human body produces cognitive responses to all kinds of information in the natural environment; for example, a person forms associations with objects they see, and brain signals change correspondingly when a cognitive response occurs, so the specifics of a cognitive response can be obtained by analyzing a person's brain signals. However, the inventors found that existing cognitive response analysis methods suffer from low analysis accuracy when processing brain signals evoked by sound stimulation, and have difficulty analyzing how different stimuli change brain signals; that is, the differences between different brain signals cannot be analyzed accurately. The prior art therefore cannot accurately analyze the differences between brain signals evoked by stimulation.

Disclosure of Invention

The embodiments of the present invention provide a steady-state cognitive response analysis method, apparatus, computer device and storage medium based on a sound stimulation sequence, aiming to solve the prior-art problem that the differences between brain signals evoked by sound stimulation cannot be analyzed accurately.

In a first aspect, an embodiment of the present invention provides a method for analyzing a steady-state cognitive response based on a sound stimulation sequence, including:

if sound sequence generation information input by a user is received, generating a test sound sequence corresponding to the sound sequence generation information according to a preset sound model;

generating a comparison sound sequence corresponding to the sound sequence generation information according to a preset sound model;

respectively playing the test sound sequence and the comparison sound sequence to a tester, and collecting a first brain signal of the tester listening to the test sound sequence and a second brain signal of the tester listening to the comparison sound sequence;

analyzing and processing the first brain signal according to a preset signal analysis rule to obtain corresponding first analysis information;

analyzing and processing the second brain signal according to the signal analysis rule to obtain corresponding second analysis information;

and performing difference comparison on the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result.

In a second aspect, an embodiment of the present invention provides a steady-state cognitive response analysis apparatus based on a sound stimulation sequence, including:

the test sound sequence acquisition unit is used for generating a test sound sequence corresponding to sound sequence generation information according to a preset sound model if the sound sequence generation information input by a user is received;

the comparison sound sequence acquisition unit is used for generating a comparison sound sequence corresponding to the sound sequence generation information according to a preset sound model;

the brain signal acquisition unit is used for respectively playing the test sound sequence and the comparison sound sequence to a tester and acquiring a first brain signal of the tester listening to the test sound sequence and a second brain signal of the tester listening to the comparison sound sequence;

the first analysis information acquisition unit is used for analyzing and processing the first brain signal according to a preset signal analysis rule to obtain corresponding first analysis information;

the second analysis information acquisition unit is used for analyzing and processing the second brain signal according to the signal analysis rule to obtain corresponding second analysis information;

and the analysis result acquisition unit is used for performing difference comparison on the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result.

In a third aspect, an embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor, when executing the computer program, implements the method for analyzing a steady-state cognitive response based on a sound stimulation sequence according to the first aspect.

In a fourth aspect, the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the method for analyzing a steady-state cognitive response based on a sound stimulation sequence according to the first aspect.

The embodiments of the present invention provide a steady-state cognitive response analysis method, apparatus, device and medium based on a sound stimulation sequence. A test sound sequence and a comparison sound sequence corresponding to sound sequence generation information input by a user are generated according to a sound model; the two sequences are played to a tester and a first brain signal and a second brain signal are collected respectively; the first brain signal is analyzed and processed to obtain first analysis information and the second brain signal to obtain second analysis information; and the first and second analysis information are compared for differences according to a difference comparison rule to obtain a steady-state comparative analysis result for the brain's steady-state cognitive components. With this method, the first and second brain signals can each be analyzed and processed, and the intuitively represented first and second analysis information are then compared for differences, greatly improving the accuracy of analyzing the steady-state cognitive components of the brain.

Drawings

In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings in the following description show some embodiments of the present invention; other drawings can be obtained by those skilled in the art from these drawings without creative effort.

Fig. 1 is a schematic flowchart of a steady-state cognitive response analysis method based on a sound stimulation sequence according to an embodiment of the present invention;

fig. 2 is a schematic sub-flow diagram of a steady-state cognitive response analysis method based on a sound stimulation sequence according to an embodiment of the present invention;

fig. 3 is another schematic flow chart of a steady state cognitive response analysis method based on a sound stimulation sequence according to an embodiment of the present invention;

FIG. 4 is a schematic view of another sub-flow chart of a steady state cognitive response analysis method based on a sound stimulation sequence according to an embodiment of the present invention;

fig. 5 is another schematic flow chart of a steady state cognitive response analysis method based on a sound stimulation sequence according to an embodiment of the present invention;

FIG. 6 is a schematic view of another sub-flow chart of a steady state cognitive response analysis method based on a sound stimulation sequence according to an embodiment of the present invention;

fig. 7 is a schematic block diagram of a steady-state cognitive response analysis apparatus based on a sound stimulation sequence according to an embodiment of the present invention;

FIG. 8 is a schematic block diagram of a computer device provided by an embodiment of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.

Referring to fig. 1, fig. 1 is a schematic flowchart of a steady-state cognitive response analysis method based on a sound stimulation sequence according to an embodiment of the present invention. The method is applied to a user terminal or a management server and is executed by application software installed on that terminal or server. The user terminal, such as a desktop computer, notebook computer, tablet computer or mobile phone, can receive sound sequence generation information input by a user, play the corresponding sound sequence, and analyze the steady-state cognitive component response in brain signals collected by an electroencephalogram cap connected to the user terminal. The management server, such as a server operated by an enterprise, a medical institution or a government department, can receive sound sequence generation information sent by a user through a terminal, play the corresponding sound sequence, and analyze brain signals collected by an electroencephalogram cap connected to the management server. As shown in fig. 1, the method includes steps S110 to S160.

And S110, if sound sequence generation information input by a user is received, generating a test sound sequence corresponding to the sound sequence generation information according to a preset sound model.

If sound sequence generation information input by a user is received, a test sound sequence corresponding to the sound sequence generation information is generated according to a preset sound model. The user can input sound sequence generation information, and a test sound sequence is generated correspondingly on the basis of the sound model and the information input by the user; the test sound sequence is a piece of sound information formed by combining a plurality of sound segments. The sound sequence generation information includes sounding time, interval time, pitch frequency range and sequence duration.

In an embodiment, as shown in fig. 2, step S110 includes sub-steps S111, S112 and S113.

And S111, repeatedly and randomly acquiring a plurality of frequency values from the pitch frequency range as a plurality of corresponding target frequency values.

A frequency value can be randomly acquired from the pitch frequency range as a target frequency value, each random acquisition yielding one target frequency value. Because a plurality of target frequency values are needed to generate the test sound sequence, a plurality of corresponding target frequency values are acquired from the pitch frequency range through multiple random acquisitions.

For example, if the pitch frequency range is 300 Hz–1200 Hz, one frequency value can be randomly obtained from 300 Hz–1200 Hz each time as the target frequency value.

Specifically, the sound sequence generation information further includes a frequency value obtaining rule, and a plurality of frequency values can be repeatedly and randomly obtained from the pitch frequency range according to this rule to serve as the corresponding target frequency values. For example, the rule may require that each frequency value obtained be an integer multiple of 100; target frequency values are then drawn at random from 300 Hz, 400 Hz, ..., 1200 Hz within the 300 Hz–1200 Hz range.
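Such a rule-driven random draw can be sketched as follows (a minimal Python illustration; the function name and parameters are hypothetical, not part of the patent):

```python
import random

def pick_target_frequencies(f_min, f_max, step, count):
    # Draw `count` frequency values (repetition allowed) from the
    # integer multiples of `step` inside [f_min, f_max].
    candidates = list(range(f_min, f_max + 1, step))
    return [random.choice(candidates) for _ in range(count)]

# e.g. 50 target frequency values, multiples of 100 in 300-1200 Hz
target_freqs = pick_target_frequencies(300, 1200, 100, 50)
```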

And S112, acquiring the target sound matched with each target frequency value from the sound model and generating a sound fragment with corresponding duration according to the sound production time.

The sound model stores a sound uniquely corresponding to each frequency value, so the target sound matched with each target frequency value can be obtained from the sound model, each target frequency value corresponding to one target sound. Each target sound is sustained according to the sounding time to obtain a sound segment for that target sound; the duration of each sound segment corresponds to the sounding time, and the frequency values of the plurality of sound segments differ from one another. Specifically, an oscillation signal matched with each target frequency value can be generated using the tone function of a single-chip microcomputer, the corresponding target sound produced from that oscillation signal, and the single target sound emitted by the microcomputer sustained using a delay function, thereby obtaining a sound segment whose duration corresponds to the sounding time.

For example, the sounding time may be 0.03 s, in which case the duration of each sound segment is 0.03 s.

S113, combining the sound segments at intervals according to the interval time to obtain a test sound sequence matched with the sequence duration.

The plurality of sound segments are combined at intervals according to their generation order and the interval time to obtain a test sound sequence matched with the sequence duration; the test sound sequence thereby also matches the sound sequence generation information. The sequence duration is the total duration of the test sound sequence, and the interval time is the length of the gap between two adjacent sound segments in the test sound sequence.

For example, if the sequence duration is 15 s and the interval time is 0.2 s, the obtained sound segments are combined at those intervals to generate a 15 s test sound sequence.
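Steps S111 to S113 can be sketched in numpy as follows. This is a minimal illustration under assumptions the patent does not make: sine tones, a 44.1 kHz audio sample rate, and hypothetical helper names.

```python
import numpy as np

FS = 44100  # audio sample rate in Hz (assumed; not specified in the patent)

def tone(freq, duration, fs=FS):
    # One sound segment: a sine tone at the target frequency value,
    # lasting for the sounding time.
    t = np.arange(int(round(duration * fs))) / fs
    return np.sin(2 * np.pi * freq * t)

def build_test_sequence(freqs, sounding_time, interval_time,
                        sequence_duration, fs=FS):
    # Interval-combine one tone segment per target frequency value,
    # separated by silent gaps, until the sequence duration is filled.
    gap = np.zeros(int(round(interval_time * fs)))
    n_segments = int(sequence_duration // (sounding_time + interval_time))
    parts = []
    for f in freqs[:n_segments]:
        parts.extend([tone(f, sounding_time, fs), gap])
    return np.concatenate(parts)

# 0.03 s tones, 0.2 s gaps, 15 s total, as in the example above
seq = build_test_sequence([300, 500, 700] * 30, 0.03, 0.2, 15.0)
```

A real implementation would draw the frequencies via the frequency value obtaining rule; here 65 tone/gap pairs of 0.23 s each fit within the 15 s sequence duration.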

And S120, generating a comparison sound sequence corresponding to the sound sequence generation information according to a preset sound model.

And generating a comparison sound sequence corresponding to the sound sequence generation information according to a preset sound model. And generating a comparison sound sequence according to the sound model and the sound sequence generation information, wherein the comparison sound sequence is used as a comparison sequence of the test sound sequence.

In an embodiment, as shown in fig. 3, step S120 includes sub-steps S121, S122 and S123.

And S121, randomly acquiring a frequency value from the pitch frequency range as a reference frequency value.

A frequency value can be randomly obtained from the pitch frequency range as a reference frequency value, and the frequency value is obtained from the pitch frequency range only once in the process of generating the comparison sound sequence. The reference frequency value may be any frequency value in the tone frequency range, or may be a frequency value that is an integral multiple of 100 in the tone frequency range.

And S122, obtaining the reference sound matched with the reference frequency value from the sound model and repeatedly generating a reference sound fragment with corresponding duration according to the sounding time.

The reference sound which is uniquely matched with the reference frequency value can be obtained from the sound model, and the reference sound segment with the corresponding duration is repeatedly generated on the basis of the reference sound, so that a plurality of reference sound segments can be repeatedly generated, and the frequency values corresponding to the plurality of reference sound segments are the same.

And S123, combining the reference sound segments at intervals according to the interval time to obtain a comparison sound sequence matched with the sequence duration.

And combining the plurality of reference sound segments at intervals according to the interval time to obtain a comparison sound sequence matched with the sequence duration after combination.
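Steps S121 to S123 admit a similar sketch (same assumptions as before: sine tones, a 44.1 kHz sample rate, hypothetical names). The only difference from the test sequence is that a single reference frequency value is drawn once and the same reference segment is repeated:

```python
import random
import numpy as np

FS = 44100  # assumed audio sample rate in Hz

def comparison_sequence(f_min, f_max, step, sounding_time, interval_time,
                        sequence_duration, fs=FS):
    # Draw one reference frequency value at random, then repeat the same
    # reference tone segment, separated by gaps, for the whole duration.
    f_ref = random.choice(range(f_min, f_max + 1, step))
    t = np.arange(int(round(sounding_time * fs))) / fs
    segment = np.sin(2 * np.pi * f_ref * t)
    gap = np.zeros(int(round(interval_time * fs)))
    n_segments = int(sequence_duration // (sounding_time + interval_time))
    return np.concatenate([segment, gap] * n_segments)

cmp_seq = comparison_sequence(300, 1200, 100, 0.03, 0.2, 15.0)
```

Because every segment uses the same reference frequency, the comparison sequence evokes a steady, unchanging stimulus against which the varying test sequence can be contrasted.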

S130, the test sound sequence and the comparison sound sequence are respectively played to a tester, and a first brain signal of the tester listening to the test sound sequence and a second brain signal of the tester listening to the comparison sound sequence are collected.

The test sound sequence and the comparison sound sequence are respectively played to a tester, and a first brain signal of the tester listening to the test sound sequence and a second brain signal of the tester listening to the comparison sound sequence are collected. The test sound sequence is played to the tester and the first brain signal is acquired; the comparison sound sequence is then played and the second brain signal is acquired. The playing order of the test sound sequence and the comparison sound sequence may also be exchanged. Specifically, a 64-channel electroencephalogram cap can be used to acquire a corresponding 64-channel signal from the tester's brain, in which case the first brain signal and the second brain signal are both 64-channel signals.

And S140, analyzing and processing the first brain signal according to a preset signal analysis rule to obtain corresponding first analysis information.

The first brain signal is analyzed and processed according to a preset signal analysis rule to obtain corresponding first analysis information. Because the tester must listen to the test sound sequence for a certain time, the acquired first brain signal contains time domain and frequency domain information for a plurality of channels and cannot intuitively represent the tester's cognitive response. Analyzing and processing the first brain signal yields the first analysis information, which is displayed as a waveform diagram, so the tester's cognitive response can be represented intuitively through the first analysis information. The signal analysis rule is the specific rule for analyzing and processing the first brain signal; it includes a filtering frequency band, reference channel information and an artifact filtering formula.

In an embodiment, as shown in fig. 4, step S140 includes sub-steps S141, S142, S143, and S144.

And S141, performing time domain sampling on the first brain signal to obtain sampling time domain information corresponding to each time domain.

Time domain sampling can be performed on the first brain signal, and the sampling time domain information corresponding to each time domain is obtained through this sampling. A time domain is the unit of time used in time domain sampling: the signal of each channel in the first brain signal is segmented by time domain to obtain signal segmentation information for each channel, time domain information is sampled from each channel's signal segmentation information at a preset sampling rate, and the time domain information of all channels within the same time domain is combined into that time domain's sampling time domain information.

For example, if the duration of the test sound sequence is 15 s, the duration of acquiring the first brain signal is 20 s, the time domain is 1 s and the sampling rate is 1/1000 s, the signal of each channel in the first brain signal may be segmented into pieces of signal segmentation information of duration 1 s, and time domain information is sampled from each channel's signal segmentation information at that sampling rate, so the time domain information for each 1 s segment contains the values of the signal segmentation information at 1000 time points. Finally, the sampling time domain information corresponding to each 1 s time domain may be represented as an N × M matrix, where N is the number of channels and M is the number of time point values per piece of signal segmentation information; for example, N may be 64 and M may be 1000.
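The segmentation just described can be sketched as follows; the helper name is hypothetical, while the 64 channels, 1 s time domain and 1/1000 s sampling rate come from the example:

```python
import numpy as np

FS = 1000      # samples per second, i.e. a 1/1000 s sampling rate
EPOCH_SEC = 1  # one time domain lasts 1 s

def epoch_signal(eeg, fs=FS, epoch_sec=EPOCH_SEC):
    # Split an (n_channels, n_samples) recording into one N x M matrix
    # per time domain: N channels, M time point values.
    n_ch, n_samp = eeg.shape
    m = int(fs * epoch_sec)
    return [eeg[:, i * m:(i + 1) * m] for i in range(n_samp // m)]

rng = np.random.default_rng(0)
eeg = rng.standard_normal((64, 20 * FS))  # mock 20 s, 64-channel recording
epochs = epoch_signal(eeg)                # 20 matrices of shape 64 x 1000
```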

And S142, acquiring sampling filtering information corresponding to the filtering frequency band in each sampling time domain information according to the filtering frequency band.

In the specific processing, a Fast Fourier Transform (FFT) may be performed on the curve segment formed by the plurality of sampled values of each channel in the sampling time-domain information, yielding a corresponding continuous spectrum per channel. The portion of each spectrum corresponding to the filtering frequency band is then taken as the target curve; for example, if the filtering frequency band is 0.05 to 50 Hz, the 0.05 to 50 Hz portion of each spectrum is taken as the target curve. An inverse Fourier transform is performed on each target curve to obtain inverse-transform frequency information, which consists of a plurality of channel frequency values corresponding to each channel; the inverse-transform frequency information of all channels in each piece of sampling time-domain information is combined as the sampling filtering information of that piece of sampling time-domain information.
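The forward-transform, band-mask, inverse-transform procedure can be sketched with NumPy's real FFT routines; the 0.05–50 Hz band is the example value from the text, and the zero-outside-the-band approach is a simplified stand-in for the filtering step:

```python
import numpy as np

def fft_bandpass(epoch, sampling_rate, low_hz=0.05, high_hz=50.0):
    """Band-pass filter each channel of a (channels, samples) epoch by
    zeroing FFT coefficients outside [low_hz, high_hz] and inverting."""
    freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / sampling_rate)
    spectrum = np.fft.rfft(epoch, axis=1)           # forward FFT per channel
    keep = (freqs >= low_hz) & (freqs <= high_hz)   # mask for the target curve
    spectrum[:, ~keep] = 0.0                        # discard out-of-band content
    return np.fft.irfft(spectrum, n=epoch.shape[1], axis=1)  # inverse transform

epoch = np.random.randn(64, 1000)
filtered = fft_bandpass(epoch, sampling_rate=1000)
```

A quick sanity check of the design: a pure 60 Hz component falls outside the 0.05–50 Hz band and is therefore removed almost entirely, while the output keeps the original (channels, samples) shape.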

S143, obtaining channel sample information matched with the reference channel information in each piece of sampling filtering information, and performing re-reference transformation on each piece of sampling filtering information to obtain re-reference transformation information corresponding to each piece of sampling filtering information.

Channel sample information corresponding to the reference channel information can be obtained from each piece of sampling filtering information, so that each piece of sampling filtering information has its corresponding channel sample information; re-reference transformation is then performed on each piece of sampling filtering information based on its channel sample information to obtain the corresponding re-reference transformation information.

In an embodiment, as shown in fig. 5, step S143 includes sub-steps S1431, S1432 and S1433.

S1431, obtaining channel sample information matched with the reference channel information in each of the sampling filtering information.

The reference channel information may include one channel or a plurality of channels. For example, if a piece of sampling filtering information contains sample information of 64 channels and the reference channel information is TP9 and TP10, then the sample information of the channels corresponding to TP9 and TP10 is obtained as the channel sample information of that piece of sampling filtering information. In this way, the channel sample information corresponding to each piece of sampling filtering information can be obtained.

And S1432, calculating a sample average value of each channel sample information.

The sample average value of each piece of channel sample information is calculated by averaging the information values of the plurality of sampling frequencies contained in that channel sample information, giving the sample average value used to re-reference the corresponding sampling filtering information.

S1433, calculating a difference between a channel value of each channel in each piece of the sampling filtering information and a corresponding sample average value, and obtaining re-reference transformation information corresponding to each piece of the sampling filtering information.

The sample information of each channel in each piece of sampling filtering information comprises information values of a plurality of sampling frequencies. The difference between each channel value in each piece of sampling filtering information and the corresponding sample average value is calculated, and the plurality of difference values calculated for each piece of sampling filtering information are combined to obtain the re-reference transformation information corresponding to that piece of sampling filtering information.

For example, the sample average value is 4, one channel value in a certain sampling filtering information corresponding to the sample average value is 2, and the difference value corresponding to the channel value is-2; the other channel value is 9 and the difference corresponding to this channel value is 5.
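A minimal sketch of this re-referencing, under the reading (consistent with the worked example above) that a single scalar average of the reference-channel values is subtracted from every channel value. Many EEG toolboxes instead subtract a per-time-point reference average, so treat this as one interpretation of the text:

```python
import numpy as np

def rereference(epoch, ref_indices):
    """Subtract the average of all reference-channel values from every value
    in a (channels, samples) epoch, as in the worked example: with a sample
    average of 4, a channel value of 2 becomes -2 and a value of 9 becomes 5."""
    sample_average = epoch[ref_indices, :].mean()  # scalar average over reference channels
    return epoch - sample_average

epoch = np.array([[2.0, 9.0],
                  [3.0, 5.0],
                  [4.0, 4.0]])
# Channels 1 and 2 act as references; their values [3, 5, 4, 4] average to 4.
rereferenced = rereference(epoch, [1, 2])  # first row becomes [-2.0, 5.0]
```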

S144, performing artifact filtering on the re-reference transformation information according to the artifact filtering formula to obtain corresponding first analysis information.

The artifact filtering formula is a Blind Signal Separation (BSS) method calculation formula constructed based on Independent Component Analysis (ICA). The basic principle of independent component analysis can be expressed by equation (1):

X=A×S (1);

where X is the recorded electroencephalogram (EEG) signal, which can be represented by matrix data of dimension channel × time; S is the source signal, which can be represented by matrix data of dimension component × time; and A is the mixing matrix, which can be represented by matrix data of dimension channel × component. The purpose of the independent component analysis is to find a mixing matrix A such that the components (rows) of S are mutually independent.

Combining this basic principle with the above linear model, an artifact filtering formula can be constructed based on independent component analysis. The specific calculation process includes: calculating the mixing matrix A from all of the re-reference transformation information using an independent component analysis algorithm, and then calculating the source signal S according to a first formula in the artifact filtering formula, where the first formula can be expressed by formula (2):

S=pinv(A)×X (2);

where S is the calculated source signal, pinv(A) denotes the pseudo-inverse of the mixing matrix A, and X is the matrix formed by combining all of the re-reference transformation information.

According to a preset numerical modification template, the values contained in the rows of the source signal S that match the template are modified to 0, obtaining S_bar; the first analysis information is then calculated according to a second formula in the artifact filtering formula, where the second formula can be represented by formula (3):

X_bar=A×S_bar (3);

where X_bar is the first analysis information obtained by artifact filtering. The obtained first analysis information can be represented by a two-dimensional waveform diagram in which the abscissa is the frequency value (in Hz) and the ordinate is the power (in V²/Hz).
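Formulas (2) and (3) can be sketched as follows. The mixing matrix A is assumed to have been estimated already by an ICA algorithm (for example, scikit-learn's `FastICA` exposes a `mixing_` attribute), and the row indices to zero out (the numerical modification template) are passed in explicitly here as an assumption:

```python
import numpy as np

def remove_artifact_components(X, A, artifact_rows):
    """Given re-referenced data X (channels x time) and an ICA mixing matrix A
    (channels x components): S = pinv(A) @ X (formula (2)), zero the artifact
    rows to get S_bar, then X_bar = A @ S_bar (formula (3))."""
    S = np.linalg.pinv(A) @ X          # unmix the data into source components
    S_bar = S.copy()
    S_bar[artifact_rows, :] = 0.0      # apply the numerical modification template
    return A @ S_bar                   # re-mix without the artifact components

# Toy example: 2 sources mixed into 2 channels; remove source 1 as an artifact.
A = np.array([[1.0, 0.5],
              [0.2, 1.0]])
S = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
X = A @ S
X_bar = remove_artifact_components(X, A, artifact_rows=[1])
```

Because A is square and invertible in this toy case, pinv(A) recovers S exactly, so X_bar equals A multiplied by S with its second row zeroed.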

And S150, analyzing and processing the second brain signal according to the signal analysis rule to obtain corresponding second analysis information.

The second brain signal may be analyzed and processed according to the signal analysis rule to obtain corresponding second analysis information; the analysis process is the same as that of the first brain signal and is not repeated here. The second analysis information obtained can likewise be represented by a two-dimensional waveform diagram in which the frequency value (in Hz) is plotted on the abscissa and the power (in V²/Hz) on the ordinate.

And S160, performing difference comparison on the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result.

And performing difference comparison on the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result. The difference comparison can be performed on the first analysis information and the second analysis information through a difference comparison rule, the difference comparison rule is a specific rule for comparing the difference between the first analysis information and the second analysis information, a steady-state comparison analysis result can be obtained through the difference comparison, and the steady-state comparison analysis result can quantitatively express the difference between the first analysis information and the second analysis information.

In one embodiment, as shown in fig. 6, step S160 includes sub-steps S161, S162, and S163.

S161, acquiring corresponding first acquisition point power from the first analysis information according to the frequency acquisition points in the difference comparison rule.

Specifically, the difference comparison rule includes at least one frequency acquisition point, and the first analysis information may be represented as a two-dimensional waveform diagram. A power value corresponding to each frequency acquisition point is obtained from that waveform diagram: each power value is the ordinate of the data point whose abscissa equals the frequency acquisition point, and each power value is greater than 0. The power values corresponding to all frequency acquisition points together constitute the first acquisition point power corresponding to the first analysis information.
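Reading power values off the waveform can be sketched as below; the nearest-frequency-bin lookup is an illustrative assumption, since the text does not specify how an acquisition point falling between bins is handled:

```python
import numpy as np

def acquisition_point_power(freqs, powers, acquisition_points):
    """Read the power value at each frequency acquisition point from a
    (frequency, power) curve, taking the nearest available frequency bin."""
    idx = [int(np.argmin(np.abs(freqs - p))) for p in acquisition_points]
    return np.array([powers[i] for i in idx])

freqs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # abscissa of the waveform (Hz)
powers = np.array([0.1, 0.5, 2.0, 0.4, 0.2])   # ordinate (V^2/Hz)
p = acquisition_point_power(freqs, powers, [2.0, 4.0])  # powers at 2 Hz and 4 Hz
```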

And S162, acquiring corresponding second acquisition point power from the second analysis information according to the frequency acquisition points in the difference comparison rule.

In the same way, the second acquisition point power corresponding to the frequency acquisition points is obtained from the second analysis information; the number of power values contained in the second acquisition point power equals the number of power values contained in the first acquisition point power.

S163, calculating a difference coefficient between the first acquisition point power and the second acquisition point power according to a difference degree calculation formula in the difference comparison rule, and taking the difference coefficient as the steady-state comparison analysis result.

Specifically, the difference coefficient between the first collection point power and the second collection point power can be calculated according to a difference calculation formula, and the difference calculation formula can be represented by formula (4):

C = (1/T) × Σ_{i=1}^{T} |f_ia − f_ib| (4);

where T is the total number of power values contained in the first acquisition point power, f_ia is the ith power value in the first acquisition point power, f_ib is the ith power value in the second acquisition point power, and C is the calculated difference coefficient. The larger the difference coefficient, the larger the difference between the first acquisition point power and the second acquisition point power, that is, the larger the difference between the first analysis information and the second analysis information.
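The comparison step can be sketched as follows; formula (4) is assumed here to be a mean absolute difference over the T acquisition points, which is one plausible form consistent with the variables T, f_ia, and f_ib described above rather than a confirmed reading of the patent:

```python
import numpy as np

def difference_coefficient(power_a, power_b):
    """Illustrative difference coefficient between two equal-length arrays of
    acquisition-point powers: the mean absolute difference over T points.
    (Assumed form of formula (4), not taken verbatim from the patent.)"""
    fa = np.asarray(power_a, dtype=float)
    fb = np.asarray(power_b, dtype=float)
    assert fa.shape == fb.shape  # both contain the same number of power values
    return np.abs(fa - fb).mean()

# Per-point absolute differences |0|, |1|, |3| average to 4/3.
c = difference_coefficient([1.0, 2.0, 4.0], [1.0, 1.0, 1.0])
```

A larger C indicates a larger discrepancy between the two power curves, matching the qualitative interpretation given in the text.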

In the cognitive response analysis method based on the sound stimulation sequence provided by the embodiment of the invention, a test sound sequence and a comparison sound sequence corresponding to sound sequence generation information input by a user are respectively generated according to a sound model, the test sound sequence and the comparison sound sequence are respectively played to a tester, a first brain signal and a second brain signal are respectively acquired, the first brain signal is analyzed and processed to obtain first analysis information, the second brain signal is analyzed and processed to obtain second analysis information, and the first analysis information and the second analysis information are subjected to difference comparison according to a difference comparison rule to obtain a steady-state comparison analysis result. By the method, the specific test sound sequence and the specific comparison sound sequence are generated, the first analysis information and the second analysis information which are visually represented are obtained by analyzing and processing the first brain signal and the second brain signal which are respectively obtained, and the accuracy of analyzing the difference between the brain signals caused by sound stimulation is greatly improved by comparing the difference between the first analysis information and the second analysis information which are visually represented.

The embodiment of the invention also provides a cognitive response analysis device based on the sound stimulation sequence, wherein the cognitive response analysis device based on the sound stimulation sequence can be configured in a user terminal or a management server, and the cognitive response analysis device based on the sound stimulation sequence is used for executing any embodiment of the cognitive response analysis method based on the sound stimulation sequence. Specifically, referring to fig. 7, fig. 7 is a schematic block diagram of a cognitive response analysis device based on a sound stimulation sequence according to an embodiment of the present invention.

As shown in fig. 7, the cognitive response analysis device 100 based on a sound stimulus sequence includes a test sound sequence acquisition unit 110, a comparison sound sequence acquisition unit 120, a brain signal acquisition unit 130, a first analysis information acquisition unit 140, a second analysis information acquisition unit 150, and an analysis result acquisition unit 160.

A test sound sequence obtaining unit 110, configured to, if sound sequence generation information input by a user is received, generate a test sound sequence corresponding to the sound sequence generation information according to a preset sound model.

In one embodiment, the test sound sequence acquiring unit 110 includes sub-units: a target frequency value acquisition unit for repeatedly and randomly acquiring a plurality of frequency values from the tone frequency range as a corresponding plurality of target frequency values; the sound fragment generating unit is used for acquiring target sounds matched with each target frequency value from the sound model and generating sound fragments with corresponding duration according to the sound production time; and the first sound sequence acquisition unit is used for carrying out interval combination on the sound segments according to the interval time so as to obtain a test sound sequence matched with the sequence duration.

A comparison sound sequence obtaining unit 120, configured to generate a comparison sound sequence corresponding to the sound sequence generation information according to a preset sound model.

In one embodiment, the comparison sound sequence obtaining unit 120 includes sub-units: a reference frequency value acquisition unit for randomly acquiring a frequency value from the tone frequency range as a reference frequency value; the reference sound fragment generating unit is used for acquiring reference sound matched with the reference frequency value from the sound model and repeatedly generating reference sound fragments with corresponding duration according to the sounding time; and the second sound sequence acquisition unit is used for carrying out interval combination on the plurality of reference sound segments according to the interval time so as to obtain a comparison sound sequence matched with the sequence duration.

The brain signal acquiring unit 130 is configured to play the test sound sequence and the comparison sound sequence to a tester respectively, and acquire a first brain signal of the tester listening to the test sound sequence and a second brain signal of the tester listening to the comparison sound sequence.

The first analysis information obtaining unit 140 is configured to analyze the first brain signal according to a preset signal analysis rule to obtain corresponding first analysis information.

In one embodiment, the first analysis information obtaining unit 140 includes sub-units: the sampling time domain information acquisition unit is used for performing time domain sampling on the first brain signal to obtain sampling time domain information corresponding to each time domain; the sampling filtering information acquisition unit is used for acquiring sampling filtering information corresponding to the filtering frequency band in each sampling time domain information according to the filtering frequency band; the re-reference conversion processing unit is used for acquiring channel sample information matched with the reference channel information in each piece of sampling filtering information, and performing re-reference conversion on each piece of sampling filtering information to obtain re-reference conversion information corresponding to each piece of sampling filtering information; and the artifact filtering unit is used for performing artifact filtering on the re-reference transformation information according to the artifact filtering formula to obtain corresponding first analysis information.

In an embodiment, the re-reference transform processing unit comprises sub-units: a channel sample information obtaining unit, configured to obtain channel sample information that matches the reference channel information in each of the sampling filter information; the frequency average value acquisition unit is used for calculating the frequency average value of each channel sample information; and the frequency difference value acquisition unit is used for calculating the frequency difference value between the channel frequency value of each channel in each piece of sampling filtering information and the corresponding frequency average value to obtain the re-reference transformation information corresponding to each piece of sampling filtering information.

And a second analysis information obtaining unit 150, configured to perform analysis processing on the second brain signal according to the signal analysis rule to obtain corresponding second analysis information.

An analysis result obtaining unit 160, configured to perform difference comparison on the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result.

In one embodiment, the analysis result obtaining unit 160 includes sub-units: the first acquisition point power acquisition unit is used for acquiring corresponding first acquisition point power from the first analysis information according to the frequency acquisition points in the difference comparison rule; the second acquisition point power acquisition unit is used for acquiring corresponding second acquisition point power from the second analysis information according to the frequency acquisition points in the difference comparison rule; and the difference coefficient calculation unit is used for calculating a difference coefficient between the first acquisition point power and the second acquisition point power according to a difference degree calculation formula in the difference comparison rule, and taking the difference coefficient as the steady-state comparison analysis result.

The cognitive response analysis device based on the voice stimulus sequence provided by the embodiment of the invention applies the steady state cognitive response analysis method based on the voice stimulus sequence, respectively generates a test voice sequence and a comparison voice sequence corresponding to voice sequence generation information input by a user according to a voice model, respectively plays the test voice sequence and the comparison voice sequence to a tester, respectively acquires a first brain signal and a second brain signal, respectively analyzes and processes the first brain signal to obtain first analysis information, analyzes and processes the second brain signal to obtain second analysis information, and performs difference comparison on the first analysis information and the second analysis information according to a difference comparison rule to obtain a steady state comparison analysis result of the steady state cognitive component of the brain. By the method, the first brain signals and the second brain signals which are obtained respectively can be analyzed and processed respectively, and then the visually-expressed first analysis information and the second analysis information are compared in a difference mode, so that the accuracy of analyzing the steady state cognitive components of the brain is greatly improved.

The above-mentioned cognitive response analysis apparatus based on sound stimulus sequences may be implemented in the form of a computer program, which may be run on a computer device as shown in fig. 8.

Referring to fig. 8, fig. 8 is a schematic block diagram of a computer device according to an embodiment of the present invention. The computer device can be a user terminal or a management server which is used for playing a corresponding sound sequence and analyzing brain signals collected by an electroencephalogram cap connected with the computer device so as to realize cognitive response analysis based on the sound stimulation sequence.

Referring to fig. 8, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a storage medium 503 and an internal memory 504.

The storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, may cause the processor 502 to perform a method of cognitive response analysis based on a sound stimulus sequence, wherein the storage medium 503 may be a volatile storage medium or a non-volatile storage medium.

The processor 502 is used to provide computing and control capabilities that support the operation of the overall computer device 500.

The internal memory 504 provides an environment for running the computer program 5032 in the storage medium 503; when the computer program 5032 is executed by the processor 502, the processor 502 is caused to perform the cognitive response analysis method based on a sound stimulation sequence.

The network interface 505 is used for network communication, such as providing transmission of data information. Those skilled in the art will appreciate that the configuration shown in fig. 8 is a block diagram of only a portion of the configuration associated with aspects of the present invention and is not intended to limit the computing device 500 to which aspects of the present invention may be applied, and that a particular computing device 500 may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.

The processor 502 is configured to run the computer program 5032 stored in the memory to implement the corresponding functions in the cognitive response analysis method based on the sound stimulation sequence.

Those skilled in the art will appreciate that the embodiment of a computer device illustrated in fig. 8 does not constitute a limitation on the specific construction of the computer device, and that in other embodiments a computer device may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. For example, in some embodiments, the computer device may only include a memory and a processor, and in such embodiments, the structures and functions of the memory and the processor are consistent with those of the embodiment shown in fig. 8, and are not described herein again.

It should be understood that, in the embodiment of the present invention, the Processor 502 may be a Central Processing Unit (CPU), and the Processor 502 may also be other general-purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, and the like. Wherein a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.

In another embodiment of the invention, a computer-readable storage medium is provided. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium. The computer readable storage medium stores a computer program, wherein the computer program, when executed by a processor, implements the steps included in the above-described method for analyzing a cognitive response based on a sound stimulus sequence.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, devices, and units may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two; the components and steps of the examples have been described above generally in terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

In the embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only a logical division, and there may be other divisions when the actual implementation is performed, or units having the same function may be grouped into one unit, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.

In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a computer-readable storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned computer-readable storage media comprise: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.

While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
