Brain vision detection and analysis device and method based on neural feedback

Document No.: 1725064  Publication date: 2019-12-20

Note: This technology, "Brain vision detection and analysis device and method based on neural feedback" (一种基于神经反馈的大脑视觉检测分析设备和方法), was designed and created by 毕宏生, 陈嵘 and 毕爱玲 on 2019-07-31. The main content is as follows: the application discloses a brain vision detection and analysis device and method based on neural feedback. The device comprises a detection device, a first server and an electronic acupuncture device. The detection device is used for acquiring physiological data of a user and the user's current visual function parameters. The first server is used for updating the current training sample and the current neural network model according to the current visual function parameters and the physiological data, and for outputting the vision disorder type and the vision disorder grade of the user through the updated neural network model. The first server is also used for determining electronic acupuncture data according to the vision disorder type and the vision disorder grade, and for matching corresponding electric needle device operation data for the user according to the electronic acupuncture data. The electronic acupuncture device is used for performing electronic acupuncture for the user according to the electric needle device operation data. The application can not only adjust the electronic acupuncture parameters in real time to adapt to the user's current vision condition, but can also select preferred acupuncture points and point combinations.

1. A brain vision detection and analysis device based on neural feedback, characterized by comprising a detection device, a first server and an electronic acupuncture device;

the detection device is used for acquiring physiological data and visual function parameters of a user, wherein the physiological data comprises: oxyhemoglobin concentration change data and/or deoxyhemoglobin concentration change data;

the first server is used for updating a current training sample and a current neural network model according to the current visual function parameters and the physiological data; and outputting the vision disorder type and vision disorder grade of the user through the updated neural network model, taking the current visual function parameters and the physiological data as input;

the first server is further used for determining at least one piece of electronic acupuncture data according to the vision disorder type and the vision disorder grade; the first server prestores a plurality of electronic acupuncture data, and the electronic acupuncture data are used for indicating the corresponding relation between the vision disorder type and/or the vision disorder grade and the electric acupuncture parameter, the acupuncture point and the acupuncture type; according to the at least one piece of electronic acupuncture data, matching corresponding electric acupuncture device operation data for the user, wherein the electric acupuncture device operation data comprise: electric needle parameters, acupuncture types and acupuncture points;

the electronic acupuncture device is used for performing electronic acupuncture for the user according to the operation data of the electric acupuncture device.

2. The apparatus of claim 1, further comprising: a medical care terminal;

the medical care terminal is used for acquiring vision reference data, and the vision reference data comprises: historical vision condition data of the user and/or data affecting vision; wherein the data affecting vision comprises: constitution data of the user affecting vision and lifestyle data of the user affecting vision;

the first server is used for receiving the vision reference data and determining a vision characteristic type identifier corresponding to the user according to the vision disorder type, the vision disorder grade and the vision reference data, wherein the vision characteristic type identifier is used for indicating the type of the vision condition of the user; and determining at least one piece of electronic acupuncture data according to the vision characteristic type identifier.

3. The apparatus of claim 2,

the first server is used for extracting at least one keyword from the vision reference data, wherein the keyword comprises one or more of historical vision condition keywords, constitutional keywords and life style keywords of the user; according to the keywords, respectively determining corresponding vision influence factors, wherein the vision influence factors comprise: one or more of historical vision status, physical constitution, and lifestyle of the user; respectively characterizing the vision disorder type, the vision disorder grade and the vision influencing factor by numbers, letters or a combination of the numbers and the letters; and generating a vision characteristic type identifier corresponding to the user by using the vision disorder type, the vision disorder grade and the number, letter or combination of the two corresponding to each vision influence factor.

4. The apparatus of claim 3,

the first server is further used for configuring correction parameters for the vision influence factors according to the values corresponding to the vision influence factors; determining a total correction parameter according to each correction parameter; and correcting the vision characteristic type identifier according to the corresponding numerical value of the total correction parameter.

5. The apparatus of claim 2,

the first server is used for determining a plurality of query keywords according to the vision characteristic type identifier; respectively determining the number of the query keywords contained in each piece of electronic acupuncture data in a knowledge base; and acquiring, from the knowledge base, electronic acupuncture data whose similarity to the vision characteristic type identifier is greater than a threshold value according to the number of the query keywords, wherein the knowledge base is preset in the first server.

6. The apparatus of claim 2,

the first server is used for determining at least one logical operation formula in the at least one piece of electronic acupuncture data; outputting at least one electric needle parameter, at least one acupuncture point to be needled and at least one acupuncture type through the at least one logical operation formula, taking the vision disorder type, the vision disorder grade and the vision reference data in the vision characteristic type identifier as input; and determining at least one piece of electric needle device operation data according to the at least one electric needle parameter, the at least one acupuncture point to be needled and the at least one acupuncture type.

7. The apparatus of claim 6,

the first server is used for acquiring a theoretical acupuncture point set and a theoretical acupuncture category set from a knowledge base according to the vision characteristic type identifier, wherein the knowledge base is preset in the first server;

when the first server obtains the electric needle device operation data, the first server is used for respectively detecting whether the acupuncture points to be needled and the acupuncture types belong to the theoretical acupuncture point set and the theoretical acupuncture category set;

when the detection result is negative, the first server is used for re-determining at least one piece of electronic acupuncture data according to the vision characteristic type identifier; determining at least one new logical operation formula according to the re-determined at least one piece of electronic acupuncture data, and outputting at least one piece of electric needle device operation data again according to each new logical operation formula.

8. The apparatus of claim 7,

the first server is further used for detecting whether the theoretical acupuncture point set and the theoretical acupuncture category set of the user are acquired for the first time;

when it is determined to be the first acquisition, the first server is used for querying for the theoretical acupuncture point set and the theoretical acupuncture category set from a preset query starting node of the knowledge base, and acquiring the theoretical acupuncture point set and the theoretical acupuncture category set;

when it is determined not to be the first acquisition, the first server is used for determining the last acquisition record of the user; determining a query starting node of the user in the knowledge base according to the acquisition record, querying for the theoretical acupuncture point set and the theoretical acupuncture category set from the query starting node of the user, and acquiring the theoretical acupuncture point set and the theoretical acupuncture category set.

9. The apparatus of claim 6,

when the first server determines a plurality of electro-acupuncture modes according to the at least one logical operation formula, the first server is used for receiving historical vision condition data of the user from the medical care terminal; constructing a database according to the at least one electronic acupuncture data and the historical vision condition data of the user; determining one or more of the electric needle device operation data by data reorganization and/or data correction based on the data in the database.

10. The apparatus of claim 9, further comprising: a second server;

when the first server determines a plurality of electric acupuncture device operation data according to the at least one piece of electronic acupuncture data and the historical vision condition data of the user, the first server is used for sending each piece of electric acupuncture device operation data to the electronic acupuncture device;

the electronic acupuncture device is used for assigning a serial number to each piece of electric acupuncture device operation data; executing each piece of electric acupuncture device operation data in sequence according to the serial numbers, and sending each piece of electric acupuncture device operation data and the corresponding serial number to a second server;

the second server is used for sending a first instruction to the detection device when receiving the plurality of pieces of electric needle device operation data and corresponding serial numbers, so that the detection device establishes a connection with the second server and disconnects from the first server; receiving, from the detection device, a plurality of brain oxyhemoglobin variation quantities corresponding to the plurality of pieces of electric needle device operation data, and determining the electric needle device operation data corresponding to the maximum brain oxyhemoglobin variation quantity as target data; and sending a second instruction to the detection device so that the detection device disconnects from the second server, establishes a connection with the first server, and sends the physiological data corresponding to the target data to the first server.

11. The apparatus of claim 1, wherein the electronic acupuncture device comprises: an acquisition module, an image processing module and an acupuncture point positioning module;

the acquisition module is used for acquiring a facial image of the user;

the image processing module is used for determining an eyebrow-eye image of the user from the facial image of the user, wherein the eyebrow-eye image is an image containing the eyebrows and eye areas of the user; carrying out gray-scale processing on the eyebrow-eye image; constructing a transverse operator according to the size of the gray-scale eyebrow-eye image, wherein the size of the transverse operator is an odd number; convolving the transverse operator with the gray-scale eyebrow-eye image to obtain a transverse gray-scale variation curve of the eyebrow-eye image; taking the position of the maximum value of the transverse gray-scale variation curve as the transverse central position of the eye area; starting from the transverse central position of the eye area, taking the two positions reached by moving upward and downward along the longitudinal direction by a preset proportion as the upper boundary and the lower boundary of the eye area; intercepting the eyebrow-eye image according to the upper boundary and the lower boundary of the eye area to obtain a transverse position image of the eye area; calculating, for each pixel column in the left half or the right half of the transverse position image, a longitudinal gray-scale integration function between the upper boundary and the lower boundary to obtain a longitudinal gray-scale integration curve; and, among all peaks and troughs of the longitudinal gray-scale integration curve, taking the positions in the transverse position image corresponding to the leftmost and rightmost peaks or troughs as the left boundary and the right boundary of the eye area in the longitudinal direction;

the acupuncture point positioning module is used for intercepting the transverse position image according to the left boundary and the right boundary of the eye area and determining the eye area on the eyebrow-eye image; and determining the position information of the corresponding eye acupuncture points on the facial image according to the determined eye area and the corresponding position relation between the eye acupuncture points on the face and the eye area.

12. The apparatus of claim 1,

the current visual function parameters include one or more of: visual acuity, diopter, refractive power, peripheral vision, fusion function parameters, stereoscopic vision function parameters, simultaneous vision function parameters, noise suppression capability parameters, and spatial contrast sensitivity.

13. A brain vision detection and analysis method based on neural feedback, characterized by comprising the following steps:

the detection device acquires physiological data and visual function parameters of a user, wherein the physiological data comprises: oxyhemoglobin concentration change data and/or deoxyhemoglobin concentration change data;

the first server updates the current training sample and the current neural network model according to the current visual function parameters and the physiological data; and outputs the vision disorder type and vision disorder grade of the user through the updated neural network model, taking the current visual function parameters and the physiological data as input;

the first server determines at least one piece of electronic acupuncture data according to the vision disorder type and the vision disorder grade; the first server prestores a plurality of electronic acupuncture data, and the electronic acupuncture data are used for indicating the corresponding relation between the vision disorder type and/or the vision disorder grade and the electric acupuncture parameter, the acupuncture point and the acupuncture type; according to the at least one piece of electronic acupuncture data, matching corresponding electric acupuncture device operation data for the user, wherein the electric acupuncture device operation data comprise: electric needle parameters, acupuncture types and acupuncture points;

and the electronic acupuncture device performs electronic acupuncture for the user according to the operation data of the electric acupuncture device.

Technical Field

The application relates to the field of brain vision detection, and in particular to a brain vision detection and analysis device and method based on neural feedback.

Background

In the electronic acupuncture technique, a small low-frequency pulse current generated by an electro-acupuncture apparatus is passed through filiform needles inserted into acupoints of the human body in order to stimulate those acupoints.

Generally, medical staff set electronic acupuncture parameters such as the acupuncture point or acupuncture point combination, electric wave waveform, electric wave frequency, current intensity, needling time and acupuncture type on the electronic acupuncture device according to the vision condition of a user.

However, once the electronic acupuncture parameters have been set in this way, the medical staff cannot adjust them in real time to adapt to the user's current vision condition, which degrades the user experience.

Disclosure of Invention

In order to solve the above problems, the present application provides a brain vision detection and analysis apparatus and method based on neural feedback, which can adjust electronic acupuncture parameters in real time to adapt to the current vision condition of a user, thereby improving user experience.

In a first aspect, the present application provides a brain vision detection and analysis device based on neurofeedback, which includes a detection device, a first server, and an electronic acupuncture device;

the detection device is used for acquiring physiological data and visual function parameters of a user, wherein the physiological data comprises: oxyhemoglobin concentration change data and/or deoxyhemoglobin concentration change data;

the first server is used for updating a current training sample and a current neural network model according to the current visual function parameters and the physiological data; and outputting the vision disorder type and vision disorder grade of the user through the updated neural network model, taking the current visual function parameters and the physiological data as input;

the first server is further used for determining at least one piece of electronic acupuncture data according to the vision disorder type and the vision disorder grade; the first server prestores a plurality of electronic acupuncture data, and the electronic acupuncture data are used for indicating the corresponding relation between the vision disorder type and/or the vision disorder grade and the electric acupuncture parameter, the acupuncture point and the acupuncture type; according to the at least one piece of electronic acupuncture data, matching corresponding electric acupuncture device operation data for the user, wherein the electric acupuncture device operation data comprise: electric needle parameters, acupuncture types and acupuncture points;

the electronic acupuncture device is used for performing electronic acupuncture for the user according to the operation data of the electric acupuncture device.

In one example, the apparatus further comprises: a medical care terminal; the medical care terminal is used for acquiring vision reference data, and the vision reference data comprises: historical vision condition data of the user and/or data affecting vision; wherein the data affecting vision comprises: constitution data of the user affecting vision and lifestyle data of the user affecting vision;

the first server is used for receiving the vision reference data and determining a vision characteristic type identifier corresponding to the user according to the vision disorder type, the vision disorder grade and the vision reference data, wherein the vision characteristic type identifier is used for indicating the type of the vision condition of the user; and determining at least one piece of electronic acupuncture data according to the vision characteristic type identifier.

In one example, the first server is to extract at least one keyword from the vision reference data, the keyword comprising one or more of a historical vision condition keyword, a constitutional keyword, and a lifestyle keyword of the user; according to the keywords, respectively determining corresponding vision influence factors, wherein the vision influence factors comprise: one or more of historical vision status, physical constitution, and lifestyle of the user; respectively characterizing the vision disorder type, the vision disorder grade and the vision influencing factor by numbers, letters or a combination of the numbers and the letters; and generating a vision characteristic type identifier corresponding to the user by using the vision disorder type, the vision disorder grade and the number, letter or combination of the two corresponding to each vision influence factor.

In an example, the first server is further configured to configure a correction parameter for each of the vision influencing factors according to the magnitude of the value corresponding to the vision influencing factor; determining a total correction parameter according to each correction parameter; and correcting the vision characteristic type identifier according to the corresponding numerical value of the total correction parameter.

In one example, the first server is configured to determine a plurality of query keywords according to the vision characteristic type identifier; respectively determine the number of the query keywords contained in each piece of electronic acupuncture data in a knowledge base; and acquire, from the knowledge base, electronic acupuncture data whose similarity to the vision characteristic type identifier is greater than a threshold value according to the number of the query keywords, wherein the knowledge base is preset in the first server.
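For illustration, the keyword-count matching described in this example could look like the following Python sketch; the record structure, field names and threshold are assumptions made for the sketch, not details given by this application.

```python
# Illustrative sketch (assumed record layout): keep electronic acupuncture records
# whose keyword overlap with the vision characteristic identifier exceeds a threshold.
def select_acupuncture_records(query_keywords, knowledge_base, threshold=0.5):
    selected = []
    for record in knowledge_base:                        # each record: {"text": ...}
        hits = sum(1 for kw in query_keywords if kw in record["text"])
        similarity = hits / max(len(query_keywords), 1)  # fraction of keywords matched
        if similarity > threshold:
            selected.append(record)
    return selected

# Hypothetical data
kb = [{"text": "amblyopia severe qi-deficiency eye acupuncture Jingming"},
      {"text": "myopia mild sports head acupuncture Taiyang"}]
print(select_acupuncture_records(["amblyopia", "severe", "qi-deficiency"], kb))
```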

In one example, the first server is used for determining at least one logical operation formula in the at least one piece of electronic acupuncture data; outputting at least one electric needle parameter, at least one acupuncture point to be needled and at least one acupuncture type through the at least one logical operation formula, taking the vision disorder type, the vision disorder grade and the vision reference data in the vision characteristic type identifier as input; and determining at least one piece of electric needle device operation data according to the at least one electric needle parameter, the at least one acupuncture point to be needled and the at least one acupuncture type.

In one example, the first server is used for acquiring a theoretical acupuncture point set and a theoretical acupuncture category set from a knowledge base according to the vision characteristic type identifier, wherein the knowledge base is preset in the first server;

when the first server obtains the electric needle device operation data, the first server is used for respectively detecting whether the acupuncture points to be needled and the acupuncture types belong to the theoretical acupuncture point set and the theoretical acupuncture category set;

when the detection result is negative, the first server is used for re-determining at least one piece of electronic acupuncture data according to the vision characteristic type identifier; determining at least one new logical operation formula according to the re-determined at least one piece of electronic acupuncture data, and outputting at least one piece of electric needle device operation data again according to each new logical operation formula.

In one example, the first server is further configured to detect whether the theoretical acupuncture point set and the theoretical acupuncture category set of the user are acquired for the first time;

when it is determined to be the first acquisition, the first server is used for querying for the theoretical acupuncture point set and the theoretical acupuncture category set from a preset query starting node of the knowledge base, and acquiring the theoretical acupuncture point set and the theoretical acupuncture category set;

when it is determined not to be the first acquisition, the first server is used for determining the last acquisition record of the user; determining a query starting node of the user in the knowledge base according to the acquisition record, querying for the theoretical acupuncture point set and the theoretical acupuncture category set from the query starting node of the user, and acquiring the theoretical acupuncture point set and the theoretical acupuncture category set.

In one example, when the first server determines a plurality of the electro-acupuncture modes according to the at least one logical operation formula, the first server is used for receiving historical vision condition data of the user from the medical care terminal; constructing a database according to the at least one electronic acupuncture data and the historical vision condition data of the user; determining one or more of the electric needle device operation data by data reorganization and/or data correction based on the data in the database.

In one example, the apparatus further comprises: a second server;

when the first server determines a plurality of electric acupuncture device operation data according to the at least one piece of electronic acupuncture data and the historical vision condition data of the user, the first server is used for sending each piece of electric acupuncture device operation data to the electronic acupuncture device;

the electronic acupuncture device is used for assigning a serial number to each piece of electric acupuncture device operation data; executing each piece of electric acupuncture device operation data in sequence according to the serial numbers, and sending each piece of electric acupuncture device operation data and the corresponding serial number to a second server;

the second server is used for sending a first instruction to the detection device when receiving the plurality of pieces of electric needle device operation data and corresponding serial numbers, so that the detection device establishes a connection with the second server and disconnects from the first server; receiving, from the detection device, a plurality of brain oxyhemoglobin variation quantities corresponding to the plurality of pieces of electric needle device operation data, and determining the electric needle device operation data corresponding to the maximum brain oxyhemoglobin variation quantity as target data; and sending a second instruction to the detection device so that the detection device disconnects from the second server, establishes a connection with the first server, and sends the physiological data corresponding to the target data to the first server.
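The selection step performed by the second server can be pictured with a short sketch: given the brain oxyhemoglobin change measured for each serial-numbered run, it keeps the operation data whose change is largest as the target data. The data layout below is an assumption for illustration only.

```python
# Illustrative sketch: choose the electric needle operation data whose measured
# brain oxyhemoglobin change is largest (data layout assumed for illustration).
def choose_target_operation(results):
    """results: list of (serial_number, operation_data, hbo_change) tuples."""
    serial_number, operation_data, hbo_change = max(results, key=lambda r: r[2])
    return operation_data

runs = [(1, {"waveform": "continuous", "freq_hz": 2}, 0.012),
        (2, {"waveform": "sparse-dense", "freq_hz": 10}, 0.021),
        (3, {"waveform": "intermittent", "freq_hz": 5}, 0.017)]
print(choose_target_operation(runs))   # run 2 has the largest oxyhemoglobin change
```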

In one example, the electronic acupuncture device comprises: an acquisition module, an image processing module and an acupuncture point positioning module;

the acquisition module is used for acquiring a facial image of the user;

the image processing module is used for determining an eyebrow-eye image of the user from the facial image of the user, wherein the eyebrow-eye image is an image containing the eyebrows and eye areas of the user; carrying out gray-scale processing on the eyebrow-eye image; constructing a transverse operator according to the size of the gray-scale eyebrow-eye image, wherein the size of the transverse operator is an odd number; convolving the transverse operator with the gray-scale eyebrow-eye image to obtain a transverse gray-scale variation curve of the eyebrow-eye image; taking the position of the maximum value of the transverse gray-scale variation curve as the transverse central position of the eye area; starting from the transverse central position of the eye area, taking the two positions reached by moving upward and downward along the longitudinal direction by a preset proportion as the upper boundary and the lower boundary of the eye area; intercepting the eyebrow-eye image according to the upper boundary and the lower boundary of the eye area to obtain a transverse position image of the eye area; calculating, for each pixel column in the left half or the right half of the transverse position image, a longitudinal gray-scale integration function between the upper boundary and the lower boundary to obtain a longitudinal gray-scale integration curve; and, among all peaks and troughs of the longitudinal gray-scale integration curve, taking the positions in the transverse position image corresponding to the leftmost and rightmost peaks or troughs as the left boundary and the right boundary of the eye area in the longitudinal direction;

the acupuncture point positioning module is used for intercepting the transverse position image according to the left boundary and the right boundary of the eye area and determining the eye area on the eyebrow-eye image; and determining the position information of the corresponding eye acupuncture points on the eyebrow-eye image according to the determined eye area and the corresponding position relation between the eye acupuncture points on the face and the eye area.
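A rough Python sketch of the eye-region localization steps described in this example is given below. It assumes the eyebrow-eye image is already available as a grayscale NumPy array; the operator width, the vertical extension proportion and the use of the left half image are illustrative choices, not values prescribed by this application.

```python
import numpy as np

def locate_eye_region(gray, k=9, extend_ratio=0.15):
    """Rough sketch of the eye-region localization steps.
    gray: 2-D grayscale eyebrow-eye image; k: odd width of the transverse operator;
    extend_ratio: assumed proportion used to extend up/down from the center row."""
    h, w = gray.shape
    op = np.ones(k) / k                                   # transverse operator (odd length)
    # Transverse gray-scale variation: smooth each row, then sum its absolute changes.
    row_profile = np.array([np.abs(np.diff(np.convolve(row, op, mode="same"))).sum()
                            for row in gray])
    center_row = int(np.argmax(row_profile))              # transverse central position
    half = int(extend_ratio * h)
    top, bottom = max(center_row - half, 0), min(center_row + half, h - 1)
    strip = gray[top:bottom + 1]                          # transverse position image
    col_integral = strip.sum(axis=0)                      # longitudinal gray-scale integration
    half_cols = col_integral[: w // 2]                    # e.g. the left half image
    d = np.sign(np.diff(half_cols))                       # locate peaks and troughs
    extrema = np.where(np.diff(d) != 0)[0] + 1
    left, right = (int(extrema[0]), int(extrema[-1])) if len(extrema) else (0, w // 2 - 1)
    return top, bottom, left, right                       # eye-region boundaries

gray = np.random.default_rng(0).integers(0, 256, (120, 200)).astype(float)  # placeholder image
print(locate_eye_region(gray))
```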

In one example, the current visual function parameters include: one or more of acuity, diopter, refractive power, peripheral vision, fusion function parameters, stereoscopic function parameters, simultaneous vision function parameters, noise rejection capability parameters, spatial contrast sensitivity.

In a second aspect, an embodiment of the present application provides a method for brain visual inspection analysis based on neurofeedback, including:

the detection device acquires physiological data and visual function parameters of a user, wherein the physiological data comprises: oxyhemoglobin concentration change data and/or deoxyhemoglobin concentration change data;

the first server updates the current training sample and the current neural network model according to the current visual function parameters and the physiological data; and outputs the vision disorder type and vision disorder grade of the user through the updated neural network model, taking the current visual function parameters and the physiological data as input;

the first server determines at least one piece of electronic acupuncture data according to the vision disorder type and the vision disorder grade; the first server prestores a plurality of electronic acupuncture data, and the electronic acupuncture data are used for indicating the corresponding relation between the vision disorder type and/or the vision disorder grade and the electric acupuncture parameter, the acupuncture point and the acupuncture type; according to the at least one piece of electronic acupuncture data, matching corresponding electric acupuncture device operation data for the user, wherein the electric acupuncture device operation data comprise: electric needle parameters, acupuncture types and acupuncture points;

and the electronic acupuncture device performs electronic acupuncture for the user according to the operation data of the electric acupuncture device.

In the embodiment of the application, the oxyhemoglobin concentration change data and the deoxyhemoglobin concentration change data directly reflect the user's current vision condition, so adjusting the electronic acupuncture parameters according to these data improves the adjustment accuracy. Meanwhile, the training sample and the neural network model are adjusted in real time using the current visual function parameters, which guarantees the timeliness of the data and allows the electronic acupuncture parameters to be adjusted in real time. In addition, the technical scheme provided by the embodiment of the application can also adjust the acupuncture type and the acupuncture points to be needled in real time, further improving how well the electronic acupuncture parameters fit the user. Therefore, the technical scheme provided by the embodiment of the application can adjust the electronic acupuncture parameters in real time to adapt to the user's current vision condition and can also select preferred acupuncture points and point combinations, thereby improving the visual function of the user.

Drawings

The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:

fig. 1 is a schematic diagram of a brain vision monitoring and analyzing device based on neurofeedback according to an embodiment of the present application;

fig. 2 is a schematic diagram of a detection apparatus provided in an embodiment of the present application;

fig. 3 is a schematic structural diagram of an electric needle data acquisition unit according to an embodiment of the present application;

fig. 4 is a schematic diagram illustrating a corresponding relationship between vision characteristics and an electro-acupuncture mode provided in an embodiment of the present application;

fig. 5 is a schematic diagram of an inference system in a brain vision inspection analysis device according to an embodiment of the present application.

Detailed Description

In order to more clearly explain the overall concept of the present application, the following detailed description is given by way of example in conjunction with the accompanying drawings.

The embodiment of the application discloses a brain vision detection and analysis device based on neural feedback. As shown in fig. 1, the system includes: a detection device 110, a medical care terminal 120, a server 130, an electronic acupuncture device 140 and an electric needle mode selection unit 150. The server 130 corresponds to the first server, and the electric needle mode selection unit 150 corresponds to the second server. The server 130 includes a vision characteristic encoding unit 131, an electric needle data acquisition unit 132, and a neural network unit 133.

It should be noted that the vision characteristic encoding unit 131 and the electric needle data acquisition unit 132 may be two software modules or two hardware devices, which is not limited in this embodiment of the application. The server 130 may be a single server or may include a plurality of servers. Similarly, the electric needle mode selection unit 150 may be a software module or a hardware device, which is not limited in this embodiment of the application.

The detection device 110 is used to detect physiological data of the user, which may be oxyhemoglobin concentration change data and deoxyhemoglobin concentration change data, as well as the current visual function parameters. The current visual function parameters of the user include any one or more of: visual acuity, diopter, refractive power, peripheral vision, fusion function parameters, stereoscopic vision function parameters, simultaneous vision function parameters, noise suppression capability parameters, spatial contrast sensitivity and the like, and they can be expressed in the form of numbers, characters, pictures and combinations thereof. Accordingly, the detection device 110 may be a near-infrared optical brain function imaging device alone, or may include both a near-infrared optical brain function imaging device and a visual function parameter detection device.

Fig. 2 is a schematic diagram of a detection apparatus provided in an embodiment of the present application. The detection device 110 includes a near-infrared optical brain function imaging device 111 and a visual function parameter detection device 112.

The near-infrared optical brain function imaging device 111 includes a detection part 1111 and a control host 1112. The detection part 1111 comprises a plurality of optical fiber probes and a head cover; the optical fiber probes are arranged on the inner surface of the head cover, and the head cover is made of silica gel. In use, the user wears the head cover on the top of the head so that the optical fiber probes are distributed over a plurality of positions on the head. The optical fiber probes transmit near-infrared signals to the user's head, receive the near-infrared signals returned from the head, and transmit the received signals to the control host 1112 through optical fibers. The control host obtains a cerebral blood flow signal from the returned near-infrared signals, and obtains the oxyhemoglobin concentration change data and deoxyhemoglobin concentration change data in the cerebral cortex from the obtained cerebral blood flow signal. The oxyhemoglobin concentration change data and deoxyhemoglobin concentration change data obtained by the control host 1112 may take various forms, for example: a time curve per channel number, a time curve per measurement position, a time curve per head position, or a two-dimensional topographic map. It will be appreciated by those skilled in the art that, whatever its form, the hemoglobin concentration change data are used to represent the changes in oxyhemoglobin concentration and/or deoxyhemoglobin concentration.

The principle of obtaining the user's oxyhemoglobin concentration change data and deoxyhemoglobin concentration change data by transmitting near-infrared light signals to the head is briefly explained below.

Near-infrared light can penetrate human tissue and bone to a depth of 2-3 cm inside the brain, i.e. the level of the cerebral cortex, where it is absorbed by hemoglobin. Based on this principle, the near-infrared optical brain function imaging device acquires a cerebral blood flow signal by emitting near-infrared light of a certain wavelength, for example in the range of 695 nm to 830 nm, toward the brain, and determines the change in oxyhemoglobin concentration and the change in deoxyhemoglobin concentration in the cerebral cortex from the cerebral blood flow signal.
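For reference, converting near-infrared light attenuation at two wavelengths into oxyhemoglobin and deoxyhemoglobin concentration changes is commonly done with the modified Beer-Lambert law; the sketch below shows that standard calculation. The extinction coefficients and differential path-length factor are placeholder values, not values given by this application.

```python
import numpy as np

def hemoglobin_changes(delta_od_wl1, delta_od_wl2, distance_cm=3.0, dpf=6.0):
    """Modified Beer-Lambert law for two wavelengths (e.g. around 760 nm and 830 nm).
    delta_od_*: measured changes in optical density; returns (delta_HbO, delta_HbR).
    The extinction coefficients below are placeholders for illustration."""
    E = np.array([[1.4, 3.8],        # [eps_HbO, eps_HbR] at wavelength 1 (1/(mM*cm))
                  [2.3, 1.8]])       # [eps_HbO, eps_HbR] at wavelength 2
    L = distance_cm * dpf            # effective optical path length
    delta_od = np.array([delta_od_wl1, delta_od_wl2])
    delta_hbo, delta_hbr = np.linalg.solve(E * L, delta_od)   # concentration changes (mM)
    return delta_hbo, delta_hbr

print(hemoglobin_changes(0.02, 0.03))
```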

In fig. 1, the medical care terminal 120 is a device used by medical staff to collect and record the user's vision reference data, which includes: age, gender, the user's historical vision condition data, other data affecting vision, and the like. The user's historical vision condition data include the electric needle device operation data previously used for the user and the user's historical visual function parameters. Other data affecting vision include the user's constitution and lifestyle insofar as they affect vision. Constitutions affecting vision include: balanced constitution, qi-deficiency constitution, yang-deficiency constitution, yin-deficiency constitution, phlegm-dampness constitution, damp-heat constitution, blood-stasis constitution, qi-stagnation constitution and special diathesis constitution. Lifestyle factors affecting vision include: enjoying sports, watching television, not eating fish or dairy products, and the like.

It will be appreciated by those skilled in the art that the physiological data, the current visual function parameters and the vision reference data acquired by the medical care terminal should all come from the same user.

In the embodiment of the present application, as shown in fig. 1, the medical care terminal 120 sends the current visual function parameters to the neural network unit 133 and sends the vision reference data to the vision characteristic encoding unit 131. In addition, the medical care terminal 120 also sends the historical vision condition data to the electric needle data acquisition unit 132.

In the embodiment of the present application, the neural network unit 133 determines the vision disorder type (e.g. amblyopia, myopia) and the vision disorder grade (e.g. severe, moderate, mild) from the received current visual function parameters and the user's physiological data (e.g. oxyhemoglobin concentration change data and deoxyhemoglobin concentration change data) through a BP (Back Propagation) neural network algorithm. The specific method is as follows:

Step 1, input N learning samples, where each of the N learning samples comprises one or more of the following items: the user's physiological data (such as oxyhemoglobin concentration change data and deoxyhemoglobin concentration change data) and current visual function parameters (such as visual acuity, diopter, refractive error, peripheral vision, fusion function parameters and stereoscopic vision function parameters).

Step 2, determine the BP network structure. Set the number L of network layers and the number of nodes in each layer according to actual needs; determine the number of nodes in the network input layer to be n according to the dimension n of the input vector in a given sample; determine the number of nodes in the network output layer to be m according to the dimension m of the output vector in a given sample; the number of nodes in the l-th layer is n(l). Determine the connection weight matrices between the layers and initialize each element of each matrix.

The number of input nodes is related to the number of parameters to be considered. For example, if hospital A needs to refer to visual acuity, diopter, refractive error and oxyhemoglobin concentration change data, the number of input nodes is 4. The number of output nodes is related to the number of required output parameters. For example, if a hospital generally needs to acquire the user's vision disorder type and vision disorder grade, the number of output nodes is 2. Since the number of input nodes and the number of output nodes need to be determined according to the actual situation, they are not specifically limited in the embodiment of the present application.

According to the characteristics of neural networks, when there are enough nodes on the hidden layer, a 3-layer BP network can approximate any complex nonlinear mapping relation. Therefore, in the embodiment of the present application, the number of network layers L is greater than or equal to 3, and when L equals 3, the number of nodes on the hidden layer is:

n1 = √(n + m) + a

wherein n is the number of input-layer nodes, m is the number of output-layer nodes, and a is a constant between 1 and 10 inclusive.

And 3, setting the range of the input target error epsilon and the numerical value of the network learning rate eta, setting the initial iteration step number t to be 1, and setting the sample serial number k to be 1.

In the embodiment of the present application, when the network learning rate is large, the weights change rapidly and the network converges quickly; however, too large a learning rate tends to cause oscillation, i.e., it affects stability. When the network learning rate is small, oscillation can be avoided, but too small a learning rate slows down the training process and reduces the learning efficiency of the network. Based on experience, the embodiment of the present application uses a relatively small value to ensure convergence of the network, for example a network learning rate in the range of 0.05-0.07.

And 4, taking the kth learning sample, recording the kth learning sample as Xk, and performing forward propagation calculation on the Xk. And calculating the output of each node of the input layer, and sequentially solving the input and the output of each node of each layer.

And 5, solving errors between all nodes of the output layer and the given sample.

Step 6, if the error of each of the N learning samples is less than or equal to the target error ε, the learning process ends; otherwise, enter the error back-propagation stage to adjust each connection weight matrix and return to step 4.

Because differences between individual users are relatively large and there are many input parameters, various nonlinear relations may exist between the input vector and the output vector for different users. To ensure the convergence of the BP neural network model provided by the embodiment of the present application, the following function is used as the activation function:

tanh(x) = 2·sigmoid(2x) − 1    (1)

wherein tanh(x) is the value of the activation function, sigmoid is the S-shaped logistic function, and x is the input to the node. Activation function (1) enhances the convergence of the BP neural network model, so that the vision disorder type and vision disorder grade of the user can be output quickly and accurately. The neural network unit 133 then transmits the obtained vision disorder type and vision disorder grade to the vision characteristic encoding unit 131.
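A minimal NumPy sketch of the training loop outlined in steps 1 to 6, using activation function (1), is shown below. The layer sizes, learning rate, sample data and stopping rule are illustrative placeholders rather than values prescribed by this application.

```python
import numpy as np

def act(x):                           # activation (1): tanh(x) = 2*sigmoid(2x) - 1
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def act_deriv(x):
    return 1.0 - act(x) ** 2

# Illustrative 3-layer BP network: n = 4 inputs (e.g. visual acuity, diopter,
# refractive error, oxyhemoglobin change), m = 2 outputs (disorder type, grade).
rng = np.random.default_rng(0)
n, hidden, m, eta = 4, 6, 2, 0.06     # learning rate chosen in the 0.05-0.07 range
W1 = rng.normal(0.0, 0.5, (n, hidden))
W2 = rng.normal(0.0, 0.5, (hidden, m))
X = rng.normal(size=(10, n))          # placeholder learning samples
Y = rng.normal(size=(10, m))          # placeholder target outputs

for epoch in range(200):              # in practice, stop once the error <= target error
    h_in = X @ W1; h_out = act(h_in)  # forward propagation
    o_in = h_out @ W2; o_out = act(o_in)
    err = o_out - Y
    delta_o = err * act_deriv(o_in)   # error back-propagation:
    delta_h = (delta_o @ W2.T) * act_deriv(h_in)
    W2 -= eta * h_out.T @ delta_o     # adjust the connection weight matrices
    W1 -= eta * X.T @ delta_h
```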

In the embodiments of the present application, the types of vision disorders include: myopia and amblyopia. Wherein the amblyopia type includes: strabismus amblyopia, anisometropic amblyopia, ametropic amblyopia, disuse amblyopia (form deprivation amblyopia), congenital amblyopia (organic amblyopia). The myopia types may include: axial myopia, curvature myopia, and refractive myopia.

The vision disorder grades include myopia grades and amblyopia grades. An amblyopia grade may be, for example, a range of amblyopia degree or an amblyopia severity level, and a myopia grade may be, for example, a range of myopia degree or a myopia severity level.

The vision characteristic encoding unit 131 outputs one or more vision characteristic type identifiers according to the received vision disorder type, vision disorder grade and vision reference data, wherein the vision characteristic type identifiers comprise: the type of vision disorder, the level of vision disorder, and the vision affecting factors including one or more of gender, age, constitution, lifestyle.

Specifically, in the embodiment of the present application, the user's vision characteristic data are generally recorded in the vision reference data collected by the medical care terminal, and the vision characteristic encoding unit 131 extracts keywords from the vision reference data, where the keywords include one or more of the user's age, gender, historical vision condition keywords, constitution keywords and lifestyle keywords. The vision characteristic encoding unit 131 determines each vision influencing factor from these keywords, represents the vision disorder type, the vision disorder grade and the vision influencing factors by numbers, letters or a combination of the two, and generates the vision characteristic type identifier corresponding to the user from the numbers, letters or combinations corresponding to the vision disorder type, the vision disorder grade and each vision influencing factor. For example, the vision disorder type can be represented by English letters, the vision disorder grade can be represented by numerical values, and the numbers 1 to 9 can represent the balanced constitution, qi-deficiency constitution, yang-deficiency constitution, yin-deficiency constitution, phlegm-dampness constitution, damp-heat constitution, blood-stasis constitution, qi-stagnation constitution and special diathesis constitution, respectively.

The vision characteristic type identifier output by the vision characteristic encoding unit 131 may be represented by numbers, letters, or a combination of the two, and the vision characteristic type identifiers correspond one to one with the vision characteristic types: each number, letter or combination corresponds to one vision characteristic. For example, in the vision characteristic type identifier A121311, "A" indicates that the vision disorder type is amblyopia; the first number "1" indicates gender; the number "2" indicates the user's age or age stage (for age stages, e.g. 1 indicates an infant, 2 a teenager, 3 an elderly person, and so on); the second number "1" represents the interval of variation of the oxyhemoglobin concentration of the user's brain over the detection time of the detection device; the number "3" represents the interval of variation of the deoxyhemoglobin concentration of the user's brain over the detection time of the detection device; the third number "1" represents a normal constitution, and a weak constitution is represented by a positive integer greater than 1 (for example, 2 corresponds to kidney deficiency and 3 to liver deficiency); the fourth number "1" indicates a normal lifestyle, the number "2" a lifestyle favorable to visual health, and the number "3" a lifestyle unfavorable to visual health. The lifestyle can also be represented by a plurality of numbers, for example, 11 represents a normal lifestyle, 12 a lifestyle favorable to visual health such as sports, 21 a lifestyle unfavorable to visual health such as excessive television watching, 22 a lifestyle unfavorable to visual health such as not eating fish or dairy products, and so on.
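To make the encoding concrete, the sketch below assembles a vision characteristic type identifier such as A121311 from its fields; the field order and code tables follow the example above but are otherwise hypothetical.

```python
# Hypothetical assembly of a vision characteristic type identifier such as "A121311".
def encode_identifier(disorder_type, gender, age_stage, hbo_range, hbr_range,
                      constitution, lifestyle):
    # disorder_type is a letter ("A" = amblyopia); the remaining fields are digit codes.
    return f"{disorder_type}{gender}{age_stage}{hbo_range}{hbr_range}{constitution}{lifestyle}"

# "A" amblyopia, gender 1, age stage 2 (teenager), HbO range 1, HbR range 3,
# constitution 1 (normal), lifestyle 1 (normal)
print(encode_identifier("A", 1, 2, 1, 3, 1, 1))   # -> A121311
```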

The above manner can represent the relationship between lifestyle and eyesight for different users. However, different users have different vision influencing factors, and if every vision influencing factor were written into the vision characteristic type identifier in the above manner, individual identifiers would become very long, which is not conducive to data processing. Although there are many kinds of vision influencing factors, each factor usually produces only a beneficial or a harmful effect on vision, and the effects of the individual factors can be superposed. Therefore, when a plurality of vision influencing factors exist, the effects of the individual factors are superposed to obtain a total effect, and this total effect is used as the identifier corresponding to the influencing factors within the vision characteristic type identifier.

Specifically, in the embodiment of the present application, some vision influencing factors have no corresponding acupoint or cannot be quantified from the user's physiological data; for example, a daily lifestyle such as enjoying outdoor sports is difficult to quantify from physiological data and has no corresponding acupoint. For this situation, the vision characteristic encoding unit 131 obtains the keywords in the vision reference data and then calculates the correction coefficient corresponding to each keyword, where a correction coefficient is a correction applied to the vision influence level according to the user's lifestyle. The vision characteristic encoding unit 131 then calculates a total correction coefficient from the individual correction coefficients and uses the total correction coefficient as the identifier corresponding to the influencing factors in the vision characteristic type identifier. For example, if the correction coefficient for age above the threshold is 1 and the correction coefficient for good eating habits is -0.2, the total correction coefficient is 0.8, and in the new vision characteristic type identifier the number "0.8" is used as the identifier corresponding to the influencing factors.
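A small sketch of the correction-coefficient superposition described above, using placeholder coefficients that match the example (age above the threshold: 1, good eating habits: -0.2):

```python
# Superpose per-factor correction coefficients into one total correction value that
# replaces the corresponding field of the identifier (coefficient table is hypothetical).
def total_correction(factors):
    coefficients = {"age_over_threshold": 1.0,
                    "good_eating_habits": -0.2,
                    "outdoor_sports": -0.3}
    return sum(coefficients.get(f, 0.0) for f in factors)

print(total_correction(["age_over_threshold", "good_eating_habits"]))   # 0.8
```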

In this embodiment, the electric needle data acquisition unit 132 is configured to match electric needle device operation data for each received vision characteristic type identifier. Generally, the electric needle device operation data include the electric needle parameters, the acupuncture points to be needled and the acupuncture type. The acupuncture types can include: ear acupuncture, head acupuncture, eye acupuncture, hand acupuncture, foot acupuncture, wrist-ankle acupuncture, acoustoelectric wave acupuncture, electrotherapy, microwave acupuncture and the like. The electric needle parameters include: electric wave waveform, electric wave frequency, current intensity, needling time, needling sequence and needling temperature. The electric wave waveform, frequency and current intensity correspond to the force and technique of needling. The embodiments of the present application are not limited to a specific acupuncture type; preferably, eye acupuncture and hand acupuncture can be used.

The acupuncture points include the Jingming, Zanzhu (Cuanzhu), Sizhu, Tongziliao, Chengqi, Taiyang, Qiuhou (retrobulbar) and Yuyao acupoints, as well as various other acupoints affecting vision, such as ear acupoints used for regulating the liver and kidney and for treating liver deficiency and kidney deficiency. The acupuncture points also include acupuncture point combinations, where a combination is any one or more of the above acupuncture points. The functions of the individual acupuncture points are as follows:

Jingming acupoint: indicated for eye diseases and photophobia, and improves vision.

Cuanzhu (Zanzhu) acupoint: clears heat and improves eyesight.

Sizhu acupoint: dispels pathogenic wind, relieves pain, refreshes the mind and improves eyesight.

Tongziliao acupoint: dispels wind-heat, improves eyesight and relieves pain.

Chengqi acupoint: dispels wind, purges fire, relieves spasm and improves eyesight.

Taiyang acupoint: clears the head and improves vision.

Qiuhou (retrobulbar) acupoint: promotes blood circulation and improves eyesight.

Yuyao acupoint: clears the head and improves eyesight.

Obviously, one user may be matched with a plurality of pieces of electric needle device operation data. The following describes in detail how the electric needle data acquisition unit 132 matches one piece of electric needle device operation data to the plurality of vision characteristic type identifiers corresponding to the user.

As shown in fig. 3, in the embodiment of the present application, the electric needle data acquisition unit 132 includes: a knowledge base 1321, an inference system 1322, an electronic acupuncture device interface 1323, a logic library construction unit 1324, and a logic library 1325.

Knowledge base 1321 is used to store prior knowledge and experience, as well as general knowledge and universal rules. The data in knowledge base 1321 mainly consist of two parts: theoretical data and empirical data. Theoretical data are derived from common knowledge, universal rules and proven medical theories in the literature. Empirical data consist of expert physicians' experience and successful electronic acupuncture cases, where each electronic acupuncture case corresponds to a piece of electronic acupuncture data.

Specifically, the theoretical data portion of the knowledge base 1321 may store the vision characteristic types and the acupuncture points and acupuncture types corresponding to them in the manner shown in fig. 4. The knowledge base 1321 stores vision characteristics in a tree structure: the general category of vision disorder has the widest extension and serves as the root of the tree; the vision disorder types, such as amblyopia, hyperopia and strabismus, serve as the trunk; the vision characteristic type identifiers, which correspond to the visual function state of each user and have the narrowest extension, serve as the branches; and each branch carries at least one leaf, where a leaf is an acupuncture point or acupuncture point combination together with an acupuncture type. It will be appreciated that each vision characteristic theoretically corresponds to at least one piece of electric needle device operation data. For example, if the vision characteristic is amblyopia with yin deficiency of the heart and liver, the corresponding acupuncture type may be plum-blossom (Meihua) needling, and the acupuncture points may be Zhengguan, Fengchi, Neiguan, Shenmen, Dazhui, Xinshu and Ganshu.

It can be understood that, because the differences between users are large, the electric needle parameters cannot be preset in advance and must instead be obtained by the inference system 1322 through case analysis. The electric needle parameters are therefore not stored in the theoretical data portion.

The logic library construction unit 1324 is configured to determine a plurality of query keywords according to the vision characteristic type identifier, acquire n cases similar to the vision characteristic type from the knowledge base 1321 according to these keywords, construct m logical operation formulas from the n acquired cases, and finally store the m logical operation formulas in the logic library 1325. For example, the logic library construction unit 1324 identifies the keywords corresponding to the vision characteristic type in each case and obtains n logical operation formulas from them; formulas with the same precondition are then combined into a new logical operation formula, yielding m logical operation formulas. A logical operation formula may take the form: If P Then Q, where P is the precondition part and Q is the conclusion part. Specifically, P is a vision characteristic, and Q is the corresponding acupuncture point, electric needle parameter or acupuncture type. The inference system 1322 is operable to determine the electric needle device operation data based on the m logical operation formulas in the logic library 1325 and the theoretical data portion of the knowledge base 1321.
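As a rough illustration of the "If P Then Q" form used by the logic library 1325, the sketch below represents each logical operation formula as a precondition/conclusion pair and merges formulas that share the same precondition; the data structures, field names and example rules are assumptions made only for illustration.

```python
# Minimal sketch of logical operation formulas in the form "If P Then Q".
# Precondition P is a vision characteristic; conclusion Q names an acupoint,
# an electric needle parameter or an acupuncture type. Field names are assumed.
from collections import defaultdict

raw_rules = [
    ("amblyopia, yin deficiency", {"acupoint": "Ganshu"}),
    ("amblyopia, yin deficiency", {"needle_type": "plum-blossom needle"}),
    ("amblyopia, normal constitution", {"frequency_per_min": (80, 115)}),
]

def merge_rules(rules):
    """Combine formulas with identical preconditions into one formula (n -> m)."""
    merged = defaultdict(dict)
    for precondition, conclusion in rules:
        merged[precondition].update(conclusion)
    return dict(merged)

if __name__ == "__main__":
    for p, q in merge_rules(raw_rules).items():
        print(f"If {p} Then {q}")
```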

The electronic acupuncture device interface 1323 is used to connect the electric needle data acquisition unit 132 and the electronic acupuncture device 140 so that the electric needle data acquisition unit 132 controls the electronic acupuncture device 140 to administer the needle to the user with the electric needle device operation data.

In the embodiment of the present application, the inference system 1322 mainly performs the following steps when determining the electric needle device operation data:

Step 1, determine the different vision characteristic type identifiers and the corresponding theoretical acupuncture point sets and theoretical acupuncture type sets based on the theoretical data in the knowledge base 1321.

The different vision characteristic type identifiers and the corresponding theoretical acupuncture point sets and theoretical acupuncture type sets are stored in the knowledge base 1321 in the manner shown in fig. 3. It can be understood that one vision characteristic type identifier may correspond to multiple acupuncture points and acupuncture types, which is what forms the theoretical acupuncture point set and the theoretical acupuncture type set.

In the embodiment of the application, for a user undergoing brain vision detection and analysis for the first time, the database is traversed with a breadth-first method to search for the corresponding acupuncture points and acupuncture types. The search result may include a plurality of acupuncture points, acupoint combinations and acupuncture types, which are recorded against the vision characteristic type identifier. Subsequently, while the acupuncture points and acupuncture types are being adjusted in real time according to the user's physiological data, the database needs to be queried for them many times.

It is understood that the user's amblyopia grade usually changes after each session of electronic acupuncture, the change being reflected mainly in the physiological data, while the amblyopia type usually does not change. For example, after each session the oxygenated hemoglobin concentration in the brain increases and the amblyopia grade decreases. It follows that a single session of electronic acupuncture does not greatly change the vision characteristic type identifier, so the query path to the corresponding acupuncture points, acupoint combinations and acupuncture types does not change much. The inference system 1322 can therefore search for them with the breadth-first method, using the data storage node recorded in the previous session as the initial query node.

In this way, the inference system 1322 does not have to start searching from the root node every time, which improves search efficiency. Taking the vision characteristic type identifier n1 in fig. 3 as an example, before a session of electronic acupuncture the identifier n1 is a1211, where "a" represents the amblyopia type, the first digit "1" represents the user's gender, the digit "2" represents the user's age group, the second digit "1" represents the range of the brain oxygenated hemoglobin concentration, and the third digit "1" represents a normal constitution. After the electronic acupuncture, only the brain oxyhemoglobin concentration increases, and the vision characteristic type identifier becomes a1221, corresponding to identifier n2. Obviously, a breadth-first query starting from the vision disorder root (the overall category) would require at least n+2 queries, whereas using identifier n1 as the initial query node requires at least 3 queries (in order: vision characteristic type identifier n1, vision disorder type n, and vision characteristic type identifier n2).
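The efficiency gain described above can be pictured with the following breadth-first search sketch, which counts the nodes visited when starting either from the root or from the last-recorded identifier; the tiny graph and the node names are illustrative assumptions.

```python
# Sketch: breadth-first search over the knowledge tree, optionally starting from
# the node recorded in the previous session instead of the root. Illustrative only.
from collections import deque

# Undirected adjacency for a tiny tree: root -> disorder types -> identifiers n1, n2.
EDGES = {
    "vision_disorder": ["amblyopia", "hyperopia", "strabismus"],
    "amblyopia": ["vision_disorder", "a1211", "a1221"],
    "hyperopia": ["vision_disorder"],
    "strabismus": ["vision_disorder"],
    "a1211": ["amblyopia"],
    "a1221": ["amblyopia"],
}

def bfs_find(start, target):
    """Return the number of nodes visited before reaching the target."""
    queue, seen, visited = deque([start]), {start}, 0
    while queue:
        node = queue.popleft()
        visited += 1
        if node == target:
            return visited
        for nxt in EDGES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return -1

if __name__ == "__main__":
    print("queries from the root:", bfs_find("vision_disorder", "a1221"))
    print("queries from the last identifier:", bfs_find("a1211", "a1221"))
```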

For brain vision detection and analysis that is not the first session, the inference system 1322 judges whether the vision characteristic type identifier is the same as that of the most recent electronic acupuncture. If so, it directly reads the previous electric needle device operation data; otherwise, it takes the most recent vision characteristic type identifier as the query starting point and traverses the database with the breadth-first method to search for the corresponding electric needle device operation data.

Step 2, as shown in fig. 3, determine the electric needle device operation data based on the logical operation formulas in the logic library 1325 and the empirical data portion of the knowledge base 1321.

Specifically, the inference system 1322 reasons over the existing logical operation formulas in the logic library 1325 to obtain at least one electric needle parameter, at least one acupuncture point to be applied, and at least one acupuncture type, and determines the electric needle device operation data from the results. For example, given three logical operation formulas A, B and C, the inference system 1322 may obtain the electric needle parameters from formula A, the acupuncture points from formula B, and the acupuncture type from formula C, and then determine the electric needle device operation data from the obtained parameters, acupuncture points and acupuncture type. The basic form of a logical operation formula is P → Q, i.e. If P Then Q, where P is the precondition part and Q is the conclusion part. Correspondingly, the vision characteristics contained in the vision characteristic type identifier constitute the precondition part, and the electric needle parameters, acupuncture points and acupuncture types constitute the conclusion part. This representation is intuitive, natural, easy to reason about and to understand, and can conveniently be expressed and controlled in a computer language, thereby simulating the way a human expert thinks through the problem.

Step 3, the inference system 1322 uses the theoretical acupuncture point set and theoretical acupuncture type set from step 1 to verify whether the conclusion part obtained from the logical operation formulas is correct. For example, if the acupoint combination obtained by the inference system 1322 is acupoint A and acupoint B, the inference system 1322 verifies whether this combination is included in the acupoint combinations from step 1; if so, the inference is correct, otherwise it is wrong. If the inference is incorrect, the inference system 1322 may send a request to the logic library construction unit 1324 to reconstruct the logical operation formulas according to the vision characteristic type.

In the embodiment of the present application, the logic library construction unit 1324 may obtain logical operation formulas with multiple conclusions; for example, for vision characteristic type a, the wave frequency may be 80-115 times/minute or 60-100 times/minute. In this case, the inference system 1322 derives multiple wave frequencies by logical inference, which means that it infers multiple sets of electric needle device operation data. How one set of electric needle device operation data is obtained from the multiple sets determined by the inference system 1322 is explained in detail below.

In the embodiment of the present application, as shown in fig. 5, the inference system 1322 builds a case library of solutions by retrieving, reusing, revising and retaining a plurality of cases, and extracts rules from the case set. Case retrieval means searching the knowledge base 1321, together with the historical visual function parameters sent by the healthcare terminal 120, for cases that share the preconditions used by the logic library construction unit 1324; these cases and the historical visual function parameters form a case set. Case reuse means searching the retrieved case set for a solution and reusing a solution that meets the requirements, or modifying the corresponding cases, or integrating the cases into a new solution that can be reused once it meets the requirements. Case revision means effectively modifying the retrieved solution according to correction rules in combination with the actual problem. Case retention means storing a new case that has reuse value through an appropriate representation and storage method. During case-based reasoning by the inference system 1322, each new case may be saved and become an information source for the knowledge base 1321; as new cases accumulate, the knowledge base is updated and expanded and its adaptability improves.
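A minimal sketch of this retrieve, reuse, revise and retain cycle is given below; the case fields, the similarity measure and the toy revision rule are assumptions made only to illustrate the loop, not the actual reasoning of inference system 1322.

```python
# Sketch of a case-based reasoning loop: retrieve similar cases, reuse and revise
# a solution, then retain it as a new case. Structure and values are assumptions.

case_base = [
    {"features": {"type": "amblyopia", "grade": 2}, "solution": {"freq": (80, 100)}},
    {"features": {"type": "amblyopia", "grade": 3}, "solution": {"freq": (60, 100)}},
]

def similarity(a, b):
    """Very crude similarity: number of matching feature values."""
    return sum(1 for k in a if b.get(k) == a[k])

def retrieve(problem, k=1):
    return sorted(case_base, key=lambda c: -similarity(problem, c["features"]))[:k]

def reuse_and_revise(problem, retrieved):
    solution = dict(retrieved[0]["solution"])   # reuse the closest solution
    if problem.get("grade", 0) >= 3:            # revise: a toy correction rule
        lo, hi = solution["freq"]
        solution["freq"] = (lo, min(hi, 100))
    return solution

def retain(problem, solution):
    case_base.append({"features": problem, "solution": solution})  # new knowledge source

if __name__ == "__main__":
    new_problem = {"type": "amblyopia", "grade": 3}
    sol = reuse_and_revise(new_problem, retrieve(new_problem))
    retain(new_problem, sol)
    print(sol, "case base size:", len(case_base))
```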

This is illustrated with the example in table 1 below. The three electronic acupuncture methods obtained by the inference system 1322 are method 1, method 2 and method 3, and the inference system 1322 needs to derive one set of electric needle device operation data from these three. Assume that, of the four acupuncture points A, B, C and D, acupoints A, B and C need to be applied simultaneously, while acupoint D can be applied separately. Because the electric needle must puncture the human body during needle application, the application current is uniformly set to 1-1.5 mA to ensure the user's safety.

When acupuncture is applied, each electric needle stimulates its corresponding acupoint independently, so in theory the wave frequency can differ from acupoint to acupoint. In the embodiment of the present application, acupoints A, C and D each have a corresponding wave frequency; since the frequency of each acupoint may differ, the inference system 1322 directly uses the frequencies of method 1, method 2 and method 3 as the frequencies of acupoints A, C and D in the optimal solution. Acupoint B, however, differs from the other acupoints in the selection of the wave frequency because it corresponds to two frequencies; its optimal frequency is the intersection of the frequency ranges of method 1 and method 2, i.e. 100-115 times/min. Since acupoints A, B and C must be applied simultaneously, their application time is the intersection of the application times of method 1 and method 2, i.e. 10-15 min, while acupoint D is applied separately with the application time of method 3.

TABLE 1

In summary, the inference system 1322 sends electric needle device operation data 4, i.e. the acupoint combination for the electronic acupuncture and the value ranges of the electric needle parameters, to the electronic acupuncture device 140, as shown in table 2:

TABLE 2

It should be noted that table 2 shows one set of electric needle device operation data (operation data 4); its execution sequence may be A, B, C, D in turn, or acupoints A, B and C may be treated simultaneously followed by acupoint D.
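The merging rule used for Table 1 and Table 2 (take the intersection of overlapping value ranges for acupoints that must be applied simultaneously, keep the individual range otherwise) can be sketched as follows; the concrete input ranges are assumed examples, since the table bodies are not reproduced here.

```python
# Sketch: merging several candidate electric needle parameter ranges into one
# operation data set by intersecting overlapping ranges. Values are assumed examples.

def intersect(range_a, range_b):
    """Return the intersection of two closed ranges, or None if they do not overlap."""
    lo, hi = max(range_a[0], range_b[0]), min(range_a[1], range_b[1])
    return (lo, hi) if lo <= hi else None

# Candidate frequency ranges (times/min) for acupoint B from method 1 and method 2.
freq_method_1 = (80, 115)
freq_method_2 = (100, 120)
print("acupoint B frequency:", intersect(freq_method_1, freq_method_2))    # (100, 115)

# Needle application time (min) for the simultaneously applied acupoints A, B, C.
time_method_1 = (10, 20)
time_method_2 = (5, 15)
print("A/B/C application time:", intersect(time_method_1, time_method_2))  # (10, 15)
```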

Thereafter, the electronic acupuncture device 140 performs electronic acupuncture on the user according to the received electric needle device operation data.

In one embodiment of the present application, after the electronic acupuncture device 140 performs electronic acupuncture on the user, the detecting device 110 detects the user again to obtain new physiological data, including blood oxygen, blood pressure, pulse, etc., and sends the new physiological data to the neural network unit 133 for the next round of adjustment of the electrical needle parameters.

It will be appreciated by those skilled in the art that the above approach is only suitable for the case where certain parameters, such as the wave frequency or the needle application time, intersect across the different sets of electric needle device operation data. If some parameters have no intersection, a set of electric needle device operation data can instead be determined in the following manner, as shown in table 3 below.

TABLE 3

The acupuncture points of method 1 and method 2 are the same, but the value ranges of their wave frequencies do not intersect, so one set of electric needle device operation data is generated for each value range. Accordingly, the inference system 1322 transmits the acupoint combinations and the value ranges of the electric needle parameters to the electronic acupuncture device 140, as shown in tables 4 and 5:

TABLE 4

TABLE 5

Table 4 and table 5 each correspond to one set of electric needle device operation data. In both tables, acupoints A, B and D may be treated sequentially or simultaneously.

In fig. 1, when the electronic acupuncture device 140 receives multiple sets of electric needle device operation data, it applies acupuncture to the user according to the electric needle device operation data and simultaneously sends the operation serial number and the vision characteristic type identifier of each set to the electric needle mode selection unit 150.

When the electric needle mode selection unit 150 receives the operation serial number and the vision characteristic type identifier of each set of electric needle device operation data, it establishes a connection with the detection device 110, so that the detection device 110 temporarily cuts off its connection with the neural network unit 133 and instead sends the post-acupuncture physiological data to the electric needle mode selection unit 150. According to the post-acupuncture physiological data, the electric needle mode selection unit 150 determines the set of electric needle device operation data with the largest variation in brain oxygenated hemoglobin as the target data.

Then, according to the operation serial number, the electric needle mode selection unit 150 sends an instruction to the detection device 110 so that the detection device 110 sends the detection result corresponding to the target electric needle device operation data to the neural network unit 133, and then disconnects from the detection device 110. In this way, the detection device 110 restores its connection with the neural network unit 133 and transmits the detection result corresponding to the target data to it. Meanwhile, the electric needle mode selection unit 150 converts the optimal electric needle device operation data and the corresponding vision characteristic type identifier into a new electronic acupuncture case and adds it to the knowledge base 1321 in the electric needle data acquisition unit 132.

In summary, while acupuncture is being applied to the user, the embodiment of the application acquires the user's physiological data in real time and adjusts the electric needle device operation data accordingly, so as to improve the acupuncture effect.

In the embodiment of the present application, the user's own condition has a great influence on the vision characteristics, so the vision characteristic type identifiers in the training samples cannot cover all vision characteristics. To solve this problem, the neural network unit 133 adds new vision characteristics and their corresponding data to the training samples in real time based on an incremental linear discriminant analysis algorithm, thereby ensuring the accuracy of the artificial intelligence algorithm's results. A specific implementation is as follows:

The neural network unit 133 adds the received current visual function parameters and physiological data to the training samples to form new training data. It then updates the within-class scatter matrix and between-class scatter matrix of the current model according to the new training data, and finally updates the neural network model according to the updated within-class and between-class scatter matrices.
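A minimal sketch of how the within-class and between-class scatter matrices can be updated when one new labelled sample arrives is given below; it follows the generic incremental-LDA recursions rather than the exact update of the neural network unit 133, and all names and values are assumptions.

```python
# Sketch of an incremental update of LDA scatter matrices when one labelled
# sample (visual function parameters + physiological data) arrives. Names assumed.
import numpy as np

class IncrementalScatter:
    def __init__(self, dim):
        self.dim = dim
        self.counts = {}                      # samples per class
        self.means = {}                       # per-class mean vectors
        self.Sw = np.zeros((dim, dim))        # within-class scatter
        self.total_sum = np.zeros(dim)
        self.total_n = 0

    def add(self, x, label):
        x = np.asarray(x, dtype=float)
        n = self.counts.get(label, 0)
        mean_old = self.means.get(label, np.zeros(self.dim))
        mean_new = mean_old + (x - mean_old) / (n + 1)
        # Welford-style update of the within-class scatter for this class.
        self.Sw += np.outer(x - mean_old, x - mean_new)
        self.counts[label] = n + 1
        self.means[label] = mean_new
        self.total_sum += x
        self.total_n += 1

    def between_class_scatter(self):
        mu = self.total_sum / self.total_n
        Sb = np.zeros((self.dim, self.dim))
        for label, n in self.counts.items():
            d = self.means[label] - mu
            Sb += n * np.outer(d, d)
        return Sb

if __name__ == "__main__":
    inc = IncrementalScatter(dim=3)
    inc.add([0.9, 1.2, 0.4], "amblyopia_grade_2")
    inc.add([1.1, 1.0, 0.5], "amblyopia_grade_2")
    inc.add([0.2, 0.3, 0.1], "amblyopia_grade_1")
    print(inc.Sw)
    print(inc.between_class_scatter())
```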

As described above, the electric needle data acquisition unit 132 may obtain multiple sets of electric needle device operation data during use, but cannot by itself determine which set is most suitable for the current user. Therefore, in the embodiment of the present application, the electric needle mode selection unit 150 determines the most suitable set of electric needle device operation data according to the physiological data fed back by the detection device 110, and generates a new electronic acupuncture case or new empirical data from that set and its electric needle parameters. Finally, the electric needle mode selection unit 150 sends the new electronic acupuncture case or empirical data to the knowledge base 1321 in the electric needle data acquisition unit 132, so that the inference system in the electric needle data acquisition unit 132 can adjust its reasoning according to the new case or empirical data.

In summary, detecting the user's physiological data in real time enables the training samples to be updated in real time, which improves the accuracy with which the vision characteristic encoding unit 131 and the electric needle data acquisition unit 132 select the electric needle device operation data.

The embodiment of the invention realizes automatic analysis of the cortical activity intensity of the user, screens the optimal acupuncture intensity, frequency and acupuncture point combination according to the analysis result, realizes intelligent selection of acupuncture points for electric acupuncture, and can guide the optimization of clinical schemes.

In one embodiment of the present application, the electronic acupuncture device comprises an acquisition module, an image processing module and an acupuncture point positioning module.

the acquisition module is used for acquiring a facial image of a user. The image processing module determines a boundary of a glasses area of the user from the face image of the user with the cloud. The acupuncture point positioning module is used for determining the specific positions of the eye acupuncture points in the eyebrow images according to the boundaries of the glasses areas.

Specifically, each eye acupuncture point corresponds to a position on the face: the Jingming acupoint lies in the depression slightly above the inner canthus, the Zanzhu acupoint in the depression at the inner end of the eyebrow, the Sizhukong acupoint in the depression at the tip of the eyebrow, the Tongziliao acupoint beside the outer canthus, the Chengqi acupoint between the eyeball and the lower edge of the orbit, the Taiyang acupoint on either side of the forehead above the extension line from the outer canthus, the retrobulbar acupoint at the junction of the outer 1/4 and the inner 3/4 of the lower edge of the orbit, and the Yuyao acupoint directly above the eyebrow on the forehead. Therefore, after image recognition of the three-dimensional facial image, the image processing module can identify the positions of the eyes, eyebrows and eye sockets in the facial image, and can then determine the positions of the eye acupoints according to the preset positional relationship of the eye acupoints on the face, i.e. the relationship between each eye acupoint and each facial feature. For example, to determine the position of the Jingming acupoint, the position of the eye can first be identified in the facial image, and then the position of the inner canthus, i.e. where the upper and lower lid margins meet at the medial end, can be determined within the eye. The center of the inner canthus is determined, and the point a preset distance from this center toward the inner side of the face is the position of the Jingming acupoint. The preset distance can be determined from the size of the inner canthus and the age of the user, for example as the sum of the distance between the center of the inner canthus and its farthest point and a distance corresponding to the preset age of the user. To determine the Zanzhu acupoint, it is only necessary to identify the position of the eyebrow in the facial image; the upper edge of the inner end of the eyebrow is then the position of the Zanzhu acupoint. The methods for determining the other acupoints are similar and are not repeated here.

In order to position each acupuncture point, image processing needs to be performed on the face image, so that after the positions of human eyes and eyebrows are determined in the face image, each acupuncture point is positioned according to the positions of the human eyes and the eyebrows. For convenience of description, the areas of the human eyes and the eyebrows are collectively referred to as an eyebrow area.

During image recognition, the eyebrow area is first coarsely located by training a corresponding positioning model. To train the positioning model, a plurality of facial images containing eyebrow areas can be collected in advance as training samples, with the facial images as input and the eyebrow areas as output. To reduce the recognition workload and the influence of color information, the facial image may be converted to grayscale before being input into the model; in the following embodiments, unless otherwise specified, all images are grayscale-processed.

The classifier can be obtained by training with the AdaBoost learning algorithm and is realized as a multi-stage (cascade) classifier structure. In the AdaBoost algorithm, each training sample is given a weight. In each iteration, if a training sample is correctly classified by the weak classifier of the current round, its weight is reduced before the weak classifier of the next round is learned, lowering its importance; conversely, the weights of samples misjudged by the current round's weak classifier are increased, so that the next round of training focuses on the samples that could not be correctly classified.
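For illustration, the sketch below shows the per-round sample weight update that the AdaBoost description above refers to: weights of correctly classified samples shrink and weights of misclassified samples grow. The decision-stump weak learner and the toy data are assumptions, and the cascade structure is not reproduced.

```python
# Sketch of the AdaBoost sample-weight update: after each round, the next weak
# classifier is trained mainly around the samples the current one got wrong.
import numpy as np

def train_stump(x, y, w):
    """Pick the threshold/polarity on a 1-D feature that minimizes weighted error."""
    best = (None, None, np.inf)
    for thr in np.unique(x):
        for polarity in (1, -1):
            pred = np.where(polarity * (x - thr) >= 0, 1, -1)
            err = np.sum(w[pred != y])
            if err < best[2]:
                best = (thr, polarity, err)
    return best

def adaboost(x, y, rounds=3):
    n = len(x)
    w = np.full(n, 1.0 / n)                      # every sample starts with equal weight
    for _ in range(rounds):
        thr, polarity, err = train_stump(x, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # weak classifier weight
        pred = np.where(polarity * (x - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)           # decrease if correct, increase if wrong
        w /= w.sum()
    return w

if __name__ == "__main__":
    x = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.7])
    y = np.array([-1, -1, -1, 1, 1, 1])          # e.g. non-eyebrow vs eyebrow region
    print(adaboost(x, y))
```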

By roughly positioning the face image, an eyebrow image including an eyebrow region can be acquired. In the eyebrow image, the eye area can first be located to determine the position of the eye area in the eyebrow image.

Wherein the eye region can be positioned from both the lateral and the longitudinal direction. Since the human eye is the most varying region in the face, both in the lateral and longitudinal directions, the human eye region can be located based on the gray level variation in the face.

When positioning the eye region in the lateral direction, a lateral operator may first be constructed from the size of the eyebrow image. To construct the lateral operator, a pixel-count index w is obtained from the number of pixels in each row of the eyebrow image, and the lateral operator is then built according to the value of w. For example, w may be obtained by dividing the number of pixels n in each row by a fixed number, rounding the result, and then adding another fixed number, such that w is an odd number greater than 1.

After w is obtained, if w is 5 the lateral operator may be [1, 1, 0, -1, -1]; if w is 9 the lateral operator may be [1, 1, 1, 1, 0, -1, -1, -1, -1]; and so on.

After the lateral operator is obtained, it is convolved with the eyebrow image to obtain a lateral gray-scale variation curve that characterizes the eyebrow image. Within the eyebrow region, the lateral direction through the eye contains structures such as the iris and sclera, where the gray-level variation is more pronounced than elsewhere, so the maximum of the lateral gray-scale variation curve can be taken as the center position of the eye region in the lateral direction. Once this center position is determined, the upper and lower boundaries of the eye region can be determined from it, thereby fixing the position of the eye region in the lateral direction.

Specifically, the upper and lower boundaries may be determined from the maximum of the lateral gray-scale variation curve of the eyebrow image. For example, starting from the position in the eyebrow image corresponding to that maximum, one moves upward and downward until the curve falls to a preset proportion of the maximum, for example half of it; those positions are taken as the upper and lower boundaries of the eye region. The area between the upper and lower boundaries then determines the eye region in the lateral direction.
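A simplified sketch of this lateral localization step (build the operator from the image width, convolve it along each row, take the strongest row as the eye center, and grow the upper and lower boundaries until the response drops to half the maximum) is shown below; the operator sizing rule, the way the per-row response is accumulated and the synthetic image are illustrative assumptions.

```python
# Rough sketch of the lateral (row-wise) localization of the eye region.
# The operator construction constants and the synthetic image are assumptions.
import numpy as np

def lateral_operator(row_length):
    """Operator like [1,...,1,0,-1,...,-1] whose length w is an odd number > 1."""
    w = max(3, (row_length // 20) | 1)           # assumed sizing rule, forced odd
    half = w // 2
    return np.array([1.0] * half + [0.0] + [-1.0] * half)

def eye_row_bounds(gray):
    op = lateral_operator(gray.shape[1])
    # Per-row response: total absolute convolution response of each row.
    response = np.array([np.abs(np.convolve(row, op, mode="valid")).sum() for row in gray])
    center = int(np.argmax(response))
    half_max = response[center] / 2.0
    top = bottom = center
    while top > 0 and response[top - 1] >= half_max:
        top -= 1
    while bottom < len(response) - 1 and response[bottom + 1] >= half_max:
        bottom += 1
    return top, center, bottom

if __name__ == "__main__":
    img = np.full((60, 100), 200.0)
    img[28:34, 20:80] = np.tile([30, 220] * 30, (6, 1))   # high-contrast "eye" band
    print(eye_row_bounds(img))
```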

After the lateral position of the eye region is determined, the eyebrow image can be cropped along the upper and lower boundaries to obtain a lateral-position image, and the longitudinal position of the eye region is then determined within this image.

To determine the longitudinal position of the eye region, the lateral-position image is traversed: for each pixel column of the lateral-position image, with abscissa denoted x0, the vertical gray-scale integration function over the interval [y1, y2] is calculated. The vertical gray-scale integration function may take the form:

V(x0) = Σ I(x0, y), summed over y from y1 to y2

where I(x0, y) is the gray value at (x0, y), and y1 and y2 are the coordinates corresponding to the upper and lower boundaries of the image. The position of the image in the coordinate system may be arbitrary, for example with the lower-left corner or the center point of the image as the origin, which is not limited here. Because the structure of the eye region is relatively fixed and the brightness difference between the iris, the sclera and other regions is relatively obvious, the vertical gray-scale integration function has peaks or valleys at the boundaries between the iris and the sclera. Combining this with prior knowledge of the approximate position of the eyes within the determined region, the positions corresponding to the two outermost peaks of the vertical gray-scale integration function are taken as the left and right boundaries of the eye region in the longitudinal direction. The prior knowledge refers to determining the approximate position of the eye region in the image from established knowledge such as the physiological structure of the human body.
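The vertical gray-scale integration just described can be sketched as follows: for each abscissa x0 the gray values between y1 and y2 are summed, and the outermost peaks of the resulting curve give the left and right boundaries of the eye; the simple peak-picking rule and the synthetic strip are illustrative assumptions.

```python
# Sketch of the vertical gray-scale integration function V(x0) = sum of I(x0, y)
# for y in [y1, y2], used to find the left/right boundaries of the eye region.
import numpy as np

def vertical_projection(strip):
    """strip: lateral-position image already cut between y1 and y2 (rows x columns)."""
    return strip.sum(axis=0)          # one value per abscissa x0

def outermost_peaks(curve):
    """Indices of strict local maxima; the outermost two serve as left/right boundaries."""
    peaks = [i for i in range(1, len(curve) - 1)
             if curve[i] > curve[i - 1] and curve[i] > curve[i + 1]]
    return (peaks[0], peaks[-1]) if len(peaks) >= 2 else (0, len(curve) - 1)

if __name__ == "__main__":
    strip = np.full((10, 50), 80.0)
    strip[:, 12] = 240.0              # bright sclera-like column (left of iris)
    strip[:, 20:30] = 20.0            # dark iris-like block
    strip[:, 37] = 240.0              # bright sclera-like column (right of iris)
    left, right = outermost_peaks(vertical_projection(strip))
    print(left, right)
```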

Once the lateral and longitudinal positions of the eye region, i.e. its upper, lower, left and right boundaries within the eyebrow image, are determined, the eye region is fixed, and the eyebrow image is then cropped to obtain an eye image. The eye image includes a left-eye image and a right-eye image; because they are processed in the same way below, both are referred to simply as the eye image for convenience of description.

After the eye image is obtained, the edge of the eye in the eye image is identified to further determine the shape and position of the eye.

Specifically, the edge of the eye may first be coarsely located to determine the positions of edge pixels. A coordinate system is constructed in the eye image, with the position of its origin not limited. Then, for each pixel in the eye image, the second-order directional derivative of the grayscale image I(x, y) along the gradient direction is calculated, where the gradient direction is perpendicular to the eye edge. The Laplacian of each pixel is then obtained from the second-order directional derivatives, and pixels at which the Laplacian has a zero crossing are marked as edge pixels.
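A compact sketch of this coarse, pixel-level edge localization (discrete Laplacian of the grayscale eye image, then mark sign changes as zero-crossing edge pixels) is given below; the 4-neighbour Laplacian kernel and the toy image are assumptions, and the sub-pixel refinement of the following paragraphs is not included.

```python
# Sketch of coarse eye-edge localization: compute a discrete Laplacian of the
# grayscale eye image and mark zero-crossing pixels as edge pixels.
import numpy as np

def laplacian(img):
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return lap

def zero_crossings(lap):
    """Mark pixels where the Laplacian changes sign against a right/lower neighbour."""
    edges = np.zeros(lap.shape, dtype=bool)
    edges[:, :-1] |= (lap[:, :-1] * lap[:, 1:]) < 0
    edges[:-1, :] |= (lap[:-1, :] * lap[1:, :]) < 0
    return edges

if __name__ == "__main__":
    eye = np.full((20, 20), 50.0)
    eye[5:15, 5:15] = 200.0                  # bright region whose border is the "edge"
    print(zero_crossings(laplacian(eye)).sum(), "edge pixels found")
```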

Locating the eye edge with the Laplacian is accurate only to the pixel level, yet when an actual face is imaged the true edge does not coincide exactly with pixel boundaries. Moreover, the eye image occupies only a small proportion of the whole facial image, so if the acquisition device has low accuracy, a large error may arise when determining the eye edge. Therefore, after determining the positions of the edge pixels, the embodiment of the application further locates the edge precisely within each edge pixel to determine its sub-pixel position.

Specifically, since the area of each pixel is already small, the edge within each edge pixel can be regarded as a straight line, whose equation is defined as:

x·cosα + y·sinα = l

where l is the distance from the coordinate center point to the edge, and α is the angle between the edge gradient direction and the x axis. The two-dimensional step edge can then be defined as:

f(x, y) = a + b·u(x·cosα + y·sinα − l)

where a is the gray value inside the edge, b is the edge height, and u(·) denotes the unit step function. With the two-dimensional step edge centered on the pixel (x, y), the sub-pixel position (x1, y1) of the edge can be expressed as:

(x1, y1) = (x + l·cosα, y + l·sinα)

at this point, the position of the edge of the eye is determined.

After the eye edges are determined in the eye images, the part of the eyebrow image above the upper boundary of the eye region can be used as the eyebrow candidate area. This area is cropped from the eyebrow image as the eyebrow candidate image, and recognition is then performed within it to determine the position of the eyebrow.

Specifically, after the eyebrow candidate image is obtained, it can be enhanced by histogram equalization. Its gray histogram is then computed, several candidate gray values are selected from the histogram, and the candidate gray values are arranged in descending order to form a candidate gray value set. A gray value at a trough of the histogram, i.e. one whose histogram count is lower than that of the gray values on both sides, can be selected as a candidate gray value.

Then, the eyebrow candidate image is binarized with each candidate gray value in the candidate gray value set, yielding several binarized eyebrow candidate images. The binarization may be defined as follows: pixels whose gray value is smaller than the candidate gray value are set to 255, i.e. white, and pixels whose gray value is greater than or equal to the candidate gray value are set to 0, i.e. black. For convenience of description, the eyebrow candidate image after binarization is referred to below simply as the binarized eyebrow candidate image. Because different candidate gray values are used, the contents of the binarized images differ, i.e. the extents of black and white differ between them. For each binarized eyebrow candidate image, a pixel whose gray value is smaller than the candidate gray value used for that image, i.e. a white pixel, is called a satisfying pixel, and the region formed by the satisfying pixels is called the effective region, i.e. the white region of the binarized image. The remaining region is called the ineffective region, i.e. the black region.

For each binarized eyebrow candidate image, if the area ratio of the effective region to the ineffective region is smaller than a preset threshold, for example 2/3, the effective region is not overly large; the candidate gray value used for that binarization is then recorded as an effective gray value, and the binarized eyebrow candidate images corresponding to all effective gray values are collected.

Image fusion is then performed on the binarized eyebrow candidate images corresponding to all effective gray values. During fusion, if a first effective region is completely contained in a second effective region, the two are merged into the second effective region, which is then said to include the first effective region. Similarly, if the second effective region is completely contained in a third effective region, the first, second and third effective regions are merged into the third effective region, which is then said to include the first and second effective regions. After all the binarized eyebrow candidate images have been fused, an effective region that includes more than a preset threshold number of effective regions, for example more than 3, is called a candidate effective region.

The information entropy of every candidate effective region is then calculated according to the formula:

H(A) = −Σj p(xj)·log p(xj)

where H(A) is the information entropy of the candidate effective region and p(xj) is the probability of gray value j occurring in the candidate effective region. Since eyebrows contain more information than skin, the candidate effective region with the largest information entropy can be taken as the eyebrow region. After the eyebrow region is obtained, the edge of the eyebrow can be determined within it by a method similar to that used for the eye edge in the eye region in the embodiment above. At this point the positions and shapes of the eyebrows and eyes are both determined, and the positions of the acupuncture points in the facial image can then be determined from the positional relationships among the acupoints, the eyes and the eyebrows.
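The eyebrow-candidate pipeline just described (pick trough gray values from the histogram, binarize once per value, filter by area ratio, then score candidate regions by information entropy) is sketched below in simplified form; the nested-region fusion step is omitted, and the synthetic data and constants are assumptions.

```python
# Simplified sketch of eyebrow-candidate selection: trough thresholds, binarization,
# area-ratio filtering, and entropy scoring H(A) = -sum_j p(x_j) * log2 p(x_j).
import numpy as np

def trough_gray_values(gray):
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    return [g for g in range(1, 255) if hist[g] < hist[g - 1] and hist[g] < hist[g + 1]]

def effective_mask(gray, threshold):
    return gray < threshold              # white (255) pixels of the binarized image

def information_entropy(region_values):
    hist, _ = np.histogram(region_values, bins=256, range=(0, 256))
    p = hist[hist > 0] / region_values.size
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    candidate = rng.integers(150, 220, size=(40, 80)).astype(float)   # skin-like block
    candidate[25:33, 10:60] = rng.integers(20, 90, size=(8, 50))      # eyebrow-like band
    masks = []
    for t in trough_gray_values(candidate):
        mask = effective_mask(candidate, t)
        white, black = mask.sum(), (~mask).sum()
        if white and black and white / black < 2 / 3:   # effective area must not dominate
            masks.append(mask)
    best = max(masks, key=lambda m: information_entropy(candidate[m]))
    print("entropy of chosen region:", information_entropy(candidate[best]))
```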

When the acupuncture points are determined from the positional relationships among the acupoints, the eyes and the eyebrows, they may be located directly on the facial image according to the preset positional relationships, or a grid may be constructed on the facial image and the acupoint positions determined from the grid. For example, the facial image may be divided into A rows in the horizontal direction and B columns in the vertical direction, yielding an A×B grid over the facial image. Since the edges of the eyes and eyebrows have already been determined in the facial image, the grid cells corresponding to those edges are known. The grid cell of each acupoint can then be obtained from the preset positional relationship between the cell containing the acupoint and the cells at the eye and eyebrow edges, thereby determining the position of each acupoint.
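One way to picture this grid-based variant is sketched below: the facial image is divided into an A×B grid, a reference cell is taken from the detected eye landmarks, and each acupoint cell is read off from a preset cell offset relative to that reference; the offsets here are purely illustrative assumptions and carry no clinical meaning.

```python
# Sketch of the A x B grid variant of acupoint positioning. All offsets are
# illustrative assumptions, not clinically meaningful values.

def to_cell(point, img_shape, grid=(20, 20)):
    """Map a (row, col) pixel to its (grid_row, grid_col) cell in an A x B grid."""
    a, b = grid
    return (int(point[0] * a / img_shape[0]), int(point[1] * b / img_shape[1]))

# Assumed offsets (in grid cells) from the inner-canthus cell to each acupoint cell.
ACUPOINT_OFFSETS = {"Jingming": (0, -1), "Zanzhu": (-2, 0), "Chengqi": (2, 0)}

def acupoint_cells(inner_canthus_px, img_shape):
    base = to_cell(inner_canthus_px, img_shape)
    return {name: (base[0] + dr, base[1] + dc) for name, (dr, dc) in ACUPOINT_OFFSETS.items()}

if __name__ == "__main__":
    print(acupoint_cells(inner_canthus_px=(240, 300), img_shape=(480, 640)))
```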

In addition to determining the acupoint positions from their positional relationship to the eyes and eyebrows after the positions and shapes of the eyes and eyebrows have been obtained by image recognition of the facial image, the acupoint positions can also be identified directly by training a corresponding recognition model.

Specifically, a number of three-dimensional face models on which the eye acupoints have already been determined may be collected in advance as training samples: the faces of multiple users are scanned in three dimensions to obtain corresponding three-dimensional face models, the positions of the eye acupoints are then marked manually on these models, and the annotated models serve as training samples for the recognition model. The recognition model is then trained with a suitable algorithm, which may be a convolutional neural network or another deep neural network and is not further limited here. After the recognition model has been trained, a three-dimensional face model of the user is collected, and the positions of the eye acupoints are recognized on it with the trained model.

In addition, after the facial image is acquired, the electronic acupuncture device first judges whether it is blurred; if so, the blurred image is deblurred to improve its sharpness and the accuracy of subsequent image recognition.

Specifically, first, since processing a non-blurred image may deteriorate the original quality of the image, the electronic acupuncture device may determine a gradient map of a face image according to the following formula:

gx(i, j) = |f(i+1, j) − f(i, j)|

gy(i, j) = |f(i, j+1) − f(i, j)|

and then determines whether the facial image is blurred according to the following formula.

where gx(i, j) and gy(i, j) are the gradient maps of the image f in the x and y directions respectively, m and n are the numbers of rows and columns of the image f, and Gnum is the total number of non-zero gradient values in the x-direction and y-direction gradient maps. When S < 7, the electronic acupuncture device judges the facial image to be blurred; the value 7 can be determined experimentally.
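The gradient-based blur check can be pictured with the sketch below. Because the exact sharpness score S is not reproduced above, the score used here (the count of non-zero gradient values scaled by the image size) is only an assumed stand-in; the threshold 7 is kept for consistency with the text.

```python
# Sketch of the blur check: build x/y absolute-difference gradient maps and count
# non-zero gradient values (Gnum). The score S below is an assumed stand-in.
import numpy as np

def gradient_maps(f):
    gx = np.abs(f[1:, :] - f[:-1, :])      # |f(i+1, j) - f(i, j)|
    gy = np.abs(f[:, 1:] - f[:, :-1])      # |f(i, j+1) - f(i, j)|
    return gx, gy

def sharpness_score(f):
    gx, gy = gradient_maps(f)
    g_num = np.count_nonzero(gx) + np.count_nonzero(gy)
    return 100.0 * g_num / f.size          # assumed scaling; not the patented formula

if __name__ == "__main__":
    sharp = np.zeros((64, 64))
    sharp[::2, :] = 255                                     # high-contrast pattern
    blurred = np.full((64, 64), 128.0)                      # featureless image
    for name, img in (("sharp", sharp), ("blurred", blurred)):
        s = sharpness_score(img)
        print(name, round(s, 1), "blurred" if s < 7 else "ok")
```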

Secondly, the electronic acupuncture device can determine a foreground blurred image in the blurred image according to the following formula:

where q(x, y) is the foreground blurred image, c is a third preset value, d is a fourth preset value, N is the total number of pixels in the neighborhood of pixel (x, y) in the blurred image, h(x, y) is the set of pixels in that neighborhood, I(s, t) is the gray value of a pixel (s, t) in the neighborhood, and m(x, y) is the mean of the gray values in the neighborhood of (x, y).

Finally, the electronic acupuncture device can process the determined foreground blurred image with Gaussian filtering to obtain a sharp foreground image, use this sharp foreground image as the deblurred facial image, and perform image recognition on the processed facial image.

The embodiment of the application provides a brain vision detection and analysis method based on nerve feedback, which comprises the following steps:

step 1, acquiring physiological data and visual function parameters of a user by a detection device, wherein the physiological data comprises: oxyhemoglobin concentration change data and/or deoxyhemoglobin concentration change data.

Step 2, the first server updates the current training sample and the current neural network model according to the current visual function parameters and the physiological data; and outputting the type and grade of the visual disorder of the user by taking the current visual function parameters and physiological data as input through the updated neural network model.

Step 3, the first server determines at least one piece of electronic acupuncture data according to the vision disorder type and the vision disorder grade; the first server prestores a plurality of electronic acupuncture data, and the electronic acupuncture data are used for indicating the corresponding relation between the vision disorder type and/or the vision disorder grade and the electric acupuncture parameter, the acupuncture point and the acupuncture type; according to at least one piece of electronic acupuncture data, corresponding electric needle equipment operation data are matched for a user, and the electric needle equipment operation data comprise: electric needle parameters, acupuncture types and acupuncture points.

Step 4, the electronic acupuncture device performs electronic acupuncture for the user according to the electric needle device operation data.

The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.

The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
