Deep learning-based voice recognition system and method for intelligent helmet

Document No.: 1867234    Publication date: 2021-11-23

Abstract: The invention discloses a deep learning-based voice recognition system and method for an intelligent helmet (designed by 徐晴雯, 张谨, 彭兴政 and 王雪峰, filed 2021-09-26). The system comprises an environment monitoring unit, a one-way analysis unit, a key acquisition unit, a fusion analysis unit, a processor, an instrument Bluetooth end, a noise elimination unit, a display unit, a management unit and a mobile phone end. Through safety judgment, the distances on the left and right sides of the motorcycle are measured and correspondingly marked as the left-direction distance and the right-direction distance; from these two distance series the system judges how many vehicles the motorcycle is passing at the moment, and therefore whether the user is in a dangerous situation, providing data support for that decision; a one-way fusion analysis of the left-direction number and the right-direction number yields a parameter that comprehensively reflects the amount of surrounding traffic, namely the fusion direction number.

1. A deep learning-based voice recognition system for an intelligent helmet, characterized by comprising an environment monitoring unit, a one-way analysis unit, a key acquisition unit, a fusion analysis unit, a processor, an instrument Bluetooth end, a noise elimination unit, a display unit, a management unit and a mobile phone end;

the mobile phone end comprises a data interaction unit, a controller and a demand-corresponding unit;

the environment monitoring unit is used for monitoring the surrounding driving environment and performing a grade analysis operation, the specific steps of which are as follows:

step one: the environment monitoring unit comprises a transverse distance measuring module and an intelligent judging module, which are arranged at the handlebar or the front of the motorcycle where they are not blocked by the rider;

step two: the transverse distance measuring module first measures the distances on the left and right sides of the motorcycle, and the left-side and right-side distances are correspondingly marked as the left-direction distance and the right-direction distance;

step three: the left-direction distance and the right-direction distance are acquired once every time T1 to obtain a left-direction distance group Zi, i = 1, ..., n, and a right-direction distance group Yi, i = 1, ..., n;

step four: the left-direction distance group Zi and the right-direction distance group Yi are transmitted to the intelligent judgment module;

step five: the intelligent judgment module receives the left-direction distance group Zi and the right-direction distance group Yi transmitted by the transverse distance measuring module;

step six: the distance groups are then analyzed automatically; the left-direction distance group Zi is acquired first and subjected to left-direction determination to obtain the left-direction number;

step seven: the right-direction distance group Yi is acquired and subjected to right-direction determination to obtain the right-direction number;

the environment monitoring unit is used for transmitting the left-direction number and the right-direction number to the one-way analysis unit, and the one-way analysis unit receives the left-direction number and the right-direction number transmitted by the environment monitoring unit and performs one-way fusion analysis on them to obtain a fusion direction number Rg;

the unidirectional analysis unit is used for transmitting the fusion direction number Rg to the fusion analysis unit, and the fusion analysis unit receives the fusion direction number Rg transmitted by the unidirectional analysis unit;

the key acquisition unit is used for acquiring the real-time speed of the motorcycle and transmitting the real-time speed to the fusion analysis unit, and the fusion analysis unit receives the real-time speed transmitted by the key acquisition unit and analyzes the safety value by combining the real-time speed to obtain the safety value Az;

the fusion analysis unit is used for transmitting the safety value Az to the processor, and the processor receives the safety value Az transmitted by the fusion analysis unit;

the processor generates a stop signal when the safety value Az exceeds a preset value X4, and the processor automatically cuts off the display of the display unit and cuts off the connection between the processor and the data interaction unit when the stop signal is generated.

2. The deep learning-based voice recognition system for an intelligent helmet according to claim 1, wherein the specific steps of the left-direction determination in step six are as follows:

S1: Zi is acquired first, and the single increment is calculated using the following formula:

single increment = Zi - Z(i-1);

S2: when the single increment is less than or equal to X1 and Z(i-1) is less than or equal to X2, a passing signal is generated; both X1 and X2 are preset values, and X1 is not more than X2;

S3: monitoring is carried out continuously for a time T2, where T2 is a preset time and T2 > T1; the number of passing signals generated is counted and marked as the left-direction number; the monitoring of step six is then repeated.

3. The deep learning-based voice recognition system for an intelligent helmet according to claim 1, wherein the specific steps of the right-direction determination in step seven are as follows:

S1: Yi is acquired first, and single increment two is calculated using the following formula:

single increment two = Yi - Y(i-1);

S2: when single increment two is less than or equal to X1 and Y(i-1) is less than or equal to X2, a passing signal is generated;

S3: monitoring is carried out continuously for time T2; the number of passing signals generated is counted and marked as the right-direction number; the monitoring of step seven is then repeated.

4. The deep learning-based voice recognition system for an intelligent helmet according to claim 1, wherein the one-way fusion analysis comprises the following specific steps:

SS 1: acquiring a left direction number and a right direction number;

SS 2: and then calculating the fusion direction number Rg by using a formula, wherein the specific calculation formula is as follows:

Rg = 1.25 × left-direction number + right-direction number;

SS 3: the fusion direction number Rg is obtained.

5. The deep learning-based voice recognition system for an intelligent helmet according to claim 1, wherein the safety value analysis comprises the following specific steps:

S01: the fusion direction number Rg is acquired;

S02: the real-time speed of the motorcycle is acquired once every time T3 to obtain a real-time speed group Vj, j = 1, ..., n;

S03: a speed parameter group is set by taking the latest real-time speed Vn and the nine values immediately preceding it, i.e. V(n-9) to Vn;

S04: the speed parameter group Vj, j = n-9, ..., n, is then acquired and averaged, and the average value is marked as Pv;

S05: the speed parameters satisfying Vj - Pv ≥ X3 are then retained and averaged again;

S06: the resulting mean value is marked as the target speed value Mv;

S07: the real-time safety value Az is then calculated using the following formula:

Az = 0.532 × Mv + 0.468 × Rg; in the formula, all calculation factors are made dimensionless before the calculation.
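
Claims 4 and 5 combine the two counts into the fusion direction number Rg and weight it against a target speed derived from the last ten speed samples. The sketch below follows those formulas; the claims do not state how the factors are made dimensionless, so the reference values ref_speed and ref_rg used for normalisation, like the example numbers, are assumptions.

    # Sketch of the fusion direction number Rg and the safety value Az
    # (normalisation constants are assumed; the weights 1.25, 0.532 and 0.468
    # are the ones given in the claims).
    def fusion_direction_number(left_number, right_number):
        return 1.25 * left_number + right_number

    def safety_value(speeds, rg, x3, ref_speed=60.0, ref_rg=10.0):
        group = speeds[-10:]                        # V(n-9) .. Vn, sampled every T3
        pv = sum(group) / len(group)                # first average Pv
        high = [v for v in group if v - pv >= x3]   # retain Vj with Vj - Pv >= X3
        mv = sum(high) / len(high) if high else pv  # re-average -> target speed Mv
        # make both factors dimensionless (assumed reference values), then weight
        return 0.532 * (mv / ref_speed) + 0.468 * (rg / ref_rg)

    speeds = [42, 45, 47, 50, 52, 55, 58, 61, 63, 66, 70]   # km/h, newest last
    rg = fusion_direction_number(left_number=2, right_number=1)
    az = safety_value(speeds, rg, x3=5.0)
    print(round(rg, 2), round(az, 3))               # -> 3.5 0.752
    if az > 0.8:                                    # X4: preset stop threshold
        print("stop signal: cut display and helmet-phone link")

Because Mv is taken from the above-average speed samples, Az rises both when the rider is accelerating and when many vehicles are being passed, which is when the stop signal of claim 1 suppresses the display and the phone link.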

6. The deep learning-based voice recognition system for the intelligent helmet according to claim 1, wherein the instrument Bluetooth end is further used for performing voice collection for the helmet, the voice collection operation being carried out automatically in combination with the noise elimination unit; the specific steps of the voice collection operation are:

SS 01: firstly, real-time environmental sound collection is carried out, and the concrete collection steps are as follows:

SS 011: at the beginning, background voice with the duration of T4 is collected;

SS 012: after the time T4, if no sound information of the user is detected, background voice continues to be collected; meanwhile, each time a new unit of background voice is recorded, the background voice of the earliest unit time period is deleted;

SS 013: when the user's sound information is detected, the environmental sound of the most recent duration T4 is acquired;

SS 02: then automatically acquiring the sound information of the user;

SS 03: marking the sound information as a comprehensive sound;

SS 04: simultaneously marking the ambient sound as a background sound;

SS 05: removing background sound in the comprehensive sound to obtain pure human voice;

SS 06: the pure human voice is subjected to speech-to-text conversion and instruction recognition to obtain a user instruction;

the noise elimination unit is used for transmitting the user instruction to the processor, the processor receives the user instruction transmitted by the noise elimination unit and transmits it to the data interaction unit, the data interaction unit is used for transmitting the user instruction to the controller, and the controller controls the demand-corresponding unit to execute the instruction, the picture of the instruction being executed forming a feedback picture; the controller is used for returning the feedback picture to the processor through the data interaction unit, the processor is used for transmitting the feedback picture to the display unit, and the display unit receives the feedback picture transmitted by the processor and displays it in real time.
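
The voice collection operation above keeps a rolling background recording of duration T4 and removes it from the comprehensive sound once the user starts speaking. The sketch below illustrates one way to do this with a ring buffer and a basic magnitude spectral subtraction; the sample rate, frame size, buffer length and the subtraction itself are simplifying assumptions, since the claim does not prescribe a particular noise-removal algorithm.

    # Sketch of the voice collection operation: a rolling background buffer of
    # duration T4 (SS 011-SS 013) and a basic magnitude spectral subtraction
    # standing in for "removing the background sound from the comprehensive
    # sound" (SS 05). Sample rate, frame size and buffer length are assumptions.
    import collections
    import numpy as np

    RATE = 16000                     # samples per second (assumed)
    T4 = 1.0                         # seconds of background kept (assumed)
    FRAME = 512                      # samples per recorded unit (assumed)

    background = collections.deque(maxlen=int(T4 * RATE / FRAME))

    def record_background(frame):
        """SS 011/SS 012: append a new unit; the oldest unit falls out automatically."""
        background.append(np.asarray(frame, dtype=np.float32))

    def remove_background(comprehensive):
        """SS 013-SS 05: subtract the averaged background spectrum from the speech."""
        comprehensive = np.asarray(comprehensive, dtype=np.float32)
        if not background:
            return comprehensive
        noise_mag = np.mean([np.abs(np.fft.rfft(f, FRAME)) for f in background], axis=0)
        out = np.zeros_like(comprehensive)
        for start in range(0, len(comprehensive) - FRAME + 1, FRAME):
            chunk = comprehensive[start:start + FRAME]
            spec = np.fft.rfft(chunk, FRAME)
            mag = np.maximum(np.abs(spec) - noise_mag, 0.0)      # spectral subtraction
            out[start:start + FRAME] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), FRAME)
        return out

    # Example with synthetic data: noise-only units first, then speech plus noise.
    rng = np.random.default_rng(0)
    for _ in range(30):
        record_background(0.1 * rng.standard_normal(FRAME))
    t = np.arange(4 * FRAME) / RATE
    mixed = 0.5 * np.sin(2 * np.pi * 300 * t) + 0.1 * rng.standard_normal(4 * FRAME)
    pure_voice = remove_background(mixed)            # ready for speech-to-text (SS 06)
    print(pure_voice.shape)

The deque discards the oldest background unit automatically each time a new one is recorded, which is the rolling behaviour described in SS 012, and the cleaned signal is what would be passed to the speech-to-text conversion of SS 06.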

7. The deep learning-based voice recognition method for the intelligent helmet according to any one of claims 1 to 6, wherein the method specifically comprises the following steps:

step one: safety judgment is carried out, the specific judgment mode being as follows:

S1: the distances on the left and right sides of the motorcycle are measured, and the left-side and right-side distances are correspondingly marked as the left-direction distance and the right-direction distance;

S2: the left-direction distance and the right-direction distance are acquired once every time T1 to obtain a left-direction distance group Zi, i = 1, ..., n, and a right-direction distance group Yi, i = 1, ..., n;

S3: the left-direction distance group Zi is acquired and subjected to left-direction determination, specifically:

S31: Zi is acquired first, and the single increment is calculated using the following formula:

single increment = Zi - Z(i-1);

S32: when the single increment is less than or equal to X1 and Z(i-1) is less than or equal to X2, a passing signal is generated; both X1 and X2 are preset values, and X1 is not more than X2;

S33: monitoring is carried out continuously for a time T2, where T2 is a preset time and T2 > T1; the number of passing signals generated is counted and marked as the left-direction number; the monitoring of this step is then repeated;

S4: the right-direction distance group Yi is acquired and subjected to right-direction determination, specifically:

S41: Yi is acquired first, and single increment two is calculated using the following formula:

single increment two = Yi - Y(i-1);

S42: when single increment two is less than or equal to X1 and Y(i-1) is less than or equal to X2, a passing signal is generated;

S43: monitoring is carried out continuously for time T2; the number of passing signals generated is counted and marked as the right-direction number; the monitoring is then continued;

step two: the one-way fusion analysis is carried out on the left direction number and the right direction number, and specifically comprises the following steps:

SS 1: acquiring a left direction number and a right direction number;

SS 2: and then calculating the fusion direction number Rg by using a formula, wherein the specific calculation formula is as follows:

Rg = 1.25 × left-direction number + right-direction number;

SS 3: the fusion direction number Rg is obtained;

step three: the real-time speed of the motorcycle is obtained, and safety value analysis is carried out in combination with the real-time speed; the specific safety value analysis steps are as follows:

S01: the fusion direction number Rg is acquired;

S02: the real-time speed of the motorcycle is acquired once every time T3 to obtain a real-time speed group Vj, j = 1, ..., n;

S03: a speed parameter group is set by taking the latest real-time speed Vn and the nine values immediately preceding it, i.e. V(n-9) to Vn;

S04: the speed parameter group Vj, j = n-9, ..., n, is then acquired and averaged, and the average value is marked as Pv;

S05: the speed parameters satisfying Vj - Pv ≥ X3 are then retained and averaged again;

S06: the resulting mean value is marked as the target speed value Mv;

S07: the real-time safety value Az is then calculated using the following formula:

Az = 0.532 × Mv + 0.468 × Rg; in the formula, all calculation factors are made dimensionless before the calculation;

step four: when the safety value Az exceeds a preset value X4, a stop signal is generated, when the stop signal is generated, the display of the display unit is automatically cut off, and meanwhile, the connection between the helmet and the mobile phone is cut off;

step five: when the mobile phone and the helmet are connected, they interact with each other; voice collection is carried out through the instrument Bluetooth end, and the voice collection operation is performed automatically; the specific steps of the voice collection operation are:

SS 01: firstly, real-time environmental sound collection is carried out, and the concrete collection steps are as follows:

SS 011: at the beginning, background voice with the duration of T4 is collected;

SS 012: after the time T4, if no sound information of the user is detected, background voice continues to be collected; meanwhile, each time a new unit of background voice is recorded, the background voice of the earliest unit time period is deleted;

SS 013: when the user's sound information is detected, the environmental sound of the most recent duration T4 is acquired;

SS 02: then automatically acquiring the sound information of the user;

SS 03: marking the sound information as a comprehensive sound;

SS 04: simultaneously marking the ambient sound as a background sound;

SS 05: removing background sound in the comprehensive sound to obtain pure human voice;

SS 06: the pure human voice is subjected to speech-to-text conversion and instruction recognition to obtain a user instruction;

step six: the corresponding command is executed according to the user instruction.
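
Step six of the method executes the command corresponding to the recognised user instruction. The claims do not enumerate the commands, so the phrase table and handler names in the following sketch are hypothetical placeholders for the demand-corresponding unit.

    # Sketch of step six: map the recognised instruction text to an action.
    # The command phrases and handlers below are hypothetical; the patent does
    # not enumerate the commands the demand-corresponding unit supports.
    def dispatch(user_instruction, handlers):
        """Run the first registered handler whose phrase appears in the text."""
        text = user_instruction.strip().lower()
        for phrase, handler in handlers.items():
            if phrase in text:
                return handler()
        return "unrecognised instruction"

    handlers = {
        "answer the call": lambda: "call answered",
        "play music": lambda: "music playing",
        "navigate home": lambda: "navigation started",
    }
    print(dispatch("please play music", handlers))   # -> music playing

A simple substring match keeps the dispatcher tolerant of filler words around the command phrase in the recognised text.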

Technical Field

The invention belongs to the field of voice recognition, relates to a voice recognition technology, and particularly relates to a deep learning-based voice recognition system and method for an intelligent helmet.

Background

For conventional server-and-client type speech recognition apparatuses, publication No. CN106537494A notes that when either speech recognition result is not returned, the user has to speak again from the beginning, which places a large burden on the user. In the apparatus it discloses, a speech recognition device transmits an input speech to a server and receives a 1st speech recognition result, which is the result of speech recognition of the transmitted input speech by the server; it also performs speech recognition of the input speech itself to obtain a 2nd speech recognition result. It then determines a speech rule conforming to the 2nd speech recognition result with reference to speech rules expressing the structure of the speech elements of the input speech, determines a speech recognition state indicating the speech element for which no recognition result was obtained, based on the correspondence between the presence or absence of the 1st speech recognition result, the presence or absence of the 2nd speech recognition result and the speech elements constituting the speech rule, generates a response sentence inquiring about the speech element for which no recognition result was obtained in accordance with the determined speech recognition state, and outputs the response sentence.

However, that voice recognition technology does not address the situation of a motorcycle user whose helmet end and mobile phone end are interconnected while riding, where voice recognition may fail under certain conditions; to overcome this technical drawback, the present solution is provided.

Disclosure of Invention

The invention aims to provide a speech recognition system and method based on deep learning for an intelligent helmet.

The purpose of the invention can be realized by the following technical scheme:

A deep learning-based voice recognition system for an intelligent helmet comprises an environment monitoring unit, a one-way analysis unit, a key acquisition unit, a fusion analysis unit, a processor, an instrument Bluetooth end, a noise elimination unit, a display unit, a management unit and a mobile phone end;

the mobile phone end comprises a data interaction unit, a controller and a demand-corresponding unit;

the environment monitoring unit is used for monitoring the surrounding driving environment and performing a grade analysis operation, the specific steps of which are as follows:

step one: the environment monitoring unit comprises a transverse distance measuring module and an intelligent judging module, which are arranged at the handlebar or the front of the motorcycle where they are not blocked by the rider;

step two: the transverse distance measuring module first measures the distances on the left and right sides of the motorcycle, and the left-side and right-side distances are correspondingly marked as the left-direction distance and the right-direction distance;

step three: the left-direction distance and the right-direction distance are acquired once every time T1 to obtain a left-direction distance group Zi, i = 1, ..., n, and a right-direction distance group Yi, i = 1, ..., n;

step four: the left-direction distance group Zi and the right-direction distance group Yi are transmitted to the intelligent judgment module;

step five: the intelligent judgment module receives the left-direction distance group Zi and the right-direction distance group Yi transmitted by the transverse distance measuring module;

step six: the distance groups are then analyzed automatically; the left-direction distance group Zi is acquired first and subjected to left-direction determination to obtain the left-direction number;

step seven: the right-direction distance group Yi is acquired and subjected to right-direction determination to obtain the right-direction number;

the environment monitoring unit is used for transmitting the left-direction number and the right-direction number to the one-way analysis unit, and the one-way analysis unit receives the left-direction number and the right-direction number transmitted by the environment monitoring unit and performs one-way fusion analysis on them to obtain a fusion direction number Rg;

the unidirectional analysis unit is used for transmitting the fusion direction number Rg to the fusion analysis unit, and the fusion analysis unit receives the fusion direction number Rg transmitted by the unidirectional analysis unit;

the key acquisition unit is used for acquiring the real-time speed of the motorcycle and transmitting the real-time speed to the fusion analysis unit, and the fusion analysis unit receives the real-time speed transmitted by the key acquisition unit and analyzes the safety value by combining the real-time speed to obtain the safety value Az;

the fusion analysis unit is used for transmitting the safety value Az to the processor, and the processor receives the safety value Az transmitted by the fusion analysis unit;

the processor generates a stop signal when the safety value Az exceeds a preset value X4, and the processor automatically cuts off the display of the display unit and cuts off the connection between the processor and the data interaction unit when the stop signal is generated.

Further, the specific steps of the left-direction determination in step six are as follows:

S1: Zi is acquired first, and the single increment is calculated using the following formula:

single increment = Zi - Z(i-1);

S2: when the single increment is less than or equal to X1 and Z(i-1) is less than or equal to X2, a passing signal is generated; both X1 and X2 are preset values, and X1 is not more than X2;

S3: monitoring is carried out continuously for a time T2, where T2 is a preset time and T2 > T1; the number of passing signals generated is counted and marked as the left-direction number; the monitoring of step six is then repeated.

Further, the specific steps of the right-direction determination in step seven are as follows:

S1: Yi is acquired first, and single increment two is calculated using the following formula:

single increment two = Yi - Y(i-1);

S2: when single increment two is less than or equal to X1 and Y(i-1) is less than or equal to X2, a passing signal is generated;

S3: monitoring is carried out continuously for time T2; the number of passing signals generated is counted and marked as the right-direction number; the monitoring of step seven is then repeated.

Further, the one-way fusion analysis comprises the following specific steps:

SS 1: acquiring a left direction number and a right direction number;

SS 2: and then calculating the fusion direction number Rg by using a formula, wherein the specific calculation formula is as follows:

Rg = 1.25 × left-direction number + right-direction number;

SS 3: the fusion direction number Rg is obtained.

Further, the specific steps of the safety value analysis are as follows:

S01: the fusion direction number Rg is acquired;

S02: the real-time speed of the motorcycle is acquired once every time T3 to obtain a real-time speed group Vj, j = 1, ..., n;

S03: a speed parameter group is set by taking the latest real-time speed Vn and the nine values immediately preceding it, i.e. V(n-9) to Vn;

S04: the speed parameter group Vj, j = n-9, ..., n, is then acquired and averaged, and the average value is marked as Pv;

S05: the speed parameters satisfying Vj - Pv ≥ X3 are then retained and averaged again;

S06: the resulting mean value is marked as the target speed value Mv;

S07: the real-time safety value Az is then calculated using the following formula:

Az = 0.532 × Mv + 0.468 × Rg; in the formula, all calculation factors are made dimensionless before the calculation.

Further, the instrument Bluetooth end is also used to perform voice collection for the helmet, the voice collection operation being carried out automatically in combination with the noise elimination unit; the specific steps of the voice collection operation are:

SS 01: firstly, real-time environmental sound collection is carried out, and the concrete collection steps are as follows:

SS 011: at the beginning, background voice with the duration of T4 is collected;

SS 012: after the time T4, if no sound information of the user is detected, background voice continues to be collected; meanwhile, each time a new unit of background voice is recorded, the background voice of the earliest unit time period is deleted;

SS 013: when the user's sound information is detected, the environmental sound of the most recent duration T4 is acquired;

SS 02: then automatically acquiring the sound information of the user;

SS 03: marking the sound information as a comprehensive sound;

SS 04: simultaneously marking the ambient sound as a background sound;

SS 05: removing background sound in the comprehensive sound to obtain pure human voice;

SS 06: the pure human voice is subjected to speech-to-text conversion and instruction recognition to obtain a user instruction;

the noise elimination unit is used for transmitting the user instruction to the processor, the processor receives the user instruction transmitted by the noise elimination unit and transmits it to the data interaction unit, the data interaction unit is used for transmitting the user instruction to the controller, and the controller controls the demand-corresponding unit to execute the instruction, the picture of the instruction being executed forming a feedback picture; the controller is used for returning the feedback picture to the processor through the data interaction unit, the processor is used for transmitting the feedback picture to the display unit, and the display unit receives the feedback picture transmitted by the processor and displays it in real time.

A speech recognition method based on deep learning for an intelligent helmet specifically comprises the following steps:

step one: safety judgment is carried out, the specific judgment mode being as follows:

S1: the distances on the left and right sides of the motorcycle are measured, and the left-side and right-side distances are correspondingly marked as the left-direction distance and the right-direction distance;

S2: the left-direction distance and the right-direction distance are acquired once every time T1 to obtain a left-direction distance group Zi, i = 1, ..., n, and a right-direction distance group Yi, i = 1, ..., n;

S3: the left-direction distance group Zi is acquired and subjected to left-direction determination, specifically:

S31: Zi is acquired first, and the single increment is calculated using the following formula:

single increment = Zi - Z(i-1);

S32: when the single increment is less than or equal to X1 and Z(i-1) is less than or equal to X2, a passing signal is generated; both X1 and X2 are preset values, and X1 is not more than X2;

S33: monitoring is carried out continuously for a time T2, where T2 is a preset time and T2 > T1; the number of passing signals generated is counted and marked as the left-direction number; the monitoring of this step is then repeated;

S4: the right-direction distance group Yi is acquired and subjected to right-direction determination, specifically:

S41: Yi is acquired first, and single increment two is calculated using the following formula:

single increment two = Yi - Y(i-1);

S42: when single increment two is less than or equal to X1 and Y(i-1) is less than or equal to X2, a passing signal is generated;

S43: monitoring is carried out continuously for time T2; the number of passing signals generated is counted and marked as the right-direction number; the monitoring is then continued;

step two: the one-way fusion analysis is carried out on the left direction number and the right direction number, and specifically comprises the following steps:

SS 1: acquiring a left direction number and a right direction number;

SS 2: and then calculating the fusion direction number Rg by using a formula, wherein the specific calculation formula is as follows:

Rg = 1.25 × left-direction number + right-direction number;

SS 3: the fusion direction number Rg is obtained;

step three: the real-time speed of the motorcycle is obtained, and safety value analysis is carried out in combination with the real-time speed; the specific safety value analysis steps are as follows:

S01: the fusion direction number Rg is acquired;

S02: the real-time speed of the motorcycle is acquired once every time T3 to obtain a real-time speed group Vj, j = 1, ..., n;

S03: a speed parameter group is set by taking the latest real-time speed Vn and the nine values immediately preceding it, i.e. V(n-9) to Vn;

S04: the speed parameter group Vj, j = n-9, ..., n, is then acquired and averaged, and the average value is marked as Pv;

S05: the speed parameters satisfying Vj - Pv ≥ X3 are then retained and averaged again;

S06: the resulting mean value is marked as the target speed value Mv;

S07: the real-time safety value Az is then calculated using the following formula:

Az = 0.532 × Mv + 0.468 × Rg; in the formula, all calculation factors are made dimensionless before the calculation;

step four: when the safety value Az exceeds a preset value X4, a stop signal is generated, when the stop signal is generated, the display of the display unit is automatically cut off, and meanwhile, the connection between the helmet and the mobile phone is cut off;

step five: when the mobile phone and the helmet are connected, they interact with each other; voice collection is carried out through the instrument Bluetooth end, and the voice collection operation is performed automatically; the specific steps of the voice collection operation are:

SS 01: firstly, real-time environmental sound collection is carried out, and the concrete collection steps are as follows:

SS 011: at the beginning, background voice with the duration of T4 is collected;

SS 012: after the time T4, if no sound information of the user is detected, background voice continues to be collected; meanwhile, each time a new unit of background voice is recorded, the background voice of the earliest unit time period is deleted;

SS 013: when the user's sound information is detected, the environmental sound of the most recent duration T4 is acquired;

SS 02: then automatically acquiring the sound information of the user;

SS 03: marking the sound information as a comprehensive sound;

SS 04: simultaneously marking the ambient sound as a background sound;

SS 05: removing background sound in the comprehensive sound to obtain pure human voice;

SS 06: the pure human voice is subjected to speech-to-text conversion and instruction recognition to obtain a user instruction;

step six: the corresponding command is executed according to the user instruction.

The invention has the beneficial effects that:

According to the invention, through safety judgment, the distances on the left and right sides of the motorcycle are measured and correspondingly marked as the left-direction distance and the right-direction distance; from these two distance series it is determined how many vehicles the motorcycle is passing at the moment, and therefore whether the user is in a dangerous situation, which provides data support for that decision; one-way fusion analysis is performed on the left-direction number and the right-direction number to obtain a parameter capable of comprehensively expressing the number of surrounding vehicles, namely the fusion direction number;

then, carrying out safety value analysis by combining with the real-time speed of the motorcycle to obtain a safety value Az, generating a stop signal when the safety value Az exceeds a preset value X4, automatically cutting off the display of the display unit when the stop signal is generated, and simultaneously cutting off the connection between the helmet and the mobile phone;

interaction between the mobile phone and the helmet is carried out while they are connected; voice collection is performed through the instrument Bluetooth end, the voice collection operation is carried out automatically, and noise reduction is achieved in the process by means of a pre-recorded background; the invention is simple, effective and easy to use.

Drawings

In order to facilitate understanding for those skilled in the art, the present invention will be further described with reference to the accompanying drawings.

FIG. 1 is a block diagram of the system of the present invention;

fig. 2 is a detailed structural diagram of the interaction between the helmet and the mobile phone according to the present invention.

Detailed Description

As shown in fig. 1, a deep learning-based voice recognition system for an intelligent helmet comprises an environment monitoring unit, a one-way analysis unit, a key acquisition unit, a fusion analysis unit, a processor, an instrument Bluetooth end, a noise elimination unit, a display unit, a management unit and a mobile phone end;

the mobile phone end comprises a data interaction unit, a controller and a demand-corresponding unit;

the environment monitoring unit is used for monitoring the surrounding driving environment and performing a grade analysis operation, the specific steps of which are as follows:

step one: the environment monitoring unit comprises a transverse distance measuring module and an intelligent judging module, which are arranged at the handlebar or the front of the motorcycle where they are not blocked by the rider;

step two: the transverse distance measuring module first measures the distances on the left and right sides of the motorcycle, and the left-side and right-side distances are correspondingly marked as the left-direction distance and the right-direction distance;

step three: the left-direction distance and the right-direction distance are acquired once every time T1 to obtain a left-direction distance group Zi, i = 1, ..., n, and a right-direction distance group Yi, i = 1, ..., n;

step four: the left-direction distance group Zi and the right-direction distance group Yi are transmitted to the intelligent judgment module;

step five: the intelligent judgment module receives the left-direction distance group Zi and the right-direction distance group Yi transmitted by the transverse distance measuring module;

step six: the distance groups are then analyzed automatically; the left-direction distance group Zi is acquired first and subjected to left-direction determination, specifically:

S1: Zi is acquired first, and the single increment is calculated using the following formula:

single increment = Zi - Z(i-1);

S2: when the single increment is less than or equal to X1 and Z(i-1) is less than or equal to X2, a passing signal is generated; both X1 and X2 are preset values, and X1 is not more than X2;

S3: monitoring is carried out continuously for a time T2, where T2 is a preset time and T2 > T1; the number of passing signals generated is counted and marked as the left-direction number; the monitoring of step six is then repeated;

step seven: the right-direction distance group Yi is acquired and subjected to right-direction determination, specifically:

S1: Yi is acquired first, and single increment two is calculated using the following formula:

single increment two = Yi - Y(i-1);

S2: when single increment two is less than or equal to X1 and Y(i-1) is less than or equal to X2, a passing signal is generated;

S3: monitoring is carried out continuously for time T2; the number of passing signals generated is counted and marked as the right-direction number; the monitoring of step seven is then repeated;

the environment monitoring unit is used for transmitting the left-direction number and the right-direction number to the one-way analysis unit, and the one-way analysis unit receives the left-direction number and the right-direction number transmitted by the environment monitoring unit and performs one-way fusion analysis on them, specifically:

SS 1: acquiring a left direction number and a right direction number;

SS 2: and then calculating the fusion direction number Rg by using a formula, wherein the specific calculation formula is as follows:

Rg = 1.25 × left-direction number + right-direction number;

SS 3: the fusion direction number Rg is obtained;

the unidirectional analysis unit is used for transmitting the fusion direction number Rg to the fusion analysis unit, and the fusion analysis unit receives the fusion direction number Rg transmitted by the unidirectional analysis unit;

the key acquisition unit is used for acquiring the real-time speed of the motorcycle and transmitting the real-time speed to the fusion analysis unit, the fusion analysis unit receives the real-time speed transmitted by the key acquisition unit and analyzes the safety value by combining the real-time speed, and the specific safety value analysis step is as follows:

S01: the fusion direction number Rg is acquired;

S02: the real-time speed of the motorcycle is acquired once every time T3 to obtain a real-time speed group Vj, j = 1, ..., n;

S03: a speed parameter group is set by taking the latest real-time speed Vn and the nine values immediately preceding it, i.e. V(n-9) to Vn;

S04: the speed parameter group Vj, j = n-9, ..., n, is then acquired and averaged, and the average value is marked as Pv;

S05: the speed parameters satisfying Vj - Pv ≥ X3 are then retained and averaged again;

S06: the resulting mean value is marked as the target speed value Mv;

S07: the real-time safety value Az is then calculated using the following formula:

Az = 0.532 × Mv + 0.468 × Rg; in the formula, all calculation factors are made dimensionless before the calculation;

the fusion analysis unit is used for transmitting the safety value Az to the processor, and the processor receives the safety value Az transmitted by the fusion analysis unit;

the processor generates a stop signal when the safety value Az exceeds a preset value X4, and the processor automatically cuts off the display of the display unit and simultaneously cuts off the connection between the processor and the data interaction unit when the processor generates the stop signal;

the instrument Bluetooth end is also used to perform voice collection for the helmet, the voice collection operation being carried out automatically in combination with the noise elimination unit; the specific steps of the voice collection operation are:

SS 01: firstly, real-time environmental sound collection is carried out, and the concrete collection steps are as follows:

SS 011: at the beginning, background voice with the duration of T4 is collected;

SS 012: after the time T4, if no sound information of the user is detected, background voice continues to be collected; meanwhile, each time a new unit of background voice is recorded, the background voice of the earliest unit time period is deleted;

SS 013: when the user's sound information is detected, the environmental sound of the most recent duration T4 is acquired;

SS 02: then automatically acquiring the sound information of the user;

SS 03: marking the sound information as a comprehensive sound;

SS 04: simultaneously marking the ambient sound as a background sound;

SS 05: removing background sound in the comprehensive sound to obtain pure human voice;

SS 06: the pure human voice is subjected to speech-to-text conversion and instruction recognition to obtain a user instruction;

the noise elimination unit is used for transmitting the user instruction to the processor, the processor receives the user instruction transmitted by the noise elimination unit and transmits it to the data interaction unit, the data interaction unit is used for transmitting the user instruction to the controller, and the controller controls the demand-corresponding unit to execute the instruction, the picture of the instruction being executed forming a feedback picture; the controller is used for returning the feedback picture to the processor through the data interaction unit, the processor is used for transmitting the feedback picture to the display unit, and the display unit receives the feedback picture transmitted by the processor and displays it in real time.

As shown in fig. 2, in a specific implementation the voice signal is collected at the helmet end, amplified and filtered, converted into a digital signal by A/D conversion, and subjected to preliminary noise elimination and echo cancellation. This work is implemented with a Bluetooth module, which ensures low power consumption, prolongs battery life and leaves the design of the existing Bluetooth helmet unchanged.
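
A rough software model of that helmet-side front end is sketched below: analogue gain, a first-order high-pass filter standing in for the analogue filtering, a crude noise gate as the preliminary noise elimination, and 16-bit quantisation standing in for the A/D conversion. All parameter values are assumptions, and echo cancellation is omitted.

    # Rough software model (assumptions throughout) of the helmet-side front
    # end: analogue gain, a first-order high-pass filter in place of the
    # analogue filtering, a crude noise gate as the preliminary noise
    # elimination, and 16-bit quantisation standing in for the A/D conversion.
    import numpy as np

    def helmet_front_end(analog, gain=4.0, alpha=0.97, gate=0.02):
        x = gain * np.asarray(analog, dtype=np.float64)      # amplification
        hp = np.empty_like(x)                                # DC-blocking high-pass
        hp[0] = x[0]
        for n in range(1, len(x)):
            hp[n] = alpha * (hp[n - 1] + x[n] - x[n - 1])
        hp[np.abs(hp) < gate] = 0.0                          # preliminary noise gate
        return np.clip(np.round(hp * 32767), -32768, 32767).astype(np.int16)  # A/D

    t = np.arange(0, 0.01, 1 / 16000)                        # 10 ms of microphone signal
    mic = 0.05 * np.sin(2 * np.pi * 400 * t) + 0.002 * np.random.randn(len(t))
    print(helmet_front_end(mic)[:8])                         # int16 frames sent over Bluetooth

Keeping this stage this light is consistent with running it on the helmet's Bluetooth module without affecting battery life.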

After the helmet transmits the voice signal to the instrument, the strong computing power of the instrument SoC is used to perform the voice recognition. This stage has two characteristics: to recognize voice semantics effectively in a strong-noise environment, a large model library and a voice library with strong-noise backgrounds are established; and a deep learning function is added to the algorithm, so that recognition accuracy improves as the number of uses increases. The information recognized from the voice is sent to the APP at the mobile phone end through Bluetooth SPP (Serial Port Profile) for function matching, realizing the voice control function;
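
The deep learning model itself is not specified in the disclosure; the sketch below uses a nearest-centroid classifier over acoustic feature vectors purely to illustrate the stated property that recognition improves with the number of uses, by refining each command's centroid with every confirmed utterance. The feature dimension and the command set are assumptions.

    # Stand-in (not the patent's model) for "recognition accuracy improves with
    # the number of uses": a nearest-centroid classifier over acoustic feature
    # vectors whose per-command centroid is refined with every confirmed
    # utterance.
    import numpy as np

    class IncrementalCommandModel:
        def __init__(self, commands, dim):
            self.centroids = {c: np.zeros(dim) for c in commands}
            self.counts = {c: 0 for c in commands}

        def predict(self, features):
            return min(self.centroids,
                       key=lambda c: np.linalg.norm(features - self.centroids[c]))

        def update(self, features, confirmed_command):
            """Online update: running mean of the features of each confirmed command."""
            n = self.counts[confirmed_command] + 1
            c = self.centroids[confirmed_command]
            self.centroids[confirmed_command] = c + (features - c) / n
            self.counts[confirmed_command] = n

    model = IncrementalCommandModel(["play music", "answer the call"], dim=4)
    model.update(np.array([1.0, 0.2, 0.1, 0.0]), "play music")
    model.update(np.array([0.1, 0.9, 0.8, 0.1]), "answer the call")
    print(model.predict(np.array([0.9, 0.3, 0.2, 0.1])))   # -> play music

In the described system the confirmed command would come from the function matching performed by the mobile phone APP over Bluetooth SPP, which is what would close the feedback loop that makes the recogniser improve with use.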

the function of the voice recognition part can be integrated on the intelligent helmet, so that the voice control of the helmet to the mobile phone can be realized. Has the characteristic of flexible configuration.

On the whole, the voice control function is realized with essentially no increase in electronics cost.

a speech recognition method based on deep learning for an intelligent helmet comprises the following steps:

step one: safety judgment is carried out, the specific judgment mode being as follows:

S1: the distances on the left and right sides of the motorcycle are measured, and the left-side and right-side distances are correspondingly marked as the left-direction distance and the right-direction distance;

S2: the left-direction distance and the right-direction distance are acquired once every time T1 to obtain a left-direction distance group Zi, i = 1, ..., n, and a right-direction distance group Yi, i = 1, ..., n;

S3: the left-direction distance group Zi is acquired and subjected to left-direction determination, specifically:

S31: Zi is acquired first, and the single increment is calculated using the following formula:

single increment = Zi - Z(i-1);

S32: when the single increment is less than or equal to X1 and Z(i-1) is less than or equal to X2, a passing signal is generated; both X1 and X2 are preset values, and X1 is not more than X2;

S33: monitoring is carried out continuously for a time T2, where T2 is a preset time and T2 > T1; the number of passing signals generated is counted and marked as the left-direction number; the monitoring of this step is then repeated;

S4: the right-direction distance group Yi is acquired and subjected to right-direction determination, specifically:

S41: Yi is acquired first, and single increment two is calculated using the following formula:

single increment two = Yi - Y(i-1);

S42: when single increment two is less than or equal to X1 and Y(i-1) is less than or equal to X2, a passing signal is generated;

S43: monitoring is carried out continuously for time T2; the number of passing signals generated is counted and marked as the right-direction number; the monitoring is then continued;

step two: the one-way fusion analysis is carried out on the left direction number and the right direction number, and specifically comprises the following steps:

SS 1: acquiring a left direction number and a right direction number;

SS 2: and then calculating the fusion direction number Rg by using a formula, wherein the specific calculation formula is as follows:

Rg = 1.25 × left-direction number + right-direction number;

SS 3: the fusion direction number Rg is obtained;

step three: the real-time speed of the motorcycle is obtained, and safety value analysis is carried out in combination with the real-time speed; the specific safety value analysis steps are as follows:

S01: the fusion direction number Rg is acquired;

S02: the real-time speed of the motorcycle is acquired once every time T3 to obtain a real-time speed group Vj, j = 1, ..., n;

S03: a speed parameter group is set by taking the latest real-time speed Vn and the nine values immediately preceding it, i.e. V(n-9) to Vn;

S04: the speed parameter group Vj, j = n-9, ..., n, is then acquired and averaged, and the average value is marked as Pv;

S05: the speed parameters satisfying Vj - Pv ≥ X3 are then retained and averaged again;

S06: the resulting mean value is marked as the target speed value Mv;

S07: the real-time safety value Az is then calculated using the following formula:

Az = 0.532 × Mv + 0.468 × Rg; in the formula, all calculation factors are made dimensionless before the calculation;

step four: when the safety value Az exceeds a preset value X4, a stop signal is generated, when the stop signal is generated, the display of the display unit is automatically cut off, and meanwhile, the connection between the helmet and the mobile phone is cut off;

step five: when the mobile phone and the helmet are connected, they interact with each other; voice collection is carried out through the instrument Bluetooth end, and the voice collection operation is performed automatically; the specific steps of the voice collection operation are:

SS 01: firstly, real-time environmental sound collection is carried out, and the concrete collection steps are as follows:

SS 011: at the beginning, background voice with the duration of T4 is collected;

SS 012: after the time T4, if no sound information of the user is detected, background voice continues to be collected; meanwhile, each time a new unit of background voice is recorded, the background voice of the earliest unit time period is deleted;

SS 013: when the user's sound information is detected, the environmental sound of the most recent duration T4 is acquired;

SS 02: then automatically acquiring the sound information of the user;

SS 03: marking the sound information as a comprehensive sound;

SS 04: simultaneously marking the ambient sound as a background sound;

SS 05: removing background sound in the comprehensive sound to obtain pure human voice;

SS 06: the pure human voice is subjected to speech-to-text conversion and instruction recognition to obtain a user instruction;

step six: the corresponding command is executed according to the user instruction.

In operation, the deep learning-based voice recognition system for an intelligent helmet carries out safety judgment: the distances on the left and right sides of the motorcycle are measured and correspondingly marked as the left-direction distance and the right-direction distance; from these two distance series it is determined how many vehicles the motorcycle is passing at the moment, and therefore whether the user is in a dangerous situation, which provides data support for that decision; one-way fusion analysis is performed on the left-direction number and the right-direction number to obtain a parameter capable of comprehensively expressing the number of surrounding vehicles, namely the fusion direction number;

then, carrying out safety value analysis by combining with the real-time speed of the motorcycle to obtain a safety value Az, generating a stop signal when the safety value Az exceeds a preset value X4, automatically cutting off the display of the display unit when the stop signal is generated, and simultaneously cutting off the connection between the helmet and the mobile phone;

interaction between the mobile phone and the helmet is carried out while they are connected; voice collection is performed through the instrument Bluetooth end, the voice collection operation is carried out automatically, and noise reduction is achieved in the process by means of a pre-recorded background; the invention is simple, effective and easy to use.

The foregoing is merely exemplary and illustrative of the present invention and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the invention as defined in the following claims.
