An intelligent blind-assistance device capable of automatically finding its way, and a method thereof

Document No.: 1777525    Publication date: 2019-12-06

Reading note: this invention, "一种自动寻路的盲人智能辅助装置及其方法" (an intelligent blind-assistance device capable of automatically finding its way, and a method thereof), was designed and created by 文耀立, 杨琪钧, 秦慧平 and 陈思瀚 on 2019-08-27. Abstract: The invention discloses an intelligent blind-assistance device that automatically finds its way, comprising blind-guiding glasses and a machine guide dog. The blind-guiding glasses acquire real-time image information of the current scene and process and recognise it. The machine guide dog comprises a control-and-computation module, a voice module, a navigation module, an obstacle-avoidance module and a drive module. The control-and-computation module handles the exchange and processing of image information and the control of, and data transmission to, the drive module. The voice module provides human-machine interaction; the navigation module interacts with the voice module, plans a path according to voice instructions and returns the planned path to the user. The obstacle-avoidance module acquires data about the surrounding environment and performs obstacle avoidance; the drive module moves the machine guide dog. The invention realises an integrated blind-guiding kit with voice interaction, real-time navigation, route planning, real-time scene detection, specific-scene reminders, face recognition, assisted text reading, weather queries, intelligent chat and obstacle avoidance.

1. An intelligent blind-assistance device capable of automatically finding its way, characterised by comprising blind-guiding glasses and a machine guide dog. The blind-guiding glasses comprise a recognition module with a camera, which acquires real-time image information of the current scene and processes and recognises it. The machine guide dog comprises a control-and-computation module together with a voice module, a navigation module, an obstacle-avoidance module and a drive module connected to it. The control-and-computation module comprises a Raspberry Pi 3B+, an STM32 controller and a communication module: the Raspberry Pi 3B+ exchanges and processes image information, the STM32 controller controls the drive module, and the communication module handles data transmission. The navigation module comprises a positioning module; it interacts with the voice module, plans a path according to voice instructions and returns the planned path to the user. The obstacle-avoidance module acquires surrounding-environment data with a laser radar and performs obstacle avoidance on those data. The drive module comprises a DC gear motor, a battery and a motor-control driver board; the driver board, built around the STM32 controller, drives the DC gear motor to power the machine guide dog's movement, and the battery supplies the energy. The positioning module comprises a BeiDou/GPS dual-mode positioning module used to obtain the current position.

2. The intelligent blind-assistance device of claim 1, wherein processing and recognising the real-time image information specifically comprises: recognising objects over the network through an AI open platform, and recognising faces with a face-recognition algorithm, namely the face_recognition library developed by Facebook.

3. The intelligent blind-assistance device of claim 1, wherein the voice module realises intelligent dialogue by combining an online voice API with a local TTS/STT voice module.

4. The intelligent blind-assistance device of claim 1, wherein the positioning module is a BeiDou/GPS dual-mode positioning module, specifically an ATGM336H.

5. The intelligent blind-assistance device of claim 1, wherein the obstacle-avoidance module comprises an attitude sensor and a laser radar, and performs its obstacle-avoidance calculation by a potential-energy gradient-descent method.

6. The intelligent blind-assistance device of claim 5, wherein the attitude sensor is a 9-axis MPU9250 comprising a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer, and the laser radar is a Shanchuan (杉川) Delta-3.

7. The intelligent blind-assistance device of claim 5, wherein the obstacle-avoidance processing specifically comprises: the machine guide dog moving through the environment is modelled as being in a virtual force field in which each obstacle exerts a repulsive force on it and the final destination exerts an attractive force; the resultant of attraction and repulsion is used as the control force that steers the machine guide dog around obstacles to the final destination. A term for the relative position of the final destination and the machine guide dog is added to the repulsive potential function so that the potential field is globally minimal at the final destination; the repulsive potential function is:

U_rep(X) = (1/2) · η · (1/ρ(X_R, X_i) − 1/ρ_0)² · ρ(X_R, X_G)  if ρ(X_R, X_i) ≤ ρ_0, and U_rep(X) = 0 otherwise,

where η is the repulsion gain, preset here to 1; X_R is the coordinate vector of the machine guide dog's current position; X_i is the coordinate vector of the i-th obstacle point; ρ(X_R, X_i) is the distance between the machine guide dog and the i-th obstacle; and ρ_0 is a distance constant with value 3.

The attraction function is:

F_att(X) = −∇U_att(X) = k · (X_G − X_R),

where F_att(X) is the attraction function, k is the attraction constant, X_R is the coordinate vector of the machine guide dog's current position and X_G is the coordinate vector of the destination.

Finally, the resultant force controls the motion of the chassis:

F(X) = F_att(X) + Σ_i F_rep(X_i),

where F_att(X) is the attraction function and F_rep(X_i) is the repulsion function at the i-th obstacle point.

When the accumulated resultant force exceeds the repulsion that a single obstacle point at distance 0.5 would produce, the machine guide dog is commanded to rotate to the specified direction.

8. The intelligent blind-assistance device of claim 1, wherein driving the machine guide dog to move specifically comprises: the motor-control driver board receives a motion instruction from the Raspberry Pi 3B+, parses it, and converts it into a PWM wave of corresponding pulse width and polarity, which is output to an H-bridge chip DRV8833 whose H-bridge drives the DC gear motor. Meanwhile, a Hall encoder disc on the main shaft of the DC gear motor measures its rotation angle; the STM32 controller counts the pulses, sends them to the Raspberry Pi 3B+ as odometry data, and solves for the motor speed, which serves as the input to a PID closed-loop control algorithm. The PID algorithm takes the current motor speed, computes the error against the preset speed, performs proportional, integral and derivative calculations on that error, and outputs their weighted sum as the duty cycle of the final PWM wave.

9. An intelligent blind-assistance method with automatic way-finding, characterised by comprising the following steps:

S1. Receive and recognise the voice signal, complete voice-feature extraction and wake up the voice interaction system. The voice-input function specifically comprises:

S101. Receive the voice signal; an offline wake-up engine extracts the characteristic information of the voice signal and matches it against keyword information. If the match succeeds, the system wakes up and enters a function-selection standby state; otherwise the system stays asleep and waits for the correct wake word.

S102. While the system remains awake, detect function-selection voice commands in real time. If a navigation-function keyword is triggered, switch to the navigation module (go to S2); if a text-reading keyword is triggered, switch to the real-time text-recognition module (go to S3); if a scene-recognition keyword is triggered, switch to the real-time scene-recognition module (go to S4); any other input automatically triggers the voice-assistant module (go to S5).

S2. Navigation flow: the destination is entered by voice, a path is planned from the destination information, a navigation scheme is produced, the surroundings are detected in real time, and the obstacle-avoidance and specific-scene prompt functions are performed. Specifically:

S201. According to the destination the user speaks, retrieve the destination through a network map interface and announce it by voice; once the user confirms, fix it as the final destination, otherwise retrieve and announce again.

S202. Obtain the current position from the positioning module and the final destination from the destination information.

S203. Plan a path from the current position and the final destination, and determine the travel route, i.e. the navigation scheme.

S204. Offer a walking scheme and a public-transport scheme according to the user's needs. For the walking scheme, continually refresh the path-planning request along the travel route, combining the current position in real time with the coordinates of the turning points of the different road sections. For the public-transport scheme, automatically select the most convenient and fastest option along the travel route, announce it by voice, and also provide an en-route transfer scheme.

S205. The user selects a scheme and the guide dog sets off; the real-time scene-recognition function of step S4 runs synchronously.

S206. The laser radar detects obstacle information over the full 360 degrees and yields a point cloud of the surroundings; by calculation, obstacles ahead are found, whether the vehicle body can pass is estimated, and when an obstacle is met a passable direction to either side is sought.

S207. From the obtained obstacle information, the control-and-computation module derives the angle and distance of each obstacle, corrects the travel route accordingly, and, combining the navigation scheme with real-time GPS information, drives the guide dog along the corrected route.

S208. Acquire sensor data from the 9-axis attitude sensor, compute the Euler angles, and use them to further assist in correcting the travel path.

S3. Assisted text-reading flow: capture the current image, call an interface to parse the text it contains, and read the text out by voice after conversion. Specifically:

S301. Capture the current picture, upload it to a cloud OCR interface, complete the interface call, obtain the returned JSON result, and parse out all the text information contained in the picture.

S302. Send the text over serial communication to the local TTS module, which converts it to speech, drives the loudspeaker and announces the result.

S4. Real-time scene-recognition flow: snapshot the scene, analyse the scene and any faces, and announce the result. Specifically:

S401. Capture the current image in real time, at one frame every 5 seconds.

S402. Upload the picture to a cloud image-analysis interface to analyse its content. If a face is detected, call the face library to recognise it (step S403); otherwise announce the target object by voice, and give a special prompt if a scene of particular concern to the blind, such as stairs or an escalator, is detected.

S403. Face recognition: extract facial features with the face_recognition library developed by Facebook. Convert the image to a histogram of oriented gradients (HOG) to remove the effect of lighting on apparent face colour; mark 68 facial landmarks with a face-landmark estimation algorithm and apply an affine transformation so that faces seen at different angles are turned to a frontal pose; encode the face with the library, which was trained on a database of a million faces, extracting 128 principal facial features for recognition. Using the Labeled Faces in the Wild photo database, linearly classify the 128 principal features of each photo with an SVM to obtain feature weights and train a preliminary classification model. Pre-store photos of the faces to be recognised and label them with names to build a local face library; at recognition time, feed the local library and the probe face into the preliminary model, which outputs weighted feature-value encodings; a k-NN nearest-neighbour model sorts the encoded values, the k-NN algorithm determines the recognition result, and the user is prompted accordingly.

S5. Voice-assistant flow: perform semantic recognition, complete specific functions such as chat and weather queries, and provide voice interaction.

Technical Field

The invention relates to the field of intelligent assistive devices for the blind, and in particular to an intelligent blind-assistance device capable of automatically finding its way, and a method thereof.

Background

Human beings obtain most external information through their eyes, but to the blind the world can seem unfriendly. Activities that sighted people take for granted, such as travelling, recognising objects and scenes, reading, and calling family and friends, are difficult for blind people to complete autonomously; only some of them can be managed through touch and hearing. The daily life and travel of the blind have long been a topic of social concern, yet occupied tactile paving, hard-to-find Braille signs and the like prevent such facilities from serving their purpose. At present there are few products on the market that address these pain points or provide a high-quality navigation experience for the blind.

To let blind people experience life conveniently, a device must not only recognise scenes such as zebra crossings and stairs and remind the blind to take care, but also recognise everyday objects such as bottled water, banknotes and computer keyboards. Most traditional blind-guiding robots cannot recognise scenes or objects, and their single, clumsy mode of advancing and avoiding obstacles severely limits the blind when travelling.

On many occasions in daily life, information is conveyed by text; even blind people who read Braille cannot obtain that information (they cannot see where a Braille sign is located), and the market lacks equipment that helps blind people read ordinary text.

Disclosure of Invention

The invention mainly aims to overcome the shortcomings of the prior art by providing an intelligent blind-assistance device that automatically finds its way. It adopts a dual-controller scheme, a Raspberry Pi 3B+ plus an STM32F1-series microcontroller, with divided responsibilities, saving energy and computing resources and reducing cost. A laser-radar obstacle-avoidance system assisted by a 9-axis attitude sensor gives high obstacle-avoidance precision and greater safety. A PID algorithm regulates the motor's current speed, preventing the guide dog from drifting off route because of manufacturing differences between motors. Intelligent dialogue is realised by combining an online voice API with a local TTS/STT voice module, flexibly meeting the dialogue needs of different scenes while satisfying both accuracy and response-speed requirements. BeiDou/GPS dual-mode positioning and an API are used for path planning, lowering planning cost and raising accuracy. A potential-energy gradient method is used for obstacle avoidance, simulating motion in a force field, so the avoidance process is smoother, without sudden stops or turns, and is suitable for the blind.

The invention also aims to provide an intelligent blind-assistance method with automatic way-finding.

The main purpose of the invention is achieved by the following technical scheme:

An intelligent blind-assistance device capable of automatically finding its way, characterised by comprising blind-guiding glasses and a machine guide dog. The blind-guiding glasses comprise a recognition module with a camera, which acquires real-time image information of the current scene and processes and recognises it. The machine guide dog comprises a control-and-computation module together with a voice module, a navigation module, an obstacle-avoidance module and a drive module connected to it. The control-and-computation module comprises a Raspberry Pi 3B+, an STM32 controller and a communication module: the Raspberry Pi 3B+ exchanges and processes image information, the STM32 controller controls the drive module, and the communication module handles data transmission. The navigation module comprises a positioning module; it interacts with the voice module, plans a path according to voice instructions and returns the planned path to the user. The obstacle-avoidance module acquires surrounding-environment data with a laser radar and performs obstacle avoidance on those data. The drive module comprises a DC gear motor, a battery and a motor-control driver board; the driver board, built around the STM32 controller, drives the DC gear motor to power the machine guide dog's movement, and the battery supplies the energy. The positioning module comprises a BeiDou/GPS dual-mode positioning module.

Further, processing and recognising the real-time image information specifically comprises: recognising objects over the network through an AI open platform, and recognising faces with a face-recognition algorithm, namely the face_recognition library developed by Facebook.

Further, the voice module realises intelligent dialogue by combining an online voice API with a local TTS/STT voice module.

Further, the positioning module is a BeiDou/GPS dual-mode positioning module.

Further, the BeiDou/GPS dual-mode positioning module is an ATGM336H.

Further, the STM32 controller is an STM32F103C8T6.

Further, the laser radar is a Shanchuan (杉川) Delta-3.

Further, the DC gear motor is a JGB37-520.

Further, the obstacle-avoidance module comprises an attitude sensor and, as obstacle-detection sensor, a laser radar, and performs its obstacle-avoidance calculation by a potential-energy gradient-descent method.

Further, the attitude sensor is a 9-axis MPU9250, comprising a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer.

Further, the obstacle-avoidance processing specifically comprises: the machine guide dog moving through the environment is modelled as being in a virtual force field in which each obstacle exerts a repulsive force on it and the final destination exerts an attractive force; the resultant of attraction and repulsion is used as the control force that steers the machine guide dog around obstacles to the final destination. A term for the relative position of the final destination and the machine guide dog is added to the repulsive potential function so that the potential field is globally minimal at the final destination; the repulsive potential function is:

U_rep(X) = (1/2) · η · (1/ρ(X_R, X_i) − 1/ρ_0)² · ρ(X_R, X_G)  if ρ(X_R, X_i) ≤ ρ_0, and U_rep(X) = 0 otherwise,

where η is the repulsion gain, preset here to 1; X_R is the coordinate vector of the machine guide dog's current position; X_i is the coordinate vector of the i-th obstacle point; ρ(X_R, X_i) is the distance between the machine guide dog and the i-th obstacle; and ρ_0 is a distance constant with value 3.

The attraction function is:

F_att(X) = −∇U_att(X) = k · (X_G − X_R),

where F_att(X) is the attraction function, k is the attraction constant, X_R is the coordinate vector of the machine guide dog's current position and X_G is the coordinate vector of the destination.

Finally, the resultant force controls the motion of the chassis:

F(X) = F_att(X) + Σ_i F_rep(X_i),

where F_att(X) is the attraction function and F_rep(X_i) is the repulsion function at the i-th obstacle point.

When the accumulated resultant force exceeds the repulsion that a single obstacle point at distance 0.5 would produce, the machine guide dog is commanded to rotate to the specified direction.
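The attraction-repulsion resultant described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: η = 1 and ρ_0 = 3 as given in the text, an attraction constant k = 1 chosen arbitrarily, and a standard improved-potential-field repulsion (with a goal-distance factor) standing in for the formula whose image is not reproduced here:

```python
import math

ETA = 1.0    # repulsion gain eta (preset to 1 in the text)
RHO0 = 3.0   # repulsion influence distance rho_0 (preset to 3)
K_ATT = 1.0  # attraction constant k (assumed value; not given in the text)

def attraction(xr, xg):
    # F_att = k * (X_G - X_R): pulls the guide dog toward the destination
    return (K_ATT * (xg[0] - xr[0]), K_ATT * (xg[1] - xr[1]))

def repulsion(xr, xi, xg):
    # Improved-APF repulsion: the classic gradient magnitude is scaled by
    # the distance to the goal, so the field is globally minimal there.
    rho = math.dist(xr, xi)
    if rho >= RHO0 or rho == 0.0:
        return (0.0, 0.0)          # obstacle outside the influence radius
    rho_g = math.dist(xr, xg)
    mag = ETA * (1.0 / rho - 1.0 / RHO0) / (rho * rho) * rho_g
    # direction: from the obstacle toward the robot
    return (mag * (xr[0] - xi[0]) / rho, mag * (xr[1] - xi[1]) / rho)

def resultant(xr, xg, obstacles):
    # F = F_att + sum_i F_rep(X_i): the control force for the chassis
    fx, fy = attraction(xr, xg)
    for xi in obstacles:
        rx, ry = repulsion(xr, xi, xg)
        fx, fy = fx + rx, fy + ry
    return (fx, fy)
```

With the guide dog at the origin, the goal 10 m ahead and one obstacle 1 m ahead, the repulsion opposes but does not overwhelm the attraction, so the resultant still points toward the goal, just weakened: this is the smooth, no-sudden-stop behaviour the text attributes to the method.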

Further, driving the machine guide dog to move specifically comprises: the motor-control driver board receives a motion instruction from the Raspberry Pi 3B+, parses it, and converts it into a PWM wave of corresponding pulse width and polarity, which is output to an H-bridge chip DRV8833 whose H-bridge drives the DC gear motor. Meanwhile, a Hall encoder disc on the main shaft of the DC gear motor measures its rotation angle; the STM32 controller counts the pulses, sends them to the Raspberry Pi 3B+ as odometry data, and solves for the motor speed, which serves as the input to a PID closed-loop control algorithm. The PID algorithm takes the current motor speed, computes the error against the preset speed, performs proportional, integral and derivative calculations on that error, and outputs their weighted sum as the duty cycle of the final PWM wave.
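The PID speed loop described above might look like the following minimal sketch. The gains, sample time and duty-cycle range are illustrative assumptions, not values from the patent; on the real device the measured speed would come from the Hall encoder counts:

```python
class SpeedPID:
    """Turn a wheel-speed error into a PWM duty cycle.

    Gains and limits are illustrative, not taken from the patent.
    """

    def __init__(self, kp=0.8, ki=0.2, kd=0.05, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_rpm, measured_rpm):
        # error between the preset speed and the speed decoded
        # from the Hall encoder pulses
        error = target_rpm - measured_rpm
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # weighted sum of P, I and D terms ...
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        # ... clamped into a signed duty-cycle range [-100, 100],
        # where the sign selects the H-bridge polarity
        return max(-100.0, min(100.0, out))
```

Called once per control period, `update()` returns the duty cycle the STM32 would load into its PWM timer; the clamp models the hardware limit of the DRV8833 H-bridge drive.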

The other purpose of the invention is achieved by the following technical scheme:

An intelligent blind-assistance method with automatic way-finding: the system recognises voice signals in real time (step S1), and the voice-recognition flow S1 determines which of the flows S2, S3, S4 or S5 is entered next. The method specifically comprises the following steps:

S1. Receive and recognise the voice signal, complete voice-feature extraction and wake up the voice interaction system. The voice-input function specifically comprises:

S101. Receive the voice signal; an offline wake-up engine extracts the characteristic information of the voice signal and matches it against keyword information. If the match succeeds, the system wakes up and enters a function-selection standby state; otherwise the system stays asleep and waits for the correct wake word.

S102. While the system remains awake, detect function-selection voice commands in real time. If a navigation-function keyword is triggered, switch to the navigation module (go to S2); if a text-reading keyword is triggered, switch to the real-time text-recognition module (go to S3); if a scene-recognition keyword is triggered, switch to the real-time scene-recognition module (go to S4); any other input automatically triggers the voice-assistant module (go to S5).

S2. Navigation flow: the destination is entered by voice, a path is planned from the destination information, a navigation scheme is produced, the surroundings are detected in real time, and the obstacle-avoidance and specific-scene prompt functions are performed. Specifically:

S201. According to the destination the user speaks, retrieve the destination through a network map interface and announce it by voice; once the user confirms, fix it as the final destination, otherwise retrieve and announce again.

S202. Obtain the current position from the positioning module and the final destination from the destination information.

S203. Plan a path from the current position and the final destination, and determine the travel route, i.e. the navigation scheme.

S204. Offer a walking scheme and a public-transport scheme according to the user's needs. For the walking scheme, continually refresh the path-planning request along the travel route, combining the current position in real time with the coordinates of the turning points of the different road sections. For the public-transport scheme, automatically select the most convenient and fastest option along the travel route, announce it by voice, and also provide an en-route transfer scheme.

S205. The user selects a scheme and the guide dog sets off; the real-time scene-recognition function of step S4 runs synchronously.

S206. The laser radar detects obstacle information over the full 360 degrees and yields a point cloud of the surroundings; by calculation, obstacles ahead are found, whether the vehicle body can pass is estimated, and when an obstacle is met a passable direction to either side is sought.
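The pass/no-pass test on the lidar returns can be sketched as follows. This is a simplified model, not the patent's implementation: scan points are (angle in degrees, range in metres) with 0° straight ahead, and the chassis width, safe distance and front sector are assumed values:

```python
import math

BODY_WIDTH = 0.40   # assumed chassis width in metres
SAFE_DIST = 0.8     # assumed stop/look-ahead distance in metres

def blocked_ahead(scan, sector=20.0):
    """True if any return inside the front sector is closer than SAFE_DIST."""
    return any(abs(a) <= sector and 0.0 < d < SAFE_DIST for a, d in scan)

def passable_heading(scan, step=10.0):
    """Sweep candidate headings alternately left and right of centre and
    return the first one whose corridor of BODY_WIDTH is clear, or None."""
    for offset in (x * step for x in (0, 1, -1, 2, -2, 3, -3)):
        clear = True
        for a, d in scan:
            if d <= 0.0:
                continue  # invalid return
            # project the point onto the candidate travel line
            lateral = d * math.sin(math.radians(a - offset))
            ahead = d * math.cos(math.radians(a - offset))
            # a point blocks the corridor if it lies ahead and within
            # half the body width of the travel line
            if 0.0 < ahead < SAFE_DIST and abs(lateral) < BODY_WIDTH / 2:
                clear = False
                break
        if clear:
            return offset
    return None
```

A single obstacle dead ahead at 0.5 m blocks the centre and the near-centre headings, and the sweep settles on the first offset whose corridor clears the point, which is the "passable direction to either side" the step describes.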

S207. From the obtained obstacle information, the control-and-computation module derives the angle and distance of each obstacle, corrects the travel route accordingly, and, combining the navigation scheme with real-time GPS information, drives the guide dog along the corrected route.

S208. Acquire sensor data from the 9-axis attitude sensor, compute the Euler angles, and use them to further assist in correcting the travel path.
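One common way to obtain stable Euler angles from a 9-axis IMU such as the MPU9250 is to derive roll and pitch from the accelerometer and blend them with the integrated gyro rate. The patent does not specify its fusion method; this sketch shows the widely used complementary-filter approach as an assumption (yaw, which needs the magnetometer, is omitted):

```python
import math

def accel_to_roll_pitch(ax, ay, az):
    """Roll and pitch in degrees from a static 3-axis accelerometer reading.

    Valid when the sensor is not accelerating; yaw would additionally
    need the magnetometer and is omitted here.
    """
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the gyro integral (smooth but drifting) with the accelerometer
    angle (noisy but drift-free) to get a usable Euler angle estimate."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

At each control tick the filtered angle is fed back as `angle_prev`; the blend factor `alpha` trades gyro smoothness against accelerometer drift correction and is an assumed tuning value.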

S3. Assisted text-reading flow: capture the current image, call an interface to parse the text it contains, and read the text out by voice after conversion. Specifically:

S301. Capture the current picture, upload it to a cloud OCR interface, complete the interface call, obtain the returned JSON result, and parse out all the text information contained in the picture.

S302. Send the text over serial communication to the local TTS module, which converts it to speech, drives the loudspeaker and announces the result.
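The glue between S301 and S302, extracting the recognised lines from the cloud-OCR JSON reply and handing them to the serial TTS module, can be sketched as below. The `words_result`/`words` field names follow a typical Chinese cloud OCR API and are an assumption, as is the serial framing; the patent names neither:

```python
import json

def extract_text(ocr_json):
    """Concatenate the recognised lines from a cloud-OCR JSON reply.

    Assumes the reply shape {"words_result": [{"words": "..."}, ...]},
    which is modelled on common cloud OCR APIs, not taken from the patent.
    """
    reply = json.loads(ocr_json)
    return "".join(item["words"] for item in reply.get("words_result", []))

def speak_via_serial(text, port=None):
    """Frame the text as UTF-8 bytes for the local TTS module.

    On the real device `port` would be an opened pyserial Serial object;
    leaving it None keeps the framing step testable off-device.
    """
    frame = text.encode("utf-8")
    if port is not None:
        port.write(frame)
    return frame
```

A reply such as `{"words_result": [{"words": "EXIT"}]}` thus becomes the byte string the TTS module turns into speech, completing the capture-parse-announce chain of S3.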

S4. Real-time scene-recognition flow: snapshot the scene, analyse the scene and any faces, and announce the result. Specifically:

S401. Capture the current image in real time, at one frame every 5 seconds.

S402. Upload the picture to a cloud image-analysis interface to analyse its content. If a face is detected, call the face library to recognise it (step S403); otherwise announce the target object by voice, and give a special prompt if a scene of particular concern to the blind, such as stairs or an escalator, is detected.

S403. Face recognition: extract facial features with the face_recognition library developed by Facebook. Convert the image to a histogram of oriented gradients (HOG) to remove the effect of lighting on apparent face colour; mark 68 facial landmarks with a face-landmark estimation algorithm and apply an affine transformation so that faces seen at different angles are turned to a frontal pose; encode the face with the library, which was trained on a database of a million faces, extracting 128 principal facial features for recognition. Using the Labeled Faces in the Wild photo database, linearly classify the 128 principal features of each photo with an SVM to obtain feature weights and train a preliminary classification model. Pre-store photos of the faces to be recognised and label them with names to build a local face library; at recognition time, feed the local library and the probe face into the preliminary model, which outputs weighted feature-value encodings; a k-NN nearest-neighbour model sorts the encoded values, the k-NN algorithm determines the recognition result, and the user is prompted accordingly.
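The final k-NN vote of S403 can be sketched independently of the encoding step. In practice the 128-dimensional encodings would come from the face_recognition library; here short toy vectors stand in, and the 0.6 distance threshold is that library's conventional default rather than a value from the patent:

```python
import math
from collections import Counter

def knn_identify(known, probe, k=3, threshold=0.6):
    """Vote among the k nearest stored encodings.

    known:  list of (name, encoding) pairs from the local face library;
    probe:  encoding of the face to identify.
    Real encodings are 128-d vectors from face_recognition; any
    equal-length numeric sequences work for illustration.
    """
    # sort all known faces by Euclidean distance to the probe
    dists = sorted((math.dist(enc, probe), name) for name, enc in known)[:k]
    # only neighbours within the match threshold get a vote
    votes = Counter(name for d, name in dists if d <= threshold)
    if not votes:
        return "unknown"
    return votes.most_common(1)[0][0]
```

Two stored photos of the same person near the probe outvote a distant stranger, and a probe far from everything falls back to "unknown", which is what the step's "determine the face recognition result and prompt the user" requires.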

S5. Voice-assistant flow: perform semantic recognition, complete specific functions such as chat and weather queries, and provide voice interaction.

Compared with the prior art, the invention has the following advantages and beneficial effects:

1. The invention adopts a dual-controller scheme of a Raspberry Pi 3B+ and an STM32F1-series microcontroller with divided responsibilities, saving energy and computing resources and reducing cost. The system supports information interaction and is suitable for blind users. It realises positioning, route planning, real-time scene detection, specific-scene reminders, face recognition, assisted text reading, weather queries, intelligent chat and other functions, while the machine guide dog also avoids obstacles, helping the blind in daily life and travel.

2. The invention adopts a laser-radar obstacle-avoidance system assisted by a 9-axis attitude sensor, giving high obstacle-avoidance precision and greater safety.

3. The invention uses a PID algorithm to control the motor's current speed, preventing the guide dog from deviating from its route because of manufacturing differences between motors.

4. The invention realises intelligent dialogue by combining an online voice API with a local TTS/STT voice module, flexibly meeting the dialogue needs of different scenes while satisfying both accuracy and response-speed requirements.

5. BeiDou/GPS dual-mode positioning and an API are used for path planning, lowering planning cost and raising accuracy.

6. The invention avoids obstacles with a potential-energy gradient method that simulates motion in a force field; the avoidance process is smoother, without sudden stops or turns, and is suitable for the blind.

Drawings

FIG. 1 is a block diagram of the intelligent blind-assistance device for automatic way-finding according to the present invention;

FIG. 2 is a schematic structural diagram of a machine guide dog according to the embodiment of the present invention;

FIG. 3 is a circuit diagram of a motor control drive board according to the embodiment of the present invention;

FIG. 4 is a block diagram of the overall algorithm architecture in the embodiment of the present invention;

FIG. 5 is a flowchart of the intelligent blind-assistance method for automatic way-finding in the embodiment of the present invention.

Detailed Description

The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
