Personal hearing device, external sound processing device and related computer program product

Document No.: 1784978  Publication date: 2019-12-06

Note: This technology, Personal hearing device, external sound processing device and related computer program product, was designed and created by 许云旭 and 陈柏儒 on 2018-05-29. Its main content: The invention provides a personal hearing device, an external sound processing device and a related computer program product. A personal hearing device comprises: a microphone for receiving an input sound signal, wherein the input sound signal is a mixture of sound emitted by a first sound source and sound emitted by other sound sources; a speaker; and a sound processing circuit for automatically distinguishing, within the input sound signal, the sound emitted by the first sound source from the sound emitted by the other sound sources. The sound processing circuit also processes the input sound signal to adjust the sound emitted by the first sound source and the sound emitted by the other sound sources differently, thereby generating an output sound signal that the speaker plays to the user.

1. A personal hearing device, comprising:

a microphone for receiving an input sound signal, wherein the input sound signal is a mixture of sound emitted by a first sound source and sound emitted by a second sound source;

a speaker; and

a sound processing circuit for automatically distinguishing, within the input sound signal, the sound emitted by the first sound source;

wherein the sound processing circuit further processes the input sound signal to adjust the sound emitted by the first sound source and the sounds other than the sound emitted by the first sound source differently, thereby generating an output sound signal played to a user through the speaker.

2. The personal hearing device of claim 1, wherein the sound processing circuit filters out sounds in the input sound signal other than the sound emitted by the first sound source.

3. The personal hearing device of claim 1, wherein the sound processing circuit distinguishes the sound emitted by the first sound source based on a voiceprint.

4. The personal hearing device of claim 3, wherein the sound processing circuit distinguishes the sound emitted by the first sound source based on a voiceprint characteristic unique to a speaker.

5. The personal hearing device of claim 3, wherein the sound processing circuit further determines that the sound emitted by the first sound source includes a particular word or sound segment.

6. The personal hearing device of claim 1, wherein the sound processing circuit distinguishes the sound emitted by the first sound source based on the direction of the sound.

7. The personal hearing device of claim 1, wherein the sound processing circuit filters out sound emitted by the first sound source.

8. The personal hearing device of claim 1, wherein the sound processing circuit amplifies a volume of sound emitted by the first sound source relative to sounds in the input sound signal other than the sound emitted by the first sound source.

9. The personal hearing device of claim 1, wherein the sound processing circuit performs two or more different processes on the input sound signal and switches between different processing modes in response to user instructions.

10. The personal hearing device of claim 1, further comprising a functional module;

wherein, when the sound processing circuit determines that the sound emitted by the first sound source meets a preset condition, the sound processing circuit sends a driving signal to the functional module to execute a preset function.

11. A personal hearing device wirelessly coupled to an external sound processing device, the personal hearing device comprising:

a microphone for receiving an input sound signal, wherein the input sound signal is a mixture of sound emitted by a first sound source and sound emitted by a second sound source;

a speaker; and

a communication circuit for wirelessly transmitting the input sound signal to the external sound processing device, which automatically distinguishes, within the input sound signal, the sound emitted by the first sound source;

wherein the external sound processing device further processes the input sound signal to adjust the sound emitted by the first sound source and the sounds other than the sound emitted by the first sound source differently, thereby generating an output sound signal that is received by the communication circuit and played to a user through the speaker.

12. An external sound processing device wirelessly coupled to the personal hearing device of claim 11, the external sound processing device comprising:

a processor for automatically distinguishing, within the input sound signal, the sound emitted by the first sound source, and for processing the input sound signal to adjust the sound emitted by the first sound source and the sounds other than the sound emitted by the first sound source differently, thereby generating the output sound signal and providing it to the personal hearing device.

13. A computer program product stored on a computer-usable medium, comprising a computer-readable program that, when executed on the external sound processing device of claim 12, automatically distinguishes, within the input sound signal, the sound emitted by the first sound source, and processes the input sound signal to adjust the sound emitted by the first sound source and the sounds other than the sound emitted by the first sound source differently, thereby generating the output sound signal and providing it to the personal hearing device.

Technical Field

The present invention generally relates to personal hearing devices, and in particular to a personal hearing device that can detect sound information the user needs to attend to and process it appropriately according to the user's hearing needs.

Background

Existing personal hearing devices, such as digital hearing aids, can apply gain compensation matched to the user's hearing-loss profile at different frequencies. Other existing personal hearing devices, such as active noise-canceling headphones, can cancel noise only in certain frequency bands (e.g., 100 Hz-1 kHz ambient and vehicle noise), so that the user can hear human voices in the external environment in addition to the music.

For other prior art regarding personal hearing devices, refer to, for example, U.S. Patent Publications US 2018/0115840 and US 2014/0023219, or US Patent 8965016.

Disclosure of Invention

The present invention recognizes that, in real life, most sounds are meaningless to the user. On the street, for example, the noise emitted by vehicles and strangers typically carries no information that concerns or interests the user; in other words, most sounds are not sound information that requires the user's attention. Prior-art noise-canceling earphones, on the other hand, filter the frequency band in which vehicle noise is distributed, which means they cannot filter out strangers' conversations; yet if the frequency band in which human voices are distributed were filtered instead, the speech of relatives and friends would be filtered out as well. Clearly, this is not an ideal result.

The present invention therefore recognizes that prior-art personal hearing devices do not determine whether the sound received from the external environment may contain sound information the user wants to focus on. Instead, even though the external sound has more than one source, the prior art processes or optimizes the received sound (actually a mixture of sounds from different sources) as a whole, for example by filtering specific frequency bands or frequency components out of the entire received mixture. Such an approach cannot treat the individual sound information the user cares about separately: although whole frequency components can be filtered out, the sound information the user needs to hear is distorted along with them. For example, if strangers' conversations are filtered out in the frequency band where human voices are distributed, the voices of relatives and friends are affected as well. This is a real nuisance in daily life, particularly for hearing-impaired users.

In view of the above, an aspect of the present invention provides a personal hearing device that can automatically detect sound information related to, or of interest to, the user, process it appropriately according to the user's needs, and then play it for the user to hear. This approach preserves the integrity of the sound information, thereby reducing distortion.

To determine whether the sound received from the external environment may contain information related to the user, one method proposed by the present invention is voiceprint analysis. For example, voiceprint analysis can determine whether the sound contains a specific word (e.g., the user's name), or whether it contains a voiceprint feature that identifies a particular sound source. The particular sound source may be, for example, a relative or friend previously designated by the user, or a specific device (e.g., a fire alarm); it will be appreciated that sounds emitted by relatives, friends, or a fire alarm are mostly sound information the user needs to attend or respond to.

In another aspect, compared to the prior art, the personal hearing device according to one aspect of the present invention distinguishes the received sounds by sound source, rather than by frequency band, so that the sounds emitted from the individual sources can be identified and extracted for individual processing or optimization. Therefore, in addition to identifying the sound source by using the voiceprint feature, the sound source can also be identified by the orientation of the sound. In addition, other methods of identifying individual sources of sound are within the scope of the present invention.

According to an embodiment of the invention, a personal hearing device is provided, comprising:

● a microphone for receiving an input audio signal, wherein the input audio signal is a mixture of sound from a first audio source and sound from a second audio source;

● a speaker; and

● a sound processing circuit for automatically distinguishing the sound emitted by the first sound source from the input sound signal;

● wherein the sound processing circuit further processes the input sound signal to adjust the sound emitted from the first sound source and the sound other than the sound emitted from the first sound source differently, thereby generating an output sound signal to be played to the user through the speaker.

According to another embodiment of the invention, a personal hearing device is provided, comprising:

● a microphone for receiving an input audio signal, wherein the input audio signal is a mixture of sound from a first audio source and sound from a second audio source;

● a speaker; and

● a sound processing circuit for automatically distinguishing the first sound source from other sound sources (such as the second sound source);

● wherein the sound processing circuit further processes the input sound signal to adjust the sound emitted from the first sound source and the sound other than the sound emitted from the first sound source differently, thereby generating an output sound signal to be played to the user through the speaker.

According to another embodiment of the present invention, a personal hearing device wirelessly connected to an external sound processing device is provided, the personal hearing device comprising:

● a microphone for receiving an input audio signal, wherein the input audio signal is a mixture of sound from a first audio source and sound from a second audio source;

● a speaker; and

● a communication circuit for wirelessly transmitting the input sound signal to the external sound processing device, which automatically distinguishes, within the input sound signal, the sound emitted by the first sound source;

● wherein the external sound processing device further processes the input sound signal to adjust the sound emitted from the first sound source and the sound other than the sound emitted from the first sound source differently, thereby generating an output sound signal which is received by the communication circuit and played by the speaker to the user.

In another embodiment, the present invention further provides an external sound processing device that is wirelessly connected to the personal hearing device and provides the cooperation described above. The present invention also provides a computer program product operable on the external sound processing device to cooperate with the personal hearing device.

Language referring to features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussion of the features and advantages, and similar language, throughout this specification may, but does not necessarily, refer to the same embodiment.

These features and advantages of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.

Drawings

In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described with additional specificity and detail through reference to the accompanying drawings, in which:

Fig. 1 is a personal hearing device according to an embodiment of the invention.

Fig. 2 is a personal hearing device according to another embodiment of the invention.

FIG. 3 illustrates an exemplary use scenario in accordance with an embodiment of the present invention.

Detailed Description

Reference throughout this specification to "one embodiment," "an embodiment," or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.

As will be appreciated by one of ordinary skill in the art, the present invention may be embodied as a computer system/apparatus, a method, or a computer program product on a computer-readable medium. Accordingly, the present invention may take various forms, such as an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module," or "system." Furthermore, the present invention may also be embodied as a computer program product in any tangible medium having computer-usable program code stored thereon.

Any combination of one or more computer-usable or computer-readable media may be utilized. The computer-usable or -readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific (non-limiting) examples of computer-readable media include: an electrical connection having one or more wires, a portable computer diskette, a hard disk drive, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a transmission medium such as an Internet- or intranet-based connection, or a magnetic storage device. Note that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be captured electronically, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory. In this context, a computer-usable or -readable medium may be any medium that can contain, store, communicate, propagate, or transport the program code for use by or in connection with an instruction execution system, apparatus, or device. The computer-usable medium may include a propagated data signal with the computer-usable program code embodied therein, either in baseband or as part of a carrier wave. The computer-usable program code may be transmitted over any medium, including but not limited to wireless, wireline, optical fiber cable, radio frequency (RF), and so on.

Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the C programming language or similar languages.

The following description of the present invention refers to the flowchart and/or block diagram of systems, devices, methods and computer program products according to embodiments of the present invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and any combination of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions or acts specified in the flowchart and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the function or act specified in the flowchart and/or block diagram block or blocks.

The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus create means for implementing the functions or acts specified in the flowchart and/or block diagram block or blocks.

Referring now to figs. 1-3, shown are block diagrams and flow charts of the architecture, functionality, and operation that may be implemented by apparatus, methods, and computer program products according to various embodiments of the present invention. Each block in the flowchart or block diagrams may represent a module, segment, or portion of program code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.

< personal hearing device >

The following describes the personal hearing device of the present invention using a hearing aid as an example, but it should be understood that the invention is not limited to hearing aids for the hearing impaired. For example, the personal hearing device of the present invention may also be implemented as a headset used for work in particular acoustic environments, or as an earpiece for general in-vehicle use.

Fig. 1 shows a block diagram of a hearing aid 100 according to an embodiment. In this embodiment, the hearing aid 100 includes a sound input stage 110, a sound processing circuit 120, and a sound output stage 130. The sound input stage 110 includes a microphone 111 and an analog-to-digital converter (ADC) 112. The microphone 111 receives an input sound signal 10 (e.g., an analog sound signal) and converts it into an input electrical signal 11; the ADC 112 then converts the input electrical signal 11 into an input digital signal 12 that serves as the input of the sound processing circuit 120. The microphone 111 may be built in or external.
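As a rough illustration of the input stage, the sketch below quantizes an analog waveform the way ADC 112 digitizes the microphone's electrical signal. The patent specifies no bit depth or sampling rate, so the 16-bit depth, 16 kHz rate, and 1 kHz test tone are illustrative assumptions only:

```python
import numpy as np

def adc_quantize(analog, n_bits=16, v_ref=1.0):
    """Quantize an analog waveform (floats in [-v_ref, v_ref]) to signed
    integer codes, as ADC 112 does for the microphone's electrical signal."""
    levels = 2 ** (n_bits - 1)
    codes = np.round(analog / v_ref * (levels - 1))
    return np.clip(codes, -levels, levels - 1).astype(np.int32)

# a 1 kHz tone sampled at 16 kHz, standing in for input electrical signal 11
fs = 16000
t = np.arange(160) / fs
digital = adc_quantize(0.5 * np.sin(2 * np.pi * 1000 * t))  # -> input digital signal 12
```

The output stage (DAC 132) performs the inverse mapping back to an analog level.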

The sound processing circuit 120 performs sound processing on the input digital signal 12 to generate an output digital signal 14; details of the processing are described later. In some embodiments, the sound processing circuit 120 may be a microcontroller, a processor, a digital signal processor (DSP), or an application-specific integrated circuit (ASIC), but the invention is not limited thereto.

The sound output stage 130 includes, for example, a digital-to-analog converter (DAC) 132 and a speaker 134. The DAC 132 converts the output digital signal 14 generated by the sound processing circuit 120 into an output electrical signal 15. The speaker (also referred to as a receiver) 134 converts the output electrical signal 15 into an output sound signal 16 (e.g., an analog sound signal) and plays it for the user to hear.

For other parts of the hearing aid 100 not directly related to the present invention, reference may be made to existing digital hearing aids, such as the digital hearing aid products of GN Hearing A/S or Alton Corporation, which are not described here.

Fig. 2 shows a block diagram of a hearing aid 200 according to another embodiment. Like the hearing aid 100 of fig. 1, the hearing aid 200 has a sound input stage 210 and a sound output stage 230 that are substantially similar to the sound input stage 110 and sound output stage 130 of fig. 1, so they are not described again. The main difference is that the hearing aid 200 may omit the sound processing circuit 120 of the hearing aid 100; instead, it has a communication circuit 250, so that the input digital signal 22 generated by the sound input stage 210 can be transmitted by wireless communication to an external sound processing device 300 for processing.

Like the sound processing circuit 120 of fig. 1, the external sound processing device 300 may generate the output digital signal 24 and further may transmit the output digital signal 24 back to the sound output stage 230 of the hearing aid 200 via wireless communication.

It should be noted that the wireless communication method between the hearing aid 200 and the external sound processing device 300 is not particularly limited; it may be, for example, Bluetooth, infrared, or Wi-Fi. Moreover, the communication between the hearing aid 200 and the external sound processing device 300 is not limited to direct point-to-point communication; in some embodiments it may pass through a local area network, a cellular telephone network, or the Internet.

The external sound processing device 300 may be, for example, a dedicated sound processing device with a custom microprocessor 310 or application-specific integrated circuit (ASIC). Alternatively, it may be implemented by an existing smartphone (e.g., an Apple iPhone), whose processor 310 can run an application built into the operating system or a downloaded application (app) to provide the required sound processing functions (details are described later). In another embodiment, the external sound processing device 300 may be implemented by a personal computer or a server deployed in the cloud. In other words, any device with sufficient sound processing capability that can communicate wirelessly with the hearing aid 200 can implement the external sound processing device 300.

It should be noted that the operations of fig. 1 and fig. 2 do not conflict with each other and may be combined and implemented together.

< Sound processing >

The following describes the sound processing performed by the sound processing circuit 120 of fig. 1 or the external sound processing device 300 of fig. 2. The sound processing in the present invention can be divided into an identification stage and an adjustment stage, described in detail below.

Identification stage

To determine whether the sound received from the external environment may contain information related to the user, the methods can be divided into two categories: voiceprint analysis and non-voiceprint analysis.

In one embodiment, which uses voiceprint feature analysis, the sound is converted into a spectral voiceprint and then identified based on voiceprint features. Human speech in particular carries a voiceprint characteristic unique to each speaker, owing to individual differences in the size of the vocal organs and the way the muscles are used, and this characteristic can be used for identification. Voiceprint recognition is a mature technology (see, e.g., US 8036891) and is covered by industry standards, such as China's "Technical specification for automatic voiceprint recognition (speaker recognition)" (SJ/T 11380-2008) and "Technical requirements and testing methods for security voiceprint verification application algorithms" (GA/T 1179-2014), so it is not described in detail here. Generally, speaker voiceprint recognition can separate speech from environmental noise before recognizing the speech. Note, however, that if specific sound information must later be recovered, extracted, or separated from the voiceprint data for individual adjustment, a suitable voiceprint feature analysis algorithm is preferred, such as the STFT (short-time Fourier transform); see, e.g., US 5473759.
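The "spectral voiceprint" step above can be sketched with a plain short-time Fourier transform. The frame length, hop size, and pure-tone test signal below are illustrative assumptions; a real device would analyze speech and feed the result to a recognizer:

```python
import numpy as np

def stft_voiceprint(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform (STFT):
    one possible realization of the 'spectral voiceprint' described above."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (frames, freq bins)

fs = 8000
t = np.arange(fs) / fs
vp = stft_voiceprint(np.sin(2 * np.pi * 440 * t))   # 1 s of a 440 Hz tone
peak_hz = vp.argmax(axis=1) * fs / 256              # dominant bin per frame
```

Because the STFT is invertible, the per-frame spectra can later be modified and resynthesized, which is why the text prefers it when sound must be extracted for individual adjustment.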

The above embodiment uses voiceprint characteristics unique to the speaker. By contrast, another embodiment that employs voiceprint analysis identifies the voiceprint characteristics of a particular word or sound segment (e.g., the ring tone of the user's own handset, or a fire alarm). This is also mature technology; see, e.g., prior-art voice-to-text input techniques. Likewise, if specific sound information must later be recovered, extracted, or separated from the voiceprint data for individual adjustment, a suitable voiceprint feature analysis algorithm such as the STFT is preferred.

In addition, the voiceprint analysis algorithm must be trained before voiceprint feature analysis is performed. Commonly used training methods can be applied to the present invention; see, e.g., US 5850627 and US 9691377. Note that recognizing (or registering) the voiceprint characteristics of a specific word or sound segment (such as a fire alarm) does not necessarily require the user to provide training samples; generic samples can be used. However, recognizing voiceprint characteristics unique to a speaker, without restriction to specific words, usually requires the user to provide samples for training, since the object to be recognized varies from person to person. It is not easy for a typical hearing aid user to accumulate a large number of samples from the relevant speakers, e.g., friends and relatives. The preferred approach is therefore training by one-shot learning, since only a small number of speech samples needs to be collected to recognize the user's friends and relatives well enough.
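The enroll-once idea can be sketched as comparing a candidate sound against a single enrolled sample in some embedding space. The embedding below (an average magnitude spectrum) and the synthetic "speakers" are toy stand-ins of my own; a real one-shot system would use a learned speaker-embedding network, which the patent does not specify:

```python
import numpy as np

def embed(signal, frame_len=256):
    """Toy 'voiceprint embedding': the average magnitude spectrum.
    Stand-in for a learned speaker-embedding model."""
    n = (len(signal) // frame_len) * frame_len
    frames = signal[:n].reshape(-1, frame_len) * np.hanning(frame_len)
    return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

def is_same_speaker(sample, enrolled, threshold=0.9):
    """Accept if the cosine similarity of the embeddings clears a threshold."""
    a, b = embed(sample), embed(enrolled)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b)) >= threshold

fs = 8000
t = np.arange(fs) / fs
friend = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
stranger = np.sin(2 * np.pi * 700 * t)
```

Here `friend` plays the role of the single enrollment sample; any later sound is accepted or rejected by `is_same_speaker(sound, friend)`.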

Non-voiceprint analysis, on the other hand, identifies the sound source by features other than voiceprint or frequency-component analysis; such features may or may not be derived from the sound itself. In one non-voiceprint but sound-related embodiment, different sound sources are identified by the direction from which their sound arrives. In this embodiment, the microphone 111 shown in fig. 1 may have left and right channels, so that the direction of a sound source can be located from the time difference between the two channels receiving the same sound (whether sounds come from the same source can still be determined by voiceprint). Locating a sound source by time difference is also a well-established technique and is therefore not described in detail here. In addition, if a camera lens (not shown) is provided, the direction of a sound source can also be located by image recognition; see the article "Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation" by Ariel Ephrat, Inbar Mosseri, Oran Lang, Tali Dekel, Kevin Wilson, Avinatan Hassidim, William T. Freeman, and Michael Rubinstein.
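The two-channel time-difference idea can be sketched with cross-correlation. The 0.18 m microphone spacing, the impulse test signals, and the sign convention (positive angle toward the microphone that hears the sound first, here the right) are illustrative assumptions; a practical device would use a more robust generalized cross-correlation on real audio:

```python
import numpy as np

def tdoa_angle(left, right, fs, mic_distance=0.18, c=343.0):
    """Estimate arrival direction from the inter-channel time difference,
    found as the peak lag of the cross-correlation of the two channels."""
    corr = np.correlate(left, right, mode="full")
    lag = corr.argmax() - (len(right) - 1)        # samples left lags right
    s = np.clip((lag / fs) * c / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(s))               # 0 deg = straight ahead

fs = 16000
left, right = np.zeros(100), np.zeros(100)
right[10] = 1.0     # the wavefront reaches the right microphone first...
left[14] = 1.0      # ...and the left microphone 4 samples later
angle = tdoa_angle(left, right, fs)   # positive: source toward the right
```

The clip before `arcsin` guards against lags larger than the spacing physically allows (e.g., from correlation noise).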

The above-described various voiceprint analysis methods do not conflict with the non-voiceprint analysis method, and may be used in combination.

Adjustment stage

After determining that the sound received from the external environment contains information related to the user (or sound information the user needs to attend to), the next stage of sound processing extracts the recognized sound information from the overall received sound and adjusts it individually to meet the user's hearing needs. In one embodiment, the volume of the extracted sound information is increased, or the other sounds are reduced or filtered out. Conversely, if specific sound information is to be deliberately ignored for a special purpose, its volume can be reduced or filtered out, or the other sounds can be amplified. Besides volume, the frequency of the extracted sound information can also be adjusted (i.e., shifted); for example, an originally shrill voice can be shifted down to a lower pitch while the other sounds keep their original frequencies.
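The volume part of this adjustment stage reduces to re-mixing the separated streams with different gains. The sketch below assumes the identification stage has already separated the target stream from the rest (the gain values and constant test signals are illustrative):

```python
import numpy as np

def adjust_sources(target, others, target_gain=2.0, other_gain=0.25):
    """Re-mix separated streams with different gains: amplify the recognized
    sound information, attenuate (or, with other_gain=0.0, filter out) the rest."""
    out = target_gain * np.asarray(target) + other_gain * np.asarray(others)
    return np.clip(out, -1.0, 1.0)   # keep the output within full scale

target = np.full(8, 0.1)   # stand-in for the extracted sound information
others = np.full(8, 0.4)   # stand-in for the remaining mixed sounds
mixed = adjust_sources(target, others)
```

Setting `target_gain` below 1.0 and raising `other_gain` gives the opposite, "deliberately ignore" behavior described above; a frequency shift would be applied to `target` before re-mixing.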

Further, the adjustment applied to the sound information may differ depending on the recognition result. For example, when the ring tone of the user's own mobile phone is recognized its volume may be amplified, but when the ring tone of a nearby desk phone is recognized its volume may be reduced or filtered out.

Or, in another example, there are different modes for the adjustments made to the sound information, and the user can switch between these modes with a command. For example, in one mode the volume of a voice is amplified when it is recognized as coming from friend A, but reduced or filtered out when recognized as coming from co-worker B. When the user switches to another mode, the voice of friend A is reduced or filtered out, while the voice of co-worker B is amplified.
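Such mode switching amounts to a per-source gain table selected by the current mode. The following sketch uses hypothetical mode and source labels chosen to mirror the friend A / co-worker B example; none of these names come from the invention itself.

```python
# Hypothetical per-source gain tables for two user-selectable modes.
MODES = {
    "focus_friend_A": {"friend_A": 2.0, "coworker_B": 0.0},
    "focus_coworker_B": {"friend_A": 0.0, "coworker_B": 2.0},
}

class AdjustmentPolicy:
    """Holds the active mode and answers gain queries per sound source."""

    def __init__(self, mode: str = "focus_friend_A"):
        self.mode = mode

    def switch(self, mode: str) -> None:
        # Triggered by a user command, e.g. a voice or button instruction.
        if mode not in MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode

    def gain_for(self, source: str, default: float = 1.0) -> float:
        # Sources not listed in the table keep their original volume.
        return MODES[self.mode].get(source, default)
```

A gain of 0.0 corresponds to filtering the source out, 2.0 to amplifying it, and the default 1.0 to leaving it unchanged.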

< usage flow >

Fig. 3 is a flow chart of an exemplary embodiment of the present invention, illustrating a usage flow in conjunction with the personal hearing device illustrated in fig. 1 or fig. 2.

Step 30: the algorithm used by the personal hearing device for sound processing is trained, i.e., given the ability to recognize sound information. For generic sound-information recognition that does not involve personalization, training can be completed before the personal hearing device leaves the factory; in some cases, however, especially for personalized sound-information recognition, the user still has to provide the personal hearing device with sound samples for training.
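One common way to realize such personalized enrollment, shown here purely as an illustrative sketch, is to average embedding vectors computed from the user-provided samples into a reference voiceprint and later match by cosine similarity. The embeddings would come from a pretrained speaker-embedding model; in this fragment they are stand-in NumPy vectors, and all names and the threshold are hypothetical.

```python
import numpy as np

def enroll(samples: list) -> np.ndarray:
    """Average per-sample embedding vectors into one unit-norm
    reference voiceprint for a sound source."""
    ref = np.mean(samples, axis=0)
    return ref / np.linalg.norm(ref)

def matches(embedding: np.ndarray, reference: np.ndarray,
            threshold: float = 0.8) -> bool:
    """Cosine-similarity test of a new embedding against an enrolled
    reference voiceprint."""
    emb = embedding / np.linalg.norm(embedding)
    return float(emb @ reference) >= threshold

# Enrollment from two stand-in sample embeddings of the same speaker.
ref = enroll([np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.0])])
```

Factory-installed generic recognizers (phone rings, fire alarms) would ship with such references precomputed, matching the distinction drawn in step 30.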

Step 32: the personal hearing device receives outside sounds. In general, the environment is filled with a wide variety of sound sources, and the sounds emitted by many of these sources are received together by the microphone on the personal hearing device.

Step 34: the sound processing circuit 120 in fig. 1 or the external sound processing device 300 in fig. 2 determines whether the sound received from the external environment contains sound information that the user needs to pay attention to (or intentionally ignore). The judgment method can refer to the description of the aforementioned recognition stage. Briefly, besides determining from the voiceprint characteristics of a specific word or sound fragment (e.g., the ring tone of the user's mobile phone or a fire alarm) whether the sound is information the user needs to attend to, the determination can also be made by identifying the sound source, either through the source's unique voiceprint characteristics or through its direction. For example, voices uttered by relatives or friends are mostly sound information the user needs to attend or respond to, as are voices from a speaker directly in front of the user. In use, the sound received from the external environment may contain several pieces of sound information that the user cares about at once. The sound information or sound sources can be prioritized in the training stage, so that recognized sound information of lower priority is ignored and not processed in the subsequent step 36 or 38; in other embodiments, however, all recognized sound information may be processed in step 36 or 38.
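The prioritization described at the end of step 34 can be sketched as a ranking over the detected labels. The priority table below is hypothetical, merely echoing examples from this document (fire alarm, own phone ring, family member, friend); unknown sources rank last.

```python
# Hypothetical priority table set up during the training stage;
# lower number = higher priority.
PRIORITY = {"fire_alarm": 0, "own_phone_ring": 1, "family_C": 2, "friend_A": 3}

def select_sources(detected: list, keep: int = 1) -> list:
    """When several pieces of attention-worthy sound information are
    detected at once, keep only the `keep` highest-priority ones; the
    rest are ignored in the subsequent adjustment step (step 36/38)."""
    ranked = sorted(detected, key=lambda s: PRIORITY.get(s, len(PRIORITY)))
    return ranked[:keep]
```

Setting `keep` to the number of detected sources corresponds to the alternative embodiment in which all recognized sound information is processed.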

Step 36: after the sound information the user needs to pay attention to (or intentionally ignore) has been recognized, it is extracted from the overall received sound and adjusted, for example by increasing or decreasing its volume relative to the other, non-extracted sounds, or even filtering it out, before the result is played for the user. Reference may be made to the description of the preceding adjustment stage. It should be noted that, in another embodiment, the user can switch between different adjustment modes by command, so that different pieces of recognized sound information, or even the same sound information, are adjusted differently.

Step 38 (optional): the hearing aid 100 of fig. 1 may further include a function module 180 electrically connected to the sound processing circuit 120. When the sound processing circuit 120 recognizes that the sound information the user needs to attend to comes from a source the user pre-designated, it can send a driving signal 18 to the function module 180 to make the function module 180 execute a predetermined function, preferably but not limited to alerting the user. For example, the function module 180 may contain a vibrator (not shown); the user pre-designates the trigger condition as "family member C" or the user's own name by training the sound processing circuit 120, and when the circuit recognizes the voice of family member C or the user's name being called, it sends the driving signal 18 to the function module 180, whose vibrator generates a slight vibration to alert the user. It should be noted that, in another embodiment, step 36 may be skipped before step 38; that is, step 38 does not necessarily require step 36.
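The trigger logic of step 38 reduces to a membership test against the pre-designated sources, followed by the driving signal. The following sketch stands in for the hardware path with a counter; class and function names are hypothetical.

```python
class FunctionModule:
    """Stand-in for function module 180: records vibration requests
    instead of driving a physical vibrator."""

    def __init__(self):
        self.vibrations = 0

    def vibrate(self):
        self.vibrations += 1

def on_recognition(source: str, module: FunctionModule,
                   designated: set) -> bool:
    """Send a driving signal (here, a vibrate call) when the recognized
    source is one the user pre-designated, e.g. family member C or the
    user's own name being called. Returns whether a signal was sent."""
    if source in designated:
        module.vibrate()
        return True
    return False
```

Because this check needs only the recognition result, it can run whether or not the volume adjustment of step 36 was performed, matching the note that step 38 does not require step 36.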

The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described specific embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Description of the symbols

Input sound signal 10

Input electrical signal 11

Input digital signal 12

Output digital signal 14

Output electrical signal 15

Output sound signal 16

Drive signal 18

Input digital signal 22

Output digital signal 24

Hearing aid 100

Sound input stage 110

Microphone 111

Analog-to-digital converter 112

Sound processing circuit 120

Sound output stage 130

Digital-to-analog converter 132

Loudspeaker 134

Function module 180

Hearing aid 200

Sound input stage 210

Sound output stage 230

Communication circuit 250

External sound processing device 300

Processor 310
