System and method for autonomous activation of auditory prosthesis

Document No. 862406 | Publication date: 2021-03-16

Note: This disclosure, "System and method for autonomous activation of auditory prosthesis" (用于自主启用听觉假体的系统和方法), was created by 卢克·麦肯 and 简·雷蒙德·詹森 on 2019-08-23. Abstract: A method is provided that includes determining a state of a sound processor of a hearing prosthesis, and selectively initiating autonomous programming of the sound processor based at least in part on the determined state of the sound processor.

1. A method, comprising:

determining a state of a sound processor of a hearing prosthesis; and

selectively initiating autonomous programming of the sound processor based at least in part on the determined state of the sound processor.

2. The method of claim 1, wherein the sound processor is configured to access one or more operational parameter maps, and determining the state of the sound processor includes determining whether a programming state of the sound processor is one of:

a first programming state in which all of the one or more operational parameter maps are unavailable to the sound processor;

a second programming state in which at least one, but less than all, of the operational parameter maps are unavailable to the sound processor; and

a third programming state in which none of the operational parameter maps are unavailable to the sound processor.

3. The method of claim 2, wherein when the programming state of the sound processor is the first programming state, the method further comprises generating at least one of the unavailable operational parameter maps using the autonomous programming, and updating the programming state of the sound processor.

4. The method of claim 2 or claim 3, wherein when the programming state of the sound processor is the second programming state, the method further comprises determining which of the operational parameter maps is not available to the sound processor.

5. The method of claim 4, wherein when the programming state of the sound processor is the second programming state, the method further comprises generating at least one of the unavailable operational parameter maps using the autonomous programming, and updating the programming state of the sound processor.

6. The method of any of claims 2 to 5, wherein when the programming state of the sound processor is the third programming state, the method further comprises determining whether any of the operational parameter maps need to be improved by autonomous programming.

7. The method of claim 6, wherein when the programming state of the sound processor is the third programming state, the method further comprises using the autonomous programming to improve at least one of the operational parameter maps that needs improvement, and updating the programming state of the sound processor.

8. The method of any preceding claim, wherein determining the state of the sound processor comprises determining whether a power state of the sound processor is one of:

a first power state in which the sound processor is configured to receive power from an internal power source configured to be continuously operatively coupled to the sound processor; and

a second power state in which the sound processor is configured to receive power from an external power source configured to be selectively operatively coupled to the sound processor.

9. The method of claim 8, wherein when the power state of the sound processor is the first power state, then selectively initiating autonomous programming is performed regardless of whether the sound processor is operatively coupled to the external power source.

10. The method of claim 8, wherein when the power state of the sound processor is the second power state, selectively initiating autonomous programming is performed only when the sound processor is operatively coupled to the external power source.

11. An apparatus, comprising:

a sound processing circuit of a hearing prosthesis, the sound processing circuit configured to: access one or more signal processing data sets, process signals received from a microphone of the hearing prosthesis using at least one of the accessed signal processing data sets, and generate stimulation signals for transmission to at least a portion of an auditory system of a recipient of the hearing prosthesis;

data storage circuitry configured to store the one or more signal processing data sets; and

control circuitry of the hearing prosthesis, the control circuitry configured to access information indicative of at least one of: a programming state of the hearing prosthesis, an identification of the hearing prosthesis, and a power state of the hearing prosthesis, the control circuitry further configured to selectively initiate autonomous programming of the hearing prosthesis to generate or modify at least one of the signal processing data sets at least partially in response to the accessed information.

12. The apparatus of claim 11, wherein the data storage circuitry further stores the information, and the control circuitry is configured to access the information from the data storage circuitry.

13. The apparatus of claim 11 or claim 12, wherein the information is indicative of the programming state of the hearing prosthesis, the programming state indicating one or more states of the one or more signal processing data sets.

14. The apparatus of claim 13, wherein the data storage circuitry comprises one or more bytes configured to be read by the control circuitry to determine the programming state and further configured to be written by the control circuitry to update the programming state stored by the data storage circuitry.

15. The apparatus of claim 11 or claim 12, wherein the information is indicative of the identification of the hearing prosthesis, and the apparatus further comprises communication circuitry, the control circuitry further configured to access, via the communication circuitry, the programming state of the hearing prosthesis stored remotely from the apparatus, and to update the programming state of the hearing prosthesis via the communication circuitry.

16. The apparatus of any of claims 11 to 15, wherein the one or more signal processing data sets comprise one or more of:

a first signal processing data set configured for use by the sound processing circuit in a normal sound environment;

a second signal processing data set configured for use by the sound processing circuit in a quiet sound environment;

a third signal processing data set configured for use by the sound processing circuit in a noisy sound environment;

a fourth signal processing data set configured for use by the sound processing circuit in a musical sound environment; and

a fifth signal processing data set configured for use by the sound processing circuit during sleep of the recipient.

17. The apparatus according to any one of claims 11 to 16, further comprising at least one indicator configured to indicate to the recipient at least one of: whether the autonomous programming is currently being performed; whether the autonomous programming is currently experiencing one or more problem conditions that prevent the autonomous programming from operating properly; which of the one or more signal processing data sets is being generated or modified.

18. An apparatus, comprising:

at least one processor configured to generate at least one control signal;

at least one communication link in operable communication with the at least one processor, the at least one communication link configured to transmit the at least one control signal to an implantable hearing prosthesis and to receive at least one status signal from the implantable hearing prosthesis, the implantable hearing prosthesis comprising a sound processor configured to transmit at least one status signal indicative of a status of the sound processor and to perform autonomous programming in response to the at least one control signal to generate or modify at least one operational parameter map;

at least one indicator in operable communication with the at least one processor, the at least one indicator configured to transmit a status of the sound processor to at least one of a recipient and a clinician of the implantable hearing prosthesis in response to the received at least one status signal; and

at least one user input mechanism in operable communication with the at least one processor, the at least one user input mechanism configured to be utilized by at least one of the recipient and the clinician to provide at least one user input signal to the at least one processor, and the at least one processor configured to respond to the at least one user input signal by generating the at least one control signal.

19. A method, comprising:

initiating, by a sound processor of an implantable hearing prosthesis, a self-programming operation; and

controlling the self-programming operation based at least in part on a determined state of the hearing prosthesis.

20. The method of claim 19, wherein the determined state of the hearing prosthesis comprises at least one of a programming state of the hearing prosthesis and a power state of the hearing prosthesis.

21. The method of claim 19 or 20, wherein controlling the self-programming operation comprises directing the self-programming operation to generate or modify at least one operational parameter map of the hearing prosthesis.

Technical Field

The present application relates generally to implantable hearing prostheses, and more particularly to systems and methods for enabling autonomous programming of hearing prostheses.

Background

Hearing loss, which may be due to many different causes, is generally of two types: conductive and sensorineural. Conductive hearing loss occurs when the normal mechanical pathways of the outer and/or middle ear are impeded, for example by damage to the ossicular chain or the ear canal. Sensorineural hearing loss occurs when there is damage to the inner ear or to the neural pathways from the inner ear to the brain. Various types of auditory prostheses are widely used to improve the lives of individuals with hearing loss. Such devices include, for example, hearing aids, cochlear implants, bone conduction implants, middle ear implants, and electro-acoustic devices.

Individuals with conductive hearing loss often retain some form of residual hearing because the hair cells in the cochlea may be undamaged. Thus, depending on the type of conductive loss, the degree of hearing loss, and personal preference, an individual with conductive hearing loss may receive a hearing prosthesis that generates mechanical motion of the cochlear fluid rather than a hearing aid. Such prostheses include, for example, bone conduction devices and direct acoustic stimulators.

However, in many people who are deaf, the cause of deafness is sensorineural hearing loss. Persons with some forms of sensorineural hearing loss are unable to derive suitable benefit from hearing prostheses that produce mechanical movement of cochlear fluid. Such individuals may benefit from an implantable hearing prosthesis that stimulates neural cells of the recipient's auditory system in other ways (e.g., electrically, optically, etc.). Cochlear implants are often proposed when sensorineural hearing loss is due to the absence or destruction of the cochlear hair cells that convert sound signals into neural impulses. Auditory brainstem stimulators may also be proposed when a recipient experiences sensorineural hearing loss due to auditory nerve damage.

These "mostly implantable", "fully implantable", or "totally implantable" forms of auditory prostheses have the advantage of providing the user with excellent aesthetic results, as the recipient is visually indistinguishable in daily activities from an individual without such a device. Such devices also have the additional advantage of being generally inherently waterproof, allowing the recipient to shower, swim, etc. without taking any special measures. Examples of such devices include, but are not limited to, totally implantable cochlear implants ("TICIs"), mostly implantable cochlear implants ("MICIs"), and fully implantable middle ear implants that utilize a totally implantable actuator ("TIA").

While conventional hearing prostheses use externally disposed microphone assemblies, some mostly, fully, or totally implantable hearing prostheses use subcutaneously implantable microphone assemblies. Such microphone assemblies are configured to be positioned (e.g., during a surgical procedure) below the skin and on, within, or near the recipient's skull, at a location that facilitates the reception of acoustic signals by the microphone assembly once implanted (e.g., at a location between the recipient's skin and skull, behind and above the recipient's ear, or within the mastoid cavity).

Disclosure of Invention

In one aspect disclosed herein, a method is provided that includes determining a state of a sound processor of a hearing prosthesis, and selectively initiating autonomous programming of the sound processor based at least in part on the determined state of the sound processor.

In another aspect disclosed herein, an apparatus is provided that includes a sound processing circuit of an auditory prosthesis. The sound processing circuit is configured to access one or more signal processing data sets and process signals received from a microphone of the hearing prosthesis using at least one of the accessed signal processing data sets and generate a stimulation signal that is transmitted to at least a portion of the auditory system of a recipient of the hearing prosthesis. The apparatus also includes data storage circuitry configured to store one or more signal processing data sets. The device further comprises a control circuit of the auditory prosthesis. The control circuitry is configured to access information indicative of at least one of: a programmed state of the hearing prosthesis, an identification of the hearing prosthesis, and a power state of the hearing prosthesis. The control circuit is further configured to selectively initiate autonomous programming of the hearing prosthesis to generate or modify at least one of the signal processing data sets at least partially in response to the accessed information.

In yet another aspect disclosed herein, an apparatus is provided that includes at least one processor configured to generate at least one control signal. The apparatus also includes at least one communication link in operable communication with the at least one processor. The at least one communication link is configured to transmit the at least one control signal to the implantable hearing prosthesis and to receive the at least one status signal from the implantable hearing prosthesis. An implantable hearing prosthesis includes a sound processor configured to transmit at least one status signal indicative of a status of the sound processor, and perform autonomous programming in response to at least one control signal to generate or modify at least one operational parameter map. The apparatus also includes at least one indicator in operable communication with the at least one processor. The at least one indicator is configured to transmit a status of the sound processor to at least one of a recipient and a clinician of the implantable hearing prosthesis in response to the received at least one status signal. The device also includes at least one user input mechanism in operable communication with the at least one processor. The at least one user input mechanism is configured to be utilized by at least one of the recipient and the clinician to provide at least one user input signal to the at least one processor. The at least one processor is configured to respond to at least one user input signal by generating at least one control signal.

In yet another aspect disclosed herein, a method is provided that includes initiating a self-programming operation by a sound processor of an implantable hearing prosthesis, and controlling the self-programming operation based at least in part on a determined hearing prosthesis state.

Drawings

Embodiments are described herein with reference to the accompanying drawings, in which:

fig. 1 is a perspective view of an exemplary cochlear implant hearing prosthesis implanted in a recipient, according to certain embodiments described herein;

2A-2E schematically illustrate examples of an apparatus according to certain embodiments described herein;

FIG. 3A is a flow chart of an example of a method according to some embodiments described herein;

FIG. 3B is a flow chart of another example of a method according to some embodiments described herein;

FIG. 3C is a flow diagram of another example of a method according to some embodiments described herein;

FIG. 4 is a flow diagram of an exemplary method according to certain embodiments described herein; and

fig. 5 schematically illustrates an example apparatus according to some embodiments described herein.

Detailed Description

Certain embodiments described herein provide systems and methods for initiating a self-programming session of a hearing prosthesis based on a state of the hearing prosthesis. For example, the status may be that the hearing prosthesis is missing one or more operational parameter maps (e.g., because the hearing prosthesis has not been previously programmed or subjected to a fitting procedure), or that the identified one or more operational parameter maps require improvement. The hearing prosthesis of certain embodiments is configured to perform one or more self-programming sessions (e.g., with neuro-responsive telemetry measurements) to generate missing operational parameter maps or to refine or modify existing operational parameter maps. Such a self-programming session may produce an operational parameter map during or shortly after the implant procedure, allowing the recipient to begin enjoying the benefits of the auditory prosthesis immediately upon waking from the implant procedure (e.g., "wake with hearing" or "wake with sound"). In certain embodiments, the status of the auditory prosthesis relative to the operational parameter map and self-programming is communicated to the clinician and/or recipient via an indicator.
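The state-driven decision described above can be sketched as a small selection routine. This is a minimal illustration only: the state names and the `needs_improvement` predicate are assumptions introduced for the sketch, not terms from the disclosure.

```python
from enum import Enum, auto

class ProgrammingState(Enum):
    """Illustrative programming states (names are assumptions, not from the source)."""
    ALL_MAPS_MISSING = auto()    # no operational parameter map is available
    SOME_MAPS_MISSING = auto()   # at least one, but not all, maps are missing
    ALL_MAPS_PRESENT = auto()    # every map is available (may still need refinement)

def maybe_start_self_programming(state, maps, needs_improvement):
    """Decide which maps a self-programming session should generate or refine.

    `maps` maps a map name to its data (None if missing); `needs_improvement`
    is a hypothetical predicate flagging maps that require refinement.
    """
    if state is ProgrammingState.ALL_MAPS_MISSING:
        return sorted(maps)                     # generate every map
    if state is ProgrammingState.SOME_MAPS_MISSING:
        return sorted(n for n, m in maps.items() if m is None)
    return sorted(n for n, m in maps.items() if needs_improvement(m))

maps = {"normal": None, "quiet": {"T": [100], "C": [180]}, "noise": None}
targets = maybe_start_self_programming(
    ProgrammingState.SOME_MAPS_MISSING, maps, lambda m: False)
# targets -> ["noise", "normal"]
```

In this sketch, a processor in the intermediate state self-programs only the missing maps, while one with a complete set self-programs only maps flagged for refinement.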

In at least some embodiments, the teachings detailed herein are applicable to any type of auditory prosthesis utilizing an implantable actuator assembly, including but not limited to: electro-acoustic electrical/acoustic systems, cochlear implant devices, implantable hearing aid devices, middle ear implant devices, bone conduction devices (e.g., active bone conduction devices, passive bone conduction devices, percutaneous bone conduction devices, transcutaneous bone conduction devices), Direct Acoustic Cochlear Implants (DACI), Middle Ear Transducers (MET), electro-acoustic implant devices, other types of auditory prosthetic devices, and/or combinations or variations thereof, or any other suitable auditory prosthetic system with or without one or more external components. Embodiments may include any type of auditory prosthesis capable of utilizing the teachings detailed herein and/or variations thereof. In some embodiments, the teachings detailed herein and/or variations thereof may be utilized in other types of prostheses besides auditory prostheses.

Fig. 1 is a perspective view of an exemplary cochlear implant auditory prosthesis 100 implanted in a recipient according to certain embodiments described herein. Exemplary hearing prosthesis 100 is shown in fig. 1 as including an implantable stimulator unit 120 (e.g., an actuator) and an external microphone assembly 124 (e.g., a partially implantable cochlear implant). An exemplary hearing prosthesis 100 according to some embodiments described herein (e.g., a fully implantable cochlear implant; a mostly implantable cochlear implant) may replace the external microphone assembly 124 shown in fig. 1 with a subcutaneous implantable assembly that includes an acoustic transducer (e.g., microphone).

As shown in fig. 1, the recipient has an outer ear 101, a middle ear 105, and an inner ear 107. In a fully functional ear, outer ear 101 includes a pinna 110 and an ear canal 102. Sound pressure or sound waves 103 are collected by the pinna 110 and channeled into and through the ear canal 102. Disposed across the distal end of the ear canal 102 is a tympanic membrane 104 that vibrates in response to the sound waves 103. The vibrations are coupled to the oval or elliptical window 112 through the three bones of the middle ear 105, collectively referred to as the ossicles 106 and comprising the malleus 108, the incus 109, and the stapes 111. Bones 108, 109, and 111 of middle ear 105 serve to filter and amplify sound waves 103, causing elliptical window 112 to articulate or vibrate in response to vibration of tympanic membrane 104. This vibration creates waves of perilymph fluid motion within the cochlea 140. This fluid motion in turn activates tiny hair cells (not shown) inside the cochlea 140. Activation of the hair cells causes appropriate nerve impulses to be generated and transmitted through the spiral ganglion cells (not shown) and the auditory nerve 114 to the brain (also not shown), where they are perceived as sound.

As shown in fig. 1, the exemplary hearing prosthesis 100 includes one or more components that are temporarily or permanently implanted in a recipient. An exemplary auditory prosthesis 100 is shown in fig. 1 with: an external component 142 attached directly or indirectly to the recipient's body; and an internal component 144 that is temporarily or permanently implanted in the recipient (e.g., positioned in a recess adjacent the temporal bone of the recipient's pinna 110). The external component 142 typically includes one or more sound input elements for detecting sound (e.g., an external microphone 124), a sound processing unit 126 (e.g., disposed in a behind-the-ear unit), a power source (not shown), and an external transmitter unit 128. In the embodiment illustrated in fig. 1, the external transmitter unit 128 includes an external coil 130 (e.g., a wire antenna coil comprising multiple turns of electrically insulated single or multi-strand platinum or gold wire), and preferably includes a magnet (not shown) secured directly or indirectly to the external coil 130. The external coil 130 of the external transmitter unit 128 is part of an inductive Radio Frequency (RF) communication link with the internal components 144. The sound processing unit 126 processes the output of the microphone 124, which in the depicted embodiment is positioned outside the recipient's body by the recipient's pinna 110. The sound processing unit 126 generates an encoded signal, sometimes referred to herein as an encoded data signal, which is provided (e.g., via a cable) to an external transmitter unit 128. It will be appreciated that the sound processing unit 126 may utilize digital processing techniques to provide frequency shaping, amplification, compression, and other signal conditioning, including conditioning based on recipient-specific fitting parameters.

The power source of the external component 142 is configured to provide power to the hearing prosthesis 100, wherein the hearing prosthesis 100 includes a battery (e.g., located in the internal component 144 or provided at a separate implant location) that is recharged by power provided from the external component 142 (e.g., via a transcutaneous energy transfer link). The transcutaneous energy transfer link is used to transmit power and/or data to the internal components 144 of the hearing prosthesis 100. Various types of energy transfer, such as Infrared (IR), electromagnetic, capacitive, and inductive transfer, may be used to transfer power and/or data from the external component 142 to the internal component 144. During operation of the hearing prosthesis 100, the power stored by the rechargeable battery is distributed to various other implanted components as needed.

The internal component 144 includes the internal receiver unit 132, the stimulator unit 120, and the elongate electrode assembly 118. The internal receiver unit 132 includes an internal coil 136 (e.g., a wire antenna coil comprising multiple turns of electrically insulated single or multi-strand platinum or gold wire), and preferably includes a magnet (not shown) fixed relative to the internal coil 136. In some embodiments, the internal receiver unit 132 and the stimulator unit 120 are hermetically sealed within a biocompatible housing, and are sometimes collectively referred to as a stimulator/receiver unit. The internal coil 136 receives power and/or data signals from the external coil 130 through a transcutaneous energy transfer link (e.g., an inductive RF link). The stimulator unit 120 generates electrical stimulation signals based on the data signals, and the stimulation signals are delivered to the recipient through the elongate electrode assembly 118.

The elongate electrode assembly 118 has a proximal end connected to the stimulator unit 120 and a distal end implanted in the cochlea 140. Electrode assembly 118 extends from stimulator unit 120 through mastoid bone 119 to cochlea 140. In some embodiments, the electrode assembly 118 may be implanted at least in the basal region 116, and sometimes deeper. For example, electrode assembly 118 may extend toward the apex of cochlea 140 (referred to as cochlea tip 134). In some cases, electrode assembly 118 may be inserted into cochlea 140 through a cochleostomy 122. In other cases, a cochleostomy may be formed through the round window 121, the oval window 112, the promontory 123, or the roof 147 of the cochlea 140.

The elongate electrode assembly 118 includes a longitudinally aligned and distally extending array 146 of electrodes or contacts 148 disposed along the length thereof, sometimes referred to herein as an electrode or contact array 146. Although electrode array 146 may be disposed on electrode assembly 118, in most practical applications, electrode array 146 is integrated into electrode assembly 118 (e.g., electrode array 146 is disposed in electrode assembly 118). As noted, the stimulator unit 120 generates stimulation signals that are applied by the electrodes 148 to the cochlea 140, thereby stimulating the auditory nerve 114.

Although fig. 1 schematically illustrates the hearing prosthesis 100 utilizing external components 142 including the external microphone 124, the external sound processing unit 126, and the external power source, in certain other embodiments, one or more of the microphone 124, the sound processing unit 126, and the power source may be implanted on or within the recipient (e.g., within the internal component 144). For example, hearing prosthesis 100 may have each of microphone 124, sound processing unit 126, and power source (e.g., enclosed within a subcutaneously located biocompatible component) implantable on or in the recipient, and may be referred to as a fully implantable cochlear implant ("TICI"). For another example, the hearing prosthesis 100 may have most of the components of a cochlear implant (e.g., not including a microphone, which may be an in-canal microphone) that is implantable on or within the recipient, and may be referred to as a mostly implantable cochlear implant ("MICI").

The proper functioning of various implantable hearing prostheses (e.g., cochlear implant systems; acoustic implant systems) depends on the establishment of one or more signal processing data sets (e.g., recipient-specific fitting parameters; operational parameter maps) that are used by the hearing prosthesis to generate appropriate, safe, and comfortable stimulation signals in response to received sound signals, and that are fitted or customized to suit the needs of a particular recipient. Typically, for cochlear implants, one or more signal processing data sets are initially established by a fitting procedure during a visit by the recipient to the clinician. During a visit, the clinician performs a fitting procedure over several stimuli or frequency channels (e.g., initiating several beeps or tones and requesting the recipient to judge loudness). In some cases, rather than utilizing beeps or tones, neural response telemetry ("NRT") may be used in the fitting procedure by measuring the response of the recipient's auditory nerve to electrical stimulation applied using a cochlear implant. For example, NRT may include providing a stimulation signal to each stimulation channel (e.g., an electrode of an electrode array) and measuring a neural response (e.g., an electrically evoked compound action potential) using another electrode of the electrode array (e.g., an adjacent electrode to the stimulation electrode). These measurements may include collecting and determining recipient-specific parameters, such as a threshold level (T-level) and a maximum comfort level (C-level) for each stimulation channel. Subsequent visits and fitting procedures are performed to obtain further NRT measurements to further optimize one or more signal processing data sets (e.g., the recipient's C-level curve and T-level curve).
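The per-channel NRT fitting loop described above can be sketched as follows. This is a toy illustration only: the `measure_ecap` callback, the fixed detection threshold, and the constant T-to-C offset are all assumptions introduced for the sketch, and real fitting procedures are considerably more sophisticated.

```python
def fit_channel_levels(measure_ecap, channels, levels):
    """Sketch of deriving per-channel T- and C-levels from ECAP measurements.

    `measure_ecap(channel, level)` is a hypothetical telemetry callback
    returning the evoked response amplitude. T-level is taken as the lowest
    swept level with a detectable response; C-level is estimated as a fixed
    offset above it (both rules are simplifying assumptions).
    """
    ECAP_THRESHOLD = 0.1   # assumed detection threshold (arbitrary units)
    C_OFFSET = 60          # assumed T-to-C offset in current-level units
    profile = {}
    for ch in channels:
        for level in levels:                 # sweep stimulation levels upward
            if measure_ecap(ch, level) >= ECAP_THRESHOLD:
                profile[ch] = {"T": level, "C": level + C_OFFSET}
                break
    return profile

# Toy response model: channel ch responds once level exceeds 100 + 5*ch.
fake_ecap = lambda ch, lvl: 0.2 if lvl >= 100 + 5 * ch else 0.0
profile = fit_channel_levels(fake_ecap, channels=[1, 2], levels=range(90, 131, 5))
# profile -> {1: {"T": 105, "C": 165}, 2: {"T": 110, "C": 170}}
```

The resulting per-channel T- and C-levels are exactly the kind of recipient-specific parameters an operational parameter map would hold.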

There may be a significant time delay (e.g., days or weeks) between implantation of the cochlear implant and the initial fitting procedure to establish the one or more signal processing data sets, and there may be a further time delay (e.g., days or weeks) between the initial fitting procedure and a subsequent fitting procedure to further optimize the one or more signal processing data sets. Thus, the recipient does not begin to enjoy the benefits provided by the cochlear implant until after the initial fitting procedure, and does not fully enjoy the benefits provided by the cochlear implant until after the one or more signal processing data sets are fully optimized.

Instead of relying on a clinician, autonomous programming (e.g., self-programming) of the hearing prosthesis 100 may be utilized to generate initial values for the one or more signal processing data sets and/or subsequent optimizations of the one or more signal processing data sets. As used herein, the term "autonomous programming" has its broadest reasonable interpretation, including but not limited to automatically measuring evoked neural responses (e.g., by monitoring the recipient's auditory nerve and/or brain signals), and using such measurements to automatically generate and/or modify one or more signal processing data sets (see, e.g., U.S. patent No. 8,965,520) without substantial involvement of the clinician (e.g., no clinical intervention other than merely initiating the procedure).

Using autonomous programming according to certain embodiments described herein, the hearing prosthesis may begin generating at least one signal processing data set shortly after completing implantation of the hearing prosthesis in the recipient, such that the at least one signal processing data set is usable by the hearing prosthesis when the recipient wakes up after the implantation procedure, thereby enabling the recipient to have at least a preliminary hearing level when waking up after the implantation procedure. Further, in certain embodiments described herein, autonomous programming may be used by a hearing prosthesis having an autonomous power source (e.g., implanted power source) to automatically generate additional operational parameter maps that are adjusted for specific environments and/or use by the hearing prosthesis during certain times (e.g., nighttime), and to refine one or more of the operational parameter maps without clinical intervention.
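The power-state gating implied by claims 8 to 10 reduces to a simple rule, sketched here with illustrative names (the enum members and function signature are assumptions, not terms from the disclosure):

```python
from enum import Enum, auto

class PowerState(Enum):
    """Illustrative power states (names are assumptions)."""
    INTERNAL = auto()   # continuously coupled implanted power source
    EXTERNAL = auto()   # selectively coupled external power source

def may_initiate(power_state, external_power_coupled):
    """Gate autonomous programming on the power state: an internally powered
    processor may self-program at any time, while an externally powered one
    may do so only while the external source is actually coupled."""
    if power_state is PowerState.INTERNAL:
        return True
    return external_power_coupled

always_ok = may_initiate(PowerState.INTERNAL, external_power_coupled=False)  # True
gated = may_initiate(PowerState.EXTERNAL, external_power_coupled=False)      # False
```

This captures why a prosthesis with an autonomous (implanted) power source can, for example, refine its nighttime map while the recipient sleeps, whereas an externally powered one must wait for coupling.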

Fig. 2A-2E schematically illustrate examples of an apparatus 200 according to some embodiments described herein. The device 200 includes a sound processing circuit 220 of the auditory prosthesis 100. The sound processing circuit 220 is configured to access one or more signal processing data sets 232 and process the signals 212 received from the microphone 210 of the hearing prosthesis 100 using at least one of the accessed signal processing data sets 232 and generate stimulation signals 222 that are transmitted to at least a portion of the auditory system of a recipient of the hearing prosthesis 100. The device 200 further includes a data storage circuit 230 configured to store one or more signal processing data sets 232. The device 200 further comprises a control circuit 240 of the hearing prosthesis 100. The control circuitry 240 is configured to access information 250 indicative of at least one of: a programmed state of the hearing prosthesis 100, an identification of the hearing prosthesis 100, and a power state of the hearing prosthesis 100. The control circuit 240 is further configured to selectively initiate autonomous programming of the hearing prosthesis 100 to generate or modify at least one of the signal processing data sets 232 at least partially in response to the accessed information 250.
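The relationship between the control circuitry 240 and the accessed information 250 can be modeled with a toy data structure. The field names, the state string, and the decision rule below are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class ProsthesisInfo:
    """Illustrative stand-in for the accessed information 250."""
    programming_state: str      # e.g. "maps_missing" or "maps_complete" (assumed values)
    identification: str
    externally_powered: bool

@dataclass
class ControlCircuit:
    """Toy model of control circuitry 240: reads the stored info and decides
    whether to selectively initiate autonomous programming."""
    info: ProsthesisInfo
    log: list = field(default_factory=list)

    def maybe_program(self):
        if self.info.programming_state == "maps_missing":
            self.log.append(f"autonomous programming started on {self.info.identification}")
            return True
        return False

ctrl = ControlCircuit(ProsthesisInfo("maps_missing", "HP-001", True))
started = ctrl.maybe_program()
# started -> True; ctrl.log -> ["autonomous programming started on HP-001"]
```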

In some embodiments, the microphone 210 includes an external microphone component 124 (see, e.g., fig. 1), while in some other embodiments, the microphone 210 includes a subcutaneous implantable microphone component. The microphone 210 of some embodiments includes an acoustic transducer configured to convert a received sound signal into an electrical signal 212 and transmit the signal 212 to a sound processing circuit 220. In some embodiments, the microphone 210 wirelessly transmits the signal 212 to the sound processing circuit 220, while in some other embodiments, the microphone 210 is wired to the sound processing circuit 220 and transmits the signal 212 to the sound processing circuit 220 via a wire.

The sound processing circuit 220 of certain embodiments includes at least one processor (e.g., microelectronic circuitry; sound processor) that may be located external to the recipient's body or within a device implanted in or on the recipient's body, in operative communication with the data storage circuit 230. In the exemplary device 200 of fig. 2B, the sound processing circuit 220 includes a digital signal processor 224 and a stimulator unit 120. The digital signal processor 224 of certain embodiments comprises at least one integrated circuit configured to receive the signal 212 from the microphone 210 and process the signal 212 (e.g., apply one or more of digitization, shifting, shaping, amplification, compression, filtering, and/or other signal conditioning to the signal 212). The digital signal processor 224 is also configured to transmit the processed signals to the stimulator unit 120. The stimulator unit 120 of certain embodiments is configured to respond to the processed signals from the digital signal processor 224 and generate and transmit stimulation signals 222 to a portion of the recipient's auditory system (e.g., the cochlea 140) via the electrodes 148 of the electrode array 146, thereby stimulating the auditory nerve 114.

As schematically shown in fig. 2B, the stimulator unit 120 of certain embodiments is configured to receive one or more signal processing data sets 232 (e.g., a set of recipient-specific fitting parameters; a set of operational parameter maps) from the data storage circuitry 230 and to generate the stimulation signal 222 based at least in part on the one or more signal processing data sets 232. Although fig. 2B schematically illustrates the stimulator unit 120 receiving the one or more signal processing data sets 232 directly from the data storage circuit 230, and the digital signal processor 224 and the stimulator unit 120 as separate components of the sound processing circuit 220, other configurations are also compatible with certain embodiments described herein (e.g., the digital signal processor 224 receiving the one or more signal processing data sets 232 directly from the data storage circuit 230; the digital signal processor 224 or the stimulator unit 120 receiving the one or more signal processing data sets 232 directly from the control circuit 240; the stimulator unit 120 being a separate component from the sound processing circuit 220).

In certain embodiments, the data storage circuit 230 includes a non-volatile memory (e.g., flash memory) circuit in operative communication with the sound processing circuit 220 and the control circuit 240. The data storage circuit 230 is configured to receive one or more signal processing data sets 232 from the control circuit 240, store the one or more signal processing data sets 232, and provide the one or more signal processing data sets 232 to the sound processing circuit 220, as described herein.

In certain embodiments, the control circuit 240 includes at least one processor (e.g., microelectronic circuitry) in operative communication with the sound processing circuit 220 and the data storage circuit 230. In certain embodiments (see, e.g., fig. 2B), the control circuitry 240 is configured to perform autonomous programming of the device 200 by transmitting control signals 242 to the stimulator unit 120, which is configured to respond to the control signals 242 by performing neuro-response telemetry measurements to be used in the autonomous programming. For example, in response to the control signal 242, the stimulator unit 120 may generate the stimulation signal 222 and may transmit the stimulation signal 222 to at least one electrode 148 of the electrode array 146 to evoke a neural response from the recipient's auditory system. In response to the stimulation signal 222, the recipient's auditory system generates an electrical signal 244 that is detected by the stimulator unit 120 using at least some of the other electrodes 148 of the electrode array 146. The stimulator unit 120 is also configured to transmit a signal 246 indicative of the measured response to the control circuit 240. Using the measured response of the recipient's auditory system (e.g., as represented by signal 246), the control circuit 240 of some embodiments generates or modifies at least one of the signal processing data sets 232.
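The measurement loop described above can be sketched as follows. This is an illustrative toy model only: the function and variable names (`measure_ecap`, `find_threshold`, `generate_map`) are hypothetical, and real neural-response telemetry firmware uses manufacturer-specific interfaces and far more elaborate detection logic.

```python
# Illustrative sketch of the autonomous-programming loop described above:
# stimulate each electrode at increasing levels, detect whether a neural
# response is evoked, and record the lowest responding level as a
# threshold estimate for an operational parameter map. All names here
# are hypothetical, not a real device API.

def find_threshold(measure_ecap, electrode, levels):
    """Return the lowest stimulation level that evokes a measurable neural
    response on the given electrode, or None if no tested level does."""
    for level in sorted(levels):
        if measure_ecap(electrode, level):  # stimulate, then read the response
            return level
    return None

def generate_map(measure_ecap, electrodes, levels):
    """Build a toy operational parameter map: one threshold per electrode."""
    return {e: find_threshold(measure_ecap, e, levels) for e in electrodes}

# Toy recipient model: electrode e responds once the level reaches 100 + 10*e.
fake_ecap = lambda e, lvl: lvl >= 100 + 10 * e
t_map = generate_map(fake_ecap, electrodes=range(4), levels=range(80, 200, 5))
```

In a real system the per-electrode responses would be averaged over repeated presentations and interpolated across the array; the sketch only shows the control flow of stimulate, measure, and record.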

In certain embodiments, the control circuit 240 is configured to access information 250 indicative of at least one aspect of the hearing prosthesis 100, and selectively initiate autonomous programming of the hearing prosthesis 100 at least partially in response to the accessed information 250. As schematically shown in fig. 2B, in certain embodiments, the data storage circuitry 230 is configured to store the signal processing data set 232 and the information 250, and the control circuitry 240 is configured to access the information 250 from the data storage circuitry 230. The information 250 of certain embodiments indicates a programming status of the hearing prosthesis 100 (e.g., indicates whether the hearing prosthesis 100 has been previously programmed to include one or more of the signal processing data sets 232; indicates one or more statuses of one or more of the signal processing data sets 232; indicates whether the signal processing data sets 232 are stored in the data storage circuitry 230; indicates whether the signal processing data sets 232 require modification using autonomous programming).

Upon accessing the information 250 indicating that at least one of the signal processing data sets 232 is unavailable (e.g., not stored in the data storage circuitry 230) or needs improvement, the control circuitry 240 may initiate autonomous programming to generate the previously unavailable signal processing data set 232 or to modify (e.g., improve) the signal processing data set 232 that needs improvement. In addition to storing the newly generated or newly improved signal processing data set 232 in the data storage circuit 230, the control circuit 240 of some embodiments also updates the information 250 stored by the data storage circuit 230 to reflect the updated status of the signal processing data set 232. For example, the data storage circuitry 230 may include one or more bytes (e.g., at predetermined storage locations) that are configured to be read by the control circuitry 240 to determine a programming status, and are further configured to be written by the control circuitry 240 to update the programming status once at least one of the signal processing data sets 232 is generated or modified. In some such embodiments, a predetermined value may be written in the one or more bytes during manufacture of the device 200, and this predetermined value may be interpreted by the control circuitry 240 as indicating that the corresponding signal processing data set 232 is missing (e.g., a mapping procedure has not been performed using the device 200 since its manufacture). In certain other embodiments, a predetermined value may be written in the one or more bytes during or immediately after a mapping procedure has been performed using the device 200 to generate a corresponding signal processing data set 232, and the absence of this predetermined value may be interpreted by the control circuitry 240 as indicating that the signal processing data set 232 is missing (e.g., a mapping procedure has not been performed using the device 200 since its manufacture).
In some embodiments, the predetermined value may indicate a corresponding signal processing data set 232 (e.g., an identification number of the signal processing data set 232), while in some other embodiments, the predetermined value does not indicate a corresponding signal processing data set 232.
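The status-byte scheme above can be illustrated with a short sketch. The offsets and the 0xFF factory value here are made up for illustration; an actual device would define its own non-volatile memory layout.

```python
# Hedged sketch of the status-byte scheme: a byte at a fixed offset in
# non-volatile storage records whether a given map has been generated.
# FACTORY_VALUE and STATUS_OFFSET are hypothetical, not a real layout.

FACTORY_VALUE = 0xFF                 # written at manufacture: "no mapping yet"
STATUS_OFFSET = {1: 0, 2: 1, 3: 2}   # map id -> byte offset (illustrative)

def map_is_missing(storage: bytearray, map_id: int) -> bool:
    """The factory value is interpreted as 'no mapping procedure performed'."""
    return storage[STATUS_OFFSET[map_id]] == FACTORY_VALUE

def mark_map_generated(storage: bytearray, map_id: int) -> None:
    # Writing the map id itself also identifies which map was generated,
    # matching the variant in which the value indicates the data set.
    storage[STATUS_OFFSET[map_id]] = map_id

nvm = bytearray([FACTORY_VALUE] * 3)  # fresh-from-manufacture storage
mark_map_generated(nvm, 2)            # after a mapping procedure for map 2
```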

In certain other embodiments, the information 250 stored in the data storage circuitry 230 includes an identification 250a of the auditory prosthesis 100 (e.g., a unique serial number), and the information 250b indicating the programming status of the auditory prosthesis 100 is stored remotely from the device 200 (e.g., stored on a dedicated server available through wireless communication with the internet). As schematically shown in fig. 2C, in certain such embodiments, the device 200 includes a communication circuit 260 (e.g., wireless; radio frequency; bluetooth; WiFi) in operative communication with the control circuit 240 and configured to be used by the control circuit 240 to access the information 250b indicative of the programming status of the auditory prosthesis 100. For example, the control circuit 240 may access the identification 250a of the hearing prosthesis 100 from the data storage circuit 230, transmit the identification 250a to the server via the communication circuit 260 along with a request for the programming status 250b of the hearing prosthesis 100 corresponding to the identification 250a, and receive the programming status 250b of the hearing prosthesis 100 from the server via the communication circuit 260. After generating and/or modifying the signal processing data set 232, the control circuit 240 may also transmit the identification 250a and information 250b indicating the updated programming status to be stored on the server to reflect the updated status of the signal processing data set 232 of the hearing prosthesis 100 identified by the identification 250a.
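The identification-keyed lookup and update described above can be sketched as follows. The dict stands in for the remote server, and the serial numbers and record fields are hypothetical, not an actual service protocol.

```python
# Minimal sketch of the remote-status exchange: the device sends its
# identification (250a) and receives the programming status (250b);
# after programming, it reports the updated status back. The dict is a
# stand-in for the real server; all records here are made up.

server_db = {"SN-001234": {"maps_available": [1, 3]}}

def fetch_programming_status(serial: str, db=server_db):
    """Query the 'server' by identification; unknown devices report no maps."""
    return db.get(serial, {"maps_available": []})

def report_updated_status(serial: str, maps, db=server_db):
    """Store the updated programming status keyed by the identification."""
    db[serial] = {"maps_available": sorted(maps)}

# After autonomous programming generates map 2, the device reports it.
status = fetch_programming_status("SN-001234")
report_updated_status("SN-001234", status["maps_available"] + [2])
```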

In certain embodiments, the control circuitry 240 is configured to utilize the communication circuitry 260 to access (e.g., retrieve; modify; store) one or more signal processing data sets 232 stored remotely from the device 200 (e.g., on a dedicated server available through wireless communication with the Internet; in the cloud). In addition to the remotely stored signal processing data set 232, certain embodiments may also remotely store impedance, NRT values, and/or diagnostic information in a database that is specific to a particular auditory prosthesis 100. In certain embodiments, the database may be accessed by an artificial intelligence ("AI") software system to generate and/or refine the signal processing data set 232.

One or more signal processing data sets 232 stored remotely may be accessed by the device 200. For example, an initial version of the signal processing data set 232 may be stored remotely but accessible to the device 200 via the communication circuitry 260, and the device 200 may be configured to subsequently modify the initial version to generate a version tailored to the recipient. For another example, the device 200 may be configured to upload one or more signal processing data sets 232 to a remote storage location as a backup copy of the one or more signal processing data sets 232 stored locally in the data storage circuitry 230. In some embodiments, access to the one or more signal processing data sets 232 by the control circuitry 240 may be performed in response to a state (e.g., a programming state and/or a power state) of the device 200. For yet another example, by having one or more recipient-adjusted signal processing data sets 232 stored remotely from the device 200, a new device 200 that was not previously used with the recipient's implantable hearing prosthesis 100 may be placed on the recipient, and the remotely stored recipient-adjusted signal processing data sets 232 may be accessed to provide hearing to the recipient.

In certain embodiments, as schematically shown in fig. 2D, the device 200 includes power circuitry 270 (e.g., at least one processor; microelectronic circuitry) configured to be in operative communication with the control circuitry 240 and a power source 272 (e.g., a battery) of the auditory prosthesis 100. For example, the power circuit 270 may be configured to detect whether the device 200 is in a first power state in which the device 200 is configured to receive power from an internal power source (e.g., a power source implanted in or on the recipient) that is continuously operatively coupled to the device 200 or a second power state in which the device 200 is configured to receive power from an external power source (e.g., a power source external to the recipient) that is selectively operatively coupled to the device 200. The power supply circuit 270 may be further configured to provide a power status signal 274 of the auditory prosthesis 100 to the control circuit 240 indicative of the power status of the device 200.

The control circuit 240 of certain embodiments is configured to initiate and/or direct an autonomous programming procedure in response to the power status signal 274 obtained by interrogating the power supply circuit 270. For example, when the power status signal 274 indicates the first power state, the control circuitry 240 may selectively initiate autonomous programming regardless of whether the device 200 is operatively coupled to an external power source. In this way, the self-powered device 200 of certain embodiments may perform autonomous programming at any time. In certain embodiments with a self-powered device 200, the NRT measurements used to generate or improve the signal processing data sets 232 may be performed over a long period of time, thereby enabling very accurate signal processing data sets 232 to be established (e.g., by averaging) and the quality of these signal processing data sets 232 to be improved. Furthermore, in some embodiments utilizing a self-powered device 200, NRT measurements may be taken while the recipient remains asleep after implantation surgery, thereby allowing the recipient to wake up after surgery with hearing that has already been restored.

For another example, when the power status signal 274 indicates the second power state, the control circuitry 240 may selectively initiate autonomous programming only when the device 200 is operatively coupled to an external power source. In certain embodiments, the power status signal 274 from the power supply circuit 270 indicates a status of power transfer (e.g., a measured value of voltage and/or current) from the power supply 272 to the hearing prosthesis 100, and the control circuit 240 is configured to selectively initiate autonomous programming based on the status of power transfer (e.g., whether the measured voltage and/or current is above a predetermined threshold corresponding to sufficient power for such autonomous programming). In some embodiments utilizing an externally powered device 200, NRT measurements may be taken once the power source is in operable communication with the device 200. If such a connection is made during the implant surgery, or while the recipient remains asleep after the implant surgery, there is sufficient time to generate a signal processing data set 232, and the recipient may wake up post-surgery with hearing that has been restored.

In certain embodiments, as schematically shown in fig. 2E, the device 200 includes at least one indicator 280 (e.g., a display; an LED or other light source; a speaker) in operative communication with the control circuit 240. In response to the signal 282 from the control circuit 240, the at least one indicator 280 is configured to indicate to the recipient and/or clinician (e.g., via an image, color, sound, or other signal perceptible to the recipient and/or clinician) a status of the device 200 with respect to autonomous programming. For example, the status may include at least one of: whether autonomous programming is currently being performed; whether the autonomous programming is currently experiencing one or more problem conditions that prevent the autonomous programming from operating properly; which of the one or more signal processing data sets 232 is being generated or modified by the autonomous programming; and completion of the autonomous programming. In certain embodiments, the control circuitry 240 is configured to respond to detection of one or more problem conditions by entering a diagnostic mode in which the control circuitry 240 is configured to facilitate identification and/or resolution of the problem condition (e.g., by presenting information to a clinician to diagnose the problem).

Fig. 3A is a flow diagram of an example of a method 300 according to some embodiments described herein. In operation block 310, the method 300 includes determining a state of a sound processor (e.g., sound processing circuit 220; device 200) of the hearing prosthesis 100. In operation block 320, the method 300 further includes selectively initiating autonomous programming of the sound processor based at least in part on the determined state of the sound processor.

In certain embodiments, the sound processor is configured to access one or more operational parameter maps (e.g., one or more signal processing data sets 232; one or more recipient-specific fitting parameters). The one or more operational parameter maps may include one or more of a plurality of operational parameter maps. For example, the first operational parameter map may be configured for use by the sound processor in a normal sound environment (e.g., during the day time; within a predetermined time period that the recipient is expected to be awake; in an environment where the sound level is within a predetermined range). The first operational parameter map may be a default map (e.g., including specification data) to be used when conditions and/or circumstances of other operational parameter maps are not currently present. The second operational parameter map may be configured for use by the sound processor in quiet sound environments (e.g., in environments where the sound level is below a predetermined threshold). The third operational parameter map may be configured for use by the sound processor in a noisy sound environment (e.g., in an environment where the sound level is above a predetermined threshold). The fourth operational parameter map may be configured for use by the sound processor in a musical sound environment (e.g., in an environment in which the recipient is listening to music). The fifth operational parameter map may be configured to be used by the sound processor during sleep of the recipient (e.g., during the night; within a predetermined time period in which the recipient is expected to fall asleep). Other operational parameter maps and other numbers of operational parameter maps are also consistent with certain embodiments described herein.
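Selection among the example maps above can be sketched with a simple rule, keyed on a measured sound level and the time of day. The thresholds, hours, and map names below are made up for illustration; a real prosthesis would use its own classifier and parameters.

```python
# Illustrative selection among the five example operational parameter
# maps described above. All thresholds and hours are hypothetical.

def select_map(sound_db, hour, music=False):
    """Pick a map name from a sound level (dB SPL) and a 24-hour clock time."""
    if 22 <= hour or hour < 6:
        return "sleep"     # fifth map: recipient expected to be asleep
    if music:
        return "music"     # fourth map: musical sound environment
    if sound_db < 40:
        return "quiet"     # second map: sound level below a threshold
    if sound_db > 75:
        return "noisy"     # third map: sound level above a threshold
    return "default"       # first map: normal sound environment
```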

In some embodiments, the sound processor is configured to be in one of a plurality of programming states. In the first programming state 330a of the sound processor, none of the one or more operational parameter maps is available to the sound processor. For example, when the sound processor has not undergone any autonomous or non-autonomous programming, the sound processor may be in the first programming state 330a, such that no operational parameter map is available (e.g., no operational parameter map is in the data storage circuitry 230 of the device 200). In the second programming state 330b of the sound processor, at least one, but less than all, of the operational parameter maps are unavailable to the sound processor. For example, when the sound processor has undergone at least some programming (e.g., autonomous; non-autonomous), the sound processor may be in the second programming state 330b, such that certain operational parameter maps are available to the sound processor (e.g., in the data storage circuitry 230 of the device 200) and at least one operational parameter map is not available to the sound processor (e.g., not in the data storage circuitry 230 of the device 200). In the third programming state 330c of the sound processor, all of the operational parameter maps are available to the sound processor. For example, when the sound processor has undergone sufficient programming (e.g., autonomous; non-autonomous), the sound processor may be in the third programming state 330c, such that all of the operational parameter maps are available to the sound processor (e.g., in the data storage circuitry 230 of the device 200).
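The classification into the three programming states follows directly from which maps are available. A minimal sketch, with illustrative names for the states:

```python
# Sketch of determining the programming state (330a/330b/330c) from the
# set of available operational parameter maps. State labels are taken
# from the figure numerals; the function name is illustrative.

FIRST, SECOND, THIRD = "330a", "330b", "330c"

def programming_state(available: set, required: set) -> str:
    missing = required - available
    if missing == required:
        return FIRST    # no operational parameter map is available
    if missing:
        return SECOND   # some, but not all, maps are available
    return THIRD        # all operational parameter maps are available

req = {1, 2, 3, 4, 5}
assert programming_state(set(), req) == FIRST
```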

Fig. 3B is a flow diagram of another example of a method 300 according to some embodiments described herein. As shown in fig. 3B, the method 300 of some embodiments includes an operation block 302 in which certain initial actions are performed (e.g., placing an external sound processor and/or an external power source or charger on the recipient to operatively communicate with the implanted portion of the hearing prosthesis 100 in operation block 304; powering on or "firing" the hearing prosthesis 100 in operation block 306). In some embodiments, one or more of these initial actions are performed at the time of implantation (e.g., for a fully implanted cochlear implant system in which the sound processor and power supply are implanted while in operable communication with other components of the system).

As shown in fig. 3B, determining the state of the sound processor in operation block 310 directs the method 300 to one of a plurality of alternative logical paths 332a, 332b, 332c. When the programming state is the first programming state 330a, the method 300 further includes a logic path 332a that includes generating at least one of the unavailable operational parameter maps using autonomous programming (e.g., performing NRT measurements and generating a "first time" map), and updating the programming state of the sound processor (e.g., updating the information 250 for later access when determining the state in the future). When the programming state is the second programming state 330b, the method 300 further includes a logic path 332b that includes determining which operational parameter map is not available to the sound processor, generating at least one unavailable operational parameter map using autonomous programming (e.g., performing NRT measurements and generating a missing map), and updating the programming state of the sound processor. When the programming state is the third programming state 330c, the method 300 further includes a logic path 332c that includes determining whether any operational parameter map needs to be improved by autonomous programming, using autonomous programming to improve the operational parameter map that needs improvement (e.g., performing NRT measurements and updating the map), and updating the programming state of the sound processor. For example, in addition to indicating the availability or unavailability of each operational parameter map, the state of the sound processor may also indicate whether each available operational parameter map requires improvement (e.g., based on the amount of time since the last update of the operational parameter map; based on the amount of data compiled and used when generating the operational parameter map).

While the logic paths 332a and 332b may be executed in a relatively short amount of time (e.g., a few minutes), the amount of time of the logic path 332c may depend on the amount of improvement to be performed. After each of the logical paths 332a, 332b, 332c, the method 300 of some embodiments further includes evaluating the quality of the operational parameter map generated or modified by the logical paths 332a, 332b, 332c (e.g., based on the amount of data compiled and used in generating the operational parameter map) in operation block 334. If the quality is below the predetermined threshold, the method 300 may include returning to operation block 310 and performing additional autonomous programming in response to the state of the device 200.
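The dispatch-and-recheck flow above can be sketched as a loop: fill in missing maps, evaluate quality, and re-run autonomous programming on any map below threshold. In this toy model, a map's "quality" is simply the number of NRT passes that contributed to it; `refine` stands in for the real measurement and map-building step.

```python
# Toy sketch of the fig. 3B flow: block 310 dispatches on what is missing
# or weak, paths 332a/332b fill gaps, path 332c improves maps, and block
# 334 evaluates quality. refine() is a hypothetical stand-in for NRT
# measurement plus map generation; quality = number of NRT passes.

def refine(map_id, data=None):
    """Each autonomous-programming pass adds one NRT data point (toy model)."""
    return (data or 0) + 1

def method_300(maps, required, threshold):
    while True:
        for m in required:                      # paths 332a / 332b: fill gaps
            if m not in maps:
                maps[m] = refine(m)
        weak = [m for m in required if maps[m] < threshold]
        if not weak:                            # block 334: quality sufficient
            return maps
        for m in weak:                          # path 332c: improve weak maps
            maps[m] = refine(m, maps[m])

result = method_300({}, required=[1, 2], threshold=3)
```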

Fig. 3C is a flow diagram of another example of a method 300 according to some embodiments described herein. As shown in fig. 3C, in some embodiments, determining the state of the sound processor includes determining a programming state of the sound processor in operation block 310a and also includes determining a power state of the sound processor in operation block 310b. For example, the sound processor may be in one of a plurality of power states. In a first power state, the sound processor is configured to receive power from an internal power source that is configured to be continuously operatively coupled to the sound processor. In the second power state, the sound processor is configured to receive power from an external power source configured to be selectively operatively coupled to the sound processor. As shown in fig. 3C, in some embodiments, after determining the power state of the sound processor, method 300 may include the option of bypassing the determination of the programming state of the sound processor.

When the power state is the second power state, selectively initiating autonomous programming is performed only while the sound processor is operatively coupled to the external power source. Since the sound processor in the second power state receives power from the external power source, NRT measurements are taken as part of the autonomous programming only when the external power source is connected. When the power state is the first power state, selectively initiating autonomous programming is performed regardless of whether the sound processor is operatively coupled to an external power source. Since the sound processor in the first power state receives power from the internal power source, NRT measurements are taken as part of the autonomous programming even when the sound processor is not connected to any external power source. Thus, the sound processor in the first power state is able to perform NRT measurements and/or average them over a longer period of time, advantageously resulting in a more accurate, better-optimized map of operational parameters than maps generated using measurements taken over a shorter period of time.
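The power-state gating described above reduces to a short predicate. The state labels below are illustrative stand-ins for the first and second power states:

```python
# Sketch of the fig. 3C power-state gating: in the first power state
# (internal source) autonomous programming may start at any time; in the
# second power state (external source) it may start only while the
# external power source is operatively coupled. Labels are illustrative.

FIRST_POWER, SECOND_POWER = "internal", "external"

def may_start_autonomous_programming(power_state, external_coupled):
    if power_state == FIRST_POWER:
        return True               # internal source: run at any time
    return external_coupled       # external source must be connected
```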

Fig. 4 is a flow diagram of an exemplary method 400 in accordance with certain embodiments described herein. In operation block 410, the method 400 includes initiating a self-programming operation (e.g., autonomous programming) by a sound processor (e.g., sound processing circuit 220; device 200) of an implanted hearing prosthesis 100 (e.g., a cochlear implant system). In operation block 420, the method 400 further includes controlling a self-programming operation based at least in part on the determined hearing prosthesis state. For example, the determined state of the hearing prosthesis may include at least one of: a programmed state of the hearing prosthesis and a power state of the hearing prosthesis. In some embodiments, controlling the self-programming operation includes directing the self-programming operation to generate or modify at least one operational parameter map of the auditory prosthesis.

Fig. 5 schematically illustrates an example apparatus 500 in accordance with certain embodiments described herein. In certain embodiments, the apparatus 500 includes an external sound processor configured to operatively communicate with the implanted portion of the hearing prosthesis 100, while in certain other embodiments, the apparatus 500 includes a separate external device (e.g., a smart device; a smartphone; a tablet; a remote control) configured to operatively communicate with the implanted portion of the hearing prosthesis 100, which may include the implanted sound processor.

In certain embodiments, the device 500 includes at least one processor 510 (e.g., a microprocessor; microelectronic circuitry) configured to generate at least one control signal 512. The device 500 also includes at least one communication link 520 (e.g., wired; wireless; radio frequency; bluetooth; WiFi; inductive) in operable communication with the at least one processor 510. The at least one communication link 520 is configured to transmit the at least one control signal 512 to the implantable auditory prosthesis 100 and to receive the at least one status signal 522 from the implantable auditory prosthesis 100. The implantable hearing prosthesis 100 includes a sound processor (e.g., sound processing circuitry 220; apparatus 200) configured to transmit at least one status signal 522 indicative of a status of the sound processor and to perform autonomous programming in response to the at least one control signal 512 to generate or modify at least one operational parameter map. The device 500 also includes at least one indicator 530 (e.g., a display; an LED or other light source; a speaker) in operative communication with the at least one processor 510. The at least one indicator 530 is configured to transmit (e.g., via an image, color, sound, or other signal perceptible by the recipient and/or clinician) the status of the sound processor to at least one of the recipient and the clinician of the implantable auditory prosthesis 100 in response to the received at least one status signal 522. The device 500 also includes at least one user input mechanism 540 (e.g., buttons; switches; touch pad; track ball; mouse) in operable communication with the at least one processor 510. 
The at least one user input mechanism 540 is configured to be utilized by at least one of the recipient and the clinician to provide at least one user input signal 542 to the at least one processor 510, and the at least one processor 510 is configured to respond to the at least one user input signal 542 by generating at least one control signal 512.

In some embodiments, the device 500 may be used to initiate the method 300 and/or the method 400, and may be used in conjunction with the device 200 of the hearing prosthesis 100. For example, during or shortly after the implant procedure (e.g., in an operating room), the clinician may access the user input mechanism 540 to instruct the device 500 to transmit a first control signal 512a to the hearing prosthesis 100, the first control signal 512a configured to instruct the hearing prosthesis 100 to start or power up and transmit a first status signal 522a to the device 500. The first status signal 522a may indicate a status of the sound processor of the hearing prosthesis 100, which may include a programming status of the sound processor and a power status of the sound processor. In this example, the programming state of the sound processor may be a first programming state in which no map of operational parameters is available (e.g., hearing prosthesis 100 is first powered on or energized), and the power state may be a first power state in which the sound processor is configured to receive power from an internal power source (e.g., hearing prosthesis 100 includes a fully implantable cochlear implant). The device 500 may be configured to respond to the first status signal 522a by transmitting the status of the sound processor to the clinician (e.g., the first programming status and the first power status).

In some embodiments where the programmed state is stored locally in data storage circuitry 230, control circuitry 240 of device 200 retrieves the state from data storage circuitry 230. In certain other embodiments where the programmed state is stored remotely (e.g., in the cloud), the control circuitry 240 retrieves the identification 250a from the data storage circuitry 230 and retrieves the state information 250b from the remote storage via the communication circuitry 260. The implanted portion of the hearing prosthesis 100 of some embodiments accesses the remote status information 250b via the sound processor of the hearing prosthesis 100, while in some other embodiments the implanted portion of the hearing prosthesis 100 has a direct communication link with an accessory having internet access capability, thereby avoiding the use of the sound processor as an intermediary.

The clinician may then access the user input mechanism 540 again to instruct the device 500 to transmit a second control signal 512b to the hearing prosthesis 100, the second control signal 512b configured to instruct the hearing prosthesis 100 to initiate autonomous programming (e.g., NRT measurement to estimate at least one of a C-level curve and a T-level curve of the recipient) to generate an operational parameter map (e.g., a default operational parameter map). The hearing prosthesis 100 may further respond to the second control signal 512b by sending one or more second status signals 522b to the device 500, the one or more second status signals 522b indicating the state of the sound processor with respect to autonomous programming (e.g., whether autonomous programming is currently being performed; whether autonomous programming has encountered one or more problem conditions that prevent its normal operation; which operational parameter map is being generated or modified by autonomous programming; completion of autonomous programming). The device 500 may be configured to respond to the one or more second status signals 522b by communicating the corresponding state of the sound processor to the clinician. In certain embodiments, the clinician may command the device 500 or the hearing prosthesis 100 to enter a diagnostic mode in which a problem condition is to be identified and/or addressed.
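The categories of information carried by the second status signals 522b can be sketched as a small status record emitted during autonomous programming. The status vocabulary and function names below are assumptions for illustration; the patent names the categories of information but not an encoding, and the NRT measurement itself is elided here.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional

# Hypothetical status vocabulary for the second status signals 522b.
class ProgrammingActivity(Enum):
    RUNNING = auto()   # autonomous programming currently being performed
    PROBLEM = auto()   # one or more problem conditions encountered
    COMPLETE = auto()  # autonomous programming completed

@dataclass
class AutonomousProgrammingStatus:
    activity: ProgrammingActivity
    map_in_progress: Optional[str] = None        # which map is being generated/modified
    problems: List[str] = field(default_factory=list)

def run_autonomous_programming(emit):
    """Sketch: generate a default map, emitting status signals 522b via `emit`."""
    emit(AutonomousProgrammingStatus(ProgrammingActivity.RUNNING,
                                     map_in_progress="default"))
    # ... NRT measurements estimating C-level and T-level curves would run here ...
    emit(AutonomousProgrammingStatus(ProgrammingActivity.COMPLETE,
                                     map_in_progress="default"))

# Usage: device 500 would relay each emitted status to the clinician.
signals = []
run_autonomous_programming(signals.append)
assert signals[-1].activity is ProgrammingActivity.COMPLETE
```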

In certain embodiments, the device 500 may be used by a clinician and/or recipient to initiate a subsequent autonomous programming procedure (e.g., a self-programming session) by the hearing prosthesis 100 (e.g., to select one of the operational parameter maps to be modified or improved by autonomous programming), to terminate and/or modify an autonomous programming procedure currently being run by the hearing prosthesis 100 (e.g., to manually override an aspect of an automatically initiated autonomous programming procedure), and/or to monitor a state of the hearing prosthesis 100 relative to the autonomous programming (e.g., to detect an error condition or successful completion of the autonomous programming procedure).
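The three operations above (initiate, terminate/modify, monitor) can be summarized as a small controller interface. This is a sketch under stated assumptions: the method names, session states, and map names are illustrative and do not come from the patent.

```python
from enum import Enum, auto

# Hypothetical session states for autonomous programming on the prosthesis.
class SessionState(Enum):
    IDLE = auto()
    RUNNING = auto()
    TERMINATED = auto()
    COMPLETE = auto()

class ProgrammingController:
    """Sketch of the clinician/recipient-facing operations of device 500."""

    def __init__(self):
        self.state = SessionState.IDLE
        self.selected_map = None

    def initiate(self, map_name):
        """Start a self-programming session for the selected operational map."""
        self.selected_map = map_name
        self.state = SessionState.RUNNING

    def terminate(self):
        """Manually override: stop a currently running autonomous procedure."""
        if self.state is SessionState.RUNNING:
            self.state = SessionState.TERMINATED

    def monitor(self):
        """Report the prosthesis state relative to autonomous programming."""
        return self.state

# Usage: initiate a session for a (hypothetical) "everyday" map, then override it.
ctrl = ProgrammingController()
ctrl.initiate("everyday")
ctrl.terminate()
assert ctrl.monitor() is SessionState.TERMINATED
```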

It is to be understood that the embodiments disclosed herein are not mutually exclusive and can be combined with one another in various arrangements.

The invention described and claimed herein is not to be limited in scope by the specific exemplary embodiments herein disclosed, since these embodiments are intended as illustrations of several aspects of the invention and not as limitations. Any equivalent embodiments are intended to be within the scope of the present invention. Indeed, various modifications in form and detail of the present invention in addition to those shown and described herein will become apparent to those skilled in the art from the foregoing description. Such modifications are also intended to fall within the scope of the claims. The breadth and scope of the present invention should not be limited by any of the exemplary embodiments disclosed herein, but should be defined only in accordance with the following claims and their equivalents.
