
WO2025041006A1 - Audio processing device operable as remote sensor - Google Patents


Info

Publication number
WO2025041006A1
Authority
WO
WIPO (PCT)
Prior art keywords
recipient
circuitry
sound
microphone
implanted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IB2024/057901
Other languages
French (fr)
Inventor
Quang Luu THAI
Rachel MACFARLANE
Brett Anthony Swanson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cochlear Ltd
Original Assignee
Cochlear Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cochlear Ltd
Publication of WO2025041006A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/60Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles
    • H04R25/604Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles of acoustic or vibrational transducers
    • H04R25/606Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles of acoustic or vibrational transducers acting directly on the eardrum, the ossicles or the skull, e.g. mastoid, tooth, maxillary or mandibular bone, or mechanically stimulating the cochlea, e.g. at the oval window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/554Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired using a wireless connection, e.g. between microphone and amplifier or using Tcoils
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/67Implantable hearing aids or parts thereof not covered by H04R25/606
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/13Hearing devices using bone conduction transducers

Definitions

  • the present application relates generally to systems and methods for communicating with a device worn by a recipient or implanted on or within a recipient’s body.
  • Medical devices have provided a wide range of therapeutic benefits to recipients over recent decades. Medical devices can include internal or implantable components/devices, external or wearable components/devices, or combinations thereof (e.g., a device having an external component communicating with an implantable component). Medical devices, such as traditional hearing aids, partially or fully-implantable hearing prostheses (e.g., bone conduction devices, mechanical stimulators, cochlear implants, etc.), pacemakers, defibrillators, functional electrical stimulation devices, and other medical devices, have been successful in performing lifesaving and/or lifestyle enhancement functions and/or recipient monitoring for a number of years.
  • implantable medical devices now often include one or more instruments, apparatus, sensors, processors, controllers or other functional mechanical or electrical components that are permanently or temporarily implanted in a recipient. These functional devices are typically used to diagnose, prevent, monitor, treat, or manage a disease/injury or symptom thereof, or to investigate, replace or modify the anatomy or a physiological process. Many of these functional devices utilize power and/or data received from external devices that are part of, or operate in conjunction with, implantable components.
  • an apparatus comprises at least one microphone configured to receive sound and to generate audio signals indicative of the sound.
  • the apparatus further comprises circuitry configured to receive the audio signals from the at least one microphone.
  • the circuitry has a plurality of operational states comprising a first operational state and a second operational state. In the first operational state, the circuitry collects a sample portion of the audio signals. The sample portion is indicative of a sound sample received by the at least one microphone from a target sound source while the at least one microphone is positioned in proximity to the target sound source. In the second operational state, the circuitry uses the sample portion of the audio signals to process further audio signals received by the circuitry subsequently to receiving the sample portion.
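The two operational states described above can be sketched as a simple state machine. This is a hypothetical illustration only: the class, method, and state names below are not from the publication, and the sample-based processing of the second state is reduced to a placeholder peak normalization.

```python
from enum import Enum, auto

class State(Enum):
    COLLECT = auto()   # first operational state: gather a sample of the target source
    PROCESS = auto()   # second operational state: use the sample on later audio

class Circuitry:
    """Two-state controller: collect a sample portion, then use it later."""

    def __init__(self):
        self.state = State.COLLECT
        self.sample = []   # sample portion of the audio signals

    def receive(self, frame):
        if self.state is State.COLLECT:
            # While the microphone is near the target sound source,
            # accumulate the sample portion.
            self.sample.extend(frame)
            return None
        # In the second state, use the stored sample to process subsequent
        # audio; here reduced to a placeholder peak normalization.
        peak = max((abs(s) for s in self.sample), default=1.0) or 1.0
        return [s / peak for s in frame]

    def switch_to_processing(self):
        self.state = State.PROCESS
```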
  • an apparatus comprises a housing configured to be worn by a recipient and at least one microphone on or within the housing.
  • the at least one microphone is configured to receive sound and to generate information indicative of the sound.
  • the apparatus further comprises first circuitry and second circuitry on or within the housing.
  • the first circuitry is configured to wirelessly transmit the information to a first device implanted on or within the recipient while the housing is worn by the recipient.
  • the second circuitry is configured to wirelessly transmit the information to at least the first device while the housing is remote from the recipient.
  • a method comprises providing a first sound processor configured to receive sound and to generate signals indicative of the sound.
  • the method further comprises, in response to receiving a first control signal, placing the first sound processor in a first operational mode in which the first sound processor is configured to transmit the signals to only a first device implanted on or within a recipient’s body.
  • the method further comprises, in response to receiving a second control signal, placing the first sound processor in a second operational mode in which the first sound processor is configured to transmit the signals to the first device and to at least one second device.
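The two operational modes of the method above can be sketched as follows. This is a hypothetical illustration: the names are invented, and appending to a list stands in for a wireless transmission.

```python
class SoundProcessor:
    """Switches between implant-only transmission and broadcast on control signals."""

    MODE_IMPLANT_ONLY = 1   # first operational mode
    MODE_BROADCAST = 2      # second operational mode

    def __init__(self, implanted_device, other_devices):
        self.mode = self.MODE_IMPLANT_ONLY
        self.implanted_device = implanted_device
        self.other_devices = list(other_devices)

    def on_control_signal(self, signal):
        # A first control signal selects the first mode; any other control
        # signal selects the second (broadcast) mode.
        if signal == "first":
            self.mode = self.MODE_IMPLANT_ONLY
        else:
            self.mode = self.MODE_BROADCAST

    def transmit(self, signals):
        recipients = [self.implanted_device]
        if self.mode == self.MODE_BROADCAST:
            recipients.extend(self.other_devices)
        for device in recipients:
            device.append(signals)   # stand-in for a wireless transmission
        return len(recipients)
```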
  • FIG. 1 is a perspective view of an example cochlear implant auditory prosthesis implanted in a recipient in accordance with certain implementations described herein;
  • FIG. 2 is a perspective view of an example fully implantable middle ear implant auditory prosthesis implanted in a recipient in accordance with certain implementations described herein;
  • FIG. 3 schematically illustrates a portion of another example transcutaneous bone conduction auditory prosthesis implanted in a recipient in accordance with certain implementations described herein;
  • FIGs. 4A-4C schematically illustrate an example apparatus in accordance with certain implementations described herein;
  • FIG. 5A schematically illustrates an example state diagram of the control circuitry in accordance with certain implementations described herein;
  • FIGs. 5B and 5C schematically illustrate an example operation of the control circuitry in the first and second operational states, respectively, in accordance with certain implementations described herein;
  • FIG. 6 schematically illustrates an example first device in accordance with certain implementations described herein;
  • FIG. 7 schematically illustrates an example usage of an apparatus in accordance with certain implementations described herein;
  • FIG. 8 is a flow diagram of an example method in accordance with certain implementations described herein;
  • FIG. 9A schematically illustrates an example apparatus in accordance with certain implementations described herein;
  • FIG. 9B schematically illustrates an example state diagram of the circuitry of FIG. 9A in accordance with certain implementations described herein;
  • FIGs. 10A and 10B schematically illustrate an example operation of the circuitry of FIG. 9A in the first operational state and the second operational state, respectively, in accordance with certain implementations described herein;
  • FIG. 11 schematically illustrates an example operation of the target enhancement circuitry 914 in accordance with certain implementations described herein;
  • FIG. 12 schematically illustrates another example operation of the target enhancement circuitry in accordance with certain implementations described herein.
  • Certain implementations described herein provide a wearable auditory device configured to be in wireless communication with a stimulation device of a recipient while being worn by the recipient (e.g., over an implanted stimulation device).
  • the wearable device can be removed from being worn by the recipient and placed in proximity to a target sound source.
  • the auditory device can be configured to be used as a remote microphone: while in proximity to the target sound source, it wirelessly broadcasts signals indicative of the received sounds to the stimulation device, to another auditory device worn by the recipient, and/or to other auditory devices worn by other people.
  • the wearable auditory device can be configured to receive and analyze a sound sample to identify characteristics of the target sound while in proximity to the target sound source and then, while worn by the recipient, to process (e.g., filter) subsequently received sounds and to provide the stimulation device with signals that accentuate target sounds from the target sound source over other sound contributions within the received sounds.
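One possible way to "accentuate target sounds" using a collected sound sample is template matching: frames of incoming audio that resemble the sample are boosted and the rest are attenuated. This is a hypothetical sketch of one such technique, not the publication's filtering method; the function name, gains, and threshold are assumptions.

```python
import math

def enhance(frames, template, boost=2.0, threshold=0.5):
    """Boost frames that resemble the collected sound sample; attenuate the rest."""
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v] if n else list(v)

    t = normalize(template)
    out = []
    for frame in frames:
        # Normalized correlation between the frame and the sample template.
        corr = abs(sum(a * b for a, b in zip(normalize(frame), t)))
        gain = boost if corr >= threshold else 1.0 / boost
        out.append([x * gain for x in frame])
    return out
```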
  • certain implementations described herein are compatible with an implantable or non-implantable stimulation system or device (e.g., an implantable or non-implantable sensory prosthesis device or system; an implantable or non-implantable auditory prosthesis device or system; a hearing device for hearing-impaired recipients; a hearing device for non-hearing-impaired recipients).
  • Certain implementations can be used as hearing devices that are worn on the recipient’s head, in the ear (ITE), behind the ear (BTE), or off the ear (OTE).
  • hearing devices can include, but are not limited to: sound processing units for cochlear implant systems, middle ear actuator implant systems, or bone-anchored hearing aids; hearing aids; consumer wireless earbuds.
  • an implantable auditory prosthesis device (e.g., implantable transducer assembly) can be configured to deliver stimulation signals (e.g., electrical and vibrational) indicative of sounds (e.g., evoking a hearing percept), examples of which include but are not limited to: electro-acoustic electrical/acoustic systems, cochlear implant devices, implantable hearing aid devices, middle ear implant devices, bone conduction devices (e.g., active bone conduction devices; passive bone conduction devices; percutaneous bone conduction devices; transcutaneous bone conduction devices), Direct Acoustic Cochlear Implant (DACI), middle ear transducer (MET), electro-acoustic implant devices, other types of auditory prosthesis devices, and/or combinations or variations thereof, or any other suitable hearing prosthesis system with or without one or more external components.
  • Implementations can include any type of auditory prosthesis that can utilize the teachings detailed herein and/or variations thereof. Certain such implementations can be referred to as “partially implantable,” “semi-implantable,” “mostly implantable,” “fully implantable,” or “totally implantable” auditory prostheses. In some implementations, the teachings detailed herein and/or variations thereof can be utilized in other types of prostheses beyond auditory prostheses.
  • While certain implementations are described herein in the context of auditory prosthesis devices, certain other implementations are compatible with other types of sensory prosthesis systems that are configured to evoke other types of neural or sensory (e.g., sight, tactile, smell, taste) percepts, including but not limited to: vestibular devices (e.g., vestibular implants), visual devices (e.g., bionic eyes), visual prostheses (e.g., retinal implants), somatosensory implants, and chemosensory implants.
  • Certain other implementations are compatible with other types of medical devices that can utilize the teachings detailed herein and/or variations thereof to provide a wide range of therapeutic benefits to recipients, patients, or other users (e.g., neurostimulators; pacemakers; other medical implants comprising an implanted power source).
  • FIG. 1 is a perspective view of an example cochlear implant auditory prosthesis 100 implanted in a recipient in accordance with certain implementations described herein.
  • the example auditory prosthesis 100 is shown in FIG. 1 as comprising an implanted stimulator unit 120 and a microphone assembly 124 that is external to the recipient (e.g., a partially implantable cochlear implant).
  • in certain other implementations, the example auditory prosthesis 100 can be a totally implantable cochlear implant or a mostly implantable cochlear implant.
  • the recipient has an outer ear 101, a middle ear 105, and an inner ear 107.
  • the outer ear 101 comprises an auricle 110 and an ear canal 102.
  • An acoustic pressure or sound wave 103 is collected by the auricle 110 and is channeled into and through the ear canal 102.
  • Disposed across the distal end of the ear canal 102 is a tympanic membrane 104, which vibrates in response to the sound wave 103.
  • This vibration is coupled to oval window or fenestra ovalis 112 through three bones of middle ear 105, collectively referred to as the ossicles 106 and comprising the malleus 108, the incus 109, and the stapes 111.
  • the bones 108, 109, and 111 of the middle ear 105 serve to filter and amplify the sound wave 103, causing the oval window 112 to articulate, or vibrate in response to vibration of the tympanic membrane 104.
  • This vibration sets up waves of fluid motion of the perilymph within cochlea 140.
  • Such fluid motion activates tiny hair cells (not shown) inside the cochlea 140. Activation of the hair cells causes appropriate nerve impulses to be generated and transferred through the spiral ganglion cells (not shown) and auditory nerve 114 to the brain (also not shown) where they are perceived as sound.
  • the example auditory prosthesis 100 comprises one or more components which are temporarily or permanently implanted in the recipient.
  • the example auditory prosthesis 100 is shown in FIG. 1 with an external component 142 which is directly or indirectly attached to the recipient’s body, and an internal component 144 which is temporarily or permanently implanted in the recipient (e.g., positioned in a recess of the temporal bone adjacent auricle 110 of the recipient).
  • the external component 142 typically comprises one or more sound input elements (e.g., an external microphone 124) for detecting sound, a sound processing unit 126 (e.g., disposed in a Behind-The-Ear unit), a power source (not shown), and an external transmitter unit 128.
  • the external transmitter unit 128 comprises an external coil 130 (e.g., a wire antenna coil comprising multiple turns of electrically insulated single-strand or multi-strand platinum or gold wire) and, preferably, a magnet (not shown) secured directly or indirectly to the external coil 130.
  • the external coil 130 of the external transmitter unit 128 is part of an inductive radio frequency (RF) communication link with the internal component 144.
  • the sound processing unit 126 processes the output of the microphone 124 that is positioned externally to the recipient’s body, in the depicted implementation, by the recipient’s auricle 110.
  • the sound processing unit 126 processes the output of the microphone 124 and generates encoded signals, sometimes referred to herein as encoded data signals, which are provided to the external transmitter unit 128 (e.g., via a cable).
  • the sound processing unit 126 can utilize digital processing techniques to provide frequency shaping, amplification, compression, and other signal conditioning, including conditioning based on recipient-specific fitting parameters.
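Of the conditioning steps named above, dynamic-range compression is the most self-contained to illustrate. The following is a minimal sketch of a static per-sample compressor (no attack/release smoothing); the function name, threshold, and ratio are illustrative assumptions, not values from the publication.

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0):
    """Scale down levels above the threshold by the given compression ratio."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag == 0.0:
            out.append(0.0)
            continue
        level_db = 20.0 * math.log10(mag)
        if level_db > threshold_db:
            # Above threshold: only 1/ratio of the excess level passes through.
            level_db = threshold_db + (level_db - threshold_db) / ratio
        out.append(math.copysign(10.0 ** (level_db / 20.0), s))
    return out
```

For example, with the defaults a full-scale sample at 0 dB is reduced to -15 dB, while a sample already below -20 dB passes through unchanged.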
  • the power source of the external component 142 is configured to provide power to the auditory prosthesis 100, where the auditory prosthesis 100 includes a battery or other power storage device (e.g., circuitry located in the internal component 144, or disposed in a separate implanted location) that is recharged by the power provided from the external component 142 (e.g., via a transcutaneous energy transfer link).
  • the transcutaneous energy transfer link is used to transfer power and/or data to the internal component 144 of the auditory prosthesis 100.
  • Various types of energy transfer such as infrared (IR), electromagnetic, capacitive, and inductive transfer, may be used to transfer the power and/or data from the external component 142 to the internal component 144.
  • the internal component 144 comprises an internal receiver unit 132, a stimulator unit 120, and an elongate electrode assembly 118.
  • the internal receiver unit 132 comprises an internal coil 136 (e.g., a wire antenna coil comprising multiple turns of electrically insulated single-strand or multi-strand platinum or gold wire), and preferably, a magnet (also not shown) fixed relative to the internal coil 136.
  • the internal receiver unit 132 and the stimulator unit 120 are hermetically sealed within a biocompatible housing, sometimes collectively referred to as a stimulator/receiver unit.
  • the internal coil 136 receives power and/or data signals from the external coil 130 via a transcutaneous energy transfer link (e.g., an inductive RF link).
  • the stimulator unit 120 generates electrical stimulation signals based on the data signals, and the stimulation signals are delivered to the recipient via the elongate electrode assembly 118.
  • the elongate electrode assembly 118 has a proximal end connected to the stimulator unit 120, and a distal end implanted in the cochlea 140.
  • the electrode assembly 118 extends from the stimulator unit 120 to the cochlea 140 through the mastoid bone 119.
  • the electrode assembly 118 may be implanted at least in the basal region 116, and sometimes further.
  • the electrode assembly 118 may extend towards the apical end of the cochlea 140, referred to as the cochlea apex 134.
  • the electrode assembly 118 may be inserted into the cochlea 140 via a cochleostomy 122.
  • a cochleostomy may be formed through the round window 121, the oval window 112, the promontory 123, or through an apical turn 147 of the cochlea 140.
  • the elongate electrode assembly 118 comprises a longitudinally aligned and distally extending array 146 of electrodes or contacts 148, sometimes referred to as electrode or contact array 146 herein, disposed along a length thereof.
  • while the electrode or contact array 146 can be disposed on the electrode assembly 118, in most practical applications, the electrode array 146 is integrated into the electrode assembly 118 (e.g., the electrode array 146 is disposed in the electrode assembly 118).
  • the stimulator unit 120 generates stimulation signals which are applied by the electrodes 148 to the cochlea 140, thereby stimulating the auditory nerve 114.
  • while FIG. 1 schematically illustrates an auditory prosthesis 100 utilizing an external component 142 comprising an external microphone 124, an external sound processing unit 126, and an external power source, in certain other implementations, one or more of the microphone 124, sound processing unit 126, and power source are implantable on or within the recipient (e.g., within the internal component 144).
  • the auditory prosthesis 100 can have each of the microphone 124, sound processing unit 126, and power source implantable on or within the recipient (e.g., encapsulated within a biocompatible assembly located subcutaneously), and can be referred to as a totally implantable cochlear implant (“TICI”).
  • the auditory prosthesis 100 can have most components of the cochlear implant (e.g., excluding the microphone, which can be an in-the-ear-canal microphone) implantable on or within the recipient, and can be referred to as a mostly implantable cochlear implant (“MICI”).
  • FIG. 2 schematically illustrates a perspective view of an example fully implantable auditory prosthesis 200 (e.g., fully implantable middle ear implant or totally implantable acoustic system), implanted in a recipient, utilizing an acoustic actuator in accordance with certain implementations described herein.
  • the example auditory prosthesis 200 of FIG. 2 comprises a biocompatible implantable assembly 202 (e.g., comprising an implantable capsule) located subcutaneously (e.g., beneath the recipient’s skin and on a recipient’s skull).
  • the implantable assembly 202 includes a signal receiver 204 (e.g., comprising a coil element) and an acoustic transducer (e.g., a microphone assembly 206 comprising a diaphragm and an electret or piezoelectric transducer) that is positioned to receive acoustic signals through the recipient’s overlying tissue.
  • the implantable assembly 202 may further be utilized to house a number of components of the fully implantable auditory prosthesis 200.
  • the implantable assembly 202 can include a power storage device (e.g., battery or other power storage circuitry) and a signal processor (e.g., a sound processing unit).
  • Various additional processing logic and/or circuitry components can also be included in the implantable assembly 202 as a matter of design choice.
  • the signal processor of the implantable assembly 202 is in operative communication (e.g., electrically interconnected via a wire 208) with an actuator 210 (e.g., comprising a transducer configured to generate mechanical vibrations in response to electrical signals from the signal processor).
  • the example auditory prosthesis 100, 200 shown in FIGs. 1 and 2 can comprise an implantable microphone assembly, such as the microphone assembly 206 shown in FIG. 2.
  • the signal processor of the implantable assembly 202 can be in operative communication (e.g., electrically interconnected via a wire) with the microphone assembly 206 and the stimulator unit 120 of the main implantable component.
  • the actuator 210 of the example auditory prosthesis 200 shown in FIG. 2 is supportably connected to a positioning system 212, which in turn, is connected to a bone anchor 214 mounted within the recipient's mastoid process (e.g., via a hole drilled through the skull).
  • the actuator 210 includes a connection apparatus 216 for connecting the actuator 210 to the ossicles 106 of the recipient. In a connected state, the connection apparatus 216 provides a communication path for acoustic stimulation of the ossicles 106 (e.g., through transmission of vibrations from the actuator 210 to the incus 109).
  • a signal processor within the implantable assembly 202 processes the signals to provide a processed audio drive signal via wire 208 to the actuator 210.
  • the signal processor may utilize digital processing techniques to provide frequency shaping, amplification, compression, and other signal conditioning, including conditioning based on recipient-specific fitting parameters.
  • the audio drive signal causes the actuator 210 to transmit vibrations at acoustic frequencies to the connection apparatus 216 to affect the desired sound sensation via mechanical stimulation of the incus 109 of the recipient.
  • the subcutaneously implantable microphone assembly 202 is configured to respond to auditory signals (e.g., sound; pressure variations in an audible frequency range) by generating output signals (e.g., electrical signals; optical signals; electromagnetic signals) indicative of the auditory signals received by the microphone assembly 202, and these output signals are used by the auditory prosthesis 100, 200 to generate stimulation signals which are provided to the recipient’s auditory system.
  • the diaphragm of an implantable microphone assembly 202 can be configured to provide higher sensitivity than external non-implantable microphone assemblies.
  • the diaphragm of an implantable microphone assembly 202 can be configured to be more robust and/or larger than diaphragms for external non-implantable microphone assemblies.
  • FIG. 3 schematically illustrates a portion of an example transcutaneous bone conduction auditory prosthesis 300 implanted in a recipient in accordance with certain implementations described herein.
  • the example transcutaneous bone conduction auditory prosthesis 300 comprises an external component 304 and an implantable component 306.
  • the auditory prosthesis 300 is an active transcutaneous bone conduction auditory prosthesis in that the vibrating actuator 308 is located in the implantable component 306.
  • a vibratory element in the form of a vibrating actuator 308 is located in a housing 310 of the implantable component 306.
  • the vibrating actuator 308 is a device that converts electrical signals into vibration.
  • the vibrating actuator 308 can be in direct contact with the outer surface of the recipient’s bone 196 (e.g., the vibrating actuator 308 is in substantial contact with the recipient’s bone 196 such that vibration forces from the vibrating actuator 308 are communicated from the vibrating actuator 308 to the recipient’s bone 196).
  • there can be one or more thin non-bone tissue layers (e.g., a silicone layer 324) between the vibrating actuator 308 and the recipient’s bone 196 (e.g., bone tissue; skull bone) while still permitting sufficient support so as to allow efficient communication of the vibration forces generated by the vibrating actuator 308 to the recipient’s bone 196.
  • the external component 304 includes a sound input element 326 that converts sound into electrical signals.
  • the auditory prosthesis 300 provides these electrical signals to the vibrating actuator 308, or to a sound processor (not shown) that processes the electrical signals, and then provides those processed signals to the implantable component 306 through the tissue of the recipient (e.g., skin 190, fat 192, muscle 194) via a magnetic inductance link.
  • a communication coil 332 of the external component 304 can transmit these signals to an implanted communication coil 334 located in a housing 336 of the implantable component 306.
  • Components (not shown) in the housing 336 such as, for example, a signal generator or an implanted sound processor, then generate electrical signals to be delivered to the vibrating actuator 308 via electrical lead assembly 338.
  • the vibrating actuator 308 converts the electrical signals into vibrations.
  • the vibrating actuator 308 can be positioned with such proximity to the housing 336 that the electrical leads 338 are not present (e.g., the housing 310 and the housing 336 are the same single housing containing the vibrating actuator 308, the communication coil 334, and other components, such as, for example, a signal generator or a sound processor).
  • the vibrating actuator 308 is mechanically coupled to the housing 310.
  • the housing 310 and the vibrating actuator 308 collectively form a vibrating element.
  • the housing 310 can be substantially rigidly attached to a bone fixture 318.
  • the housing 310 can include a through hole 320 that is contoured to the outer contours of the bone fixture 318.
  • the screw 322 can be used to secure the housing 310 to the bone fixture 318.
  • the head of the screw 322 is larger than the through hole 320 of the housing 310, and thus the screw 322 positively retains the housing 310 to the bone fixture 318.
  • a portion of the screw 322 interfaces with the bone fixture 318, thus permitting the screw 322 to readily fit into an existing bone fixture 318 used in a percutaneous bone conduction device (or an existing passive bone conduction device).
  • the screw 322 is configured so that the same tools and procedures that are used to install and/or remove an abutment screw from the bone fixture 318 can be used to install and/or remove the screw 322 from the bone fixture 318.
  • the bone fixture 318 can be made of any material that has a known ability to integrate into surrounding bone tissue (e.g., comprising a material that exhibits acceptable osseointegration characteristics).
  • the bone fixture 318 is formed from a single piece of material (e.g., titanium) and comprises outer screw threads forming a male screw which is configured to be installed into the skull bone 196 and a flange configured to function as a stop when the fixture 318 is implanted into the skull bone 196.
  • the screw threads can have a maximum diameter of about 3.5 mm to about 5.0 mm, and the flange can have a diameter which exceeds the maximum diameter of the screw threads (e.g., by approximately 10%-20%).
  • the flange can have a planar bottom surface for resting against the outer bone surface, when the fixture 318 has been screwed down into the skull bone 196.
  • the flange prevents the fixture 318 (e.g., the screw threads) from completely penetrating through the bone 196.
  • the body of the fixture 318 can have a length sufficient to securely anchor the fixture 318 to the skull bone 196 without penetrating entirely through the skull bone 196.
  • the length of the body can therefore depend on the thickness of the skull bone 196 at the implantation site.
  • the fixture 318 can have a length, measured from the planar bottom surface of the flange to the end of the distal region (e.g., the portion farthest from the flange), that is no greater than 5 mm or between about 3.0 mm to about 5.0 mm, which limits and/or prevents the possibility that the fixture 318 might go completely through the skull bone 196.
  • the interior of the fixture 318 can further include an inner lower bore having female screw threads configured to mate with male screw threads of the screw 322, thereby securing the screw 322 to the fixture 318.
  • the fixture 318 can further include an inner upper bore that receives a bottom portion of the abutment 312.
  • the example auditory prosthesis 100 shown in FIG. 1 utilizes an external microphone 124
  • the auditory prosthesis 200 shown in FIG. 2 utilizes an implantable microphone assembly 206 comprising a subcutaneously implantable acoustic transducer
  • the example transcutaneous bone conduction auditory prosthesis 300 of FIG. 3 comprises an external sound input element 326 (e.g., external microphone).
  • in certain implementations, a subcutaneously implantable sound input assembly (e.g., implanted microphone) can be used
  • one or more external microphone assemblies are used with the auditory prostheses 100, 200, 300.
  • an external microphone assembly can be used to supplement an implantable microphone assembly of the auditory prosthesis 100, 200, 300.
  • teachings detailed herein and/or variations thereof can be utilized with any type of external or implantable microphone arrangement, and the acoustic prostheses 100, 200, 300 shown in FIGs. 1, 2, and 3 are merely illustrative.
  • FIGs. 4A-4C schematically illustrate an example apparatus 400 in accordance with certain implementations described herein.
  • the apparatus 400 comprises a housing 410 configured to be worn by a recipient (e.g., on an external surface, such as skin 190, of a portion of the recipient’s tissue 500).
  • the apparatus 400 further comprises at least one microphone 420 on or within the housing 410.
  • the at least one microphone 420 is configured to receive sound and to generate information 422 indicative of the sound.
  • the apparatus 400 further comprises first circuitry 430 on or within the housing 410.
  • the first circuitry 430 is configured to wirelessly transmit the information 422 to a first device 510 implanted on or within the recipient (e.g., beneath a portion of the recipient’s tissue 500) while the housing 410 is worn by the recipient (e.g., while the housing 410 is on the external surface of the recipient’s tissue 500, see FIG. 4B).
  • the apparatus 400 further comprises second circuitry 440 on or within the housing 410.
  • the second circuitry 440 is configured to wirelessly transmit the information 422 to at least the first device 510 while the housing 410 is remote from the recipient (e.g., while the housing 410 is spaced from the external surface of the recipient’s tissue 500, see, FIG. 4C).
  • the apparatus 400 and the first device 510 are components of a stimulation system configured to provide stimulation signals to the recipient.
  • the stimulation system can comprise a sensory stimulation system (e.g., auditory prosthesis system; visual prosthesis system)
  • the stimulation signals can be configured to be received and perceived by the recipient as sensory information.
  • the apparatus 400 can comprise an external microphone assembly 124 or external component 304 configured to wirelessly communicate with a first device 510 comprising an implanted stimulator unit 120 of a cochlear implant auditory prosthesis 100, an actuator 210 of a middle ear implant 200, or an implantable component 306 of a transcutaneous bone conduction auditory prosthesis 300.
  • while in certain implementations the apparatus 400 is in wireless communication with an implanted first device 510, in certain other implementations, the apparatus 400 is in wireless communication with a non-implanted first device 510 (e.g., worn externally by the recipient).
  • the apparatus 400 further comprises control circuitry 450 (not shown in FIGs. 4A-4C) in electrical communication with the at least one microphone 420, the first circuitry 430, and the second circuitry 440.
  • the control circuitry 450 can comprise at least one microcontroller configured to receive data signals from the at least one microphone 420 and to generate output data signals and/or control signals to the first circuitry 430 and the second circuitry 440.
  • the at least one microcontroller can comprise at least one application-specific integrated circuit (ASIC) microcontroller, digital signal processing (DSP) microcontroller, generalized integrated circuits programmed by software with computer executable instructions, and/or microcontroller core.
  • in certain implementations, the control circuitry 450, first circuitry 430, and second circuitry 440 comprise different portions of the same circuitry (e.g., each comprising respective portions of a single microcontroller), while in certain other implementations, the control circuitry 450, first circuitry 430, and second circuitry 440 comprise different microcontrollers.
  • the control circuitry 450 comprises and/or is in operative communication with storage circuitry configured to store information (e.g., data; commands) accessed by the control circuitry 450 during operation (e.g., while providing the functionality of certain implementations described herein).
  • the storage circuitry can comprise at least one tangible (e.g., non-transitory) computer readable storage medium, examples of which include but are not limited to: read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory.
  • the storage circuitry can be encoded with software (e.g., a computer program downloaded as an application) comprising computer executable instructions for instructing the control circuitry 450 (e.g., executable data access logic, evaluation logic, and/or information outputting logic).
  • the control circuitry 450 executes the instructions of the software to provide functionality as described herein.
  • the control circuitry 450 of certain implementations further comprises other digital circuitry (e.g., registers; filters; output controllers; memory controllers).
  • the apparatus 400 further comprises at least one input interface in operative communication with the control circuitry 450 and/or at least one output interface in operative communication with the control circuitry 450.
  • the at least one input interface can be configured to receive input signals (e.g., from the recipient) indicative of user input (e.g., commands; operational parameters such as thresholds). Examples of the at least one input interface include but are not limited to: rotatable knobs (e.g., connected to potentiometers); buttons; switches; touchscreen; microphone and voice-responsive circuitry.
  • the at least one output interface can be configured to provide output signals (e.g., to the recipient) indicative of the operational state or status of the apparatus 400.
  • Examples of the at least one output interface include but are not limited to: an LED or LCD display configured to generate visual signals (e.g., colored lights, images, or alphanumeric characters); a portion of the control circuitry 450 configured to generate and transmit output signals indicative of an informative tone or sound to be presented to the recipient via the first device 510; a haptic motor configured to generate vibrations or other tactile signals.
  • the apparatus 400 comprises an antenna configured to be used as the at least one input interface to receive wireless input signals (e.g., Bluetooth signals; WiFi signals) from an external device separate from the apparatus 400 and the implanted first device 510 (e.g., smart phone, smart tablet, smart watch; other computing device) and/or to be used as the at least one output interface to transmit wireless output signals to the external device separate from the apparatus 400 and the first device 510 to display the output signals.
  • the housing 410 of the apparatus 400 is configured to be positioned on and/or over an outer surface of the skin and to hermetically seal the first and second circuitry 430, 440 from an environment surrounding the housing 410.
  • the housing 410 can comprise at least one biocompatible (e.g., skin-friendly) material, examples of which include but are not limited to: metals; plastics; polymers; rubber; silicone; ceramics.
  • the housing 410 can have a width (e.g., along a lateral direction substantially parallel to the recipient’s skin) less than or equal to 40 millimeters (e.g., in a range of 15 millimeters to 35 millimeters; in a range of 25 millimeters to 35 millimeters; in a range of less than 30 millimeters; in a range of 15 millimeters to 30 millimeters).
  • the housing 410 can have a thickness (e.g., in a direction substantially perpendicular to the recipient’s skin) less than or equal to 10 millimeters (e.g., in a range of less than or equal to 7 millimeters, in a range of less than or equal to 6 millimeters; in a range of less than or equal to 5 millimeters).
  • the at least one microphone 420 comprises a diaphragm and an electret or piezoelectric transducer and is configured to be positioned to receive acoustic signals from an environment surrounding the at least one microphone 420.
  • the at least one microphone 420 can be integrated with the housing 410 or can be a separate component from the housing 410.
  • Other types of microphones 420 (e.g., magnetic; dynamic; optical; electromechanical) are also compatible with certain implementations described herein.
  • the first circuitry 430 comprises at least one first communication coil 432 configured to be in wireless communication with at least one implanted communication coil 512 of the first device 510 (e.g., via a wireless transcutaneous magnetic induction communication link while the housing 410 is worn by the recipient).
  • the first circuitry 430 can further comprise wireless communications interface circuitry configured to drive the at least one first communication coil 432 in response to control signals from control circuitry of the apparatus 400 over the transcutaneous magnetic induction communication link between the apparatus 400 and the first device 510.
  • the at least one first communication coil 432 comprises multiple turns of electrically insulated single-strand or multi-strand metal wire (e.g., a planar electrically conductive wire with multiple windings having a substantially circular, rectangular, spiral, or oval shape or other shape) or metal traces on epoxy of a printed circuit board.
  • the first circuitry 430 can comprise at least one magnetic induction (MI) coil 432 in operative communication with at least one MI coil 512 of the first device 510 to form a transcutaneous wireless communication link configured to transfer power and/or data signals between the apparatus 400 and the first device 510.
  • the second circuitry 440 comprises at least one antenna 442 configured to be in wireless communication with at least one implanted antenna 514 of the first device 510 (e.g., via at least one wireless broadcast channel while the housing 410 is remote from the recipient).
  • the second circuitry 440 can further comprise wireless communications interface circuitry configured to drive the at least one antenna 442 in response to control signals from control circuitry of the apparatus 400.
  • the second circuitry 440 can comprise at least one radio-frequency (RF) antenna in operative communication with at least one RF antenna of the first device 510 to form a transcutaneous wireless communication link (e.g., having multiple frequency channels) configured to transfer data signals from the apparatus 400 to the first device 510.
  • the signals transmitted via the at least one antenna 442 can have one or more carrier frequencies in a range of 2 MHz to 6 GHz (e.g., in a range of 2 MHz to 10 MHz; in a range of 10 MHz to 30 MHz; in a range of 30 MHz to 1 GHz; in a range of 1 GHz to 6 GHz; about 5 MHz; about 22.7 MHz; about 2.4 GHz).
  • Examples of wireless communication protocols for the transmission by the second circuitry 440 include, but are not limited to: Auracast™ broadcast audio; Bluetooth® 5.2 LE Audio; FM radio transmission; Roger wireless transmission.
  • the first device 510 comprises a biocompatible housing 516 configured to be positioned beneath the skin, fat, and/or muscular layers and above a bone (e.g., skull) in a portion of the recipient’s body (e.g., the head).
  • the housing 516 of certain implementations comprises at least one material (e.g., polymer; silicone) that is substantially transparent to the electromagnetic signals generated by the apparatus 400 (e.g., by the first circuitry 430 and the second circuitry 440) such that the housing 516 does not substantially interfere with the transmission of the electromagnetic signals between the apparatus 400 and the first device 510.
  • the first device 510 can comprise a power source (e.g., battery; capacitor; not shown) configured to store power received via the at least one communication coil 512 from an external power source (e.g., the apparatus 400) and to provide at least some of the power to other components of the first device 510.
  • the first device 510 can be configured to operate both with and without the apparatus 400.
  • the housing 516 is configured to hermetically seal circuitry of the first device 510 (e.g., the at least one communication coil 512, the at least one antenna 514, control circuitry, stimulation circuitry, power source, or other circuitry) from an environment surrounding the housing 516.
  • the first device 510 comprises at least one implanted antenna 514 configured to be in wireless communication with the second circuitry 440 (e.g., comprising at least one antenna 442) of the apparatus 400 (e.g., via at least one wireless broadcast channel while the housing 410 is remote from the recipient).
  • the first device 510 can further comprise wireless communications interface circuitry configured to receive signals from the at least one implanted antenna 514 and to provide the signals to the stimulation circuitry.
  • the at least one implanted antenna 514 can comprise at least one radio-frequency (RF) antenna in operative communication with at least one RF antenna of the apparatus 400 to form a transcutaneous wireless communication link (e.g., having multiple frequency channels) configured to transfer data signals from the apparatus 400 to the first device 510.
  • the control circuitry 450 of the apparatus 400 has at least two operational states.
  • FIG. 5A schematically illustrates an example state diagram 600 of the control circuitry 450 in accordance with certain implementations described herein.
  • in a first operational state 610 (e.g., a proximal state), the apparatus 400 is worn on the recipient’s body.
  • in a second operational state 620 (e.g., a remote state), the apparatus 400 is remote from (e.g., spaced from) the recipient’s body.
  • the control circuitry 450 can automatically switch from the first operational state 610 to the second operational state 620 in response to the apparatus 400 being removed from the recipient’s body and can automatically switch from the second operational state 620 to the first operational state 610 in response to the apparatus 400 being placed on the recipient’s body. While FIG. 5A shows two operational states 610, 620, certain other implementations include additional operational states (e.g., off state in which the apparatus 400 is powered off; calibration state in which at least some components of the apparatus 400 undergo a calibration or conditioning process).
  • the control circuitry 450 is configured to receive user input signals (e.g., via the at least one input interface) placing the control circuitry 450 into either the first operational state 610 or the second operational state 620.
  • the apparatus 400 further comprises at least one sensor (e.g., in operable communication with the control circuitry 450) configured to automatically detect whether the housing 410 is worn by the recipient or is remote from the recipient.
  • the at least one sensor can comprise an accelerometer configured to detect movement of the apparatus 400 indicative of being removed from and/or placed on the recipient’s body.
  • the at least one sensor can comprise a portion of the control circuitry 450 configured to detect a loss (e.g., degradation) and/or re-establishment (e.g., restoration) of the wireless communication link between the first circuitry 430 and the first device 510 (e.g., a coil-off event and/or a coil-on event).
  • the apparatus 400 can further comprise an external magnetic element (e.g., ferromagnetic material; permanent magnet) and the first device 510 can comprise an internal magnetic element (e.g., ferromagnetic material; permanent magnet), and the external and internal magnetic elements can be configured to establish a magnetic attraction between them sufficient to hold the apparatus 400 against the outer surface (e.g., skin) of the recipient’s tissue above the first device 510.
  • the at least one sensor can comprise a magnetic sensor configured to detect whether the external magnetic element of the apparatus 400 is experiencing an attractive magnetic force due to proximity to the internal magnetic element of the first device 510 and/or is not experiencing such an attractive magnetic force.
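The sensor-driven switching between the proximal and remote operational states described above can be sketched as follows. This is an illustrative sketch only; the class names, state names, and the specific decision rule (treating loss of both the coil link and the magnetic retention force as removal from the body) are assumptions for illustration, not part of the disclosure:

```python
from enum import Enum, auto

class OperationalState(Enum):
    PROXIMAL = auto()  # first operational state 610: apparatus worn on the recipient
    REMOTE = auto()    # second operational state 620: apparatus spaced from the recipient

class StateController:
    """Illustrative controller that switches state on coil-off/coil-on and
    magnet-detach/magnet-attach events reported by the at least one sensor."""

    def __init__(self):
        self.state = OperationalState.PROXIMAL

    def update(self, coil_link_present: bool, magnet_attached: bool) -> OperationalState:
        # If either the magnetic-induction link or the magnetic retention
        # force is detected, treat the apparatus as worn on the body.
        if coil_link_present or magnet_attached:
            self.state = OperationalState.PROXIMAL
        else:
            self.state = OperationalState.REMOTE
        return self.state

ctrl = StateController()
worn = ctrl.update(coil_link_present=True, magnet_attached=True)     # proximal state
removed = ctrl.update(coil_link_present=False, magnet_attached=False)  # remote state
```

A real implementation would likely debounce these events (e.g., require the coil link to be absent for some interval before switching) to avoid spurious transitions.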
  • FIGs. 5B and 5C schematically illustrate an example operation of the control circuitry 450 in the first and second operational states 610, 620, respectively, in accordance with certain implementations described herein.
  • the control circuitry 450 of FIGs. 5B and 5C comprises microphone processing circuitry 452, signal processing circuitry 454, and a switch 456 configured to selectively provide output signals from the signal processing circuitry 454 to either the first circuitry 430 or the second circuitry 440. While FIGs. 5B and 5C show the control circuitry 450 using the same processing circuitry in both the first and second operational states 610, 620, certain other implementations use different processing circuitry in the first operational state 610 as compared to the processing circuitry used in the second operational state 620.
  • the at least one microphone 420 generates the information 422 (e.g., data signals) indicative of the sounds received by the at least one microphone 420.
  • the microphone processing circuitry 452 of the control circuitry 450 (e.g., comprising an analog-to-digital converter (ADC), a calibration filter, and automatic gain control (AGC) for each microphone of the at least one microphone 420) receives and processes the information 422
  • the microphone processing circuitry 452 can receive and process the information 422a,b using beamforming techniques to generate audio signals 453 which include directional information regarding the received sounds.
  • the at least one microphone 420 can comprise at least one omnidirectional microphone configured to capture sounds substantially equally from all directions.
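The beamforming mentioned above can be illustrated with a minimal delay-and-sum sketch, in which signals arriving from the steered direction add coherently while sounds from other directions are attenuated. The function name, the use of integer sample delays, and the circular-shift alignment are simplifying assumptions for illustration:

```python
import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    """Illustrative delay-and-sum beamformer.

    mic_signals: 2-D array with one row per microphone.
    delays_samples: integer steering delay per microphone, in samples.
    Each channel is advanced by its steering delay and the channels are
    averaged, reinforcing sound from the steered direction.
    """
    n_mics, n_samples = mic_signals.shape
    out = np.zeros(n_samples)
    for sig, d in zip(mic_signals, delays_samples):
        out += np.roll(sig, -d)  # advance each channel by its steering delay
    return out / n_mics

# Two microphones receiving the same tone, the second delayed by 3 samples
# (e.g., due to the extra travel time from the target direction):
fs = 16000
t = np.arange(256) / fs
tone = np.sin(2 * np.pi * 500 * t)
mics = np.stack([tone, np.roll(tone, 3)])
steered = delay_and_sum(mics, [0, 3])  # realigns both channels to the tone
```

Practical beamformers operate on fractional delays and per-band phase shifts rather than whole-sample circular shifts, but the coherent-summation principle is the same.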
  • the signal processing circuitry 454 of the control circuitry 450 can be configured to receive and process the audio signals 453 to generate output signals 458.
  • the signal processing circuitry 454 can perform frequency-dependent gain and compression to provide output signals 458 that compensate for particular aspects of the recipient’s hearing loss or that conform to the recipient’s preferences.
  • the signal processing circuitry 454 can perform sound coding processing to provide output signals 458 that convey appropriate stimulation commands to the stimulation assembly (e.g., stimulation unit 120) of the first device 510.
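The frequency-dependent gain and compression described above can be sketched as a simple per-band input/output rule: linear gain below a compression threshold, reduced gain above it. The function name, parameter values, and band labels are illustrative assumptions, not fitting parameters from the disclosure:

```python
def compressed_output_db(level_db, threshold_db=50.0, ratio=2.0, gain_db=20.0):
    """Illustrative one-band gain/compression rule.

    Below the threshold the band receives linear gain; above it, each
    `ratio` dB of additional input yields only 1 dB of additional output.
    """
    if level_db <= threshold_db:
        return level_db + gain_db
    return threshold_db + gain_db + (level_db - threshold_db) / ratio

# Different bands can use different parameters to compensate for the
# recipient's hearing loss, e.g. more gain in high-frequency bands:
bands = {"500 Hz": dict(gain_db=10.0), "4 kHz": dict(gain_db=30.0)}
fitted = {name: compressed_output_db(60.0, **params) for name, params in bands.items()}
```

In a multi-band processor this rule would be applied to the envelope of each analysis band before resynthesis or sound coding.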
  • the at least one microphone 420 receives sounds in a region proximal to the recipient (e.g., near the recipient) and the switch 456 transmits the resulting output signals 458 to the first circuitry 430 to be transmitted to the implanted first device 510.
  • with the apparatus 400 in the second (e.g., remote) operational state 620, by virtue of the apparatus 400 being remote from (e.g., spaced from) the recipient’s body, the at least one microphone 420 receives sounds in a region remote from the recipient (e.g., spaced from the recipient) and the switch 456 transmits the resulting output signals 458 to the second circuitry 440 to be transmitted to at least the implanted first device 510.
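The routing performed by the switch 456 in the two operational states can be sketched as follows. The stub classes and the string-valued state are illustrative stand-ins for the first circuitry 430 (coil link) and second circuitry 440 (broadcast antenna), not the actual interfaces:

```python
class CoilLink:
    """Stand-in for the first circuitry 430 (magnetic-induction coil link)."""
    def transmit(self, signals):
        return ("coil", signals)

class BroadcastLink:
    """Stand-in for the second circuitry 440 (RF broadcast antenna)."""
    def broadcast(self, signals):
        return ("broadcast", signals)

def route(state, output_signals, coil=CoilLink(), antenna=BroadcastLink()):
    """Illustrative switch 456: in the proximal state the output signals go
    over the transcutaneous coil link; in the remote state they are
    broadcast wirelessly to the implant (and possibly other devices)."""
    if state == "proximal":
        return coil.transmit(output_signals)
    return antenna.broadcast(output_signals)

proximal_path = route("proximal", b"\x01\x02")   # coil link while worn
remote_path = route("remote", b"\x01\x02")       # broadcast while removed
```

The same processed output signals 458 feed both paths; only the transport changes with the operational state.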
  • the first device 510 is left without an external device with which the first device 510 can communicate (e.g., receive power signals, transmit and/or receive data signals, and/or transmit and/or receive control signals) via the at least one implanted communication coil 512.
  • the apparatus 400 of certain implementations described herein in the second operational state 620 can instead use the second circuitry 440 to wirelessly, remotely, and directly communicate with the implanted first device 510 via the at least one implanted antenna 514.
  • the microphone processing circuitry 452 in the second operational state 620 utilizes the information 422 from multiple microphones 420 of the apparatus 400 to increase the signal-to-noise ratio (SNR).
  • the microphone processing circuitry 452 can use beamforming to focus on target source sounds coming from a predetermined direction (e.g., target speaker’s voice when the apparatus 400 is being worn by the target speaker in a predetermined position, such as clipped to a lapel or hanging from the neck of the target speaker).
  • the microphone processing circuitry 452 can form an omnidirectional pattern to pick up target sounds from all directions (e.g., the apparatus 400 placed on a tabletop around which there are multiple target speakers).
  • FIG. 6 schematically illustrates an example first device 510 in accordance with certain implementations described herein.
  • the first device 510 of FIG. 6 further comprises wireless stream processing circuitry 520, signal processing circuitry 530, and a stimulation assembly 540 (e.g., stimulation unit 120).
  • the output signals 458 are received either by the at least one implanted communication coil 512 (from the first circuitry 430) or by the at least one implanted antenna 514 (from the second circuitry 440), and are provided to the wireless stream processing circuitry 520.
  • the wireless stream processing circuitry 520 can be configured to perform one or more processing operations (e.g., decompressing and/or decoding the output signals 458) and to generate processed audio signals 522.
  • the signal processing circuitry 530 can be configured to receive the processed audio signals 522 and to apply sound coding processing to provide stimulation signals 532 to the stimulation assembly 540, which provides the stimulation signals 532 as electrical and/or vibrational stimulus to the recipient’s body to evoke a hearing percept.
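The implant-side receive chain of the first device 510 (wireless stream processing 520 → signal processing 530 → stimulation assembly 540) can be sketched as a simple pipeline. The function name and the toy lambda stages are illustrative placeholders for the real decompression, sound-coding, and stimulation steps:

```python
def implant_receive(packet, decode, sound_code, stimulate):
    """Illustrative first-device 510 receive chain: the received output
    signals 458 are decoded/decompressed (wireless stream processing 520)
    into processed audio signals 522, sound-coded (signal processing 530)
    into stimulation signals 532, and delivered by the stimulation
    assembly 540 to evoke a hearing percept."""
    audio = decode(packet)      # processed audio signals 522
    stim = sound_code(audio)    # stimulation signals 532
    return stimulate(stim)

# Toy stages standing in for the real decompression and sound-coding steps:
result = implant_receive(
    packet=[3, 1, 4],
    decode=lambda p: [x / 10 for x in p],                # "decompress"
    sound_code=lambda a: [round(x * 2, 1) for x in a],   # "sound coding"
    stimulate=lambda s: ("stimulated", s),
)
```

The key point mirrored here is that the same chain serves both transports: packets arriving via the implanted coil 512 and via the implanted antenna 514 converge at the wireless stream processing stage.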
  • the apparatus 400 in the second operational state 620 is configured to be used as a wireless accessory (e.g., remote microphone; mini microphone; microphone comprising an FM transmitter) configured to receive sounds from a region spaced from the recipient and to transmit output signals 458 to the implanted first device 510.
  • This functionality is provided by the apparatus 400 of certain implementations without using additional devices beyond the external component and the implanted component of the auditory prosthesis system (e.g., providing improved convenience and less complexity over systems which utilize a separate microphone-based transmission device for such functionality; reducing burdens on clinicians by allowing clinicians to support simpler auditory prosthesis systems with fewer devices; reducing costs to users, insurance companies, and government support programs by reducing the number of devices to be used).
  • the ability of the recipient to understand speech from a target sound source can depend on various factors, including but not limited to the relative amplitude of the speech and the other sounds from other sound sources, which can be quantified as a signal-to-noise ratio (SNR) in which the speech from the target sound source corresponds to the signal and the other sounds (e.g., from the other sound sources) correspond to the noise.
  • the apparatus 400 in the second operational state 620 can be placed in proximity to the target sound source (e.g., closer to the target sound source than is the first device 510 and/or recipient) thereby increasing the SNR for the sound from the target sound source.
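The SNR benefit of placing the apparatus 400 near the target sound source can be quantified with a short calculation. The free-field inverse-distance assumption and the example amplitudes are illustrative, not measured values from the disclosure:

```python
import math

def snr_db(signal_rms: float, noise_rms: float) -> float:
    """Signal-to-noise ratio in decibels for RMS amplitudes."""
    return 20.0 * math.log10(signal_rms / noise_rms)

# Under a free-field inverse-distance assumption, halving the distance to
# the talker doubles the received signal amplitude; with unchanged diffuse
# noise this adds about 6 dB of SNR:
worn = snr_db(0.1, 0.05)    # apparatus worn by the recipient, far from talker
remote = snr_db(0.2, 0.05)  # apparatus placed near the target talker
improvement = remote - worn  # ~6 dB
```

Real rooms add reverberation and near-field effects, so measured gains differ, but moving the microphone closer to the talker remains the dominant lever on SNR.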
  • the apparatus 400 can be worn by the target sound source (e.g., a teacher in a classroom wearing the apparatus 400 on a lanyard around their neck or attached to their clothing), with the at least one microphone 420 detecting the sounds from the target sound source and transmitting corresponding output signals 458 to the implanted first device 510 (e.g., an implanted auditory prosthesis of a hearing-impaired student).
  • certain implementations described herein wirelessly, remotely, and directly communicate with the implanted first device 510.
  • certain implementations described herein provide a remote microphone capability by streaming audio information to the implanted first device 510 and utilizing the first device 510 to provide a hearing percept to the recipient, rather than leaving the first device 510 to remain idle while the apparatus 400 is in the second operational state 620 (e.g., which could otherwise result in a degradation in hearing performance from the temporary loss of use of the first device 510).
  • the apparatus 400 and first device 510 can comprise an external sound processor and an implanted stimulation assembly, respectively, of a unilateral auditory prosthesis system or of a bilateral auditory prosthesis system.
  • the apparatus 400 can be worn on the recipient’s body over the first device 510 to transcutaneously communicate the information 422 to the first device 510 via the first circuitry 430 (e.g., the at least one first communication coil 432) and the at least one implanted communication coil 512.
  • the apparatus 400 can be selectively placed close to the target sound source, to be used as a remote microphone that transcutaneously communicates (e.g., broadcasts; streams) the information 422 to the first device 510 via the second circuitry 440 (e.g., the at least one antenna 442) and the at least one implanted antenna 514.
  • the apparatus 400 and the first device 510 can provide the utility of a remote microphone without a separate microphone-based transmission device (e.g., providing the improved convenience of not having to track such an additional device and the cost savings of not having to purchase such an additional device).
  • the apparatus 400 can transcutaneously broadcast or stream the information 422 to the second device 710.
  • the second device 710 can comprise an implanted component (e.g., an implanted stimulation assembly in wireless communication with another external sound processor) and/or an externally worn component (e.g., an external sound processor worn on the recipient’s body over and in wireless communication with another implanted stimulation assembly; an externally worn auditory prosthesis without an implanted component, such as an externally worn or ITE hearing aid), and the information 422 can be wirelessly broadcast directly to the implanted component and/or the externally worn component.
  • the apparatus 400 is configured to be used as a remote microphone for broadcasting (e.g., streaming) audio information to multiple other devices (e.g., worn by other people).
  • the second circuitry 440 can be further configured to wirelessly transmit the information 422 to at least one second device 710 implanted on or within the recipient, worn by the recipient, implanted on or within another recipient, or worn by another recipient.
  • Certain such implementations can be used under conditions in which there are multiple people that are using devices compatible with the wireless broadcast protocol of the apparatus 400 and who wish to listen to sounds from the same target sound source in proximity to the apparatus 400 (e.g., avoiding having multiple apparatuses 400 from the multiple people being worn by the target speaker).
  • FIG. 7 schematically illustrates an example usage of an apparatus 400 in accordance with certain implementations described herein.
  • the apparatus 400 is configured to be worn by a recipient 720 and, while in the first operational state 610, to be in wireless communication with a first device 510 (e.g., cochlear implant) via the first circuitry 430.
  • the apparatus 400 is further configured to be placed remotely from the recipient’s body and, while in the second operational state 620, to be in wireless communication with the first device 510 via the second circuitry 440.
  • As shown in FIG. 7, the apparatus 400 can be removed from the recipient 720 (e.g., a student in a classroom) and placed in proximity to a target sound source 730 (e.g., a teacher or lecturer in the classroom), where the at least one microphone 420 of the apparatus 400 can receive the target sounds 732 (e.g., the teacher’s voice) with more clarity (e.g., higher magnitude; fewer noise contributions; higher SNR) than while the apparatus 400 is worn by the recipient 720.
  • the apparatus 400 can provide the functionality of sharing the broadcasted signals with multiple other users who may benefit from such access.
  • the apparatus 400 can wirelessly broadcast the signals to an auditory second device 710 of the recipient, the second device 710 (e.g., an implanted component and/or an external component of an auditory prosthesis system; an externally worn or ITE hearing aid, wireless speaker, or consumer wireless earbud) configured to receive the wirelessly broadcast signals.
  • both auditory prosthesis devices of the recipient 720 can contribute to the evoked hearing percept of the recipient 720.
  • the apparatus 400 can wirelessly broadcast the signals including the information 422 to other auditory devices 740 of other people 750 (e.g., other students in the classroom) besides the recipient 720.
  • These other auditory devices 740 can be components of unilateral auditory prosthesis systems, bilateral auditory prosthesis systems, or externally worn speakers or earbuds of the other people 750 and can include an operational state in which the auditory device 740 responds to the information 422 received from the apparatus 400 to cause the respective person 750 to hear the target sounds 732.
  • an auditory device 740 can be configured to detect the presence of an available broadcast from the apparatus 400 and to notify the respective person 750 accordingly (e.g., via an audible alert generated by the auditory device 740).
  • the auditory device 740 can be responsive to at least one attribute or information (e.g., coding; metadata) contained in the signals broadcasted by the apparatus 400 by generating the notification or alert to the person 750.
  • people 750 can be notified of the presence of an available broadcast from the apparatus 400 by an audible or visible notification from a broadcast assistant device dedicated to such notifications (e.g., an AuracastTM broadcast assistant device) or from an application running on a general-purpose device (e.g., smart phone, smart tablet, smart watch; other computing device).
  • a person 750 can select to listen to the broadcast from the apparatus 400 by placing the auditory device 740 in an appropriate operational state to respond to the information 422 received from the apparatus 400 to cause the respective person 750 to hear the target sounds 732.
  • the auditory device 740 can cycle through available broadcasts (e.g., in response to a signal from a user interface of the auditory device 740) and the person 750 can select (e.g., via the user interface) a broadcast to be provided by the auditory device 740.
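The cycle-and-select behavior described above can be sketched as a minimal Python state holder (the class name, method names, and single-button interaction are illustrative assumptions, not part of the disclosed apparatus):

```python
class BroadcastSelector:
    """Minimal sketch of cycling through available broadcasts via a
    user interface, as described for the auditory device 740."""

    def __init__(self, broadcasts):
        self.broadcasts = list(broadcasts)  # advertised broadcast names
        self.index = 0                      # currently selected broadcast

    def cycle(self):
        """Advance to the next available broadcast, wrapping around."""
        if self.broadcasts:
            self.index = (self.index + 1) % len(self.broadcasts)
        return self.current()

    def current(self):
        """Return the broadcast the device would currently render."""
        return self.broadcasts[self.index] if self.broadcasts else None


# Example: a user-interface button press advances the selection.
selector = BroadcastSelector(["lecturer", "museum-guide", "tv-audio"])
chosen = selector.cycle()  # one press selects "museum-guide"
```

In a real device the broadcast list would be populated by scanning for advertised streams; here it is a fixed list purely for illustration.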
  • the information 422 broadcasted by the apparatus 400 is encrypted (e.g., to protect the signals from being accessed by unauthorized users) and the people 750 seeking to listen to the broadcasted signals provide their devices 740 with an appropriate decryption key or password to access the broadcasted signals.
  • FIG. 8 is a flow diagram of an example method 800 in accordance with certain implementations described herein. While the method 800 is described by referring to some of the structures of the example apparatus 400 and first device 510 described herein, other apparatus and systems with other configurations of components can also be used to perform the method 800 in accordance with certain implementations described herein.
  • the method 800 comprises providing a first sound processor (e.g., apparatus 400) configured to receive sound and to generate signals indicative of the sound.
  • the method 800 further comprises, in response to receiving a first control signal (e.g., an indication that the apparatus 400 is being worn on the recipient’s body), placing the first sound processor in a first operational mode (e.g., first operational state 610) in which the first sound processor is configured to transmit the signals to only a first stimulation assembly (e.g., first device 510) implanted on a recipient’s body (e.g., via the first circuitry 430).
  • the first sound processor and the first stimulation assembly provide a hearing percept to the recipient.
  • the method 800 further comprises, in response to receiving a second control signal (e.g., an indication that the apparatus 400 is remote from the recipient’s body), placing the first sound processor in a second operational mode (e.g., second operational state 620) in which the first sound processor is configured to transmit the signals to the first stimulation assembly and to at least one second device (e.g., second device 710; auditory device 740).
  • In the second operational mode, the first sound processor and the first stimulation assembly provide a hearing percept to the recipient, and the first sound processor and the at least one second device provide a hearing percept to at least one person on which the at least one second device is implanted or worn.
  • the at least one second device can comprise at least one of: a second sound processor worn on the recipient’s body; a second stimulation assembly implanted on or within the recipient’s body; a third sound processor worn on another person’s body; a third stimulation assembly implanted on or within another person’s body.
  • the first control signal and/or the second control signal is generated by a user input interface of the first sound processor.
  • the first control signal is generated by at least one sensor of the first sound processor in response at least in part to the at least one sensor detecting that the first sound processor is on the recipient’s body and the second control signal is generated by the at least one sensor in response at least in part to the at least one sensor detecting that the first sound processor is not on the recipient’s body.
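The sensor-driven generation of the first and second control signals can be illustrated with a short Python sketch (the mode names and the sampled sensor interface are assumptions for illustration only, not the disclosed implementation):

```python
from enum import Enum


class Mode(Enum):
    WORN = 1    # first operational mode: transmit to the implanted assembly only
    REMOTE = 2  # second operational mode: transmit to the assembly and second device(s)


def control_signals(on_body_samples):
    """Yield a control signal at each on-body/off-body transition
    reported by a wear sensor (edge detection on the sensor stream)."""
    previous = None
    for on_body in on_body_samples:
        if on_body != previous:
            # A transition onto the body yields the first control signal
            # (worn mode); a transition off the body yields the second.
            yield Mode.WORN if on_body else Mode.REMOTE
        previous = on_body


# Sensor reports: worn, worn, removed, removed, worn again.
signals = list(control_signals([True, True, False, False, True]))
```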
  • FIG. 9A schematically illustrates an example apparatus 900 in accordance with certain implementations described herein and FIG. 9B schematically illustrates an example state diagram 930 of the circuitry 910 in accordance with certain implementations described herein.
  • the apparatus 900 is configured to be used as a remote microphone (e.g., to receive sound and to generate audio signals indicative of the sound) and to collect a target sound sample from the target sound source that is later used in processing the detected sounds within the environment to selectively enhance or accentuate the target sounds (e.g., using the target sound sample to calibrate the output of the auditory prosthesis system).
  • the recipient can remove the apparatus 900 from the recipient’s body and ask the conversation partner to hold the apparatus 900 close to their mouth and to speak into it for a short time (e.g., in a range of 10 seconds to 30 seconds), during which the apparatus 900 captures features of the conversation partner’s voice.
  • the sound received by the apparatus 900 has a higher contribution from the target sound source than other sources (e.g., the SNR of the conversation partner’s voice when the apparatus 900 is close to the conversation partner is higher than when the apparatus 900 is on the recipient’s body).
  • the apparatus 900 can use the captured features to enhance the conversation partner’s voice for the remainder of the conversation.
  • the collected target sound sample from the target sound source can be used by the apparatus 900 to process (e.g., filter) the detected sounds during the conversation to increase the SNR of the conversation partner’s voice (e.g., increase a magnitude of the conversation partner’s voice; decrease a background noise contribution).
  • the increased SNR of the conversation partner’s voice can be obtained throughout the conversation, regardless of whether the apparatus 900 remains in proximity to the conversation partner or whether the apparatus 900 is returned to being worn by the recipient.
  • While the apparatus 900 and the state diagram 930 are described herein with regard to the apparatus 400 and its components and states, other devices, components, and states are also compatible with certain implementations described herein.
  • the apparatus 900 of certain implementations described herein comprises the same components and functionality as does the apparatus 400 as described herein, while in certain other implementations, the apparatus 900 does not have the functionality of transcutaneously and wirelessly communicating with a device implanted within the recipient’s body and/or the functionality of broadcasting the information 422 to multiple devices as described herein.
  • the example apparatus 900 of FIG. 9A comprises at least one microphone 420 configured to receive sound and to generate audio signals (e.g., information 422) indicative of the sound.
  • the apparatus 900 further comprises circuitry 910 (e.g., control circuitry 450) configured to receive the audio signals from the at least one microphone 420 and to generate processed audio signals 912.
  • the apparatus 900 can further comprise an output device 920 configured to receive the processed audio signals 912 and to provide information regarding the processed audio signals 912 to the recipient.
  • the output device 920 can comprise first circuitry 430 configured to be in transcutaneous wireless communication with an implanted first device 510 configured to provide stimulation signals to the recipient.
  • the output device 920 can comprise at least one electroacoustic transducer (e.g., acoustic speaker) in operative communication with the circuitry 910 and configured to respond to the processed audio signals 912 by providing sounds to the recipient (e.g., the apparatus 900 comprising an externally worn or ITE hearing aid or a consumer wireless earbud).
  • the state diagram 930 comprises a default state 932, a first operational state 934, and a second operational state 936.
  • the circuitry 910 of the apparatus 900 can initially be in the default operational state 932.
  • the default operational state 932 can be the first operational state 610 (e.g., proximal state) as described herein with regard to FIG. 5A.
  • the circuitry 910 can switch from the default operational state 932 to the first operational state 934 (e.g., a target capture state). For example, the circuitry 910 can be switched to the first operational state 934 in response to the circuitry 910 detecting that the apparatus 900 has been removed from the recipient’s body (e.g., detected automatically, in response to a signal from at least one sensor indicative of the removal of the apparatus 900) and/or in response to a user input command (e.g., manually). With the apparatus 900 being returned to be worn on the recipient’s body, the circuitry 910 can switch from the first operational state 934 to the second operational state 936 (e.g., target enhancement state).
  • the circuitry 910 can be switched to the second operational state 936 in response to the circuitry 910 detecting that the apparatus 900 has been returned to the recipient’s body (e.g., detected automatically, in response to a signal from at least one sensor indicative of the return of the apparatus 900) and/or in response to a user input command (e.g., manually).
  • the circuitry 910 can return to the default state 932 automatically (e.g., upon detecting that the target sounds have been absent for a predetermined period of time indicative of the conversation being over, such as one minute) or manually (e.g., in response to a user input command or the recipient removing the apparatus 900 from the recipient’s body and immediately replacing the apparatus 900 onto the recipient’s body within a predetermined period of time, such as within two seconds).
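The transitions of the state diagram 930 described above can be sketched as a small state machine (the event names and event-driven API are hypothetical; the disclosed circuitry 910 may realize these transitions differently):

```python
from enum import Enum


class State(Enum):
    DEFAULT = 932  # default (proximal) state
    CAPTURE = 934  # first operational state: target capture
    ENHANCE = 936  # second operational state: target enhancement


class TargetStateMachine:
    """Sketch of the state diagram 930 for the circuitry 910."""

    def __init__(self):
        self.state = State.DEFAULT

    def on_event(self, event):
        """Apply one transition event and return the resulting state."""
        if self.state == State.DEFAULT and event == "removed_from_body":
            self.state = State.CAPTURE
        elif self.state == State.CAPTURE and event == "returned_to_body":
            self.state = State.ENHANCE
        elif self.state == State.ENHANCE and event in (
                "target_silent_timeout", "quick_remove_replace"):
            self.state = State.DEFAULT
        return self.state


sm = TargetStateMachine()
sm.on_event("removed_from_body")              # enter target capture
sm.on_event("returned_to_body")               # enter target enhancement
final = sm.on_event("target_silent_timeout")  # conversation over: back to default
```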
  • FIGs. 10A and 10B schematically illustrate an example operation of the circuitry 910 in the first operational state 934 and the second operational state 936, respectively, in accordance with certain implementations described herein.
  • the circuitry 910 of FIG. 10A comprises microphone processing circuitry 452, target enhancement circuitry 914, and feature storage circuitry 916 (e.g., at least one storage device) in operable communication with the target enhancement circuitry 914.
  • the target enhancement circuitry 914 can comprise the signal processing circuitry 454 (e.g., as described with regard to FIGs. 5B and 5C) or can be separate from but in operative communication with the signal processing circuitry 454.
  • the circuitry 910 is configured to receive (e.g., collect; store) and process a sample portion of the sound received by the at least one microphone 420 from the target sound source (e.g., while the at least one microphone 420 is positioned in proximity to the target sound source and remotely from the recipient).
  • a sample portion 422a of the information 422 from the at least one microphone 420 is received by the microphone processing circuitry 452 of the circuitry 910 which processes the sample portion 422a to generate sample audio signals 453a (e.g., as described herein with regard to FIG. 5B).
  • the target enhancement circuitry 914 analyzes the sample audio signals 453a and extracts a set of features 915 indicative of the target sound source from which the sample portion of the sound was received, and the set of features 915 are stored in the feature storage circuitry 916.
  • Examples of features 915 extracted from the sample audio signals 453a include, but are not limited to: a range of fundamental frequencies (F0); a range of formant frequencies; an estimate of the vocal tract length of the target sound source; a syllable rate; mel-frequency cepstral coefficients (MFCCs); or other characteristics of the sample portion of the sound.
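As one concrete illustration of extracting a fundamental-frequency feature, a simple autocorrelation pitch estimate can be computed from a captured sound sample (a deliberately minimal stand-in; practical systems use more robust pitch trackers alongside additional features such as MFCCs):

```python
import math


def estimate_f0(samples, fs, fmin=80.0, fmax=400.0):
    """Estimate the fundamental frequency (Hz) of a sound sample by
    finding the autocorrelation peak over candidate pitch periods."""
    lo = int(fs / fmax)  # shortest candidate period, in samples
    hi = int(fs / fmin)  # longest candidate period, in samples
    n = len(samples)
    best_lag, best_corr = lo, float("-inf")
    for lag in range(lo, min(hi, n - 1) + 1):
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return fs / best_lag


# A 200 Hz tone sampled at 8 kHz stands in for a captured voice sample.
fs = 8000
sample = [math.sin(2 * math.pi * 200 * t / fs) for t in range(800)]
f0 = estimate_f0(sample, fs)  # close to 200.0 Hz
```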
  • the target enhancement circuitry 914 can utilize machine learning processes (e.g., “i-vector” analysis; “d-vector” analysis) to extract the features 915 that are indicative of the target sound source (see, e.g., N. Dehak et al., “Front-end factor analysis for speaker verification,” IEEE Trans. Audio, Speech, and Lang. Process., Vol. 19, No. 4, pp. 788-798 (2010); L. Wan et al., “Generalized end-to-end loss for speaker verification,” Int’l Conf. Acous., Speech and Signal Process. (ICASSP), IEEE, pp. 4879-4883 (2016)).
  • the circuitry 910 can capture (e.g., extract) and store the features 915 of the target sound from the target sound source for later use in the second operational state 936.
  • the apparatus 900 while in the first operational state 934, the apparatus 900 performs the remote functionality described herein with regard to FIGs. 5A and 5C.
  • the apparatus 900 can transmit the output signals 458 via the second circuitry 440 to the first device 510, to a second device 710, and/or to at least one auditory device 740.
  • the circuitry 910 is configured to receive further sounds (e.g., while the at least one microphone 420 is positioned on the recipient’s body) and to use the stored features 915 of the target sound source to process the further sounds to generate enhanced audio signals 917 (e.g., in which the target sounds are enhanced and/or noise contributions are reduced).
  • the portion 422b of the information 422 from the at least one microphone 420 is received by the microphone processing circuitry 452 of the circuitry 910 which processes the portion 422b to generate audio signals 453b (e.g., as described herein with regard to FIG. 5B).
  • the target enhancement circuitry 914 accesses the stored set of features 915 from the feature storage circuitry 916 and processes the audio signals 453b using the stored set of features 915 to generate the enhanced audio signals 917 (e.g., using noise reduction and/or speech enhancement processes).
  • FIG. 11 schematically illustrates an example operation of the target enhancement circuitry 914 in accordance with certain implementations described herein.
  • the target enhancement circuitry 914 comprises source separation circuitry 940 (e.g., blind source separation circuitry) configured to separate the audio signals 453b received from the microphone processing circuitry 452 into two or more source signals 942 (FIG. 11 shows three source signals 942 as an example), each source signal 942 corresponding to sound of the audio signals 453b detected to be from a separate source, one of which is expected to be the target sound source.
  • the target enhancement circuitry 914 further comprises source merger circuitry 950 configured to extract a set of source features 952 from each of the source signals 942 (e.g., sets 952a, 952b, and 952c, one set for each of the three source signals 942).
  • the source merger circuitry 950 can use the same circuitry and/or process as used by the target enhancement circuitry 914 in the first operational state 934 (e.g., capture state) to extract the set of features 915 from the sample audio signals 453a.
  • the source merger circuitry 950 can be further configured to determine which of the source signals 942 most closely matches the target sound source (e.g., by comparing the sets of source features 952 to the stored target features 915).
  • the source merger circuitry 950 can be further configured to generate the enhanced audio signals 917 in a manner that enhances the target sounds within the enhanced audio signals 917.
  • the source merger circuitry 950 can output the source signals 942 with source features 952 that most closely match the stored target features 915.
  • the source merger circuitry 950 can recombine (e.g., merge) some or all of the source signals 942 into enhanced audio signals 917 (e.g., by applying gains to process each of the source signals 942, each gain amplifying/reducing the source signal 942 dependent upon the degree of similarity of the corresponding set of source features 952 with the stored target features 915, and then summing the processed source signals 942).
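The gain-weighted recombination described above can be sketched numerically (the cosine-similarity measure and normalized gains are illustrative choices for comparing feature sets, not the disclosed implementation):

```python
import math


def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def merge_sources(source_signals, source_features, target_features):
    """Weight each separated source signal by how closely its feature
    set matches the stored target features, then sum the weighted
    signals into a single enhanced output."""
    sims = [max(cosine(f, target_features), 0.0) for f in source_features]
    total = sum(sims) or 1.0
    gains = [s / total for s in sims]
    n = len(source_signals[0])
    merged = [sum(g * sig[i] for g, sig in zip(gains, source_signals))
              for i in range(n)]
    return merged, gains


# Two hypothetical separated sources; the target matches the first.
sources = [[1.0, 1.0, 1.0], [10.0, 10.0, 10.0]]
features = [[1.0, 0.0], [0.0, 1.0]]  # per-source feature sets 952
target = [1.0, 0.0]                  # stored target features 915
merged, gains = merge_sources(sources, features, target)
```

Here the second source receives zero gain because its features are orthogonal to the target features, so only the matching source survives in the merged output.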
  • FIG. 12 schematically illustrates another example operation of the target enhancement circuitry 914 in accordance with certain implementations described herein.
  • the target enhancement circuitry 914 can comprise filterbank circuitry 1010 configured to split the audio signals 453b received from the microphone processing circuitry 452 into multiple channels 1012 (e.g., bands), each channel 1012 corresponding to a band of frequencies of the audio signals 453b.
  • the channels 1012 can be determined by the filterbank circuitry 1010 applying a short time Fourier transform (STFT) (e.g., using a fast Fourier transform (FFT) algorithm) to the audio signals 453b.
  • the target enhancement circuitry 914 can further comprise gain calculation circuitry 1020 configured to receive the channels 1012 from the filterbank circuitry 1010, to receive the stored target features 915 from the feature storage circuitry 916, and to calculate gains 1022 to be applied to each of the channels 1012, the gains 1022 optimized based on the stored target features 915.
  • the gain calculation circuitry 1020 can determine that the target sounds from the target sound source are temporarily absent (e.g., due to the conversation partners taking turns to speak) and can temporarily apply more attenuation to at least some of the channels 1012.
  • a time-varying gain mask or gain ratio can be calculated using an estimate of the SNR in each time-frequency sample (see, e.g., P.W.
  • the target enhancement circuitry 914 can further comprise gain application circuitry 1030 configured to receive the channels 1012 from the filterbank circuitry 1010 and the gains 1022 from the gain calculation circuitry 1020 and apply the gains 1022 to the channels 1012, resulting in the enhanced audio signals 917 provided to the output device 920.
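One common per-channel gain rule compatible with the description above is the Wiener-style gain g = SNR/(SNR + 1), sketched here (the SNR estimates and the particular gain formula are illustrative assumptions; other formulations can be used):

```python
def wiener_gains(snr_estimates):
    """Per-channel gains from linear SNR estimates, g = SNR / (SNR + 1):
    high-SNR channels pass nearly unchanged; low-SNR channels are attenuated."""
    return [snr / (snr + 1.0) for snr in snr_estimates]


def apply_gains(channels, gains):
    """Apply the calculated gains to the filterbank channel magnitudes."""
    return [c * g for c, g in zip(channels, gains)]


channels = [0.8, 0.5, 0.2]  # band magnitudes from the filterbank circuitry
snrs = [9.0, 1.0, 0.0]      # estimated linear SNR per band
enhanced = apply_gains(channels, wiener_gains(snrs))  # approximately [0.72, 0.25, 0.0]
```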
  • the circuitry 910 is configured to use machine learning processes, such as deep neural networks (DNNs), to separate the target sounds from the sounds from the environment (e.g., containing multiple voices and background noise) detected by the at least one microphone 420.
  • the circuitry 910 can obtain reference samples of the recipient’s own voice, either during the first operational state 934 (e.g., capture state) and/or the second operational state 936 (e.g., enhancement state), and the reference samples can be used by the circuitry 910. See, e.g., Q. Wang et al., “VoiceFilter: Targeted voice separation by speaker-conditioned spectrogram masking,” Proc. Interspeech, pp.
  • In certain implementations, the apparatus 900 performs the DNN training, while in certain other implementations (e.g., in which the apparatus 900 has insufficient processing capability to train the DNN), the apparatus 900 transmits the reference samples to another device with sufficient processing capability (e.g., smart phone; smart tablet; networked computing device) to perform the training.
  • the terms “generally parallel” and “substantially parallel” refer to a value, amount, or characteristic that departs from exactly parallel by ±10 degrees, by ±5 degrees, by ±2 degrees, by ±1 degree, or by ±0.1 degree.
  • the terms “generally perpendicular” and “substantially perpendicular” refer to a value, amount, or characteristic that departs from exactly perpendicular by ±10 degrees, by ±5 degrees, by ±2 degrees, by ±1 degree, or by ±0.1 degree.
  • the ranges disclosed herein also encompass any and all overlap, sub-ranges, and combinations thereof. Language such as “up to,” “at least,” “greater than,” “less than,” “between,” and the like includes the number recited.
  • ordinal adjectives (e.g., first, second, etc.) are used merely as labels to distinguish one element from another (e.g., one signal from another or one circuit from another), and are not used to denote an order of these elements or of their use.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Neurosurgery (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Prostheses (AREA)

Abstract

An apparatus includes a housing configured to be worn by a recipient and at least one microphone on or within the housing. The at least one microphone is configured to receive sound and to generate information indicative of the sound. The apparatus further includes first circuitry and second circuitry on or within the housing. The first circuitry is configured to wirelessly transmit the information to a first device implanted on or within the recipient while the housing is worn by the recipient. The second circuitry is configured to wirelessly transmit

Description

AUDIO PROCESSING DEVICE OPERABLE AS REMOTE SENSOR
BACKGROUND
Field
[0001] The present application relates generally to systems and methods for communicating with a device worn by a recipient or implanted on or within a recipient’s body.
Description of the Related Art
[0002] Medical devices have provided a wide range of therapeutic benefits to recipients over recent decades. Medical devices can include internal or implantable components/devices, external or wearable components/devices, or combinations thereof (e.g., a device having an external component communicating with an implantable component). Medical devices, such as traditional hearing aids, partially or fully-implantable hearing prostheses (e.g., bone conduction devices, mechanical stimulators, cochlear implants, etc.), pacemakers, defibrillators, functional electrical stimulation devices, and other medical devices, have been successful in performing lifesaving and/or lifestyle enhancement functions and/or recipient monitoring for a number of years.
[0003] The types of medical devices and the ranges of functions performed thereby have increased over the years. For example, many medical devices, sometimes referred to as “implantable medical devices,” now often include one or more instruments, apparatus, sensors, processors, controllers or other functional mechanical or electrical components that are permanently or temporarily implanted in a recipient. These functional devices are typically used to diagnose, prevent, monitor, treat, or manage a disease/injury or symptom thereof, or to investigate, replace or modify the anatomy or a physiological process. Many of these functional devices utilize power and/or data received from external devices that are part of, or operate in conjunction with, implantable components.
SUMMARY
[0004] In one aspect disclosed herein, an apparatus comprises at least one microphone configured to receive sound and to generate audio signals indicative of the sound. The apparatus further comprises circuitry configured to receive the audio signals from the at least one microphone. The circuitry has a plurality of operational states comprising a first operational state and a second operational state. In the first operational state, the circuitry collects a sample portion of the audio signals. The sample portion is indicative of a sound sample received by the at least one microphone from a target sound source while the at least one microphone is positioned in proximity to the target sound source. In the second operational state, the circuitry uses the sample portion of the audio signals to process further audio signals received by the circuitry subsequently to receiving the sample portion.
[0005] In another aspect disclosed herein, an apparatus comprises a housing configured to be worn by a recipient and at least one microphone on or within the housing. The at least one microphone is configured to receive sound and to generate information indicative of the sound. The apparatus further comprises first circuitry and second circuitry on or within the housing. The first circuitry is configured to wirelessly transmit the information to a first device implanted on or within the recipient while the housing is worn by the recipient. The second circuitry is configured to wirelessly transmit the information to at least the first device while the housing is remote from the recipient.
[0006] In another aspect disclosed herein, a method comprises providing a first sound processor configured to receive sound and to generate signals indicative of the sound. The method further comprises, in response to receiving a first control signal, placing the first sound processor in a first operational mode in which the first sound processor is configured to transmit the signals to only a first device implanted on or within a recipient’s body. The method further comprises, in response to receiving a second control signal, placing the first sound processor in a second operational mode in which the first sound processor is configured to transmit the signals to the first device and to at least one second device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] Implementations are described herein in conjunction with the accompanying drawings, in which:
[0008] FIG. 1 is a perspective view of an example cochlear implant auditory prosthesis implanted in a recipient in accordance with certain implementations described herein;
[0009] FIG. 2 is a perspective view of an example fully implantable middle ear implant auditory prosthesis implanted in a recipient in accordance with certain implementations described herein;
[0010] FIG. 3 schematically illustrates a portion of another example transcutaneous bone conduction auditory prosthesis implanted in a recipient in accordance with certain implementations described herein;
[0011] FIGs. 4A-4C schematically illustrate an example apparatus in accordance with certain implementations described herein;
[0012] FIG. 5A schematically illustrates an example state diagram of the control circuitry in accordance with certain implementations described herein;
[0013] FIGs. 5B and 5C schematically illustrate an example operation of the control circuitry in the first and second operational states, respectively, in accordance with certain implementations described herein;
[0014] FIG. 6 schematically illustrates an example first device in accordance with certain implementations described herein;
[0015] FIG. 7 schematically illustrates an example usage of an apparatus in accordance with certain implementations described herein;
[0016] FIG. 8 is a flow diagram of an example method in accordance with certain implementations described herein;
[0017] FIG. 9A schematically illustrates an example apparatus in accordance with certain implementations described herein;
[0018] FIG. 9B schematically illustrates an example state diagram of the circuitry of FIG. 9A in accordance with certain implementations described herein;
[0019] FIGs. 10A and 10B schematically illustrate an example operation of the circuitry of FIG. 9A in the first operational state and the second operational state, respectively, in accordance with certain implementations described herein;
[0020] FIG. 11 schematically illustrates an example operation of the target enhancement circuitry in accordance with certain implementations described herein; and
[0021] FIG. 12 schematically illustrates another example operation of the target enhancement circuitry in accordance with certain implementations described herein.
DETAILED DESCRIPTION
[0022] Certain implementations described herein provide a wearable auditory device configured to be in wireless communication with a stimulation device of a recipient while being worn by the recipient (e.g., over an implanted stimulation device). The wearable device can be removed from being worn by the recipient and placed in proximity to a target sound source. The auditory device can be configured to be used as a remote microphone by wirelessly broadcasting signals indicative of the received sounds while in proximity to the target sound source to the stimulation device and to another auditory device worn by the recipient and/or to other auditory devices worn by other people. The wearable auditory device can be configured to receive and analyze a sound sample to identify characteristics of the target sound while in proximity to the target sound source and then, while worn by the recipient, to process (e.g., filter) subsequently received sounds and to provide the stimulation device with signals that accentuate target sounds from the target sound source over other sound contributions within the received sounds.
[0023] The teachings detailed herein are applicable, in at least some implementations, to any type of implantable or non-implantable stimulation system or device (e.g., implantable or non-implantable sensory prosthesis device or system; implantable or non- implantable auditory prosthesis device or system; hearing device for hearing-impaired recipients; hearing device for non-hearing-impaired recipients). Certain implementations can be used as hearing devices that are worn on the recipient’s head, in the ear (ITE), behind the ear (BTE), or off the ear (OTE). For example, such hearing devices can include, but are not limited to: sound processing units for cochlear implant systems, middle ear actuator implant systems, or bone-anchored hearing aids; hearing aids; consumer wireless earbuds.
[0024] Merely for ease of description, apparatus and methods disclosed herein are primarily described with reference to an illustrative medical system comprising an implantable auditory prosthesis device (e.g., implantable transducer assembly) configured to generate and apply stimulation signals (e.g., electrical and vibrational) that are perceived by the recipient as sounds (e.g., evoking a hearing percept), examples of which include but are not limited to: electro-acoustic electrical/acoustic systems, cochlear implant devices, implantable hearing aid devices, middle ear implant devices, bone conduction devices (e.g., active bone conduction devices; passive bone conduction devices, percutaneous bone conduction devices; transcutaneous bone conduction devices), Direct Acoustic Cochlear Implant (DACI), middle ear transducer (MET), electro-acoustic implant devices, other types of auditory prosthesis devices, and/or combinations or variations thereof, or any other suitable hearing prosthesis system with or without one or more external components. Implementations can include any type of auditory prosthesis that can utilize the teachings detailed herein and/or variations thereof. Certain such implementations can be referred to as “partially implantable,” “semiimplantable,” “mostly implantable,” “fully implantable,” or “totally implantable” auditory prostheses. In some implementations, the teachings detailed herein and/or variations thereof can be utilized in other types of prostheses beyond auditory prostheses.
[0025] While certain implementations are described herein in the context of auditory prosthesis devices, certain other implementations are compatible with other types of sensory prosthesis systems that are configured to evoke other types of neural or sensory (e.g., sight, tactile, smell, taste) percepts, including but not limited to: vestibular devices (e.g., vestibular implants), visual devices (e.g., bionic eyes), visual prostheses (e.g., retinal implants), somatosensory implants, and chemosensory implants. Certain other implementations are compatible with other types of medical devices that can utilize the teachings detailed herein and/or variations thereof to provide a wide range of therapeutic benefits to recipients, patients, or other users (e.g., neurostimulators; pacemakers; other medical implants comprising an implanted power source).
[0026] FIG. 1 is a perspective view of an example cochlear implant auditory prosthesis 100 implanted in a recipient in accordance with certain implementations described herein. The example auditory prosthesis 100 is shown in FIG. 1 as comprising an implanted stimulator unit 120 and a microphone assembly 124 that is external to the recipient (e.g., a partially implantable cochlear implant). An example auditory prosthesis 100 (e.g., a totally implantable cochlear implant; a mostly implantable cochlear implant) in accordance with certain implementations described herein can replace the external microphone assembly 124 shown in FIG. 1 with a subcutaneously implantable microphone assembly, as described more fully herein.
[0027] As shown in FIG. 1, the recipient has an outer ear 101, a middle ear 105, and an inner ear 107. In a fully functional ear, the outer ear 101 comprises an auricle 110 and an ear canal 102. An acoustic pressure or sound wave 103 is collected by the auricle 110 and is channeled into and through the ear canal 102. Disposed across the distal end of the ear canal 102 is a tympanic membrane 104 which vibrates in response to the sound wave 103. This vibration is coupled to oval window or fenestra ovalis 112 through three bones of middle ear 105, collectively referred to as the ossicles 106 and comprising the malleus 108, the incus 109, and the stapes 111. The bones 108, 109, and 111 of the middle ear 105 serve to filter and amplify the sound wave 103, causing the oval window 112 to articulate, or vibrate in response to vibration of the tympanic membrane 104. This vibration sets up waves of fluid motion of the perilymph within cochlea 140. Such fluid motion, in turn, activates tiny hair cells (not shown) inside the cochlea 140. Activation of the hair cells causes appropriate nerve impulses to be generated and transferred through the spiral ganglion cells (not shown) and auditory nerve 114 to the brain (also not shown) where they are perceived as sound.
[0028] As shown in FIG. 1, the example auditory prosthesis 100 comprises one or more components which are temporarily or permanently implanted in the recipient. The example auditory prosthesis 100 is shown in FIG. 1 with an external component 142 which is directly or indirectly attached to the recipient’s body, and an internal component 144 which is temporarily or permanently implanted in the recipient (e.g., positioned in a recess of the temporal bone adjacent auricle 110 of the recipient). The external component 142 typically comprises one or more sound input elements (e.g., an external microphone 124) for detecting sound, a sound processing unit 126 (e.g., disposed in a Behind-The-Ear unit), a power source (not shown), and an external transmitter unit 128. In the illustrative implementation of FIG. 1, the external transmitter unit 128 comprises an external coil 130 (e.g., a wire antenna coil comprising multiple turns of electrically insulated single-strand or multi-strand platinum or gold wire) and, preferably, a magnet (not shown) secured directly or indirectly to the external coil 130. The external coil 130 of the external transmitter unit 128 is part of an inductive radio frequency (RF) communication link with the internal component 144. The sound processing unit 126 processes the output of the microphone 124, which, in the depicted implementation, is positioned externally to the recipient’s body by the recipient’s auricle 110, and generates encoded signals, sometimes referred to herein as encoded data signals, which are provided to the external transmitter unit 128 (e.g., via a cable). As will be appreciated, the sound processing unit 126 can utilize digital processing techniques to provide frequency shaping, amplification, compression, and other signal conditioning, including conditioning based on recipient-specific fitting parameters.
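The frequency shaping, amplification, and compression mentioned above can be illustrated with a toy digital signal chain. This is a hedged sketch, not the actual algorithm of the sound processing unit 126; the fitting parameters, the crude three-band split, and the static compression knee are all hypothetical.

```python
import numpy as np

# Hypothetical recipient-specific fitting parameters: per-band gain (dB),
# compression knee (linear amplitude), and compression ratio.
FITTING = {"band_gains_db": [0.0, 6.0, 10.0], "knee": 0.5, "ratio": 3.0}

def shape_and_compress(samples, params=FITTING):
    """Frequency shaping via three crude bands, then static compression above a knee."""
    spec = np.fft.rfft(samples)
    n = len(spec)
    edges = [0, n // 3, 2 * n // 3, n]
    for g_db, lo, hi in zip(params["band_gains_db"], edges[:-1], edges[1:]):
        spec[lo:hi] *= 10 ** (g_db / 20)       # apply per-band fitting gain
    shaped = np.fft.irfft(spec, n=len(samples))
    knee, ratio = params["knee"], params["ratio"]
    mag = np.abs(shaped)
    over = mag > knee                           # compress only peaks above the knee
    shaped[over] = np.sign(shaped[over]) * (knee + (mag[over] - knee) / ratio)
    return shaped
```

A real sound processor would use overlapping filter banks and dynamic (attack/release) compression; the block above only shows the order of operations implied by the text.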
[0029] The power source of the external component 142 is configured to provide power to the auditory prosthesis 100, where the auditory prosthesis 100 includes a battery or other power storage device (e.g., circuitry located in the internal component 144, or disposed in a separate implanted location) that is recharged by the power provided from the external component 142 (e.g., via a transcutaneous energy transfer link). The transcutaneous energy transfer link is used to transfer power and/or data to the internal component 144 of the auditory prosthesis 100. Various types of energy transfer, such as infrared (IR), electromagnetic, capacitive, and inductive transfer, may be used to transfer the power and/or data from the external component 142 to the internal component 144. During operation of the auditory prosthesis 100, the power stored by the rechargeable battery is distributed to the various other implanted components as needed.
[0030] The internal component 144 comprises an internal receiver unit 132, a stimulator unit 120, and an elongate electrode assembly 118. In some implementations, the internal receiver unit 132 and the stimulator unit 120 are hermetically sealed within a biocompatible housing, sometimes collectively referred to as a stimulator/receiver unit. The internal receiver unit 132 comprises an internal coil 136 (e.g., a wire antenna coil comprising multiple turns of electrically insulated single-strand or multi-strand platinum or gold wire), and preferably, a magnet (also not shown) fixed relative to the internal coil 136. The internal coil 136 receives power and/or data signals from the external coil 130 via a transcutaneous energy transfer link (e.g., an inductive RF link). The stimulator unit 120 generates electrical stimulation signals based on the data signals, and the stimulation signals are delivered to the recipient via the elongate electrode assembly 118.
[0031] The elongate electrode assembly 118 has a proximal end connected to the stimulator unit 120, and a distal end implanted in the cochlea 140. The electrode assembly 118 extends from the stimulator unit 120 to the cochlea 140 through the mastoid bone 119. In some implementations, the electrode assembly 118 may be implanted at least in the basal region 116, and sometimes further. For example, the electrode assembly 118 may extend towards the apical end of the cochlea 140, referred to as the cochlea apex 134. In certain circumstances, the electrode assembly 118 may be inserted into the cochlea 140 via a cochleostomy 122. In other circumstances, a cochleostomy may be formed through the round window 121, the oval window 112, the promontory 123, or through an apical turn 147 of the cochlea 140.
[0032] The elongate electrode assembly 118 comprises a longitudinally aligned and distally extending array 146 of electrodes or contacts 148, sometimes referred to as electrode or contact array 146 herein, disposed along a length thereof. Although the electrode array 146 can be disposed on the electrode assembly 118, in most practical applications, the electrode array 146 is integrated into the electrode assembly 118 (e.g., the electrode array 146 is disposed in the electrode assembly 118). As noted, the stimulator unit 120 generates stimulation signals which are applied by the electrodes 148 to the cochlea 140, thereby stimulating the auditory nerve 114.
[0033] While FIG. 1 schematically illustrates an auditory prosthesis 100 utilizing an external component 142 comprising an external microphone 124, an external sound processing unit 126, and an external power source, in certain other implementations, one or more of the microphone 124, sound processing unit 126, and power source are implantable on or within the recipient (e.g., within the internal component 144). For example, the auditory prosthesis 100 can have each of the microphone 124, sound processing unit 126, and power source implantable on or within the recipient (e.g., encapsulated within a biocompatible assembly located subcutaneously), and can be referred to as a totally implantable cochlear implant (“TICI”). For another example, the auditory prosthesis 100 can have most components of the cochlear implant (e.g., excluding the microphone, which can be an in-the-ear-canal microphone) implantable on or within the recipient, and can be referred to as a mostly implantable cochlear implant (“MICI”).
[0034] FIG. 2 schematically illustrates a perspective view of an example fully implantable auditory prosthesis 200 (e.g., fully implantable middle ear implant or totally implantable acoustic system), implanted in a recipient, utilizing an acoustic actuator in accordance with certain implementations described herein. The example auditory prosthesis 200 of FIG. 2 comprises a biocompatible implantable assembly 202 (e.g., comprising an implantable capsule) located subcutaneously (e.g., beneath the recipient’s skin and on a recipient's skull). While FIG. 2 schematically illustrates an example implantable assembly 202 comprising a microphone, in other example auditory prostheses 200, a pendant microphone can be used (e.g., connected to the implantable assembly 202 by a cable). The implantable assembly 202 includes a signal receiver 204 (e.g., comprising a coil element) and an acoustic transducer (e.g., a microphone assembly 206 comprising a diaphragm and an electret or piezoelectric transducer) that is positioned to receive acoustic signals through the recipient’s overlying tissue. The implantable assembly 202 may further be utilized to house a number of components of the fully implantable auditory prosthesis 200. For example, the implantable assembly 202 can include a power storage device (e.g., battery or other power storage circuitry) and a signal processor (e.g., a sound processing unit). Various additional processing logic and/or circuitry components can also be included in the implantable assembly 202 as a matter of design choice.
[0035] For the example auditory prosthesis 200 shown in FIG. 2, the signal processor of the implantable assembly 202 is in operative communication (e.g., electrically interconnected via a wire 208) with an actuator 210 (e.g., comprising a transducer configured to generate mechanical vibrations in response to electrical signals from the signal processor). In certain implementations, the example auditory prosthesis 100, 200 shown in FIGs. 1 and 2 can comprise an implantable microphone assembly, such as the microphone assembly 206 shown in FIG. 2. For such an example auditory prosthesis 100, the signal processor of the implantable assembly 202 can be in operative communication (e.g., electrically interconnected via a wire) with the microphone assembly 206 and the stimulator unit 120 of the main implantable component. In certain implementations, at least one of the microphone assembly 206 and the signal processor (e.g., a sound processing unit) is implanted on or within the recipient.
[0036] The actuator 210 of the example auditory prosthesis 200 shown in FIG. 2 is supportably connected to a positioning system 212, which in turn, is connected to a bone anchor 214 mounted within the recipient's mastoid process (e.g., via a hole drilled through the skull). The actuator 210 includes a connection apparatus 216 for connecting the actuator 210 to the ossicles 106 of the recipient. In a connected state, the connection apparatus 216 provides a communication path for acoustic stimulation of the ossicles 106 (e.g., through transmission of vibrations from the actuator 210 to the incus 109).
[0037] During normal operation, ambient acoustic signals (e.g., ambient sound) impinge on the recipient’s tissue and are received transcutaneously at the microphone assembly 206. Upon receipt of the transcutaneous signals, a signal processor within the implantable assembly 202 processes the signals to provide a processed audio drive signal via wire 208 to the actuator 210. As will be appreciated, the signal processor may utilize digital processing techniques to provide frequency shaping, amplification, compression, and other signal conditioning, including conditioning based on recipient-specific fitting parameters. The audio drive signal causes the actuator 210 to transmit vibrations at acoustic frequencies to the connection apparatus 216 to effect the desired sound sensation via mechanical stimulation of the incus 109 of the recipient.
[0038] The subcutaneously implantable microphone assembly 202 is configured to respond to auditory signals (e.g., sound; pressure variations in an audible frequency range) by generating output signals (e.g., electrical signals; optical signals; electromagnetic signals) indicative of the auditory signals received by the microphone assembly 202, and these output signals are used by the auditory prosthesis 100, 200 to generate stimulation signals which are provided to the recipient’s auditory system. To compensate for the decreased acoustic signal strength reaching the microphone assembly 202 by virtue of being implanted, the diaphragm of an implantable microphone assembly 202 can be configured to provide higher sensitivity than the diaphragms of external non-implantable microphone assemblies. For example, the diaphragm of an implantable microphone assembly 202 can be configured to be more robust and/or larger than diaphragms for external non-implantable microphone assemblies.
[0039] FIG. 3 schematically illustrates a portion of an example transcutaneous bone conduction auditory prosthesis 300 implanted in a recipient in accordance with certain implementations described herein. As schematically illustrated by FIG. 3, the example transcutaneous bone conduction auditory prosthesis 300 comprises an external component 304 and an implantable component 306. The auditory prosthesis 300 is an active transcutaneous bone conduction auditory prosthesis in that the vibrating actuator 308 is located in the implantable component 306. For example, a vibratory element in the form of a vibrating actuator 308 is located in a housing 310 of the implantable component 306. In certain implementations, the vibrating actuator 308 is a device that converts electrical signals into vibration. The vibrating actuator 308 can be in direct contact with the outer surface of the recipient’s bone 196 (e.g., the vibrating actuator 308 is in substantial contact with the recipient’s bone 196 such that vibration forces from the vibrating actuator 308 are communicated from the vibrating actuator 308 to the recipient’s bone 196). In certain implementations, there can be one or more thin non-bone tissue layers (e.g., a silicone layer 324) between the vibrating actuator 308 and the recipient’s bone 196 (e.g., bone tissue; skull bone) while still permitting sufficient support so as to allow efficient communication of the vibration forces generated by the vibrating actuator 308 to the recipient’s bone 196.
[0040] In certain implementations, the external component 304 includes a sound input element 326 that converts sound into electrical signals. Specifically, the auditory prosthesis 300 provides these electrical signals to the vibrating actuator 308, or to a sound processor (not shown) that processes the electrical signals, and then provides those processed signals to the implantable component 306 through the tissue of the recipient (e.g., skin 190, fat 192, muscle 194) via a magnetic inductance link. For example, a communication coil 332 of the external component 304 can transmit these signals to an implanted communication coil 334 located in a housing 336 of the implantable component 306. Components (not shown) in the housing 336, such as, for example, a signal generator or an implanted sound processor, then generate electrical signals to be delivered to the vibrating actuator 308 via electrical lead assembly 338. The vibrating actuator 308 converts the electrical signals into vibrations. In certain implementations, the vibrating actuator 308 can be positioned with such proximity to the housing 336 that the electrical leads 338 are not present (e.g., the housing 310 and the housing 336 are the same single housing containing the vibrating actuator 308, the communication coil 334, and other components, such as, for example, a signal generator or a sound processor).
[0041] In certain implementations, the vibrating actuator 308 is mechanically coupled to the housing 310. The housing 310 and the vibrating actuator 308 collectively form a vibrating element. The housing 310 can be substantially rigidly attached to a bone fixture 318.
[0042] In this regard, the housing 310 can include a through hole 320 that is contoured to the outer contours of the bone fixture 318. The screw 322 can be used to secure the housing 310 to the bone fixture 318. As can be seen in FIG. 3, the head of the screw 322 is larger than the through hole 320 of the housing 310, and thus the screw 322 positively retains the housing 310 to the bone fixture 318. A portion of the screw 322 interfaces with the bone fixture 318, thus permitting the screw 322 to readily fit into an existing bone fixture 318 used in a percutaneous bone conduction device (or an existing passive bone conduction device). In certain implementations, the screw 322 is configured so that the same tools and procedures that are used to install and/or remove an abutment screw from the bone fixture 318 can be used to install and/or remove the screw 322 from the bone fixture 318.
[0043] The bone fixture 318 can be made of any material that has a known ability to integrate into surrounding bone tissue (e.g., comprising a material that exhibits acceptable osseointegration characteristics). In certain implementations, the bone fixture 318 is formed from a single piece of material (e.g., titanium) and comprises outer screw threads forming a male screw which is configured to be installed into the skull bone 196 and a flange configured to function as a stop when the fixture 318 is implanted into the skull bone 196. The screw threads can have a maximum diameter of about 3.5 mm to about 5.0 mm, and the flange can have a diameter which exceeds the maximum diameter of the screw threads (e.g., by approximately 10%-20%). The flange can have a planar bottom surface for resting against the outer bone surface when the fixture 318 has been screwed down into the skull bone 196. The flange prevents the fixture 318 (e.g., the screw threads) from penetrating completely through the bone 196.
[0044] The body of the fixture 318 can have a length sufficient to securely anchor the fixture 318 to the skull bone 196 without penetrating entirely through the skull bone 196. The length of the body can therefore depend on the thickness of the skull bone 196 at the implantation site. For example, the fixture 318 can have a length, measured from the planar bottom surface of the flange to the end of the distal region (e.g., the portion farthest from the flange), that is no greater than 5 mm or between about 3.0 mm to about 5.0 mm, which limits and/or prevents the possibility that the fixture 318 might go completely through the skull bone 196. The interior of the fixture 318 can further include an inner lower bore having female screw threads configured to mate with male screw threads of the screw 322, thereby securing the screw 322 to the fixture 318. The fixture 318 can further include an inner upper bore that receives a bottom portion of the abutment 312.
[0045] The example auditory prosthesis 100 shown in FIG. 1 utilizes an external microphone 124, the auditory prosthesis 200 shown in FIG. 2 utilizes an implantable microphone assembly 206 comprising a subcutaneously implantable acoustic transducer, and the example transcutaneous bone conduction auditory prosthesis 300 of FIG. 3 comprises an external sound input element 326 (e.g., external microphone). In certain implementations described herein, a subcutaneously implantable sound input assembly (e.g., implanted microphone) is used with the auditory prostheses 100, 200, 300 and/or one or more external microphone assemblies are used with the auditory prostheses 100, 200, 300. In certain implementations, an external microphone assembly can be used to supplement an implantable microphone assembly of the auditory prosthesis 100, 200, 300. Thus, the teachings detailed herein and/or variations thereof can be utilized with any type of external or implantable microphone arrangement, and the auditory prostheses 100, 200, 300 shown in FIGs. 1, 2, and 3 are merely illustrative.
[0046] FIGs. 4A-4C schematically illustrate an example apparatus 400 in accordance with certain implementations described herein. The apparatus 400 comprises a housing 410 configured to be worn by a recipient (e.g., on an external surface, such as skin 190, of a portion of the recipient’s tissue 500). The apparatus 400 further comprises at least one microphone 420 on or within the housing 410. The at least one microphone 420 is configured to receive sound and to generate information 422 indicative of the sound. The apparatus 400 further comprises first circuitry 430 on or within the housing 410. The first circuitry 430 is configured to wirelessly transmit the information 422 to a first device 510 implanted on or within the recipient (e.g., beneath a portion of the recipient’s tissue 500) while the housing 410 is worn by the recipient (e.g., while the housing 410 is on the external surface of the recipient’s tissue 500, see FIG. 4B). The apparatus 400 further comprises second circuitry 440 on or within the housing 410. The second circuitry 440 is configured to wirelessly transmit the information 422 to at least the first device 510 while the housing 410 is remote from the recipient (e.g., while the housing 410 is spaced from the external surface of the recipient’s tissue 500, see FIG. 4C).
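The two transmit paths of the apparatus 400 (the first circuitry 430 while worn, the second circuitry 440 while remote) can be modeled as a simple selection based on the housing's placement. The link names, the `Placement` enum, and the `Packet` type below are illustrative assumptions, not terminology from the source.

```python
from dataclasses import dataclass
from enum import Enum

class Placement(Enum):
    WORN = "worn"        # housing on the recipient's skin, over the implant
    REMOTE = "remote"    # housing removed and placed near a target sound source

@dataclass
class Packet:
    link: str
    payload: bytes

def route_audio(info: bytes, placement: Placement) -> Packet:
    """Choose the transmit path for microphone information 422.

    Hypothetical sketch: inductive coil link while worn (first circuitry 430),
    RF broadcast while remote (second circuitry 440)."""
    if placement is Placement.WORN:
        return Packet(link="magnetic-induction", payload=info)
    return Packet(link="rf-broadcast", payload=info)
```

In the remote case, an actual device would broadcast on a channel that the stimulation device and any other subscribed auditory devices can receive; the single `Packet` here stands in for that one-to-many transmission.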
[0047] In certain implementations, the apparatus 400 and the first device 510 are components of a stimulation system configured to provide stimulation signals to the recipient. For a sensory stimulation system (e.g., auditory prosthesis system; visual prosthesis system), the stimulation signals can be configured to be received and perceived by the recipient as sensory information. For example, the apparatus 400 can comprise an external microphone assembly 124 or external component 304 configured to wirelessly communicate with a first device 510 comprising an implanted stimulator unit 120 of a cochlear implant auditory prosthesis 100, an actuator 210 of a middle ear implant 200, or an implantable component 306 of a transcutaneous bone conduction auditory prosthesis 300. While certain implementations are described herein with the apparatus 400 being in wireless communication with an implanted first device 510, in certain other implementations, the apparatus 400 is in wireless communication with a non-implanted first device 510 (e.g., worn externally by the recipient).
[0048] In certain implementations, the apparatus 400 further comprises control circuitry 450 (not shown in FIGs. 4A-4C) in electrical communication with the at least one microphone 420, the first circuitry 430, and the second circuitry 440. The control circuitry 450 can comprise at least one microcontroller configured to receive data signals from the at least one microphone 420 and to generate output data signals and/or control signals to the first circuitry 430 and the second circuitry 440. The at least one microcontroller can comprise at least one application-specific integrated circuit (ASIC) microcontroller, digital signal processing (DSP) microcontroller, generalized integrated circuits programmed by software with computer executable instructions, and/or microcontroller core. In certain implementations, the control circuitry 450, first circuitry 430, and second circuitry 440 comprise different portions of the same circuitry (e.g., each comprising respective portions of a single microcontroller), while in certain other implementations, the control circuitry 450, first circuitry 430, and second circuitry 440 comprise different microcontrollers. In certain implementations, the control circuitry 450 comprises and/or is in operative communication with storage circuitry configured to store information (e.g., data; commands) accessed by the control circuitry 450 during operation (e.g., while providing the functionality of certain implementations described herein). The storage circuitry can comprise at least one tangible (e.g., non-transitory) computer readable storage medium, examples of which include but are not limited to: read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory. 
The storage circuitry can be encoded with software (e.g., a computer program downloaded as an application) comprising computer executable instructions for instructing the control circuitry 450 (e.g., executable data access logic, evaluation logic, and/or information outputting logic). In certain implementations, the control circuitry 450 executes the instructions of the software to provide functionality as described herein. The control circuitry 450 of certain implementations further comprises other digital circuitry (e.g., registers; filters; output controllers; memory controllers). [0049] In certain implementations, the apparatus 400 further comprises at least one input interface in operative communication with the control circuitry 450 and/or at least one output interface in operative communication with the control circuitry 450. The at least one input interface can be configured to receive input signals (e.g., from the recipient) indicative of user input (e.g., commands; operational parameters such as thresholds). Examples of the at least one input interface include but are not limited to: rotatable knobs (e.g., connected to potentiometers); buttons; switches; touchscreen; microphone and voice-responsive circuitry. The at least one output interface can be configured to provide output signals (e.g., to the recipient) indicative of the operational state or status of the apparatus 400. Examples of the at least one output interface include but are not limited to: an LED or LCD display configured to generate visual signals (e.g., colored lights, images, or alphanumeric characters); a portion of the control circuitry 450 configured to generate and transmit output signals indicative of an informative tone or sound to be presented to the recipient via the first device 510; a haptic motor configured to generate vibrations or other tactile signals.
In certain implementations, the apparatus 400 comprises an antenna configured to be used as the at least one input interface to receive wireless input signals (e.g., Bluetooth signals; WiFi signals) from an external device separate from the apparatus 400 and the implanted first device 510 (e.g., smart phone, smart tablet, smart watch; other computing device) and/or to be used as the at least one output interface to transmit wireless output signals to the external device separate from the apparatus 400 and the first device 510 to display the output signals.
[0050] In certain implementations, the housing 410 of the apparatus 400 is configured to be positioned on and/or over an outer surface of the skin and to hermetically seal the first and second circuitry 430, 440 from an environment surrounding the housing 410. The housing 410 can comprise at least one biocompatible (e.g., skin-friendly) material, examples of which include but are not limited to: metals; plastics; polymer; rubber; silicone; ceramics. The housing 410 can have a width (e.g., along a lateral direction substantially parallel to the recipient’s skin) less than or equal to 40 millimeters (e.g., in a range of 15 millimeters to 35 millimeters; in a range of 25 millimeters to 35 millimeters; in a range of less than 30 millimeters; in a range of 15 millimeters to 30 millimeters). The housing 410 can have a thickness (e.g., in a direction substantially perpendicular to the recipient’s skin) less than or equal to 10 millimeters (e.g., in a range of less than or equal to 7 millimeters, in a range of less than or equal to 6 millimeters; in a range of less than or equal to 5 millimeters).
[0051] In certain implementations, the at least one microphone 420 comprises a diaphragm and an electret or piezoelectric transducer and is configured to be positioned to receive acoustic signals from an environment surrounding the at least one microphone 420. The at least one microphone 420 can be integrated with the housing 410 or can be a separate component from the housing 410. Other types of microphones 420 (e.g., magnetic; dynamic; optical; electromechanical) are also compatible with certain implementations described herein.
[0052] In certain implementations, the first circuitry 430 comprises at least one first communication coil 432 configured to be in wireless communication with at least one implanted communication coil 512 of the first device 510 (e.g., via a wireless transcutaneous magnetic induction communication link while the housing 410 is worn by the recipient). The first circuitry 430 can further comprise wireless communications interface circuitry configured to drive the at least one first communication coil 432 in response to control signals from control circuitry of the apparatus 400 over the transcutaneous magnetic induction communication link between the apparatus 400 and the first device 510.
[0053] In certain implementations, the at least one first communication coil 432 comprises multiple turns of electrically insulated single-strand or multi-strand metal wire (e.g., a planar electrically conductive wire with multiple windings having a substantially circular, rectangular, spiral, or oval shape or other shape) or metal traces on epoxy of a printed circuit board. For example, the first circuitry 430 can comprise at least one magnetic induction (MI) coil 432 in operative communication with at least one MI coil 512 of the first device 510 to form a transcutaneous wireless communication link configured to transfer power and/or data signals between the apparatus 400 and the first device 510.
[0054] In certain implementations, the second circuitry 440 comprises at least one antenna 442 configured to be in wireless communication with at least one implanted antenna 514 of the first device 510 (e.g., via at least one wireless broadcast channel while the housing 410 is remote from the recipient). The second circuitry 440 can further comprise wireless communications interface circuitry configured to drive the at least one antenna 442 in response to control signals from control circuitry of the apparatus 400. For example, the second circuitry 440 can comprise at least one radio-frequency (RF) antenna in operative communication with at least one RF antenna of the first device 510 to form a transcutaneous wireless communication link (e.g., having multiple frequency channels) configured to transfer data signals from the apparatus 400 to the first device 510. The signals transmitted via the at least one antenna 442 can have one or more carrier frequencies in a range of 2 MHz to 6 GHz (e.g., in a range of 2 MHz to 10 MHz; in a range of 10 MHz to 30 MHz; in a range of 30 MHz to 1 GHz; in a range of 1 GHz to 6 GHz; about 5 MHz; about 22.7 MHz; about 2.4 GHz). Examples of wireless communication protocols for the transmission by the second circuitry 440 include, but are not limited to: Auracast™ broadcast audio; Bluetooth® 5.2 LE Audio; FM radio transmission; Roger wireless transmission.
[0055] In certain implementations, the first device 510 comprises a biocompatible housing 516 configured to be positioned beneath the skin, fat, and/or muscular layers and above a bone (e.g., skull) in a portion of the recipient’s body (e.g., the head). The housing 516 of certain implementations comprises at least one material (e.g., polymer; silicone) that is substantially transparent to the electromagnetic signals generated by the apparatus 400 (e.g., by the first circuitry 430 and the second circuitry 440) such that the housing 516 does not substantially interfere with the transmission of the electromagnetic signals between the apparatus 400 and the first device 510. In addition, the first device 510 can comprise a power source (e.g., battery; capacitor; not shown) configured to store power received via the at least one communication coil 512 from an external power source (e.g., the apparatus 400) and to provide at least some of the power to other components of the first device 510. The first device 510 can be configured to operate both with and without the apparatus 400. In certain implementations, the housing 516 is configured to hermetically seal circuitry of the first device 510 (e.g., the at least one communication coil 512, the at least one antenna 514, control circuitry, stimulation circuitry, power source, or other circuitry) from an environment surrounding the housing 516.
[0056] In certain implementations, the first device 510 comprises at least one implanted antenna 514 configured to be in wireless communication with the second circuitry 440 (e.g., comprising at least one antenna 442) of the apparatus 400 (e.g., via at least one wireless broadcast channel while the housing 410 is remote from the recipient). The first device 510 can further comprise wireless communications interface circuitry configured to receive signals from the at least one implanted antenna 514 and to provide the signals to the stimulation circuitry. For example, the at least one implanted antenna 514 can comprise at least one radio-frequency (RF) antenna in operative communication with at least one RF antenna of the apparatus 400 to form a transcutaneous wireless communication link (e.g., having multiple frequency channels) configured to transfer data signals from the apparatus 400 to the first device 510.
[0057] In certain implementations, the control circuitry 450 of the apparatus 400 has at least two operational states. FIG. 5A schematically illustrates an example state diagram 600 of the control circuitry 450 in accordance with certain implementations described herein. In a first operational state 610 (e.g., a proximal state), the apparatus 400 is worn on the recipient’s body, and in a second operational state 620 (e.g., a remote state), the apparatus 400 is remote from (e.g., spaced from) the recipient’s body. The control circuitry 450 can automatically switch from the first operational state 610 to the second operational state 620 in response to the apparatus 400 being removed from the recipient’s body and can automatically switch from the second operational state 620 to the first operational state 610 in response to the apparatus 400 being placed on the recipient’s body. While FIG. 5A shows two operational states 610, 620, certain other implementations include additional operational states (e.g., an off state in which the apparatus 400 is powered off; a calibration state in which at least some components of the apparatus 400 undergo a calibration or conditioning process).
[0058] In certain implementations, the control circuitry 450 is configured to receive user input signals (e.g., via the at least one input interface) placing the control circuitry 450 into either the first operational state 610 or the second operational state 620. In certain other implementations, the apparatus 400 further comprises at least one sensor (e.g., in operable communication with the control circuitry 450) configured to automatically detect whether the housing 410 is worn by the recipient or is remote from the recipient. For example, the at least one sensor can comprise an accelerometer configured to detect movement of the apparatus 400 indicative of being removed from and/or placed on the recipient’s body. For another example, the at least one sensor can comprise a portion of the control circuitry 450 configured to detect a loss (e.g., degradation) and/or re-establishment (e.g., restoration) of the wireless communication link between the first circuitry 430 and the first device 510 (e.g., a coil-off event and/or a coil-on event). For still another example, the apparatus 400 can further comprise an external magnetic element (e.g., ferromagnetic material; permanent magnet) and the first device 510 can comprise an internal magnetic element (e.g., ferromagnetic material; permanent magnet), and the external and internal magnetic elements can be configured to establish a magnetic attraction between them sufficient to hold the apparatus 400 against the outer surface (e.g., skin) of the recipient’s tissue above the first device 510. The at least one sensor can comprise a magnetic sensor configured to detect whether the external magnetic element of the apparatus 400 is experiencing an attractive magnetic force due to proximity to the internal magnetic element of the first device 510 and/or is not experiencing such an attractive magnetic force.
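The automatic switching between the two operational states, driven by the sensor cues described above (the coil-off/coil-on events and the magnetic retention), can be sketched as follows; this is a minimal illustration in Python, and the class, method, and parameter names are hypothetical rather than taken from the source:

```python
from enum import Enum, auto

class OperationalState(Enum):
    PROXIMAL = auto()  # first operational state 610: worn on the recipient's body
    REMOTE = auto()    # second operational state 620: spaced from the recipient's body

class WearDetector:
    """Combines sensor cues into an automatic state decision.

    The two boolean inputs are hypothetical stand-ins for the hardware
    detectors (coil link status and magnetic attraction sensing).
    """

    def __init__(self):
        self.state = OperationalState.PROXIMAL

    def update(self, coil_link_up, magnet_attached):
        # A coil-off event together with loss of magnetic attraction indicates
        # the apparatus was removed; either cue returning indicates placement.
        if coil_link_up or magnet_attached:
            self.state = OperationalState.PROXIMAL
        else:
            self.state = OperationalState.REMOTE
        return self.state
```

In a real device the user input interface would also be able to override this automatic decision, as the paragraph above notes.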
[0059] FIGs. 5B and 5C schematically illustrate an example operation of the control circuitry 450 in the first and second operational states 610, 620, respectively, in accordance with certain implementations described herein. The control circuitry 450 of FIGs. 5B and 5C comprises microphone processing circuitry 452, signal processing circuitry 454, and a switch 456 configured to selectively provide output signals from the signal processing circuitry 454 to either the first circuitry 430 or the second circuitry 440. While FIGs. 5B and 5C show the control circuitry 450 using the same processing circuitry in both the first and second operational states 610, 620, certain other implementations use different processing circuitry in the first operational state 610 as compared to the processing circuitry used in the second operational state 620.
[0060] The at least one microphone 420 generates the information 422 (e.g., data signals) indicative of the sounds received by the at least one microphone 420. The microphone processing circuitry 452 of the control circuitry 450 (e.g., comprising an analog-to-digital converter (ADC), a calibration filter, and automatic gain control (AGC) for each microphone of the at least one microphone 420) can be configured to receive and process the information 422 from the at least one microphone 420 to generate audio signals 453. For example, for an apparatus 400 comprising a front microphone 420a (e.g., configured to generate a portion 422a of the information 422 indicative of sounds coming from in front of the recipient when the apparatus 400 is worn on the recipient’s body) and a back microphone 420b (e.g., configured to generate a portion 422b of the information 422 indicative of sounds coming from in back of the recipient when the apparatus 400 is worn on the recipient’s body), the microphone processing circuitry 452 can receive and process the information 422a, b using beamforming techniques to generate audio signals 453 which include directional information regarding the received sounds. For another example, the at least one microphone 420 can comprise at least one omnidirectional microphone configured to capture sounds substantially equally from all directions.
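The two-microphone beamforming mentioned above can be illustrated with a minimal delay-and-sum sketch, assuming an integer-sample inter-microphone delay; the function name and parameters are illustrative, not from the source:

```python
def delay_and_sum(front, back, delay_samples):
    """Minimal two-microphone delay-and-sum beamformer sketch.

    A sound arriving from in front reaches the front microphone
    `delay_samples` earlier than the back microphone; delaying the front
    signal aligns the two copies so that frontal sounds add coherently
    while sounds from other directions add incoherently and are attenuated.
    """
    out = []
    for n in range(len(back)):
        f = front[n - delay_samples] if n >= delay_samples else 0.0
        out.append(0.5 * (f + back[n]))
    return out
```

A practical implementation would use fractional delays and frequency-domain processing, but the directional principle is the same.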
[0061] The signal processing circuitry 454 of the control circuitry 450 (e.g., comprising gain circuitry and/or compression circuitry) can be configured to receive and process the audio signals 453 to generate output signals 458. For example, the signal processing circuitry 454 can perform frequency-dependent gain and compression to provide output signals 458 that compensate for particular aspects of the recipient’s hearing loss or that conform to the recipient’s preferences. For another example, for an implanted first device 510 comprising a cochlear implant, the signal processing circuitry 454 can perform sound coding processing to provide output signals 458 that convey appropriate stimulation commands to the stimulation assembly (e.g., stimulation unit 120) of the first device 510.
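The frequency-dependent gain and compression described above might be sketched as a static per-band curve; the threshold, ratio, and gain values here are purely illustrative and stand in for the recipient-specific settings the text refers to:

```python
def compress_level(level_db, threshold_db=-20.0, ratio=3.0):
    """Static compression curve sketch: input levels above the threshold
    are reduced by the compression ratio (3:1 here); levels at or below
    the threshold pass through unchanged."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

def process_bands(band_levels_db, band_gains_db):
    """Apply a per-band (frequency-dependent) gain, then compression."""
    return [compress_level(level + gain)
            for level, gain in zip(band_levels_db, band_gains_db)]
```

For a cochlear implant first device 510, a sound coding stage would follow this step to map band levels onto stimulation commands, as the paragraph above notes.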
[0062] As shown in FIG. 5B, in the first (e.g., proximal) operational state 610, by virtue of the apparatus 400 being worn on the recipient’s body, the at least one microphone 420 receives sounds in a region proximal to the recipient (e.g., near the recipient) and the switch 456 transmits the resulting output signals 458 to the first circuitry 430 to be transmitted to the implanted first device 510. As shown in FIG. 5C, in the second (e.g., remote) operational state 620, by virtue of the apparatus 400 being remote from (e.g., spaced from) the recipient’s body, the at least one microphone 420 receives sounds in a region remote from the recipient (e.g., spaced from the recipient) and the switch 456 transmits the resulting output signals 458 to the second circuitry 440 to be transmitted to at least the implanted first device 510. By moving the apparatus 400 away from the implanted first device 510, the first device 510 is left without an external device with which the first device 510 can communicate (e.g., receive power signals, transmit and/or receive data signals, and/or transmit and/or receive control signals) via the at least one implanted communication coil 512. However, the apparatus 400 of certain implementations described herein in the second operational state 620 can instead use the second circuitry 440 to wirelessly, remotely, and directly communicate with the implanted first device 510 via the at least one implanted antenna 514.
[0063] In certain implementations, the microphone processing circuitry 452 in the second operational state 620 utilizes the information 422 from multiple microphones 420 of the apparatus 400 to increase the SNR. For example, the microphone processing circuitry 452 can use beamforming to focus on target source sounds coming from a predetermined direction (e.g., target speaker’s voice when the apparatus 400 is being worn by the target speaker in a predetermined position, such as clipped to a lapel or hanging from the neck of the target speaker). For another example, the microphone processing circuitry 452 can form an omnidirectional pattern to pick up target sounds from all directions (e.g., the apparatus 400 placed on a tabletop around which there are multiple target speakers).
[0064] FIG. 6 schematically illustrates an example first device 510 in accordance with certain implementations described herein. Besides the at least one implanted communication coil 512 and the at least one implanted antenna 514, the first device 510 of FIG. 6 further comprises wireless stream processing circuitry 520, signal processing circuitry 530, and a stimulation assembly 540 (e.g., stimulation unit 120). Either the at least one implanted communication coil 512 can receive the output signals 458 from the first circuitry 430 or the at least one implanted antenna 514 can receive the output signals 458 from the second circuitry 440, and the output signals 458 are provided to the wireless stream processing circuitry 520. For example, the wireless stream processing circuitry 520 can be configured to perform one or more processing operations (e.g., decompressing and/or decoding the output signals 458) and to generate processed audio signals 522. The signal processing circuitry 530 can be configured to receive the processed audio signals 522 and to apply sound coding processing to provide stimulation signals 532 to the stimulation assembly 540, which provides the stimulation signals 532 as electrical and/or vibrational stimulus to the recipient’s body to evoke a hearing percept.
[0065] In certain implementations, the apparatus 400 in the second operational state 620 is configured to be used as a wireless accessory (e.g., remote microphone; mini microphone; microphone comprising an FM transmitter) configured to receive sounds from a region spaced from the recipient and to transmit output signals 458 to the implanted first device 510. This functionality is provided by the apparatus 400 of certain implementations without using additional devices beyond the external component and the implanted component of the auditory prosthesis system (e.g., providing improved convenience and less complexity over systems which utilize a separate microphone-based transmission device for such functionality; reducing burdens on clinicians by allowing clinicians to support simpler auditory prosthesis systems with fewer devices; reducing costs to users, insurance companies, and government support programs by reducing the number of devices to be used).
[0066] For example, for a recipient in a noisy environment (e.g., containing multiple sound sources), the ability of the recipient to understand speech from a target sound source (e.g., a conversation partner; a speaker to a group of people such as a lecturer or teacher) can depend on various factors, including but not limited to the relative amplitude of the speech and the other sounds from other sound sources, which can be quantified as a signal-to-noise ratio (SNR) in which the speech from the target sound source corresponds to the signal and the other sounds (e.g., from the other sound sources) correspond to the noise. In certain implementations, the apparatus 400 in the second operational state 620 can be placed in proximity to the target sound source (e.g., closer to the target sound source than is the first device 510 and/or recipient) thereby increasing the SNR for the sound from the target sound source. For example, the apparatus 400 can be worn by the target sound source (e.g., a teacher in a classroom wearing the apparatus 400 on a lanyard around their neck or attached to their clothing), with the at least one microphone 420 detecting the sounds from the target sound source and transmitting corresponding output signals 458 to the implanted first device 510 (e.g., an implanted auditory prosthesis of a hearing-impaired student).
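The SNR improvement from placing the microphone near the target sound source can be illustrated numerically with a free-field inverse-distance sketch; the distances, reference level, and noise level below are assumptions chosen only for illustration:

```python
import math

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in decibels for RMS amplitudes."""
    return 20.0 * math.log10(signal_rms / noise_rms)

def speech_rms_at(distance_m, rms_at_1m=1.0):
    # Free-field inverse-distance law for sound pressure: each doubling of
    # distance reduces the received amplitude by about 6 dB (illustrative).
    return rms_at_1m / distance_m
```

Under this model, moving the microphone from 4 m away (roughly a student's seat) to 0.25 m from the talker (apparatus worn on a lanyard) raises the speech level by about 24 dB relative to a diffuse noise floor, which is the benefit the paragraph above describes.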
[0067] In contrast to other systems that remotely transmit to an external device worn by the recipient and in proximity to the first device 510 (e.g., an external sound processor positioned over, and in operable communication with, an implanted stimulator), certain implementations described herein wirelessly, remotely, and directly communicate with the implanted first device 510. In addition, in contrast to the other systems, certain implementations described herein provide a remote microphone capability by streaming audio information to the implanted first device 510 and utilizing the first device 510 to provide a hearing percept to the recipient, rather than leaving the first device 510 to remain idle while the apparatus 400 is in the second operational state 620 (e.g., which could otherwise result in a degradation in hearing performance from the temporary loss of use of the first device 510).
[0068] For example, the apparatus 400 and first device 510 can comprise an external sound processor and an implanted stimulation assembly, respectively, of a unilateral auditory prosthesis system or of a bilateral auditory prosthesis system. At selected times (e.g., normal or proximal operation), the apparatus 400 can be worn on the recipient’s body over the first device 510 to transcutaneously communicate the information 422 to the first device 510 via the first circuitry 430 (e.g., the at least one first communication coil 432) and the at least one implanted communication coil 512. Alternatively, at other times (e.g., remote operation), the apparatus 400 can be selectively placed close to the target sound source, to be used as a remote microphone that transcutaneously communicates (e.g., broadcasts; streams) the information 422 to the first device 510 via the second circuitry 440 (e.g., the at least one antenna 442) and the at least one implanted antenna 514. In this way, the apparatus 400 and the first device 510 can provide the utility of a remote microphone without a separate microphone-based transmission device (e.g., providing the improved convenience of not having to track such an additional device and the cost savings of not having to purchase such an additional device).
[0069] For a recipient wearing an auditory second device 710 besides the apparatus 400 (e.g., a component of a bilateral auditory prosthesis system) that is compatible with the wireless communication protocol of the apparatus 400, the apparatus 400 can transcutaneously broadcast or stream the information 422 to the second device 710. For example, the second device 710 can comprise an implanted component (e.g., an implanted stimulation assembly in wireless communication with another external sound processor) and/or an externally worn component (e.g., an external sound processor worn on the recipient’s body over and in wireless communication with another implanted stimulation assembly; an externally worn auditory prosthesis without an implanted component, such as an externally worn or ITE hearing aid), and the information 422 can be wirelessly broadcast directly to the implanted component and/or the externally worn component.
[0070] In certain implementations, the apparatus 400 is configured to be used as a remote microphone for broadcasting (e.g., streaming) audio information to multiple other devices (e.g., worn by other people). For example, besides being configured to wirelessly transmit the information 422 to the first device 510 while the apparatus 400 is remote from the recipient, the second circuitry 440 can be further configured to wirelessly transmit the information 422 to at least one second device 710 implanted on or within the recipient, worn by the recipient, implanted on or within another recipient, or worn by another recipient. Certain such implementations can be used under conditions in which there are multiple people that are using devices compatible with the wireless broadcast protocol of the apparatus 400 and who wish to listen to sounds from the same target sound source in proximity to the apparatus 400 (e.g., avoiding the need for the target speaker to wear multiple apparatuses 400 from the multiple people).
[0071] FIG. 7 schematically illustrates an example usage of an apparatus 400 in accordance with certain implementations described herein. The apparatus 400 is configured to be worn by a recipient 720 and, while in the first operational state 610, to be in wireless communication with a first device 510 (e.g., cochlear implant) via the first circuitry 430. The apparatus 400 is further configured to be placed remotely from the recipient’s body and, while in the second operational state 620, to be in wireless communication with the first device 510 via the second circuitry 440. As shown in FIG. 7, the apparatus 400 can be removed from the recipient 720 (e.g., a student in a classroom) and placed in proximity to a target sound source 730 (e.g., a teacher or lecturer in the classroom) where the at least one microphone 420 of the apparatus 400 can receive the target sounds 732 (e.g., the teacher’s voice) with more clarity (e.g., higher magnitude; less noise contributions; higher SNR) than while the apparatus 400 is worn by the recipient 720. In certain such implementations, the apparatus 400 can provide the functionality of sharing the broadcasted signals with multiple other users who may benefit from such access.
[0072] As shown in FIG. 7, besides wirelessly broadcasting signals that include the information 422 indicative of the target sounds 732 to the first device 510, the apparatus 400 can wirelessly broadcast the signals to an auditory second device 710 of the recipient, the second device 710 (e.g., an implanted component and/or an external component of an auditory prosthesis system; an externally worn or ITE hearing aid, wireless speaker, or consumer wireless earbud) configured to receive the wirelessly broadcast signals. In this way, both auditory prosthesis devices of the recipient 720 can contribute to the evoked hearing percept of the recipient 720. In addition, the apparatus 400 can wirelessly broadcast the signals including the information 422 to other auditory devices 740 of other people 750 (e.g., other students in the classroom) besides the recipient 720. These other auditory devices 740 can be components of unilateral auditory prosthesis systems, bilateral auditory prosthesis systems, or externally worn speakers or earbuds of the other people 750 and can include an operational state in which the auditory device 740 responds to the information 422 received from the apparatus 400 to cause the respective person 750 to hear the target sounds 732.
[0073] In certain implementations, an auditory device 740 can be configured to detect the presence of an available broadcast from the apparatus 400 and to notify the respective person 750 accordingly (e.g., via an audible alert generated by the auditory device 740). For example, the auditory device 740 can be responsive to at least one attribute or information (e.g., coding; metadata) contained in the signals broadcasted by the apparatus 400 by generating the notification or alert to the person 750.
In certain other implementations, people 750 can be notified of the presence of an available broadcast from the apparatus 400 by an audible or visible notification from a broadcast assistant device dedicated to such notifications (e.g., an Auracast™ broadcast assistant device) or from an application running on a general-purpose device (e.g., smart phone, smart tablet, smart watch; other computing device). Upon being informed of the presence of an available broadcast, a person 750 can select to listen to the broadcast from the apparatus 400 by placing the auditory device 740 in an appropriate operational state to respond to the information 422 received from the apparatus 400 to cause the respective person 750 to hear the target sounds 732. In certain implementations, the auditory device 740 can cycle through available broadcasts (e.g., in response to a signal from a user interface of the auditory device 740) and the person 750 can select (e.g., via the user interface) a broadcast to be provided by the auditory device 740. In certain implementations, the information 422 broadcasted by the apparatus 400 is encrypted (e.g., to protect the signals from being accessed by unauthorized users) and the people 750 seeking to listen to the broadcasted signals provide their devices 740 with an appropriate decryption key or password to access the broadcasted signals.
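The broadcast discovery, cycling, and key-protected selection described above can be sketched as follows; this is a hypothetical model in which the class names and the key-checking behavior are illustrative, not the actual Auracast™ or Bluetooth® LE Audio procedures:

```python
class Broadcast:
    """Hypothetical descriptor for an available audio broadcast."""
    def __init__(self, name, encrypted=False, key=None):
        self.name = name
        self.encrypted = encrypted
        self._key = key  # required to join when `encrypted` is True

class AuditoryDevice:
    """Sketch of an auditory device 740 cycling through available
    broadcasts and joining one, supplying a key when the stream is
    encrypted (per the paragraph above)."""
    def __init__(self, broadcasts):
        self.broadcasts = broadcasts
        self.index = 0
        self.joined = None

    def next_broadcast(self):
        # Cycle to the next available broadcast (wraps around), as when the
        # person presses a user-interface control.
        self.index = (self.index + 1) % len(self.broadcasts)
        return self.broadcasts[self.index]

    def join(self, key=None):
        b = self.broadcasts[self.index]
        if b.encrypted and key != b._key:
            return False  # wrong or missing decryption key/password
        self.joined = b
        return True
```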
[0074] FIG. 8 is a flow diagram of an example method 800 in accordance with certain implementations described herein. While the method 800 is described by referring to some of the structures of the example apparatus 400 and first device 510 described herein, other apparatus and systems with other configurations of components can also be used to perform the method 800 in accordance with certain implementations described herein.
[0075] In an operational block 810, the method 800 comprises providing a first sound processor (e.g., apparatus 400) configured to receive sound and to generate signals indicative of the sound.
[0076] In an operational block 820, the method 800 further comprises, in response to receiving a first control signal (e.g., an indication that the apparatus 400 is being worn on the recipient’s body), placing the first sound processor in a first operational mode (e.g., first operational state 610) in which the first sound processor is configured to transmit the signals to only a first stimulation assembly (e.g., first device 510) implanted on a recipient’s body (e.g., via the first circuitry 430). In the first operational mode, the first sound processor and the first stimulation assembly provide a hearing percept to the recipient.
[0077] In an operational block 830, the method 800 further comprises, in response to receiving a second control signal (e.g., an indication that the apparatus 400 is remote from the recipient’s body), placing the first sound processor in a second operational mode (e.g., second operational state 620) in which the first sound processor is configured to transmit the signals to the first stimulation assembly and to at least one second device (e.g., second device 710; auditory device 740). In the second operational mode, the first sound processor and the first stimulation assembly provide a hearing percept to the recipient, and the first sound processor and the at least one second device provide a hearing percept to at least one person on which the at least one second device is implanted or worn.
[0078] For example, the at least one second device can comprise at least one of: a second sound processor worn on the recipient’s body; a second stimulation assembly implanted on or within the recipient’s body; a third sound processor worn on another person’s body; a third stimulation assembly implanted on or within another person’s body. In certain implementations, the first control signal and/or the second control signal is generated by a user input interface of the first sound processor. In certain other implementations, the first control signal is generated by at least one sensor of the first sound processor in response at least in part to the at least one sensor detecting that the first sound processor is on the recipient’s body and the second control signal is generated by the at least one sensor in response at least in part to the at least one sensor detecting that the first sound processor is not on the recipient’s body.
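The routing difference between the two operational modes of method 800 can be sketched as follows; this is a minimal illustration in which device objects are stood in by lists and the mode labels are assumptions:

```python
def route_output_signals(mode, signal, first_stimulation_assembly, second_devices):
    """Sketch of transmit routing in method 800: in the first mode the
    signals go only to the implanted first stimulation assembly (block 820);
    in the second mode they also go to each second device (block 830).
    Returns the number of devices that received the signal."""
    targets = [first_stimulation_assembly]
    if mode == "remote":  # second operational mode
        targets.extend(second_devices)
    for device in targets:
        device.append(signal)  # stand-in for a wireless transmission
    return len(targets)
```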
[0079] FIG. 9A schematically illustrates an example apparatus 900 in accordance with certain implementations described herein and FIG. 9B schematically illustrates an example state diagram 930 of the circuitry 910 in accordance with certain implementations described herein. The apparatus 900 is configured to be used as a remote microphone (e.g., to receive sound and to generate audio signals indicative of the sound) and to collect a target sound sample from the target sound source to be later used in processing of the detected sounds within the environment to selectively enhance or accentuate the target sounds within the detected sounds (e.g., using the target sound sample to calibrate the output of the auditory prosthesis system).
[0080] For example, for a conversation with someone (e.g., conversation partner) in a noisy environment (e.g., in which it can be difficult to otherwise understand the speech of the conversation partner), the recipient can remove the apparatus 900 from the recipient’s body and ask the conversation partner to hold the apparatus 900 close to their mouth and to speak into it for a short time (e.g., in a range of 10 seconds to 30 seconds), during which the apparatus 900 captures features of the conversation partner’s voice. By virtue of being closer to the target sound source than when worn on the recipient’s body, the sound received by the apparatus 900 has a higher contribution from the target sound source than other sources (e.g., the SNR of the conversation partner’s voice when the apparatus 900 is close to the conversation partner is higher than when the apparatus 900 is on the recipient’s body). Upon the recipient returning the apparatus 900 to the recipient’s body, the apparatus 900 can use the captured features to enhance the conversation partner’s voice for the remainder of the conversation. In this way, the collected target sound sample from the target sound source can be used by the apparatus 900 to process (e.g., filter) the detected sounds during the conversation to increase the SNR of the conversation partner’s voice (e.g., increase a magnitude of the conversation partner’s voice; decrease a background noise contribution). The increased SNR of the conversation partner’s voice can be obtained throughout the conversation, regardless of whether the apparatus 900 remains in proximity to the conversation partner or whether the apparatus 900 is returned to being worn by the recipient.
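The use of a captured target sound sample to enhance the conversation partner's voice could be sketched, under the simplifying assumption that the sample is reduced to per-band energies, as follows; the function names and the gain-floor value are illustrative, not from the source:

```python
def capture_profile(sample_band_energies):
    """Normalize the per-band energies of the captured voice sample into a
    spectral profile, with the strongest band scaled to 1.0."""
    peak = max(sample_band_energies)
    return [e / peak for e in sample_band_energies]

def enhance(mix_band_energies, profile, floor=0.25):
    """Weight each band of the noisy mixture by how strongly the target
    voice occupies that band, keeping a gain floor so that other bands are
    attenuated rather than removed entirely (the floor is illustrative)."""
    return [m * (floor + (1.0 - floor) * p)
            for m, p in zip(mix_band_energies, profile)]
```

A production system would likely use a more elaborate model of the talker's voice (e.g., a trained speaker embedding), but the principle of biasing the filter toward the captured sample is the same.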
[0081] While the apparatus 900 and the state diagram 930 are described herein with regard to the apparatus 400 and its components and states, other devices, components, and states are also compatible with certain implementations described herein. The apparatus 900 of certain implementations described herein comprises the same components and functionality as does the apparatus 400 as described herein, while in certain other implementations, the apparatus 900 does not have the functionality of transcutaneously and wirelessly communicating with a device implanted within the recipient’s body and/or the functionality of broadcasting the information 422 to multiple devices as described herein.
[0082] The example apparatus 900 of FIG. 9A comprises at least one microphone 420 configured to receive sound and to generate audio signals (e.g., information 422) indicative of the sound. The apparatus 900 further comprises circuitry 910 (e.g., control circuitry 450) configured to receive the audio signals from the at least one microphone 420 and to generate processed audio signals 912. As shown in FIG. 9A, the apparatus 900 can further comprise an output device 920 configured to receive the processed audio signals 912 and to provide information regarding the processed audio signals 912 to the recipient. For example, the output device 920 can comprise first circuitry 430 configured to be in transcutaneous wireless communication with an implanted first device 510 configured to provide stimulation signals to the recipient. For another example, the output device 920 can comprise at least one electroacoustic transducer (e.g., acoustic speaker) in operative communication with the circuitry 910 and configured to respond to the processed audio signals 912 by providing sounds to the recipient (e.g., the apparatus 900 comprising an externally worn or ITE hearing aid or a consumer wireless earbud).
[0083] The state diagram 930 comprises a default state 932, a first operational state 934, and a second operational state 936. Upon being powered up and worn on the recipient’s body, the circuitry 910 of the apparatus 900 can initially be in the default operational state 932. For example, the default operational state 932 can be the first operational state 610 (e.g., proximal state) as described herein with regard to FIG. 5A.
[0084] With the apparatus 900 removed from the recipient’s body, the circuitry 910 can switch from the default operational state 932 to the first operational state 934 (e.g., a target capture state). For example, the circuitry 910 can be switched to the first operational state 934 in response to the circuitry 910 detecting that the apparatus 900 has been removed from the recipient’s body (e.g., detected automatically, in response to a signal from at least one sensor indicative of the removal of the apparatus 900) and/or in response to a user input command (e.g., manually). With the apparatus 900 being returned to be worn on the recipient’s body, the circuitry 910 can switch from the first operational state 934 to the second operational state 936 (e.g., target enhancement state). For example, the circuitry 910 can be switched to the second operational state 936 in response to the circuitry 910 detecting that the apparatus 900 has been returned to the recipient’s body (e.g., detected automatically, in response to a signal from at least one sensor indicative of the return of the apparatus 900) and/or in response to a user input command (e.g., manually). The circuitry 910 can return to the default state 932 automatically (e.g., upon detecting that the target sounds have been absent for a predetermined period of time indicative of the conversation being over, such as one minute) or manually (e.g., in response to a user input command or the recipient removing the apparatus 900 from the recipient’s body and immediately replacing the apparatus 900 onto the recipient’s body within a predetermined period of time, such as within two seconds).
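By way of illustration, the state diagram 930 and the transitions described above can be sketched as a small state machine; the class, state, and method names below are illustrative choices and not part of the disclosed apparatus:

```python
from enum import Enum, auto

class State(Enum):
    DEFAULT = auto()          # 932: worn on the body, normal processing
    TARGET_CAPTURE = auto()   # 934: removed from the body, sampling the target voice
    TARGET_ENHANCE = auto()   # 936: re-worn, enhancing using the stored features

class ApparatusStateMachine:
    """Sketch of state diagram 930: on-body / off-body events (from a
    sensor or a user input command) drive the transitions described above."""
    def __init__(self):
        self.state = State.DEFAULT

    def on_removed(self):
        # Removal from the body switches default -> target capture
        if self.state == State.DEFAULT:
            self.state = State.TARGET_CAPTURE

    def on_worn(self):
        # Return to the body switches target capture -> target enhancement
        if self.state == State.TARGET_CAPTURE:
            self.state = State.TARGET_ENHANCE

    def on_conversation_over(self):
        # e.g., target sounds absent for ~1 minute, or a user command
        self.state = State.DEFAULT
```

Any event not listed for the current state is ignored, matching the diagram's directed transitions.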
[0085] FIGs. 10A and 10B schematically illustrate an example operation of the circuitry 910 in the first operational state 934 and the second operational state 936, respectively, in accordance with certain implementations described herein. The circuitry 910 of FIG. 10A comprises microphone processing circuitry 452, target enhancement circuitry 914, and feature storage circuitry 916 (e.g., at least one storage device) in operable communication with the target enhancement circuitry 914. The target enhancement circuitry 914 can comprise the signal processing circuitry 454 (e.g., as described with regard to FIGs. 5B and 5C) or can be separate from, but in operative communication with, the signal processing circuitry 454.
[0086] While in the first operational state 934, the circuitry 910 is configured to receive (e.g., collect; store) and process a sample portion of the sound received by the at least one microphone 420 from the target sound source (e.g., while the at least one microphone 420 is positioned in proximity to the target sound source and remotely from the recipient). As shown in FIG. 10A, a sample portion 422a of the information 422 from the at least one microphone 420 is received by the microphone processing circuitry 452 of the circuitry 910, which processes the sample portion 422a to generate sample audio signals 453a (e.g., as described herein with regard to FIG. 5B). The target enhancement circuitry 914 analyzes the sample audio signals 453a and extracts a set of features 915 indicative of the target sound source from which the sample portion of the sound was received, and the set of features 915 is stored in the feature storage circuitry 916. Examples of features 915 extracted from the sample audio signals 453a include, but are not limited to: a range of fundamental frequencies (F0); a range of formant frequencies; an estimate of the vocal tract length of the target sound source; a syllable rate; mel-frequency cepstral coefficients (MFCCs); or other characteristics of the sample portion of the sound. In certain implementations, the target enhancement circuitry 914 can utilize machine learning processes (e.g., “i-vector” analysis; “d-vector” analysis) to extract the features 915 that are indicative of the target sound source (see, e.g., N. Dehak et al., “Front-end factor analysis for speaker verification,” IEEE Trans. Audio, Speech, and Lang. Process., Vol. 19, No. 4, pp. 788-798 (2010); L. Wan et al., “Generalized end-to-end loss for speaker verification,” Int’l Conf. Acoust., Speech and Signal Process. (ICASSP), IEEE, pp. 4879-4883 (2018)).
In this way, the circuitry 910 can capture (e.g., extract) and store the features 915 of the target sound from the target sound source for later use in the second operational state 936. In certain implementations, while in the first operational state 934, the apparatus 900 performs the remote functionality described herein with regard to FIGs. 5A and 5C. For example, the apparatus 900 can transmit the output signals 458 via the second circuitry 440 to the first device 510, to a second device 710, and/or to at least one auditory device 740.
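As an illustrative sketch of the capture-state feature extraction described above, a single autocorrelation-based estimate of the F0 range can stand in for the fuller feature set 915; the function names, frame sizes, and the crude energy-based voicing gate are assumptions made for brevity:

```python
import numpy as np

def estimate_f0(frame, fs, fmin=80.0, fmax=400.0):
    """Estimate the fundamental frequency of one voiced frame by locating
    the autocorrelation peak within the typical voice pitch range."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)       # lag bounds for fmin..fmax
    lag = lo + np.argmax(ac[lo:hi + 1])
    return fs / lag

def extract_features(signal, fs, frame_len=1024, hop=512):
    """Capture-state sketch: the F0 range over voiced frames, a stand-in
    for the richer feature set 915 (formants, syllable rate, MFCCs, ...)."""
    f0s = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        if np.sqrt(np.mean(frame ** 2)) > 0.01:   # crude voicing gate
            f0s.append(estimate_f0(frame, fs))
    return {"f0_min": min(f0s), "f0_max": max(f0s)}
```

The returned dictionary plays the role of the stored features 915: computed once while the microphone is near the target talker, then held for use in the enhancement state.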
[0087] While in the second operational state 936, the circuitry 910 is configured to receive further sounds (e.g., while the at least one microphone 420 is positioned on the recipient’s body) and to use the stored features 915 of the target sound source to process the further sounds to generate enhanced audio signals 917 (e.g., in which the target sounds are enhanced and/or noise contributions are reduced). As shown in FIG. 10B, the portion 422b of the information 422 from the at least one microphone 420 is received by the microphone processing circuitry 452 of the circuitry 910, which processes the portion 422b to generate audio signals 453b (e.g., as described herein with regard to FIG. 5B). The target enhancement circuitry 914 accesses the stored set of features 915 from the feature storage circuitry 916 and processes the audio signals 453b using the stored set of features 915 to generate the enhanced audio signals 917 (e.g., using noise reduction and/or speech enhancement processes).
[0088] FIG. 11 schematically illustrates an example operation of the target enhancement circuitry 914 in accordance with certain implementations described herein. The target enhancement circuitry 914 comprises source separation circuitry 940 (e.g., blind source separation circuitry) configured to separate the audio signals 453b received from the microphone processing circuitry 452 into two or more source signals 942 (FIG. 11 shows three source signals 942 as an example), each source signal 942 corresponding to sound of the audio signals 453b detected to be from a separate source, one of which is expected to be the target sound source. The target enhancement circuitry 914 further comprises source merger circuitry 950 configured to extract a set of source features 952 from each of the source signals 942 (e.g., sets 952a, 952b, 952c, one set for each of the three source signals). For example, the source merger circuitry 950 can use the same circuitry and/or process as used by the target enhancement circuitry 914 in the first operational state 934 (e.g., capture state) to extract the set of features 915 from the sample audio signals 453a. The source merger circuitry 950 can be further configured to determine which of the source signals 942 most closely matches the target sound source (e.g., by comparing the sets of source features 952 to the stored target features 915). The source merger circuitry 950 can be further configured to generate the enhanced audio signals 917 in a manner that enhances the target sounds within the enhanced audio signals 917. For example, the source merger circuitry 950 can output the source signal 942 with source features 952 that most closely match the stored target features 915.
For another example, the source merger circuitry 950 can recombine (e.g., merge) some or all of the source signals 942 into enhanced audio signals 917 (e.g., by applying gains to process each of the source signals 942, each gain amplifying/reducing the source signal 942 dependent upon the degree of similarity of the corresponding set of source features 952 with the stored target features 915, and then summing the processed source signals 942).
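A minimal sketch of the similarity-weighted merge described in the preceding example follows; the cosine-similarity metric, the gain floor, and the function names are illustrative assumptions rather than elements prescribed by the disclosure:

```python
import numpy as np

def merge_sources(sources, source_feats, target_feats, floor=0.1):
    """Enhancement-state sketch of the source merger circuitry 950:
    weight each separated source signal 942 by the similarity of its
    feature set 952 to the stored target features 915, then sum."""
    sims = np.array([
        np.dot(f, target_feats) /
        (np.linalg.norm(f) * np.linalg.norm(target_feats) + 1e-12)
        for f in source_feats
    ])
    gains = np.clip(sims, floor, 1.0)   # attenuate mismatched sources, never fully mute
    return np.sum([g * s for g, s in zip(gains, sources)], axis=0)
```

A source whose features match the stored target features passes through at full gain, while dissimilar sources are attenuated toward the floor, approximating the gain-per-source recombination described above.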
[0089] FIG. 12 schematically illustrates another example operation of the target enhancement circuitry 914 in accordance with certain implementations described herein. The target enhancement circuitry 914 can comprise filterbank circuitry 1010 configured to split the audio signals 453b received from the microphone processing circuitry 452 into multiple channels 1012 (e.g., bands), each channel 1012 corresponding to a band of frequencies of the audio signals 453b. For example, the channels 1012 can be determined by the filterbank circuitry 1010 applying a short time Fourier transform (STFT) (e.g., using a fast Fourier transform (FFT) algorithm) to the audio signals 453b.
[0090] The target enhancement circuitry 914 can further comprise gain calculation circuitry 1020 configured to receive the channels 1012 from the filterbank circuitry 1010, to receive the stored target features 915 from the feature storage circuitry 916, and to calculate gains 1022 to be applied to each of the channels 1012, the gains 1022 optimized based on the stored target features 915. For example, the gain calculation circuitry 1020 can determine that the target sounds from the target sound source are temporarily absent (e.g., due to the conversation partners taking turns to speak) and can temporarily apply more attenuation to at least some of the channels 1012. A time-varying gain mask or gain ratio can be calculated using an estimate of the SNR in each time-frequency sample (see, e.g., P.W. Dawson et al., “Clinical evaluation of signal-to-noise ratio based noise reduction in Nucleus cochlear-implant recipients,” Ear Hear., Vol. 32, pp. 382-390 (2011)).
[0091] The target enhancement circuitry 914 can further comprise gain application circuitry 1030 configured to receive the channels 1012 from the filterbank circuitry 1010 and the gains 1022 from the gain calculation circuitry 1020 and to apply the gains 1022 to the channels 1012, resulting in the enhanced audio signals 917 provided to the output device 920.
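The filterbank / gain-calculation / gain-application chain of FIG. 12 can be sketched as follows, using an FFT-based filterbank and a Wiener-style gain per channel derived from a per-bin SNR estimate; the known noise power spectrum and the specific gain rule are assumptions made here for brevity:

```python
import numpy as np

def apply_gain_mask(x, noise_psd, frame=256):
    """Sketch of filterbank circuitry 1010 (windowed FFT), gain
    calculation circuitry 1020 (SNR-based Wiener-style gain per
    channel), and gain application circuitry 1030 (overlap-add),
    assuming the noise power spectrum noise_psd is known."""
    out = np.zeros_like(x)
    win = np.hanning(frame)
    for start in range(0, len(x) - frame + 1, frame // 2):  # 50% overlap
        seg = x[start:start + frame] * win
        spec = np.fft.rfft(seg)                              # channels 1012
        snr = np.maximum(np.abs(spec) ** 2 / (noise_psd + 1e-12) - 1.0, 0.0)
        gain = snr / (snr + 1.0)                             # gains 1022 (Wiener rule)
        out[start:start + frame] += np.fft.irfft(gain * spec, n=frame) * win
    return out
```

Channels dominated by noise receive gains near zero, while channels where the estimated SNR is high pass through nearly unchanged, which is the time-frequency gain-mask behavior described in paragraph [0090].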
[0092] In certain implementations, the circuitry 910 is configured to use machine learning processes, such as deep neural networks (DNNs), to separate the target sounds from the sounds from the environment (e.g., containing multiple voices and background noise) detected by the at least one microphone 420. For example, the circuitry 910 can obtain reference samples of the recipient’s own voice, either during the first operational state 934 (e.g., capture state) and/or the second operational state 936 (e.g., enhancement state), and the reference samples can be used by the circuitry 910 in the separation process. See, e.g., Q. Wang et al., “VoiceFilter: Targeted voice separation by speaker-conditioned spectrogram masking,” Proc. Interspeech, pp. 2728-2732 (2019); Q. Wang et al., “VoiceFilter-Lite: Streaming Targeted Voice Separation for On-Device Speech Recognition,” doi: 10.48550/arXiv.2009.04323 (2020). In certain implementations, the apparatus 900 performs the DNN training, while in certain other implementations (e.g., in which the apparatus 900 has insufficient processing capability to train the DNN), the apparatus 900 transmits the reference samples to another device with sufficient processing capability (e.g., smart phone; smart tablet; networked computing device) to perform the training.
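A deliberately simplified schematic of speaker-conditioned masking in the VoiceFilter style follows; here a single linear layer stands in for the trained DNN, and the weights w and b are assumed to be pre-trained values supplied by the training device:

```python
import numpy as np

def conditioned_mask(spec_frames, speaker_vec, w, b):
    """Schematic of speaker-conditioned spectrogram masking: each
    magnitude-spectrogram frame is concatenated with a fixed speaker
    embedding (e.g., a d-vector) and mapped to a per-bin mask in
    [0, 1], which is applied multiplicatively to the frame."""
    n_frames, n_bins = spec_frames.shape
    cond = np.hstack([spec_frames,
                      np.tile(speaker_vec, (n_frames, 1))])   # condition on speaker
    mask = 1.0 / (1.0 + np.exp(-(cond @ w + b)))              # sigmoid -> [0, 1]
    return mask * spec_frames                                 # masked spectrogram
```

In a full system the linear map would be replaced by the trained network, but the data flow, conditioning every frame on the same stored speaker embedding, is the point of the sketch.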
[0093] Although commonly used terms are used to describe the systems and methods of certain implementations for ease of understanding, these terms are used herein to have their broadest reasonable interpretations. Although various aspects of the disclosure are described with regard to illustrative examples and implementations, the disclosed examples and implementations should not be construed as limiting. Conditional language, such as, among others, "can," "could," "might," or "may," unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain implementations include, while other implementations do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more implementations or that one or more implementations necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular implementation. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a nonexclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced.
[0094] It is to be appreciated that the implementations disclosed herein are not mutually exclusive and may be combined with one another in various arrangements. In addition, although the disclosed methods and apparatuses have largely been described in the context of various devices, various implementations described herein can be incorporated in a variety of other suitable devices, methods, and contexts. More generally, as can be appreciated, certain implementations described herein can be used in a variety of implantable medical device contexts that can benefit from certain attributes described herein.
[0095] Language of degree, as used herein, such as the terms “approximately,” “about,” “generally,” and “substantially,” represents a value, amount, or characteristic close to the stated value, amount, or characteristic that still performs a desired function or achieves a desired result. For example, the terms “approximately,” “about,” “generally,” and “substantially” may refer to an amount that is within ± 10% of, within ± 5% of, within ± 2% of, within ± 1% of, or within ± 0.1% of the stated amount. As another example, the terms “generally parallel” and “substantially parallel” refer to a value, amount, or characteristic that departs from exactly parallel by ± 10 degrees, by ± 5 degrees, by ± 2 degrees, by ± 1 degree, or by ± 0.1 degree, and the terms “generally perpendicular” and “substantially perpendicular” refer to a value, amount, or characteristic that departs from exactly perpendicular by ± 10 degrees, by ± 5 degrees, by ± 2 degrees, by ± 1 degree, or by ± 0.1 degree. The ranges disclosed herein also encompass any and all overlap, sub-ranges, and combinations thereof. Language such as “up to,” “at least,” “greater than,” “less than,” “between,” and the like includes the number recited. As used herein, the meaning of “a,” “an,” and “said” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “into” and “on,” unless the context clearly dictates otherwise.
[0096] While the methods and systems are discussed herein in terms of elements labeled by ordinal adjectives (e.g., first, second, etc.), the ordinal adjectives are used merely as labels to distinguish one element from another (e.g., one signal from another or one circuit from another), and the ordinal adjectives are not used to denote an order of these elements or of their use.
[0097] The invention described and claimed herein is not to be limited in scope by the specific example implementations herein disclosed, since these implementations are intended as illustrations, and not limitations, of several aspects of the invention. Any equivalent implementations are intended to be within the scope of this invention. Indeed, various modifications of the invention in form and detail, in addition to those shown and described herein, will become apparent to those skilled in the art from the foregoing description. Such modifications are also intended to fall within the scope of the claims. The breadth and scope of the invention should not be limited by any of the example implementations disclosed herein but should be defined only in accordance with the claims and their equivalents.

Claims

WHAT IS CLAIMED IS:
1. An apparatus comprising: at least one microphone configured to receive sound and to generate audio signals indicative of the sound; and circuitry configured to receive the audio signals from the at least one microphone, the circuitry having a plurality of operational states comprising: a first operational state in which the circuitry collects a sample portion of the audio signals, the sample portion indicative of a sound sample received by the at least one microphone from a target sound source while the at least one microphone is positioned in proximity to the target sound source; and a second operational state in which the circuitry uses the sample portion of the audio signals to process further audio signals received by the circuitry subsequently to receiving the sample portion.
2. The apparatus of claim 1, wherein the circuitry in the second operational state uses the sample portion to enhance a portion of the further audio signals corresponding to sounds from the target sound source.
3. The apparatus of claim 1 or claim 2, wherein the circuitry comprises communication circuitry configured to wirelessly transmit information indicative of the audio signals to an auditory processing system configured to provide a hearing percept to a recipient.
4. The apparatus of claim 1 or claim 2, further comprising at least one electroacoustic transducer in operative communication with the circuitry, the at least one electroacoustic transducer configured to respond to the further audio signals processed by the circuitry and to provide sounds to a recipient.
5. The apparatus of claim 3 or claim 4, wherein, in the first operational state, the circuitry collects the sample portion while the at least one microphone is positioned remotely from the recipient and, in the second operational state, the circuitry uses the sample portion while the at least one microphone is positioned on the recipient.
6. An apparatus comprising: a housing configured to be worn by a recipient; at least one microphone on or within the housing, the at least one microphone configured to receive sound and to generate information indicative of the sound; first circuitry on or within the housing, the first circuitry configured to wirelessly transmit the information to a first device implanted on or within the recipient while the housing is worn by the recipient; and second circuitry on or within the housing, the second circuitry configured to wirelessly transmit the information to at least the first device while the housing is remote from the recipient.
7. The apparatus of claim 6, wherein the first circuitry is configured to be in wireless communication with the first device via transcutaneous magnetic induction while the housing is worn by the recipient.
8. The apparatus of claim 7, wherein the first circuitry comprises a first communication coil and the first device comprises an implanted communication coil.
9. The apparatus of any of claims 6 to 8, wherein the second circuitry is configured to be in wireless communication with at least the first device via at least one wireless broadcast channel while the housing is remote from the recipient.
10. The apparatus of claim 9, wherein the second circuitry comprises an antenna and the first device comprises an implanted antenna.
11. The apparatus of any of claims 6 to 10, wherein the second circuitry is further configured to wirelessly transmit the information to at least one second device implanted on or within the recipient, worn by the recipient, implanted on or within another recipient, or worn by another recipient.
12. The apparatus of any of claims 6 to 11, further comprising at least one sensor configured to detect whether the housing is worn by the recipient or is remote from the recipient.
13. The apparatus of any of claims 6 to 12, wherein the apparatus comprises an external sound processor of an auditory prosthesis system and the first device comprises an implanted stimulation assembly of the auditory prosthesis system.
14. A method comprising: providing a first sound processor configured to receive sound and to generate signals indicative of the sound; in response to receiving a first control signal, placing the first sound processor in a first operational mode in which the first sound processor is configured to transmit the signals to only a first device implanted on or within a recipient’s body; and in response to receiving a second control signal, placing the first sound processor in a second operational mode in which the first sound processor is configured to transmit the signals to the first device and to at least one second device.
15. The method of claim 14, wherein the first device comprises a first stimulation assembly and the at least one second device comprises at least one of: a second sound processor worn on the recipient’s body; a second stimulation assembly implanted on or within the recipient’s body; a third sound processor worn on another person’s body; a third stimulation assembly implanted on or within another person’s body.
16. The method of claim 14 or claim 15, wherein the first control signal and/or the second control signal is generated by a user input interface of the first sound processor.
17. The method of any of claims 14 to 16, wherein the first control signal is generated by at least one sensor of the first sound processor in response at least in part to the at least one sensor detecting that the first sound processor is on the recipient’s body.
18. The method of claim 17, wherein the second control signal is generated by the at least one sensor in response at least in part to the at least one sensor detecting that the first sound processor is not on the recipient’s body.
19. The method of any of claims 14 to 18, wherein, in the first operational mode, the first sound processor and the first device provide a hearing percept to the recipient.
20. The method of claim 19, wherein, in the second operational mode, the first sound processor and the first device provide a hearing percept to the recipient, and the first sound processor and the at least one second device provide a hearing percept to at least one person on which the at least one second device is implanted or worn.
PCT/IB2024/057901 2023-08-22 2024-08-14 Audio processing device operable as remote sensor Pending WO2025041006A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363578027P 2023-08-22 2023-08-22
US63/578,027 2023-08-22

Publications (1)

Publication Number Publication Date
WO2025041006A1 (en)

Family

ID=94731601

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2024/057901 Pending WO2025041006A1 (en) 2023-08-22 2024-08-14 Audio processing device operable as remote sensor

Country Status (1)

Country Link
WO (1) WO2025041006A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130308804A1 (en) * 2012-05-15 2013-11-21 Cochlear Limited Adaptive Data Rate for a Bilateral Hearing Prosthesis System
US20140270212A1 (en) * 2013-03-15 2014-09-18 Cochlear Limited Audio Monitoring of a Hearing Prosthesis
US20160366522A1 (en) * 2015-06-09 2016-12-15 Martin Evert Gustaf Hillbratt Hearing prostheses for single-sided deafness
US20180110984A1 (en) * 2015-01-08 2018-04-26 Koen Erik Van den Heuvel Implanted auditory prosthesis control by component movement
KR101933966B1 (en) * 2017-03-31 2018-12-31 경북대학교 산학협력단 Implantable hearing aid device and mastication noise reduction device of fully implantable hearing aid



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24855956

Country of ref document: EP

Kind code of ref document: A1