
WO2025238503A1 - Settings based on recorded environmental data - Google Patents

Settings based on recorded environmental data

Info

Publication number
WO2025238503A1
Authority
WO
WIPO (PCT)
Prior art keywords
settings
recipient
operational settings
memory
environmental data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IB2025/054897
Other languages
English (en)
Inventor
Erik Andersson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cochlear Ltd
Original Assignee
Cochlear Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cochlear Ltd filed Critical Cochlear Ltd
Publication of WO2025238503A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N: ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N 1/00: Electrotherapy; Circuits therefor
    • A61N 1/02: Details
    • A61N 1/04: Electrodes
    • A61N 1/05: Electrodes for implantation or insertion into the body, e.g. heart electrode
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N: ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N 1/00: Electrotherapy; Circuits therefor
    • A61N 1/18: Applying electric currents by contact electrodes
    • A61N 1/32: Applying electric currents by contact electrodes alternating or intermittent currents
    • A61N 1/36: Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/30: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Definitions

  • the present invention relates generally to determining settings/parameters associated with a medical device based on recorded environmental data.
  • Medical devices have provided a wide range of therapeutic benefits to recipients over recent decades.
  • Medical devices can include internal or implantable components/devices, external or wearable components/devices, or combinations thereof (e.g., a device having an external component communicating with an implantable component).
  • Medical devices such as traditional hearing aids, partially or fully-implantable hearing prostheses (e.g., bone conduction devices, mechanical stimulators, cochlear implants, etc.), pacemakers, defibrillators, functional electrical stimulation devices, and other medical devices have been successful in performing lifesaving and/or lifestyle enhancement functions and/or recipient monitoring for a number of years.
  • implantable medical devices now often include one or more instruments, apparatus, sensors, processors, controllers or other functional mechanical or electrical components that are permanently or temporarily implanted in a recipient. These functional devices are typically used to diagnose, prevent, monitor, treat, or manage a disease/injury or symptom thereof, or to investigate, replace or modify the anatomy or a physiological process. Many of these functional devices utilize power and/or data received from external devices that are part of, or operate in conjunction with, implantable components.
  • a method comprises obtaining, from a memory device, environmental data previously recorded by a medical device during a prior period of time in which the environmental data was used by the medical device to deliver stimulation signals to a recipient of the medical device in accordance with a plurality of operational settings; and using the environmental data obtained from the memory device to adjust one or more of the plurality of operational settings.
  • another method comprises capturing, at a hearing device, audio data representative of an ambient environment associated with the hearing device; capturing, at the hearing device contemporaneously with the capturing of the audio data, settings data representing one or more operational settings of the hearing device at the time the audio data is captured; storing the audio data and the settings data in a memory; retrieving the audio data and the settings data from the memory; and using the retrieved audio data and retrieved settings data to adjust at least one of the one or more operational settings of the hearing device.
  • a medical device comprises a memory; and one or more processors configured to: obtain, from the memory, environmental data previously recorded by the medical device during a prior period of time in which the environmental data was used by the medical device to deliver stimulation signals to a recipient of the medical device in accordance with a plurality of operational settings; and use the environmental data obtained from the memory to adjust one or more of the plurality of operational settings.
  • one or more non-transitory computer-readable media containing instructions are provided that, when processed by one or more processors of a hearing device, cause the processor to: capture audio data representative of an ambient environment associated with the hearing device; capture, contemporaneously with the capturing of the audio data, settings data representing one or more operational settings of the hearing device at the time the audio data is captured; store the audio data and the settings data in a memory; retrieve the audio data and the settings data from the memory; and use the retrieved audio data and retrieved settings data to adjust at least one of the one or more operational settings of the hearing device.
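The capture/store/retrieve/adjust cycle summarized in the aspects above can be sketched in code. This is a minimal illustration only; the class names, the `noise_reduction` settings key, and the toy level-based adjustment rule are assumptions for illustration and do not appear in the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    audio: list[float]          # ambient audio samples captured by the device
    settings: dict[str, float]  # operational settings active at capture time

@dataclass
class DeviceLog:
    """Memory that stores audio data together with contemporaneous settings."""
    memory: list[Snapshot] = field(default_factory=list)

    def capture(self, audio, settings):
        # Store the audio data and the settings data together, as captured.
        self.memory.append(Snapshot(list(audio), dict(settings)))

    def retrieve(self):
        # Retrieve the previously recorded audio and settings data.
        return list(self.memory)

def adjust_settings(log: DeviceLog) -> dict[str, float]:
    """Use retrieved audio and settings data to adjust an operational setting.

    Toy rule (an assumption, not from the disclosure): if the mean absolute
    level of the recorded audio exceeds a threshold, raise the
    noise-reduction setting by one step, capped at 1.0."""
    snapshots = log.retrieve()
    settings = dict(snapshots[-1].settings)
    total_samples = sum(len(snap.audio) for snap in snapshots)
    level = sum(abs(s) for snap in snapshots for s in snap.audio) / max(1, total_samples)
    if level > 0.5:
        settings["noise_reduction"] = min(1.0, settings["noise_reduction"] + 0.1)
    return settings
```

A loud recording nudges the setting upward, while a quiet one leaves it unchanged; the actual adjustment logic would be device- and recipient-specific.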
  • FIG. 1A is a schematic diagram illustrating a cochlear implant system with which aspects of the techniques presented herein can be implemented;
  • FIG. 1B is a side view of a recipient wearing a sound processing unit of the cochlear implant system of FIG. 1A;
  • FIG. 1C is a schematic view of components of the cochlear implant system of FIG. 1A;
  • FIG. 1D is a block diagram of the cochlear implant system of FIG. 1A;
  • FIG. 1E is a schematic diagram illustrating a computing device with which aspects of the techniques presented herein can be implemented;
  • FIG. 2 is a block diagram of a signal processing path for processing and recording an audio signal, according to embodiments described herein;
  • FIG. 3 is a block diagram illustrating a method of analyzing recorded data and suggesting signal processing configurations based on the recorded data, according to embodiments described herein;
  • FIG. 4 is a block diagram illustrating an example in which an audio sample is replayed using different signal processing operational settings to identify optimal settings, according to embodiments described herein;
  • FIG. 5 is a flow diagram illustrating a method of adjusting operational settings of a medical device based on environmental data previously recorded by the medical device, according to embodiments described herein;
  • FIG. 6 is a flow diagram illustrating a method of using retrieved audio data and settings data to adjust operational settings of a hearing device, according to embodiments described herein;
  • FIG. 7 is a schematic diagram illustrating a vestibular stimulator system with which aspects of the techniques presented herein can be implemented.
  • FIG. 8 is a schematic diagram illustrating a retinal prosthesis system with which aspects of the techniques presented herein can be implemented.
  • the operational settings of a medical device or hearing device can be set/determined for a specific recipient/user, set for a specific environment, etc.
  • the operational settings of certain medical devices or hearing devices can be automatically adjusted upon entering a particular environmental situation/scenario (e.g., adjusted based on a determination/evaluation of a current ambient environment associated with the device).
  • a recipient uses her medical device in an ambient environment, and environmental data (e.g., sound signals, light signals, etc.) associated with the ambient environment is captured by one or more sensors of the medical device. These environmental signals, captured while the medical device operates in the ambient environment, are recorded and subsequently used to determine optimal settings for the medical device in the environment from which they were recorded.
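Replaying a recorded sample under different candidate settings and scoring each result, as illustrated in FIG. 4, is one way to determine optimal settings for the environment from which the sample was recorded. The sketch below makes the loudest-vs-target trade-off concrete; the signal chain, the candidate grid, and the quality metric are all illustrative assumptions, not the disclosed implementation.

```python
# Replay one recorded audio sample under each candidate operational-settings
# set and keep the candidate that scores best on a quality metric.

def apply_settings(sample, settings):
    # Toy processing chain (an assumption): apply a gain, then hard-clip
    # the result at the configured output limit.
    gain, limit = settings["gain"], settings["limit"]
    return [max(-limit, min(limit, x * gain)) for x in sample]

def quality(processed, target_level=0.5):
    # Toy metric (an assumption): how close the mean absolute output level
    # is to a comfortable target level. Higher (less negative) is better.
    level = sum(abs(x) for x in processed) / len(processed)
    return -abs(level - target_level)

def best_settings(sample, candidates):
    # Replay the same recorded sample under every candidate settings set
    # and return the highest-scoring one.
    return max(candidates, key=lambda s: quality(apply_settings(sample, s)))
```

In practice the metric could be a speech-intelligibility estimate or the recipient's own preference ratings rather than a simple level target.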
  • the techniques presented herein are primarily described with reference to hearing devices and, more specifically, a hearing device in the form of a cochlear implant system.
  • the techniques presented herein can also be partially or fully implemented by/with any of a number of different types of medical devices or other devices that are capable of recording environmental data, including certain consumer electronic devices (e.g., mobile phones), wearable devices (e.g., smartwatches), other hearing devices, implantable medical devices, etc.
  • hearing device is to be broadly construed as any device that acts on an acoustical perception of an individual, including to improve perception of sound signals, to reduce perception of sound signals, etc.
  • a hearing device can deliver sound signals to a user in any form, including in the form of acoustical stimulation, mechanical stimulation, electrical stimulation, etc., and/or can operate to suppress all or some sound signals.
  • a hearing device can be a device for use by a hearing-impaired person (e.g., hearing aids, middle ear auditory prostheses, bone conduction devices, direct acoustic stimulators, electro-acoustic hearing prostheses, auditory brainstem stimulators, bimodal hearing prostheses, bilateral hearing prostheses, dedicated tinnitus therapy devices, tinnitus therapy device systems, combinations or variations thereof, etc.), a device for use by a person with normal hearing (e.g., consumer devices that provide audio streaming, consumer headphones, earphones, and other listening devices), a hearing protection device, etc.
  • the techniques presented herein can be implemented by, or used in conjunction with, various implantable medical devices, such as visual devices (i.e., bionic eyes), sensors, pacemakers, drug delivery systems, defibrillators, functional electrical stimulation devices, catheters, seizure devices (e.g., devices for monitoring and/or treating epileptic events), sleep apnea devices, electroporation devices, etc.
  • FIGs. 1A-1D illustrate an example cochlear implant system 102 with which aspects of the techniques presented herein can be implemented.
  • the cochlear implant system 102 comprises an external component 104 that is configured to be directly or indirectly attached to the body of the recipient, and an internal/implantable component 112 that is configured to be implanted in or worn on the head of the recipient.
  • the implantable component 112 is sometimes referred to as a “cochlear implant.”
  • FIG. 1A illustrates the cochlear implant 112 implanted in the head 154 of a recipient;
  • FIG. 1B is a schematic drawing of the external component 104 worn on the head 154 of the recipient.
  • FIG. 1C is another schematic view of the cochlear implant system 102
  • FIG. 1D illustrates further details of the cochlear implant system 102.
  • FIGs. 1A-1D will generally be described together.
  • the external component 104 comprises a sound processing unit 106, an external coil 108, and generally, a magnet fixed relative to the external coil 108.
  • the cochlear implant 112 includes an implantable coil 114, an implant body 134, and an elongate stimulating assembly 116 configured to be implanted in the recipient’s cochlea.
  • the sound processing unit 106 is an off-the-ear (OTE) sound processing unit, sometimes referred to herein as an OTE component, that is configured to send data and power to the implantable component 112.
  • an OTE sound processing unit is a component having a generally cylindrically shaped housing 111 and which is configured to be magnetically coupled to the recipient’s head 154 (e.g., includes an integrated external magnet 150 configured to be magnetically coupled to an internal/implantable magnet 152 in the implantable component 112).
  • the OTE sound processing unit 106 also includes an integrated external (headpiece) coil 108 (the external coil 108) that is configured to be inductively coupled to the implantable coil 114.
  • the OTE sound processing unit 106 is merely illustrative of the external devices that could operate with implantable component 112.
  • the external component 104 can comprise a behind-the-ear (BTE) sound processing unit configured to be attached to, and worn adjacent to, the recipient’s ear.
  • BTE sound processing unit comprises a housing that is shaped to be worn on the outer ear of the recipient.
  • the BTE is connected to a separate external coil assembly via a cable, where the external coil assembly is configured to be magnetically and inductively coupled to the implantable coil 114, while in other embodiments the BTE includes a coil disposed in or on the housing worn on the outer ear of the recipient.
  • alternative external components could be located in the recipient’s ear canal, worn on the body, etc.
  • Although the cochlear implant system 102 includes the sound processing unit 106 and the cochlear implant 112, as described below, the cochlear implant 112 can operate independently from the sound processing unit 106, for at least a period of time, to stimulate the recipient.
  • the cochlear implant 112 can operate in a first general mode, sometimes referred to as an “external hearing mode,” in which the sound processing unit 106 captures sound signals which are then used as the basis for delivering stimulation signals (in this case electrical stimulation signals) to the recipient.
  • the cochlear implant 112 can also operate in a second general mode, sometimes referred to as an “invisible hearing” mode, in which the sound processing unit 106 is unable to provide sound signals to the cochlear implant 112 (e.g., the sound processing unit 106 is not present, the sound processing unit 106 is powered-off, the sound processing unit 106 is malfunctioning, etc.).
  • the cochlear implant 112 captures sound signals itself via implantable sound sensors and then uses those sound signals as the basis for delivering stimulation signals to the recipient. Further details regarding operation of the cochlear implant 112 in the external hearing mode are provided below, followed by details regarding operation of the cochlear implant 112 in the invisible hearing mode. It is to be appreciated that reference to the external hearing mode and the invisible hearing mode is merely illustrative and that the cochlear implant 112 could also operate in alternative modes.
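The fallback between the two general modes described above can be sketched as a simple source-selection step; the function and mode names below are illustrative assumptions, not identifiers from the disclosure.

```python
def select_sound_source(external_unit_signal, implant_sensor_signal):
    """Choose the operating mode and sound source for stimulation.

    external_unit_signal is None when the sound processing unit cannot
    provide sound signals (e.g., it is absent, powered off, or
    malfunctioning); the implant then falls back to its own implantable
    sound sensors ("invisible hearing" mode)."""
    if external_unit_signal is not None:
        # External hearing mode: the sound processing unit's captured
        # signal is the basis for the stimulation signals.
        return "external_hearing", external_unit_signal
    # Invisible hearing mode: the implantable sensors' signal is used.
    return "invisible_hearing", implant_sensor_signal
```

A real device would also handle transitions between modes (e.g., debouncing brief coil dropouts) rather than switching on every sample.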
  • the cochlear implant system 102 is shown with an external device 110, configured to implement aspects of the techniques presented.
  • The external device 110, which is shown in greater detail in FIG. 1E, is a computing device, such as a personal computer (e.g., laptop, desktop, tablet), a mobile phone (e.g., smartphone), a remote control unit, etc.
  • the external device 110 and the cochlear implant system 102 (e.g., the sound processing unit 106 or the cochlear implant 112) wirelessly communicate via a bi-directional communication link 126.
  • the bi-directional communication link 126 can comprise, for example, a short-range communication link, such as a Bluetooth link, a Bluetooth Low Energy (BLE) link, a proprietary link, etc.
  • the sound processing unit 106 of the external component 104 also comprises one or more input devices configured to capture and/or receive input signals (e.g., sound or data signals) at the sound processing unit 106.
  • the one or more input devices include, for example, one or more sound input devices 118 (e.g., one or more external microphones, audio input ports, telecoils, etc.), one or more auxiliary input devices 128 (e.g., audio ports, such as a Direct Audio Input (DAI), data ports, such as a Universal Serial Bus (USB) port, cable port, etc.), and a short-range wireless transmitter/receiver (wireless transceiver) 120 (e.g., for communication with the external device 110), each located in, on or near the sound processing unit 106.
  • the sound processing unit 106 also comprises the external coil 108, a charging coil, a closely-coupled radio frequency transmitter/receiver (RF transceiver) 122, at least one rechargeable battery 132, and an external sound processing module 124.
  • the external sound processing module 124 can be configured to perform a number of operations that are represented in FIG. 1D by a sound recording module 131, a sound processor 133, and a replaying module 135.
  • Each of the sound recording module 131, the sound processor 133, and the replaying module 135 can be formed by one or more processors (e.g., one or more Digital Signal Processors (DSPs), one or more µC cores, etc.), firmware, software, etc. arranged to perform the operations described herein. That is, the sound recording module 131, the sound processor 133, and the replaying module 135 can each be implemented as firmware elements, partially or fully implemented with digital logic gates in one or more application-specific integrated circuits (ASICs), partially or fully in software, etc.
  • While FIG. 1D illustrates the sound recording module 131, the sound processor 133, and the replaying module 135 as being implemented/performed at the external sound processing module 124, it is to be appreciated that these elements (e.g., functional operations) could also or alternatively be implemented/performed as part of the implantable sound processing module 158, as part of the external device 110, etc.
  • the implantable component 112 comprises an implant body (main module) 134, a lead region 136, and the stimulating assembly 116, all configured to be implanted under the skin (tissue) 115 of the recipient.
  • the implant body 134 generally comprises a hermetically-sealed housing 138 that includes, in certain examples, at least one power source 125 (e.g., one or more batteries, one or more capacitors, etc.), in which the RF interface circuitry 140 and a stimulator unit 142 are disposed.
  • the implant body 134 also includes the internal/implantable coil 114 that is generally external to the housing 138, but which is connected to the RF interface circuitry 140 via a hermetic feedthrough (not shown in FIG. 1D).
  • the stimulating assembly 116 is configured to be at least partially implanted in the recipient’s cochlea.
  • the stimulating assembly 116 includes a plurality of longitudinally spaced intra-cochlear electrical stimulating contacts (electrodes) 144 that collectively form a contact array (electrode array) 146 for delivery of electrical stimulation (current) to the recipient’s cochlea.
  • the stimulating assembly 116 extends through an opening in the recipient’s cochlea (e.g., cochleostomy, the round window, etc.) and has a proximal end connected to the stimulator unit 142 via the lead region 136 and a hermetic feedthrough (not shown in FIG. 1D).
  • Lead region 136 includes a plurality of conductors (wires) that electrically couple the electrodes 144 to the stimulator unit 142.
  • the implantable component 112 also includes an electrode outside of the cochlea, sometimes referred to as the extra-cochlear electrode (ECE) 139.
  • the cochlear implant system 102 includes the external coil 108 and the implantable coil 114.
  • the external magnet 150 is fixed relative to the external coil 108 and the internal/implantable magnet 152 is fixed relative to the implantable coil 114.
  • the external magnet 150 and the internal/implantable magnet 152, fixed relative to the external coil 108 and the internal/implantable coil 114, respectively, facilitate the operational alignment of the external coil 108 with the implantable coil 114.
  • This operational alignment of the coils enables the external component 104 to transmit data and power to the implantable component 112 via a closely-coupled wireless link 148 formed between the external coil 108 and the implantable coil 114.
  • the closely-coupled wireless link 148 is an RF link.
  • various other types of energy transfer, such as infrared (IR), electromagnetic, capacitive, and inductive transfer, can be used to transfer the power and/or data from an external component to an implantable component and, as such, FIG. 1D illustrates only one example arrangement.
  • the sound processing unit 106 includes the external sound processing module 124.
  • the external sound processing module 124 is configured to process the received input audio signals (received at one or more of the input devices, such as sound input devices 118 and/or auxiliary input devices 128) and convert the received input audio signals into output control signals for use in stimulating a first ear of a recipient/user (i.e., the external sound processing module 124 is configured to perform sound processing on input signals received at the sound processing unit 106).
  • the one or more processors (e.g., processing element(s) implementing firmware, software, etc.) of the external sound processing module 124 are configured to execute sound processing logic in memory to convert the received input audio signals into output control signals that represent electrical stimulation for delivery to the recipient.
  • FIG. 1D illustrates an embodiment in which the external sound processing module 124 in the sound processing unit 106 generates the output control signals.
  • the sound processing unit 106 can send less processed information (e.g., audio data) to the implantable component 112, and the sound processing operations (e.g., conversion of input sounds to output control signals 156) can be performed by a processor within the implantable component 112.
  • output control signals are provided to the RF transceiver 122, which transcutaneously transfers the output control signals (e.g., in an encoded manner) to the implantable component 112 via the external coil 108 and the implantable coil 114.
  • the output control signals are received at the RF interface circuitry 140 via the implantable coil 114 and provided to the stimulator unit 142.
  • the stimulator unit 142 is configured to utilize the output control signals to generate electrical stimulation signals (e.g., current signals) for delivery to the recipient’s cochlea via one or more of the stimulating contacts 144.
  • cochlear implant system 102 electrically stimulates the recipient’s auditory nerve cells, bypassing absent or defective hair cells that normally transduce acoustic vibrations into neural activity, in a manner that causes the recipient to perceive one or more components of the input audio signals (the received sound signals).
  • the cochlear implant 112 receives processed sound signals from the sound processing unit 106.
  • the cochlear implant 112 is configured to capture and process sound signals for use in electrically stimulating the recipient’s auditory nerve cells.
  • an example embodiment of the cochlear implant 112 can include a plurality of implantable sound sensors 165(1), 165(2) that collectively form a sensor array 160, and an implantable sound processing module 158.
  • the implantable sound processing module 158 can comprise, for example, one or more processors and a memory device (memory) that includes sound processing logic.
  • the memory device can comprise any one or more of: Non-Volatile Memory (NVM), Ferroelectric Random Access Memory (FRAM), read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices.
  • the one or more processors are, for example, microprocessors or microcontrollers that execute instructions for the sound processing logic stored in the memory device.
  • the implantable sound sensors 165(1), 165(2) of the sensor array 160 are configured to detect/capture input sound signals 166 (e.g., acoustic sound signals, vibrations, etc.), which are provided to the implantable sound processing module 158.
  • the implantable sound processing module 158 is configured to convert received input sound signals 166 (received at one or more of the implantable sound sensors 165(1), 165(2)) into output control signals 156 for use in stimulating the first ear of the recipient (i.e., the implantable sound processing module 158 is configured to perform sound processing operations).
  • the one or more processors (e.g., processing element(s) implementing firmware, software, etc.) of the implantable sound processing module 158 are configured to execute sound processing logic in memory to convert the received input sound signals 166 into output control signals 156 that are provided to the stimulator unit 142.
  • the stimulator unit 142 is configured to utilize the output control signals 156 to generate electrical stimulation signals (e.g., current signals) for delivery to the recipient’s cochlea, thereby bypassing the absent or defective hair cells that normally transduce acoustic vibrations into neural activity.
  • the cochlear implant 112 could use signals captured by the sound input devices 118 and the implantable sound sensors 165(1), 165(2) of sensor array 160 in generating stimulation signals for delivery to the recipient.
  • the external sound processing module 124 can also include an inertial measurement unit (IMU) 170.
  • the IMU 170 is configured to measure the inertia of the recipient's head, that is, motion of the recipient's head.
  • the IMU 170 comprises one or more sensors 175 each configured to sense one or more of rectilinear or rotatory motion in the same or different axes.
  • sensors 175 that can be used as part of inertial measurement unit 170 include accelerometers, gyroscopes, inclinometers, compasses, and the like.
  • Such sensors can be implemented in, for example, micro electromechanical systems (MEMS) or with other technology suitable for the particular application.
  • a second IMU 180 including one or more sensors 185 is incorporated into implantable sound processing module 158 of implant body 134.
  • the second IMU 180 can serve as an additional or alternative inertial measurement unit to the IMU 170 of external sound processing module 124.
  • sensors 185 can each be configured to sense one or more of rectilinear or rotatory motion in the same or different axes.
  • sensors 185 that can be used as part of inertial measurement unit 180 include accelerometers, gyroscopes, inclinometers, compasses, and the like. Such sensors can be implemented in, for example, MEMS or with other technology suitable for the particular application.
  • For a hearing device that includes an implantable sound processing module, such as the implantable sound processing module 158, that includes an IMU, such as the IMU 180, the techniques presented herein can be implemented without an external processor. Accordingly, a hearing device that includes an implant body 134 and lacks an external component 104 can be configured to implement the techniques presented herein.
  • FIG. 1E is a block diagram illustrating one example arrangement for an external computing device 110 configured to perform one or more operations in accordance with certain embodiments presented herein.
  • the external computing device 110 includes at least one processing unit 183 and a memory 184.
  • the processing unit 183 includes one or more hardware or software processors (e.g., Central Processing Units) that can obtain and execute instructions.
  • the processing unit 183 can communicate with and control the performance of other components of the external computing device 110.
  • the memory 184 is one or more software or hardware-based computer-readable storage media operable to store information accessible by the processing unit 183.
  • the memory 184 can store, among other things, instructions executable by the processing unit 183 to implement applications or cause performance of operations described herein, as well as other data.
  • the memory 184 can be volatile memory (e.g., RAM), non-volatile memory (e.g., ROM), or combinations thereof.
  • the memory 184 can include transitory memory or non-transitory memory.
  • the memory 184 can also include one or more removable or non-removable storage devices.
  • the memory 184 can include RAM, ROM, EEPROM (Electronically-Erasable Programmable Read-Only Memory), flash memory, optical disc storage, magnetic storage, solid state storage, or any other memory media usable to store information for later access.
  • the memory 184 can include wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, RF, infrared, other wireless media, or combinations thereof.
  • the memory 184 comprises logic 195 and 196 that, when executed, enables the processing unit 183 to perform aspects of the techniques presented.
  • the external computing device 110 further includes a network adapter 186, one or more input devices 187, and one or more output devices 188.
  • the external computing device 110 can include other components, such as a system bus, component interfaces, a graphics system, a power source (e.g., a battery), among other components.
  • the network adapter 186 is a component of the external computing device 110 that provides network access (e.g., access to at least one network 189).
  • the network adapter 186 can provide wired or wireless network access and can support one or more of a variety of communication technologies and protocols, such as Ethernet, cellular, Bluetooth, near-field communication, and RF, among others.
  • the network adapter 186 can include one or more antennas and associated components configured for wireless communication according to one or more wireless communication technologies and protocols.
  • the one or more input devices 187 are devices over which the external computing device 110 receives input from a recipient/user (user input).
  • the one or more input devices 187 can include physically-actuatable user-interface elements (e.g., buttons, switches, or dials), a keypad, keyboard, mouse, touchscreen, and voice input devices, among other input devices that can accept user input.
  • the one or more output devices 188 are devices by which the external computing device 110 is able to provide output to a user (e.g., the recipient).
  • the output devices 188 can include a display 190 (e.g., a liquid crystal display (LCD)) and one or more speakers 191, among other output devices for presentation of visual or audible information to the recipient, a clinician, an audiologist, or other user.
  • the external computing device 110 shown in FIG. IE is merely illustrative and that aspects of the techniques presented herein can be implemented at a number of different types of systems/devices including any combination of hardware, software, and/or firmware configured to perform the functions described herein.
  • the external computing device 110 can be a personal computer (e.g., a desktop or laptop computer), a hand-held device (e.g., a tablet computer), a mobile device (e.g., a smartphone), a surgical system, and/or any other electronic device having the capabilities to perform the associated operations described elsewhere herein.
  • the techniques presented herein can benefit a recipient, for example in dynamic environments, by adjusting, setting, or otherwise determining one or more operational settings of a medical device or hearing device (recipient-associated device) based on recorded environmental data/signals (e.g., sound signals, light signals, etc.) associated with a particular ambient environment experienced by the recipient (e.g., based on the specific environmental situations encountered by the recipient).
  • the techniques presented herein are primarily described with reference to hearing devices, such as cochlear implant system 102 of FIGs.
  • a recipient uses her hearing device in an ambient environment and audio/sound data associated with the ambient environment is captured by one or more sensors of the hearing device.
  • the environmental data captured while the hearing device operates in the ambient environment is recorded and subsequently used to set, adjust, or otherwise determine settings for the medical device (e.g., instantiated in the hearing device for later use in the environment in which the data was recorded and/or similar environments).
  • the captured and recorded audio is sometimes referred to herein as “recorded audio data” or a “recorded audio file.”
  • the recorded audio data can include data representing the environmental sounds captured by the sound processor microphone at an earlier time and/or additional data, such as settings associated with the hearing device when the file was recorded, a location of the hearing device, a timestamp, etc.
  • the recipient can initiate the capturing of the environmental sounds with a user interface (e.g., of an application) or a button. In other embodiments, the initiation can occur on the basis of a rule (e.g., a voice command or another signal to indicate troubles with hearing).
  • the recorded audio data can be used to recreate the ambient environment for the recipient through the hearing device.
  • the recorded audio data can be analyzed automatically or manually at a later time.
  • the recorded audio data can be analyzed to determine potential settings or adjustments for the hearing device that can increase the recipient’s satisfaction with the audio in the particular environment.
  • the settings or adjustments can be suggested to the recipient to test, using the same recording for evaluation.
  • the recipient can play the recorded audio data through the hearing device while applying the suggested settings or adjustments.
  • the recipient can indicate preferred settings or adjustments for the audio in the environment. Information associated with the settings can be stored and the settings can be applied when the recipient is in a similar acoustic environment.
  • an artificial intelligence (AI) algorithm can generate or suggest settings adjustments in near real-time.
  • the recipient can provide a query (e.g., a text or verbal query) requesting a change in the settings and an AI algorithm can provide an updated settings configuration based on the audio environment and the query.
  • the recorded audio data can be played (delivered to the recipient through the hearing device) with the updated operational settings.
  • the recipient can provide a new query and the recorded audio data can be replayed with further updated operational settings received from the AI algorithm. In this way, the recipient can identify the preferred settings for the recorded audio data quickly and easily by, in some situations, providing verbal commands or queries.
  • a recipient of a hearing device finds that, while sitting in a restaurant, she struggles to follow the conversations around the table because of the background noise.
  • the recipient can record, for example, a 30 second sample of the environment data (environmental sounds).
  • the environment data can include speech, background noises, etc.
  • the recipient can open a mobile application and provide additional input about the experienced struggles.
  • the recipient can describe that speech perception was reduced due to background noise.
  • the application can take the input and determine recommended suggestions that can improve the experience. For example, the application can suggest one or more settings adjustments for the hearing device to optimize speech understanding in the environment.
  • the recipient can try the suggested settings adjustments to determine her preferred settings for use in the environment or a similar environment. For example, in certain embodiments the hearing device can replay the recorded audio data with a suggested settings adjustment applied and determine if the suggested settings adjustment provides a relatively better (recipient-preferred) output than the settings that were applied to the hearing device while the recipient was in the restaurant (e.g., the original settings used in the original environment).
  • the recipient can indicate which of the suggested settings are the preferred settings and the settings can be applied to one of the program slots.
  • the preferred settings adjustments can be stored along with additional information associated with the acoustic environment. The settings adjustments can subsequently be applied to the hearing device when the hearing device is in a similar acoustic environment.
  • FIG. 2 is a block diagram of a signal processing path of a hearing device for processing and recording audio signals in accordance with certain embodiments presented.
  • the hearing device could be, for example, a hearing aid, a cochlear implant system such as cochlear implant system 102, etc.
  • the hearing device captures audio data (one or more audio signals) using one or more sound sensors, such as one or more microphones (not shown in FIG. 2).
  • the captured audio data includes sounds in the environment, such as music, background noise, speech, speech in noise, etc.
  • the captured audio data comprises one or more analog audio signals (analog audio data) that are provided to an analog-to-digital converter 202.
  • the analog-to-digital converter 202 converts the analog audio data to digital signals (digital audio data) that are processed by configurable signal processing blocks 204 for output as a stimulation signal to a recipient of the hearing device.
  • the configurable signal processing blocks 204 generally operate to convert the received sound signals into output stimulation signals (e.g., acoustic stimulation signals, mechanical stimulation signals, electrical stimulation signals, etc.), which can be used for delivering stimulation to a recipient in a manner that evokes perception of the sound signals.
  • the configurable signal processing blocks 204 can include, for example, a pre-filterbank processing module, a filterbank module, a post-filterbank processing module, a channel selection module, a mapping module, and/or additional or different processing modules.
  • the configurable signal processing blocks 204 process the digital signal according to one or more operational settings (settings) determined for the hearing device.
  • the configurable signal processing blocks 204 can perform operations such as microphone directionality operations, noise reduction operations, input mixing/combining operations, input selection/reduction operations, dynamic range control operations, other types of signal enhancement operations, filtering operations, sound processing operations, signal mapping operations, and additional processing operations.
  • the configurable signal processing blocks 204 can process the digital signal based on a type of the sound in the sound environment.
  • settings associated with the hearing device can be adjusted based on the type of sound environment surrounding the recipient of the hearing device. Different settings adjustments can result in processing the audio signals in different ways to optimize the output stimulation signals delivered to the recipient for different audio environments.
  • the configurable signal processing blocks 204 can process the digital signal in a first manner if the recipient is in an environment where music is playing (e.g., at a concert) and can process the digital signal in a second manner if the recipient is in an environment with speech in noise (e.g., at a restaurant with another person or other people).
  • when the recipient is in a sound environment with music, settings associated with the hearing device can be adjusted to optimize the musical sounds.
  • settings associated with the hearing device can be adjusted to attenuate or cancel the background noise to optimize the speech of a person speaking with the recipient.
  • the recipient can, in accordance with embodiments presented herein, record the sound environment for subsequent (i.e., later) use in determining preferred settings for the hearing device in that environment. More specifically, the recipient may not be able to suitably adjust the operational settings in real time (while at the restaurant) but would like to determine better or optimal operational settings to use in a similar sound environment in the future. In this case, the recipient can record a sample of the environment sound using a user interface, such as by pressing a button or choosing an option on an application of a mobile device.
  • the recipient can use a voice command to initiate the capturing of the environmental sounds.
  • the sample can automatically be recorded based on recognizing that the recipient is struggling to hear the conversations.
  • the medical device or hearing device system can recognize or detect a cognitive change associated with a listening effort by the recipient that indicates that the recipient is trying hard to hear. In this case, the hearing device can automatically begin recording a sample of the sound environment for analysis at a later time.
  • signal recorder 206 can record the sample after the microphone input has been converted to a digital signal.
  • the sample is a raw signal that has not been processed by the configurable signal processing blocks 204.
  • the raw signal can be unfiltered by settings so that different settings can be applied later to identify the optimal settings to apply to the signal based on the audio environment.
  • Signal recorder 206 or another element/module of the hearing device can additionally record other data associated with the environment and/or hearing device. For example, the hearing device settings that were applied to the hearing device at the time that the environment signal was recorded can be identified and stored. In this way, the sample can be replayed for the recipient using the original settings to compare the original settings to adjusted settings.
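The recording path described above (a raw, post-ADC signal tapped before the configurable processing blocks, stored alongside the settings active at capture time) can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the `SignalPath` and `RecordedSample` names, the settings keys, and the gain-only processing stand-in are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class RecordedSample:
    """Raw (unprocessed) digital audio plus the settings active at capture time."""
    raw_signal: list                 # digital samples straight from the ADC
    settings: dict                   # operational settings applied when recorded
    metadata: dict = field(default_factory=dict)  # e.g., timestamp, location

class SignalPath:
    """Sketch of the FIG. 2 path: the ADC output is tapped by a recorder
    before the configurable signal processing blocks run."""
    def __init__(self, settings):
        self.settings = dict(settings)
        self.recording = None

    def record(self, digital_signal, metadata=None):
        # Store the raw signal so that different settings can be applied later.
        self.recording = RecordedSample(list(digital_signal), dict(self.settings),
                                        dict(metadata or {}))
        return self.recording

    def process(self, digital_signal, settings=None):
        # Stand-in for the configurable signal processing blocks: a simple gain
        # stage here; a real device applies noise reduction, mapping, etc.
        s = settings if settings is not None else self.settings
        gain = s.get("gain", 1.0)
        return [x * gain for x in digital_signal]

path = SignalPath({"gain": 1.0, "noise_reduction": False})
sample = path.record([0.1, -0.2, 0.3], metadata={"location": "restaurant"})
# The same raw sample can later be replayed with adjusted settings:
louder = path.process(sample.raw_signal, {"gain": 2.0})
```

Because the recorder stores the unprocessed signal together with the original settings, the original and adjusted renderings can be compared from the same recording.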
  • FIG. 3 is a block diagram illustrating analyzing the recorded data and suggesting signal processing configurations based on the recorded data.
  • recorded data 302 can be transmitted to recording analyzer 304 for analysis.
  • the recording analyzer 304 can be integrated with the hearing device.
  • the recording analyzer 304 can be remote from the hearing device.
  • the recorded data 302 includes the recorded sample and can include the additional information (such as the operational settings) or metadata captured by the hearing device when the sample was recorded.
  • the recorded data 302 can additionally include an input from the recipient of the hearing device.
  • the input can be provided by the recipient using, for example, an application stored on a mobile device or can be provided verbally.
  • the input can include, for example, an indication of struggles experienced in the sound environment or an indication of what sounds in the environment should be enhanced.
  • the additional input can describe that the speech perception was reduced due to background noise.
  • the additional input can indicate that the recipient would like to better perceive the conversation around the table instead of the background noise/conversation or that the recipient would like the immediate conversation to be louder and/or the background conversation to be quieter.
  • the recipient can choose the input from a selection of items provided on a user interface of an application.
  • the recipient can provide input verbally by providing instructions or commands aloud. For example, the recipient can say “I didn’t like the way the conversation sounded” or “Make the conversation sounds louder.”
  • the recording analyzer 304 can analyze the recorded sample of the sound environment and determine or generate one or more signal processing configuration suggestions 306 for the recorded sample.
  • the one or more signal processing configuration suggestions 306 are recommended operational settings for the hearing device that attempt to improve the perceived sounds in the sound environment.
  • the recording analyzer 304 can be an artificial intelligence module that uses artificial intelligence, such as artificial intelligence algorithms or machine learning models, to determine or generate the one or more signal processing configuration suggestions 306.
  • the one or more signal processing configuration suggestions 306 can be determined based on the recipient input (user input) and/or the additional data.
  • the one or more signal processing configuration suggestions 306 can additionally be determined or generated based on previous signal processing operational settings generated for other recipients and feedback or input received from the other recipients.
  • the one or more signal processing configuration suggestions 306 can include settings adjustments or settings configurations for the hearing device.
  • the different settings adjustments can include configuration changes that can be applied to the configurable signal processing blocks 204 to change the output stimulation signal associated with the recorded sample.
  • the different operational settings can change the output stimulation signal to, for example, attenuate the background noise, increase the perceived sound of the immediate conversation, etc.
  • the recording analyzer 304 can analyze the recorded sample with the settings that were used when the sample was recorded and can identify why the recipient was having difficulty hearing using the hearing device settings and how to adjust the settings to optimize the output stimulation signal. In another embodiment, the recording analyzer 304 can analyze the recorded sample and the recipient’s input to identify the issue the recipient was having and determine how to adjust the settings to address the issue.
  • the signal processing configuration suggestions 306 can be transmitted to the hearing device or stored (e.g., at a remote device) so the recipient can retrieve the signal processing configuration suggestions 306 at a later time.
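One way the recording analyzer's behavior could be sketched is as a function mapping a recorded sample plus the recipient's input to candidate settings adjustments. The keyword rules and setting names below are invented stand-ins for the analyzer's (possibly AI-based) reasoning, not the patent's method.

```python
def suggest_settings(raw_signal, recipient_input, current_settings):
    """Return candidate operational-settings adjustments for a recorded sample,
    based on the recipient's complaint. raw_signal is unused in this sketch;
    a real analyzer would extract acoustic features from it."""
    suggestions = []
    text = recipient_input.lower()
    if "noise" in text:
        if not current_settings.get("noise_reduction", False):
            suggestions.append({**current_settings, "noise_reduction": True,
                                "note": "activate noise reduction"})
        suggestions.append({**current_settings, "noise_reduction": True,
                            "directionality": "forward",
                            "note": "focus the microphone on the talker"})
    if "louder" in text:
        suggestions.append({**current_settings,
                            "speech_gain_db": current_settings.get("speech_gain_db", 0) + 3,
                            "note": "boost the speech band"})
    return suggestions

current = {"noise_reduction": False}
ideas = suggest_settings([0.0] * 16000,
                         "Speech perception was reduced due to background noise",
                         current)
```

Each suggestion is a complete settings configuration, so any one of them can be applied to the configurable processing blocks on its own when the sample is replayed.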
  • FIG. 4 is a block diagram illustrating an example in which the sample is replayed using different signal processing configuration suggestions 306 to identify optimal settings.
  • the recipient of the hearing device can be notified that there are available signal processing configuration suggestions 306.
  • the recipient can choose one of the operational settings and replay the digital signal (i.e., the audio sample) using replayer 402 with the chosen operational settings applied.
  • the recipient can play the digital signal with each of the signal processing configuration suggestions 306 applied separately to the configurable signal processing blocks 204 to determine which settings provide the best output signal for the recipient.
  • the recipient can compare the different settings to the original settings that were applied to the audio signal when the audio signal was recorded.
  • the recipient can indicate whether particular signal processing operational settings are an improvement over the original settings.
  • the recipient can test each of the received signal processing configuration suggestions 306 or stop testing the signal processing configuration suggestions 306 when the recipient has identified optimal operational settings.
  • the recipient can use a user interface associated with an application (e.g., on a mobile device) to select different signal processing operational settings options.
  • the user interface can provide several operational settings options.
  • the different options can provide additional information about how the settings have been adjusted (e.g., noise reduction has been activated, different sounds have been enhanced, etc.).
  • the recipient can select one of the options (e.g., by tapping on the option in the user interface) and the recorded audio signal can be played with the operational settings applied to the recorded audio.
  • the recipient can select another option until all of the options have been selected or until the recipient has identified the optimal operational settings.
  • the recipient can use the user interface associated with the application to indicate the preferred signal processing operational settings. For example, the recipient can select the best signal processing operational settings, rank the different signal processing operational settings, indicate signal processing operational settings to eliminate (e.g., if the recipient does not like the way the recorded sample with a particular signal processing configuration setting applied), or provide other input with respect to different signal processing operational settings.
  • the recipient can additionally provide input with respect to the different operational settings options (e.g., the speech was difficult to identify, some noises were sharp, etc.).
  • the recipient input can be used for providing operational settings options for similar sound environments in the future.
  • the recipient input can additionally be used by the recipient for determining ways to manually adjust operational settings in the future.
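The replay-and-compare procedure above can be condensed into a small evaluation loop. This is a hypothetical sketch: `evaluate_suggestions` and the `rate` callback (which stands in for replaying the sample through the device and collecting the recipient's judgement) are illustrative names, not part of the disclosure.

```python
def evaluate_suggestions(raw_signal, original_settings, suggestions, rate):
    """Replay the recorded sample under the original settings and under each
    suggested settings set, and return the one the recipient rates highest.
    rate(signal, settings) stands in for replay plus recipient feedback."""
    best_settings = original_settings
    best_score = rate(raw_signal, original_settings)
    for candidate in suggestions:
        score = rate(raw_signal, candidate)
        if score > best_score:
            best_settings, best_score = candidate, score
    return best_settings

original = {"noise_reduction": False}
candidates = [{"noise_reduction": True},
              {"noise_reduction": True, "speech_gain_db": 3}]
# A recipient who prefers noise reduction plus some extra speech gain:
rating = lambda sig, s: int(s.get("noise_reduction", False)) + s.get("speech_gain_db", 0)
chosen = evaluate_suggestions([0.1, 0.2], original, candidates, rating)
```

Including the original settings in the comparison mirrors the described workflow, where the recipient judges whether a suggestion actually improves on what was applied when the sample was recorded.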
  • the recipient may not transmit the audio sample to the recording analyzer 304 for analysis.
  • the recipient can adjust the operational settings manually and determine settings adjustments that improve the audio of the recorded sample.
  • the recipient can replay the audio sample while activating noise reduction (if noise reduction had not been activated at the time of recording the sample).
  • the recipient can try different settings adjustments and identify settings adjustments that improve the perceived sound of the audio sample.
  • the recipient can provide information about the preferred or improved operational settings using, for example, an application on a mobile device.
  • recording analyzer 304 can use an AI algorithm to provide a dynamic real-time process for identifying optimal operational settings for the audio sample.
  • the recipient can provide a text or verbal input and the recording analyzer 304 can automatically provide operational settings adjustments in near real time.
  • the recording analyzer 304 can provide operational settings adjustments for a sound environment that a recipient of a hearing device was previously in (i.e., when the sample was recorded) or for a sound environment that the recipient is currently in.
  • a sample can be recorded and transmitted to the recording analyzer 304 or information associated with the audio environment (e.g., the environment is a music environment, a noisy environment, a noisy environment with speech, etc.) can be transmitted to the recording analyzer 304.
  • the recording analyzer 304 can determine adjusted operational settings based on the recipient input and the sound environment and can transmit the adjusted operational settings to the hearing device. If the recipient is no longer in the sound environment, the hearing device can automatically play the recording of the sound environment with the adjusted operational settings applied. If the recipient is in the sound environment, the hearing device can adjust the operational settings based on the input received from the recording analyzer 304.
  • the recipient may, for example, provide an input indicating that no new operational settings are needed, or provide no input or response. If the recipient is dissatisfied with the adjusted operational settings, the recipient can provide a new input or response (e.g., indicating a problem with the adjusted operational settings or a different setting to adjust) and the recording analyzer 304 can provide further adjusted operational settings.
  • the hearing device can apply the new operational settings to the hearing device (if the recipient is in the sound environment) or replay the sound recording with the new operational settings.
  • the recipient can continue to request new operational settings until optimal configuration settings for the environment are received. In this way, the recipient can adjust the sound in real time if the recipient is dissatisfied with the sound in a particular environment. By using AI algorithms, the recipient can quickly try different settings until optimal settings adjustments have been applied. These embodiments provide a dynamic real-time process that optimizes the sound for a recipient in different sound environments.
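The query-until-satisfied loop described above can be sketched as follows. The function and callback names (`refine_settings`, `analyzer`, `get_query`, `apply_and_play`) are hypothetical; the analyzer stands in for the AI algorithm and the apply-and-play callback for applying settings to the device (or replaying the recording).

```python
def refine_settings(initial_settings, analyzer, get_query, apply_and_play, max_rounds=5):
    """Conversational loop sketch: the recipient provides a query, the
    (assumed) AI analyzer returns adjusted settings, and the device applies
    them and replays the sample, until the recipient stops asking."""
    settings = dict(initial_settings)
    for _ in range(max_rounds):
        query = get_query()
        if not query:  # empty query: recipient is satisfied
            break
        settings = analyzer(settings, query)
        apply_and_play(settings)
    return settings

queries = iter(["make the conversation louder", ""])
played = []
final = refine_settings(
    {"speech_gain_db": 0},
    analyzer=lambda s, q: {**s, "speech_gain_db": s["speech_gain_db"] + 3},
    get_query=lambda: next(queries, ""),
    apply_and_play=played.append,
)
```

The `max_rounds` bound is a design choice for the sketch so the loop terminates even if the recipient never signals satisfaction.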
  • Information about the signal processing configuration suggestions 306 can be stored with information associated with the audio sample and optional additional information. For example, information associated with the sound environment when the audio sample was recorded can be stored with information associated with the recipient’s preferred operational settings for the audio sample. The same or similar operational settings can then be applied to the recipient’s hearing device when the recipient is in a same or similar sound environment. In this way, the recipient’s hearing experience can be optimized.
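Storing preferred settings keyed by an environment descriptor, and re-applying them in a similar environment, could be sketched as a fingerprint store with a similarity lookup. The fingerprinting scheme (normalized sound-class proportions compared by cosine similarity) and the `SettingsStore` name and threshold are assumptions for illustration, not the disclosed mechanism.

```python
import math

def environment_features(label_counts):
    # Crude fingerprint: normalized proportions of detected sound classes.
    total = sum(label_counts.values()) or 1
    return {k: v / total for k, v in label_counts.items()}

def similarity(a, b):
    # Cosine similarity between two sparse feature dicts.
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SettingsStore:
    """Store preferred settings keyed by an environment fingerprint and
    look up the best match for the current environment."""
    def __init__(self, threshold=0.8):
        self.entries = []  # list of (features, settings)
        self.threshold = threshold

    def save(self, features, settings):
        self.entries.append((features, settings))

    def lookup(self, features):
        best = max(self.entries, key=lambda e: similarity(e[0], features),
                   default=None)
        if best and similarity(best[0], features) >= self.threshold:
            return best[1]
        return None

store = SettingsStore(threshold=0.8)
store.save(environment_features({"speech": 6, "babble": 4}), {"noise_reduction": True})
match = store.lookup(environment_features({"speech": 5, "babble": 5}))     # similar restaurant
no_match = store.lookup(environment_features({"music": 10}))               # different environment
```

The threshold keeps stored settings from being applied in environments that merely share a few sound classes with the one where the preference was recorded.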
  • FIG. 5 is a flow diagram illustrating a method 500 of adjusting operational settings of a medical device based on environmental data previously recorded by the medical device.
  • Method 500 begins at 502 where environmental data previously recorded by a medical device is obtained from a memory device. The environmental data was previously recorded by the medical device during a prior period of time in which the environmental data was used by the medical device to deliver stimulation signals (e.g., acoustic stimulation signals, mechanical stimulation signals, electrical stimulation signals, etc.) to a recipient of the medical device in accordance with a plurality of operational settings.
  • the environmental data obtained from the memory device is used to adjust one or more of the plurality of operational settings.
  • FIG. 6 is a flow diagram illustrating a method 600 of using retrieved audio data and settings data to adjust operational settings of a hearing device.
  • Method 600 begins at 602 where audio data representative of an ambient environment associated with a hearing device is captured at the hearing device.
  • settings data is captured at the hearing device contemporaneously with the capturing of the audio data.
  • the settings data represent one or more operational settings of the hearing device at the time the audio data is captured.
  • the audio data and the settings data are stored in a memory.
  • the audio data and the settings data are retrieved from the memory.
  • the retrieved audio data and the retrieved settings data are used to adjust at least one of the one or more operational settings of the hearing device.
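The steps of method 600 (602 through 610) can be sketched end to end. This is a structural sketch under assumed names: `capture_audio`, `capture_settings`, `memory`, and `adjust` are hypothetical stand-ins for the hearing device's capture, storage, and settings-adjustment components.

```python
def method_600(capture_audio, capture_settings, memory, adjust):
    """End-to-end sketch of method 600: capture audio and contemporaneous
    settings (602, 604), store both (606), retrieve them later (608), and
    use them to adjust the device's operational settings (610)."""
    audio = capture_audio()          # 602: audio data for the ambient environment
    settings = capture_settings()    # 604: settings active at capture time
    memory["audio"], memory["settings"] = audio, settings          # 606: store
    stored_audio = memory["audio"]                                 # 608: retrieve
    stored_settings = memory["settings"]
    return adjust(stored_audio, stored_settings)                   # 610: adjust

memory = {}
adjusted = method_600(
    capture_audio=lambda: [0.1, 0.2, 0.3],
    capture_settings=lambda: {"gain": 1.0},
    memory=memory,
    adjust=lambda audio, settings: {**settings, "gain": settings["gain"] * 2},
)
```

Keeping the settings data contemporaneous with the audio is what lets step 610 compare adjusted settings against the configuration that was actually in effect when the sample was recorded.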
  • although embodiments herein have been described with respect to a hearing device and a sound environment, the techniques can be applied to other types of environmental inputs or signals, such as other ambient signals (e.g., light signals).
  • techniques described herein can be applied to any signal coming from the environment that can benefit from filtering or signal adjustment.
  • Embodiments described herein can be applied to different types of implants or prosthetic devices, such as vestibular implants, other types of implants that rely on sound and/or light inputs, retinal prostheses, other types of automatic prosthetic devices, etc.
  • the technology disclosed herein can be applied in any of a variety of circumstances and with a variety of different devices.
  • Example devices that can benefit from technology disclosed herein are described in more detail in FIGS. 7 and 8.
  • the techniques of the present disclosure can be applied to other devices, such as neurostimulators, cardiac pacemakers, cardiac defibrillators, sleep apnea management stimulators, seizure therapy stimulators, tinnitus management stimulators, and vestibular stimulation devices, as well as other medical devices that deliver stimulation to tissue.
  • technology described herein can also be applied to consumer devices. These different systems and devices can benefit from the technology described herein.
  • FIG. 7 illustrates an example vestibular stimulator system 702, with which embodiments presented herein can be implemented.
  • the vestibular stimulator system 702 comprises an implantable component (vestibular stimulator) 712 and an external device/component 704 (e.g., external processing device, battery charger, remote control, etc.).
  • the external device 704 comprises a transceiver unit 760.
  • the external device 704 is configured to transfer data (and potentially power) to the vestibular stimulator 712.
  • the vestibular stimulator 712 comprises an implant body (main module) 734, a lead region 736, and a stimulating assembly 716, all configured to be implanted under the skin/tissue (tissue) 715 of the recipient.
  • the implant body 734 generally comprises a hermetically-sealed housing 738 in which RF interface circuitry, one or more rechargeable batteries, one or more processors, and a stimulator unit are disposed.
  • the implant body 734 also includes an internal/implantable coil 714 that is generally external to the housing 738, but which is connected to the transceiver via a hermetic feedthrough (not shown).
  • the stimulating assembly 716 comprises a plurality of electrodes 744(l)-(3) disposed in a carrier member (e.g., a flexible silicone body).
  • the stimulating assembly 716 comprises three (3) stimulation electrodes, referred to as stimulation electrodes 744(1), 744(2), and 744(3).
  • the stimulation electrodes 744(1), 744(2), and 744(3) function as an electrical interface for delivery of electrical stimulation signals to the recipient’s vestibular system.
  • the stimulating assembly 716 is configured such that a surgeon can implant the stimulating assembly adjacent the recipient’s otolith organs via, for example, the recipient’s oval window. It is to be appreciated that this specific embodiment with three stimulation electrodes is merely illustrative and that the techniques presented herein can be used with stimulating assemblies having different numbers of stimulation electrodes, stimulating assemblies having different lengths, etc.
  • the vestibular stimulator 712, the external device 704, and/or another external device can be configured to implement the techniques presented herein. That is, the vestibular stimulator 712, possibly in combination with the external device 704 and/or another external device, can include an evoked biological response analysis system, as described elsewhere herein.
  • FIG. 8 illustrates a retinal prosthesis system 801 that comprises an external device 810 configured to communicate with an implantable retinal prosthesis 800 via signals 851.
  • the retinal prosthesis 800 comprises an implanted processing module 825, and a retinal prosthesis sensor-stimulator 890 is positioned proximate the retina of a recipient.
  • the external device 810 and the processing module 825 can communicate via coils 808, 814.
  • sensory inputs are absorbed by a microelectronic array of the sensor-stimulator 890 that is hybridized to a glass piece 892 including, for example, an embedded array of microwires.
  • the glass can have a curved surface that conforms to the inner radius of the retina.
  • the sensor-stimulator 890 can include a microelectronic imaging device that can be made of thin silicon containing integrated circuitry that converts the incident photons to an electronic charge.
  • the processing module 825 includes an image processor 823 that is in signal communication with the sensor-stimulator 890 via, for example, a lead 888 that extends through surgical incision 889 formed in the eye wall. In other examples, processing module 825 is in wireless communication with the sensor-stimulator 890.
  • the image processor 823 processes the input into the sensor-stimulator 890 and provides control signals back to the sensor-stimulator 890 so the device can provide an output to the optic nerve. That said, in an alternate example, the processing is executed by a component proximate to, or integrated with, the sensor-stimulator 890.
  • the electric charge resulting from the conversion of the incident photons is converted to a proportional amount of electronic current which is input to a nearby retinal cell layer. The cells fire and a signal is sent to the optic nerve, thus inducing a sight perception.
  • the processing module 825 can be implanted in the recipient and function by communicating with the external device 810, such as a BTE unit, a pair of eyeglasses, etc.
  • the external device 810 can include an external light/image capture device (e.g., located in/on a behind-the-ear device or a pair of glasses, etc.), while, as noted above, in some examples the sensor-stimulator 890, implanted in the recipient, captures the light/images.
  • systems and non-transitory computer readable storage media are provided.
  • the systems are configured with hardware configured to execute operations analogous to the methods of the present disclosure.
  • the one or more non-transitory computer readable storage media comprise instructions that, when executed by one or more processors, cause the one or more processors to execute operations analogous to the methods of the present disclosure.
  • when steps of a process are disclosed, those steps are described for purposes of illustrating the present methods and systems and are not intended to limit the disclosure to a particular sequence of steps. For example, the steps can be performed in differing orders, two or more steps can be performed concurrently, additional steps can be performed, and disclosed steps can be excluded without departing from the present disclosure. Further, the disclosed processes can be repeated.
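The photon-to-current conversion chain described for the sensor-stimulator 890 can be sketched numerically. The gain and safety-limit values below are illustrative assumptions only, not parameters taken from this publication; the sketch simply shows incident photons producing a proportional, clipped stimulation current.

```python
# Hypothetical sketch of the conversion chain: incident photons yield an
# electronic charge, which is mapped to a proportional stimulation current
# delivered to the nearby retinal cell layer. Gain and limit are assumed.
def photons_to_current_uA(photon_count: int,
                          gain_uA_per_photon: float = 0.002,
                          max_uA: float = 50.0) -> float:
    """Map a pixel's photon count to a clipped, proportional current (uA)."""
    current = photon_count * gain_uA_per_photon   # proportional conversion
    return min(current, max_uA)                   # respect a safe upper bound

frame = [1000, 20000, 40000]                      # photon counts per pixel
currents = [photons_to_current_uA(p) for p in frame]
print(currents)  # [2.0, 40.0, 50.0]
```

The clipping step reflects only the general idea that stimulation output stays within a bounded range; an actual device would derive such limits from its fitting and safety parameters.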

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Cardiology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Prostheses (AREA)

Abstract

Techniques are provided for adjusting the settings of a medical device or hearing device based on recorded environmental data. Environmental data previously recorded by a medical device or hearing device is obtained from a memory, the environmental data having been recorded during a prior period in which the environmental data was used by the medical device or hearing device to deliver stimulation signals to a recipient of the medical device in accordance with a plurality of operational settings. The environmental data obtained from the memory is used to adjust one or more operational settings of the plurality of operational settings.
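The adjustment loop summarized in the abstract could be sketched roughly as follows. The scene labels, the dominance threshold, and the `noise_reduction` setting name are all illustrative assumptions, not details from this publication; the sketch only shows logged environmental data being read back and used to adjust an operational setting.

```python
# Hypothetical sketch: environmental data logged during a prior period
# (e.g., per-interval sound-scene classifications) is read back from
# memory and used to adjust one of the device's operational settings.
from collections import Counter

def adjust_settings(settings: dict, logged_scenes: list) -> dict:
    """Return a copy of `settings` tuned to the dominant logged scene."""
    if not logged_scenes:
        return dict(settings)                 # nothing logged: keep as-is
    dominant, count = Counter(logged_scenes).most_common(1)[0]
    adjusted = dict(settings)
    # Only adjust when one environment clearly dominates the log.
    if count / len(logged_scenes) >= 0.5:
        if dominant == "noise":
            adjusted["noise_reduction"] = min(adjusted["noise_reduction"] + 1, 3)
        elif dominant == "quiet":
            adjusted["noise_reduction"] = max(adjusted["noise_reduction"] - 1, 0)
    return adjusted

log = ["noise", "noise", "quiet", "noise"]    # recorded during a prior period
print(adjust_settings({"noise_reduction": 1}, log))  # {'noise_reduction': 2}
```

In practice the logged data, the settings adjusted, and the decision rule would be whatever the device and fitting software define; this merely illustrates the obtain-from-memory-then-adjust flow.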
PCT/IB2025/054897 2024-05-17 2025-05-09 Settings based on recorded environmental data Pending WO2025238503A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202463648700P 2024-05-17 2024-05-17
US63/648,700 2024-05-17

Publications (1)

Publication Number Publication Date
WO2025238503A1 (fr) 2025-11-20

Family

ID=97719589

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2025/054897 Pending WO2025238503A1 (fr) 2024-05-17 2025-05-09 Settings based on recorded environmental data

Country Status (1)

Country Link
WO (1) WO2025238503A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050129262A1 (en) * 2002-05-21 2005-06-16 Harvey Dillon Programmable auditory prosthesis with trainable automatic adaptation to acoustic conditions
KR20140098615A (ko) * 2013-01-31 2014-08-08 Samsung Electronics Co., Ltd. Method for fitting a hearing aid connected to a mobile terminal, and mobile terminal performing the same
KR20160129752A (ko) * 2015-04-30 2016-11-09 Samsung Electronics Co., Ltd. Sound output device, electronic device, and control method thereof
US20200112802A1 (en) * 2015-04-10 2020-04-09 Cochlear Limited Systems and method for adjusting auditory prostheses settings
US20240017065A1 (en) * 2016-12-05 2024-01-18 Soundwave Hearing, Llc Optimization tool for auditory devices

Similar Documents

Publication Publication Date Title
US12485285B2 (en) Individualized adaptation of medical prosthesis settings
US20250063311A1 (en) User-preferred adaptive noise reduction
WO2025012805A1 (fr) Multimodal neurological monitoring system
US20250266033A1 (en) Dynamic list-based speech testing
US20240382751A1 (en) Clinician task prioritization
US12375196B2 (en) Broadcast selection
WO2025238503A1 (fr) Settings based on recorded environmental data
US20240325746A1 (en) User interfaces of a hearing device
WO2025114819A1 (fr) Device personalization
US20250071492A1 (en) Tinnitus remediation with speech perception awareness
US20240306945A1 (en) Adaptive loudness scaling
WO2025233755A1 (fr) Concurrent clinical record generation
CN120094098A (zh) Physiological measurement management using prosthesis technology and/or other technologies
US20250329266A1 (en) Environmental signal recognition training
EP4228740B1 (fr) Prosthesis self-fitting
WO2024228091A1 (fr) Monitoring the sociability of a user
US20250194959A1 (en) Targeted training for recipients of medical devices
WO2025062297A1 (fr) Adjustment of device operations based on environment data
US20250128061A1 (en) Balanced hearing device loudness control
WO2025219861A1 (fr) Monitoring the calibration of a body noise reduction system
WO2025257699A1 (fr) Sound attenuation
WO2025109443A1 (fr) Noise reduction filter calibration for an implantable device
WO2025210451A1 (fr) Data-derived device parameter determination
CN120456955A (zh) Audiological intervention
WO2025153924A1 (fr) Implantable sensor device