
WO2024228091A1 - Monitoring sociability of a user - Google Patents

Monitoring sociability of a user

Info

Publication number
WO2024228091A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
processor
sociability
biometric data
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IB2024/054020
Other languages
English (en)
Inventor
Paul Reinhart
Bridget TIERNAN
Birgit PHILIPS
Peter Gibson
Kerrie Plant
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cochlear Ltd
Original Assignee
Cochlear Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cochlear Ltd filed Critical Cochlear Ltd
Publication of WO2024228091A1
Anticipated expiration
Legal status: Pending (Current)


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/78 Detection of presence or absence of voice signals
    • G10L25/84 Detection of presence or absence of voice signals for discriminating voice from noise
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
    • A61B5/1123 Discriminating type of movement, e.g. walking or running
    • A61B5/12 Audiometering
    • A61B5/121 Audiometering evaluating hearing capacity
    • A61B5/125 Audiometering evaluating hearing capacity objective methods
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6846 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be brought in contact with an internal body part, i.e. invasive
    • A61B5/6847 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be brought in contact with an internal body part, i.e. invasive mounted on an invasive device
    • A61B5/686 Permanently implanted devices, e.g. pacemakers, other stimulators, biochips
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/558 Remote control, e.g. of amplification, frequency
    • H04R25/70 Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
    • H04R2225/00 Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/41 Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest

Definitions

  • the present disclosure relates generally to monitoring sociability of a user of a device, such as a hearing or medical device.
  • Medical devices have provided a wide range of therapeutic benefits to recipients over recent decades.
  • Medical devices can include internal or implantable components/devices, external or wearable components/devices, or combinations thereof (e.g., a device having an external component communicating with an implantable component).
  • Medical devices such as traditional hearing aids, partially or fully-implantable hearing prostheses (e.g., bone conduction devices, mechanical stimulators, cochlear implants, etc.), pacemakers, defibrillators, functional electrical stimulation devices, and other medical devices, have been successful in performing lifesaving and/or lifestyle enhancement functions and/or recipient monitoring for a number of years.
  • implantable medical devices now often include one or more instruments, apparatus, sensors, processors, controllers or other functional mechanical or electrical components that are permanently or temporarily implanted in a recipient. These functional devices are typically used to diagnose, prevent, monitor, treat, or manage a disease/injury or symptom thereof, or to investigate, replace or modify the anatomy or a physiological process. Many of these functional devices utilize power and/or data received from external devices that are part of, or operate in conjunction with, implantable components.
  • a method comprises: determining, via a processor, a first metric of a listening situation associated with a hearing device user; determining, via the processor, a second metric of an activity associated with the hearing device user in the listening situation; and determining, via the processor, a sociability index based on the first metric and the second metric.
  • the processor can be implemented, for example, as part of a cochlear implant system, a mobile device, computing device, and/or a network platform (e.g., a cloud server).
  • one or more non-transitory computer readable storage media comprise instructions that, when executed by a processor, cause the processor to: receive data from a sensor associated with a device configured to be worn by a user; determine, based on the data, a listening situation of the user and an activity of the user in the listening situation; and determine a sociability index based on the listening situation of the user and the activity of the user in the listening situation.
  • a method comprises: receiving, via a processor, biometric data of a hearing device user; determining, via the processor, a listening effort metric based on the biometric data; and determining, via the processor, a sociability index based on the listening effort metric.
  • a device comprising: a memory configured to store instructions; and one or more processors that, upon executing the instructions stored on the memory, are configured to: determine a person in conversational engagement with a user of the device; determine a social network metric based on the person in conversational engagement with the user; and determine a sociability index based on the social network metric.
  • FIG. 1A is a schematic diagram illustrating a cochlear implant system with which aspects of the techniques presented herein can be implemented;
  • FIG. 1B is a side view of a recipient wearing a sound processing unit of the cochlear implant system of FIG. 1A;
  • FIG. 1C is a schematic view of components of the cochlear implant system of FIG. 1A;
  • FIG. 1D is a block diagram of the cochlear implant system of FIG. 1A;
  • FIG. 1E is a schematic diagram illustrating a computing device with which aspects of the techniques presented herein can be implemented;
  • FIG. 2 is a diagram illustrating various metrics used to determine a sociability index, in accordance with certain embodiments presented herein;
  • FIG. 3 is a flowchart illustrating a method for changing how a sociability index is determined, in accordance with certain embodiments presented herein;
  • FIG. 4 is a schematic diagram illustrating a mobile device configured to display information regarding a sociability index, in accordance with certain embodiments presented herein;
  • FIG. 5 is a schematic diagram illustrating a network of systems that communicate information regarding a sociability index, in accordance with certain embodiments presented herein;
  • FIG. 6A is a flowchart illustrating a method for determining a sociability index, in accordance with certain embodiments presented herein;
  • FIG. 6B is a flowchart illustrating a method for determining a sociability index, in accordance with certain embodiments presented herein;
  • FIG. 7 is a flowchart illustrating a method for providing a recommendation based on a sociability index, in accordance with certain embodiments presented herein;
  • FIG. 8 is a schematic diagram illustrating a vestibular stimulator system with which aspects of the techniques presented herein can be implemented.
  • FIG. 9 is a schematic diagram illustrating a retinal prosthesis system with which aspects of the techniques presented herein can be implemented.
  • Presented herein are techniques for monitoring a sociability index of a user (e.g., a recipient) of a “user device,” that is, a device that is carried by, worn by, or implanted in the user.
  • Factors used to determine the sociability index can include, but are not limited to, a listening situation of the user and/or an activity associated with the user in the listening situation.
  • A user device (e.g., a portable device carried by a user, a wearable device worn by a user, an implantable device having one or more components implanted in the user, etc.) can be configured to receive information associated with the user.
  • The received information can include, for example, acoustic data associated with the user and/or a surrounding environment, biometric data of the user, and/or location data of the user.
  • Such information can be received via sensor data and/or user inputs.
  • For example, a microphone or acoustic detector can detect the acoustic data.
  • the techniques presented herein can be beneficial for monitoring the sociability index of a user to determine whether the user is functioning desirably in social settings. For instance, individuals utilizing a hearing device to mitigate their hearing impairment can have difficulty maintaining a desired sociability because of different factors, including continued difficulty in hearing, difficulty discerning competing noise sources (e.g., other speakers), and unfamiliarity with usage of their hearing device. An individual’s health and wellness can be affected by undesirable or low social activity. Therefore, it is desirable to maintain or improve an individual’s social patterns to establish greater wellbeing for the individual.
  • the sociability index provides a more tangible representation regarding whether the user’s social patterns are desirable.
  • providing the user with information related to the sociability index can prompt the user to adjust, maintain, or otherwise manage their behavior to achieve a target sociability index.
  • the sociability index can be compared to a target or threshold level, and additional information or assistance can be provided based on the comparison, such as in response to the sociability index being below the target level.
  • a recommendation can be provided to the user based on their sociability index.
  • Additionally or alternatively, information regarding the sociability index can be provided to another individual (e.g., a significant other, a caretaker, a healthcare provider, a therapist, etc.).
  • the monitoring of the sociability index, as well as the additional actions performed based on the sociability index, can help fulfill the user’s social goals and improve their comfort in utilizing the user device.
  • the techniques presented herein could be implemented by hearing devices, various implantable medical devices, such as vestibular devices (e.g., vestibular implants), visual devices (i.e., bionic eyes), sensors, pacemakers, drug delivery systems, defibrillators, functional electrical stimulation devices, catheters, seizure devices (e.g., devices for monitoring and/or treating epileptic events), sleep apnea devices, electroporation devices, etc.
  • the techniques presented herein can be implemented with air purifiers or air sensors (e.g., automatically adjust depending on environment), hospital beds, identification (ID) badges/bands, or other hospital equipment or instruments, or the like.
  • the term “hearing device” is to be broadly construed as any device that delivers sound signals to a user in any form, including in the form of acoustical stimulation, mechanical stimulation, electrical stimulation, optical stimulation, etc.
  • a hearing device can be a device for use by a hearing-impaired person (e.g., hearing aids, middle ear auditory prostheses, bone conduction devices, direct acoustic stimulators, electro-acoustic hearing prostheses, auditory brainstem stimulators, bimodal hearing prostheses, bilateral hearing prostheses, dedicated tinnitus therapy devices, tinnitus therapy device systems, combinations or variations thereof, etc.) or a device for use by a person with normal hearing (e.g., consumer devices that provide audio streaming, consumer headphones, earphones and other listening devices).
  • FIGs. 1A-1D illustrate an example cochlear implant system 102 with which aspects of the techniques presented herein can be implemented.
  • the cochlear implant system 102 comprises an external component 104 that is configured to be directly or indirectly attached to the body of the user, and an internal/implantable component 112 that is configured to be implanted in or worn on the head of the user.
  • the implantable component 112 is sometimes referred to as a “cochlear implant.”
  • FIG. 1A illustrates the cochlear implant 112 implanted in the head 154 of a user
  • FIG. 1B is a schematic drawing of the external component 104 worn on the head 154 of the user.
  • FIG. 1C is another schematic view of the cochlear implant system 102
  • FIG. 1D illustrates further details of the cochlear implant system 102.
  • FIGs. 1A-1D will generally be described together.
  • the external component 104 comprises a sound processing unit 106, an external coil 108, and generally, a magnet fixed relative to the external coil 108.
  • the cochlear implant 112 includes an implantable coil 114, an implant body 134, and an elongate stimulating assembly 116 configured to be implanted in the user’s cochlea.
  • the sound processing unit 106 is an off-the-ear (OTE) sound processing unit, sometimes referred to herein as an OTE component, that is configured to send data and power to the implantable component 112.
  • an OTE sound processing unit is a component having a generally cylindrically shaped housing 111 and which is configured to be magnetically coupled to the user’s head 154 (e.g., includes an integrated external magnet 150 configured to be magnetically coupled to an internal/implantable magnet 152 in the implantable component 112).
  • the OTE sound processing unit 106 also includes an integrated external (headpiece) coil 108 (the external coil 108) that is configured to be inductively coupled to the implantable coil 114.
  • the OTE sound processing unit 106 is merely illustrative of the external devices that could operate with implantable component 112.
  • the external component 104 may comprise a behind-the-ear (BTE) sound processing unit configured to be attached to, and worn adjacent to, the recipient’s ear.
  • a BTE sound processing unit comprises a housing that is shaped to be worn on the outer ear of the user and is connected to a separate external coil assembly via a cable, where the external coil assembly is configured to be magnetically and inductively coupled to the implantable coil 114.
  • alternative external components could be located in the user’s ear canal, worn on the body, etc.
  • although the cochlear implant system 102 includes the sound processing unit 106 and the cochlear implant 112, as described below, the cochlear implant 112 can operate independently from the sound processing unit 106, for at least a period, to stimulate the user.
  • the cochlear implant 112 can operate in a first general mode, sometimes referred to as an “external hearing mode,” in which the sound processing unit 106 captures sound signals, which are then used as the basis for delivering stimulation signals to the user.
  • the cochlear implant 112 can also operate in a second general mode, sometimes referred to as an “invisible hearing” mode, in which the sound processing unit 106 is unable to provide sound signals to the cochlear implant 112 (e.g., the sound processing unit 106 is not present, the sound processing unit 106 is powered-off, the sound processing unit 106 is malfunctioning, etc.).
  • the cochlear implant 112 captures sound signals itself via implantable sound sensors and then uses those sound signals as the basis for delivering stimulation signals to the user. Further details regarding operation of the cochlear implant 112 in the external hearing mode are provided below, followed by details regarding operation of the cochlear implant 112 in the invisible hearing mode. It is to be appreciated that reference to the external hearing mode and the invisible hearing mode is merely illustrative and that the cochlear implant 112 could also operate in alternative modes.
  • the cochlear implant system 102 is shown with an external device 110, configured to implement aspects of the techniques presented.
  • the external device 110, which is shown in greater detail in FIG. 1E, is a computing device, such as a personal computer (e.g., laptop, desktop, tablet), a mobile phone (e.g., smartphone), remote control unit, etc.
  • the external device 110 and the cochlear implant system 102 (e.g., the sound processing unit 106 or the cochlear implant 112) wirelessly communicate via a bi-directional communication link 126.
  • the bi-directional communication link 126 may comprise, for example, a short-range communication link, such as a Bluetooth link, a Bluetooth Low Energy (BLE) link, a proprietary link, etc.
  • the sound processing unit 106 of the external component 104 also comprises one or more input devices configured to capture and/or receive input signals (e.g., sound or data signals) at the sound processing unit 106.
  • the one or more input devices include, for example, one or more sound input devices 118 (e.g., one or more external microphones, audio input ports, telecoils, etc.), one or more auxiliary input devices 128 (e.g., audio ports, such as a Direct Audio Input (DAI), data ports, such as a Universal Serial Bus (USB) port, cable port, etc.), and a short-range wireless transmitter/receiver (wireless transceiver) 120 (e.g., for communication with the external device 110), each located in, on, or near the sound processing unit 106.
  • one or more input devices may include additional types of input devices and/or fewer input devices (e.g., the short-range wireless transceiver 120 and/or one or more auxiliary input devices 128 could be omitted).
  • the sound processing unit 106 also comprises the external coil 108, a charging coil 130, a closely-coupled radio frequency transmitter/receiver (RF transceiver) 122, at least one rechargeable battery 132, and an external sound processing module 124.
  • the external sound processing module 124 can be configured to perform a number of operations which are represented in FIG. 1D by an environmental classifier 131, a sound processor 133, and an own voice detector 135.
  • Each of the environmental classifier 131, the sound processor 133, and the own voice detector 135 can be formed by one or more processors (e.g., one or more Digital Signal Processors (DSPs), one or more microcontroller cores, etc.), firmware, software, etc. arranged to perform operations described herein.
  • the environmental classifier 131, the sound processor 133, and the own voice detector 135 can each be implemented as firmware elements, partially or fully implemented with digital logic gates in one or more application-specific integrated circuits (ASICs), partially or fully in software, etc.
  • although FIG. 1D illustrates an environmental classifier 131, a sound processor 133, and an own voice detector 135 as being implemented/performed at the external sound processing module 124, it is to be appreciated that these elements (e.g., functional operations) could also or alternatively be implemented/performed as part of the implantable sound processing module 158, as part of the external device 110, etc.
  • the environmental classifier 131 (e.g., one or more processing elements implementing firmware, software, etc.) is configured to determine an environmental classification of the sound environment (i.e., determines the “class” or “category” of the sound environment) associated with the input audio signals received at the cochlear implant system 102.
  • the environmental classifier 131 includes a decision tree, sometimes referred to as an environmental classifier decision tree that, in certain embodiments, can be trained/updated.
  • the own voice detector 135 (e.g., one or more processing elements implementing firmware, software, etc.) is configured to perform own voice detection (OVD) and includes a decision tree, sometimes referred to herein as an OVD decision tree, that can be trained/updated.
  • the decision trees are stored in volatile memory and exposed to, for example, another process for updating thereof.
  • the environmental classifier 131 and the own voice detector 135 are at least partially implemented in volatile memory.
  • the environmental classification decision tree and the own voice detection decision tree can be dynamically updated on/by the device itself (e.g., cochlear implant system 102), or updated using an external computing device (e.g., external device 110).
  • OVD generally refers to a process in which speech signals received at a hearing device, such as a cochlear implant system 102, are classified as either including the “voice” or “speech” of the user (e.g., recipient) of the hearing device (referred to herein as the recipient’s own voice or simply “own voice”) or a voice or speech generated by one or more persons other than the recipient (referred to herein as “external voice”).
  • the implantable component 112 comprises an implant body (main module) 134, a lead region 136, and the intra-cochlear stimulating assembly 116, all configured to be implanted under the skin (tissue) 115 of the user.
  • the implant body 134 generally comprises a hermetically-sealed housing 138 that includes, in certain examples, at least one power source 125 (e.g., one or more batteries, one or more capacitors, etc.), RF interface circuitry 140, and a stimulator unit 142.
  • the implant body 134 also includes the internal/implantable coil 114 that is generally external to the housing 138, but which is connected to the RF interface circuitry 140 via a hermetic feedthrough (not shown in FIG. 1D).
  • stimulating assembly 116 is configured to be at least partially implanted in the user’s cochlea.
  • Stimulating assembly 116 includes a plurality of longitudinally spaced intra-cochlear electrical stimulating contacts (electrodes) 144 that collectively form a contact array (electrode array) 146 for delivery of electrical stimulation (current) to the recipient’s cochlea.
  • Stimulating assembly 116 extends through an opening in the recipient’s cochlea (e.g., cochleostomy, the round window, etc.) and has a proximal end connected to stimulator unit 142 via lead region 136 and a hermetic feedthrough (not shown in FIG. 1D).
  • Lead region 136 includes a plurality of conductors (wires) that electrically couple the electrodes 144 to the stimulator unit 142.
  • the implantable component 112 also includes an electrode outside of the cochlea, sometimes referred to as the extra-cochlear electrode (ECE) 139.
  • the cochlear implant system 102 includes the external coil 108 and the implantable coil 114.
  • the external magnet 150 is fixed relative to the external coil 108
  • the internal/implantable magnet 152 is fixed relative to the implantable coil 114.
  • the external magnet 150 and the internal/implantable magnet 152, fixed relative to the external coil 108 and the internal/implantable coil 114, respectively, facilitate the operational alignment of the external coil 108 with the implantable coil 114.
  • This operational alignment of the coils enables the external component 104 to transmit data and power to the implantable component 112 via a closely-coupled wireless link 148 formed between the external coil 108 and the implantable coil 114.
  • the closely-coupled wireless link 148 is a radio frequency (RF) link.
  • various other types of energy transfer, such as infrared (IR), electromagnetic, capacitive, and inductive transfer, may be used to transfer the power and/or data from an external component to an implantable component and, as such, FIG. 1D illustrates only one example arrangement.
  • sound processing unit 106 includes the external sound processing module 124.
  • the external sound processing module 124 is configured to process the received input audio signals (received at one or more of the input devices, such as sound input devices 118 and/or auxiliary input devices 128), and convert the received input audio signals into output control signals for use in stimulating a first ear of a recipient or user (i.e., the external sound processing module 124 is configured to perform sound processing on input signals received at the sound processing unit 106).
  • the one or more processors (e.g., processing element(s) implementing firmware, software, etc.) of the external sound processing module 124 are configured to execute sound processing logic in memory to convert the received input audio signals into output control signals (stimulation signals) that represent electrical stimulation for delivery to the recipient.
  • FIG. 1D illustrates an embodiment in which the external sound processing module 124 in the sound processing unit 106 generates the output control signals.
  • the sound processing unit 106 can send less processed information (e.g., audio data) to the implantable component 112, and the sound processing operations (e.g., conversion of input sounds to output control signals 156) can be performed by a processor within the implantable component 112.
  • output control signals are provided to the RF transceiver 122, which transcutaneously transfers the output control signals (e.g., in an encoded manner) to the implantable component 112 via external coil 108 and implantable coil 114. That is, the output control signals (stimulation signals) are received at the RF interface circuitry 140 via implantable coil 114 and provided to the stimulator unit 142.
  • the stimulator unit 142 is configured to utilize the output control signals to generate electrical stimulation signals (e.g., current signals) for delivery to the user’s cochlea via one or more of the stimulating contacts (electrodes) 144.
  • cochlear implant system 102 electrically stimulates the user’s auditory nerve cells, bypassing absent or defective hair cells that normally transduce acoustic vibrations into neural activity, in a manner that causes the recipient to perceive one or more components of the input audio signals (the received sound signals).
  • the cochlear implant 112 receives processed sound signals from the sound processing unit 106.
  • the cochlear implant 112 is configured to capture and process sound signals for use in electrically stimulating the user’s auditory nerve cells.
  • an example embodiment of the cochlear implant 112 can include an implantable sound processing module 158 and a plurality of implantable sound sensors 165(1), 165(2) that collectively form a sensor array 160.
  • the implantable sound processing module 158 may comprise, for example, one or more processors and a memory device (memory) that includes sound processing logic.
  • the memory device may comprise any one or more of: Non-Volatile Memory (NVM), Ferroelectric Random Access Memory (FRAM), read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices.
  • the one or more processors are, for example, microprocessors or microcontrollers that execute instructions for the sound processing logic stored in the memory device.
  • the implantable sound sensors 165(1), 165(2) of the sensor array 160 are configured to detect/capture input sound signals 166 (e.g., acoustic sound signals, vibrations, etc.), which are provided to the implantable sound processing module 158.
  • the implantable sound processing module 158 is configured to convert received input sound signals 166 (received at one or more of the implantable sound sensors 165(1), 165(2)) into output control signals 156 for use in stimulating the first ear of a recipient or user (i.e., the implantable sound processing module 158 is configured to perform sound processing operations).
  • the one or more processors (e.g., processing element(s) implementing firmware, software, etc.) of the implantable sound processing module 158 are configured to execute sound processing logic in memory to convert the received input sound signals 166 into output control signals 156 that are provided to the stimulator unit 142.
  • the stimulator unit 142 is configured to utilize the output control signals 156 to generate electrical stimulation signals (e.g., current signals) for delivery to the user’s cochlea, thereby bypassing the absent or defective hair cells that normally transduce acoustic vibrations into neural activity.
  • the cochlear implant 112 could use signals captured by the sound input devices 118 and the implantable sound sensors 165(1), 165(2) of sensor array 160 in generating stimulation signals for delivery to the user.
  • external sound processing module 124 may also include one or more other sensors 170 that determine other data.
  • the sensors 170 include a location sensor (e.g., a global positioning system (GPS)) configured to determine a geographical location of the cochlear implant system 102 and therefore of the user associated with the cochlear implant system 102.
  • the sensors 170 include a sensor, such as a heartbeat monitor, a temperature sensor, or a blood pressure sensor, configured to determine various biometric data associated with the user.
  • Such sensors 170 may be implemented in, for example, micro electromechanical systems (MEMS) or with other technology suitable for the particular application.
  • an inertial measurement unit (IMU) 180 including one or more sensors 185 is incorporated into implantable sound processing module 158 of implant body 134.
  • the IMU 180 is configured to measure the inertia of the user's head, that is, motion of the user's head.
  • IMU 180 comprises one or more sensors 185 each configured to sense one or more of rectilinear or rotatory motion in the same or different axes. Examples of sensors 185 that may be used as part of the IMU 180 include accelerometers, gyroscopes, inclinometers, compasses, and the like.
  • data received at the environmental classifier 131, the sound processor 133, the own voice detector 135, the sensors 170, and/or the IMU 180 can be used to monitor the sociability index of a user of the cochlear implant system 102.
  • FIG. 1E is a block diagram illustrating one example arrangement for an external computing device 110 configured to perform one or more operations in accordance with certain embodiments presented herein.
  • the external computing device 110 includes at least one processing unit 183 and a memory 184.
  • the processing unit 183 includes one or more hardware or software processors (e.g., Central Processing Units) that can obtain and execute instructions.
  • the processing unit 183 can communicate with and control the performance of other components of the external computing device 110.
  • the memory 184 is one or more software or hardware-based computer-readable storage media operable to store information accessible by the processing unit 183.
  • the memory 184 can store, among other things, instructions executable by the processing unit 183 to implement applications or cause performance of operations described herein, as well as other data.
  • the memory 184 can be volatile memory (e.g., RAM), non-volatile memory (e.g., ROM), or combinations thereof.
  • the memory 184 can include transitory memory or non-transitory memory.
  • the memory 184 can also include one or more removable or non-removable storage devices.
  • the memory 184 can include Electronically-Erasable Programmable Read-Only Memory (EEPROM), flash memory, optical disc storage, magnetic storage, solid state storage, or any other memory media usable to store information for later access.
  • the memory 184 can include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media or combinations thereof.
  • the memory 184 comprises sociability index logic 195 that, when executed, enables the processing unit 183 to perform aspects of the techniques presented.
  • the memory 184 also further comprises sociability index data 196, which may include various data (e.g., previously determined sociability indexes) that is utilized by and/or updated by the sociability index logic 195.
  • the external computing device 110 further includes a network adapter 186, one or more input devices 187, and one or more output devices 188.
  • the external computing device 110 can include other components, such as a system bus, component interfaces, a graphics system, a power source (e.g., a battery), among other components.
  • the network adapter 186 is a component of the external computing device 110 that provides network access (e.g., access to at least one network 189).
  • the network adapter 186 can provide wired or wireless network access and can support one or more of a variety of communication technologies and protocols, such as ETHERNET, cellular, BLUETOOTH, near-field communication, and RF (Radiofrequency), among others.
  • the network adapter 186 can include one or more antennas and associated components configured for wireless communication according to one or more wireless communication technologies and protocols.
  • the one or more input devices 187 are devices over which the external computing device 110 receives input from a user.
  • the one or more input devices 187 can include physically-actuatable user-interface elements (e.g., buttons, switches, or dials), a keypad, keyboard, mouse, touchscreen, and voice input devices, among other input devices that can accept user input.
  • the one or more output devices 188 are devices by which the computing device 110 is able to provide output to a user.
  • the output devices 188 can include a display 190 (e.g., a liquid crystal display (LCD)) and one or more speakers 191, among other output devices for presentation of visual or audible information to the recipient, a clinician, an audiologist, or other user.
  • it is to be appreciated that the external computing device 110 shown in FIG. 1E is merely illustrative and that aspects of the techniques presented herein can be implemented at a number of different types of systems/devices including any combination of hardware, software, and/or firmware configured to perform the functions described herein.
  • the external computing device 110 can be a personal computer (e.g., a desktop or laptop computer), a hand-held device (e.g., a tablet computer), a mobile device (e.g., a smartphone), a surgical system, and/or any other electronic device having the capabilities to perform the associated operations described elsewhere herein.
  • the techniques presented herein can be implemented by user devices, such as the cochlear implant system 102 of FIGS. 1A-1D, hearing devices, smartwatches, etc.
  • data related to a listening situation and data related to activity of a user in the listening situation are used to determine an overall sociability index.
  • the data is quantified and/or weighted, and the sociability index is calculated based on such data, such as using an equation, data model, or algorithm that defines a relationship between the sociability index and the data.
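  • As a purely illustrative sketch of such an equation (the weights, the normalization of each metric to [0, 1], and the 0-100 scale below are assumptions for illustration, not values from this disclosure), a listening-situation metric and an activity metric could be combined as follows:

```python
# Hypothetical sketch: combine a listening-situation metric and an activity
# metric into a single sociability index. The weights and the [0, 1]
# normalization are illustrative assumptions, not values from the disclosure.

def sociability_index(listening_metric: float,
                      activity_metric: float,
                      w_listening: float = 0.4,
                      w_activity: float = 0.6) -> float:
    """Return a sociability index in [0, 100] from two metrics in [0, 1]."""
    listening_metric = min(max(listening_metric, 0.0), 1.0)
    activity_metric = min(max(activity_metric, 0.0), 1.0)
    combined = w_listening * listening_metric + w_activity * activity_metric
    return 100.0 * combined / (w_listening + w_activity)

# Example: frequent speech environments but modest activity in them.
print(sociability_index(listening_metric=0.8, activity_metric=0.4))
```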
  • systems and methods that automate a process for determining the sociability index and outputting a control signal based on the determined sociability index.
  • the control signal can cause display of the sociability index, such as via a mobile device that informs the user of the sociability index.
  • Providing the user with awareness of the sociability index can urge the user to monitor, adjust, and/or maintain certain behaviors to manage (e.g., increase) their sociability index.
  • the sociability index can be compared to a target or threshold level (e.g., a numerical value), and the control signal can be output based on the comparison, such as in response to a difference between the sociability index and the target level exceeding a threshold value.
  • a first control signal is output based on the sociability index being less than the target level, such as to provide a recommendation or other information that helps the user increase the sociability index.
  • a second signal can be output based on the sociability index being greater than the target level, such as to notify the user that their sociability index is at a desirable level.
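  • A minimal sketch of this comparison logic follows; the target level, margin, signal names, and messages are hypothetical placeholders for illustration, not part of the disclosure:

```python
# Hypothetical sketch: compare the sociability index to a target level and
# emit one of two illustrative control signals, or none when within a margin.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControlSignal:
    kind: str      # illustrative labels, e.g., "recommendation" or "positive_feedback"
    message: str

def evaluate_index(index: float, target: float, margin: float = 5.0) -> Optional[ControlSignal]:
    """Return a control signal when the index differs from the target by more than `margin`."""
    if index < target - margin:
        return ControlSignal("recommendation",
                             "Sociability index below target; consider more conversational settings.")
    if index > target + margin:
        return ControlSignal("positive_feedback",
                             "Sociability index is at a desirable level.")
    return None  # within the margin: no control signal output

# Example usage with a hypothetical target level of 70.
signal = evaluate_index(index=58.0, target=70.0)
print(signal.kind if signal else "no signal")
```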
  • FIG. 2 is a diagram 248 illustrating different metrics that can be used to determine a sociability index 250. These metrics could, for example, be used by the system of FIGS. 1A-1E (or other devices/systems).
  • the diagram 248 organizes the metrics into a first category 252 and a second category 254.
  • the first category 252 includes a listening situation
  • the second category 254 includes an activity in a particular listening situation.
  • the first category 252 indicates the context or setting of the user, such as the surroundings of the user.
  • the first category 252 includes various sub-categories, including a speech environment subcategory 256, a call streaming sub-category 258, and a location type sub-category 260.
  • the speech environment sub-category 256 indicates whether the user is in an environment in which the user is able to speak or converse (e.g., with another person).
  • attributes of input audio signals (represented by electrical input signals) are evaluated/analyzed (e.g., by the environmental classifier 131), and a “class” of the speech environment associated with the input audio signals is determined.
  • Such categorization can include a number of predetermined speech environment classes, such as “speech in quiet,” “speech in noise,” “quiet,” “noise,” and “music,” although other categories are possible.
  • the speech environment class is determined by calculating, in real-time, time-varying features from the input audio signals and analyzing the calculated time-varying features using a type of decision tree (e.g., environmental classification decision tree) or other algorithm.
  • the decision tree includes a number of hierarchical or linked branches/nodes that each perform evaluations, comparisons, or checks using at least one of the time-varying features to determine the sound environment classification at the branch ends (leaves). That is, the decision tree traverses its “branches” until it arrives at a “leaf” and decides “speech in quiet,” “speech in noise,” “quiet,” “noise,” or “music.”
  • the speech environment class of the speech environment sub-category 256 impacts the sociability index 250 based on determination of environments in which speech participation is available. For instance, increased determination of the “speech in quiet” or “speech in noise” sub-category (e.g., reduced determination of “quiet,” “noise,” or “music”) can increase the sociability index 250. In contrast, reduced determination of the “speech in quiet” or “speech in noise” (e.g., increased determination of “quiet,” “noise,” or “music”) can reduce the sociability index 250. Thus, the user is encouraged to participate in more settings in which speech participation is available to increase the sociability index 250.
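  • The sketch below illustrates the general idea of a feature-based classification into the classes named above and a class-dependent contribution to the sociability index 250; the hand-written thresholds, feature names, and weights are assumptions for illustration, not a trained environmental classification decision tree:

```python
# Illustrative sketch only: a hand-written decision rule over assumed
# time-varying features (speech probability, noise level, music probability)
# that yields one of the speech environment classes discussed above.

def classify_speech_environment(speech_prob: float, noise_db: float,
                                music_prob: float) -> str:
    if music_prob > 0.6:
        return "music"
    if speech_prob > 0.5:
        return "speech in noise" if noise_db > 55.0 else "speech in quiet"
    return "noise" if noise_db > 55.0 else "quiet"

# Hypothetical contribution of each class to the sociability index: classes
# in which speech participation is available contribute more.
CLASS_WEIGHT = {
    "speech in quiet": 1.0,
    "speech in noise": 0.8,
    "music": 0.2,
    "quiet": 0.1,
    "noise": 0.1,
}

print(CLASS_WEIGHT[classify_speech_environment(0.7, 62.0, 0.1)])  # speech in noise
```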
  • the call streaming sub-category 258 indicates usage of electronic devices to converse with others.
  • the call streaming sub-category 258 encompasses phone calls, video calls, voice recording exchanges, and so forth.
  • call streaming can occur independently of the speech environment sub-category 256.
  • call streaming can be performed with another person or with an automated system (e.g., an automated menu).
  • Data related to call streaming can be determined based on operation of an electronic device, such as a mobile phone, a laptop computer, a desktop computer, or a tablet, to effectuate the call streaming.
  • operation of the electronic device to communicatively couple to another electronic device can be determined to indicate call streaming.
  • streamed calls can be distinguished from other audio streaming (e.g., music streaming, video streaming, etc.) based on firmware classification by the electronic device, usage of OVD to determine user vocalizations, and the like.
  • a quantity of call streaming occurrences can be determined for the call streaming sub-category 258. Additionally or alternatively, a duration of time associated with streamed calls can be determined for the call streaming sub-category 258. Increased call streaming increases the sociability index 250 to encourage usage of an electronic device for conversational engagement.
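  • A simple sketch of how a call streaming metric could be derived from logged call counts and durations follows; the cap values and equal weighting are illustrative assumptions:

```python
# Hypothetical sketch: derive a call-streaming metric from a log of streamed
# calls, combining the number of calls and their total duration.

def call_streaming_metric(call_durations_min: list[float],
                          max_calls: int = 10,
                          max_minutes: float = 120.0) -> float:
    """Return a value in [0, 1] that grows with call count and total duration."""
    count_score = min(len(call_durations_min), max_calls) / max_calls
    duration_score = min(sum(call_durations_min), max_minutes) / max_minutes
    return 0.5 * count_score + 0.5 * duration_score

# Example: three streamed calls of 5, 20, and 12 minutes.
print(call_streaming_metric([5.0, 20.0, 12.0]))
```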
  • the location type sub-category 260 indicates various types of environments or settings of the user.
  • the location type can include broader classifications, such as a building or structure type (e.g., a residence, an office, a restaurant, a public venue, etc.) or a geographical location (e.g., a town, a state, etc.).
  • the location type can also include more specific classifications, such as a particular event (e.g., a town meeting, a family gathering, a celebration, etc.) or an approximate number of other people in the location.
  • the location type can be determined based on location data, such as geographical coordinates of the user.
  • the location data can be determined based on movement data (e.g., provided by the IMU 180).
  • the location type can be determined based on information that is stored on or retrieved from other sources. For instance, a particular event in which the user is participating can be determined based on a stored calendar or email invite, which corresponds to the determined user location and/or a determined time/date.
  • the location type can therefore be determined based on various data, including sensor data and stored data.
  • a diverse array of location types can increase the sociability index 250 to encourage the user to attend potentially novel or unfamiliar settings, such as settings having different conversational environments. For example, a greater quantity of attended location types or an increased frequency of determined new locations types can increase the sociability index 250, whereas a fewer quantity of attended location types or an increased frequency of the same determined location type can reduce the sociability index 250.
  • the location of the user is compared to previous locations of the user, and a difference between the location and the previous locations increases the sociability index 250.
  • similar to call streaming, an increased duration of time spent at a location can also increase the sociability index 250 (e.g., rewarding the user for not leaving a party early).
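  • One possible way to quantify location diversity and novelty is sketched below; the scoring scheme is an assumption for illustration, not the disclosed method:

```python
# Hypothetical sketch: score location diversity by comparing recently visited
# location types against previously seen ones, so novel settings raise the metric.

def location_diversity_metric(recent_types: list[str],
                              known_types: set[str]) -> float:
    """Return a value in [0, 1]; varied and novel location types score higher."""
    if not recent_types:
        return 0.0
    distinct = set(recent_types)
    novel = distinct - known_types
    diversity = len(distinct) / len(recent_types)   # variety among recent visits
    novelty = len(novel) / max(len(distinct), 1)    # share of new location types
    return 0.5 * diversity + 0.5 * novelty

# Example: two new venue types among three recent visits.
print(location_diversity_metric(["restaurant", "office", "town meeting"],
                                known_types={"office", "residence"}))
```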
  • the second category 254 indicates user activity and includes a conversational engagement sub-category 262, a conversational (e.g., listening, speaking) effort sub-category 264, and a social network sub-category 266.
  • the conversational engagement sub-category 262 indicates the user’s participation level in a conversation and can include a quantitative and qualitative indication of the user’s participation.
  • the conversational engagement sub-category 262 can include a duration of time during which the user speaks and/or a quantity of words or sentences spoken by the user (e.g., relative to that spoken by other people).
  • the conversational engagement sub-category 262 can indicate whether the user’s conversational content is contributory or substantive based on a quantity or proportion of certain spoken words, such as filler words (e.g., “um,” “uh,” etc.), interrogative words (e.g., “what,” “when,” etc.), pro-sentences (e.g., yes, no, etc.), and the like. Increased contributory conversational content increases the sociability index 250.
  • OVD is used (e.g., by the own voice detector 135) to evaluate the conversational engagement sub-category 262.
  • input audio signals are classified as being “own voice” (i.e., the hearing device recipient is speaking within the set of input audio signals) or as “external voice” (i.e., someone other than the hearing device recipient is speaking within the set of input audio signals).
  • in certain embodiments, time-varying features (e.g., volume level, proximity level, amplitude modulations, modulation depth, spectral profile, harmonicity, amplitude onsets, etc.) are calculated from the input audio signals and analyzed to perform OVD.
  • the decision tree includes a number of hierarchical or linked branches/nodes that each perform evaluations, comparisons, or checks using at least one of the time-varying features to determine whether the set of input audio signals is “own voice” or “external voice.”
  • OVD can be conditionally used in response to a determination that the speech environment includes “speech in quiet” or “speech in noise” class (sometimes collectively referred to herein as speech classes or categories).
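  • A minimal sketch of how per-frame OVD labels could be turned into a conversational engagement metric (the user's share of speaking time) follows; the frame-label representation is an assumption for illustration:

```python
# Illustrative sketch: given per-frame OVD labels ("own voice" vs. "external
# voice"), estimate conversational engagement as the user's share of speaking
# time within speech-classified frames.

def engagement_metric(frame_labels: list[str]) -> float:
    """Return the fraction of speech frames labeled as the user's own voice."""
    own = sum(1 for label in frame_labels if label == "own voice")
    external = sum(1 for label in frame_labels if label == "external voice")
    total = own + external
    return own / total if total else 0.0

# Example: the user speaks in 4 of 10 speech frames.
print(engagement_metric(["own voice"] * 4 + ["external voice"] * 6))  # 0.4
```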
  • the conversational effort sub-category 264 indicates the user’s perceived attempt or intent to process and/or provide speech-related audio signals.
  • the conversational effort subcategory 264 can include additional sub-categories, such as a biomarker sub-category 268 and a user input sub-category 270.
  • the conversational effort sub-category 264 can be determined via sensor data and/or user provided feedback. Indeed, due to the difficulty of determining a user’s intent or attempt for conversing, different types of data can be collectively used to determine the conversational effort.
  • the conversational effort sub-category 264 may be determined separately and independently from the conversational engagement sub-category 262.
  • data related to the conversational engagement sub-category 262 of a user does not impact determination of the conversational effort sub-category 264.
  • the conversational effort sub-category enables determination of user participation with respect to conversations in a manner that is not indicated by the conversational engagement sub-category 262 alone.
  • a user may be in a listening situation in which the user is not able to actively speak (e.g., the user is attending a public presentation).
  • in such a case, data related to the conversational engagement sub-category 262 may initially appear to indicate a reduced sociability index 250.
  • the user may be actively listening (or attempting to listen) to others who are speaking, and data related to the conversational effort sub-category 264 indicates increased conversational effort. Therefore, the user’s determined sociability index 250 will increase based on the conversational effort sub-category 264, even though the user is not actively speaking.
  • the biomarker sub-category 268 is determined based on various biometric data related to the user and indicative of active effort made by the user to receive, process, and/or provide speech.
  • for example, various physiological measures (e.g., changes, values) observed while speech (e.g., non-OVD speech) is detected can indicate the user’s internal biological state (e.g., emotional status, sentiment).
  • Such internal biological state can include, for example, a stress response (e.g., indicative of increased conversational effort), a relaxed response (e.g., indicative of maintained conversational effort), or no determined response (e.g., indicative of reduced conversational effort).
  • the biometric data can include respiration rate (e.g., increased respiration rate indicates increased conversational effort), heart rate variability (e.g., decreased heart rate variability indicates increased conversational effort), blood pressure (e.g., increased blood pressure indicates increased conversational effort), body temperature (e.g., increased body temperature indicates increased conversational effort), and/or alpha oscillatory power (e.g., decreased alpha oscillatory power indicates increased conversational effort).
  • the biomarker sub-category 268 can also include attributes of the own voice of the user, such as the fundamental frequency, amplitude, and/or speaking rate (e.g., increased fundamental frequency, amplitude, and/or speaking rate indicates increased conversational effort).
  • the biometric data can be compared to baseline values, which may be determined over time (e.g., in situations in which the user does not appear to be in conversational engagement, such as in “quiet” speech environment classifications). Changes of the biometric data when non-OVD speech is detected (e.g., and when other parameters, such as a lack of substantial physical movement of the user indicated by the IMU 180, do not appear to cause biometric data changes) can indicate the conversational effort. For instance, a data model or algorithm can be used to assess the conversational effort based on various biometric data (e.g., changes in biometric data values).
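  • A rough sketch of such a baseline comparison follows; the baseline values, effort directions, and scaling are illustrative assumptions, not values from this disclosure:

```python
# Hypothetical sketch: compare current biometric values to stored baselines and
# aggregate the signed relative deviations into a rough effort score.

BASELINE = {"respiration_rate": 14.0, "heart_rate_variability": 60.0,
            "blood_pressure": 120.0, "alpha_power": 1.0}

# +1: an increase suggests more effort; -1: a decrease suggests more effort.
DIRECTION = {"respiration_rate": +1, "heart_rate_variability": -1,
             "blood_pressure": +1, "alpha_power": -1}

def conversational_effort(current: dict[str, float]) -> float:
    """Return a rough effort score; larger values suggest greater effort."""
    score = 0.0
    for name, value in current.items():
        baseline = BASELINE[name]
        relative_change = (value - baseline) / baseline
        score += DIRECTION[name] * relative_change
    return score / len(current)

# Example: elevated respiration and reduced heart rate variability.
print(conversational_effort({"respiration_rate": 18.0, "heart_rate_variability": 50.0}))
```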
  • the user input sub-category 270 includes a subjective rating by the user regarding the conversational effort. For example, after a detected change in the social situation (e.g., a change in the speech environment, a change in the location type), the user is prompted to provide a user input regarding the conversational effort.
  • the user input can be provided via interaction with a user interface, such as a button, a lever, or a dial, via a gesture, such as a finger tap, a quantity of fingers held up, or head movement, and/or via audio input (e.g., a voice command).
  • the prompt for user input can include a free text field, a questionnaire (e.g., a Likert scale), a visual analog scale, and so forth.
  • another person such as a significant other, a companion, or a co-worker of the user, is prompted to provide the rating of the user’s conversational effort.
  • such a rating can provide a more unbiased evaluation that accurately reflects the user’s conversational effort.
  • the data model or algorithm used to assess the conversational effort for the biomarker sub-category 268 can be updated based on the user input. That is, the conversational effort indicated by the user input is determined, and the data model or algorithm is updated to increase the association of the corresponding biomarker data with the indicated user input. For example, during a social setting, the biometric data of the user indicates an increase in heart rate of 10 beats per minute and an increase in respiration rate of 5 breaths per minute. Additionally, the user input provided after the social setting indicates substantial conversational effort. Therefore, the data model or algorithm is updated to more strongly associate an increase in heart rate of 10 beats per minute and an increase in respiration rate of 5 breaths per minute with substantial conversational effort. In this manner, the data model or algorithm can be adjusted based on the user input to provide a more accurate determination of conversational effort based on the biometric data.
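  • One possible form of such an update, sketched under the assumption of a simple linear effort model with a small learning rate, is shown below; none of the names or values are from the disclosure:

```python
# Hypothetical sketch: nudge the per-feature weights of a linear effort model
# toward the user's subjective rating, so the biomarker-based estimate better
# matches reported effort over time.

def update_effort_model(weights: dict[str, float],
                        biometric_changes: dict[str, float],
                        predicted_effort: float,
                        reported_effort: float,
                        learning_rate: float = 0.05) -> dict[str, float]:
    """Return weights adjusted toward the user-reported effort rating."""
    error = reported_effort - predicted_effort
    return {name: w + learning_rate * error * biometric_changes.get(name, 0.0)
            for name, w in weights.items()}

# Example: the user reported higher effort than predicted, so features that
# increased (e.g., +10 bpm heart rate, +5 breaths/min respiration) gain weight.
weights = update_effort_model({"heart_rate": 0.3, "respiration_rate": 0.3},
                              {"heart_rate": 10.0, "respiration_rate": 5.0},
                              predicted_effort=0.4, reported_effort=0.8)
print(weights)
```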
  • Certain other data can further be used to determine the conversational effort.
  • manual adjustments to a device such as increasing a volume of audio output resultant from operation of a hearing device, increasing a sensitivity of a microphone to receive audio signals, and/or activation of a directional microphone feature, can indicate increased conversational effort.
  • manually changed settings of the device can be used to determine the conversational effort.
  • increased conversational effort can increase the sociability index 250, but this is not the case for all situations. For example, extremely low conversational effort may reflect disengagement from conversation, while very high conversational effort may reflect struggling in the social situation.
  • the relationship between conversational effort and the sociability index can, in certain examples, follow a type of “stress response curve.”
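  • A minimal sketch of an inverted-U mapping of this kind follows; the Gaussian shape, optimum, and width are assumptions for illustration:

```python
# Illustrative sketch of a "stress response curve": very low effort
# (disengagement) and very high effort (struggling) both contribute less to
# the sociability index than a moderate level of effort.
import math

def effort_contribution(effort: float, optimum: float = 0.5,
                        width: float = 0.25) -> float:
    """Return a contribution in (0, 1] that peaks at a moderate effort level."""
    return math.exp(-((effort - optimum) ** 2) / (2.0 * width ** 2))

# Examples: disengaged, moderate, and struggling effort levels.
print(effort_contribution(0.05), effort_contribution(0.5), effort_contribution(0.95))
```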
  • the social network sub-category 266 indicates various people with whom the user converses. For example, the social network sub-category 266 can indicate whether the user is or has recently been in conversational engagement with new people and therefore has increased their social network. A frequency in which the user speaks with certain people and/or a quantity of people with whom the user speaks can be determined for the social network subcategory 266.
  • different people can be determined via acoustic classification based on their voice attributes (e.g., pitch, frequency, intensity, etc.) and stored in association with their voice attributes. Attributes of subsequent input audio signals are analyzed and compared to the stored voice attributes to determine whether the input audio signals are provided by a stored person. Based on a match between the attributes of the input audio signals and the stored voice attributes, a determination is made that the user is speaking with a familiar or previously identified person. However, based on there being a difference between the attributes of the input audio signals and the stored voice attributes, a determination is made that the user is in conversational engagement with a person that is different than previous people in conversational engagement with the user.
  • the social network of the user can be determined based on a communicative connection of an electronic device of the user to an electronic device of another person. For instance, expansion of the user’s social network can be determined in response to determining that the mobile device of the user is communicatively coupled to (e.g., to enable call streaming) another electronic device having an unknown identity (e.g., an electronic device that has not previously been communicatively coupled to the mobile device of the user).
  • novel connections between the electronic device of the user and another electronic device can indicate an expanded social network. In either case, the expanded social network increases the sociability index 250.
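  • As a sketch of the connection-novelty check (the set of known peer identifiers and the function name are illustrative assumptions), an unseen device identifier can be flagged as a possible expansion of the social network:

```python
# Sketch (assumed): track which peer devices have connected to the user's
# device before; a never-seen identifier suggests an expanded social network.
known_peer_ids = {"AA:11:22:33:44:55", "BB:66:77:88:99:00"}


def register_connection(peer_id):
    """Return True if this peer device is new (possible network expansion)."""
    is_new = peer_id not in known_peer_ids
    known_peer_ids.add(peer_id)
    return is_new


print(register_connection("CC:12:34:56:78:9A"))  # True -> expanded social network
print(register_connection("AA:11:22:33:44:55"))  # False -> known connection
```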
  • the sociability index 250 can be determined based on any suitable combination of the aforementioned metrics.
  • the sociability index 250 is quantitatively determined based on the metrics. For example, a numerical value (e.g., between 0 and 100) is determined for the sociability index 250.
  • the sociability index 250 is qualitatively determined based on the metrics, such as to indicate a category (e.g., “high,” “low,” etc.) of the sociability index 250.
  • the different metrics can have different weight or impact to affect determination of the sociability index 250. In particular, a change in a metric having a relatively greater weight causes a greater change in the sociability index 250, whereas a change in a metric having a relatively lesser weight causes a lesser change in the sociability index 250.
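  • A minimal sketch of such a weighted combination is shown below; the metric names, weights, and category boundaries are illustrative assumptions rather than values from the disclosure.

```python
# Sketch (assumed weights and bucket boundaries): combining normalized
# metrics (each 0..1) into a 0-100 sociability index plus a coarse category.
def sociability_index(metrics, weights):
    total_weight = sum(weights.values())
    score = sum(weights[name] * metrics[name] for name in weights) / total_weight
    value = 100.0 * score
    category = "high" if value >= 66 else "moderate" if value >= 33 else "low"
    return value, category


metrics = {"speech_environment": 0.7, "call_streaming": 0.2,
           "conversational_engagement": 0.8, "social_network": 0.4}
weights = {"speech_environment": 2.0, "call_streaming": 1.0,
           "conversational_engagement": 3.0, "social_network": 1.5}
print(sociability_index(metrics, weights))
```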
  • separate metrics for the first category 252 and for the second category 254 are independently determined, and the sociability index 250 is determined based on the separate metrics.
  • a metric associated with the first category 252 can include, for example, a speech environment metric, a call streaming metric, or a location type metric.
  • a metric associated with the second category 254 can include, for example, a conversational engagement metric, a conversational effort metric, or a social network metric.
  • the metric associated with the first category 252 and the metric associated with the second category 254 indicate different aspects of socialization for the user, and the isolated determination of the metrics can help determine a more accurate sociability index 250 by removing possible confounding or conflating variables that affect the different categories and/or sub-categories.
  • certain metrics of one of the first category 252 or the second category 254 can be determined in relation with another metric of the other of the first category 252 or the second category 254.
  • the conversational effort metric specifically related to call streaming can be monitored, or the conversational engagement metric specifically related to speech environments classified as “speech in noise” can be monitored.
  • more granular analysis of the user behavior can be performed to determine the sociability index 250.
  • a control signal can be output based on the sociability index 250.
  • the control signal can be used to notify the user of the sociability index 250, such as to provide a visual output, an audio output, or tactile feedback associated with the sociability index 250.
  • Such a notification can prompt the user to actively manage their sociability index 250, such as to adjust or maintain certain behaviors to achieve a target level of the sociability index 250. Further actions related to the sociability index 250 can also be performed.
  • Such actions can include sending a notification to another person (e.g., a caretaker, a significant other, a friend, a healthcare provider, etc.) to prompt the other person to assist the user, providing a recommended action to the user (e.g., behavioral changes to improve their sociability index 250), and/or adjusting a device setting (e.g., automatically enabling directional microphone capabilities to improve receipt of certain input audio signals).
  • the control signal that is output based on the sociability index 250 can help the user with respect to different aspects of social function.
  • FIG. 3 is a flowchart illustrating a method 350 for adjusting weights of metrics used to determine a sociability index.
  • the weights can be automatically adjusted based on a user input.
  • a list of different potential user social goals is provided, such as via an electronic device of the user.
  • the list can include various predetermined options, such as to meet new people, to connect with family, or to increase conversational contributions.
  • the predetermined options can be associated with preset weights assigned to the different metrics.
  • multiple ones of the user social goals and/or the priority of the user social goals with respect to one another can be selected to establish the metric weights.
  • the user social goals can be selected by the user or by another person (e.g., a professional, such as an audiologist).
  • Metric weights are then adjusted accordingly. For example, at block 356, metrics regarded as relatively more important for achieving the user social goals are included and/or assigned with more weight for determination of the sociability index.
  • metrics regarded as relatively less important for achieving the user social goals are excluded and/or assigned with less weight for determination of the sociability index.
  • for example, the social network metric of the second category 254 is included and assigned with more weight, whereas the call streaming metric of the first category 252 is excluded or assigned with less weight.
  • the call streaming metric of the first category 252 is included and assigned with more weight, whereas the social network metric of the second category 254 is excluded or assigned with less weight.
  • the conversational engagement metric of the second category 254 is included and assigned with more weight, whereas the social network metric of the second category 254 is excluded or assigned with less weight.
  • weights for metrics of different categories and/or within the same category can be adjusted. The weight adjustment and assignment enables the determined sociability index to correspond to the user social goal more accurately and enables the user to manage their behavior in a way that is more in line with the user social goal.
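  • One possible way to realize the goal-to-weight assignment (the goal names and preset weights below are illustrative assumptions) is a table of presets that is combined when multiple goals are selected:

```python
# Sketch (assumed goal names and preset weights): selecting a social goal
# swaps in a preset weighting of the metrics; a weight of 0 effectively
# excludes that metric from the sociability index.
GOAL_PRESETS = {
    "meet_new_people": {"social_network": 3.0, "conversational_engagement": 1.5,
                        "call_streaming": 0.0},
    "connect_with_family": {"social_network": 0.5, "conversational_engagement": 1.0,
                            "call_streaming": 3.0},
    "increase_conversational_contributions": {"social_network": 0.5,
                                              "conversational_engagement": 3.0,
                                              "call_streaming": 1.0},
}


def weights_for_goals(selected_goals):
    """Average the presets of the selected goals into one weight set."""
    combined = {}
    for goal in selected_goals:
        for metric, weight in GOAL_PRESETS[goal].items():
            combined[metric] = combined.get(metric, 0.0) + weight / len(selected_goals)
    return combined


print(weights_for_goals(["meet_new_people"]))
print(weights_for_goals(["meet_new_people", "connect_with_family"]))
```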
  • the weights of the metrics can additionally or alternatively be adjusted or assigned based on other factors. Such factors can include a personality type of the user (e.g., the social network metric has more weight for a more extraverted user), a residence of the user (e.g., the location type metric has less weight for a user located in a more sparsely populated city), and/or device availability of the user (e.g., the call streaming metric has less weight for a user that does not own many electronic devices with call streaming capabilities). Furthermore, the weights of the metrics can be manually adjustable by the user.
  • the user can directly set the metric weights without having to provide an input that would otherwise automatically assign the weights of the metrics (e.g., to preset amounts).
  • the goals may be adjusted by a care provider, such as an audiologist, mental health professional or occupational therapist, etc.
  • FIG. 4 is a schematic diagram illustrating a mobile device 400 (e.g., a phone, a tablet, etc.) configured to display information regarding a sociability index.
  • a control signal is output based on the determined sociability index and/or the metrics used to determine the sociability index, and the control signal instructs the mobile device 400 to display the information.
  • the mobile device 400 includes a first display portion 402 and a second display portion 404.
  • the first display portion 402 indicates determined sociability indexes (e.g., sociability index values) at different times.
  • the sociability index is determined at specific times (e.g., a time of day, a day of the week) or at a particular frequency.
  • the sociability index is determined with respect to certain occurrences, such as a determined social function.
  • previously determined sociability indexes are stored and displayed at the first display portion 402.
  • the first display portion 402 provides historical information regarding the sociability index, such as a trend of the sociability index over time. Such information can enable the user, for example, to monitor changes in social patterns and behaviors over time, such as whether their recent social patterns are affecting their sociability index in a desirable manner.
  • such information can also be used to determine a sociability index associated with the user prior to device usage (e.g., pre-implantation of a hearing device).
  • for instance, previously determined metrics, such as biomarkers and/or location data, can be used to determine a sociability index (e.g., a baseline sociability index) for that earlier period.
  • the user can compare a sociability index determined during device usage (e.g., post-implantation of a hearing device) with a sociability index determined prior to device usage to understand how the transition to device usage affected their social patterns.
  • the second display portion 404 indicates different metrics, such as any of the aforementioned metrics of the diagram 248.
  • the second display portion 404 can inform the user of the specific metrics, such as the listening situation metrics or the activity metrics, that contribute to the determined sociability index displayed at the first display portion 402.
  • the second display portion 404 can provide more granular information so that the user is notified of a particular metric and can determine a more specific behavior related to the particular metric to change (e.g., increase) the sociability index.
  • although the illustrated first display portion 402 and second display portion 404 include graphical representations of the sociability index and of the metrics, the sociability index and/or the metrics can be shown using a different visual representation.
  • the mobile device 400 can display text, a table, and/or a chart.
  • the mobile device 400 can display each of the sociability index and the metrics within a common display portion. In other words, a single display portion of the mobile device 400 can indicate the sociability index and the metrics.
  • the sociability index is displayed as a bar on a bar graph, and the bar is divided into segments associated with the individual metrics contributing to the sociability index.
  • the respective sizes of the segments correspond to the amount in which the metrics affect the sociability index (e.g., a metric, such as a metric with relatively higher weight, providing a greater contribution to the sociability index is shown as a segment with a relatively longer length).
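  • A sketch of such a segmented bar is shown below using matplotlib; the metric names and contribution values are illustrative assumptions.

```python
# Sketch (illustrative contributions): a single stacked bar whose segment
# lengths are the weighted contributions of each metric to the index.
import matplotlib.pyplot as plt

contributions = {"speech environment": 18, "conversational engagement": 32,
                 "social network": 11, "call streaming": 6}

bottom = 0
fig, ax = plt.subplots()
for name, value in contributions.items():
    # Each metric becomes one segment, stacked on top of the previous ones.
    ax.bar(["sociability index"], [value], bottom=bottom, label=name)
    bottom += value
ax.set_ylabel("index value (0-100)")
ax.legend()
plt.show()
```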
  • FIG. 5 is a schematic diagram illustrating a network 450 of systems that communicate information related to the sociability index.
  • the user devices 452 include the cochlear implant system 102 and the mobile device 400 communicatively coupled to the cochlear implant system 102.
  • the cochlear implant system 102 and the mobile device 400 can cooperatively determine various metrics and the sociability index based on the metrics.
  • the cochlear implant system 102 and the mobile device 400 are communicatively coupled to a router 456, which is communicatively coupled to a network platform 458 (e.g., a cloud network).
  • the network platform 458 can receive information and forward the information to certain recipients, such as to a professional device 462 (e.g., a mobile device, a desktop computer, etc.) used by a professional user 464 (e.g., a caretaker, a healthcare provider, etc.).
  • the cochlear implant system 102 and/or the mobile device 400 can output information to the router 456, which forwards the information to the network platform 458, and the network platform 458 transmits the information to the professional device 462.
  • the mobile device 400 is communicatively coupled to a terrestrial station 466 (e.g., a cell phone tower), which is communicatively coupled to the network platform 458.
  • the mobile device 400 can transmit information to the network platform 458 via either the router 456 or the terrestrial station 466.
  • the user devices 452 can transmit information to the professional device 462 to inform the professional user 464 of the sociability index of the user.
  • the professional user 464 is also able to monitor the social patterns of the user. For instance, the professional user 464 can see whether the sociability index is at a desirable level without having to directly interact with the user. Therefore, the professional user 464 can more readily perform an action in response.
  • the professional user 464 can also transmit information to the user devices 452 via the network 450.
  • the professional user 464 can utilize the professional device 462 to transmit information to the network platform 458, which then transmits the information to the router 456 and/or to the terrestrial station 466 for forwarding to the mobile device 400.
  • the professional user 464 can manually provide a recommendation or a notification to the user to improve the sociability index. Such communication can be especially helpful to reinforce certain behaviors that may have been previously reviewed in a clinical setting but since forgotten or neglected by the user.
  • the professional user 464 can receive information from multiple different user devices 452 that are associated with different users. Thus, the professional user 464 can monitor the social patterns of different users and provide different recommendations to the different users.
  • the professional user 464 can continue to monitor the sociability of a user after a recommendation (e.g., a treatment program) has been provided to the user. The professional user 464 can then determine an effectiveness of the recommendation and modify subsequent recommendations (e.g., to modify a recommended program, to avoid recommending a program) being provided.
  • the network platform 458 can also perform operations to determine the metrics and/or the sociability index. For example, the network platform 458 can receive raw data from the mobile device 400 and/or from the cochlear implant system 102, determine the metrics based on the received raw data, and determine the sociability index based on the metrics.
  • the network platform 458 can then transmit information related to the determined metrics and/or the determined sociability index, such as to the mobile device 400 and/or to the professional device 462.
  • the network platform 458 includes a memory (e.g., storing instructions) and one or more processors to perform operations related to the metrics and/or the sociability index (e.g., by executing the instructions stored on the memory).
  • although the network 450 illustrates various devices and systems used to communicate information between the user devices 452 and the professional system 454, it should be noted that different ways of communication can be used in additional or alternative embodiments.
  • the mobile device 400 can be communicatively coupled to the network platform 458 via a different system, such as a non-terrestrial station (e.g., a satellite), and/or the mobile device 400 can be directly communicatively coupled to the professional device 462 (e.g., via a wired connection).
  • FIGS. 6A, 6B, and 7, described below, illustrate example methods related to sociability index operations.
  • the external computing device 110 of FIGS. 1A-1E can perform each of the methods.
  • additional or alternative devices can perform the methods. Indeed, the respective methods can be performed by the same or different device. Additionally, the methods can be performed in any relation with respect to one another, such as in parallel (e.g., simultaneously) or in response to one another.
  • FIG. 6A is a flowchart illustrating a method 500 for determining the sociability index.
  • a first metric of a listening situation is determined.
  • the first metric can include a speech environment metric, a call streaming metric, and/or a location type metric.
  • a second metric of a user activity in the listening situation is determined.
  • the second metric can include a conversational engagement metric, a conversational effort metric, and/or a social network metric.
  • a sociability index is determined based on the first metric and the second metric.
  • a signal (e.g., a control signal) is output based on the determined sociability index.
  • the signal causes a mobile device of the user to provide an indication or notification of the sociability index, such as a visual output (e.g., a display), an audio output (e.g., an alarm), or tactile feedback (e.g., a vibration).
  • the user can be prompted to improve and/or maintain their sociability index, such as to set and achieve a target sociability index.
  • the signal can cause an electronic device of another user, such as a professional, to provide an indication or notification of the sociability index to the other user.
  • the other user can then monitor progress of the user and perform a corresponding action, such as to incorporate a potential program for the user to improve the sociability index.
  • the signal can adjust certain device settings.
  • the signal can change signal processing related operations of the device and/or enable microphone directionality to facilitate conversational effort and increase the sociability index.
  • the signal can be output to provide a recommendation to the user.
  • the recommendation can include a suggested action that the user can perform, such as to turn down background noise (e.g., music), to improve their sociability index in certain situations.
  • the recommendation can also include suggested device setting adjustments for the user to manually change the device settings to improve their sociability index.
  • the signal can help the user achieve a desirable sociability index in other ways.
  • the signal can provide the user with an incentive to maintain or achieve a target sociability index, such as by crediting the user with financial benefits (e.g., discounts, credits, gift cards, etc.) for certain items or stores. This gamification of the sociability index can further encourage the user to actively manage their sociability index.
  • the signal is output based on the sociability index as compared to a target or threshold level, such as in response to the sociability index being less than the target level (e.g., the sociability index is to be improved) or greater than the target level (e.g., the sociability index is to be maintained).
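  • The sketch below strings these steps of method 500 together; every helper name and numeric value is an illustrative assumption rather than a function or threshold defined by the disclosure.

```python
# Sketch of method 500 (assumed helpers and values): determine a listening
# metric, determine an activity metric, combine them into an index, and
# output a signal based on a comparison with a target level.
def determine_listening_metric():
    return 0.6  # e.g., derived from speech environment, call streaming, location type


def determine_activity_metric():
    return 0.4  # e.g., derived from engagement, effort, social network


def run_method_500(target_index=50.0):
    first_metric = determine_listening_metric()
    second_metric = determine_activity_metric()
    index = 100.0 * (0.5 * first_metric + 0.5 * second_metric)
    # Output a signal; here a recommendation is attached only when the index
    # falls below the target level.
    if index < target_index:
        signal = {"notify_user": True, "recommendation": "reduce background noise"}
    else:
        signal = {"notify_user": True, "recommendation": None}
    return index, signal


print(run_method_500())
```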
  • FIG. 6B is a flowchart illustrating a method 550 for determining the sociability index.
  • a first goal of the user is determined, such as based on a user input (e.g., by the user or by a professional).
  • the sociability index is determined based on a first process (e.g., an algorithm, an equation, a data model, etc.) corresponding to the first goal.
  • the sociability index is determined based on various metrics, and each metric can be weighted differently.
  • the first process includes a first set of weights assigned to the metrics in which metrics that are more important are assigned with relatively higher weights. As such, the sociability index determined via the first process more closely corresponds to the first goal.
  • an adjustment of the first goal to a second goal is determined.
  • the sociability index is determined based on a second process corresponding to the second goal. For instance, a different metric can be more important for achievement of the second goal as compared to achievement of the first goal. Therefore, the second process can include a second set of weights, different from the first set of weights, assigned to the metrics such that the sociability index determined via the second process more closely corresponds to the second goal.
  • determination of the sociability index can further be adjusted based on other factors, such as a personality of the user.
  • the sociability index can be determined in different manners for extraverted and introverted users. Indeed, because different metrics can be of a different importance for users of different personalities, determination of the sociability index can be adjusted to more closely represent a particular personality, such as by changing the metric weights such that metrics more important for a personality are assigned with greater weights. In this manner, the sociability index can more closely reflect the user and enable the user to manage the sociability index more effectively.
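  • A minimal sketch of goal-dependent (and personality-adjusted) index computation is shown below; the weight sets and the extraversion adjustment are illustrative assumptions.

```python
# Sketch (assumed weights): switching the weighting "process" when the user's
# goal changes, optionally nudged by a personality factor.
FIRST_GOAL_WEIGHTS = {"social_network": 3.0, "conversational_engagement": 1.0}
SECOND_GOAL_WEIGHTS = {"social_network": 1.0, "conversational_engagement": 3.0}


def index_for_goal(metrics, goal_weights, extraversion=0.5):
    # Personality adjustment (assumed): more extraverted users weight the
    # social network metric somewhat more heavily.
    weights = dict(goal_weights)
    weights["social_network"] *= (0.5 + extraversion)
    total = sum(weights.values())
    return 100.0 * sum(weights[m] * metrics[m] for m in weights) / total


metrics = {"social_network": 0.3, "conversational_engagement": 0.8}
print(index_for_goal(metrics, FIRST_GOAL_WEIGHTS))   # first process (first goal)
print(index_for_goal(metrics, SECOND_GOAL_WEIGHTS))  # second process (after goal change)
```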
  • FIG. 7 is a flowchart illustrating a method 600 for providing a recommendation based on a sociability index.
  • a sociability index is determined based on various metrics.
  • a determination is made that a recommendation is to be provided based on the sociability index. For example, the recommendation is to be provided in response to the sociability index being below a target level, so the recommendation can encourage the user to improve their sociability index.
  • previously used recommendations are determined.
  • recommendations that were previously provided to the user are stored and retrieved.
  • recommendations that were previously provided to other users are stored and retrieved.
  • information related to multiple users is accessed (e.g., from a cloud server), and a recommendation is identified based on such information.
  • a signal is output to cause an electronic device (e.g., a mobile device) of the user to provide a recommendation based on the previously used recommendations, such as by selecting the recommendation from a list of the previously used recommendations.
  • a recommendation can be provided via a visual output, an audio output, tactile feedback, and/or any other suitable notification.
  • the specific recommendation to be provided is based on the metrics used to determine the sociability index.
  • the recommendation can be tailored to the user’s particular social context.
  • the recommendation can suggest actions to improve the conversational engagement metric, such as reducing background music volume, turning to face away from noise sources, activating directional microphone features, moving closer to a speaker, asking a speaker to reduce their speaking rate, and the like.
  • the recommendation can be based on different detected noise sources.
  • the context of the user can include different sources of noise distortion, such as steady-state noise (e.g., surrounding machinery in operation), a quantity of speakers (e.g., competing speakers), reverberation or echo of noise, background music, or impulse noise (e.g., dish clatter), that can reduce the user’s ability to hear (e.g., to reduce the conversational effort metric and/or the conversational engagement metric).
  • the recommendation can specifically include mitigation of such noise sources. Therefore, the recommendation provided to the user can be tailored to a particularly determined context to provide customized actional support that can be effectively implemented to improve the sociability index.
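  • As a sketch of context-tailored recommendations (the noise-source labels and suggested actions below are illustrative assumptions, not an exhaustive list from the disclosure), a detected noise source can be mapped to a mitigation suggestion:

```python
# Sketch (assumed mapping): choose recommendations from the detected noise sources.
NOISE_MITIGATIONS = {
    "background_music": "Ask to lower the music or move away from the speaker system.",
    "competing_speakers": "Move closer to your conversation partner and face them.",
    "steady_state_noise": "Enable the directional microphone feature.",
    "reverberation": "Move the conversation to a smaller or more furnished room.",
    "impulse_noise": "Choose a seat away from the kitchen or serving area.",
}


def recommend_for_context(detected_noise_sources):
    """Return one mitigation suggestion per recognized noise source."""
    return [NOISE_MITIGATIONS[s] for s in detected_noise_sources if s in NOISE_MITIGATIONS]


print(recommend_for_context(["background_music", "competing_speakers"]))
```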
  • feedback indicative of the effectiveness of a provided recommendation is received.
  • the feedback can be provided by the user or by a different person (e.g., a professional, a significant other, etc.) to indicate the effectiveness of the recommendation.
  • the feedback can indicate that the recommendation effectively improved a metric (e.g., facilitated the conversational effort metric) or did not improve the metric.
  • the feedback can then be used for providing subsequent recommendations.
  • based on the feedback indicating that the recommendation was effective, the recommendation can be provided more frequently in the future.
  • based on the feedback indicating that the recommendation was not effective, the recommendation may not be provided or may be provided less frequently in the future.
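  • A minimal sketch of such feedback-driven selection is shown below; the scoring rule and the candidate recommendations are illustrative assumptions.

```python
# Sketch (assumed scoring rule): each piece of feedback nudges how often a
# previously used recommendation is offered again.
recommendation_scores = {
    "reduce background music": 1.0,
    "activate directional microphone": 1.0,
    "move closer to the speaker": 1.0,
}


def record_feedback(recommendation, was_effective):
    """Raise the score for effective recommendations, lower it otherwise."""
    delta = 0.25 if was_effective else -0.25
    recommendation_scores[recommendation] = max(
        0.0, recommendation_scores[recommendation] + delta)


def next_recommendation():
    """Offer the highest-scoring previously used recommendation, if any remain."""
    candidates = {r: s for r, s in recommendation_scores.items() if s > 0.0}
    return max(candidates, key=candidates.get) if candidates else None


record_feedback("reduce background music", was_effective=True)
record_feedback("move closer to the speaker", was_effective=False)
print(next_recommendation())  # -> "reduce background music"
```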
  • a recommendation can also be provided based on a trend of the sociability index.
  • the signal is output to provide a recommendation (e.g., to change a certain behavior) based on the sociability index decreasing faster than a threshold rate.
  • a recommendation can also be provided even when the sociability index is desirable (e.g., above a threshold level).
  • the recommendation can include suggestions to maintain the desirable sociability index and to reinforce certain social patterns.
  • a recommendation can be provided based on metrics regardless of the sociability index. For instance, a signal can be provided to improve the conversational engagement metric (e.g., to activate a directional microphone) even though the sociability index is desirable (e.g., because the conversational effort metric is high). As such, the recommendation being provided can improve social patterns of the user in multiple different contexts.
  • the technology disclosed herein can be applied in any of a variety of circumstances and with a variety of different devices.
  • Example devices that can benefit from technology disclosed herein are described in more detail in FIGS. 8 and 9.
  • the techniques of the present disclosure can be applied to other devices, such as neurostimulators, cardiac pacemakers, cardiac defibrillators, sleep apnea management stimulators, seizure therapy stimulators, tinnitus management stimulators, and vestibular stimulation devices, as well as other medical devices that deliver stimulation to tissue.
  • technology described herein can also be applied to consumer devices. These different systems and devices can benefit from the technology described herein.
  • FIG. 8 illustrates an example vestibular stimulator system 1002, with which embodiments presented herein can be implemented.
  • the vestibular stimulator system 1002 comprises an implantable component (vestibular stimulator) 1012 and an external device/component 1004 (e.g., external processing device, battery charger, remote control, etc.).
  • the external device 1004 comprises a transceiver unit 1060.
  • the external device 1004 is configured to transfer data (and potentially power) to the vestibular stimulator 1012.
  • the vestibular stimulator 1012 comprises an implant body (main module) 1034, a lead region 1036, and a stimulating assembly 1016, all configured to be implanted under the skin/tissue (tissue) 1015 of the recipient.
  • the implant body 1034 generally comprises a hermetically-sealed housing 1038 in which RF interface circuitry, one or more rechargeable batteries, one or more processors, and a stimulator unit are disposed.
  • the implant body 1034 also includes an internal/implantable coil 1014 that is generally external to the housing 1038, but which is connected to the transceiver via a hermetic feedthrough (not shown).
  • the stimulating assembly 1016 comprises a plurality of electrodes 1044(1)-(3) disposed in a carrier member (e.g., a flexible silicone body).
  • the stimulating assembly 1016 comprises three (3) stimulation electrodes, referred to as stimulation electrodes 1044(1), 1044(2), and 1044(3).
  • the stimulation electrodes 1044(1), 1044(2), and 1044(3) function as an electrical interface for delivery of electrical stimulation signals to the recipient’s vestibular system.
  • the stimulating assembly 1016 is configured such that a surgeon can implant the stimulating assembly adjacent the recipient’s otolith organs via, for example, the recipient’s oval window. It is to be appreciated that this specific embodiment with three stimulation electrodes is merely illustrative and that the techniques presented herein may be used with stimulating assemblies having different numbers of stimulation electrodes, stimulating assemblies having different lengths, etc.
  • the vestibular stimulator 1012, the external device 1004, and/or another external device can be configured to implement the techniques presented herein. That is, the vestibular stimulator 1012, possibly in combination with the external device 1004 and/or another external device, can include an evoked biological response analysis system, as described elsewhere herein.
  • FIG. 9 illustrates a retinal prosthesis system 1101 that comprises an external device 1110 configured to communicate with an implantable retinal prosthesis 1100 via signals 1151.
  • the retinal prosthesis 1100 comprises an implanted processing module 1125 and a retinal prosthesis sensor-stimulator 1190 positioned proximate the retina of a recipient.
  • the external device 1110 and the processing module 1125 can communicate via coils 1108, 1114.
  • sensory inputs are absorbed by a microelectronic array of the sensor-stimulator 1190 that is hybridized to a glass piece 1192 including, for example, an embedded array of microwires.
  • the glass can have a curved surface that conforms to the inner radius of the retina.
  • the sensor-stimulator 1190 can include a microelectronic imaging device that can be made of thin silicon containing integrated circuitry that converts the incident photons to an electronic charge.
  • the processing module 1125 includes an image processor 1123 that is in signal communication with the sensor-stimulator 1190 via, for example, a lead 1188 which extends through surgical incision 1189 formed in the eye wall. In other examples, processing module 1125 is in wireless communication with the sensor-stimulator 1190.
  • the image processor 1123 processes the input into the sensor-stimulator 1190 and provides control signals back to the sensor-stimulator 1190 so the device can provide an output to the optic nerve. That said, in an alternate example, the processing is executed by a component proximate to, or integrated with, the sensor-stimulator 1190.
  • the electric charge resulting from the conversion of the incident photons is converted to a proportional amount of electronic current which is input to a nearby retinal cell layer. The cells fire, and a signal is sent to the optic nerve, thus inducing a sight perception.
  • systems and non-transitory computer readable storage media are provided.
  • the systems are configured with hardware configured to execute operations analogous to the methods of the present disclosure.
  • the one or more non-transitory computer readable storage media comprise instructions that, when executed by one or more processors, cause the one or more processors to execute operations analogous to the methods of the present disclosure.
  • where steps of a process are disclosed, those steps are described for purposes of illustrating the present methods and systems and are not intended to limit the disclosure to a particular sequence of steps. For example, the steps can be performed in differing order, two or more steps can be performed concurrently, additional steps can be performed, and disclosed steps can be excluded without departing from the present disclosure. Further, the disclosed processes can be repeated.
  • the term “comprises” and its derivations should not be understood in an excluding sense, that is, these terms should not be interpreted as excluding the possibility that what is described and defined can include further elements, steps, etc.
  • where any description recites “a” or “a first” element or the equivalent thereof, such disclosure should be understood to include incorporation of one or more such elements, neither requiring nor excluding two or more such elements.
  • the term “approximately” and terms of its family should be understood as indicating values very near to those which accompany the aforementioned term.
  • the term “approximately” can denote a tolerance of plus or minus 0.002 inches, 0.001 inches, or up to 0.005 inches. The same applies to the terms “about” and “around” and “substantially.”
  • the phrase “A and/or B” means (A), (B), or (A and B)
  • the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Public Health (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Otolaryngology (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Neurosurgery (AREA)
  • Physiology (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Prostheses (AREA)

Abstract

Techniques are described for monitoring (e.g., detecting, determining, tracking, calculating, etc.) a sociability index of a user (e.g., a recipient) of a "user device," which is a device that is worn by, or implanted in, the user. The sociability index can include, but is not limited to, a listening situation of the user and/or an activity associated with the user in the listening situation.
PCT/IB2024/054020 2023-05-01 2024-04-24 Surveillance de la sociabilité d'un utilisateur Pending WO2024228091A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363499314P 2023-05-01 2023-05-01
US63/499,314 2023-05-01

Publications (1)

Publication Number Publication Date
WO2024228091A1 true WO2024228091A1 (fr) 2024-11-07

Family

ID=93332899

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2024/054020 Pending WO2024228091A1 (fr) 2023-05-01 2024-04-24 Surveillance de la sociabilité d'un utilisateur

Country Status (1)

Country Link
WO (1) WO2024228091A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170123824A1 (en) * 2015-10-28 2017-05-04 Bose Corporation Sensor-enabled feedback on social interactions
JP2018026125A (ja) * 2016-08-02 2018-02-15 キャノンメディカルシステムズ株式会社 医用情報システム、情報処理端末、医用情報サーバ及び医用情報提供プログラム
US20190069098A1 (en) * 2017-08-25 2019-02-28 Starkey Laboratories, Inc. Cognitive benefit measure related to hearing-assistance device use
WO2020021487A1 (fr) * 2018-07-25 2020-01-30 Cochlear Limited Procédés et systèmes de réhabilitation et/ou de rééducation
US20220254367A1 (en) * 2021-02-05 2022-08-11 Sonova Ag Determining social interaction of a user wearing a hearing device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170123824A1 (en) * 2015-10-28 2017-05-04 Bose Corporation Sensor-enabled feedback on social interactions
JP2018026125A (ja) * 2016-08-02 2018-02-15 キャノンメディカルシステムズ株式会社 医用情報システム、情報処理端末、医用情報サーバ及び医用情報提供プログラム
US20190069098A1 (en) * 2017-08-25 2019-02-28 Starkey Laboratories, Inc. Cognitive benefit measure related to hearing-assistance device use
WO2020021487A1 (fr) * 2018-07-25 2020-01-30 Cochlear Limited Procédés et systèmes de réhabilitation et/ou de rééducation
US20220254367A1 (en) * 2021-02-05 2022-08-11 Sonova Ag Determining social interaction of a user wearing a hearing device

Similar Documents

Publication Publication Date Title
CN112602337B (zh) 被动适配技术
US20250063311A1 (en) User-preferred adaptive noise reduction
CN115768514A (zh) 绕过验证的医疗装置控制
US20240194335A1 (en) Therapy systems using implant and/or body worn medical devices
US20240155299A1 (en) Auditory rehabilitation for telephone usage
US20230329912A1 (en) New tinnitus management techniques
US20240382751A1 (en) Clinician task prioritization
WO2024228091A1 (fr) Surveillance de la sociabilité d'un utilisateur
US20240304314A1 (en) Predictive medical device consultation
CN119072743A (zh) 基于动态列表的语音测试
WO2025238503A1 (fr) Réglages basés sur des données environnementales enregistrées
US20250071492A1 (en) Tinnitus remediation with speech perception awareness
WO2025233755A1 (fr) Génération de dossiers cliniques simultanés
US20240325746A1 (en) User interfaces of a hearing device
CN120094098A (zh) 利用修复体技术和/或其它技术的生理测量管理
WO2025114818A1 (fr) Contrôle de qualité d'assistance pour ajustement d'implant
US20250194959A1 (en) Targeted training for recipients of medical devices
WO2025114819A1 (fr) Personnalisation de dispositif
WO2025062297A1 (fr) Ajustement d'opérations d'un dispositif sur la base de données d'environnement
WO2024150094A1 (fr) Surveillance de jalons de parole-langage
WO2024252356A1 (fr) Techniques à plusieurs parties
WO2024218687A1 (fr) Techniques prédictives pour aides sensorielles
WO2024141900A1 (fr) Intervention audiologique
WO2025210451A1 (fr) Détermination de paramètre de dispositif dérivé de données
WO2025229471A1 (fr) Reconnaissance automatique de la parole dans un traitement sonore

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24799943

Country of ref document: EP

Kind code of ref document: A1