
WO2025058630A1 - Methods and systems for performing cochlear implant stimulation based on an analytic signal - Google Patents


Info

Publication number
WO2025058630A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal
envelope
audio
input spectrum
additional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2023/032875
Other languages
French (fr)
Inventor
John Norris
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced Bionics LLC
Original Assignee
Advanced Bionics LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Bionics LLC filed Critical Advanced Bionics LLC

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N: ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00: Electrotherapy; Circuits therefor
    • A61N1/18: Applying electric currents by contact electrodes
    • A61N1/32: Applying electric currents by contact electrodes, alternating or intermittent currents
    • A61N1/36: Applying electric currents by contact electrodes, alternating or intermittent currents, for stimulation
    • A61N1/36036: Applying electric currents by contact electrodes, alternating or intermittent currents, for stimulation of the outer, middle or inner ear
    • A61N1/36038: Cochlear stimulation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/50: Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/505: Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/60: Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles
    • H04R25/604: Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles, of acoustic or vibrational transducers
    • H04R25/606: Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles, of acoustic or vibrational transducers acting directly on the eardrum, the ossicles or the skull, e.g. mastoid, tooth, maxillary or mandibular bone, or mechanically stimulating the cochlea, e.g. at the oval window
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00: Signal processing covered by H04R, not provided for in its groups
    • H04R2430/03: Synergistic effects of band splitting and sub-band processing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception, using an external connection, either wireless or wired

Definitions

  • certain signal processing may be performed to analyze audio presented to a recipient of a cochlear implant system and, based on the analysis of this audio, to generate stimulation data configured to direct a cochlear implant that has been implanted within the recipient to properly stimulate the recipient in accordance with the audio.
  • FIG. 1 shows an illustrative method for increasing a temporal resolution of cochlear implant stimulation based on an analytic signal in accordance with principles described herein.
  • FIG. 2 shows an illustrative computing system configured to increase temporal resolution of stimulation based on an analytic signal in accordance with principles described herein.
  • FIG. 3 shows certain elements of an illustrative cochlear implant system configured to increase a temporal resolution of cochlear implant stimulation based on an analytic signal in accordance with principles described herein.
  • FIG. 4 shows an illustrative implementation of a cochlear implant system in which methods and systems described herein may be embodied in accordance with principles described herein.
  • FIG. 5 shows example facilities and signals that may be implemented in an illustrative architecture of a sound processing device in accordance with principles described herein.
  • FIG. 6 shows illustrative aspects of how an input signal may be processed by an example sound processing device in accordance with principles described herein.
  • FIG. 7 shows illustrative aspects of how a set of input spectrum signals may be generated by an example sound processing device in accordance with principles described herein.
  • FIG. 8 shows illustrative aspects of how envelope and phase values may be generated by an example sound processing device in accordance with principles described herein.
  • FIG. 9 shows illustrative aspects of how one or more analytic signals may be used by an example sound processing device to increase the resolution of envelope and phase values generated in accordance with principles described herein.
  • FIG. 11 shows an illustrative computing system that may implement any of the computing systems described herein.
  • cochlear implant systems may perform signal processing to analyze audio presented to a recipient and may generate stimulation data based on this signal processing. For example, after bringing in an audio signal and digitizing it, properly adjusting its gain, reducing noise on the signal, and so forth, a sound processing device may convert the signal from the time domain to the frequency domain and divide up the signal with respect to a plurality of channels associated with different frequencies. As part of this processing, a sound processing device may be configured to analyze both the envelope of the respective signals for each frequency channel into which the audio signal has been divided, and the fine structure (e.g., phase, frequency, etc.) of the respective input spectrum signals.
  • audio frame rate may refer to the number of audio frames being input and processed per unit of time (e.g., per second), as determined by how frequently samples are being captured for an audio signal and how many of those samples are included per audio frame (i.e., how many samples are to be processed at a time).
  • if an audio signal has been generated with a sample rate of 22.05 kHz (i.e., 22,050 samples per second) and processing is performed with respect to audio frames that are updated every 32 samples, the audio frame rate of this example would be calculated to be about 690 Hz (i.e., about 690 frames per second, as calculated by the quotient of 22,050 and 32).
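As an arithmetic illustration of the above, the following minimal Python sketch uses the example figures from the text; it is illustrative only and not part of the disclosed system:

```python
# Audio frame rate from the example figures above (22.05 kHz, 32 samples/frame).
sample_rate_hz = 22_050        # samples per second
samples_per_frame = 32         # samples processed together as one audio frame

audio_frame_rate_hz = sample_rate_hz / samples_per_frame
print(audio_frame_rate_hz)     # 689.0625, i.e., about 690 audio frames per second
```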
  • Methods that calculate one signal value per audio frame are described, for example, in U.S. Patent No. 7,515,966, which is hereby incorporated by reference in its entirety.
  • stimulation frame rate (also referred to as a “forward telemetry rate”), which may refer to the rate of stimulation frames being provided (e.g., by a sound processing device) to the cochlear implant as the cochlear implant applies stimulation to the cochlear implant recipient.
  • the stimulation frame rate may be determined based on a variety of factors (e.g., customized to the needs and preferences of the recipient), and, at least in some examples or to some extent, may be independent from the audio frame rate. As an example, a stimulation frame rate for a particular cochlear implant recipient may be about 1856 Hz.
  • the signal processing chain may up-sample the audio frames to keep pace with the desired stimulation frame rate. For example, if a singular envelope and phase value is determined for each channel based on each audio frame, stimulation frames may repeat the singular envelope and phase value for that channel before being refreshed with a new envelope and phase value associated with the next audio frame.
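The repetition this describes can be sketched as follows; the envelope values are hypothetical and the rates are the illustrative 690 Hz and 1856 Hz figures used herein:

```python
import numpy as np

# One envelope value per audio frame for a single channel (hypothetical values).
envelope_per_frame = np.array([0.20, 0.35, 0.30])   # 3 audio frames

# With ~2.7 stimulation frames per audio frame (1856 Hz / 690 Hz), a
# conventional chain reuses each value until the next audio frame arrives.
stim_per_audio = 1856 / 690
stim_index = np.arange(int(len(envelope_per_frame) * stim_per_audio))
source_frame = (stim_index / stim_per_audio).astype(int)  # audio frame feeding each stim frame
upsampled = envelope_per_frame[source_frame]
print(upsampled)   # [0.2 0.2 0.2 0.35 0.35 0.35 0.3 0.3]: each value reused
```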
  • methods and systems described herein may allow for the temporal resolution of cochlear implant stimulation to be increased by using down sampled band-limited analytic signals (e.g., band-limited Hilbert analytic signals generated using inverse Fourier transforms and set up to overlap low and/or mid-frequency ranges, etc.) to resample the original audio signal at a rate closer to the stimulation frame rate.
  • analytic signals may help retain temporal information by producing more than one envelope and phase value per audio frame, such that the up-sampling described above ceases to be necessary.
  • by using the Hilbert analytic signal to generate the fine structure for one or more input spectrum intervals (e.g., input signals associated with one or more frequency channels), several pairs of envelope and phase values may be computed (e.g., determined, estimated, etc.) per audio frame.
  • the rate of these envelope/phase value pairs (e.g., the number of value pairs per frame times the audio frame rate) may form what will be referred to herein as an “effective audio frame rate.”
  • the effective audio frame rate may be made to be greater than the stimulation frame rate, such that down-sampling, rather than the up-sampling described above, will be needed to meet the desired stimulation frame rate. Accordingly, for a given stimulation frame rate, the temporal resolution may thereby be improved compared to a system that calculates one signal value per audio frame. With this increased resolution, cochlear implant systems may be enabled to represent captured audio signals with more fidelity and to otherwise provide stimulation that is improved overall. These improvements may benefit producers and recipients of cochlear implant systems implementing these principles in various ways that will be described and made apparent herein.
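The relationship between these rates can be made concrete with a short sketch; the figures are the illustrative ones above, and picking the smallest sufficient number of value pairs is an assumption for illustration:

```python
import math

audio_frame_rate_hz = 22_050 / 32       # ~689 Hz, from the example above
stimulation_frame_rate_hz = 1856        # example stimulation frame rate

# Fewest envelope/phase value pairs per audio frame for which the effective
# audio frame rate exceeds the stimulation frame rate.
pairs_per_frame = math.ceil(stimulation_frame_rate_hz / audio_frame_rate_hz)
effective_rate_hz = audio_frame_rate_hz * pairs_per_frame
print(pairs_per_frame, effective_rate_hz)   # 3 pairs/frame -> ~2067 Hz > 1856 Hz
```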
  • FIG. 1 shows an example sound processing device 100 and an illustrative method 102 for increasing a temporal resolution of cochlear implant stimulation based on an analytic signal that sound processing device 100 may perform in accordance with principles described herein.
  • while method 102 (also referred to as process 102) shows illustrative operations according to one implementation, other implementations may omit, add to, reorder, and/or modify any of operations 104 shown in method 102.
  • multiple operations 104 may be performed concurrently (e.g., in parallel) with one another, rather than being performed sequentially as illustrated and/or described.
  • part or all of method 102 may, at least in certain circumstances, be performed in real time so as to provide, receive, process, and/or use data described herein immediately as the data is generated, updated, changed, exchanged, or otherwise becomes available.
  • operations described herein may involve real-time data, real-time representations, real-time conditions, and/or other real-time circumstances.
  • real time will be understood to relate to data processing and/or other actions that are performed immediately, as well as conditions and/or circumstances that are accounted for as they exist in the moment when the processing or other actions are performed.
  • a real-time operation may refer to an operation that is performed immediately and without undue delay, even if it is not possible for there to be absolutely zero delay.
  • real-time data, real-time representations, real-time conditions, and so forth will be understood to refer to data, representations, and conditions that relate to a present moment in time or a moment in time when decisions are being made and operations are being performed (e.g., even if after a short delay), such that the data, representations, conditions, and so forth are temporally relevant to the decisions being made and/or the operations being performed.
  • One or more of operations 104 shown in FIG. 1 may be performed by data processing resources, user interface resources, communication resources, and/or other suitable computing resources of sound processing device 100, which, as will be described and illustrated in more detail below, may be communicatively coupled to a cochlear implant in a cochlear implant system that includes sound processing device 100, the cochlear implant, and other components.
  • sound processing device 100 may obtain an audio signal that is represented in a time domain and that comprises a series of audio frames.
  • the obtained audio signal may be based on an acoustic signal that is captured by a microphone of the cochlear implant system (e.g., a microphone included within or communicatively coupled to sound processing device 100).
  • the audio signal may be generated by digitizing the acoustic signal and applying certain effects (e.g., automatic gain control (AGC), noise reduction, etc.) to the signal.
  • Sound processing device 100 may therefore obtain this audio signal either by generating the signal itself, by receiving the signal after it has been generated by another part of the system, or by some combination of these (e.g., receiving the signal at some stage of the process and applying certain effects to finish preparing the audio signal for the processing of operations 104-2 through 104-5).
  • the audio signal may be based on an electrical signal provided to the cochlear implant system (e.g., an audio signal associated with a music file or other recording, a transmission of a sound that is captured remotely from the recipient, etc.).
  • each audio frame in the series of audio frames may incorporate a certain number of samples from the audio signal.
  • the audio signal may include approximately 22,050 samples per second (i.e., an audio sampling frequency of 22.05 kHz), and each update of 32 samples may be processed together as an audio frame.
  • the audio frame rate would therefore be approximately 690 Hz (i.e., approximately 690 audio frames per second). It will be understood that these values are provided only by way of illustration and that any sample rate and audio frame rate may be used as may serve a particular implementation.
  • sound processing device 100 may generate spectral values for a set of input spectrum signals based on the audio signal. While the audio signal, as mentioned above, may be represented in the time domain (i.e., representing the audio signal as a function of time), each input spectrum signal in the set of input spectrum signals generated at operation 104-2 may be represented in the frequency domain (i.e., representing each audio frame as a function of how much energy is associated with each frequency band in a set of frequency bands).
  • the set of input spectrum signals generated at operation 104-2 may correspond to a set of channels.
  • This set of input spectrum signals may include, for example, a particular input spectrum signal with spectral values corresponding to a particular channel of the set of channels (e.g., a frequency domain signal associated with a particular frequency band in the overall spectrum of audible frequencies).
  • the generating of the set of input spectrum signals at operation 104-2 may be performed using a Fourier transform (e.g., a short-time Fourier transform (STFT), a fast Fourier transform (FFT), etc.), or in any other manner as may serve a particular implementation.
  • generating the spectral values at operation 104-2 may include generating the spectrum of windowed input audio and zeroing out all the negative frequencies.
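One possible realization of this step is sketched below with NumPy; the Hann window is an assumption (the description above does not specify a window), and a full analytic-signal construction would additionally double the interior positive-frequency bins:

```python
import numpy as np

def one_sided_spectrum(frame: np.ndarray) -> np.ndarray:
    """Spectrum of a windowed audio frame with all negative frequencies zeroed."""
    n = len(frame)
    windowed = frame * np.hanning(n)      # window choice is an assumption
    spectrum = np.fft.fft(windowed)
    spectrum[n // 2 + 1:] = 0.0           # zero out the negative-frequency bins
    return spectrum

# Example: one 32-sample audio frame of a 1 kHz tone sampled at 22.05 kHz.
fs = 22_050
t = np.arange(32) / fs
frame = np.sin(2 * np.pi * 1000 * t)
print(np.abs(one_sided_spectrum(frame)).round(2))
```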
  • sound processing device 100 may determine an analytic signal associated with one or more of the set of channels (e.g., associated with the particular channel mentioned above in relation to operation 104-2).
  • the analytic signal may represent positive frequency regions.
  • operation 104-3 may involve determining band-limited analytic signals associated with the low and mid positive frequency ranges. These analytic signals may be generated based on the particular input spectrum signal in any suitable way.
  • sound processing device 100 may spectrally shift the mid frequency ranges so they approximately overlap the low frequency range, and may down-sample both the low and mid analytic signals so they have a common sample rate.
  • operation 104-3 may be performed by generating several (e.g., two, three, four, etc.) band-limited analytic signals (e.g., Hilbert analytic signals) to help retain timing information.
  • These analytic signals may be configured to overlap low- and/or mid-frequency ranges by being generated using an inverse Fourier transform for the bins that cover these frequency ranges.
  • the band-limited analytic signals may be spectrally shifted down and down-sampled.
  • an overlap-and-add technique (e.g., weighted overlap-add (WOLA)) may then be performed to reconstruct the windowed signals.
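The band-limited analytic signal construction just described can be sketched as follows; the bin range, shift, and decimation factor are illustrative assumptions, and the overlap-and-add reconstruction across successive frames is omitted for brevity:

```python
import numpy as np

def band_limited_analytic(spectrum, lo_bin, hi_bin, decim):
    """Band-limited analytic signal from positive-frequency bins lo_bin..hi_bin:
    inverse-FFT only those bins (all others zero, so the result is analytic),
    shift the band down toward baseband, then down-sample to a lower rate."""
    n = len(spectrum)
    band = np.zeros(n, dtype=complex)
    band[lo_bin:hi_bin + 1] = spectrum[lo_bin:hi_bin + 1]  # keep one band only
    analytic = np.fft.ifft(band)                           # complex, no negative freqs
    shift = np.exp(-2j * np.pi * lo_bin * np.arange(n) / n)  # spectral shift down
    # decim must be small enough that the shifted band fits below the new Nyquist.
    return (analytic * shift)[::decim]

# Example: 512-point spectrum, band in bins 20-59, down-sampled by 8.
n = 512
x = np.random.default_rng(0).standard_normal(n)
spec = np.fft.fft(x * np.hanning(n))
z = band_limited_analytic(spec, 20, 59, 8)   # 64 complex baseband samples
```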
  • sound processing device 100 may generate one or more envelope signals and fine structure signals (e.g., instantaneous frequencies) based on the analytic signal determined at operation 104-3 (e.g., by filtering the analytic signal with a set of band-pass filters that define each of the channels' frequency intervals). These band-pass filters' cutoff frequencies are adjusted for the analytic signals that have been spectrally shifted. Envelope and fine structure signals may be produced for each of the set of channels using one or more analytic signals (as well as other processing tools such as band-pass filters associated with each of the channels).
  • sound processing device 100 may generate an envelope signal and a fine structure signal for the particular channel.
  • sound processing device 100 may specify a set of logarithmically-spaced band-pass filters, shifting their cutoff frequencies if they overlap one of the mid frequency ranges.
  • the sound processing device 100 may filter the analytic signals to create logarithmically spaced channel signals and may determine the envelopes and instantaneous frequencies for the mid and low frequency channels and the phase for the low frequency channels.
  • Envelope and fine structure signals generated at operation 104-4 may be represented in the time domain, similar to the audio signal obtained at operation 104-1. However, whereas the audio signal may correspond to the entire audio spectrum (rather than a single channel or subset of channels), the envelope and fine structure signals generated at operation 104-4 may, like the set of input spectrum signals from which the analytic signals derive, be individually associated with particular channels of the set of channels.
  • an envelope signal for the particular channel generated at operation 104-4 may consist of envelope (e.g., amplitude) values of the input spectrum signal for the particular channel, while a fine structure signal for that particular channel generated at operation 104-4 may consist of phase (e.g., frequency) values of the input spectrum signal for that particular channel.
  • envelope and phase value pairs could theoretically be generated for every sample of every audio frame on a one-to-one basis (e.g., providing 32 envelope/phase value pairs per audio frame for examples described above to include 32 samples per audio frame).
  • this level of resolution may be overkill when weighed against the stimulation frame rate that may be in use for a given implementation.
  • real-world sound processing devices do not, of course, have access to limitless processing power.
  • the number of envelope/phase value pairs generated per audio frame for each channel may be customizable based on the available processing power, the target stimulation frame rate, and/or other such factors.
  • a sufficient number of value pairs may be generated per audio frame to make the effective audio frame rate (i.e., the audio frame rate multiplied by the number of value pairs per frame) greater than the stimulation frame rate.
  • sound processing device 100 may transmit a series of stimulation frames to the cochlear implant.
  • the stimulation frames may be generated based on the envelope and fine structure signals generated at operation 104-4.
  • a transmittal of stimulation frames by a conventional sound processing device (i.e., one not configured to increase the temporal resolution based on analytic signals) may require up-sampling of the envelope and phase values of the envelope and fine structure signals (i.e., reusing each value pair more than once in consecutive stimulation frames while the next audio frame is being processed).
  • the increased resolution enabled by the envelope and fine structure signals generated at operation 104-4 may reduce or eliminate this undesirable practice.
  • each stimulation frame transmitted in the series may include a unique and updated envelope/phase value pair to thereby increase the resolution and enhance the quality perceived by the recipient of the cochlear implant.
  • FIG. 2 shows an illustrative system 200 (e.g., a computing system such as a sound processing device or other such device) configured to increase temporal resolution of stimulation based on an analytic signal in accordance with principles described herein.
  • system 200 is shown to include a memory 202 storing instructions 204, as well as one or more processors 206 communicatively coupled to memory 202 and configured to execute instructions 204 to perform process 102.
  • processors 206 may access memory 202 and load instructions 204 that cause the processor to perform operations 104 of process 102 (similar or identical to operations 104 described above in relation to FIG. 1).
  • System 200 may be implemented by computer resources such as processors, memory facilities, storage facilities, communication interfaces, and so forth, implemented on one or more computing devices described herein.
  • system 200 (or components thereof) may be implemented by sound processing devices such as behind-the-ear (BTE) sound processors, body worn sound processors, active headpieces worn on the head, implanted sound processors, computing devices communicatively coupled to such sound processors (e.g., mobile devices or other personal computing devices that physically or wirelessly connect to cochlear implant system components such as sound processors, etc.), by some combination of these, or by other suitable computing systems as may serve a particular implementation.
  • processor 206 (which will be understood to represent one or more processors) and memory 202 may be selectively and communicatively coupled to one another and/or to other resources (e.g., networking and communication interfaces, etc.).
  • memory facilities represented by memory 202 and processors represented by processor 206 may be distributed between multiple computing systems and/or multiple locations as may serve a particular implementation.
  • One or more memory facilities represented by memory 202 may store and/or otherwise maintain executable data used by one or more processors represented by processor 206 to perform any of the functionality described herein.
  • memory 202 may store instructions 204 that may be executed by processor 206.
  • Memory 202 may represent (e.g., may be implemented by) one or more memory or storage devices, including any memory or storage devices described herein, that are configured to store data in a transitory or non-transitory manner.
  • Instructions 204 may be executed by processor 206 to cause system 200 to perform any of the functionality described herein.
  • Instructions 204 may be implemented by any suitable application, software, script, code, and/or other executable data instance.
  • memory 202 may also maintain any other data accessed, managed, used, and/or transmitted by processor 206 in a particular implementation.
  • Processor 206 may represent (e.g., may be implemented by) one or more computer processing devices, including general-purpose processors (e.g., central processing units (CPUs), graphics processing units (GPUs), microprocessors, etc.), special-purpose processors (e.g., application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), etc.), or the like.
  • system 200 may perform functions associated with increasing a temporal resolution of cochlear implant stimulation based on an analytic signal in accordance with methods and systems described herein and/or as may serve a particular implementation.
  • FIG. 2 shows process 102 for increasing a temporal resolution of cochlear implant stimulation based on an analytic signal.
  • Process 102 is shown to include the same operations 104-1 through 104-5 described above in relation to FIG. 1, and it will be understood that these operations may be performed in the same or similar ways by processor 206 as described above in relation to sound processing device 100.
  • operation 104-1 may be performed by obtaining an audio signal that is represented in a time domain and that comprises a series of audio frames; operation 104-2 may be performed by generating (e.g., based on the audio signal obtained at operation 104-1) a set of input spectrum signals in a frequency domain (where the set of input spectrum signals corresponds to a set of channels and includes a particular input spectrum signal corresponding to a particular channel of the set of channels); operation 104-3 may be performed by determining (e.g., based on the particular input spectrum signal) an analytic signal associated with the particular channel; operation 104-4 may be performed by generating, based on the analytic signal, an envelope signal and a fine structure signal for the particular channel (wherein, for each audio frame of the series of audio frames, the envelope signal includes more than one envelope value and the fine structure signal includes more than one phase value); and operation 104-5 may be performed by transmitting a series of stimulation frames generated based on the envelope and fine structure signals.
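The five operations can be tied together in a toy end-to-end sketch, shown below for a single full-band channel. Every parameter value and simplification here, including processing the whole frame as one band and picking four value pairs per frame, is an illustrative assumption rather than the disclosed method:

```python
import numpy as np

fs, frame_len, stim_rate = 22_050, 32, 1856.0
t = np.arange(fs // 10) / fs                        # 100 ms of audio
audio = np.cos(2 * np.pi * 440 * t)                 # operation 104-1: obtain audio

frames = audio[: len(audio) - len(audio) % frame_len].reshape(-1, frame_len)

env_values, phase_values = [], []
pairs_per_frame = 4                                 # > stim_rate / audio frame rate
for frame in frames:
    spec = np.fft.fft(frame * np.hanning(frame_len))    # operation 104-2
    spec[frame_len // 2 + 1:] = 0                       # zero negative frequencies
    analytic = np.fft.ifft(spec)                        # operation 104-3 (full band)
    keep = np.linspace(0, frame_len - 1, pairs_per_frame, dtype=int)
    env_values.extend(np.abs(analytic)[keep])           # operation 104-4: envelopes
    phase_values.extend(np.angle(analytic)[keep])       # operation 104-4: phases

# Operation 104-5: the effective rate now exceeds the stimulation rate, so the
# value stream is down-sampled (not repeated) into stimulation frames.
audio_frame_rate = fs / frame_len                       # ~689 Hz
n_stim = int(len(frames) / audio_frame_rate * stim_rate)
idx = np.linspace(0, len(env_values) - 1, n_stim).astype(int)
stim_frames = list(zip(np.array(env_values)[idx], np.array(phase_values)[idx]))
print(len(frames), len(env_values), len(stim_frames))   # 68 frames -> 272 pairs -> 183 stim frames
```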
  • FIG. 3 shows certain elements of an illustrative cochlear implant system 300 configured to increase a temporal resolution of cochlear implant stimulation based on an analytic signal in accordance with principles described herein.
  • cochlear implant system 300 may include: 1) a microphone 302 that may be configured to capture an acoustic signal; 2) a cochlear implant 304 that may be configured to stimulate a recipient in which cochlear implant 304 is implanted; and 3) an implementation of sound processing device 100 that may be communicatively coupled to microphone 302 and to cochlear implant 304.
  • sound processing device 100 may be configured to perform a method or process such as process 102.
  • sound processing device 100 may perform process 102, which was described in relation to FIGS. 1 and 2. More particularly, as shown, this implementation of sound processing device 100 may obtain, at operation 104-1, an audio signal that is based on the acoustic signal captured by microphone 302, that is represented in a time domain, and that comprises a series of audio frames. At operation 104-2, the sound processing device 100 may generate, based on the audio signal, a set of input spectrum signals in a frequency domain, the set of input spectrum signals corresponding to a set of channels and including a particular input spectrum signal corresponding to a particular channel of the set of channels. At operation 104-3, the sound processing device 100 may determine an analytic signal based on the particular input spectrum signal.
  • the sound processing device 100 may generate, based on the analytic signal, an envelope signal and a fine structure signal for the particular channel.
  • the envelope and fine structure signals may be configured such that, for each audio frame of the series of audio frames, the envelope signal includes more than one envelope value and the fine structure signal includes more than one phase value.
  • the sound processing device 100 may transmit, to cochlear implant 304, a series of stimulation frames generated based on the envelope and fine structure signals.
  • FIG. 4 shows a more detailed implementation 400 (also referred to as cochlear implant system 400) of a cochlear implant system such as the cochlear implant system 300 described above in relation to FIG. 3.
  • Cochlear implant system 400 may be configured to be used by a recipient. As shown, cochlear implant system 400 receives audio input (e.g., by way of an audio source implementing microphone 302 or another suitable source) and includes a sound processor 402 (e.g., implementing sound processing device 100), a headpiece 404, a cochlear implant 406 (e.g., implementing cochlear implant 304), and an electrode lead 408 physically coupled to cochlear implant 406 and having an array of electrodes 410. In some examples, cochlear implant systems such as implementation 400 may include more or fewer components than those explicitly shown in FIG. 4.
  • Cochlear implant system 400 shown in FIG. 4 is unilateral (i.e., associated with only one ear of the recipient).
  • a bilateral configuration of cochlear implant system 400 may include separate cochlear implants and electrode leads for each ear of the recipient.
  • sound processor 402 may be implemented by a single sound processing device configured to interface with both cochlear implants or by two separate sound processing devices each configured to interface with a different one of the cochlear implants.
  • Cochlear implant 406 may be implemented by any suitable type of implantable stimulator configured to apply electrical stimulation to one or more stimulation sites located along an auditory pathway of the recipient.
  • cochlear implant 406 may additionally or alternatively apply nonelectrical stimulation (e.g., mechanical and/or optical stimulation) to the auditory pathway of the recipient.
  • cochlear implant 406 may be configured to generate electrical stimulation representative of an audio signal received as part of the audio input (captured by microphone 302) and/or processed by sound processor 402 in accordance with one or more stimulation parameters transmitted to cochlear implant 406 by sound processor 402.
  • Cochlear implant 406 may be further configured to apply the electrical stimulation to one or more stimulation sites (e.g., one or more intracochlear locations) within the recipient by way of one or more electrodes 410 on electrode lead 408.
  • cochlear implant 406 may include a plurality of independent current sources each associated with a channel defined by one or more of electrodes 410. In this manner, different stimulation current levels may be applied to multiple stimulation sites simultaneously by way of multiple electrodes 410.
  • Cochlear implant 406 may additionally or alternatively be configured to generate, store, and/or transmit data.
  • cochlear implant 406 may use one or more electrodes 410 to record one or more signals (e.g., one or more voltages, impedances, evoked responses within the recipient, and/or other measurements) and transmit, by way of a back telemetry communication link, data representative of the one or more signals to sound processor 402. In some examples, this data is referred to as back telemetry data.
  • Electrode lead 408 may be implemented in any suitable manner.
  • a distal portion of electrode lead 408 may be pre-curved such that electrode lead 408 conforms with the helical shape of the cochlea after being implanted.
  • Electrode lead 408 may alternatively be naturally straight or of any other suitable configuration.
  • electrode lead 408 includes a plurality of wires (e.g., within an outer sheath) that conductively couple electrodes 410 to one or more current sources within cochlear implant 406. For example, if there are n electrodes 410 on electrode lead 408 and n current sources within cochlear implant 406, there may be n separate wires within electrode lead 408 that are configured to conductively connect each electrode 410 to a different one of the n current sources. Exemplary values for n are 8, 12, 16, or any other suitable number.
  • Electrodes 410 are located on at least a distal portion of electrode lead 408.
  • Electrical stimulation may be applied by way of one or more of electrodes 410 to one or more intracochlear locations.
  • One or more other electrodes may also be disposed on other parts of electrode lead 408 (e.g., on a proximal portion of electrode lead 408) to, for example, provide a current return path for stimulation current applied by electrodes 410 and to remain external to the cochlea after the distal portion of electrode lead 408 is inserted into the cochlea.
  • a housing of cochlear implant 406 may serve as a ground electrode for stimulation current applied by electrodes 410.
  • Sound processor 402 may be configured to interface with (e.g., control and/or receive data from) cochlear implant 406. For example, sound processor 402 may transmit commands (e.g., stimulation parameters and/or other types of operating parameters in the form of data words included in a forward telemetry sequence) to cochlear implant 406 by way of a forward telemetry communication link. Sound processor 402 may additionally or alternatively provide operating power to cochlear implant 406 by transmitting one or more power signals to cochlear implant 406 by way of the communication link. Sound processor 402 may additionally or alternatively receive back telemetry data from cochlear implant 406 by way of the communication link. The communication link may be implemented by any suitable number of wired and/or wireless bidirectional and/or unidirectional links.
  • Sound processor 402 may represent an implementation of any of the sound processing devices or other systems described herein (e.g., system 200). As such, sound processor 402 may include a memory (e.g., similar to memory 202), one or more processors (e.g., similar to processor 206), and access to instructions that may cause the processors to perform methods and processes described herein (e.g., method 102). The audio input shown to be received by sound processor 402 may, as shown, implement microphone 302 described above. In the same or other examples, this audio input may be associated with an audio signal associated with a wireless interface (e.g., a Bluetooth interface) and/or a wired interface (e.g., an auxiliary input port).
  • Sound processor 402 may process this audio input in accordance with a sound processing program (e.g., a sound processing program stored in the memory of sound processor 402) to generate appropriate stimulation parameters. Sound processor 402 may then transmit the stimulation parameters (e.g., in a series of stimulation frames such as will be described in more detail below) to cochlear implant 406 to direct cochlear implant 406 to apply electrical stimulation representative of the audio signal to the recipient.
  • sound processor 402 may also be configured to apply acoustic stimulation to the recipient.
  • for example, sound processor 402 may deliver acoustic stimulation to the recipient by way of a receiver (also referred to as a loudspeaker).
  • the acoustic stimulation may be representative of an audio signal (e.g., an amplified version of the audio signal), configured to elicit an evoked response within the recipient, and/or otherwise configured.
  • cochlear implant system 400 may be referred to as a bimodal hearing system and/or any other suitable term.
  • Sound processor 402 may be additionally or alternatively configured to receive and process data generated by cochlear implant 406. For example, sound processor 402 may receive data representative of a signal recorded by cochlear implant 406 using one or more electrodes 410 and, based on the data, adjust one or more operating parameters of sound processor 402. Additionally or alternatively, sound processor 402 may use the data to perform one or more diagnostic operations with respect to cochlear implant 406 and/or the recipient.
  • sound processor 402 is communicatively coupled to one or more audio inputs (e.g., including an implementation of microphone 302) and to the headpiece 404.
  • FIG. 4 indicates that this audio input and the headpiece 404 may both be located external to the recipient (i.e., to the left of the layer of “SKIN”), while cochlear implant 406 and electrode lead 408 (with its electrodes 410) are implanted within the recipient (i.e., to the right of the layer of “SKIN”).
  • Sound processor 402 may be implemented by any suitable device that may be worn or carried by the recipient.
  • sound processor 402 may be implemented by a behind-the-ear (BTE) unit configured to be worn behind and/or on top of an ear of the recipient.
  • sound processor 402 may be implemented by an off-the-ear unit (also referred to as a body worn device) configured to be worn or carried by the recipient away from the ear.
  • at least a portion of sound processor 402 is implemented by circuitry within headpiece 404.
  • the audio input received by sound processor 402 may be configured to detect one or more audio signals (e.g., that include speech and/or any other type of sound) in an environment of the recipient.
  • This audio input may be implemented in any suitable manner.
  • audio input may be implemented by a microphone (e.g., an implementation of microphone 302) that is configured to be placed within the concha of the ear near the entrance to the ear canal, such as a T-MIC™ microphone from Advanced Bionics.
  • a microphone may be held within the concha of the ear near the entrance of the ear canal during normal operation by a boom or stalk that is attached to an ear hook configured to be selectively attached to sound processor 402.
  • one or more microphones in or on headpiece 404, one or more microphones in or on a housing of sound processor 402, one or more beamforming microphones, auxiliary audio inputs (e.g., from wired or wireless interfaces, etc.), and/or any other suitable audio sources as may serve a particular implementation may be used for audio input.
  • Headpiece 404 may be selectively and communicatively coupled to sound processor 402 by way of a communication link (e.g., a cable or any other suitable wired or wireless communication link), which may be implemented in any suitable manner.
  • Headpiece 404 may include an external antenna (e.g., a coil and/or one or more wireless communication components) configured to facilitate selective wireless coupling of sound processor 402 to cochlear implant 406.
  • Headpiece 404 may additionally or alternatively be used to selectively and wirelessly couple any other external device to cochlear implant 406.
  • headpiece 404 may be configured to be affixed to the recipient’s head and positioned such that the external antenna housed within headpiece 404 is communicatively coupled to a corresponding implantable antenna (which may also be implemented by a coil and/or one or more wireless communication components) included within or otherwise connected to cochlear implant 406.
  • stimulation parameters and/or power signals may be wirelessly and transcutaneously transmitted between sound processor 402 and cochlear implant 406 by way of a wireless communication link.
  • sound processor 402 may receive an audio signal detected by a microphone (e.g., microphone 302) by receiving an electrical audio signal representative of an acoustic signal captured by the microphone.
  • Sound processor 402 may additionally or alternatively receive the audio signal by way of any other suitable interface as described herein. Sound processor 402 may process the audio signal in any of the ways described herein and transmit, by way of headpiece 404, stimulation parameters (e.g., in a series of stimulation frames, as will be described) to cochlear implant 406 to direct cochlear implant 406 to apply electrical stimulation representative of the audio signal to the recipient.
  • sound processor 402 may be implanted within the recipient instead of being located external to the recipient.
  • in this alternative configuration (which may be referred to as a fully implantable implementation of cochlear implant system 300), sound processor 402 and cochlear implant 406 would be combined into a single device or implemented as separate devices configured to communicate with one another by way of a wired and/or wireless communication link.
  • headpiece 404 might not be included and one or more microphones used by the system may be implanted within the recipient, located within an ear canal of the recipient, and/or external to the recipient.
  • FIG. 5 shows example facilities and signals that may be implemented in an illustrative architecture 500 of an implementation of sound processing device 100 in accordance with principles described herein.
  • an audio input signal 502 may be received from an input source (“From Audio Input Source”) to an input signal processing facility 504.
  • Input signal processing facility 504 may produce an audio signal 506 that is then used by an input spectrum signal generation facility 508 to produce a set of input spectrum signals 510, some of which (input spectrum signals 510-1) may be provided to an envelope/frequency estimation facility 512 and others of which (input spectrum signals 510-2) may be provided to an analytic signal processing facility 514.
  • Facilities 512 and 514 may generate both respective envelope signals 516 (e.g., envelope signals 516-1 generated by envelope/frequency estimation facility 512 and envelope signals 516-2 generated by analytic signal processing facility 514) and respective fine structure signals 518 (e.g., fine structure signals 518-1 generated by envelope/frequency estimation facility 512 and fine structure signals 518-2 generated by analytic signal processing facility 514). All of these envelope signals 516 and fine structure signals 518 may then be processed by an envelope/phase signal processing facility 520 to produce a stimulation signal 522.
  • Stimulation signal 522 may then be used by a forward telemetry transmission facility 524 to generate a forward telemetry (FTEL) signal 526 that, as shown, may be provided to a cochlear implant by way of a headpiece (“To Cl (by way of HP)”) or provided in other suitable ways as have been described or as may serve a particular implementation.
  • each signal shown in FIG. 5 is connected by a dotted line to a name and various characteristics of the signal that will be described.
  • Several of the facilities shown in FIG. 5 are labeled with a dashed-line box indicating an additional figure that will be referenced as the facility is described.
  • for example, input signal processing facility 504 will be further described with reference to FIG. 6 (“See FIG. 6”), input spectrum signal generation facility 508 will be further described with reference to FIG. 7 (“See FIG. 7”), and so forth.
  • FIGS. 5-10 will help further illustrate how a temporal resolution of cochlear implant stimulation may be increased by using one or more analytic signals in accordance with principles described herein.
  • 1) a series of audio frames comprised in audio signal 506 may be associated with an audio frame rate; 2) envelope values included for each audio frame on envelope signals 516 (as well as phase values included for each audio frame on fine structure signals 518) may be associated with an effective audio frame rate (e.g., a rate equal to the audio frame rate multiplied by a number of envelope/phase value pairs included for each audio frame); and 3) a series of stimulation frames comprised in stimulation signal 522 may be associated with a stimulation frame rate.
  • Audio input signal 502 may be received from a suitable audio source (e.g., any of the audio input sources described herein) and may be received in an acoustic form (e.g., sound waves to be captured by a microphone integrated with sound processing device 100), an analog form (e.g., an electrical signal generated by an external microphone and provided to sound processing device 100), or another suitable form (e.g., a digital signal that was captured and digitized externally, etc.).
  • the first row of circles represents all the bin values of the set of input spectrum signals 510 for a first audio frame of audio signal 506, the second row of circles represents all the bin values of the set of input spectrum signals 510 for a second audio frame of audio signal 506, and so forth.
  • the frame rate RAudio of the audio frames making up input spectrum signals 510 may be the same audio frame rate described above.
  • the Fourier transform may operate with a window of one audio frame (e.g., 32 samples in one example) to determine the magnitude of each of the frequency ranges associated with each of the bins of interest.
  • sound processing device 100 will be considered to have increased the temporal resolution of cochlear implant stimulation in accordance with principles described herein regardless of how many of the rest of the input spectrum signals 510 may be processed through envelope/frequency estimation facility 512.
  • the processing in envelope/frequency estimation facility 512 and analytic signal processing facility 514 is shown to be performed in parallel, with each facility generating similar outputs (e.g., envelope signals 516 and fine structure signals 518).
  • an envelope analysis 802 may be performed on each input spectrum signal 510-1 to estimate the energy of each channel for each audio frame (which will be used to generate amplitude words for stimulating implanted electrodes), while an instantaneous frequency analysis 804 may be performed on each input spectrum signal 510-2 to estimate the instantaneous frequency of each channel for each audio frame.
  • respective dotted arrows extending from the subset of fine structure signals 518-1 in FIG. 8 point to different columns in an example visualization of the instantaneous frequency values associated with each audio frame of each channel in the set of channels.
  • a “1” in each of the squares in this visualization indicates that a singular (1) frequency value per channel may be estimated for each audio frame.
  • FIG. 5 indicates that the set of fine structure signals 518 each exhibit characteristics including being digital, being associated with frame rate RAudio (the same frame rate that the set of input spectrum signals 510 is associated with), and further being associated with an effective frame rate Rvalues that indicates the rate at which values are being produced for each channel.
  • Analytic signal generation utilities 902 may generate analytic signals 904 based on input spectrum signals 510-2.
  • analytic signals 904 may be generated as complex signals that include both envelope and phase information for one or more of the set of channels.
  • analytic signals 904 may represent the envelope and phase information (i.e., the envelope and fine structure information) for the various channels in the time domain so that the temporal resolution of stimulation based on these analytic signals is increased.
  • these complex signals could be used to determine an envelope value and a phase value for a given channel (or grouping of channels) with respect to each sample of the audio frame that is available. For instance, if each audio frame includes 32 samples and if there were enough processing power, each of these samples, when processed using the appropriate analytic signal 904, could be used to estimate both an envelope value and a phase value associated with that sample (e.g., resulting in 32 value pairs per audio frame in this example). As has been mentioned, unlimited processing power is of course not available to any system and that maximum rate may be overkill anyway for a typical stimulation frame rate (which may send 3 or 4 stimulation frames per audio frame).
  • a given implementation may provide a desirable number of envelope/phase value pairs per audio frame based on the processing power available, the desired stimulation frame rate, and other factors as may serve a particular implementation.
  • the phase is used to recreate the proper timing at which the amplitude words are to stimulate the implanted electrodes for lower-frequency channels.
  • an analytic signal is a complex-valued function that has no negative frequency components.
  • the real and imaginary parts of an analytic signal are real-valued functions related to each other by the Hilbert transform.
  • analytic signal generation utilities 902 may generate analytic signals 904 with these properties in any suitable way.
  • band limited analytic signals that overlap low and mid frequency ranges (e.g., of channels corresponding to input spectrum signals 510-2) may be generated using an inverse FFT for the bins that cover these positive frequency intervals.
  • Such band limited signals may be spectrally shifted down and down-sampled.
  • An overlap-and-add technique (e.g., weighted overlap-add (WOLA)) may then be performed to reconstruct the windowed signals.
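The two defining properties recited above can be checked numerically with SciPy's standard hilbert routine; this is a library illustration rather than the facility's implementation, and the test signal is arbitrary:

```python
import numpy as np
from scipy.signal import hilbert

# A real test signal: 200 Hz tone with a slow amplitude modulation.
fs = 22_050
t = np.arange(2048) / fs
x = (1 + 0.5 * np.sin(2 * np.pi * 5 * t)) * np.cos(2 * np.pi * 200 * t)

z = hilbert(x)                        # analytic signal: x + j * HilbertTransform(x)

assert np.allclose(z.real, x)         # real part is the original signal
neg_bins = np.fft.fft(z)[len(z) // 2 + 1:]
print(np.abs(neg_bins).max())         # ~0: no negative-frequency components

envelope = np.abs(z)                  # instantaneous envelope
phase = np.unwrap(np.angle(z))        # instantaneous phase
inst_freq_hz = np.diff(phase) * fs / (2 * np.pi)   # ~200 Hz fine structure
```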
  • In FIG. 9, a number of input spectrum signals 510-2 are shown to be taken as input to analytic signal processing facility 514 and to be used by a number of analytic signal generation utilities 902 to generate respective analytic signals 904. It will be understood that the numbers of elements shown in FIG. 9 are examples only and that any number of input spectrum signals (up to and including all of the input spectrum signals 510), analytic signal generation utilities, and analytic signals may be employed as may serve a particular implementation. As illustrated, for example, multiple different analytic signals may be based on different subsets of input spectrum signals (e.g., three input spectrum signals being used to create one analytic signal, four other input spectrum signals being used to create another analytic signal, etc.).
  • different subsets of input spectrum signals may be used by different analytic signal generation utilities 902 to generate multiple analytic signals 904 for the various subsets (e.g., input spectrum signals for channels 1 and 2 associated with one analytic signal, input spectrum signals for channels 3, 4, and 5 associated with another analytic signal, etc.).
  • one analytic signal generation utility 902 may be configured to determine a particular analytic signal (e.g., analytic signal 904-1) based on the first input spectrum signal (e.g., and possibly other input spectrum signals in a subset of input spectrum signals that includes the first input spectrum signal), while another analytic signal generation utility 902 (e.g., analytic signal generation utility 902-2) may be configured to determine an additional analytic signal (e.g., analytic signal 904-2) based on the second input spectrum signal (e.g., and possibly other input spectrum signals in an additional subset of input spectrum signals that includes the second input spectrum signal).
  • analytic signal processing facility 514 may ultimately use these one or more analytic signals 904 to produce individual envelope signals 516 and fine structure signals 518 for each channel.
  • there may only be, for instance, three analytic signals 904 determined within analytic signal processing facility 514 to help produce the ten envelope signals 516-2 and ten fine structure signals 518-2 that are ultimately derived from those three analytic signals 904 and output by analytic signal processing facility 514.
  • band-pass filter 906-1 may correspond to channel 1 and, as such, may use analytic signal 904-1 to generate both an envelope signal 516 and a fine structure signal 518 associated with channel 1.
  • band-pass filter 906-2 may correspond to channel 2 and, as such, may use analytic signal 904-1 (the same analytic signal used for channel 1 in this example) to generate both an envelope signal 516 and a fine structure signal 518 associated with channel 2.
  • band-pass filter 906-3 may correspond to channel 3 and use analytic signal 904-2 (a different analytic signal than was used for channels 1 and 2) to generate both an envelope signal 516 and a fine structure signal 518 associated with channel 3.
  • the generating of a given envelope signal 516 for a particular channel (e.g., channel 1) based on a particular analytic signal (e.g., analytic signal 904-1) may include applying a band-pass filter (e.g., band-pass filter 906-1) to the analytic signal to pass envelope information for the particular channel, while the generating of a given fine structure signal 518 for this particular channel based on the analytic signal includes applying the band-pass filter to the analytic signal to pass frequency information for the particular channel; a sketch of this filtering step follows.
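The filtering just described can be sketched as follows, assuming an already-generated analytic signal; the FIR filter design, tap count, and band edges are illustrative assumptions:

```python
import numpy as np
from scipy.signal import firwin, lfilter

def channel_envelope_and_fine_structure(analytic, lo_norm, hi_norm, taps=33):
    """Sketch: one channel's envelope and fine structure from an analytic signal.

    lo_norm and hi_norm are the channel's band edges normalized to Nyquist
    (0..1); in a real implementation these cutoffs would be adjusted for any
    spectral shift that was applied to the analytic signal.
    """
    # Band-pass filter tuned to this channel's frequency interval.
    bpf = firwin(taps, [lo_norm, hi_norm], pass_zero=False)
    channel = lfilter(bpf, 1.0, analytic)

    envelope = np.abs(channel)            # envelope (energy) values
    phase = np.unwrap(np.angle(channel))  # phase values
    inst_freq = np.diff(phase)            # instantaneous frequency, rad/sample
    return envelope, phase, inst_freq
```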
  • the set of input spectrum signals 510 may be split into subsets of input spectrum signals 510-1 and 510-2 to be processed in parallel by envelope/frequency estimation facility 512 (processing input spectrum signals 510-1) and by analytic signal processing facility 514 (processing input spectrum signals 510-2).
  • envelope signals 516 and fine structure signals 518 may be associated with different temporal resolutions (e.g., larger numbers of value pairs per audio frame in the case of envelope signals 516-2 and fine structure signals 518-2 than in envelope signals 516-1 and fine structure signals 518-1) based on the principles described above.
  • FIG. 5 shows how sound processing device 100 may generate envelope and fine structure signals with different temporal resolutions for different channels. For instance, if the first input spectrum signal is in the subset of input spectrum signals 510-1, then, based on the first input spectrum signal, envelope/frequency estimation facility 512 may generate a first envelope signal and a first fine structure signal for the first channel, wherein, for each audio frame of the series of audio frames, the first envelope signal includes one envelope value and the first fine structure signal includes one phase value.
  • analytic signal processing facility 514 may generate, based on the second input spectrum signal, a second envelope signal and a second fine structure signal for the second channel, wherein, for each audio frame of the series of audio frames, the second envelope signal includes more than one envelope value and the second fine structure signal includes more than one phase value.
  • all of the channels may be processed on one of these parallel paths (e.g., by envelope/frequency estimation facility 512 or by analytic signal processing facility 514), rather than the input spectrum signals being processed separately as shown in FIG. 5 and as has been described.
  • envelope/phase signal processing facility 520 is shown to receive and process all of the envelope signals 516-1 and 516-2 together with all of the fine structure signals 518-1 and 518-2.
  • envelope/phase signal processing facility 520 may use the envelope and phase values included in the envelope signals 516 and fine structure signals 518 to generate a stimulation signal 522 that incorporates stimulation data for each of the channels.
  • stimulation signal 522 may include all the information needed for a cochlear implant to appropriately stimulate each electrode of a set of electrodes implanted in the recipient’s cochlea so as to produce in the recipient a sense that he or she hears the sound on which the original audio input signal 502 was based. Due to the increase in temporal resolution provided by the analytic signal processing described above, this stimulation signal 522 may provide a superior experience for the recipient as compared to implementations that provide less resolution.
  • FIG. 10 shows illustrative aspects of how envelope signals 516 and fine structure signals 518 may be processed by an example envelope/phase signal processing facility 520 to form stimulation frames of stimulation signal 522 in accordance with principles described herein.
  • As shown, envelope signals 516-1 (generated by envelope/frequency estimation facility 512) and envelope signals 516-2 (generated by analytic signal processing facility 514), together with fine structure signals 518-1 and 518-2 (generated by those same respective facilities), may be received by envelope/phase signal processing facility 520.
  • envelope/phase signal processing facility 520 is shown to apply an envelope-processing function to each of the envelope signals 516 (envelope value processing 1002), to apply a fine-structure-processing function to each of the fine structure signals 518 (phase value processing 1004), and to perform a mapping function 1006 to recombine the envelope and fine structure signals to form the stimulation frames.
  • Envelope value processing 1002 may be configured to process the envelope values of envelope signals 516 in any suitable way. For instance, algorithms may be implemented to eliminate noise, isolate and emphasize energy from important frequency ranges (e.g., ranges associated with the human voice that are likely to represent speech, etc.), and perform other signal processing to clean up, prepare, and augment any suitable aspects of the energy represented for each channel.
  • phase value processing 1004 may be configured to process the phase values of fine structure signals 518 in any suitable way. For instance, as shown, phase value processing 1004 may use the fine structure information of fine structure signals 518 to implement a fine-structure-processing function such as a phase synthesis function 1008, a current steering function 1010, and/or any other suitable functions as may serve a particular implementation.
  • Phase synthesis function 1008 represents a first fine-structure-processing function that may be included in phase value processing 1004. Phase synthesis function 1008 may be performed by generating phase signals for each channel based on instantaneous frequency values included in the fine structure signals 518. For example, using instantaneous frequency information that has been estimated by envelope/frequency estimation facility 512 and/or analytic signal processing facility 514 (information represented by fine structure signals 518), phase synthesis function 1008 may be configured to generate, for each low frequency channel, temporal information by determining when the phase of that channel wraps around by 2π. When this happens, the channel stimulates the electrodes associated with it in the stimulation frame (e.g., at the rate at which forward telemetry stimulation frames are to be transmitted to the cochlear implant).
  • One aspect that phase synthesis function 1008 may account for is the timing information, firing order, and so forth, for the stimulation pulses that the electrodes are to apply to the recipient. Using the phase values that have been determined, phase synthesis function 1008 may determine when each electrode for the various channels is to be stimulated (see the sketch below). Additionally, the order in which stimulation pulses are to be applied to the different electrodes may be determined at this stage. For example, a nonoverlapping and nonconsecutive stimulation sequence may be used for the electrodes (e.g., to reduce electrical interference between neighboring electrodes).
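A minimal sketch of this wrap-based timing follows; the input units (radians of accumulated phase per stimulation frame) are an assumption made for the example:

```python
import numpy as np

def wrap_events(phase_increments):
    """Sketch: flag the stimulation frames in which a channel's phase wraps.

    phase_increments holds one instantaneous-frequency value (radians per
    stimulation frame) per frame; a wrap past 2*pi marks a frame in which
    this channel's electrode should be stimulated.
    """
    phase = 0.0
    fire = []
    for dphi in phase_increments:
        phase += dphi
        if phase >= 2 * np.pi:   # phase wrapped around: stimulate now
            phase -= 2 * np.pi
            fire.append(True)
        else:
            fire.append(False)
    return fire
```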
  • Current steering function 1010 represents a second fine-structure-processing function that may be included in phase value processing 1004.
  • Current steering function 1010 may be performed to simulate one or more sub-channels spectrally located between adjacent electrode pairs. For example, by stimulating two adjacent electrodes at the same time, the recipient may perceive that stimulation is applied at a location between the actual locations of the two electrodes. In this way, stimulation current may be made to target (i.e., may be steered toward) stimulation sites in the cochlea that do not in fact host an actual electrode. The precise location of the target stimulation site may be determined based on how much current is applied to each of the adjacent electrodes surrounding the site.
  • For example, given a first electrode at one insertion depth (depth 1) and an adjacent second electrode at another (depth 2), current may be steered to a location closer to depth 1 by driving a larger amount of current to the first electrode and a smaller amount of current to the second electrode. If a location between the two depths is then desired that is nearer to depth 2, this may be achieved by driving a smaller amount of current to the first electrode and a larger amount of current to the second electrode, as sketched below.
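This proportional split can be expressed compactly; the linear weighting used here is an illustrative assumption:

```python
def steer_current(total_current, alpha):
    """Sketch: split one channel's current between two adjacent electrodes.

    alpha in [0, 1] places the perceived stimulation site between the two
    electrodes: alpha = 0 drives all current to the first electrode (depth 1),
    alpha = 1 drives all of it to the second electrode (depth 2).
    """
    i_first = (1.0 - alpha) * total_current
    i_second = alpha * total_current
    return i_first, i_second
```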
  • the phase signals (e.g., carrier signals) that have been synthesized for each channel may be modulated based on the processed envelope values of the signals.
  • These various channels may be reordered to different stimulation times inside a frame to be included in stimulation signal 522, which may include all the information needed for the cochlear implant to properly stimulate all of the channels.
  • the stimulation frame rate of the stimulation frames included in stimulation signal 522 may be greater than the audio frame rate while being less than the effective audio frame rate that can be achieved when analytic signals are used to increase the temporal resolution of the signals. For example, if the stimulation frame rate (Rstimulation in FIG.
  • stimulation signal 522 may be processed by forward telemetry transmission facility 524 to generate FTEL signal 526.
  • forward telemetry transmission facility 524 may modulate the data of stimulation signal 522 onto a radio frequency (RF) carrier wave configured to carry power from the sound processing device 100 through the headpiece to the cochlear implant (to simultaneously provide both power and stimulation data to the cochlear implant by way of the headpiece).
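The disclosure does not specify a particular modulation scheme, but the idea of one RF carrier delivering both power and stimulation data can be sketched with shallow amplitude keying; the carrier frequency, bit rate, and modulation depth below are all assumptions made for illustration:

```python
import numpy as np

def ftel_sketch(bits, carrier_hz=49e6, bit_rate=1e6, fs=196e6):
    """Sketch: stimulation-frame bits keyed onto an RF power carrier.

    The carrier keeps delivering power continuously while its amplitude
    (1.0 vs. 0.8 here, an assumed modulation depth) encodes the data.
    """
    samples_per_bit = int(fs / bit_rate)
    amplitude = np.repeat(np.where(np.asarray(bits) > 0, 1.0, 0.8),
                          samples_per_bit)
    t = np.arange(amplitude.size) / fs
    return amplitude * np.sin(2 * np.pi * carrier_hz * t)
```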
  • The FTEL signal may therefore be a modulated RF signal that carries power modulated with data representative of stimulation signal 522.
  • this power signal may be wirelessly and transcutaneously transmitted by a headpiece, through the skin of the recipient, to the cochlear implant, where the FTEL power may provide energy to operate the cochlear implant and the FTEL data may provide information about how the cochlear implant is to direct each electrode to apply stimulation.
  • a computer-readable medium includes any non-transitory medium that participates in providing data (e.g., instructions) that may be read by a computer (e.g., by a processor of a computer).
  • a medium may take many forms, including, but not limited to, non-volatile media and/or volatile media.
  • Non-volatile media may include, for example, optical or magnetic disks and other persistent memory.
  • Volatile media may include, for example, dynamic random-access memory (DRAM), which typically constitutes a main memory.
  • Computer-readable media include, for example, a disk, hard disk, magnetic tape, any other magnetic medium, a compact disc read-only memory (CD-ROM), a digital video disc (DVD), any other optical medium, random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), FLASH-EEPROM, any other memory chip or cartridge, or any other tangible medium from which a computer can read.
  • FIG. 11 shows an illustrative computing system 1100 that may implement any of the computing systems described herein, including those employed as part of sound processing devices and/or other cochlear implant system components described herein.
  • computing system 1100 may include a communication interface 1102, a processor 1104, a storage device 1106, and an input/output (I/O) module 1108 communicatively connected via a communication infrastructure 1110.
  • While an illustrative computing system 1100 is shown in FIG. 11, the components illustrated in FIG. 11 are not intended to be limiting. Additional or alternative components may be used in other embodiments. Components of computing system 1100 shown in FIG. 11 will now be described in additional detail.
  • Communication interface 1102 may be configured to communicate with one or more computing devices.
  • Examples of communication interface 1102 include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, an audio/video connection, and any other suitable interface.
  • Processor 1104 generally represents any type or form of processing unit capable of processing data or interpreting, executing, and/or directing execution of one or more of the instructions, processes, and/or operations described herein. Processor 1104 may direct execution of operations in accordance with one or more applications 1112 or other computer-executable instructions such as may be stored in storage device 1106 or another computer-readable medium.
  • Storage device 1106 may include one or more data storage media, devices, or configurations and may employ any type, form, and combination of data storage media and/or device.
  • storage device 1106 may include, but is not limited to, a hard drive, network drive, flash drive, magnetic disc, optical disc, RAM, dynamic RAM, other non-volatile and/or volatile data storage units, or a combination or subcombination thereof.
  • Electronic data, including data described herein, may be temporarily and/or permanently stored in storage device 1106.
  • data representative of one or more executable applications 1112 configured to direct processor 1104 to perform any of the operations described herein may be stored within storage device 1106.
  • data may be arranged in one or more databases residing within storage device 1106.
  • I/O module 1108 may include one or more I/O modules configured to receive user input and provide user output. One or more I/O modules may be used to receive input for a single virtual experience. I/O module 1108 may include any hardware, firmware, software, or combination thereof supportive of input and output capabilities. For example, I/O module 1108 may include hardware and/or software for capturing user input, including, but not limited to, a keyboard or keypad, a touchscreen component (e.g., touchscreen display), a receiver (e.g., an RF or infrared receiver), motion sensors, and/or one or more input buttons.
  • I/O module 1108 may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers.
  • I/O module 1108 is configured to provide graphical data to a display for presentation to a user.
  • the graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation.
  • any of the facilities described herein may be implemented by or within one or more components of computing system 1100.


Abstract

A sound processing device is communicatively coupled to a cochlear implant. The sound processing device may obtain an audio signal represented in a time domain and comprising a series of audio frames. Based on the audio signal, the sound processing device may generate a set of input spectrum signals in a frequency domain, and, based on one or more of these input spectrum signals, may determine an analytic signal that can be used to generate an envelope signal and a fine structure signal for a particular channel. For each audio frame of the series, the envelope signal may include more than one envelope value and the fine structure signal may include more than one phase value. The sound processing device may transmit, to the cochlear implant, a series of stimulation frames generated based on these envelope and fine structure signals. Corresponding systems and methods are also disclosed.

Description

METHODS AND SYSTEMS FOR PERFORMING COCHLEAR IMPLANT STIMULATION BASED ON AN ANALYTIC SIGNAL
BACKGROUND INFORMATION
[0001] Various people suffer from partial or total hearing loss for a variety of reasons. For example, certain people are born without any ability to hear or lose this ability as a result of illness or accident. Others may enjoy normal hearing throughout their lives but still find that their hearing ability degrades significantly in their later years. In some of these circumstances, cochlear implant systems may be employed to provide a sense of hearing to recipients who lack this ability and/or to augment the natural hearing ability of recipients who may retain such an ability.
[0002] To implement a cochlear implant system, certain signal processing may be performed to analyze audio presented to a recipient of a cochlear implant system and, based on the analysis of this audio, to generate stimulation data configured to direct a cochlear implant that has been implanted within the recipient to properly stimulate the recipient in accordance with the audio. Certain challenges exist in implementations of cochlear implant systems, including challenges relating to the resolution with which sound information may be captured and recreated in the cochlear implant stimulation.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] The accompanying drawings illustrate various embodiments and are a part of the specification. The illustrated embodiments are merely examples and do not limit the scope of the disclosure. Throughout the drawings, identical or similar reference numbers designate identical or similar elements.
[0004] FIG. 1 shows an illustrative method for increasing a temporal resolution of cochlear implant stimulation based on an analytic signal in accordance with principles described herein.
[0005] FIG. 2 shows an illustrative computing system configured to increase temporal resolution of stimulation based on an analytic signal in accordance with principles described herein.
[0006] FIG. 3 shows certain elements of an illustrative cochlear implant system configured to increase a temporal resolution of cochlear implant stimulation based on an analytic signal in accordance with principles described herein. [0007] FIG. 4 shows an illustrative implementation of a cochlear implant system in which methods and systems described herein may be embodied in accordance with principles described herein.
[0008] FIG. 5 shows example facilities and signals that may be implemented in an illustrative architecture of a sound processing device in accordance with principles described herein.
[0009] FIG. 6 shows illustrative aspects of how an input signal may be processed by an example sound processing device in accordance with principles described herein. [0010] FIG. 7 shows illustrative aspects of how a set of input spectrum signals may be generated by an example sound processing device in accordance with principles described herein.
[0011] FIG. 8 shows illustrative aspects of how envelope and phase values may be generated by an example sound processing device in accordance with principles described herein.
[0012] FIG. 9 shows illustrative aspects of how one or more analytic signals may be used by an example sound processing device to increase the resolution of envelope and phase values generated in accordance with principles described herein.
[0013] FIG. 10 shows illustrative aspects of how envelope and fine structure signals may be processed by an example sound processing device to form stimulation frames in accordance with principles described herein.
[0014] FIG. 11 shows an illustrative computing system that may implement any of the computing systems described herein.
DETAILED DESCRIPTION
[0015] Methods and systems for performing cochlear implant stimulation based on an analytic signal are described herein. As mentioned above, cochlear implant systems may perform signal processing to analyze audio presented to a recipient and may generate stimulation data based on this signal processing. For example, after bringing in an audio signal and digitizing it, properly adjusting its gain, reducing noise on the signal, and so forth, a sound processing device may convert the signal from the time domain to the frequency domain and divide up the signal with respect to a plurality of channels associated with different frequencies. As part of this processing, a sound processing device may be configured to analyze both the envelope of the respective signals for each frequency channel into which the audio signal has been divided, and the fine structure (e.g., phase, frequency, etc.) of the respective input spectrum signals. Various signal processing techniques may be performed on envelope (e.g., energy) and phase (e.g., instantaneous frequency) values determined in this way prior to recombining the processed signals and converting the output to stimulation data that may be used to direct a cochlear implant to apply stimulation to the recipient.
[0016] For cochlear implant systems operating in this manner, there are various frame rates that will be referred to and described herein. One of these frame rates is an “audio frame rate,” which may refer to the number of audio frames being input and processed per unit of time (e.g., per second), as determined by how frequently samples are being captured for an audio signal and how many of those samples are included per audio frame (i.e., how many samples are to be processed at a time). As an example, if an audio signal has been generated with a sample rate of 22.05 kHz (i.e., 22,050 samples per second) and processing is performed with respect to audio frames being updated every 32 samples, the audio frame rate of this example would be calculated to be about 690 Hz (i.e., about 690 frames per second as calculated by the quotient of 22,050 and 32). Methods that calculate one signal value per audio frame are described, for example, in U.S. Patent No. 7,515,966, which is hereby incorporated by reference in its entirety.
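Restating the arithmetic of this example (the figures are the ones given above):

```python
sample_rate_hz = 22_050    # 22.05 kHz sampling, as in the example above
samples_per_frame = 32     # audio frame updated every 32 samples

audio_frame_rate_hz = sample_rate_hz / samples_per_frame
print(round(audio_frame_rate_hz))   # ~689, i.e., about 690 audio frames/second
```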
[0017] Another frame rate that will be referenced herein is a “stimulation frame rate” (also referred to as a “forward telemetry rate”), which may refer to the rate of stimulation frames being provided (e.g., by a sound processing device) to the cochlear implant as the cochlear implant applies stimulation to the cochlear implant recipient. The stimulation frame rate may be determined based on a variety of factors (e.g., customized to the needs and preferences of the recipient), and, at least in some examples or to some extent, may be independent from the audio frame rate. As an example, a stimulation frame rate for a particular cochlear implant recipient may be about 1856 Hz.
[0018] In examples like this where the stimulation frame rate (1856 Hz) is significantly greater than the audio frame rate (690 Hz), the signal processing chain may up-sample the audio frames to keep pace with the desired stimulation frame rate. For example, if a singular envelope and phase value is determined for each channel based on each audio frame, stimulation frames may repeat the singular envelope and phase value for that channel before being refreshed with a new envelope and phase value associated with the next audio frame. In contrast, methods and systems described herein may allow for the temporal resolution of cochlear implant stimulation to be increased by using down sampled band-limited analytic signals (e.g., band-limited Hilbert analytic signals generated using inverse Fourier transforms and set up to overlap low and/or mid-frequency ranges, etc.) to resample the original audio signal at a rate closer to the stimulation frame rate. For example, analytic signals may help retain temporal information by producing more than one envelope and phase value per audio frame, such that the up-sampling described above ceases to be necessary. By using the Hilbert analytic signal to generate the fine structure for one or more input spectrum intervals (e.g., input signals associated with one or more frequency channels), several pairs of envelope and phase values may be computed (e.g., determined, estimated, etc.) per audio frame. The rate of these envelope/phase value pairs (e.g., the number of value pairs per frame times the audio frame rate) may form what will be referred to herein as an “effective audio frame rate.”
[0019] By using band-limited analytic signals to produce additional value pairs for each audio frame, the effective audio frame rate may be made to be greater than the stimulation frame rate, such that down-sampling, rather than the up-sampling described above, will be needed to meet the desired stimulation frame rate. Accordingly, for a given stimulation frame rate, the temporal resolution may thereby be improved compared to a system that calculates one signal value per audio frame. With this increased resolution, cochlear implant systems may be enabled to simulate captured audio signals with more fidelity and to otherwise provide stimulation that is improved overall. These improvements may benefit producers and recipients of cochlear implant systems implementing these principles in various ways that will be described and made apparent herein.
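A short numeric sketch of these relationships, using the example rates given above (the three value pairs per audio frame is an assumed figure):

```python
audio_frame_rate_hz = 22_050 / 32   # ~689 Hz, from the earlier example
stimulation_frame_rate_hz = 1_856   # example recipient setting given above

# With one value pair per audio frame, values must be repeated (up-sampled):
print(stimulation_frame_rate_hz / audio_frame_rate_hz)   # ~2.7 frames per pair

# With, say, three envelope/phase value pairs per audio frame (an assumption),
# the effective audio frame rate exceeds the stimulation frame rate, so the
# value pairs are down-sampled instead:
pairs_per_frame = 3
effective_audio_frame_rate_hz = pairs_per_frame * audio_frame_rate_hz
print(effective_audio_frame_rate_hz)   # ~2067 Hz > 1856 Hz
```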
[0020] Various specific embodiments will now be described in detail with reference to the figures. It will be understood that the specific embodiments described below are provided as non-limiting examples of how various novel and inventive principles may be applied in various situations. Additionally, it will be understood that other examples not explicitly described herein may also be captured by the scope of the claims set forth below. Systems, methods, and interfaces described herein for increasing a temporal resolution of cochlear implant stimulation based on an analytic signal may provide any of the benefits mentioned above, as well as various additional and/or alternative benefits that will be described and/or made apparent below.
[0021] FIG. 1 shows an example sound processing device 100 and an illustrative method 102 for increasing a temporal resolution of cochlear implant stimulation based on an analytic signal that sound processing device 100 may perform in accordance with principles described herein. As shown, method 102 (also referred to as a process 102) may include a plurality of operations 104 (e.g., operations 104-1 through 104-5), each of which may be performed, in full or in part, by sound processing device 100 or components thereof, as will be described in more detail below.
[0021] While method 102 shows illustrative operations according to one implementation, other implementations may omit, add to, reorder, and/or modify any of operations 104 shown in method 102. In some examples, multiple operations 104 may be performed concurrently (e.g., in parallel) with one another, rather than being performed sequentially as illustrated and/or described. Additionally, part or all of method 102 may, at least in certain circumstances, be performed in real time so as to provide, receive, process, and/or use data described herein immediately as the data is generated, updated, changed, exchanged, or otherwise becomes available. In such examples, operations described herein may involve real-time data, real-time representations, real-time conditions, and/or other real-time circumstances. As used herein, “real time” will be understood to relate to data processing and/or other actions that are performed immediately, as well as conditions and/or circumstances that are accounted for as they exist in the moment when the processing or other actions are performed. For example, a real-time operation may refer to an operation that is performed immediately and without undue delay, even if it is not possible for there to be absolutely zero delay. Similarly, real-time data, real-time representations, real-time conditions, and so forth, will be understood to refer to data, representations, and conditions that relate to a present moment in time or a moment in time when decisions are being made and operations are being performed (e.g., even if after a short delay), such that the data, representations, conditions, and so forth are temporally relevant to the decisions being made and/or the operations being performed.
[0022] One or more of operations 104 shown in FIG. 1 may be performed by data processing resources, user interface resources, communication resources, and/or other suitable computing resources of sound processing device 100, which, as will be described and illustrated in more detail below, may be communicatively coupled to a cochlear implant in a cochlear implant system that includes sound processing device 100, the cochlear implant, and other components. Each of operations 104 of method 102 will now be described in more detail.
[0023] At operation 104-1, sound processing device 100 may obtain an audio signal that is represented in a time domain and that comprises a series of audio frames. For example, as will be described in more detail below, the obtained audio signal may be based on an acoustic signal that is captured by a microphone of the cochlear implant system (e.g., a microphone included within or communicatively coupled to sound processing device 100). After the acoustic signal is captured (e.g., as an analog signal), the audio signal may be generated by digitizing the acoustic signal and applying certain effects (e.g., automatic gain control (AGC), noise reduction, etc.) to it. Sound processing device 100 may therefore obtain this audio signal either by generating the signal itself, by receiving the signal after it has been generated by another part of the system, or by some combination of these (e.g., receiving the signal at some stage of the process and applying certain effects to finish preparing the audio signal for the processing of operations 104-2 through 104-5). In other examples, rather than being generated by a microphone of the cochlear implant system, the audio signal may be based on an electrical signal provided to the cochlear implant system (e.g., an audio signal associated with a music file or other recording, a transmission of a sound that is captured remotely from the recipient, etc.).
[0024] As mentioned above, each audio frame in the series of audio frames may incorporate a certain number of samples from the audio signal. For example, as set forth in the example mentioned above, the audio signal may include approximately 22,050 samples per second (i.e., an audio sampling frequency of 22.05 kHz) and, after every 32-sample update, the samples may be processed together as an audio frame. In this example, the audio frame rate would therefore be approximately 690 Hz (i.e., approximately 690 audio frames per second). It will be understood that these values are provided only by way of illustration and that any sample rate and audio frame rate may be used as may serve a particular implementation.
[0025] At operation 104-2, sound processing device 100 may generate spectral values for a set of input spectrum signals based on the audio signal. While the audio signal, as mentioned above, may be represented in the time domain (i.e., representing the audio signal as a function of time), each input spectrum signal in the set of input spectrum signals generated at operation 104-2 may be represented in the frequency domain (i.e., representing each audio frame as a function of how much energy is associated with each frequency band in a set of frequency bands). For example, if a particular cochlear implant system example uses 15 different channels, the total spectrum of frequencies of interest (e.g., audible frequencies, etc.) may be divided (e.g., logarithmically, linearly, etc.) into 15 frequency ranges referred to as “channels” (or “frequency channels”). As such, the set of input spectrum signals generated at operation 104-2 may correspond to a set of channels.
[0026] This set of input spectrum signals may include, for example, a particular input spectrum signal with spectral values corresponding to a particular channel of the set of channels (e.g., a frequency domain signal associated with a particular frequency band in the overall spectrum of audible frequencies). As will be described in more detail below, the generating of the set of input spectrum signals at operation 104-2 may be performed using a Fourier transform (e.g., a short-time Fourier transform (STFT), a fast Fourier transform (FFT), etc.), or in any other manner as may serve a particular implementation. In some examples, generating the spectral values at operation 104-2 may include generating the spectrum of windowed input audio and zeroing out all the negative frequencies.
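As a sketch of the operation just described, the following assumes a real-valued windowed frame and an assumed Hann window; it produces the spectrum with the negative-frequency bins zeroed out:

```python
import numpy as np

def input_spectrum(frame):
    """Sketch: spectrum of one windowed audio frame with negative
    frequencies zeroed out. The Hann window is an assumed choice.
    """
    n = len(frame)
    spectrum = np.fft.fft(frame * np.hanning(n))
    spectrum[n // 2 + 1:] = 0.0   # zero all negative-frequency bins
    return spectrum
```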
[0027] At operation 104-3, sound processing device 100 may determine an analytic signal associated with one or more of the set of channels (e.g., associated with the particular channel mentioned above in relation to operation 104-2). In some examples, the analytic signal may represent positive frequency regions. For example, operation 104-3 may involve determining band-limited analytic signals associated with the low and mid positive frequency ranges. These analytic signals may be generated based on the particular input spectrum signal in any suitable way. For example, sound processing device 100 may spectrally shift the mid frequency ranges so they approximately overlap the low frequency range, and may down-sample both the low and mid analytic signals so they have a common sample rate. Additionally, if the channel signals are calculated using an STFT, operation 104-3 may be performed by generating several (e.g., two, three, four, etc.) band-limited analytic signals (e.g., Hilbert analytic signals) to help retain timing information. These analytic signals may be configured to overlap low- and/or mid-frequency ranges by being generated using an inverse Fourier transform for the bins that cover these frequency ranges. In some examples, the band-limited analytic signals may be spectrally shifted down and down-sampled. An overlap-and-add technique (e.g., WOLA) may then be used to reconstruct the windowed signals.
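The overlap-and-add reconstruction mentioned at the end of this paragraph can be sketched as follows; the hop size is an assumed parameter, and synthesis-window normalization (part of a full WOLA implementation) is omitted for brevity:

```python
import numpy as np

def overlap_add(frames, hop):
    """Sketch: reconstruct a continuous signal from windowed frames by
    overlap-and-add, with each frame advanced by `hop` samples.
    """
    frame_len = len(frames[0])
    out = np.zeros(hop * (len(frames) - 1) + frame_len, dtype=complex)
    for k, frame in enumerate(frames):
        out[k * hop : k * hop + frame_len] += frame
    return out
```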
[0028] At operation 104-4, sound processing device 100 may generate one or more envelope signals and fine structure signals (e.g., instantaneous frequencies) based on the analytic signal determined at operation 104-3 (e.g., by filtering the analytic signal with a set of band-pass filters that define each of the channels' frequency intervals). These band-pass filters' cutoff frequencies are adjusted for the analytic signals that have been spectrally shifted. Envelope and fine structure signals may be produced for each of the set of channels using one or more analytic signals (as well as other processing tools such as band-pass filters associated with each of the channels). For example, using the analytic signal determined at operation 104-3 and a band-pass filter tuned to the particular channel, sound processing device 100 may generate an envelope signal and a fine structure signal for the particular channel. As part of operation 104-4, sound processing device 100 may specify a set of logarithmically-spaced band-pass filters, shifting their cutoff frequencies if they overlap one of the mid frequency ranges. Moreover, the sound processing device 100 may filter the analytic signals to create logarithmically spaced channel signals and may determine the envelopes and instantaneous frequencies for the mid and low frequency channels and the phase for the low frequency channels.
[0029] Envelope and fine structure signals generated at operation 104-4 may be represented in the time domain, similar to the audio signal obtained at operation 104-1. However, whereas the audio signal may correspond to the entire audio spectrum (rather than a single channel or subset of channels), the envelope and fine structure signals generated at operation 104-4 may, like the set of input spectrum signals from which the analytic signals derive, be individually associated with particular channels of the set of channels. For example, an envelope signal for the particular channel generated at operation 104-4 may consist of envelope (e.g., amplitude) values of the input spectrum signal for the particular channel, while a fine structure signal for that particular channel generated at operation 104-4 may consist of phase (e.g., frequency) values of the input spectrum signal for that particular channel.
[0030] Whereas conventional techniques may involve determining, for a given audio frame, a single envelope value and a single phase value for each channel, the analytic signals determined at operation 104-3 may enable the envelope and fine structure signals generated at operation 104-4 to be generated with higher temporal resolutions than would otherwise be possible with conventional techniques. Specifically, for each audio frame of the series of audio frames, the envelope signal generated at operation 104-4 may include more than one envelope value, while the fine structure signal generated at operation 104-4 may include more than one phase value. If limitless processing power were available to the sound processing device, envelope and phase value pairs could theoretically be generated for every sample of every audio frame on a one-to-one basis (e.g., providing 32 envelope/phase value pairs per audio frame for examples described above to include 32 samples per audio frame). However, this level of resolution may be overkill when compared with the stimulation frame rate that may be in use for a given implementation. Additionally, it will be understood that real-world sound processing devices do not, of course, have access to limitless processing power. Accordingly, the number of envelope/phase value pairs generated per audio frame for each channel may be customizable based on the available processing power, the target stimulation frame rate, and/or other such factors. In some examples, a sufficient number of value pairs may be generated per audio frame to make the effective audio frame rate (i.e., the audio frame rate multiplied by the number of value pairs per frame) greater than the stimulation frame rate.
[0031] At operation 104-5, sound processing device 100 may transmit a series of stimulation frames to the cochlear implant. For example, the stimulation frames may be generated based on the envelope and fine structure signals generated at operation 104-4. As mentioned above, while a transmittal of stimulation frames by a conventional sound processing device (i.e., one not configured to increase the temporal resolution based on analytic signals) may require up-sampling of the envelope and phase values of the envelope and fine structure signals (i.e., reusing each value pair more than once in consecutive stimulation frames while the next audio frame is being processed), the increased resolution enabled by the envelope and fine structure signals generated at operation 104-4 may reduce or eliminate this undesirable practice. Rather, since the envelope signal includes more than one envelope value per audio frame and the fine structure signal includes more than one phase value per audio frame, each stimulation frame transmitted in the series may include a unique and updated envelope/phase value pair to thereby increase the resolution and enhance the quality perceived by the recipient of the cochlear implant.
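One simple way to realize the down-sampling described in this paragraph is nearest-in-time selection of a value pair for each stimulation frame; the selection rule is an assumption, and any suitable resampling could be used instead:

```python
import numpy as np

def pairs_for_stim_frames(env, phase, effective_rate_hz, stim_rate_hz):
    """Sketch: pick one (envelope, phase) pair per stimulation frame when
    the effective audio frame rate exceeds the stimulation frame rate.

    env and phase are numpy arrays sampled at the effective audio frame rate.
    """
    n_stim = int(len(env) * stim_rate_hz / effective_rate_hz)
    idx = np.round(np.arange(n_stim) * effective_rate_hz / stim_rate_hz)
    idx = idx.astype(int).clip(0, len(env) - 1)
    return env[idx], phase[idx]
```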
[0032] FIG. 2 shows an illustrative system 200 (e.g., a computing system such as a sound processing device or other such device) configured to increase temporal resolution of stimulation based on an analytic signal in accordance with principles described herein. In this example, system 200 is shown to include a memory 202 storing instructions 204, as well as one or more processors 206 communicatively coupled to memory 202 and configured to execute instructions 204 to perform process 102. Specifically, a processor 206 may access memory 202 and load instructions 204 that cause the processor to perform operations 104 of process 102 (similar or identical to operations 104 described above in relation to FIG. 1).
[0033] System 200 may be implemented by computer resources such as processors, memory facilities, storage facilities, communication interfaces, and so forth, implemented on one or more computing devices described herein. In some examples, system 200 (or components thereof) may be implemented by sound processing devices such as behind-the-ear (BTE) sound processors, body worn sound processors, active headpieces worn on the head, implanted sound processors, computing devices communicatively coupled to such sound processors (e.g., mobile devices or other personal computing devices that physically or wirelessly connect to cochlear implant system components such as sound processors, etc.), by some combination of these, or by other suitable computing systems as may serve a particular implementation.
[0034] In the generalized representation of system 200 shown in FIG. 2, processor 206 (which will be understood to represent one or more processors) and memory 202 may be selectively and communicatively coupled to one another and/or to other resources (e.g., networking and communication interfaces, etc.). In certain embodiments, memory facilities represented by memory 202 and processors represented by processor 206 may be distributed between multiple computing systems and/or multiple locations as may serve a particular implementation.
[0035] One or more memory facilities represented by memory 202 may store and/or otherwise maintain executable data used by one or more processors represented by processor 206 to perform any of the functionality described herein. For example, as shown, memory 202 may store instructions 204 that may be executed by processor 206. Memory 202 may represent (e.g., may be implemented by) one or more memory or storage devices, including any memory or storage devices described herein, that are configured to store data in a transitory or non-transitory manner. Instructions 204 may be executed by processor 206 to cause system 200 to perform any of the functionality described herein. Instructions 204 may be implemented by any suitable application, software, script, code, and/or other executable data instance. Additionally, memory 202 may also maintain any other data accessed, managed, used, and/or transmitted by processor 206 in a particular implementation.
[0036] Processor 206 may represent (e.g., may be implemented by) one or more computer processing devices, including general-purpose processors (e.g., central processing units (CPUs), graphics processing units (GPUs), microprocessors, etc.), special-purpose processors (e.g., application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), etc.), or the like. Using processor 206 (e.g., when the processor is directed to perform operations represented by instructions 204 stored in memory 202), system 200 may perform functions associated with increasing a temporal resolution of cochlear implant stimulation based on an analytic signal in accordance with methods and systems described herein and/or as may serve a particular implementation.
[0037] As one example of functionality that processor 206 may perform, FIG. 2 shows process 102 for increasing a temporal resolution of cochlear implant stimulation based on an analytic signal. Process 102 is shown to include the same operations 104-1 through 104-5 described above in relation to FIG. 1, and it will be understood that these operations may be performed in the same or similar ways by processor 206 as described above in relation to sound processing device 100. For example, as shown, operation 104-1 may be performed by obtaining an audio signal that is represented in a time domain and that comprises a series of audio frames; operation 104-2 may be performed by generating (e.g., based on the audio signal obtained at operation 104-1) a set of input spectrum signals in a frequency domain (where the set of input spectrum signals corresponds to a set of channels and includes a particular input spectrum signal corresponding to a particular channel of the set of channels); operation 104-3 may be performed by determining (e.g., based on the particular input spectrum signal) an analytic signal associated with the particular channel; operation 104-4 may be performed by generating, based on the analytic signal, an envelope signal and a fine structure signal for the particular channel (wherein, for each audio frame of the series of audio frames, the envelope signal includes more than one envelope value and the fine structure signal includes more than one phase value); and operation 104-5 may be performed by transmitting a series of stimulation frames generated based on the envelope and fine structure signals.
[0038] FIG. 3 shows certain elements of an illustrative cochlear implant system 300 configured to increase a temporal resolution of cochlear implant stimulation based on an analytic signal in accordance with principles described herein. As shown, cochlear implant system 300 may include: 1) a microphone 302 that may be configured to capture an acoustic signal; 2) a cochlear implant 304 that may be configured to stimulate a recipient in which cochlear implant 304 is implanted; and 3) an implementation of sound processing device 100 that may be communicatively coupled to microphone 302 and to cochlear implant 304. As described above, sound processing device 100 may be configured to perform a method or process such as process 102. [0039] To that end, FIG. 3 shows that sound processing device 100 may perform process 102, which was described in relation to FIGS. 1 and 2. More particularly, as shown, this implementation of sound processing device 100 may obtain, at operation 104-1, an audio signal that is based on the acoustic signal captured by microphone 302, that is represented in a time domain, and that comprises a series of audio frames. At operation 104-2, the sound processing device 100 may generate, based on the audio signal, a set of input spectrum signals in a frequency domain, the set of input spectrum signals corresponding to a set of channels and including a particular input spectrum signal corresponding to a particular channel of the set of channels. At operation 104-3, the sound processing device 100 may determine an analytic signal based on the particular input spectrum signal. At operation 104-4, the sound processing device 100 may generate, based on the analytic signal, an envelope signal and a fine structure signal for the particular channel. For example, as described above, the envelope and fine structure signals may be configured such that, for each audio frame of the series of audio frames, the envelope signal includes more than one envelope value and the fine structure signal includes more than one phase value. At operation 104-5, the sound processing device 100 may transmit, to cochlear implant 304, a series of stimulation frames generated based on the envelope and fine structure signals.
[0040] FIG. 4 shows a more detailed implementation 400 of a cochlear implant system such as the cochlear implant system 300 described above in relation to FIG. 3. Implementation 400 (also referred to as cochlear implant system 400) illustrates a cochlear implant system in which methods (e.g., method 102) and systems (e.g., system 200) described herein may be embodied in accordance with principles described herein.
[0041] Cochlear implant system 400 may be configured to be used by a recipient. As shown, cochlear implant system 400 receives audio input (e.g., by way of an audio source implementing microphone 302 or another suitable source) and includes a sound processor 402 (e.g., implementing sound processing device 100), a headpiece 404, a cochlear implant 406 (e.g., implementing cochlear implant 304), and an electrode lead 408 physically coupled to cochlear implant 406 and having an array of electrodes 410. In some examples, cochlear implant systems such as implementation 400 may include more or fewer components than those explicitly shown in FIG. 4.
[0042] Cochlear implant system 400 shown in FIG. 4 is unilateral (i.e., associated with only one ear of the recipient). Alternatively, a bilateral configuration of cochlear implant system 400 may include separate cochlear implants and electrode leads for each ear of the recipient. In the bilateral configuration, sound processor 402 may be implemented by a single sound processing device configured to interface with both cochlear implants or by two separate sound processing devices each configured to interface with a different one of the cochlear implants. [0043] Cochlear implant 406 may be implemented by any suitable type of implantable stimulator configured to apply electrical stimulation to one or more stimulation sites located along an auditory pathway of the recipient. In some examples, cochlear implant 406 may additionally or alternatively apply nonelectrical stimulation (e.g., mechanical and/or optical stimulation) to the auditory pathway of the recipient. In some examples, cochlear implant 406 may be configured to generate electrical stimulation representative of an audio signal received as part of the audio input (captured by microphone 302) and/or processed by sound processor 402 in accordance with one or more stimulation parameters transmitted to cochlear implant 406 by sound processor 402. Cochlear implant 406 may be further configured to apply the electrical stimulation to one or more stimulation sites (e.g., one or more intracochlear locations) within the recipient by way of one or more electrodes 410 on electrode lead 408. In some examples, cochlear implant 406 may include a plurality of independent current sources each associated with a channel defined by one or more of electrodes 410. In this manner, different stimulation current levels may be applied to multiple stimulation sites simultaneously by way of multiple electrodes 410.
[0044] Cochlear implant 406 may additionally or alternatively be configured to generate, store, and/or transmit data. For example, cochlear implant 406 may use one or more electrodes 410 to record one or more signals (e.g., one or more voltages, impedances, evoked responses within the recipient, and/or other measurements) and transmit, by way of a back telemetry communication link, data representative of the one or more signals to sound processor 402. In some examples, this data is referred to as back telemetry data.
[0045] Electrode lead 408 may be implemented in any suitable manner. For example, a distal portion of electrode lead 408 may be pre-curved such that electrode lead 408 conforms with the helical shape of the cochlea after being implanted. Electrode lead 408 may alternatively be naturally straight or of any other suitable configuration.
[0046] In some examples, electrode lead 408 includes a plurality of wires (e.g., within an outer sheath) that conductively couple electrodes 410 to one or more current sources within cochlear implant 406. For example, if there are n electrodes 410 on electrode lead 408 and n current sources within cochlear implant 406, there may be n separate wires within electrode lead 408 that are configured to conductively connect each electrode 410 to a different one of the n current sources. Exemplary values for n are 8, 12, 16, or any other suitable number. [0047] Electrodes 410 are located on at least a distal portion of electrode lead 408. In this configuration, after the distal portion of electrode lead 408 is inserted into the cochlea, electrical stimulation may be applied by way of one or more of electrodes 410 to one or more intracochlear locations. One or more other electrodes (e.g., including a ground electrode, not explicitly shown) may also be disposed on other parts of electrode lead 408 (e.g., on a proximal portion of electrode lead 408) to, for example, provide a current return path for stimulation current applied by electrodes 410 and to remain external to the cochlea after the distal portion of electrode lead 408 is inserted into the cochlea. Additionally or alternatively, a housing of cochlear implant 406 may serve as a ground electrode for stimulation current applied by electrodes 410.
[0048] Sound processor 402 may be configured to interface with (e.g., control and/or receive data from) cochlear implant 406. For example, sound processor 402 may transmit commands (e.g., stimulation parameters and/or other types of operating parameters in the form of data words included in a forward telemetry sequence) to cochlear implant 406 by way of a forward telemetry communication link. Sound processor 402 may additionally or alternatively provide operating power to cochlear implant 406 by transmitting one or more power signals to cochlear implant 406 by way of the communication link. Sound processor 402 may additionally or alternatively receive back telemetry data from cochlear implant 406 by way of the communication link. The communication link may be implemented by any suitable number of wired and/or wireless bidirectional and/or unidirectional links.
[0049] Sound processor 402 may represent an implementation of any of the sound processing devices or other systems described herein (e.g., system 200). As such, sound processor 402 may include a memory (e.g., similar to memory 202), one or more processors (e.g., similar to processor 206), and access to instructions that may cause the processors to perform methods and processes described herein (e.g., method 102). [0050] The audio input shown to be received by sound processor 402 may, as shown, implement microphone 302 described above. In the same or other examples, this audio input may be associated with an audio signal associated with a wireless interface (e.g., a Bluetooth interface), and/or a wired interface (e.g., an auxiliary input port). Sound processor 402 may process this audio input in accordance with a sound processing program (e.g., a sound processing program stored in the memory of sound processor 402) to generate appropriate stimulation parameters. Sound processor 402 may then transmit the stimulation parameters (e.g., in a series of stimulation frames such as will be described in more detail below) to cochlear implant 406 to direct cochlear implant 406 to apply electrical stimulation representative of the audio signal to the recipient.
[0051] In some implementations, sound processor 402 may also be configured to apply acoustic stimulation to the recipient. For example, a receiver (also referred to as a loudspeaker) may be optionally coupled to sound processor 402. In this configuration, sound processor 402 may deliver acoustic stimulation to the recipient by way of the receiver. The acoustic stimulation may be representative of an audio signal (e.g., an amplified version of the audio signal), configured to elicit an evoked response within the recipient, and/or otherwise configured. In configurations in which sound processor 402 is configured to both deliver acoustic stimulation to the recipient and direct cochlear implant 406 to apply electrical stimulation to the recipient, cochlear implant system 400 may be referred to as a bimodal hearing system and/or any other suitable term.
[0052] Sound processor 402 may be additionally or alternatively configured to receive and process data generated by cochlear implant 406. For example, sound processor 402 may receive data representative of a signal recorded by cochlear implant 406 using one or more electrodes 410 and, based on the data, adjust one or more operating parameters of sound processor 402. Additionally or alternatively, sound processor 402 may use the data to perform one or more diagnostic operations with respect to cochlear implant 406 and/or the recipient
[0053] Other operations may be performed by processors included in sound processor 402 as may serve a particular implementation. In the description provided herein, any references to operations performed by sound processor 402 and/or any implementation thereof may be understood to be performed by the one or more processors included therein, based on instructions stored in memory (e.g., instructions 204 stored in memory 202).
[0054] In FIG. 4, sound processor 402 is communicatively coupled to one or more audio inputs (e.g., including an implementation of microphone 302) and to the headpiece 404. As with the sound processor 402 in this example, FIG. 4 indicates that this audio input and the headpiece 404 may both be located external to the recipient (i.e., to the left of the layer of “SKIN”), while cochlear implant 406 and electrode lead 408 (with its electrodes 410) are implanted within the recipient (i.e., to the right of the layer of “SKIN”).
[0055] Sound processor 402 may be implemented by any suitable device that may be worn or carried by the recipient. For example, sound processor 402 may be implemented by a behind-the-ear (BTE) unit configured to be worn behind and/or on top of an ear of the recipient. Additionally or alternatively, sound processor 402 may be implemented by an off-the-ear unit (also referred to as a body worn device) configured to be worn or carried by the recipient away from the ear. Additionally or alternatively, at least a portion of sound processor 402 is implemented by circuitry within headpiece 404.
[0056] The audio input received by sound processor 402 may be configured to detect one or more audio signals (e.g., that include speech and/or any other type of sound) in an environment of the recipient. This audio input may be implemented in any suitable manner. For example, audio input may be implemented by a microphone (e.g., an implementation of microphone 302) that is configured to be placed within the concha of the ear near the entrance to the ear canal, such as a T-MIC™ microphone from Advanced Bionics. Such a microphone may be held within the concha of the ear near the entrance of the ear canal during normal operation by a boom or stalk that is attached to an ear hook configured to be selectively attached to sound processor 402. Additionally or alternatively, one or more microphones in or on headpiece 404, one or more microphones in or on a housing of sound processor 402, one or more beamforming microphones, auxiliary audio inputs (e.g., from wired or wired interfaces, etc.), and/or any other suitable audio sources as may serve a particular implementation may be used for audio input.
[0057] Headpiece 404 may be selectively and communicatively coupled to sound processor 402 by way of a communication link (e.g., a cable or any other suitable wired or wireless communication link), which may be implemented in any suitable manner. Headpiece 404 may include an external antenna (e.g., a coil and/or one or more wireless communication components) configured to facilitate selective wireless coupling of sound processor 402 to cochlear implant 406. Headpiece 404 may additionally or alternatively be used to selectively and wirelessly couple any other external device to cochlear implant 406. To this end, headpiece 404 may be configured to be affixed to the recipient’s head and positioned such that the external antenna housed within headpiece 404 is communicatively coupled to a corresponding implantable antenna (which may also be implemented by a coil and/or one or more wireless communication components) included within or otherwise connected to cochlear implant 406. In this manner, stimulation parameters and/or power signals may be wirelessly and transcutaneously transmitted between sound processor 402 and cochlear implant 406 by way of a wireless communication link. [0058] In cochlear implant system 400, sound processor 402 may receive an audio signal detected by a microphone (e.g., microphone 302) by receiving an electrical audio signal representative of an acoustic signal captured by the microphone. Sound processor 402 may additionally or alternatively receive the audio signal by way of any other suitable interface as described herein. Sound processor 402 may process the audio signal in any of the ways described herein and transmit, by way of headpiece 404, stimulation parameters (e.g., in a series of stimulation frames, as will be described) to cochlear implant 406 to direct cochlear implant 406 to apply electrical stimulation representative of the audio signal to the recipient.
[0059] In an alternative configuration, sound processor 402 may be implanted within the recipient instead of being located external to the recipient. In this alternative configuration, which may be referred to as a fully implantable implementation of cochlear implant system 300, sound processor 402 and cochlear implant 406 would be combined into a single device or implemented as separate devices configured to communicate one with another by way of a wired and/or wireless communication link. In a fully implantable implementation of cochlear implant system 300, headpiece 404 might not be included and one or more microphones used by the system may be implanted within the recipient, located within an ear canal of the recipient, and/or external to the recipient.
[0060] FIG. 5 shows example facilities and signals that may be implemented in an illustrative architecture 500 of an implementation of sound processing device 100 in accordance with principles described herein. As shown, an audio input signal 502 may be received from an input source (“From Audio Input Source”) to an input signal processing facility 504. Input signal processing facility 504 may produce an audio signal 506 that is then used by an input spectrum signal generation facility 508 to produce a set of input spectrum signals 510, some of which (input spectrum signals 510-1 ) may be provided to an envelope/frequency estimation facility 512 and others of which (input spectrum signals 510-2) may be provided to an analytic signal processing facility 514. Facilities 512 and 514 may generate both respective envelope signals 516 (e.g., envelope signals 516-1 generated by envelope/frequency estimation facility 512 and envelope signals 516-2 generated by analytic signal processing facility 514) and respective fine structure signals 518 (e.g., fine structure signals 518-1 generated by envelope/frequency estimation facility 512 and fine structure signals 518-2 generated by analytic signal processing facility 514). All of these envelope signals 516 and fine structure signals 518 may then be processed by an envelope/phase signal processing facility 520 to produce a stimulation signal 522. Stimulation signal 522 may then be used by a forward telemetry transmission facility 524 to generate a forward telemetry (FTEL) signal 526 that, as shown, may be provided to a cochlear implant by way of a headpiece (“To Cl (by way of HP)”) or provided in other suitable ways as have been described or as may serve a particular implementation.
[0061] Additional detail with respect to each signal and facility illustrated in FIG. 5 will now be described with reference both to FIG. 5 and to FIGS. 6-10. As shown, each signal shown in FIG. 5 is connected by a dotted line to a name and various characteristics of the signal that will be described. Several of the facilities shown in FIG. 5 are labeled with a dashed-line box indicating an additional figure that will be referenced as the facility is described. Specifically, for example, input signal processing facility 504 will be further described with reference to FIG. 6 (“See FIG. 6”), input spectrum signal generation facility 508 will be further described with reference to FIG. 7 (“See FIG. 7”), and so forth.
[0062] As has already been mentioned, the detailed description of FIGS. 5-10 will help further illustrate how a temporal resolution of cochlear implant stimulation may be increased by using one or more analytic signals in accordance with principles described herein. For example, as will be described in more detail: 1) a series of audio frames comprised in audio signal 506 may be associated with an audio frame rate; 2) envelope values included for each audio frame on envelope signals 516 (as well as phase values included for each audio frame on fine structure signals 518) may be associated with an effective audio frame rate (e.g., a rate equal to the audio frame rate multiplied by a number of envelope/phase value pairs included for each audio frame); and 3) a series of stimulation frames comprised in stimulation signal 522 may be associated with a stimulation frame rate. While this stimulation frame rate may be greater than the audio frame rate (thereby requiring up-sampling and a reuse of envelope/phase information if there were only one envelope/phase value pair per audio frame), this stimulation frame rate may be less than the effective audio frame rate (such that the temporal resolution is increased and no such up-sampling or redundant data usage needs to be employed). [0063] Audio input signal 502 may be received from a suitable audio source (e.g., any of the audio input sources described herein) and may be received in an acoustic form (e.g., sound waves to be captured by a microphone integrated with sound processing device 100), an analog form (e.g., an electrical signal generated by an external microphone and provided to sound processing device 100), or another suitable form (e.g., a digital signal that was captured and digitized externally, etc.). [0064] To illustrate some of these possibilities, the input signal processing facility 504 that receives and processes audio input signal 502 is illustrated FIG. 6. As shown, FIG. 6 depicts example aspects of how an input signal such as audio input signal 502 may be processed by input signal processing facility 504. Specifically, FIG. 6 shows a sound signal 602, which will be understood to consist of acoustic energy (e.g., sound waves traveling through a medium such as the air), going into an implementation of microphone 302 (described above) and converted into a microphone signal 604 (e.g., an analog electrical signal representative of sound signal 602). A dashed line around microphone 302 is used to indicate that microphone 302 may, in certain examples, be integrated into sound processing device 100 (and into input signal processing facility 504 in particular), while, in other examples, microphone 302 may be external to sound processing device 100 and input signal processing facility 504. Similarly, audio input signal 502 is shown to point in the direction of either or both of sound signal 602 and microphone signal 604 to illustrate that either or both of these signals may implement audio input signal 502 in certain embodiments.
[0065] Signal processing facility 504 is shown to receive this analog microphone signal 604 and to include an audio-to-digital conversion (ADC) utility 606 and an automatic gain control (AGC) utility 608. Each of these utilities may include hardware and/or software resources to perform the functions of converting analog signaling to digital samples and to adjust the gain of the signal to a suitable level. In various examples, the processing performed by utilities 606 and 608 may be performed serially (with either the ADC utility or the AGC utility going first). In some examples, AGC utility 608 or another utility not explicitly shown in FIG. 6 may perform other pre-processing such as certain types of noise cancelation or the like. Input signal processing facility 504 may be configured to perform the obtaining of audio signal 506 in any of the ways described above in relation to method 102 (e.g., by receiving audio signal 506, by generating audio signal 506, etc.). For example, using utilities 606 and 608, input signal processing facility 504 may be configured to: 1 ) receive, from a microphone 302 included in the cochlear implant system, an analog audio signal (e.g., microphone signal 604) that is generated by microphone 302 based on an acoustic signal presented to the microphone (e.g., sound signal 602); 2) convert the analog audio signal into a digital audio signal (e.g., an output of ADC utility 606); and 3) apply an automatic gain control to at least one of the analog audio signal (e.g., in the event that AGC utility is performed before the analog-to-digital conversion) or the digital audio signal (e.g., in the event that the AGC utility is performed after the analog-to-digital conversion). [0066] As indicated in FIGS. 5 and/or 6, audio signal 506 is indicated to exhibit characteristics including being digital, being represented in the time domain, being associated with a sample rate Rsampie, and comprising a series of audio frames associated with a frame rate RAudio.
[0067] Returning to FIG. 5, audio signal 506 is shown in architecture 500 to feed into input spectrum signal generation facility 508. Input spectrum signal generation facility 508 may be configured to generate the set of input spectrum signals 510 based on audio signal 506, as has been described in relation to operation 104-2 of method 102. Input spectrum signal generation facility 508 is illustrated in more detail in FIG. 7.
[0068] FIG. 7 shows illustrative aspects of how the set of input spectrum signals 510 may be generated by sound processing device 100 (e.g., by input spectrum signal generation facility 508 in particular) in accordance with principles described herein. As shown, audio signal 506 (which, again, is represented in a similar visualization as described above in relation to FIG. 6) may be received by input spectrum signal generation facility 508 and used to generate the set of input spectrum signals 510. To this end, as shown, audio signal 506 may be processed by a short-time Fourier transform (STFT) 702 to convert the time-domain audio signal into a frequency-domain audio signal having a certain number of bins (e.g., 128 bins for a 128 STFT, 256 bins for a 256 STFT, etc.). More particularly, the generating of the set of input spectrum signals 510 in the frequency domain may include applying STFT 702 to audio signal 506 (e.g., to thereby derive a plurality of frequency bins representative of the audio signal that may be formed into groups associated with each analytic signal or set of channels for the high frequencies).
[0069] Regardless of whether the STFT 702 or another transform approach is employed, the creation of the set of input spectrum signals 510 by input spectrum signal generation facility 508 may serve to separate out different components of the audio signal that ultimately will be assigned to different electrodes on the electrode lead stimulated by the cochlear implant. Accordingly, and as will be made apparent, it is useful for downstream processing to be performed in the frequency domain on this set of input spectrum signals (e.g., rather than on a single time-domain signal).
[0070] To illustrate, respective dotted arrows extending from spectral values of the set of input spectrum signals 510 in FIG. 7 point to different columns in an example visualization of the audio frames associated with each spectral bin. In FIG. 5, the spectral values of the set of input spectrum signals 510 is indicated to exhibit characteristics including being digital, being represented in the frequency domain, and comprising the series of audio frames associated with the frame rate RAudio described above. The visualization in FIG. 7 illustrates this with small circles in a grid configuration each representing the magnitude associated with a particular frequency range (i.e., a particular channel) for a particular audio frame. Each column represents one bin and each circle represents the magnitude of its respective bin for a given audio frame. Accordingly, the first row of circles represents all the bin values of the set of input spectrum signals 510 for a first audio frame of audio signal 506, the second row of circles represents all the bin values of the set of input spectrum signals 510 for a second audio frame of audio signal 506, and so forth. It will be understood that the frame rate RAudio of the audio frames making up input spectrum signals 510 may be the same audio rate described above. For example, the Fourier transform may operate with a window of one audio frame (e.g., 32 samples in one example) to determine the magnitude of each of the frequency ranges associated with each of the bins of interest. [0071] Returning to FIG. 5, input spectrum signals 510 emerging from input spectrum signal generation facility 508 are shown to be split into different groupings labeled as input spectrum signals 510-1 (provided to envelope/frequency estimation facility 512) and input spectrum signals 510-2 (provided to analytic signal processing facility 514). In conventional sound processing devices, the entire set of input spectrum signals 510 may have been processed by functions such as those that will be described in relation to envelope/frequency estimation facility 512. In sound processing devices described herein to increase a temporal resolution of cochlear implant stimulation based on an analytic signal, however, some or all of the set of input spectrum signals 510 may instead be processed by functions associated with analytic signal processing facility 514. Accordingly, as long as at least one input spectrum signal 510 is processed through analytic signal processing facility 514, sound processing device 100 will be considered to have increased the temporal resolution of cochlear implant stimulation in accordance with principles described herein regardless of how many of the rest of the input spectrum signals 510 may be processed through envelope/frequency estimation facility 512. The processing in envelope/frequency estimation facility 512 and analytic signal processing facility 514 are shown to be performed in parallel and to each generate similar outputs (e.g., envelope signals 516 and fine structure signals 518). It will be understood, however, that the envelope and fine structures produced by envelope/frequency estimation facility 512 (i.e., envelope signals 516-1 and fine structure signals 518-1) may be distinct from those produced by analytic signal processing facility 514 (i.e., envelope signals 516-2 and fine structure signals 518-2) in that these latter signals may be imbued with more than one value per audio frame, rather than the singular values of signals 516-1 and 518-1.
[0072] FIG. 8 shows illustrative aspects of how envelope values (i.e. , energy values) and instantaneous frequency values (i.e., phase values) may be generated by sound processing device 100 (e.g., by envelope/frequency estimation facility 512 in particular) in accordance with principles described herein. As shown, input spectrum signals 510-1 (a subset from the set of input spectrum signals 510) may each be associated with a different channel and may include frequency-domain channel values for each audio frame. The subset of input spectrum signals 510-1 may be received by envelope/frequency estimation facility 512 and used to generate the subset of envelope signals 516-1 and the subset of fine structure signals 518-1. More particularly, an envelope analysis 802 may be performed on each input spectrum signal 510-1 to estimate the energy of each channel for each audio frame (which will be used to generate amplitude words for stimulating implanted electrodes), while an instantaneous frequency analysis 804 may be performed on each input spectrum signal 510-2 to estimate the instantaneous frequency of each channel for each audio frame.
[0073] As with other signal visualizations described above, FIG. 8 includes signal visualizations for each of the subset of envelope signals 516-1 and the subset of envelope signals 518-1 that may take this path through envelope/frequency estimation facility 512. Specifically, as shown in FIG. 8, respective dotted arrows extending from the subset of envelope signals 516-1 in FIG. 8 point to different columns in an example visualization of the envelope values associated with each audio frame of each channel in the set of channels. A “1” in each of the squares in the visualization indicates that a singular (1 ) envelope value per channel may be estimated for each audio frame. In FIG. 5, the set of envelope signals 516 is indicated to exhibit characteristics including being digital, being associated with frame rate RAudio (the same frame rate that the set of input spectrum signals 510 is associated with), and further being associated with an effective frame rate Rvalues that indicates the rate at which envelope values are being produced for each channel. Since each of the frames represented by envelope signals 516-1 include only the singular envelope value (indicated by the “1” labels), the effective frame rate Rvalues may be, in this case, the same as the frame rate RAudio, which, as has been mentioned, may be a lower rate than the stimulation frame rate that will be used and may therefore require reuse of certain envelope information (repeating envelope values on subsequent stimulation frames). [0074] Similarly, respective dotted arrows extending from the subset of fine structure signals 518-1 in FIG. 8 point to different columns in an example visualization of the instantaneous frequency values associated with each audio frame of each channel in the set of channels. Again, a “1” in each of the squares in this visualization indicates that a singular (1 ) frequency value per channel may be estimated for each audio frame. As with envelope signals 516, FIG. 5 indicates that the set of fine structure signals 518 each exhibit characteristics including being digital, being associated with frame rate RAudio (the same frame rate that the set of input spectrum signals 510 is associated with), and further being associated with an effective frame rate Rvalues that indicates the rate at which envelope values are being produced for each channel. Since each of the frames represented by fine structure signals 518-2 include only the singular frequency value (indicated by the “1” labels), the effective frame rate Rvalues may again be the same as the frame rate RAudio. For example, one envelope/frequency “value pair” (including a single envelope value and an instantaneous frequency structure value) may be estimated for each audio frame such that the effective frame rate Rvalues may be a lower rate than the stimulation rate that will be used and may therefore require reuse of certain information (repeating not only envelope values but also phase values on subsequent stimulation frames).
[0075] The envelope and instantaneous frequency value estimation performed by envelope/frequency estimation facility 512 contrasts in certain ways with the parallel processing being performed by analytic signal processing facility 514. To illustrate, the envelope signals 516-1 and fine structure signals 518-1 illustrated in FIG. 8 may be compared and contrasted with envelope signals 516-2 and fine structure signals 518-2 illustrated in FIG. 9.
[0076] FIG. 9 shows illustrative aspects of how one or more analytic signals may be used by sound processing device 100 (e.g., by analytic signal processing facility 514 in particular) to augment the resolution of envelope and phase values generated in accordance with principles described herein. As shown, input spectrum signals 510-2 (a subset of the set of input spectrum signals 510) may, again, each be associated with a different channel and may each include frequency-domain channel values for each audio frame. This subset of input spectrum signals 510-2 may be received by analytic signal processing facility 514 and used to generate the subset of envelope signals 516- 2 and the subset of fine structure signals 518-2.
[0077] As shown in the signal visualizations of FIG. 9, these signals 516-2 and 518- 2 are similar to the respective signals 516-1 and 518-1 , but with a notable difference. Whereas each of the envelope signals 516-1 included only a singular envelope value per audio frame (indicated by the “1” labels), each of the envelope signals 516-2 is shown in FIG. 9 to include more than one envelope value per audio frame (e.g., in this example, four envelope values per channel per frame, as indicated by the “4” labels). Similarly, whereas each of the fine structure signals 518-1 included only a singular frequency value per audio frame (indicated by the “1” labels), each of the fine structure signals 518-2 is shown in FIG. 9 to include more than one frequency value per audio frame (e.g., like the envelope values, four phase values per channel per frame in this example, as indicated by the “4” labels). At this point, envelope signals 516-1 may be upsampled by repeating the samples for envelope and frequency so 516-1 and 516-2 remain synchronized (e.g., using a sample and hold technique).
[0078] In contrast with signals 516-1 and 516-2 described above, where the effective frame rate Rvalues was described as being the same as the frame rate RAudio (since each of the frames represented by signals 516-1 and 518-1 included only the singular values), the effective frame rate Rvalues for signals 516-2 and 518-2 is not the same as the frame rate RAudio. Rather, as shown, since there are four value pairs per audio frame generated for each channel of signals 516-2 and 518-2, the effective frame rate Rvalues in this example is four times greater than frame rate RAudio of the audio frames (“Rvalues = 4*RAudio”). As such, even if the original frame rate RAudio was a lower rate than the stimulation rate that will be used, this effective frame rate Rvalues may be greater than the stimulation frame rate such that no reuse of envelope and/or fine structure information is needed (e.g., no envelope/phase value pairs need be repeated for subsequent stimulation frames).
[0079] Within the box representing analytic signal processing facility 514, FIG. 9 illustrates certain aspects of how this additional resolution (i.e. , multiple envelope/phase value pairs per audio frame rather than singular ones) may be achieved. Specifically, FIG. 9 shows several analytic signal generation utilities 902 (e.g., analytic signal generation utilities 902-1 , 902-2 and 902-3 in this example) that may be configured to generate analytic signals 904 (e.g., analytic signals 904-1 , 904-2 and 904-3 in this example) and to run these analytic signals through a network of band-pass filters 906 (e.g., band-pass filters 906-1 through 906-N) associated with the subset of channels (i.e., the channels represented by input spectrum signals 510-2) to ultimately produce envelope signals 516-2 and fine structure signals 518-2.
[0080] Analytic signal generation utilities 902 may generate analytic signals 904 based on input spectrum signals 510-2. For example, using an inverse Fourier transform or other such tools, analytic signals 904 may be generated as complex signals that include both envelope and phase information for one or more of the set of channels. In contrast to the input spectrum signals 510-2, which are in the frequency domain and thus can provide limited temporal information for how each channel is to be stimulated, analytic signals 904 may represent the envelope and phase information (i.e., the envelope and fine structure information) for the various channels in the time domain so that the temporal resolution of stimulation based on these analytic signals is increased. At the limit, these complex signals could be used to determine an envelope value and a phase value for a given channel (or grouping of channels) with respect to each sample of the audio frame that is available. For instance, if each audio frame includes 32 samples and if there were enough processing power, each of these samples, when processed using the appropriate analytic signal 904, could be used to estimate both an envelope value and a phase value associated with that sample (e.g., resulting in 32 value pairs per audio frame in this example). As has been mentioned, unlimited processing power is of course not available to any system and that maximum rate may be overkill anyway for a typical stimulation frame rate (which may send 3 or 4 stimulation frames per audio frame). Accordingly, a given implementation may provide a desirable number of envelope/phase value pairs per audio frame based on the processing power available, the desired stimulation frame rate, and other factors as may serve a particular implementation. For example, the phase is used to recreate the proper timing at which the amplitude words are to stimulate the implanted electrodes for lower-frequency channels.
[0081] In mathematics and signal processing, an analytic signal is a complex-valued function that has no negative frequency components. The real and imaginary parts of an analytic signal are real-valued functions related to each other by the Hilbert transform. As such, analytic signal generation utilities 902 may generate analytic signals 904 with these properties in any suitable way. For example, band limited analytic signals that overlap low and mid frequency ranges (e.g., of channels corresponding to input spectrum signals 510-2) may be generated using an inverse FFT for the bins that cover these positive frequency intervals. Such band limited signals may be spectrally shifted down and down-sampled. An overlap and add (WOI_A) technique may then be performed to reconstruct the windowed signals.
[0082] In FIG. 9, a number of input spectrum signals 510-2 are shown to be taken as input to the analytic signal processing facility 514 and to be used by a number of analytic signal generation utilities 902 to generate respective analytic signals 904. It will be understood that the numbers of elements shown in FIG. 9 are examples only and that any number of input spectrum signals (up to and including all of the input spectrum signals 510), analytic signal generation utilities, and analytic signals may be employed as may serve a particular implementation. As illustrated, for example, multiple different analytic signals may be based on different subsets of input spectrum signals (e.g., three input spectrum signals being used to create one analytic signal, four other input spectrum signals being used to create another analytic signal, etc.). More particularly, along with a first input spectrum signal corresponding to a first channel, the set of input spectrum signals may further include a second input spectrum signal corresponding to a second channel of the set of channels. Based on both the first and second channels, an analytic signal generation utility 902 (e.g., analytic signal generation utility 902-1 ) may determine an analytic signal (e.g., analytic signal 904-1) that is associated with both the first and second channels. For example, input spectrum signals for a channel 1 and a channel 2 could both be included amongst incoming input spectrum signals 510- 2 and could be used by the same analytic signal generation utility 902-1 to generate an analytic signal 904-1 that is associated with both channel 1 and channel 2 (as will be described in more detail below, values associated with these different channels may be derived from this analytic signal 904-1 and then separated out using different bandpass filters 906).
[0083] As further illustrated, different subsets of input spectrum signals may be used by different analytic signal generation utilities 902 to generate multiple analytic signals 904 for the various subsets (e.g., input spectrum signals for channels 1 and 2 associated with one analytic signal, input spectrum signals for channels 3, 4, and 5 associated with another analytic signal, etc.). For example, given a set of channels with a first and a second input spectrum signal (corresponding, respectively, with a first and a second channel), one analytic signal generation utility 902 (e.g., analytic signal generation utility 902-1) may be configured to determine a particular analytic signal (e.g., analytic signal 904-1) based on the first input spectrum signal (e.g., and possibly other input spectrum signals in a subset of input spectrum signals that includes the first input spectrum signal), while another analytic signal generation utility 902 (e.g., analytic signal generation utility 902-2) may be configured to determine an additional analytic signal (e.g., analytic signal 904-2) based on the second input spectrum signal (e.g., and possibly other input spectrum signals in an additional subset of input spectrum signals that includes the second input spectrum signal). [0084] Regardless of how incoming input spectrum signals 510-2 are processed to generate one or more analytic signals 904, analytic signal processing facility 514 may ultimately use these one or more analytic signals 904 to produce individual envelope signals 516 and fine structure signals 518 for each channel. Thus, for example, if the subset of input spectrum signals 510-2 includes input spectrum signals for ten individual channels, there may only be, for instance, three analytic signals 904 determined within analytic signal processing facility 514 to help produce ten envelope signals 516-2 and ten fine structure signals 518-2 that are ultimately derived from those three analytic signals 904 and output by analytic signal processing facility 514. As has been described, for each audio frame of the series of audio frames, each of these envelope signals generated for these channels may include more than one envelope value (e.g., 4 envelope values per audio frame in this example), while each of these fine structure signals generated for these channels may include more than one phase value (e.g., 4 phase values per audio frame in this example).
[0085] FIG. 9 further shows that a different band-pass filter 906 may be associated with each channel in the subset of input spectrum signals 510-2. For instance, in the example where there are ten input spectrum signals 510-2 corresponding to ten different channels being processed on this branch (e.g., rather than on the branch shown in FIG. 8), ten band-pass filters 906-1 through 906-10 may be employed. Each band-pass filter 906 may output an envelope signal 516 and a fine structure signal 518 (with its various envelope and/or phase values as have been described) for the channel with which the band-pass filter 906 corresponds. For example, band-pass filter 906-1 may correspond to channel 1 and, as such, may use analytic signal 904-1 to generate both an envelope signal 516 and a fine structure signal 518 associated with channel 1. Similarly, band-pass filter 906-2 may correspond to channel 2 and, as such, may use analytic signal 904-1 (the same analytic signal used for channel 1 in this example) to generate both an envelope signal 516 and a fine structure signal 518 associated with channel 2. As shown, band-pass filter 906-3 may correspond to channel 3 and use analytic signal 904-2 (a different analytic signal than was used for channels 1 and 2) to generate both an envelope signal 516 and a fine structure signal 518 associated with channel 3. Accordingly, it will be understood that the generating of a given envelope signal 516 for a particular channel (e.g., channel 1 ) based on a particular analytic signal (e.g., analytic signal 904-1) may include applying a band-pass filter (e.g., band-pass filters 906-1) to the analytic signal to pass envelope information for the particular channel, while the generating of a given fine structure signal 518 for this particular channel based on the analytic signal includes applying the band-pass filter to the analytic signal to pass frequency information for the particular channel.
[0086] Returning to FIG. 5, it is shown that the set of input spectrum signals 510 may be split into subsets of input spectrum signals 510-1 and 510-2 to be processed in parallel by envelope/frequency estimation facility 512 (processing input spectrum signals 510-1) and by analytic signal processing facility 514 (processing input spectrum signals 510-2). The outcome of these processing paths are envelope signals 516 and fine structure signals 518 that may be associated with different temporal resolutions (e.g., larger numbers of value pairs per audio frame in the case of envelope signals 516-2 and fine structure signals 518-2 than in envelope signals 516-1 and fine structure signals 518-1) based on the principles described above. If the set of input spectrum signals includes a first input spectrum signal corresponding to a first channel of the set of channels and a second input spectrum signal corresponding to a second channel of the set of channels, FIG. 5 shows how sound processing device 100 may generate envelope and fine structure signals with different temporal resolutions for different channels. For instance, if the first input spectrum signal is in the subset of input spectrum signals 510-1 , then, based on the first input spectrum signal, envelope/frequency estimation facility 512 may generate a first envelope signal and a first fine structure signal for the first channel, wherein, for each audio frame of the series of audio frames, the first envelope signal includes one envelope value and the first fine structure signal includes one phase value. If the second input spectrum signal is then in the subset of input spectrum signals 510-2, analytic signal processing facility 514 may generate, based on the second input spectrum signal, a second envelope signal and a second fine structure signal for the second channel, wherein, for each audio frame of the series of audio frames, the second envelope signal includes more than one envelope value and the second fine structure signal includes more than one phase value. In certain implementations, all of the channels may be processed on one of these parallel paths (e.g., by envelope/frequency estimation facility 512 or by analytic signal processing facility 514), rather than the input spectrum signals being processed separately as shown in FIG. 5 and as has been described.
[0087] Regardless of which of the parallel paths is used for each channel, envelope/phase signal processing facility 520 is shown to receive and process all of the envelope signals 516-1 and 516-2 together with all of the fine structure signals 518-1 and 518-2. As will be described in more detail below, envelope/phase signal processing facility 520 may use the envelope and phase values included in the envelope signals 516 and fine structure signals 518 to generate a stimulation signal 522 that incorporates stimulation data for each of the channels. For example, stimulation signal 522 may include all the information needed for a cochlear implant to appropriately stimulate each electrode of a set of electrodes implanted in the recipient’s cochlea so as to produce in the recipient a sense that he or she hears the sound on which the original audio input signal 502 was based. Due to the increase in temporal resolution provided by the analytic signal processing described above, this stimulation signal 522 may provide a superior experience for the recipient as compared to implementations that provide less resolution.
[0088] FIG. 10 shows illustrative aspects of how envelope signals 516 and fine structure signals 518 may be processed by an example envelope/phase signal processing facility 520 to form stimulation frames of stimulation signal 522 in accordance with principles described herein. As shown, both envelope signals 516-1 (generated by envelope/frequency estimation facility 512) and 516-2 (generated by analytic signal processing facility 514), as well as both fine structure signals 518-1 (generated by envelope/frequency estimation facility 512) and 518-2 (generated by analytic signal processing facility 514) may be received by envelope/phase signal processing facility 520. To generate the series of stimulation frames shown in the signal visualization to be comprised within stimulation signal 522, envelope/phase signal processing facility 520 is shown to apply an envelope-processing function to each of the envelope signals 516 (envelope value processing 1002), to apply a fine-structure- processing function to each of the fine structure signals 518 (phase value processing 1004), and to perform a mapping function 1006 to recombine the envelope and fine structure signals to form the stimulation frames. Each of these functions will now be described in more detail.
[0089] Envelope value processing 1002 may be configured to process the envelope values of envelope signals 516 in any suitable way. For instance, algorithms may be implemented to eliminate noise, isolate and emphasize energy from important frequency ranges (e.g., ranges associated with the human voice that are likely to represent speech, etc.), and perform other signal processing to clean up, prepare, and augment any suitable aspects of the energy represented for each channel. In like manner, phase value processing 1004 may be configured to process the phase values of fine structure signals 518 in any suitable way. For instance, as shown, phase value processing 1004 may use the fine structure information of fine structure signals 518 to implement a fine-structure-processing function such as a phase synthesis function 1008, a current steering function 1010, and/or any other suitable functions as may serve a particular implementation.
[0090] Phase synthesis function 1008 represents a first fine-structure-processing function that may be included in phase value processing 1004. Phase synthesis function 1008 may be performed by generating phase signals for each channel based on instantaneous frequency values included in the fine structure signals 518. For example, using instantaneous frequency information that has been estimated by envelope/frequency estimation facility 512 and/or analytic signal processing facility 514 (information represented by fine structure signals 518), phase synthesis function 1008 may be configured to generate, for each low frequency channels, temporal information by determining when the phase of these channels wrap around by 2n. When this happens the channel stimulates the electrodes associated with it in the stimulation frame (e.g., at the rate at which forward telemetry stimulation words frames are to be transmitted to the cochlear implant). One aspect of the phase synthesis function 1008 may account for is the timing information, firing order, and so forth, for the stimulation pulses that the electrodes are to apply to the recipient. Using the phase values that have been determined, phase synthesis function 1008 may determine when each electrode for the various channels is to be stimulated. Additionally, the order in which stimulation pulses are to be applied to the different electrodes may be determined at this stage. For example, a nonoverlapping and nonconsecutive stimulation sequence may be used for the electrodes (e.g., to reduce electrical interference between neighboring electrodes).
[0091] Current steering function 1010 represents a second fine-structure-processing function that may be included in phase value processing 1004. Current steering function 1010 may be performed to simulate one or more sub-channels spectrally located between adjacent electrode pairs. For example, by stimulating two adjacent electrodes at the same time, the recipient may perceive that stimulation is applied at a location between the actual locations of the two electrodes. In this way, stimulation current may be made to target (i.e., may be steered toward) stimulation sites in the cochlea that do not in fact host an actual electrode. The precise location of the target stimulation site may be determined based on how much current is applied to each of the adjacent electrodes surrounding the site. For instance, for a first electrode at cochlear depth 1 and a second electrode a cochlear depth 2, current may be steered to a location closer to depth 1 by driving a larger amount of current to the first electrode and a smaller amount of current to the second electrode. If a location between the two depths is then desired that is nearer to depth 2, this may be achieved by driving a smaller amount of current to the first electrode and a larger amount of current to the second electrode.
[0092] At mapping function 1006, the phase signals (e.g., carrier signals) that have been synthesized for each channel may be modulated based on the processed phase values of the signals. These various channels may be reordered to different stimulation times inside a frame to be comprised in stimulation signal 522, which may include all the information needed for the cochlear implant to properly stimulate all of the channels. As has been mentioned, the stimulation frame rate of the stimulation frames included in stimulation signal 522 may be greater than the audio frame rate while being less than the effective audio frame rate that can be achieved when analytic signals are used to increase the temporal resolution of the signals. For example, if the stimulation frame rate (Rstimuiation in FIG. 5) is about 1856 Hz and the audio frame rate (RAudio) is about 690 Hz, using analytic signals to determine 4 envelope/phase value pairs per audio frame may allow for an effective audio rate (Rvalues) of about 2750 Hz. Advantageously, and unlike the 690 Hz frame rate, this effective audio rate is greater than the 1856 Hz stimulation frame rate. Consequently, new and unique envelope/phase value pairs may be available for each stimulation frame and no data would need to be reused for multiple stimulation frames.
[0093] Returning to FIG. 5, stimulation signal 522 may be processed by forward telemetry transmission facility 524 to generate FTEL signal 526. For example, forward telemetry transmission facility 524 may modulate the data of stimulation signal 522 onto a radio frequency (RF) carrier wave configured to carry power from the sound processing device 100 through the headpiece to the cochlear implant (to simultaneously provide both power and stimulation data to the cochlear implant by way of the headpiece). As indicated in FIG. 5, FTEL signal may therefore be a modulated RF signal that carries power modulated with data representative of stimulation signal 522. As has been described, this power signal may be wirelessly and transcutaneously transmitted by a headpiece, through the skin of the recipient, to the cochlear implant, where the FTEL power may provide energy to operate the cochlear implant and the FTEL data may provide information about how the cochlear implant is to direct each electrode to apply stimulation.
[0094] In certain embodiments, one or more of the processes described herein may be implemented at least in part as instructions embodied in a non-transitory computer- readable medium and executable by one or more computing devices. In general, a processor (e.g., a microprocessor) receives instructions, from a non-transitory computer-readable medium (e.g., a memory, etc.), and executes those instructions, thereby performing one or more operations such as the operations described herein. Such instructions may be stored and/or transmitted using any of a variety of known computer-readable media.
[0095] A computer-readable medium (also referred to as a processor-readable medium) includes any non-transitory medium that participates in providing data (e.g., instructions) that may be read by a computer (e.g., by a processor of a computer). Such a medium may take many forms, including, but not limited to, non-volatile media, and/or volatile media. Non-volatile media may include, for example, optical or magnetic disks and other persistent memory. Volatile media may include, for example, dynamic random-access memory (DRAM), which typically constitutes a main memory. Common forms of computer-readable media include, for example, a disk, hard disk, magnetic tape, any other magnetic medium, a compact disc read-only memory (CD-ROM), a digital video disc (DVD), any other optical medium, random access memory (RAM), programmable read-only memory (PROM), electrically erasable programmable readonly memory (EPROM), FLASH-EEPROM, any other memory chip or cartridge, or any other tangible medium from which a computer can read.
[0096] FIG. 11 shows an illustrative computing system 1100 that may implement any of the computing systems described herein, including those employed as part of sound processing devices and/or other cochlear implant system components described herein. As shown in FIG. 11 , computing system 1100 may include a communication interface 1102, a processor 1104, a storage device 1106, and an input/output (I/O) module 1108 communicatively connected via a communication infrastructure 1110. While an illustrative computing system 1100 is shown in FIG. 11 , the components illustrated in FIG. 11 are not intended to be limiting. Additional or alternative components may be used in other embodiments. Components of computing system 1100 shown in FIG. 11 will now be described in additional detail.
[0097] Communication interface 1102 may be configured to communicate with one or more computing devices. Examples of communication interface 1102 include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, an audio/video connection, and any other suitable interface.
[0098] Processor 1104 generally represents any type or form of processing unit capable of processing data or interpreting, executing, and/or directing execution of one or more of the instructions, processes, and/or operations described herein. Processor 1104 may direct execution of operations in accordance with one or more applications 1112 or other computer-executable instructions such as may be stored in storage device 1106 or another computer-readable medium.
[0099] Storage device 1106 may include one or more data storage media, devices, or configurations and may employ any type, form, and combination of data storage media and/or device. For example, storage device 1106 may include, but is not limited to, a hard drive, network drive, flash drive, magnetic disc, optical disc, RAM, dynamic RAM, other non-volatile and/or volatile data storage units, or a combination or subcombination thereof. Electronic data, including data described herein, may be temporarily and/or permanently stored in storage device 1106. For example, data representative of one or more executable applications 1112 configured to direct processor 1104 to perform any of the operations described herein may be stored within storage device 1106. In some examples, data may be arranged in one or more databases residing within storage device 1106.
[0100] I/O module 1108 may include one or more I/O modules configured to receive user input and provide user output. One or more I/O modules may be used to receive input for a single virtual experience. I/O module 1108 may include any hardware, firmware, software, or combination thereof supportive of input and output capabilities. For example, I/O module 1108 may include hardware and/or software for capturing user input, including, but not limited to, a keyboard or keypad, a touchscreen component (e.g., touchscreen display), a receiver (e.g., an RF or infrared receiver), motion sensors, and/or one or more input buttons.
[0101] I/O module 1108 may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, I/O module 1108 is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation. In some examples, any of the facilities described herein may be implemented by or within one or more components of computing system 1100.
[0102] In the preceding description, various exemplary embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the scope of the invention as set forth in the claims that follow. For example, certain features of one embodiment described herein may be combined with or substituted for features of another embodiment described herein. The description and drawings are accordingly to be regarded in an illustrative rather than a restrictive sense.

Claims

CLAIMS What is claimed is:
1. A method comprising: obtaining, by a sound processing device communicatively coupled to a cochlear implant in a cochlear implant system, an audio signal that is represented in a time domain and that comprises a series of audio frames; generating, by the sound processing device based on the audio signal, a set of input spectrum signals in a frequency domain, the set of input spectrum signals corresponding to a set of channels and including a particular input spectrum signal corresponding to a particular channel of the set of channels; determining, by the sound processing device based on the particular input spectrum signal, an analytic signal associated with the particular channel; generating, by the sound processing device and based on the analytic signal, an envelope signal and a fine structure signal for the particular channel, wherein, for each audio frame of the series of audio frames, the envelope signal includes more than one envelope value and the fine structure signal includes more than one phase value; and transmitting, by the sound processing device to the cochlear implant, a series of stimulation frames generated based on the envelope and fine structure signals.
2. The method of claim 1 , wherein: the series of audio frames is associated with an audio frame rate; the envelope values included for each audio frame on the envelope signal are associated with an effective audio frame rate equal to the audio frame rate multiplied by a number of envelope values included for each audio frame; the series of stimulation frames is associated with a stimulation frame rate; and the stimulation frame rate is greater than the audio frame rate and less than the effective audio frame rate.
3. The method of claim 1 , wherein: the determining of the analytic signal is further based on an additional input spectrum signal included, together with the particular input spectrum signal, in a subset of the set of input spectrum signals; and the analytic signal is further associated with an additional channel included, together with the particular channel, in a subset of the set of channels.
4. The method of claim 3, wherein: the generating of the envelope signal for the particular channel based on the analytic signal includes applying a band-pass filter to the analytic signal to pass envelope information for the particular channel; and the generating of the fine structure signal for the particular channel based on the analytic signal includes applying the band-pass filter to the analytic signal to pass frequency information for the particular channel.
5. The method of claim 1 , wherein: the set of input spectrum signals further includes an additional input spectrum signal corresponding to an additional channel of the set of channels; and the method further comprises: determining, by the sound processing device based on the additional input spectrum signal, an additional analytic signal associated with the additional channel; and generating, by the sound processing device based on the additional analytic signal, an additional envelope signal and an additional fine structure signal for the additional channel, wherein, for each audio frame of the series of audio frames, the additional envelope signal includes more than one envelope value and the additional fine structure signal includes more than one phase value.
6. The method of claim 1 , wherein: the set of input spectrum signals further includes an additional input spectrum signal corresponding to an additional channel of the set of channels; and the method further comprises generating, by the sound processing device based on the additional input spectrum signal, an additional envelope signal and an additional fine structure signal for the additional channel, wherein, for each audio frame of the series of audio frames, the additional envelope signal includes one envelope value and the additional fine structure signal includes one phase value.
7. The method of claim 1, wherein the sound processing device generates the series of stimulation frames by:
applying, to the envelope signal, an envelope-processing function;
applying, to the fine structure signal, a fine-structure-processing function; and
performing a mapping function to combine the envelope and fine structure signals.
8. The method of claim 7, wherein:
the fine-structure-processing function includes a phase synthesis function performed by generating a carrier signal based on fine structure values included in the fine structure signal; and
the mapping function includes modulating the carrier signal based on envelope values included in the envelope signal.
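For illustration only: a minimal sketch of the phase synthesis and mapping described in claim 8, assuming the carrier is synthesized as the cosine of the phase values and then amplitude-modulated by the envelope; the cosine carrier is an assumption.

```python
import numpy as np

def synthesize_stimulation(envelope: np.ndarray, phase: np.ndarray) -> np.ndarray:
    """Phase synthesis, then mapping: an envelope-modulated carrier."""
    carrier = np.cos(phase)    # carrier generated from fine structure values
    return envelope * carrier  # carrier modulated by envelope values
```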
9. The method of claim 7, wherein the fine-structure-processing function includes a current steering function performed to simulate one or more sub-channels spectrally located between adjacent channel pairs within the set of channels.
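For illustration only: a sketch of the current steering named in claim 9, assuming a simulated sub-channel is placed between two adjacent physical channels by splitting a total current with complementary weights; the weight and current values are assumptions.

```python
def steer_current(total_ua: float, alpha: float) -> tuple[float, float]:
    """Split current across an adjacent channel pair; alpha in [0, 1] sets
    where the simulated sub-channel sits between the two channels."""
    return (1.0 - alpha) * total_ua, alpha * total_ua

# Example: 200 uA steered one quarter of the way toward the second channel.
first_ua, second_ua = steer_current(200.0, 0.25)  # -> (150.0, 50.0)
```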
10. The method of claim 1, wherein the generating of the set of input spectrum signals in the frequency domain includes applying, to the audio signal, a short-time Fourier transform (STFT).
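For illustration only: a sketch of the STFT analysis of claim 10 using SciPy; the sample rate, window length, and overlap are assumptions.

```python
import numpy as np
from scipy.signal import stft

FS = 16_000  # sample rate in Hz (assumed)

def input_spectrum_signals(audio: np.ndarray) -> np.ndarray:
    """Return complex STFT coefficients: one column of frequency bins per frame."""
    _, _, Z = stft(audio, fs=FS, nperseg=128, noverlap=64)
    return Z
```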
11. The method of claim 1, wherein the obtaining of the audio signal includes:
receiving, from a microphone included in the cochlear implant system, an analog audio signal generated by the microphone based on an acoustic signal presented to the microphone;
converting the analog audio signal into a digital audio signal; and
applying an automatic gain control to at least one of the analog audio signal or the digital audio signal.
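For illustration only: a sketch of the automatic gain control of claim 11 applied to the digital audio signal, assuming a simple feed-forward AGC with a smoothed level estimate; the target level and smoothing constant are assumptions.

```python
import numpy as np

def agc(x: np.ndarray, target: float = 0.1, alpha: float = 0.01) -> np.ndarray:
    """Feed-forward AGC: track a smoothed signal level, normalize toward target."""
    out = np.empty(len(x), dtype=float)
    level = target
    for i, sample in enumerate(x):
        level = (1.0 - alpha) * level + alpha * abs(sample)  # smoothed level
        out[i] = (target / max(level, 1e-9)) * sample        # gain, guarded
    return out
```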
12. A cochlear implant system comprising:
a microphone configured to capture an acoustic signal;
a cochlear implant configured to stimulate a recipient in which the cochlear implant is implanted; and
a sound processing device communicatively coupled to the microphone and the cochlear implant and configured to perform a process comprising:
obtaining an audio signal that is based on the acoustic signal, that is represented in a time domain, and that comprises a series of audio frames;
generating, based on the audio signal, a set of input spectrum signals in a frequency domain, the set of input spectrum signals corresponding to a set of channels and including a particular input spectrum signal corresponding to a particular channel of the set of channels;
determining an analytic signal based on the particular input spectrum signal;
generating, based on the analytic signal, an envelope signal and a fine structure signal for the particular channel, wherein, for each audio frame of the series of audio frames, the envelope signal includes more than one envelope value and the fine structure signal includes more than one phase value; and
transmitting, to the cochlear implant, a series of stimulation frames generated based on the envelope and fine structure signals.
13. The cochlear implant system of claim 12, wherein:
the series of audio frames is associated with an audio frame rate;
the envelope values included for each audio frame in the envelope signal are associated with an effective audio frame rate equal to the audio frame rate multiplied by a number of envelope values included for each audio frame;
the series of stimulation frames is associated with a stimulation frame rate; and
the stimulation frame rate is greater than the audio frame rate and less than the effective audio frame rate.
14. The cochlear implant system of claim 12, wherein:
the determining of the analytic signal is further based on an additional input spectrum signal included, together with the particular input spectrum signal, in a subset of the set of input spectrum signals;
the analytic signal is further associated with an additional channel included, together with the particular channel, in a subset of the set of channels;
the generating of the envelope signal for the particular channel based on the analytic signal includes applying a band-pass filter to the analytic signal to pass energy information for the particular channel; and
the generating of the fine structure signal for the particular channel based on the analytic signal includes applying the band-pass filter to the analytic signal to pass frequency information for the particular channel.
15. The cochlear implant system of claim 12, wherein:
the set of input spectrum signals further includes an additional input spectrum signal corresponding to an additional channel of the set of channels; and
the process further comprises:
determining, based on the additional input spectrum signal, an additional analytic signal associated with the additional channel; and
generating, based on the additional analytic signal, an additional envelope signal and an additional fine structure signal for the additional channel, wherein, for each audio frame of the series of audio frames, the additional envelope signal includes more than one envelope value and the additional fine structure signal includes more than one phase value.
16. The cochlear implant system of claim 12, wherein:
the set of input spectrum signals further includes an additional input spectrum signal corresponding to an additional channel of the set of channels; and
the process further comprises generating, based on the additional input spectrum signal, an additional envelope signal and an additional fine structure signal for the additional channel, wherein, for each audio frame of the series of audio frames, the additional envelope signal includes one envelope value and the additional fine structure signal includes one phase value.
17. The cochlear implant system of claim 12, wherein the sound processing device generates the series of stimulation frames by:
applying, to the envelope signal, an envelope-processing function;
applying, to the fine structure signal, a fine-structure-processing function; and
performing a mapping function to combine the envelope and fine structure signals.
18. The cochlear implant system of claim 17, wherein the fine-structure-processing function includes a current steering function performed to simulate one or more sub-channels spectrally located between adjacent channel pairs within the set of channels.
19. A system comprising:
a memory storing instructions; and
one or more processors communicatively coupled to the memory and configured to execute the instructions to perform a process comprising:
obtaining an audio signal that is represented in a time domain and that comprises a series of audio frames;
generating, based on the audio signal, a set of input spectrum signals in a frequency domain, the set of input spectrum signals corresponding to a set of channels and including a particular input spectrum signal corresponding to a particular channel of the set of channels;
determining, based on the particular input spectrum signal, an analytic signal associated with the particular channel;
generating, based on the analytic signal, an envelope signal and a fine structure signal for the particular channel, wherein, for each audio frame of the series of audio frames, the envelope signal includes more than one envelope value and the fine structure signal includes more than one phase value; and
transmitting a series of stimulation frames generated based on the envelope and fine structure signals.
20. The system of claim 19, wherein:
the series of audio frames is associated with an audio frame rate;
the envelope values included for each audio frame in the envelope signal are associated with an effective audio frame rate equal to the audio frame rate multiplied by a number of envelope values included for each audio frame;
the series of stimulation frames is associated with a stimulation frame rate; and
the stimulation frame rate is greater than the audio frame rate and less than the effective audio frame rate.
PCT/US2023/032875 2023-09-15 2023-09-15 Methods and systems for performing cochlear implant stimulation based on an analytic signal Pending WO2025058630A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2023/032875 2023-09-15 2023-09-15 Methods and systems for performing cochlear implant stimulation based on an analytic signal

Publications (1)

Publication Number Publication Date
WO2025058630A1 2025-03-20

Family

ID=88373694

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/032875 Pending WO2025058630A1 (en) 2023-09-15 2023-09-15 Methods and systems for performing cochlear implant stimulation based on an analytic signal

Country Status (1)

Country Link
WO (1) WO2025058630A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7515966B1 (en) 2005-03-14 2009-04-07 Advanced Bionics, Llc Sound processing and stimulation systems and methods for use with cochlear implant devices
WO2016201187A1 (en) * 2015-06-11 2016-12-15 Med-El Elektromedizinische Geraete Gmbh Switching hearing implant coding strategies
US9808624B2 (en) * 2015-06-11 2017-11-07 Med-El Elektromedizinische Geraete Gmbh Interaural coherence based cochlear stimulation using adapted fine structure processing
US20170056657A1 (en) * 2015-09-01 2017-03-02 Med-El Elektromedizinische Geraete Gmbh Patient Specific Frequency Modulation Adaption
US20180311500A1 (en) * 2015-10-23 2018-11-01 Med-El Elektromedizinische Geraete Gmbh Robust Instantaneous Frequency Estimation for Hearing Prosthesis Sound Coding
EP3522980A1 (en) * 2016-12-05 2019-08-14 Med-El Elektromedizinische Geraete GmbH Interaural coherence based cochlear stimulation using adapted fine structure processing

Similar Documents

Publication Publication Date Title
EP2476267B1 (en) Reducing an effect of ambient noise within an auditory prosthesis system
US8706246B2 (en) Fully implantable cochlear implant systems including optional external components and methods for using the same
US9432777B2 (en) Hearing device with brainwave dependent audio processing
US8467881B2 (en) Methods and systems for representing different spectral components of an audio signal presented to a cochlear implant patient
US7908012B2 (en) Cochlear implant fitting system
US8706247B2 (en) Remote audio processor module for auditory prosthesis systems
US20130218237A1 (en) Cochlear implant fitting system
US8705783B1 (en) Methods and systems for acoustically controlling a cochlear implant system
AU2014321433B2 (en) Dynamic stimulation channel selection
US10357655B2 (en) Frequency-dependent focusing systems and methods for use in a cochlear implant system
US11745008B2 (en) ECAP recording method and cochlea implant system
WO2025058630A1 (en) Methods and systems for performing cochlear implant stimulation based on an analytic signal
US9597502B2 (en) Systems and methods for controlling a width of an excitation field created by current applied by a cochlear implant system
CN202892217U (en) Artificial Hearing Simulation System Based on Photoacoustic Effect
CN114466677B (en) Polyphonic pitch enhancement in cochlear implants
US10029096B2 (en) Channel selection systems and methods that employ temporal modification
US8583245B1 (en) Methods and systems for archiving patient data used to fit a cochlear implant system to a patient
Babacan Implementation of a neurophysiology-based coding strategy for the cochlear implant

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23789414

Country of ref document: EP

Kind code of ref document: A1