
WO2019086439A1 - Method of operating a hearing aid system and a hearing aid system - Google Patents

Method of operating a hearing aid system and a hearing aid system

Info

Publication number
WO2019086439A1
Authority
WO
WIPO (PCT)
Prior art keywords
arrival
hearing aid
estimate
microphone
mean
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2018/079681
Other languages
English (en)
Inventor
Lars Dalskov Mosgaard
Thomas Bo Elmedyb
David PELEGRIN-GARCIA
Pejman Mowlaee
Michael Johannes Pihl
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Widex AS
Original Assignee
Widex AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from DKPA201800462A external-priority patent/DK201800462A1/en
Application filed by Widex AS filed Critical Widex AS
Priority to EP18796007.5A priority Critical patent/EP3704874B1/fr
Priority to US16/760,246 priority patent/US11134348B2/en
Publication of WO2019086439A1 publication Critical patent/WO2019086439A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40Arrangements for obtaining a desired directivity characteristic
    • H04R25/407Circuits for combining signals of a plurality of transducers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/41Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/43Signal processing in hearing aids to enhance the speech intelligibility
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/01Hearing devices using active noise cancellation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/552Binaural
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/554Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired using a wireless connection, e.g. between microphone and amplifier or using Tcoils
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/70Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S1/00Two-channel systems
    • H04S1/002Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H04S1/005For headphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2420/00Techniques used stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Definitions

  • In-The-Ear (ITE) hearing aids are designed for arrangement in the ear, normally in the funnel-shaped outer part of the ear canal.
  • In a specific type of ITE hearing aid, the hearing aid is placed substantially inside the ear canal. This category is sometimes referred to as Completely-In-Canal (CIC) hearing aids.
  • CIC Completely-In-Canal
  • This type of hearing aid requires an especially compact design in order to allow it to be arranged in the ear canal, while accommodating the components necessary for operation of the hearing aid.
  • Hearing loss of a hearing impaired person is quite often frequency-dependent. This means that the hearing loss of the person varies depending on the frequency. Therefore, when compensating for hearing losses, it can be advantageous to utilize frequency-dependent amplification.
  • the binaural wireless link is adapted to provide, for each of the hearing aids, transmission of at least one ipsi-lateral input signal, from an ipsi-lateral microphone, to the contra-lateral hearing aid, whereby at least one binaural microphone set is provided; wherein the filter bank is adapted to:
  • the digital signal processor is configured to apply a frequency dependent gain that is adapted to at least one of suppressing noise and alleviating a hearing deficit of an individual wearing the hearing aid system;
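  • As a minimal illustration of such a frequency dependent gain stage (the gain values and function name below are hypothetical, not taken from the patent), a per-band gain in dB can be applied to the complex filter bank output:

```python
import numpy as np

def apply_band_gains(band_signals, gains_db):
    # band_signals: complex filter bank output, shape (num_bands, num_frames).
    # gains_db: hypothetical per-band gains (e.g. from a fitting rationale or a
    # noise suppression rule) -- illustrative values only.
    gains_lin = 10.0 ** (np.asarray(gains_db, dtype=float)[:, None] / 20.0)
    return band_signals * gains_lin

# Example: four bands with more gain at high frequencies (a sloping hearing loss).
bands = np.random.randn(4, 100) + 1j * np.random.randn(4, 100)
compensated = apply_band_gains(bands, gains_db=[0.0, 6.0, 12.0, 18.0])
```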
  • Fig. 1 illustrates highly schematically a directional system
  • Fig. 2 illustrates highly schematically a hearing aid system according to an embodiment of the invention
  • Fig. 3 illustrates highly schematically a phase versus frequency plot
  • signal processing is to be understood as any type of hearing aid system related signal processing that includes at least: beam forming, noise reduction, speech enhancement and hearing compensation.
  • Fig. 1 illustrates highly schematically a directional system 100 suitable for implementation in a hearing aid system according to an embodiment of the invention.
  • the directional system 100 takes as input the digital output signals derived from, at least, the two acoustical-electrical input transducers 101a-b.
  • the acoustical-electrical input transducers 101a-b, which in the following may also be denoted microphones, provide analog output signals that are converted into digital output signals by analog-digital converters (ADC) and subsequently provided to a filter bank 102 adapted to transform the signals into the time-frequency domain.
  • ADC analog-digital converters
  • One specific advantage of transforming the input signals into the time-frequency domain is that both the amplitude and phase of the signals become directly available in the provided individual time-frequency bins.
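  • For illustration only, a plain STFT (one possible, assumed filter bank implementation; the patent does not prescribe a specific transform) makes the per-bin amplitude and phase directly available:

```python
import numpy as np
from scipy.signal import stft

fs = 16_000                               # assumed sampling rate
x = np.random.randn(fs)                   # stand-in for one digitized microphone signal

# One possible filter bank: a short-time Fourier transform.
f, t, X = stft(x, fs=fs, nperseg=128)     # X: complex time-frequency bins

amplitude = np.abs(X)                     # amplitude per time-frequency bin
phase = np.angle(X)                       # phase per time-frequency bin
```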
  • the output signals from the filter bank 102 will primarily be denoted input signals because these signals represent the primary input signals to the directional system 100.
  • the term digital input signal may be used interchangeably with the term input signal.
  • all other signals referred to in the present disclosure may or may not be specifically denoted as digital signals.
  • the terms input signal, digital input signal, frequency band input signal, sub-band signal and frequency band signal may be used interchangeably in the following, and unless otherwise noted the input signals can generally be assumed to be frequency band signals, independently of whether the filter bank 102 provides frequency band signals in the time domain or in the time-frequency domain.
  • the microphones 101a-b are omni-directional unless otherwise mentioned.
  • the input signals are not transformed into the time-frequency domain. Instead the input signals are first transformed into a number of frequency band signals by a time-domain filter bank comprising a multitude of time-domain bandpass filters, such as Finite Impulse Response bandpass filters, and subsequently the frequency band signals are compared using correlation analysis wherefrom the phase is derived. Both digital input signals are branched, whereby the input signals, in a first branch, are provided to a Fixed Beam Former (FBF) unit 103, and, in a second branch, are provided to a blocking matrix 104.
  • FBF Fixed Beam Former
  • the blocking matrix may be given by:
  • D is the Inter-Microphone Transfer Function (which in the following may be abbreviated IMTF) that represents the transfer function between the two microphones with respect to a specific source.
  • IMTF Inter-Microphone Transfer Function
  • the IMTF may interchangeably also be denoted the steering vector.
  • the digital input signals are provided to the FBF unit 103 that provides an omni signal Q given by the equation:
  • the vector W₀ represents the FBF unit 103 that may be given by:
  • the estimated noise signal U provided by the blocking matrix 104 is filtered by the adaptive filter 105 and the resulting filtered estimated noise signal is subtracted, using the subtraction unit 106, from the omni-signal Q provided in the first branch in order to remove the noise, and the resulting beam formed signal E is provided to further processing in the hearing aid system, wherein the further processing may comprise application of a frequency dependent gain in order to alleviate a hearing loss of a specific hearing aid system user and/or processing directed at reducing noise or improving speech intelligibility.
  • the resulting beam formed signal E may therefore be expressed using the equation:
  • H represents the adaptive filter 105, which in the following may also interchangeably be denoted the active noise cancellation filter.
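  • The following sketch shows a generic two-microphone GSC of the kind described above for a single frequency bin; the matched fixed beamformer W₀ and the NLMS adaptation of H are illustrative assumptions, not necessarily the patent's specific choices:

```python
import numpy as np

def gsc_bin(m1, m2, D, mu=0.1, eps=1e-8):
    # m1, m2: complex samples of the two microphone signals in one frequency bin.
    # D: inter-microphone transfer function (steering vector) for the target source.
    # mu: NLMS step size (illustrative adaptation rule).
    H = 0.0 + 0.0j                                        # adaptive noise cancellation filter
    E = np.empty_like(m1)
    for n in range(len(m1)):
        # Matched fixed beamformer W0 (one common choice): unity gain for the target.
        Q = (m1[n] + np.conj(D) * m2[n]) / (1.0 + np.abs(D) ** 2)
        # Blocking matrix: cancels the target, leaving an estimated noise signal U.
        U = D * m1[n] - m2[n]
        # Beamformed output E = Q - H * U.
        E[n] = Q - H * U
        # NLMS update of the active noise cancellation filter.
        H += mu * E[n] * np.conj(U) / (np.abs(U) ** 2 + eps)
    return E
```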
  • subscript n represents noise and subscript t represents the target signal.
  • ⟨·⟩ is the average operator
  • n represents the number of IMTF estimates used for the averaging
  • R_A is an averaged amplitude that depends on the phase and that may assume values in the interval [0, ⟨A⟩]
  • φ̄_A is the weighted mean phase. It can be seen that the amplitude A_i of each individual sample weights the corresponding phase φ_i in the averaging. Therefore both the averaged amplitude R_A and the weighted mean phase φ̄_A are biased (i.e. each depends on the other).
  • n is the number of inter-microphone phase difference samples used for the averaging.
  • inter-microphone phase difference samples may in the following simply be denoted inter-microphone phase differences.
  • R denotes the resultant length; the resultant length R provides information on how closely the individual phase estimates are grouped together, and the circular variance V and the resultant length R are related by:
  • V = 1 − R (eq. 10)
  • the inventors have found that discarding the information regarding the amplitude relation in the determination of the unbiased mean phase φ̄, the resultant length R and the circular variance V turns out to be advantageous, because more direct access to the underlying phase probability distribution is provided.
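  • A minimal sketch of these circular statistics, assuming complex input samples from one frequency bin: each cross-phasor is normalized before averaging so that the amplitude is discarded, giving the unbiased mean phase, the resultant length R and the circular variance V = 1 − R; the amplitude-weighted (biased) average is shown for comparison:

```python
import numpy as np

def circular_stats(m1, m2, eps=1e-12):
    # m1, m2: complex samples from the two microphones (same frequency bin).
    phasors = m1 * np.conj(m2)                 # angle = inter-microphone phase difference
    unit = phasors / (np.abs(phasors) + eps)   # discard amplitude -> unbiased statistics
    z = np.mean(unit)                          # expectation over the n samples
    phi_bar = np.angle(z)                      # unbiased mean phase
    R = np.abs(z)                              # resultant length, 0 <= R <= 1
    V = 1.0 - R                                # circular variance (eq. 10)
    return phi_bar, R, V

def biased_stats(m1, m2):
    # Amplitude-weighted (biased) average for comparison: the amplitudes weight the
    # phases, so the mean phase and the averaged amplitude depend on each other.
    z = np.mean(m1 * np.conj(m2))
    return np.angle(z), np.abs(z)
```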
  • the optimal estimate of the IMTF in the LMS sense is closely related to the coherence C(f). It is noted that the derived expression for the optimal IMTF, using the least mean square approach, is subject to bias problems in the estimation of both the phase and the amplitude relation, because the averaged amplitude is phase dependent and the weighted mean phase is amplitude dependent, both of which are undesirable. This, however, is the strategy commonly taken for estimating the IMTF.
  • the present invention provides an alternative method of estimating the phase of the steering vector which is optimal in the LMS sense, when the normalized input signals are considered as opposed to the input signals considered alone.
  • this optimal steering vector based on normalized input signals will be denoted D_N(f):
  • the amplitude part is estimated simply by selecting at least one set of input signals that has contributed to providing a high value of the resultant length, wherefrom it may be assumed that the input signals are not primarily noise and that therefore the biased mean amplitude corresponding to said set of input signals is relatively accurate. Furthermore, the value of unbiased mean phase can be used to select between different target sources.
  • the biased mean amplitude is used to control the directional system without considering the corresponding resultant length.
  • the amplitude part is determined by transforming the unbiased mean phase using a transformation selected from a group comprising the Hilbert transformation.
  • GSC Generalized Sidelobe Canceller
  • MMSE Minimum Mean Squared Error
  • LCMV Linearly Constrained Minimum Variance
  • the method may also be applied for directional systems that are not based on energy minimization.
  • the amplitude and phase of the IMTF according to the present invention can be determined purely based on the input signals, and the method is as such highly flexible with respect to its use in various different directional systems.
  • the input signals i.e. the sound environment
  • the two main sources of dynamics are the temporal and spatial dynamics of the sound environment.
  • speech the duration of a short consonant may be as short as only 5 milliseconds, while long vowels may have a duration of up to 200 milliseconds depending on the specific sound.
  • the spatial dynamics is a consequence of relative movement between the hearing aid user and surrounding sound sources.
  • speech is considered quasi stationary for a duration in the range between say 20 and 40 milliseconds and this includes the impact from spatial dynamics.
  • a first time window is defined by the transformation of the digital input signals into the time-frequency domain and the longer the duration of the first time window the higher the frequency resolution in the time-frequency domain, which obviously is advantageous.
  • the present invention requires that the determination of an unbiased mean phase or the resultant length of the IMTF for a particular angular direction, or the final estimate of an inter-microphone phase difference, is based on a calculation of an expectation value, and it has been found that the number of individual samples used for calculation of the expectation value preferably exceeds 5.
  • the combined effect of the first time window and the calculation of the expectation value provides an effective time window that is shorter than 40 milliseconds or in the range between 5 and 200 milliseconds such that the sound environment in most cases can be considered quasi-stationary.
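  • As a worked example (the sampling rate, frame length and hop size are assumed values, not taken from the patent), a 128-sample first time window with a 64-sample hop and an expectation over 5 samples yields an effective window well inside the quasi-stationary range:

```python
fs = 16_000        # assumed sampling rate (Hz)
frame = 128        # assumed first-time-window length in samples (8 ms at 16 kHz)
hop = 64           # assumed hop between successive frames
n_samples = 5      # number of samples in the expectation (the recommended minimum)

first_window_ms = 1000.0 * frame / fs
effective_ms = 1000.0 * (frame + (n_samples - 1) * hop) / fs
print(first_window_ms, effective_ms)   # 8.0 ms frame -> 24.0 ms effective window
```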
  • improved accuracy of the unbiased mean phase or the resultant length may be provided by obtaining a multitude of successive samples of the unbiased mean phase and the resultant length, in the form of a complex number using the methods according to the present invention and subsequently adding these successive estimates (i.e. the complex numbers) and normalizing the result of the addition with the number of added estimates.
  • This embodiment is particularly advantageous in that the resultant length effectively weights the samples that have a high probability of comprising a target source, while estimates with a high probability of mainly comprising noise will have a negligible impact on the final value of the unbiased mean phase of the IMTF or inter-microphone phase difference because the samples are characterized by having a low value of the resultant length.
  • with this method it therefore becomes possible to achieve pseudo time windows with a duration of up to, say, several seconds or even longer, and the improvements that follow therefrom, despite the fact that neither the temporal nor the spatial variations can be considered quasi-stationary.
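  • A minimal sketch of this pooling, assuming per-block estimates of the unbiased mean phase and the resultant length are already available: the complex estimates R·exp(jφ̄) are summed and normalized by their count, so that noise-dominated blocks (small R) contribute little:

```python
import numpy as np

def pooled_estimate(phi_bars, Rs):
    # phi_bars: successive unbiased mean phase samples.
    # Rs: corresponding resultant lengths; noise-dominated samples have small R
    # and therefore a negligible influence on the pooled result.
    z = np.mean(np.asarray(Rs) * np.exp(1j * np.asarray(phi_bars)))
    return np.angle(z), np.abs(z)      # pooled unbiased mean phase and resultant length
```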
  • the estimation of the unbiased mean phase of the IMTF or inter-microphone phase difference is additionally based on an evaluation of the value of the individual samples of the unbiased mean phase such that only samples representing the same target source are combined.
  • speech detection may be used as input to determine a preferred unbiased mean phase for controlling a directional system, e.g. by giving preference to target sources positioned at least approximately in front of the hearing aid system user, when speech is detected.
  • in this way a directional system is prevented from enhancing the direct sound from a source that does not provide speech or is positioned more to the side than another speaker, whereby speakers are preferred over other sound sources and a speaker in front of the hearing aid system user is preferred over speakers positioned more to the side.
  • the angular direction of a target source which may also be denoted the direction of arrival (DOA) is derived from the unbiased mean phase and used for various types of signal processing.
  • DOA direction of arrival
  • the resultant length can be used to determine how to weight information, such as a determined DOA of a target source, from each hearing aid of a binaural hearing aid system. More generally the resultant length can be used to compare or weight information obtained from a multitude of microphone pairs, such as the multitude of microphone pairs that are available in e.g. a binaural hearing aid system comprising two hearing aids each having two microphones.
  • the determination of an angular direction of a target source is provided by combining a monaurally determined unbiased mean phase with a binaurally determined unbiased mean phase, whereby the symmetry ambiguity that results when translating an estimated phase to a target direction may be resolved.
  • FIG. 2 illustrates highly schematically a hearing aid system 200 according to an embodiment of the invention.
  • the components that have already been described with reference to Fig. 1 are given the same numbering as in Fig. 1.
  • the input signals from the microphones 101a and 101b are branched and provided both to the digital signal processor 201 and to a sound classifier 203.
  • the digital signal processor 201 may be adapted to provide various forms of signal processing including at least: beam forming, noise reduction, speech enhancement and hearing compensation.
  • the phase versus frequency plot can be used to identify a direct sound if said mapping provides a straight line or at least a continuous curve in the phase versus frequency plot.
  • the curve 301-A represents direct sound from a target positioned directly in front of the hearing aid system user, assuming a contemporary standard hearing aid having two microphones positioned along the direction of the hearing aid system user's nose.
  • the curve 301-B represents direct sound from a target directly behind the hearing aid system user.
  • the angular direction of the direct sound from a given target source may be determined from the fact that the slope of the interpolated straight line representing the direct sound is given as:
  • the phase versus frequency plot can be used to identify a diffuse noise field if said mapping provides a uniform distribution, for a given frequency, within a coherent region, wherein the coherent region 303 is defined as the area in the phase versus frequency plot that is bounded by the at least continuous curves defining direct sounds coming directly from the front and the back direction respectively and the curves defining a constant phase of +π and −π respectively.
  • the phase versus frequency plot can be used to identify a random or incoherent noise field if said mapping provides a uniform distribution, for a given frequency, within a full phase region defined as the area in the phase versus frequency plot that is bounded by the two straight lines defining a constant phase of +π and −π respectively.
  • any data points outside the coherent region, i.e. inside the incoherent regions 302-a and 302-b will represent a random or incoherent noise field.
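  • For illustration, under the standard free-field far-field model (an assumption; the patent's exact slope expression is not reproduced above) the inter-microphone phase difference of a direct sound grows linearly with frequency as 2πf·d·cos(θ)/c, which allows a direction of arrival to be read from a fitted slope and the coherent region bound to be evaluated per frequency:

```python
import numpy as np

C_SOUND = 343.0    # speed of sound (m/s)
D_MIC = 0.012      # assumed microphone spacing of about 12 mm

def doa_from_slope(slope):
    # Slope of the unwrapped inter-microphone phase difference versus frequency,
    # assuming phase = 2*pi*f*d*cos(theta)/c (free-field, far-field model).
    cos_theta = np.clip(slope * C_SOUND / (2.0 * np.pi * D_MIC), -1.0, 1.0)
    return np.arccos(cos_theta)        # 0 rad = front, pi rad = back

def in_coherent_region(freqs, phases):
    # Points between the front and back direct-sound curves, clipped to the
    # constant-phase bounds of +pi and -pi; points outside indicate incoherent noise.
    bound = np.minimum(2.0 * np.pi * freqs * D_MIC / C_SOUND, np.pi)
    return np.abs(phases) <= bound
```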
  • a diffuse noise can be identified by in a first step transforming a value of the resultant length to reflect a transformation of the unbiased mean phase from inside the coherent region and onto the full phase region, and in a second step identifying a diffuse noise field if the transformed value of the resultant length, for at least one frequency range, is below a transformed resultant length diffuse noise trigger level. More specifically the step of transforming the values of the resultant length to reflect a transformation of the unbiased mean phase from inside the coherent region and onto the full phase region comprises the step of determining the values in accordance with the formula:
  • M₁(f) and M₂(f) represent the frequency dependent first and second input signals, respectively.
  • identification of a diffuse, random or incoherent noise field can be made if a value of the resultant length, for at least one frequency range, is below a resultant length noise trigger level.
  • identification of a direct sound can be made if a value of the resultant length, for at least one frequency range, is above a resultant length direct sound trigger level.
  • the trigger levels are replaced by a continuous function, which maps the resultant length or the unwrapped resultant length to a signal-to-noise-ratio, wherein the noise may be diffuse or incoherent.
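  • One simple, assumed form of such a continuous mapping (not the patent's own function) follows from modelling a coherent target in incoherent noise, for which a coherence-like resultant length behaves approximately as R ≈ SNR/(SNR + 1); the sketch below inverts that relation:

```python
import numpy as np

def resultant_length_to_snr_db(R, eps=1e-6):
    # Illustrative model: a coherent target in incoherent noise gives a
    # coherence-like resultant length R ~ SNR / (SNR + 1); invert it for the SNR.
    R = np.clip(R, 0.0, 1.0 - eps)
    snr = R / (1.0 - R)
    return 10.0 * np.log10(snr + eps)
```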
  • (eq. 18), wherein k_u = 2K·f_u/f_s, with f_s being the sampling frequency and K the number of frequency bins up to the Nyquist limit.
  • the mapped mean resultant length R_ab for diffuse noise approaches zero for all bins k up to k_u, while for anechoic sources it approaches one, as intended.
  • since the IPDs are circular variables, the estimation of the TDoA requires solving a circular-linear fit.
  • an ordinary linear fit can be used as an approximation.
  • a mapped mean resultant length R_ab is estimated, which corresponds to a reliability measure for the unbiased mean phase φ̄_ab. Due to the small inter-microphone spacings in a hearing aid system, it is, as discussed above, advantageous to employ the mapped mean resultant length instead of the unmapped mean resultant length.
  • the TDoA is estimated not only using a single data fitting of a plurality of unbiased mean phases weighted by a corresponding plurality of reliability measures, but by carrying out a plurality of data fittings based on a plurality of data fitting models.
  • the plurality of data fitting models differ at least in the number of sound sources that the data fitting models are adapted to fit.
  • comparison of the results provided by the data fitting models can improve the ability to determine e.g. the number of speakers in the sound environment.
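  • A sketch of the single-fit approximation mentioned above, assuming unwrapped unbiased mean phases per frequency bin and using the mapped mean resultant lengths as fit weights; the comparison between several fitting models is not shown:

```python
import numpy as np

def estimate_tdoa(freqs, phi_bars, reliabilities):
    # freqs: frequency of each bin (Hz).
    # phi_bars: unbiased mean phase per bin (assumed already unwrapped).
    # reliabilities: mapped mean resultant length per bin, used as fit weights.
    # A weighted ordinary linear fit approximates the circular-linear fit:
    # phase ~ 2*pi*f*TDoA, so the fitted slope divided by 2*pi is the TDoA.
    slope, _intercept = np.polyfit(freqs, phi_bars, deg=1, w=reliabilities)
    return slope / (2.0 * np.pi)
```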
  • the DoA functionality is implemented using three blocks coupled in parallel but obviously the functionality may alternatively be implemented using a single DoA map block operating serially.
  • d_B is the inter-microphone spacing between the two hearing aids on the head and the look direction is perpendicular to the rotation axis of the binaural microphone pair.
  • the estimated local DoAs are circular variables and their estimated variances are transformed to mean resultant lengths using eq. (19), where each local DoA is assumed to follow a wrapped normal distribution.
  • R_M and R_B denote the monaural and the binaural mean resultant lengths associated with the directions of arrival, respectively. These resultant lengths may also each be denoted a local reliability measure.
  • the mean resultant lengths associated with the estimated local DoAs are provided to the DoA combiner 406 in order to provide a common DoA, which may also be denoted a common mean direction, and a corresponding common mean resultant length R, which may also be denoted a common reliability measure.
  • the monaural DoA estimates for the left and the right pairs are defined in the interval [0, π] due to the rotational symmetry around the line connecting the microphones.
  • the binaural DoA is defined within [−π/2, π/2].
  • is the circular dispersion defined in eq. 20
  • Y is the test statistic to be compared with the upper 100(1−α)% point of the χ² distribution, with α as the significance level.
  • the weighting factors are used to effectively reduce the reliability of the estimates to compensate for the approximations made in eq. 24 and eq. 26.
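  • As an illustrative fusion rule (an assumption, not the patent's specific combination formula), the local DoAs can be combined by a reliability-weighted circular mean, which directly yields a common mean direction and a common reliability measure:

```python
import numpy as np

def combine_doas(doas, reliabilities, weights=None):
    # doas: local DoA estimates in radians, e.g. [left monaural, right monaural, binaural].
    # reliabilities: corresponding mean resultant lengths (local reliability measures).
    # weights: optional extra weighting factors compensating for model approximations.
    doas = np.asarray(doas, dtype=float)
    w = np.asarray(reliabilities, dtype=float)
    if weights is not None:
        w = w * np.asarray(weights, dtype=float)
    z = np.sum(w * np.exp(1j * doas)) / np.sum(w)
    return np.angle(z), np.abs(z)      # common mean direction and common reliability
```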
  • the onset of an acoustical feedback signal will exhibit characteristic values of DOA and reliability measures that are relatively easy to distinguish from other types of highly coherent signals (such as music); further applications include user behavior (e.g. finding the preferred sound source direction for the individual user) and own voice detection (e.g. by utilizing the location and vicinity of the hearing aid system user's mouth).
  • the mapped mean resultant length may be given by other expressions than the one given in eq. 18, e.g.:
  • the present method and its variations are particularly attractive for use in hearing aid systems, because these systems due to size requirements only offer limited processing resources, and the present invention provides a very precise DOA estimate while only requiring relatively few processing resources.

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention relates to a method of operating a hearing aid system providing improved performance for a multitude of hearing aid system processing steps, and to a hearing aid system (400) for carrying out the method.
PCT/EP2018/079681 2017-10-31 2018-10-30 Method of operating a hearing aid system and a hearing aid system Ceased WO2019086439A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18796007.5A EP3704874B1 (fr) 2017-10-31 2018-10-30 Method of operating a hearing aid system
US16/760,246 US11134348B2 (en) 2017-10-31 2018-10-30 Method of operating a hearing aid system and a hearing aid system

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
DKPA201700612 2017-10-31
DKPA201700612 2017-10-31
DKPA201700611 2017-10-31
DKPA201700611 2017-10-31
DKPA201800462 2018-08-15
DKPA201800462A DK201800462A1 (en) 2017-10-31 2018-08-15 METHOD OF OPERATING A HEARING AID SYSTEM AND A HEARING AID SYSTEM
DKPA201800465 2018-08-15
DKPA201800465 2018-08-15

Publications (1)

Publication Number Publication Date
WO2019086439A1 true WO2019086439A1 (fr) 2019-05-09

Family

ID=64051569

Family Applications (4)

Application Number Title Priority Date Filing Date
PCT/EP2018/079676 Ceased WO2019086435A1 (fr) 2017-10-31 2018-10-30 Procédé de fonctionnement d'un système d'aide auditive et système d'aide auditive
PCT/EP2018/079671 Ceased WO2019086432A1 (fr) 2017-10-31 2018-10-30 Procédé de fonctionnement d'un système d'aide auditive et système d'aide auditive
PCT/EP2018/079674 Ceased WO2019086433A1 (fr) 2017-10-31 2018-10-30 Procédé de fonctionnement d'un système d'aide auditive et système d'aide auditive
PCT/EP2018/079681 Ceased WO2019086439A1 (fr) 2017-10-31 2018-10-30 Procédé de fonctionnement d'un système d'aide auditive et système d'aide auditive

Family Applications Before (3)

Application Number Title Priority Date Filing Date
PCT/EP2018/079676 Ceased WO2019086435A1 (fr) 2017-10-31 2018-10-30 Procédé de fonctionnement d'un système d'aide auditive et système d'aide auditive
PCT/EP2018/079671 Ceased WO2019086432A1 (fr) 2017-10-31 2018-10-30 Procédé de fonctionnement d'un système d'aide auditive et système d'aide auditive
PCT/EP2018/079674 Ceased WO2019086433A1 (fr) 2017-10-31 2018-10-30 Procédé de fonctionnement d'un système d'aide auditive et système d'aide auditive

Country Status (1)

Country Link
WO (4) WO2019086435A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021063873A1 (fr) 2019-09-30 2021-04-08 Widex A/S Method of operating a binaural ear level audio system and a binaural ear level audio system
DE102022201706B3 (de) 2022-02-18 2023-03-30 Sivantos Pte. Ltd. Method for operating a binaural hearing device system and binaural hearing device system

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3809410A1 (fr) * 2019-10-17 2021-04-21 Tata Consultancy Services Limited Système et procédé pour réduire des composants de bruit dans un flux audio en direct
KR102093366B1 (ko) * 2020-01-16 2020-03-25 한림국제대학원대학교 산학협력단 Control method, apparatus and program for a hearing aid fitting management system managed on the basis of ear impression information
KR102093369B1 (ko) * 2020-01-16 2020-05-13 한림국제대학원대학교 산학협력단 Control method, apparatus and program for a hearing aid system for optimal amplification of an extended threshold level

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120127832A1 (en) * 2009-08-11 2012-05-24 Hear Ip Pty Ltd System and method for estimating the direction of arrival of a sound
US20150163602A1 (en) * 2013-12-06 2015-06-11 Oticon A/S Hearing aid device for hands free communication
US20150289064A1 (en) * 2014-04-04 2015-10-08 Oticon A/S Self-calibration of multi-microphone noise reduction system for hearing assistance devices using an auxiliary device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009034524A1 (fr) * 2007-09-13 2009-03-19 Koninklijke Philips Electronics N.V. Appareil et procede de formation de faisceau audio
GB0720473D0 (en) * 2007-10-19 2007-11-28 Univ Surrey Accoustic source separation
DK2088802T3 (da) * 2008-02-07 2013-10-14 Oticon As Fremgangsmåde til estimering af lydsignalers vægtningsfunktion i et høreapparat
DK3148213T3 (en) * 2015-09-25 2018-11-05 Starkey Labs Inc DYNAMIC RELATIVE TRANSFER FUNCTION ESTIMATION USING STRUCTURED "SAVING BAYESIAN LEARNING"

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120127832A1 (en) * 2009-08-11 2012-05-24 Hear Ip Pty Ltd System and method for estimating the direction of arrival of a sound
US20150163602A1 (en) * 2013-12-06 2015-06-11 Oticon A/S Hearing aid device for hands free communication
US20150289064A1 (en) * 2014-04-04 2015-10-08 Oticon A/S Self-calibration of multi-microphone noise reduction system for hearing assistance devices using an auxiliary device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CABOT: "AN INTRODUCTION TO CIRCULAR STATISTICS AND ITS APPLICATION TO SOUND LOCALIZATION EXPERIMENTS", AES, November 1977 (1977-11-01), XP002788240, Retrieved from the Internet <URL:http://www.aes.org/tmpFiles/elib/20190109/3062.pdf> [retrieved on 201901] *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021063873A1 (fr) 2019-09-30 2021-04-08 Widex A/S Method of operating a binaural ear level audio system and a binaural ear level audio system
US11818548B2 (en) 2019-09-30 2023-11-14 Widex A/S Method of operating a binaural ear level audio system and a binaural ear level audio system
DE102022201706B3 (de) 2022-02-18 2023-03-30 Sivantos Pte. Ltd. Method for operating a binaural hearing device system and binaural hearing device system
EP4231667A1 (fr) 2022-02-18 2023-08-23 Sivantos Pte. Ltd. Method of operating a binaural hearing device system and binaural hearing device system

Also Published As

Publication number Publication date
WO2019086432A1 (fr) 2019-05-09
WO2019086435A1 (fr) 2019-05-09
WO2019086433A1 (fr) 2019-05-09

Similar Documents

Publication Publication Date Title
EP3704873B1 (fr) Procédé de fonctionnement d&#39;un système de prothèse auditive
CN104902418B (zh) 用于估计目标和噪声谱方差的多传声器方法
US10219083B2 (en) Method of localizing a sound source, a hearing device, and a hearing system
CN107071674B (zh) 配置成定位声源的听力装置和听力系统
WO2019086439A1 (fr) Procédé de fonctionnement d&#39;un système d&#39;aide auditive et système d&#39;aide auditive
US10425745B1 (en) Adaptive binaural beamforming with preservation of spatial cues in hearing assistance devices
Cornelis et al. Speech intelligibility improvements with hearing aids using bilateral and binaural adaptive multichannel Wiener filtering based noise reduction
WO2020035158A1 (fr) Procédé de fonctionnement d&#39;un système d&#39;aide auditive et système d&#39;aide auditive
JP2019531659A (ja) バイノーラル補聴器システムおよびバイノーラル補聴器システムの動作方法
US11570557B2 (en) Method for direction-dependent noise rejection for a hearing system containing a hearing apparatus and hearing system
EP2916320A1 (fr) Procédé à plusieurs microphones et pour l&#39;estimation des variances spectrales d&#39;un signal cible et du bruit
EP3837861B1 (fr) Procédé de fonctionnement d&#39;un système de prothèse auditive
DK201800462A1 (en) METHOD OF OPERATING A HEARING AID SYSTEM AND A HEARING AID SYSTEM
Maj et al. SVD-based optimal filtering technique for noise reduction in hearing aids using two microphones

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18796007

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018796007

Country of ref document: EP

Effective date: 20200602