WO2024068868A1 - Processing device, in particular headphones or a hearing aid, for processing microphone signals into an audio data stream - Google Patents
Processing device, in particular headphones or a hearing aid, for processing microphone signals into an audio data stream
- Publication number
- WO2024068868A1 (application PCT/EP2023/076943)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- processing device
- profile
- signal processing
- settings
- designed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/55—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
- H04R25/558—Remote control, e.g. of amplification, frequency
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/60—Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles
- H04R25/603—Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles of mechanical or electronic switches or control elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/55—Communication between hearing aids and external devices via a network for data exchange
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/61—Aspects relating to mechanical or electronic switches or control elements, e.g. functioning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/11—Application of ambisonics in stereophonic audio systems
Definitions
- Processing device in particular headphones or hearing aid, for processing microphone signals into an audio data stream
- the invention relates to processing devices, in particular headphones or hearing aids, for processing microphone signals into an audio data stream.
- US 2009/0123013 A1 discloses a hearing aid device as a processing device.
- the hearing aid includes a housing; a plurality of electrical components disposed within the housing; and an actuator that modifies a setting value of at least one of the electrical components, the actuator being a non-mechanical actuator.
- the setting element distinguishes between several movement patterns of a detected finger or an object and modifies the associated setting values of the hearing aid device.
- this hearing aid is extremely complicated because the user has to remember the associated movement pattern for each setting of the relevant setting value.
- the hearing aid only provides a very limited range of functions that can actually be controlled with the help of its low-performance user interface, namely switching the hearing aid on or off and increasing or reducing the volume of the hearing aid.
- the invention therefore has the object of providing a processing device with a more powerful user interface.
- the subject matter of the invention is therefore a processing device, in particular headphones or hearing aid, wherein the processing device has a signal processing chain which is designed to process microphone signals according to values or settings of signal processing parameters of a profile into an audio data stream, wherein a first profile and a second profile are provided, and wherein the processing device is designed to change the values or settings of the signal processing parameters of the first profile to those values or settings of the signal processing parameters of the second profile or vice versa, taking interaction data into account, wherein the interaction data represents a user interaction recorded as a movement, such as a "swipe", “tap”, “hold” or combination thereof, in particular a movement known as "hold and swipe".
- the invention therefore relates to a system, preferably a hearing aid system, particularly preferably a binaural hearing aid system, wherein the system has two processing devices according to the invention, wherein the first processing device is intended for the left ear and the second processing device for the right ear of a person.
- the measures according to the invention have the advantage that, with a single movement, a large number of the settings of the signal processing parameters of the signal processing chain can be changed at the same time, without the user actually having to know the individual settings, because these are stored or summarized in the respective profile. It is therefore sufficient that the user of the processing device is aware of the two profiles that are intended for use via the user interface.
- the processing device thus has an innovative user interface, in particular for a hearing aid.
- the processing device is designed to receive and use the interaction data from an external interaction source by means of a radio connection.
- This measure makes it possible for the interaction data to be generated using the external interaction source, such as a smartphone, a tablet computer or even a cloud-based software solution, and transmitted via a radio connection to one of the processing devices or to both processing devices in order to use it there for further processing.
- the processing device has an input module with a planar interaction area, wherein the input module is designed to detect a movement, in particular a swiping movement, in the interaction area as a user interaction and to provide the interaction data representing this user interaction.
- the planar interaction area can be provided by a touch-sensitive surface.
- the processing device is designed to define at least two different menu areas in the planar interaction area, with one of the two profiles being assigned to the respective menu area.
- the input module can have a camera or be designed as a camera.
- Such an input module can, for example, record a hand gesture as a user interaction.
- the processing device can be designed to recognize objects, in particular the hand or its movement or the position or orientation of the fingers or a combination thereof, in captured image data using pattern recognition.
- the input module is designed to capture user interactions within the auricle, in front of the auricle or along the front edge of the auricle, especially towards its upper edge.
- the input module has at least one touch bar or is implemented as a touch bar.
- a touch bar does not require a fixed actuator to be moved. Rather, a simple touch of the touch-sensitive surface of the touch bar is sufficient because the touch bar is designed to detect touch capacitively.
- touch zones on the touch bar can be defined using hardware or software, or can be adjusted or changed during operation, creating virtual, individually definable buttons that are assigned functions of the processing device, e.g. switching between predefined profiles (for controlling microphone signal processing). This makes it possible to select profiles quickly and specifically, or to automatically adjust the selection of profiles (for a signal processing chain for processing microphone signals) depending on the situation.
- the processing device is configured in such a way that the touch bar is positioned at the front edge of the auricle when worn on the head.
- the hand does not have to be raised as far and the fingers do not have to be bent as much as would be the case if the outer part of the touch bar were positioned along the back of the ear and could only be touched from behind. This is particularly important for older people, whose mobility is usually limited.
- the processing device is designed to continuously or quasi-continuously change the values or settings, wherein the degree of change scales with the extent of the detected movement.
- the processing device is designed to change the values or settings step by step or abruptly when a threshold value of the detected movement occurs.
- the processing device is designed to change the values or settings even in the case that a profile has a mixture of continuously or quasi-continuously changeable values or settings and values or settings that can be changed step by step or only in jumps.
- the signal processing according to the first profile is converted into signal processing according to the changed values or settings that are now present.
- the values and settings can assume intermediate states between the limits of the values or settings defined by the two profiles. Only if the detected movement uses the entire available length/width/diagonal etc. of the planar interaction area, or corresponds to a predefined movement range or shape, is the signal processing switched one hundred percent from the first profile to the second profile. Otherwise, the intermediate states mentioned are used.
- the processing device is designed to leave the relevant signal processing parameter unchanged for as long as the detected movement has not exceeded a threshold value such as 25%, 50% or 75% of the maximum movement extent.
- This measure can be used to prevent the relevant effect of the parameter, which is only available in the first profile, from starting too abruptly. The same applies analogously to a signal processing parameter of the second profile for which there is no corresponding parameter in the first profile.
- the processing device is designed to implement the change in the signal processing parameters to be made in accordance with the detected movement in real time in the signal processing chain.
- the processing device therefore allows - as already mentioned at the beginning - a "hold and swipe” functionality, which in technical jargon is referred to as a “swipe” functionality, which is advantageously used in the context of the system to mix profiles with each other.
- the person first touches one of the profile zones 53 - 55, leaves the finger on it and then pulls the finger into the adjacent profile zone before the finger is lifted from the touch-sensitive element. This is interpreted by the software in such a way that the profiles assigned to the two profile zones swept by the finger are to be mixed.
- the degree of mixing (or in other words the mixing ratio) of the two profiles to be mixed can be adjusted by the width of the swiping movement (de facto between 0 and 100%).
- the mixing ratio can be controlled by the width of the finger movement, meaning it can be dynamically adjusted in real time.
- the user can dynamically switch back and forth between the two profiles or dynamically change the degree of mixing as long as his finger remains in contact with the touch-sensitive element but is moved there, for example to find out which mixing ratio suits him best, or to consciously switch back and forth between two mixing ratios in order to temporarily optimize his acoustic perception according to the first profile or the second profile at short intervals. Only when he lifts his finger from the touch-sensitive element is the mixing ratio set at that point in time fixed.
- the processing device is designed to fix the change in the signal processing parameters according to the detected movement only when the user interaction, such as a “swipe", “tap”, “hold” or combination thereof, in particular a movement known as “hold and swipe”, which in technical jargon is also referred to as a “swipe movement”, ends.
- the processing device is designed to activate a home profile in response to a predefined user interaction, such as a particularly rapid swipe movement that differs from that with which the profiles are mixed, or a user interaction recorded as a "tap & hold".
- the user, i.e. the person wearing the system, always returns to the original, stable listening mode (the "Home" profile) with a "swipe" gesture up or down, i.e. with a quick stroking movement of the finger across the two zones of the touch-sensitive element, regardless of whether this "swipe" gesture is performed on the first processing device or the second processing device.
- the touch bar is freely assignable and can control practically all system parameters, i.e. the settings of the signal processing chain. This allows the user to intervene in their sound reality in real time. It can also be used to generate dynamic profiles, whereby the profiles placed on the touch bar are mixed with each other as discussed. This makes it easy to control complex DSP functions, for example, and the user has the feeling that he is interacting with the sound. The feeling, often perceived as a disadvantage with known systems, that he would only be changing programs with a plastic button is thus completely avoided.
- the user can intervene deeply in the function of the signal processing chain even during operation.
- a hearing test can be carried out and evaluated and deficits can be identified because the input module allows a wide variety of interactions.
- the system equipped with the technical measures described can also enable an audio data stream generated from the microphone signals to be shared in real time with another external device. This can be done, for example, using a radio connection to the external device or a wired connection via the (magnetically holding) connection.
- This functionality allows another person to experience the underlying audio experience of the person wearing the system live using the other device.
- This functionality can also be used to analyze the audio experience, i.e. the audio data stream, on the other device in real time.
- the system, in particular the processing devices, can also each have a data storage device with the help of which the generated microphone signals, in raw data format or pre-processed, can be recorded either continuously or limited to a time range around the occurrence of an audio event.
- the settings or parameters that were used in the signal processing chain to create the audio data stream from the microphone signals are also saved. If the user of the system now has problems with audio perception, it can be determined by targeted, systematic adjustment or change of the settings or parameters which values for the setting or parameter improve the user's audio perception.
- the stored data allows incremental and repetitive adjustment of the settings and parameters in particular in order to determine the appropriate parameters for one and the same audio experience.
- the audio data streams corresponding to the respective audio experience can also be saved. This allows the user to narrow down the settings and parameters that are suitable for him by comparative listening (i.e. sequential playback of the audio data streams), which in turn gives the audiologist or a software-based audiological application the opportunity to find the optimized set of settings or parameters.
- the two processing devices of the system can be set independently of each other with regard to the use of the profiles available there or the mix ratio of the profiles there, which allows the two processing devices to be operated as flexibly and independently of each other as possible.
- with this configuration, i.e. an assignment of two profiles per touch bar, a maximum of four profiles in total can be used, whereby the two profiles per processing device can be smoothly converted into one another according to the movement in question.
- This operating mode is referred to as "independent mode", whereby each of the two processing devices interprets the movement independently of the other and applies the values and settings of the signal processing parameters resulting from the movement.
- the two processing devices are designed for radio-based transmission of the interaction data from the first processing device to the second processing device or vice versa, and the processing device receiving the interaction data from the other processing device is designed to take the received interaction data into account in the same way as the processing device that sent the interaction data.
- the system can be configured in an asynchronously linked mode such that there is a radio connection between the processing devices for sending their own interaction data to the other processing device and for receiving the interaction data from the other processing device.
- gestures executed synchronously on both processing devices, i.e. on the left and right sides of the head, can be evaluated, and both processing devices can carry out a synchronized execution based on different inputs on the two sides.
- individual interactions on both sides carry out different functions that can be synchronized with the other processing device. For example, a user defines swiping up as increasing the volume on the left side, but swiping up on the right as muting the sound. If the user then swipes up on the right side, both devices will be muted, while swiping up on the left side increases the volume on both devices.
- an asynchronously unlinked mode can be provided.
- In this mode too, the radio connection mentioned above for the asynchronously linked mode exists for exchanging the respective interaction data, so that each side is informed about the respective user interaction.
- this radio connection is not absolutely necessary.
- the processing devices behave in such a way that changes are only carried out on that side of the head, or by the processing device positioned there, where the user interaction takes place.
- radio modules for radio communication can be used in the two processing devices.
- This can be, for example, Near Field Communication (NFC for short), ZigBee or Wi-Fi; a proprietary protocol can also be used.
- the radio modules are designed according to the respective radio technology, whereby special software (driver software) that provides the respective protocol is used if necessary.
- the radio connections are Bluetooth radio connections.
- the radio modules are designed as Bluetooth radio modules.
- Each of the processing devices has at least two microphones, which provide individual microphone signals that are exchanged between the two processing devices.
- the microphone signals can be further processed according to “Dolby Atmos” technology or Aureal 3-Dimensional technology, or A3D technology for short, or the Auro 3D format.
- each of the processing devices is designed to process the two groups of microphone signals according to the Ambisonics format.
- Ambisonics format also allows for a simple and familiar use of the recorded microphone signals. For example, they can be recorded and saved as recording data. This recording data can then be further processed within the system or by other devices. For example, the recording data can be used by so-called "(e.g. video and/or audio) content creators" to use the recorded recording data in their own (proprietary) processing environment.
- the Ambisonics format is used in the further system-internal processing chain to enable the use of one or more virtual microphones, so that with the help of the system the directivity of the respective virtual microphone can be focused, in a purely software-based manner, i.e. without any additional external aids, on individual sound sources (e.g. devices, vehicles or people) in the sound detection area, tailored to the individual user.
- the basic configuration mentioned above consisting of the two microphones per processing device, already forms the basis for audio signal processing in accordance with the generally known Ambisonics format in the first order. It may be necessary to transform the microphone signals from one coordinate system to another in order to make the preferred arrangement and/or orientation of the microphones accessible for use in the Ambisonics format in accordance with the convention. If there are more microphones than in the basic configuration, and if there are corresponding orientations or directional characteristics, higher orders of the Ambisonics format can also be processed. It should be noted that even when using other formats, the use of multiple microphones allows a description of a higher order of the spherical harmonics.
- the fully automatic or user-interaction-based three-dimensional interpretation of the microphone signals makes it possible to identify a specific sound source and the direction of the sound source in relation to the respective orientation of the processing device.
- the sounds from this sound source can now be amplified or muted for the person depending on the specification or properties of the sounds.
- Beamforming is generally understood to mean the orienting of a directional characteristic (of a virtual microphone known in the context of Ambisonics), if necessary modeling the directivity (of this virtual microphone).
- the change in the position of the sound source relative to the system or its orientation, which in the case of a hearing aid results from the orientation of the head, must be taken into account in order to stay focused on the previously focused sound source and to amplify or block out its sound.
- the aforementioned “beamforming” can also be used with a virtual microphone for this - possibly automated - process. In this way, the sound field can be permanently scanned and the position of the “traveling” sound source can be tracked.
- the term electronic devices includes, for example, ESLs (electronic shelf labels), smartphones, tablet computers, video shelf rails, etc.
- the electronics can be constructed discretely or with integrated electronics, for example ASICs (application-specific integrated circuits), or a combination of both.
- Many of the mentioned functionalities of the devices are implemented - possibly in conjunction with hardware components - with the help of software that is executed on an electronics processor.
- Devices designed for radio communication usually have an antenna configuration for sending and receiving radio signals as part of a transceiver module.
- the electronic devices can also have an internal electrical power supply, which can be realized, for example, with a replaceable or rechargeable battery.
- the devices can also be powered by wires, either through an external power supply or via “Power over LAN”.
- FIG. 1 shows a head of a person with a binaural hearing aid system with a processing device on each ear of the head;
- Fig. 2 direction information for the binaural hearing aid system in the horizontal plane relative to the head
- Fig. 3 Directional characteristics of the microphones used in the binaural hearing aid system
- Fig. 4 is a block diagram of a binaural hearing aid system
- Fig. 5 shows a signal processing chain used in the processing devices
- FIG. 6 structural details of the processing devices in an exploded view
- FIG. 7 shows the processing device with electronic components or assemblies accommodated in its housing and additional expansion modules that can be connected to it;
- Fig. 8 shows a dynamic interface of the processing devices realized with the help of two touch bars
- Fig. 9 shows a typical wearing position of the processing device on the right ear with a user interaction by a finger on a planar interaction area of the processing device
- Fig. 10 shows user interaction in a zoom view
- Fig. 11 the planar interaction area with profiles assigned to its end areas for controlling the signal processing chain
- Fig. 12 a movement of the finger along the planar interaction area and the resulting change of the values and settings for the signal processing chain;
- FIG. 13 shows a reverse movement compared to FIG. 12 and the associated change in the values or settings for the signal processing chain
- Fig. 14 a meta-description of the signal processing parameters of the first profile, the second profile and a dynamically generated intermediate profile;
- Fig. 15 to 20 a user interface of a smartphone application for configuring various settings of the processing device(s);
- Fig. 21 shows a further embodiment of the planar interaction area
- Fig. 22 shows a first two-finger user interaction and its effect
- Fig. 23 a second two-finger user interaction and its effect
- Fig. 24 a visualization of the mixing of two profiles for audio signal processing
- Fig. 25 an influence of a finger position on a touch bar on the profile mixture
- Fig. 26 shows an effect of the mix based on two profiles, each with three audio effects
- Fig. 27 an exemplary parameter collection of a profile
- Fig. 28 shows an individualization of a profile using a smartphone.
- Figure 1 shows a head 101 of a person 100 who is wearing a binaural hearing aid system 1, hereinafter referred to as system 1, which has a first processing device 2 worn on the left ear 102 and a second processing device 3 worn on the right ear 103 to enable the person 100 to have an improved, in particular natural, hearing experience with both ears 102, 103.
- the designations front V, back H, left L, and right R correspond to the natural direction information in relation to the head 101 of the (standing or sitting) person 100, so that the face faces forward V, the left ear 102 faces left L and the right ear 103 is aligned to the right R.
- a radio connection between the processing devices 2 and 3 is also indicated, which will be discussed in detail below.
- Figure 2 shows the four directions V, H, R and L based on the head 101 of the person 100, which is viewed from above, with the face of the person 100 looking forward V, which corresponds to the 90° direction.
- the angle is given in the horizontal plane, which is positioned essentially at the level of the ears 102, 103.
- the head 101 is indicated by its circumference and marked with a cross 200, the longest section 201 of the cross being oriented from the center of the head 101 towards the nose. This is mentioned at this point because this type of representation is also used in the following figures.
- Each of the processing devices 2 and 3 shown in Figure 1 has two microphones 10 and 11 or 12 and 13 (not shown, but see e.g. Figures 3 and 4), so that sound can be recorded in all four directions V, H, L and R, as shown in Figure 2.
- the main recording directions of the four microphones 10-13 shown in Figure 2 are indicated here by the direction information "left-front” LV, "right-front” RV, "left-back” LH and "right-back” RH.
- the two processing devices 2 and 3 are discussed below using a block diagram.
- the two processing devices 2 and 3 are battery-operated and therefore each have a rechargeable battery 25 which provides a supply voltage VCC relative to a reference potential GND.
- the first processing device 2 has a first processing stage 4.
- the processing stage 4 is connected to a first Bluetooth radio module 6, in short first radio module 6, and a fourth Bluetooth radio module 9, in short fourth radio module 9, via a UART connection 17 for setting or programming the radio modules 6 and 9, respectively, and via an I²S connection 18 for the separate transmission of audio data.
- the first radio module 6 is connected by wire (in this exemplary embodiment via its analog microphone signal inputs) to a first microphone 10 and a second microphone 11.
- An individual microphone signal MS1 or MS2 is generated with the first microphone 10 and the second microphone 11.
- the two microphone signals MS1 and MS2 together form a first group G1 of microphone signals.
- the microphone signals MS1 and MS2 arriving at the first radio module 6 are digitized for further processing using the first radio module 6.
- the second processing device 3 has a second processing stage 5.
- the second processing stage 5 is connected to a third Bluetooth radio module 8, in short third radio module 8, and a second Bluetooth radio module 7, in short second radio module 7, each via a UART connection 17 for setting or programming the radio modules 8 and 7, respectively, and each via an I²S connection 18 for the separate transmission of audio data.
- the third radio module 8 is connected via a cable (in this embodiment via its analog microphone signal inputs) to a third microphone 12 and a fourth microphone 13.
- the third microphone 12 and the fourth microphone 13 each generate an individual microphone signal MS3 or MS4.
- the two microphone signals MS3 and MS4 together form a second group G2 of microphone signals.
- the microphone signals MS3 and MS4 arriving at the third radio module 8 are digitized for further processing using the third radio module 8.
- the respective group G1 or G2 of the microphone signals can also be fed to the radio modules 6 or 8 as a digitized audio data stream if appropriately designed microphones are used which support such a creation of the audio data stream.
- Each of the radio modules 6 to 9 has an antenna configuration 14 and the usual transceiver electronics (not shown in detail) for Bluetooth radio communication.
- a first radio connection 15 is established between the first radio module 6 and the second radio module 7 and the first group G1 of the microphone signals is transmitted from the first radio module 6 to the second radio module 7.
- the first radio module 6 is used exclusively as a transmitter and the second radio module 7 is used exclusively as a receiver.
- a second radio connection 16 is also established between the third radio module 8 and the fourth radio module 9, and the second group G2 of microphone signals is transmitted from the third radio module 8 to the fourth radio module 9.
- the third radio module 8 is used exclusively as a transmitter and the fourth radio module 9 is used exclusively as a receiver.
- audio data representing the first group G1 of the microphone signals are transmitted from the first radio module 6 and audio data representing the second group G2 of the microphone signals are transmitted from the fourth radio module 9, each via the separate I²S connections 18, to the first processing stage 4.
- audio data representing the first group G1 of the microphone signals are transmitted from the second radio module 7 and audio data representing the second group G2 of the microphone signals are transmitted from the third radio module 8, each via the separate I²S connections 18, to the second processing stage 5.
- both the first processing device 2 and the second processing device 3 thus have both groups G1 and G2 of the microphone signals, i.e. all microphone signals MS1 to MS4, available for further signal processing, whereby the individual, unidirectional transmission via two separate radio links 15 and 16 - as discussed in the general part of the description - forms the basis for the further phase-synchronous signal processing of all microphone signals MS1 to MS4 in each of the processing devices 2 and 3 in real time.
- the four radio modules 6 to 9 are each implemented by a component from “MICROCHIP” called “BM83 Bluetooth® Stereo Audio Module”.
- the two processing stages 4 and 5 are implemented in this exemplary embodiment by a component from “NXP Semiconductors” called “i.MX RT1160”.
- Each of the processing devices 2 and 3 has a gyroscope 21a and 21b, respectively, with which position data LD are generated, which represent the position or the change in the position of the respective processing devices 2 and 3, respectively.
- Each of the processing stages 4 and 5 is connected to the respective gyroscope 21a and 21b and takes the respective position data LD into account during the further processing of the two groups G1 and G2 of the microphone signals, so that position changes can be compensated for in this further processing.
- Each of the processing devices 2 and 3 further comprises a touch-sensitive input module 20, which is divided into a first input module controller 19a (implemented with the aid of a microcontroller) and a first so-called “touch strip” 20a connected to it, and a second input module controller 19b (also implemented with the aid of a microcontroller) and a second touch strip 20b connected to it.
- the two touch strips 20a and 20b form external features of two touch bars, each of which has a touch-sensitive surface.
- the input module 20 can define freely assignable zones on the touch strips 20a and 20b and thus divide the area available there into menu items, for example into an upper and lower half, so that the functions assigned to the menu items can be triggered by touching with a finger.
- the assignment of the menu items to the zones can be predefined, changed or adapted depending on the operating state, or configured via an external app.
- the respective processing stages 4 and 5 are connected to the respective input module 20.
- the input module 20 is designed to capacitively detect the position of a finger of the person 100 on the respective touch strip 20a or 20b, to interpret it using the respective input module controller 19a or 19b, and to generate interaction data ID and provide it to the respective processing stage 4 or 5, where this interaction data ID is processed according to the currently valid menu assignment of the respective touch strip 20a and 20b.
- areas can be defined on the touch strip 20a and 20b that are used to directly activate profiles, or areas can be defined that are used to control the volume, or areas can be defined that are used to align a virtual microphone, etc.
- the interaction data ID can also be generated by means of an external interaction source, such as a smartphone, a tablet computer or a cloud-based software solution, and transmitted via a radio connection to one of the processing devices 2 or 3 or to both processing devices 2 and 3 in order to use them there for further processing.
- the interaction data ID as well as the position data LD can be transmitted together with the microphone signals MS1 and MS2 or MS3 and MS4 via the corresponding radio connections 15 and 16 to the other processing device 2 or 3, so that both processing devices 2 and 3 can be operated synchronously with one another or simply keep each other informed about the respective interaction or situation.
- the two processing devices 2 and 3 each have a visual signaling stage 26, which is essentially formed by an LED that is controlled by the respective radio module 6 or 8 to which it is connected.
- the visual signaling stage 26 serves, for example, to display a transmission activity or to display a charge status of the battery 25 of the respective processing device 2 or 3.
- each of the processing devices 2 and 3 has an output module 22, which is divided into an amplifier 22a or 22b and a loudspeaker 23a or 23b coupled thereto.
- the amplifiers 22a and 22b are each a Class D amplifier which is connected to the respective processing stage 4 or 5 via an I²C connection and is designed to be supplied with digital audio data from there and to generate from this a correspondingly amplified output audio signal with which the loudspeaker 23a or 23b is controlled.
- each of the processing devices 2 and 3 has a removable storage medium read/write stage 24, with the aid of which a removable storage medium can be written with data or data can be read from the removable storage medium.
- This can be setting data, user data, audio data or even application data, etc., which are made accessible to the system or the respective processing device 2 or 3 or are retrieved from there.
- This can also provide a memory extension for processing activities that require a lot of computing or data. This also allows executable applications to be made available in the system 1 in a physical way, possibly even encrypted.
- the reference numerals 27 to 29 are not used.
- the two groups G1 and G2 of the microphone signals MS1 - MS4 are fed into a first signal processing stage 31 on the input side.
- In the first signal processing stage 31, all functions for Ambisonics-related signal processing are combined, taking into account the position data LD and/or the interaction data ID that relate to the Ambisonics-related signal processing.
- an audio data stream structured according to the Ambisonics format is generated, the position data LD of the gyroscope 21a or 21b is taken into account, one or more virtual microphones are defined or controlled, etc.
- the first signal processing stage 31 generates from the four microphone signals MS1 - MS4 a first audio data stream AD1, which is fed into a second signal processing stage 32.
- In the second signal processing stage 32, all functions for hearing curve adaptation-related signal processing are summarized, including or taking into account the interaction data ID that relate to the hearing curve adaptation.
- a hearing curve adjustment works by correcting the hearing curve of an impaired ear to approximate that of a healthy ear. This essentially uses a static equalizer. With this functionality, the second signal processing stage 32 generates a second audio data stream AD2 from the first audio data stream AD1, which is fed into a third signal processing stage 33.
- In the third signal processing stage 33, all functions for hearing profile-related signal processing are summarized, including or taking into account the interaction data ID that relate to the hearing profile settings.
- a hearing profile includes those audio signal processing parameters that are tailored to the ear or person in question and which improve or positively influence the intelligibility of speech, participation in a group discussion, listening to music, etc. If the respective hearing profile is activated, i.e. used in the signal processing chain, the intelligibility of individual speech is improved, participation in a group discussion is made easier, the natural perceptibility of music is promoted, etc.
- the third signal processing stage 33 generates from the second audio data stream AD2 a third audio data stream AD3, which is fed into a fourth signal processing stage 34.
- In the fourth signal processing stage 34, all functions for hearing aid function-related signal processing are summarized, including or taking into account the interaction data ID that relate to the hearing aid function settings.
- This can be, for example, "Quality-Of-Life Improvements" such as additional noise filtering, echo/reverb filtering, etc.
- Plugins that the user can define or download themselves can also be integrated here.
- With this functionality, the fourth signal processing stage 34 generates a fourth audio data stream AD4 from the third audio data stream AD3, which is delivered to the ear 102, 103 of the person 100 wearing the system 1 by means of the aforementioned output module 22.
- The signal processing stages 31 to 34 shown as structural blocks can essentially be based on software modules, although of course hardware optimized for the respective function, possibly programmable in combination with software, can also be used.
- the physical structure of the processing device 2 or 3 is discussed below with the help of Figure 6.
- the two processing devices 2 and 3 are structured as follows.
- loudspeaker capsule 36 which is designed for use in the external auditory canal and accommodates the loudspeaker 23a or 23b in a housing.
- first flexibly deformable connecting element 37 which is essentially formed from a body consisting of deformable plastic, in particular polyurethane plastic or silicone.
- a first housing 38 which carries a first touch-sensitive part 39 of the first touch strip 20a, so that when the processing device 2 or 3 is worn on the head, the touch-sensitive part 39 on the first housing part is oriented laterally or forwards or obliquely forwards and is accessible or touchable as unhindered as possible in front of the auricle.
- a second flexibly deformable connecting element 40 which is essentially formed from a body consisting of deformable plastic, in particular polyurethane plastic or silicone.
- a rigid second housing 41 which, on the one hand, is shaped to fit the shape of the outer ear from its front to its rear area in order to be accommodated between the ear and the skull, and which, on the other hand, is designed to be large enough to accommodate the loudspeaker 23a or 23b and the first touch-sensitive part 39 of the touch strips 20a.
- remaining electronic components of the processing device 2 or 3. These electronic components are connected by cable to the first touch-sensitive part 39 of the touch strips 20a and the loudspeaker 23a or 23b, which is not shown for reasons of clarity.
- the second housing part 41 therefore contains a processing module of the corresponding processing device 2 or 3.
- the processing module has the respective processing stage 4 or 5, the two radio modules 6 and 9 or 8 and 7, two microphones 10 and 11 or 12 and 13, the respective gyroscope 21a or 21b and the respective visual signaling stage (not shown here).
- Externally visible on the second housing 41 are a second touch-sensitive part 42 of the second touch strip 20b, the second touch-sensitive part 42 being used primarily for volume adjustment, and a magnetically held connection 43 or its contact field, which shows six contact elements.
- the first microphone 10 or its sound inlet opening(s) is present on the second housing 41 at its front end, although this is not visible in the selected perspective of Figure 6. Furthermore, the second microphone 11 or its sound inlet opening(s) is visible on the second housing 41 of the first processing device 2.
- the third microphone 12 or its sound inlet opening(s) is present on the second housing 41 at its front end, which, however, is not visible in the selected perspective of Figure 6. Furthermore, the fourth microphone 13 or its sound inlet opening(s) is visible on the second housing 41 of the second processing device 3.
- FIG. 7 will be discussed further, in which the processing device 2 or 3 is shown assembled in contrast to Figure 6.
- the rigid second housing part 41 is shown here as transparent or cut free in some areas in order to reveal the assemblies or electronic components arranged therein.
- In an upper area 44A, the integrated circuits mentioned and analog electronic components are arranged for the radio modules 6 and 9 or 8 and 7, for the input module 20 and for the processing stages 4 and 5.
- In a rear area 44B there is an electrical supply, which in the present case is implemented with two rechargeable button-cell-like batteries 25, and a haptic module 45, not mentioned so far, which can be controlled with the aid of the respective processing stage 4 or 5.
- an optional expansion module 51 connected to the magnetically holding connection 43 is shown in Figure 7.
- these can be the following types of modules: a battery charging module 46, a microphone cube (not shown), a jack plug adapter 47, a USB adapter 48, a radio device 49 that can be logically linked to the system 1 and with the help of which audio or settings data from another device, to which the radio device 49 is then connected, for example via a USB port, can be transmitted via radio to the system 1 or vice versa, a microphone extension 50, etc.
- the rechargeable batteries 25 can thus be charged using the battery charging module.
- a docking station can also be provided, into which the two processing devices 2 and 3 can be coupled using the connection 43, so that the batteries 25 can be charged in the docking station.
- the system 1 can be designed to process Ambisonics according to a higher order than that provided by the microphones 10 to 13 permanently installed in the system 1 (in the present case, this is the first order). This circumstance allows further microphones to be connected to the connection 43 and thus to use the higher order, without any change to the basic design of the system 1 being necessary.
- the user interface integrated in the system 1, with an exemplary menu assignment of the first touch-sensitive part 39 of the touch bar 20 for both processing devices 2 and 3, and the functions of the system 1 that can be controlled with it are discussed below.
- the menu assignment is defined such that in the second processing device 3 the upper half of the first touch-sensitive part 39 defines a first profile zone 53 for selecting a first profile for the signal processing chain 30 and the lower half of the first touch-sensitive part 39 defines a third profile zone 55 for selecting a third profile for the signal processing chain 30.
- the menu assignment is defined such that in the first processing device 2 the upper half of the first touch-sensitive part 39 defines a second profile zone 54 for selecting a second profile for the signal processing chain 30 and the lower half of the first touch-sensitive part 39 also defines the third profile zone 55 for selecting the third profile for the signal processing chain 30.
- the three profiles for the signal processing chain 30 summarize the respective signal processing settings or signal processing parameters that are to be used in the signal processing chain 30 depending on the activated menu.
- Touching one of the profile zones 53 - 55 (upper or lower half) of the relevant touch-sensitive part 39 selects the respective associated profile in order to use it in the signal processing of the microphone signals MS1 - MS4. This takes place in real time, while the person 100 is wearing the system 1 and the signal processing chain 30 is being run through.
- the profile transition between the currently active profile and the profile selected by touching the respective zone is preferably smooth, which is also referred to as "fading", and is designed to take a certain amount of time, for example one to three seconds, preferably around 2 seconds, so that a smooth transition to the selected profile to be used is created. Abrupt signal processing changes, which could sometimes be interpreted as disruptive or as a malfunction, are thus reliably avoided.
- the software of system 1, i.e. of each processing device 2 or 3 also allows a "hold and swipe” functionality, which is referred to in technical jargon as a “swipe” functionality, which is advantageously used in the context of system 1 to mix profiles with one another.
- the person first touches one of the profile zones 53 - 55, leaves the finger on it and then drags the finger into the adjacent profile zone before the finger is lifted from the touch-sensitive element. This is interpreted by the software in such a way that the profiles assigned to the two profile zones swept by the fingers are to be mixed.
- If, for example, this movement is carried out on the right, second processing device 3, the signal processing settings applicable to the signal processing chain 30 according to the first profile are mixed with those of the third profile. If the same is done on the left, first processing device 2, the signal processing settings of the second profile are mixed with those of the third profile.
- the degree of mixing (or in other words the mixing ratio) of the two profiles to be mixed can be adjusted by the width of the swiping movement (de facto between 0 and 100%).
- the first profile of the signal processing chain 30 was designed for natural hearing (home setting) and the third profile of the signal processing chain 30 was programmed as an autofocus on speech signals, so that the beamformer in the first signal processing stage 31 always selects a speech vector and the DSP (DSP stands for "digital signal processor") of the third signal processing stage 33 is set to speech. If you now touch the first profile zone 53, hold it and then swipe downwards, the stable "home" profile approaches the uncompromising speech autofocus.
- the mixing ratio can advantageously be controlled by the width of the finger movement, i.e. dynamically adjusted in real time. As a result, an audio signal optimized for normal hearing is mixed with an audio signal optimized for speech according to the selected mixing ratio.
- the user can also dynamically switch back and forth between the two profiles or dynamically change the mixing level as long as his finger is resting on the touch-sensitive element but is being moved there, for example to find out which mixing ratio suits him best or to consciously switch back and forth between two mixing ratios in order to temporarily optimize his acoustic perception according to the first profile or the second profile at short intervals. Only when he lifts his finger from the touch-sensitive element is the mixing ratio set at that time fixed. Furthermore, the user, i.e. the person 100 wearing the system 1, always returns to the original, stable listening mode (the "home” profile) with a "swipe” gesture up or down, i.e. with a quick swipe of the finger across the two zones touching the touch-sensitive element, regardless of whether this "swipe” gesture is carried out on the first processing device 2 or the second processing device 3.
- the entire system 1 can be controlled by touch.
- the touch bar 20 is freely assignable and can control practically all system parameters, i.e. the settings of the signal processing chain 30. This allows the user to intervene in their sound reality in real time.
- dynamic profiles can also be generated with it, whereby the profiles placed on the touch bar 20 are mixed with each other as discussed. This makes it easy to control complex DSP functions, for example, and gives the user the feeling that they are interacting with the sound. The feeling of the user, often perceived as a disadvantage with known systems, that they are only changing programs with a plastic button, is thus completely avoided.
- the user interface can adapt dynamically to the user, e.g. as a result of an interaction that has just been carried out or due to external circumstances that were determined, for example, through the evaluation of the incoming sound.
- the user interface can also change its state briefly and, for example, provide the function of a call acceptance button when a call is received via a smartphone, which is communicated to System 1 via a Bluetooth connection with the smartphone, for example.
- the short-term change can also be caused by the evaluation of the incoming sound if this appears necessary based on the evaluation.
- a zone (e.g. the upper half) of the touch strip 20a or 20b can be assigned to the "Speech Autofocus" function. Tapping in the upper area of the touch strip 20a or 20b assigned to this menu item then executes a special plug-in (i.e. a software component) in processing stages 4 and 5, with the help of which the sound field is searched for speakers and one or more virtual microphones are aimed at these sound sources or at the spatial areas or directional areas in which the sound sources were identified.
- a "just listen” function can be activated via the touch bar 20.
- This can be a completely stable hearing program that is, however, adapted to the hearing curve of the user of System 1 and is therefore suitable for or leads to a natural hearing sensation.
- This function would consist of a mix of omnidirectional and/or, for example, cardioid directionally recorded audio signals (vector to the front, front of user).
- the mix of the individual microphone signals MS1 - MS4 is determined by binaural (localization) hearing tests.
- the frequency response results from a binaural (frequency) hearing test.
- the polar pattern i.e. the directional characteristic used, is generated by the mix of several virtual microphones. This applies to all programs of this System 1.
- the only difference here is that the mix is absolutely static.
- the fact that the mix is stable means that no automatic mechanisms intervene during operation.
- the localization ability is restored by binaural mixdown, which is known in the context of Ambisonics.
- This "just listen” profile also forms the basis of all other profiles, with the difference that other profiles can create and freely mix freely definable virtual microphones of any type and number (limited to the order of the microphone array 10 - 13; in the present embodiment it is First Order Ambisonics).
- An example of this is a "noise cancelling" function that can be called up on the touch strip 20a or 20b, which has a dynamic behavior.
- the "noise cancellation” profile is active, the environment is continuously analyzed and noise in the sound field is selectively removed fully automatically. This function searches for and suppresses permanent (interference) noise, while at the same time music, ambient noise to be observed and speech remain consistently perceptible for the user.
- the signal processing chain 30 therefore adapts dynamically to the spatial and temporal dynamics of the noise.
- the haptic module 45, which can also be referred to as a feedback module, gives the user, i.e. the person who wears or uses the system 1, feedback under a wide variety of operating conditions, for example when the touch bar 20 or the respective touch strip 20a or 20b has been touched or the system 1 wants to communicate something during normal operation.
- This feedback module serves to enable uninterrupted “notifications”, i.e. messages, to the user.
- the sound image generated for the user by the system 1 is never disturbed by the tactile feedback when the volume or program changes, etc., because this provides feedback without acoustic interruption or acoustic overlay.
- a first area, such as a head area 90, of the second touch bar 20b is assigned a first profile P1, and a second area, such as a foot area 91, of the first touch bar 20a is assigned a second profile P2.
- the data structure DS of the profiles P1 and P2 is constructed identically in order to easily enable the "mixing" of the values or settings of the two profiles as a result of user interaction.
- This data structure DS is visualized by a first matrix M1 and a second matrix M2 with "boxes" that represent data elements, with each box of the first profile P1 being assigned exactly one box of the second profile P2.
- the columns here bring together the signal processing parameters used in the respective signal processing stages 31 to 34, with a row entry for storing the relevant value or the relevant setting of a signal processing parameter being provided for each signal processing parameter.
- Those boxes that are filled with the number "1" indicate that these are the values or settings for the respective signal processing parameters according to the first profile P1.
- Those boxes filled with the number “3” indicate that they are the values or settings for the signal processing parameters according to the second profile P2.
- FIG. 12 shows, with three images along the time axis t, a temporal sequence of a movement during operation of the user interface, whereby the finger 104 is first placed in the head area 90 and is subsequently moved to the foot area 91, while the finger remains in contact with the touch strip 20b.
- In between, the finger 104 is in an intermediate position between the head area 90 and the foot area 91, as shown in the middle figure. Only when the foot area 91 is reached is the finger 104 lifted off the touch strip 20b. This sequence of finger movements is captured by the input module 20: the interaction data ID describes the detected movement of the finger 104 as a "hold and swipe" movement and also provides the respective position of the finger 104 along the touch strip 20b in real time as long as the finger is held on the touch strip 20b.
- The second processing stage 5 blends, in real time, the values or settings of the signal processing parameters of the first profile P1 - scaled according to the respective position of the finger 104 between the two outermost ends of the first touch strip 20a - into the values or settings of the signal processing parameters of the second profile P2.
- This fundamentally dynamic process is visualized, in a snapshot for the intermediate position, by a third matrix M3, the boxes of which contain the number "2", indicating an intermediate profile P3 that is generated dynamically in real time and applied in real time in the signal processing chain 30.
- The values and settings of the respective signal processing parameter of the intermediate profile P3 result from a (for example linear) interpolation, dependent on the position of the finger 104, between the value or setting of the signal processing parameter of the first profile P1 and the value or setting of the corresponding signal processing parameter of the second profile P2.
- This interpolation is performed for each signal processing parameter.
- The listening experience conveyed by the intermediate profile P3 therefore represents a mixture of the listening experiences conveyed by the first profile P1 and the second profile P2, the mixing ratio being given by the finger position. The position of the finger 104 thus essentially affects every signal processing parameter, unless a parameter is intended to be constant or is stored identically in the two profiles P1 and P2.
- Figure 13 shows the reverse process: the starting point of the "hold and swipe" movement is the foot area 91, to which the second profile P2 is assigned, and the movement runs via an intermediate position, where a dynamically generated intermediate profile P3 is created and applied, to the head area 90, to which the first profile P1 is assigned.
- The finger 104 can also remain on the touch strip 20b for a longer period of time while it is moved up and down, i.e. a back-and-forth "swipe" movement is carried out in order to continuously generate an intermediate profile P3 corresponding to the respective finger position.
- The person 100 can use this process as a kind of fine adjustment of the intermediate profile P3 in order to find the ideal setting of its signal processing parameters for the listening experience.
- When the finger 104 is lifted off the touch strip 20b, the interaction is ended, i.e. the processing stage 3 ends the interpolation of the values or settings of the signal processing parameters between the first profile P1 and the second profile P2 for the intermediate profile P3. The intermediate profile P3 present at this time is retained from then on for the signal processing chain until the next "hold and swipe" movement is detected or until the person directly taps the head area 90 or the foot area 91, whereby the profile P1 or P2 assigned there is activated immediately.
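- The following sketch illustrates, purely under stated assumptions, how the interaction data ID could drive this behavior; the gesture labels, the position convention (1.0 at the head area 90, 0.0 at the foot area 91) and the parameter names are placeholders chosen for illustration, not the actual implementation:

```python
from dataclasses import dataclass

# A profile is modelled here as a flat dictionary of signal processing
# parameters; the parameter names are placeholders.
Profile = dict[str, float]

@dataclass
class InteractionData:
    """Simplified stand-in for the interaction data ID."""
    gesture: str          # "hold_and_swipe", "tap_head", "tap_foot" or "release"
    position: float = 0.0 # 1.0 = head area 90, 0.0 = foot area 91

def mix_profiles(p1: Profile, p2: Profile, weight_p1: float) -> Profile:
    """Linear interpolation of every parameter present in both profiles."""
    return {k: weight_p1 * p1[k] + (1.0 - weight_p1) * p2[k]
            for k in p1.keys() & p2.keys()}

def handle_interaction(id_: InteractionData, p1: Profile, p2: Profile,
                       current: Profile) -> Profile:
    """Returns the profile to be applied in the signal processing chain."""
    if id_.gesture == "hold_and_swipe":
        # Head area -> pure P1, foot area -> pure P2, in between -> intermediate P3.
        return mix_profiles(p1, p2, weight_p1=id_.position)
    if id_.gesture == "tap_head":
        return dict(p1)   # direct activation of the first profile
    if id_.gesture == "tap_foot":
        return dict(p2)   # direct activation of the second profile
    return current        # on release, the last intermediate profile is retained
```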
- For the dynamic, real-time creation of the intermediate profile P3, reference is made to Figure 14. It shows the first profile P1, the second profile P2 and the third profile P3 in a meta-description of their signal processing parameters.
- The meta-description characterizes here, by way of example: noise suppression 93; an orientation 94 of the virtual microphones, i.e. an Ambisonics focus; a gain or volume 95; a compressor ratio 96; a bandpass frequency range 97.
- The first profile P1 is optimized for the pure perception of speech that comes from a direction corresponding to the direction in which the person is looking.
- The second profile P2 is optimized for general all-round hearing, in other words "hearing everything".
- The intermediate profile P3, which results from detecting a "hold and swipe" movement of the finger 104 of the person 100 when the finger is positioned exactly in the middle between the head area 90 and the foot area 91, forms in real time an interpolated compromise for the listening experience of the person 100 between the two listening-experience extremes of the predefined profiles P1 and P2.
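- As a worked example of such a midpoint mix, a minimal sketch is given below; the parameter names loosely follow the meta-description, while the numerical values and units are assumptions chosen for illustration only:

```python
# Hypothetical parameter values; the numbers and units are illustrative.
p1 = {"noise_suppression_db": 12.0, "focus_width_deg": 60.0, "gain_db": 6.0,
      "compressor_ratio": 4.0, "bandpass_low_hz": 300.0, "bandpass_high_hz": 3400.0}
p2 = {"noise_suppression_db": 0.0, "focus_width_deg": 360.0, "gain_db": 0.0,
      "compressor_ratio": 1.0, "bandpass_low_hz": 20.0, "bandpass_high_hz": 20000.0}

# Finger exactly in the middle of the touch strip -> 50/50 mix for profile P3.
p3 = {k: 0.5 * p1[k] + 0.5 * p2[k] for k in p1}
print(p3)  # e.g. compressor_ratio 2.5, focus_width_deg 210.0, ...
```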
- A configuration application is discussed below which is executed on a mobile phone 98, the mobile phone 98 being in (radio) connection with at least one of the processing devices 2 or 3 in order to transmit the settings made by the person 100 to it.
- The application shows one of the two touch strips 20a or 20b and a list of profiles Pn, Pm, Po, ... that can be assigned to the head area 90 or the foot area 91.
- The profile Pn was assigned to the head area 90 by manual selection and the profile Pm to the foot area 91.
- The person 100 can select different movements or gestures from a list and associate these movements or gestures with profiles for both the left and the right processing device 2 and 3, respectively.
- This allows actions to be triggered that relate to the application of individual profiles, without having to perform the aforementioned interpolation between two profiles.
- The gestures listed here are, for example, touch & hold (marked as "T&H"), a "TAP", and an upward "SWIPE", where the direction of the swipe is indicated by an upward arrow.
- Figure 17 shows movements or gestures that must be performed essentially simultaneously on both touch strips 20a and 20b in order to activate the respective profile Pn - Pp shown next to them.
- The symbols shown are to be understood as follows:
- An arrow pointing upwards or downwards represents a swiping movement, the direction of the swipe being indicated by the respective arrow direction.
- A closed ring represents a single short touch (a single tap).
- A dotted ring represents a double short touch (a double tap).
- Figure 18 also shows setting options that affect the function and operation of the touch strips 20a and 20b.
- The two touch strips 20a and 20b can be operated in a coupled or an uncoupled (individual) state.
- The sensitivity can also be adjusted, and it can be specified whether haptic feedback should be provided, etc.
- Figure 19 shows settings for how a detected movement or gesture (e.g. G7) can be used for (media) control of an external device, such as a smartphone. Movements or gestures can be used to accept calls, mute calls, increase or decrease the volume of the external device, or mute it.
- The playback of songs, messages or other multimedia sources can also be controlled, such as starting, pausing or stopping playback, up to basic control of an external device - in this case a PC - etc.
- Figure 20 shows an interface for learning movements or gestures, so that the person 100 can define self-developed movements or gestures or can adapt the characteristics of given movements or gestures to their own needs or circumstances.
- The movements or gestures learned by the application can then be associated with the respective profile (here, e.g., the bottom one).
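- A minimal sketch of how the configuration application might represent and transmit such gesture assignments is given below; the gesture names, profile identifiers and the JSON layout are assumptions, not the actual protocol of the embodiment:

```python
import json

# Illustrative gesture-to-action mapping as the configuration app might build it.
gesture_config = {
    "left":  {"tap": "activate:Pn", "swipe_up": "activate:Pm", "touch_and_hold": "mute"},
    "right": {"tap": "activate:Po", "double_tap": "media:play_pause"},
    "both":  {"swipe_up": "activate:Pp"},   # gestures performed on both strips at once
}

# Serialized once and then transmitted to the processing devices 2 and 3,
# e.g. over the existing radio link.
payload = json.dumps(gesture_config).encode("utf-8")
```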
- The areal interaction area of the input module 20 is discussed below.
- The areal interaction area can, for example, be designed as a continuous area between the head area 90 and the foot area 91.
- The interaction area can also be structured.
- Such a structured areal interaction area is shown, for example, in FIG. 21.
- The touch strip 20a or 20b has three separate touch-sensitive detection zones, namely a first detection zone 110, a second detection zone 111 and a third detection zone 112, with which a touch or a movement can be actively detected electronically, in particular capacitively.
- Each of the detection zones 110 - 112 of the relevant touch strip 20a or 20b is connected to an input module controller (each with a separate controller or all with a common input module controller), so that touches and movements occurring in one of the detection zones 110 - 112 can be detected and evaluated.
- The detection zones 110 - 112 can be directly adjacent to each other or separated from each other by passive zones 113 and 114.
- The passive zones 113 and 114 do not allow direct detection of a touch or movement and only serve as spacing between the detection zones 110 - 112.
- The passive zones 113 and 114 can be straight, curved or - as shown in Figure 21 - wedge-shaped.
- Menu areas or menu items can be assigned to the respective detection zones in a physically separate manner.
- A first profile can be assigned to the head area 90 (here the first detection zone 110) and a second profile to the foot area 91 (here the third detection zone 112).
- Interpolation can be carried out between the profile settings (values or settings of the signal processing parameters) of the two profiles when a hold and swipe movement is present, as discussed, in order to generate the dynamic intermediate profile.
- The middle detection zone 111 can be assigned a predefined third profile that lies between the other two profiles, i.e. the first and the second profile. Intermediate profiles can then be created dynamically along the touch strip 20a or 20b, namely one between the listening-experience extremes of the first profile and the third profile, and one between the listening-experience extremes of the third profile and the second profile.
- Alternatively, the middle detection zone 111 can be used only to quickly activate another profile or even just a single function, in which case the intermediate profile is set only between the first and the second profile.
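- Purely as an illustrative sketch (the normalized position convention and the dictionary-based parameter representation are assumptions), an interpolation across three anchor profiles along the strip could be computed piecewise:

```python
def zone_profile(position: float, p_head: dict, p_mid: dict, p_foot: dict) -> dict:
    """Piecewise-linear mix along the strip: 1.0 = head area (first zone),
    0.5 = middle zone, 0.0 = foot area (third zone)."""
    if position >= 0.5:
        w = (position - 0.5) / 0.5   # 0 at the middle zone, 1 at the head area
        a, b = p_head, p_mid
    else:
        w = position / 0.5           # 0 at the foot area, 1 at the middle zone
        a, b = p_mid, p_foot
    return {k: w * a[k] + (1.0 - w) * b[k] for k in a}
```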
- The dynamic transfer - in other words a smooth change - of the settings of one profile to the settings of another profile can be carried out by user interaction, generating the dynamic intermediate profile. The person 100 can use this when switching between different environments by consciously performing the hold and swipe movement.
- The processing device 2 or 3 uses machine learning algorithms to make settings for the processing devices and to assign the profiles available there to the touch strips 20a or 20b. These automatic processes can run on the basis of real-time recordings of signals or physical parameters obtained with sensors and/or the microphones of the processing devices 2 or 3. It can also be provided that automatic interpolation takes place between these automatically assigned profiles. For this purpose, previously recorded and evaluated user behavior that was taken into account in the machine learning can be used, for example.
- The concept of flowing mixing of parameters includes the ability to seamlessly combine different profiles or parameter sets in order to achieve a better user experience. This makes it possible for the first time to mix several profiles, such as transparency, voice amplification, music settings, etc., seamlessly. Profiles are no longer activated or deactivated in an either/or sense. Rather, the defined profiles only define the corner points between which interpolation takes place depending on the respective finger positions. Thus, when both touch strips 20a and 20b are used (one on each side), at least four profiles can be available.
- On the one hand, two profiles can be mixed together on each of the left and the right side; on the other hand - in the coupled operating mode - even three or up to four profiles can be mixed together.
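- One conceivable way to combine up to four corner profiles in the coupled operating mode is a bilinear weighting of the two finger positions, sketched below purely for illustration; the weighting scheme and the balance parameter are assumptions, not the actual mixing rule:

```python
def mix_four(p_lt: dict, p_lb: dict, p_rt: dict, p_rb: dict,
             pos_left: float, pos_right: float, balance: float = 0.5) -> dict:
    """One corner profile per end of each touch strip (left/right top/bottom).
    pos_left / pos_right: 1.0 = head area, 0.0 = foot area; balance weights
    the left strip against the right strip."""
    left = {k: pos_left * p_lt[k] + (1.0 - pos_left) * p_lb[k] for k in p_lt}
    right = {k: pos_right * p_rt[k] + (1.0 - pos_right) * p_rb[k] for k in p_rt}
    return {k: balance * left[k] + (1.0 - balance) * right[k] for k in left}
```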
- Since the processing devices 2 and 3 are naturally located at a position on the head that is inaccessible to the user's own visual perception, it can be advantageous to design the housing parts in such a way that it is clear where the finger is located just by touch.
- The parts of the processing devices 2 and 3 that are connected to the touch strips 20a and 20b can be made of a different material than the touch strips 20a and 20b themselves.
- The edge of the touch strips 20a and 20b can also be designed differently along their length, such as being corrugated, grooved, etc., in order to provide the user with haptically perceptible position information.
- The description so far has dealt very generally with the mixing or merging of profiles which, as discussed, can be carried out very simply by moving a single finger along the respective touch strip 20a or 20b.
- The touch strip 20a or 20b can also be used to control the alignment of the virtual microphones of the Ambisonics signal processing stage.
- The processing device can also be configured (programmed) to detect that the person 100 places two fingers on the touch strip 20a or 20b.
- The fingers can be placed spread out in a V-shape (e.g. the index finger and middle finger of one hand), with one finger on the head area 90 and the other finger on the foot area 91, after which both fingers are moved towards each other, towards the middle of the touch strip 20a or 20b.
- This movement, represented by the interaction data ID generated in the process, can be interpreted by the processing device 2 or 3 to the effect that a profile is set that enables all-round sound detection as a listening experience.
- The placement can also occur approximately in the middle of the touch strip 20a or 20b, after which both fingers (e.g. the index finger and the middle finger of one hand) are moved away from each other, i.e. one of the two fingers towards the head area 90 and the other finger towards the foot area 91.
- This movement, represented by the interaction data ID generated in the process, can be interpreted by the processing device 2 or 3 in such a way that a profile is set that enables frontally oriented sound detection as a listening experience.
- A smooth transition of the values and settings is made in order to obtain the frontally oriented sound detection.
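- A minimal sketch of how such a two-finger gesture could be interpreted from the change in finger spacing is given below; the threshold, the focus-width targets and the function names are illustrative assumptions:

```python
def interpret_two_finger_gesture(start_gap: float, end_gap: float,
                                 threshold: float = 0.1) -> str | None:
    """start_gap / end_gap: distance between the two fingers along the touch
    strip, normalized to 0..1; the threshold is an assumed debounce value."""
    if end_gap < start_gap - threshold:
        return "all_round"   # fingers moved towards each other
    if end_gap > start_gap + threshold:
        return "frontal"     # fingers moved apart
    return None              # no clear gesture detected

def target_focus_width(gesture: str) -> float:
    # Ramping smoothly towards these assumed targets would give the described
    # gradual transition of the values and settings.
    return 360.0 if gesture == "all_round" else 60.0
```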
- The touch strip 20a or 20b can also be used simply to suppress the acoustically perceptible output of the audio data stream. Simply placing a finger on it for a longer period of time can be interpreted as muting the sound. When the processing device 2 or 3 is muted, the same gesture can be interpreted again as activating the sound emission.
- The touch strip 20a or 20b can also be used for the step-by-step selection of speakers in the surroundings of the person 100.
- The Ambisonics system may have previously analyzed the environment of the person 100 and identified different speakers in different directions. This analysis can, of course, also be carried out continuously in real time.
- A relatively slow swipe movement from top to bottom can now be interpreted as changing the focus of the virtual microphones from a first to a second speaker in a clockwise direction.
- A corresponding movement from bottom to top can be interpreted to mean that the focus of the virtual microphones is changed counterclockwise from a first to a second speaker.
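- The following sketch shows, under simplifying assumptions (no wrap-around of azimuth angles, illustrative function and parameter names), how a slow swipe could step the focus of the virtual microphones through the previously identified speaker directions:

```python
def next_speaker(current_azimuth_deg: float, speakers_deg: list[float],
                 swipe: str) -> float:
    """Slow swipe downwards -> next speaker clockwise,
    slow swipe upwards -> next speaker counterclockwise.
    speakers_deg holds directions identified by the Ambisonics analysis."""
    ordered = sorted(speakers_deg)
    # Index of the direction currently closest to the focus (simplified).
    i = ordered.index(min(ordered, key=lambda a: abs(a - current_azimuth_deg)))
    step = 1 if swipe == "down" else -1
    return ordered[(i + step) % len(ordered)]
```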
- Figure 24 is discussed below; it shows a further visualization of the mixing of two profiles P1 and P2 into a final third profile P3 used in the signal processing chain 30.
- The first profile P1 has a parameter collection A and the second profile P2 has a parameter collection B.
- The head area 90 of the touch bar 20b is assigned 100% to the profile P1, so that when the finger is positioned there, only the parameter collection A is used in the signal processing chain 30.
- The foot area 91 of the touch bar 20b is assigned 100% to the second profile P2, so that when the finger is positioned there, only the parameter collection B is used in the signal processing chain 30.
- Figure 25 shows how the finger position FP is specifically used to mix the profiles P1 and P2.
- The recorded finger position FP is specified with a position detection value PEW in percent, between 0% at the foot area 91 and 100% at the head area 90.
- The finger is now positioned exactly in the middle between the foot area 91 and the head area 90, so that the finger position FP is specified with a position detection value of 50%.
- In this case, the parameter collection A of the first profile P1 and the parameter collection B of the second profile P2, each weighted at 50%, are used to generate the third profile P3.
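- Expressed as a simple weighting rule, and purely as an illustrative sketch (parameter names and the helper function are assumed), this corresponds to:

```python
def mix_collections(collection_a: dict, collection_b: dict, pew_percent: float) -> dict:
    """pew_percent: 100 at the head area 90 (pure collection A of profile P1),
    0 at the foot area 91 (pure collection B of profile P2)."""
    w = pew_percent / 100.0
    return {k: w * collection_a[k] + (1.0 - w) * collection_b[k] for k in collection_a}

# With the finger exactly in the middle (PEW = 50 %) both collections
# contribute equally to the third profile P3.
p3 = mix_collections({"gain_db": 6.0}, {"gain_db": 0.0}, pew_percent=50.0)  # {'gain_db': 3.0}
```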
- An exemplary parameter collection for one of the profiles, containing different audio effects, is shown in FIG. 27.
- The audio effect in question is listed under "EFFECT" in the first column S1.
- The value description is listed under "VALUE DESC" in the second column S2.
- The relevant value is listed under "VALUE" in the third column S3.
- Figure 28 shows the process of personalizing a profile, here the first profile P1, using a smartphone SP.
- An app (software application) is executed on the smartphone SP, which provides the necessary user interface.
- The user can first select the particular profile he/she wants to customize.
- Here, the profile for the parameter collection A is selected, assigned to the head area 90 of the touch bar (selection "ASSIGN TO TOP"), and the audio effect A (selection "EFFECT A") is set with a value of 100 (selection "VALUE A 100").
- The selection can be completed in the same way for the other available audio effects (EFFECT B to C).
- This results in the parameter collection A, which is summarized in a data block DB.
- This parameter collection A is then transmitted, for example via Bluetooth, to the hearing device 2 or 3 or to the hearing devices 2 and 3 (previously referred to as processing devices 2 and 3) and is available there for further audio signal processing.
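- Purely for illustration, the following sketch shows how the app might assemble parameter collection A into a data block and prepare it for transmission; the field names, value ranges and serialization format are assumptions and do not describe the actual data block DB:

```python
import json

parameter_collection_a = {
    "assign_to": "top",                 # head area 90 of the touch bar
    "effects": [
        {"effect": "EFFECT A", "value_desc": "0-100", "value": 100},
        {"effect": "EFFECT B", "value_desc": "0-100", "value": 40},
        {"effect": "EFFECT C", "value_desc": "on/off", "value": 1},
    ],
}

# The serialized block would then be sent to hearing device 2 or 3,
# e.g. over a Bluetooth link, and stored there for audio signal processing.
data_block = json.dumps(parameter_collection_a).encode("utf-8")
```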
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Neurosurgery (AREA)
- Otolaryngology (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Circuit For Audible Band Transducer (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP23785984.8A EP4595461A1 (fr) | 2022-09-30 | 2023-09-28 | Dispositif de traitement, en particulier des écouteurs ou une prothèse auditive, pour traiter des signaux de microphone dans un flux de données audio |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/EP2022/077278 WO2024067994A1 (fr) | 2022-09-30 | 2022-09-30 | Système et procédé de traitement de signaux de microphone |
| EPPCT/EP2022/077278 | 2022-09-30 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024068868A1 true WO2024068868A1 (fr) | 2024-04-04 |
Family
ID=84045103
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2022/077278 Ceased WO2024067994A1 (fr) | 2022-09-30 | 2022-09-30 | Système et procédé de traitement de signaux de microphone |
| PCT/EP2023/076943 Ceased WO2024068868A1 (fr) | 2022-09-30 | 2023-09-28 | Dispositif de traitement, en particulier des écouteurs ou une prothèse auditive, pour traiter des signaux de microphone dans un flux de données audio |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2022/077278 Ceased WO2024067994A1 (fr) | 2022-09-30 | 2022-09-30 | Système et procédé de traitement de signaux de microphone |
Country Status (2)
| Country | Link |
|---|---|
| EP (2) | EP4595460A1 (fr) |
| WO (2) | WO2024067994A1 (fr) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080292126A1 (en) * | 2007-05-24 | 2008-11-27 | Starkey Laboratories, Inc. | Hearing assistance device with capacitive switch |
| US20090123013A1 (en) | 2007-11-14 | 2009-05-14 | Siemens Medical Instruments Pte. Ltd. | Hearing aid device |
| US20110091059A1 (en) * | 2009-10-17 | 2011-04-21 | Starkey Laboratories, Inc. | Method and apparatus for behind-the-ear hearing aid with capacitive sensor |
| WO2016004996A1 (fr) * | 2014-07-10 | 2016-01-14 | Widex A/S | Dispositif de communication personnel ayant un logiciel d'application pour commander le fonctionnement d'au moins une aide auditive |
| US20170199643A1 (en) * | 2014-05-30 | 2017-07-13 | Sonova Ag | A method for controlling a hearing device via touch gestures, a touch gesture controllable hearing device and a method for fitting a touch gesture controllable hearing device |
| EP3358812A1 (fr) * | 2017-02-03 | 2018-08-08 | Widex A/S | Canaux de communication entre un dispositif de communication personnel et au moins un dispositif porté sur la tête |
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102008015263B4 (de) * | 2008-03-20 | 2011-12-15 | Siemens Medical Instruments Pte. Ltd. | Hörsystem mit Teilbandsignalaustausch und entsprechendes Verfahren |
| DE102012205634B4 (de) * | 2012-04-05 | 2014-07-10 | Siemens Medical Instruments Pte. Ltd. | Einstellen einer Hörgerätevorrichtung |
| DK2869599T3 (da) * | 2013-11-05 | 2020-12-14 | Oticon As | Binauralt høreassistancesystem, der omfatter en database med hovedrelaterede overføringsfunktioner |
| EP2908549A1 (fr) * | 2014-02-13 | 2015-08-19 | Oticon A/s | Dispositif de prothèse auditive comprenant un élément de capteur |
| EP2991380B1 (fr) * | 2014-08-25 | 2019-11-13 | Oticon A/s | Dispositif d'assistance auditive comprenant une unité d'identification d'emplacement |
| US10181328B2 (en) * | 2014-10-21 | 2019-01-15 | Oticon A/S | Hearing system |
| US10728677B2 (en) * | 2017-12-13 | 2020-07-28 | Oticon A/S | Hearing device and a binaural hearing system comprising a binaural noise reduction system |
| CN114208214B (zh) | 2019-08-08 | 2023-09-22 | 大北欧听力公司 | 增强一个或多个期望说话者语音的双侧助听器系统和方法 |
- 2022
- 2022-09-30 EP EP22798104.0A patent/EP4595460A1/fr active Pending
- 2022-09-30 WO PCT/EP2022/077278 patent/WO2024067994A1/fr not_active Ceased
- 2023
- 2023-09-28 EP EP23785984.8A patent/EP4595461A1/fr active Pending
- 2023-09-28 WO PCT/EP2023/076943 patent/WO2024068868A1/fr not_active Ceased
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080292126A1 (en) * | 2007-05-24 | 2008-11-27 | Starkey Laboratories, Inc. | Hearing assistance device with capacitive switch |
| US20090123013A1 (en) | 2007-11-14 | 2009-05-14 | Siemens Medical Instruments Pte. Ltd. | Hearing aid device |
| US20110091059A1 (en) * | 2009-10-17 | 2011-04-21 | Starkey Laboratories, Inc. | Method and apparatus for behind-the-ear hearing aid with capacitive sensor |
| US20170199643A1 (en) * | 2014-05-30 | 2017-07-13 | Sonova Ag | A method for controlling a hearing device via touch gestures, a touch gesture controllable hearing device and a method for fitting a touch gesture controllable hearing device |
| WO2016004996A1 (fr) * | 2014-07-10 | 2016-01-14 | Widex A/S | Dispositif de communication personnel ayant un logiciel d'application pour commander le fonctionnement d'au moins une aide auditive |
| EP3358812A1 (fr) * | 2017-02-03 | 2018-08-08 | Widex A/S | Canaux de communication entre un dispositif de communication personnel et au moins un dispositif porté sur la tête |
Also Published As
| Publication number | Publication date |
|---|---|
| EP4595461A1 (fr) | 2025-08-06 |
| WO2024067994A1 (fr) | 2024-04-04 |
| EP4595460A1 (fr) | 2025-08-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP1912474B1 (fr) | Procédé pour le fonctionnement d'une prothèse auditive et prothèse auditive | |
| EP1619928B1 (fr) | Prothèse auditive ou système de communication avec sources virtuelles | |
| EP1296537A2 (fr) | Prothèse auditive avec commutation automatique vers le mode bobine | |
| DE112015003822T5 (de) | Systeme und Verfahren zum Entzerren von Audio zur Wiedergabe auf einem elektronischen Gerät | |
| DE102007052625A1 (de) | Stereo-Bluetooth-Headset | |
| EP1933593B1 (fr) | Procédé de définition latérale pour l'adaptation d'aides auditives | |
| DE112011104939T5 (de) | Anzeigevorrichtung und Verfahren zum Steuern derselben | |
| DE202017103388U1 (de) | Erzeugung und Steuerung von Kanälen, die Zugriff auf Inhalte von unterschiedlichen Audioanbieterdiensten bereitstellen | |
| DE102014006997A1 (de) | Verfahren, Vorrichtung und Erzeugnis für drahtlose immersive Audioübertragung | |
| DE102008054087A1 (de) | Hörhilfegerät mit mindestens einem kapazitiven Näherungssensor | |
| DE102022205633A1 (de) | Räumliche audiosteuerung | |
| EP1848245B1 (fr) | Appareil auditif à séparation de source en aveugle et procédé correspondant | |
| DE102007051308B4 (de) | Verfahren zum Verarbeiten eines Mehrkanalaudiosignals für ein binaurales Hörgerätesystem und entsprechendes Hörgerätesystem | |
| JP6926640B2 (ja) | 目標位置設定装置及び音像定位装置 | |
| WO2024068868A1 (fr) | Dispositif de traitement, en particulier des écouteurs ou une prothèse auditive, pour traiter des signaux de microphone dans un flux de données audio | |
| DE102008021607A1 (de) | System und Verfahren zum Monitoren industrieller Anlagen | |
| CH709679A2 (de) | Verfahren zur Fernunterstützung von Schwerhörigen. | |
| EP2648423B1 (fr) | Réglage d'un appareil auditif | |
| DE102019210934A1 (de) | System zur Verbesserung des Hörens für hörgeschädigte Personen | |
| DE102008031581A1 (de) | Hörhilfesystem mit Mikrofonmodul | |
| EP3972291A1 (fr) | Procédé de fonctionnement d'un dispositif auditif et système auditif | |
| DE112023002632T5 (de) | Ladevorrichtung für ohrhörer umfassend benutzerschnittstelle zur steuerung der ohrhörer | |
| DE102022117387A1 (de) | Verfahren und system zur lautstärkesteuerung | |
| EP2592850B1 (fr) | Activation et désactivation automatiques d'un système auditif binaural | |
| EP2028878A2 (fr) | Procédé de définition latérale pour l'adaptation d'aides auditives |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23785984; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 2023785984; Country of ref document: EP |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | ENP | Entry into the national phase | Ref document number: 2023785984; Country of ref document: EP; Effective date: 20250430 |
| | WWP | Wipo information: published in national office | Ref document number: 2023785984; Country of ref document: EP |