WO2017134973A1 - 音響出力装置、音響出力方法、プログラム、音響システム - Google Patents
音響出力装置、音響出力方法、プログラム、音響システム Download PDFInfo
- Publication number
- WO2017134973A1 (PCT/JP2017/000070)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sound
- unit
- acoustic
- audio signal
- output device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R5/00—Stereophonic arrangements
- H04R5/033—Headphones for stereophonic communication
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K15/00—Acoustics not otherwise provided for
- G10K15/08—Arrangements for producing a reverberation or echo sound
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K15/00—Acoustics not otherwise provided for
- G10K15/08—Arrangements for producing a reverberation or echo sound
- G10K15/10—Arrangements for producing a reverberation or echo sound using time-delay networks comprising electromechanical or electro-acoustic devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
- H04R1/1016—Earpieces of the intra-aural type
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
- H04R1/34—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R29/00—Monitoring arrangements; Testing arrangements
- H04R29/001—Monitoring arrangements; Testing arrangements for loudspeakers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S1/00—Two-channel systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/305—Electronic adaptation of stereophonic audio signals to reverberation of the listening space
- H04S7/306—For headphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
- H04R1/34—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means
- H04R1/345—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means for loudspeakers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
- H04R1/40—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
- H04R1/406—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2460/00—Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
- H04R2460/09—Non-occlusive ear tips, i.e. leaving the ear canal open, for both custom and non-custom tips
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/01—Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
Definitions
- the present disclosure relates to a sound output device, a sound output method, a program, and a sound system.
- In Patent Document 1, a technique is known for reproducing reverberation by measuring an impulse response under a predetermined environment and convolving an input signal with the obtained impulse response.
- Patent Document 1 convolves an impulse response acquired by measurement in advance with a digital audio signal to which reverberation is to be added. The technique described in Patent Document 1 therefore does not assume adding a spatial simulation transfer function process (for example, reverberation/reverb) that simulates a desired space to sound acquired in real time.
- the space simulation transfer function process is referred to as “reverb process” for simplicity.
- the reverb processing here is expressed on the basis of the transfer function between two points in space, for the purpose of simulating that space.
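As a minimal numerical sketch of what such reverb processing amounts to (the function name and toy signals are illustrative, not taken from the patent), convolving a captured signal with a measured impulse response can be written as follows; multiplying spectra in the frequency domain is equivalent to time-domain convolution:

```python
import numpy as np

def reverb_process(mic_signal: np.ndarray, impulse_response: np.ndarray) -> np.ndarray:
    """Convolve a captured signal with a room impulse response.

    Frequency-domain multiplication of the transfer functions is
    equivalent to time-domain convolution, so an FFT is used.
    """
    n = len(mic_signal) + len(impulse_response) - 1
    size = 1 << (n - 1).bit_length()  # zero-pad to avoid circular wrap-around
    spectrum = np.fft.rfft(mic_signal, size) * np.fft.rfft(impulse_response, size)
    return np.fft.irfft(spectrum, size)[:n]

# A unit impulse passed through the IR reproduces the IR itself
ir = np.array([1.0, 0.0, 0.5, 0.25])   # toy impulse response
wet = reverb_process(np.array([1.0, 0.0, 0.0]), ir)
```

Note that this toy IR still contains the direct-sound tap (the leading 1.0); the disclosure later removes that portion for the open-ear case.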
- According to the present disclosure, there is provided a sound output device including: a sound acquisition unit that acquires a sound signal based on surrounding sound; a reverb processing unit that performs reverb processing on the sound signal; and a sound output unit that outputs sound based on the reverb-processed sound signal in the vicinity of the ear of the listener who listens to it.
- According to the present disclosure, there is also provided a program for causing a computer to function as: means for acquiring a sound signal based on ambient sound; means for performing reverb processing on the sound signal; and means for outputting the sound generated by the reverb-processed sound signal in the vicinity of the listener's ear.
- According to the present disclosure, there is also provided a sound system including: a first sound output device having a sound acquisition unit that acquires a sound signal based on surrounding sound, an acoustic environment information acquisition unit that acquires, from a second sound output device that is the communication partner, acoustic environment information representing the acoustic environment around the second sound output device, a reverb processing unit that performs reverb processing on the sound signal acquired by the sound acquisition unit according to the acoustic environment information, and a sound output unit that outputs sound based on the reverb-processed sound signal to the listener's ear; and the second sound output device, which likewise has a sound acquisition unit, an acoustic environment information acquisition unit that acquires acoustic environment information representing the acoustic environment around the first sound output device, a reverb processing unit that performs reverb processing on its acquired sound signal according to that information, and a sound output unit that outputs sound based on the reverb-processed sound signal to the listener's ear.
- According to the present disclosure as described above, a desired reverberation is added to sound acquired in real time, and the listener can listen to it.
- the above effects are not necessarily limiting; together with or in place of them, any of the effects shown in the present specification, or other effects that can be grasped from the present specification, may be exhibited.
- FIG. 5 is a schematic diagram showing a user wearing a sound output device in the system shown in FIG. 4.
- FIG. 6 is a schematic diagram showing a processing system in which the user experiences reverb-processed sound using ordinary "sealed type" headphones, such as a canal type, and a microphone.
- FIG. 7 is a schematic diagram showing, for the case of FIG. 6, the sound pressure response image on the eardrum when the sound emitted from the sound source is an impulse and the spatial transmission is flat.
- FIG. 8 is a schematic diagram illustrating a case where an impulse response IR is used in the same sound field environment as in FIGS. 6 and 7 using an "ear hole open type" sound output device.
- FIG. 9 is a schematic diagram showing, for the case of FIG. 8, the sound pressure response image on the eardrum when the sound emitted from the sound source is an impulse and the spatial transmission is flat.
- FIG. 10 is a schematic diagram showing an example in which realistic presence is further enhanced by applying the reverb process.
- A further schematic diagram shows an example combined with an HMD display based on video content.
- FIG. 1 and FIG. 2 are schematic diagrams showing the structure of the sound output device 100 according to one embodiment of the present disclosure.
- FIG. 1 is a front view of the sound output device 100
- FIG. 2 is a perspective view of the sound output device 100 as viewed from the left side.
- the sound output device 100 shown in FIGS. 1 and 2 is configured to be worn on the left ear, but the sound output device (not shown) for wearing the right ear is configured symmetrically with respect to this.
- the sound output device 100 shown in FIGS. 1 and 2 includes a sound generation unit (sound output unit) 110 that generates sound, a sound guide unit 120 that takes in the sound generated by the sound generation unit 110 from one end 121, and a holding unit 130 that holds the sound guide unit 120 near its other end 122.
- the sound guide portion 120 is made of a hollow tube having an inner diameter of 1 to 5 mm, and both ends thereof are open ends.
- One end 121 of the sound guiding unit 120 is an acoustic input hole for the sound generated from the acoustic generating unit 110, and the other end 122 is an acoustic output hole. Therefore, when the one end 121 is attached to the sound generation unit 110, the sound guide unit 120 is open on one side.
- the holding unit 130 engages with the vicinity of the entrance of the ear canal (for example, the intertragic notch) and supports the sound guide unit 120 near its other end 122 so that the sound output hole at the other end 122 faces the back side of the ear canal.
- the outer diameter of at least the vicinity of the other end 122 of the sound guide portion 120 is formed to be smaller than the inner diameter of the ear hole. Therefore, even if the other end 122 of the sound guide part 120 is held near the entrance of the ear canal by the holding part 130, the ear hole of the listener is not blocked. That is, the ear hole is open.
- the sound output device 100 can be referred to as an “ear hole open type”.
- the holding unit 130 includes an opening 131 that opens the ear canal entrance (ear hole) to the outside world even when the sound guide unit 120 is held.
- the holding portion 130 is a ring-shaped structure connected to the vicinity of the other end 122 of the sound guide portion 120 only by a rod-shaped support member 132; all other parts of the ring-shaped region form the opening 131.
- the holding portion 130 is not limited to the ring-shaped structure, and may have any shape that can support the other end 122 of the sound guide portion 120 as long as it has a hollow structure.
- the air vibration propagates through the sound guide unit 120, is radiated into the ear canal from the other end 122 held near the entrance of the ear canal by the holding unit 130, and is transmitted to the eardrum.
- the holding unit 130 that holds the vicinity of the other end 122 of the sound guide unit 120 includes the opening 131 that opens the entrance (ear hole) of the ear canal to the outside. Therefore, the ear hole of the listener is not blocked even when the sound output device 100 is worn. The listener can sufficiently listen to the ambient sound through the opening 131 while wearing the sound output device 100 and listening to the sound output from the sound generation unit 110.
- although the sound output device 100 leaves the ear hole open, leakage of the sound generated by the sound generation unit 110 (reproduced sound) to the outside can be suppressed. This is because the other end 122 of the sound guide unit 120 is attached so as to face the vicinity of the entrance of the ear canal and radiates the air vibration near the eardrum, so that sufficient sound quality can be obtained even when the output of the sound generation unit 110 is kept small.
- FIG. 3 shows a state in which the acoustic output device 100 with an open ear hole outputs sound waves to the listener's ear. Air vibration is radiated from the other end 122 of the sound guide portion 120 toward the inside of the ear canal.
- the ear canal 300 is a hole that begins at the ear canal entrance 301 and ends at the eardrum 302, and is generally approximately 25 to 30 mm long.
- the external auditory canal 300 is a cylindrical closed space.
- the air vibration radiated from the other end 122 of the sound guide unit 120 toward the back of the ear canal 300 propagates to the eardrum 302 with directivity, as indicated by reference numeral 311.
- in addition, since the air vibration propagates within the closed space of the ear canal, the sensitivity (gain) in the low range is particularly improved.
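As a rough back-of-envelope illustration of ear canal acoustics (the tube model and the numbers are an assumption for orientation, not figures from the disclosure), a canal of the stated 25-30 mm length, modeled as a tube open at the entrance and closed at the eardrum, has a fundamental quarter-wave resonance around 3 kHz:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 °C

def quarter_wave_resonance(length_m: float, c: float = SPEED_OF_SOUND) -> float:
    """Fundamental resonance (Hz) of a tube open at one end, closed at the other."""
    return c / (4.0 * length_m)

f_30mm = quarter_wave_resonance(0.030)  # roughly 2.9 kHz
f_25mm = quarter_wave_resonance(0.025)  # roughly 3.4 kHz
```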
- the outside of the ear canal 300, that is, the outside world, is an open space.
- the air vibration radiated out of the ear canal 300 from the other end 122 of the sound guide portion 120 has no directivity in the outside world and is rapidly attenuated as indicated by reference numeral 312.
- the tubular sound guide portion 120 has a bent shape that is folded back from the back side to the front side of the ear hole at the intermediate portion.
- the bent portion is a pinch portion 123 having an open / close structure, and can generate a pinch force to pinch the earlobe, which will be described in detail later.
- the sound guide portion 120 further includes a deformable portion 124 between the other end 122 disposed near the entrance of the ear canal and the bent pinch portion 123.
- the deformation portion 124 is deformed when an excessive external force is applied, and prevents the other end 122 of the sound guide portion 120 from entering the depth of the ear canal more than necessary.
- the listener can naturally listen to the ambient sound while the sound output device 100 is being worn. Therefore, it is possible to normally use human functions that depend on auditory characteristics such as grasping space, perception of danger, conversation, and grasping subtle nuances during conversation.
- unlike a device that surrounds the ear hole with a reproduction structure so that the surrounding sound is not transmitted, the sound output device 100 can be regarded as acoustically transparent: the surrounding sound can be heard as it is, and at the same time the target audio information or music can be reproduced through the pipe or duct shape, so that both sounds can be heard.
- in the case of earphone devices commonly used at present, the structure is basically sealed and closes the ear canal, so the user's own voice and mastication sounds are heard differently than when the ear canal is open, which creates a sense of incongruity and often causes discomfort. This is thought to be because self-generated sounds and mastication sounds are radiated into the sealed external auditory canal through bone and flesh, reaching the eardrum with the low range enhanced. Since such a phenomenon does not occur with the sound output device 100, it is possible to enjoy a normal conversation while listening to the target audio information.
- the sound output device 100 passes the surrounding sound through as sound waves as it is, while transmitting the presented voice or music to the vicinity of the entrance of the ear canal through the tubular sound guide unit 120, so that the user can experience the voice and music while listening to the surrounding sounds.
- FIG. 4 is a schematic diagram showing a basic system of the present disclosure.
- microphones (microphones (sound acquisition units)) 400 are mounted on the left and right sound output devices 100.
- the microphone signal output from the microphone 400 is amplified and AD-converted by a microphone amplifier/ADC 402, subjected to DSP processing (reverb processing) by a DSP (or MPU) 404, and then DA-converted and amplified by a DAC/amplifier (or digital amplifier) 406. Sound is then generated from the sound generation unit 110, and the user hears it at the ear via the sound guide unit 120.
- microphones 400 are attached independently on the left and right, and the microphone signal is independently reverberated on each side.
- components such as the microphone amplifier/ADC 402, the DSP 404, and the DAC/amplifier 406 can be provided in the sound generation unit 110 of the sound output device 100. Each component shown in FIG. 4 can be configured by a circuit (hardware), or by a central processing unit such as a CPU together with a program (software) for causing it to function.
- FIG. 5 is a schematic diagram showing a user wearing the sound output device 100 in the system shown in FIG.
- the ambient sound that directly enters the ear canal and the sound that is picked up by the microphone 400, subjected to signal processing, and delivered through the sound guide unit 120 are acoustically added in the spatial path of the ear canal; the resulting synthesized sound reaches the eardrum, and the sound field and space are recognized on the basis of that synthesized sound.
- the DSP 404 functions as a reverb processing unit that performs reverb processing on the microphone signal.
- the so-called "sampling reverb", which convolves an impulse response measured between two points at an actual location, provides a realistic effect (the operation is equivalent, in the frequency domain, to multiplication by the transfer function).
- a filter that approximates part or all of this with IIR (Infinite Impulse Response) may be used.
- Such an impulse response can also be obtained by simulation.
- the convolution can be performed by the same method as in Patent Document 1 described above, and an FIR digital filter or a convolver can be used. At this time, it is possible to have filter coefficients for a plurality of reverbs, and the user can arbitrarily select them.
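The idea of holding filter coefficients for a plurality of reverbs and letting the user select among them can be sketched as follows (the preset names and toy coefficients are hypothetical, not from the patent):

```python
import numpy as np

# Hypothetical preset bank: one set of FIR coefficients (an impulse
# response) per selectable reverb; names and values are illustrative.
REVERB_PRESETS = {
    "hall":   np.array([1.0, 0.0, 0.0, 0.6, 0.0, 0.4]),
    "studio": np.array([1.0, 0.3, 0.1]),
}

def apply_selected_reverb(mic_signal: np.ndarray, preset: str) -> np.ndarray:
    """FIR convolution of the microphone signal with the chosen preset's IR."""
    return np.convolve(mic_signal, REVERB_PRESETS[preset])

wet = apply_selected_reverb(np.array([1.0, 0.0]), "studio")
```

In a real device the convolution would run block-by-block on the DSP; `np.convolve` here simply stands in for the FIR digital filter or convolver mentioned above.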
- the sound field can be experienced through an impulse response (IR) measured or simulated in advance.
- with reference to FIG. 6 and FIG. 7, a processing system in which the user experiences reverb-processed sound using ordinary "sealed" headphones 500, such as a canal type, and a microphone 400 will be described.
- the configuration of the headphone 500 shown in FIG. 6 is the same as that of the sound output device 100 shown in FIG. 4 except that it is “sealed”, and a microphone 400 is provided in the vicinity of the left and right headphones 500.
- the sealed headphone 500 is assumed to have high sound insulation.
- assume that an impulse response IR as shown in FIG. 6 has been measured in order to simulate a specific sound field space. As shown in FIG. 6, the sound generated by the sound source 600 is collected by the microphone 400, and the IR itself, including the direct sound component, is convolved as reverb with the microphone signal in the DSP 404, so that the user can feel the specific sound field space.
- the illustration of the microphone amplifier / ADC 402 and the DAC / amplifier 406 is omitted.
- even though the headphones 500 are hermetically sealed, sound insulation is often not sufficient, particularly in the low frequency range; some sound enters through the housing of the headphones 500, and this sound-insulation residue may be heard at the user's eardrum.
- FIG. 7 is a schematic diagram showing a response image of sound pressure on the eardrum when the sound emitted from the sound source 600 is an impulse and the spatial transmission is flat.
- the sealed headphone 500 has a high sound insulation property.
- the direct sound component of the spatial transmission remains as a sound-insulation residue in the portion that could not be insulated, and the user can hear it slightly.
- thereafter, the response sequence of the impulse response IR shown in FIG. 6 is observed, delayed by the processing time of the convolution (or FIR) calculation in the DSP 404 and the "system delay" time generated in the ADC and DAC.
- the direct sound component of the spatial transmission may thus be heard as the sound-insulation residue, and a bodily sense of discomfort may arise from the overall system delay. More specifically, in FIG. 7, when sound is generated from the sound source 600 at time t0, the direct sound component of the spatial transmission is heard by the user after the spatial transmission time from the sound source 600 to the eardrum has elapsed (time t1). The sound heard here is the sound-insulation residue that the sealed headphones 500 could not block. Thereafter, when the above-mentioned "system delay" time has elapsed, the direct sound component after the reverb processing is heard (time t2).
- the direct sound component is therefore heard twice, which may cause the user to feel uncomfortable. Furthermore, the initial reflected sound after the reverb processing is heard next (time t3), and the reverberation component after the reverb processing is heard after time t4, which may also cause a sense of discomfort. Even if the headphones 500 could completely block external sounds, the "system delay" described above may create a gap between the user's vision and hearing: in FIG. 7, although the sound from the sound source 600 is generated at time t0, the first direct sound the user hears when external sound is completely blocked is the direct sound component after the reverb processing. An example of such a gap between vision and hearing is the shift between the actual mouth movement of a conversation partner and the corresponding voice (lip sync).
- FIGS. 8 and 9 are schematic diagrams showing a case where the impulse response IR is used in the same sound field environment as in FIGS. 6 and 7, with the "ear hole open type" sound output device 100 according to the present embodiment.
- FIG. 8 corresponds to FIG. 6, and
- FIG. 9 corresponds to FIG.
- the direct sound component of the impulse response IR shown in FIG. 6 is not used as the component convoluted by the DSP 404.
- this differs from FIGS. 6 and 7 because, when the "ear hole open type" sound output device 100 according to the present embodiment is used, the direct sound component directly enters the external auditory canal through space; unlike with the sealed headphones 500, there is no need to create the direct sound component by DSP 404 calculation and headphone reproduction.
- specifically, the system delay time, including the DSP processing calculation time, that arises between the measured direct sound component and the initial reflected sound is obtained as information (1).
- the information (1) is subtracted from the impulse response IR (IR shown in FIG. 6) of the original specific sound field (a region surrounded by a one-dot chain line in FIG. 8).
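The subtraction described above amounts to discarding the leading portion of the measured IR: the direct-sound tap plus the span that the system delay will occupy, leaving only the early reflections and reverberation tail to be convolved. A minimal sketch (the sample values and the delay figure are illustrative assumptions):

```python
import numpy as np

def trim_for_open_ear(ir: np.ndarray, system_delay_samples: int) -> np.ndarray:
    """Derive IR' from a measured IR by dropping its leading portion.

    The real direct sound reaches the open ear canal acoustically, and the
    ADC/DSP/DAC latency fills the gap before the first reflection, so that
    span of the IR is removed rather than reproduced.
    """
    return ir[system_delay_samples:]

ir = np.array([1.0, 0.0, 0.0, 0.0, 0.5, 0.3, 0.2])  # direct-sound tap at sample 0
ir_prime = trim_for_open_ear(ir, system_delay_samples=4)  # early reflections onward
```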
- FIG. 9 is a schematic diagram showing a response image of sound pressure on the eardrum when the sound generated by the sound source 600 is an impulse and the spatial transmission is flat as in FIG.
- the spatial transmission time (t0 to t1) from the sound source 600 to the eardrum occurs in the same way as in FIG. 7, but since the ear canal is open, the direct sound component of the spatial transmission is observed on the eardrum at time t1.
- at time t5, the initial reflected sound produced by the reverb processing is observed on the eardrum, and the reverberation component produced by the reverb processing is observed on the eardrum after time t6.
- in this way, the initial reflected sound of the reverb processing is set to an appropriate timing. Since the initial reflected sound of the reverb processing is a sound corresponding to a specific sound field environment, the user can enjoy a sound field sensation as if present at a different real location corresponding to that environment. Because the system delay is absorbed by subtracting the system delay time that occurs between the direct sound component and the initial reflected sound from the impulse response IR of the original specific sound field, the need to give the system itself low-delay characteristics and to run the calculation resources of the DSP 404 at high speed is alleviated. The system scale can therefore be reduced and the system configuration simplified, yielding a large practical effect such as a significant reduction in manufacturing cost.
- the direct sound is not heard twice in succession, unlike with the systems shown in FIGS. 6 and 7.
- the sound quality degradation due to the interference between the unnecessary remaining sound insulation component and the direct sound component due to the reverb process, which has occurred in FIGS. 6 and 7, can be avoided.
- compared with the reverberation component, whether the direct sound component is a real sound or an artificial sound can easily be discriminated from its resolution and frequency characteristics, so the reality of the direct sound is particularly important. In the present embodiment, the direct sound audible to the user's ear is the direct sound itself generated by the sound source 600.
- the user can feel more realistic by listening to the real sound.
- it can also be said that the impulse response IR′ that takes the system delay into account, shown in FIGS. 8 and 9, effectively uses the time between the direct sound component and the initial reflected sound component as the delay time of the ADC, DSP, and DAC. This system is established because the sound output device 100 with the open ear canal can transmit the real sound directly to the eardrum as it is; with "sealed" headphones, it is difficult to establish such a system.
- FIG. 10 shows an example in which a sense of reality is further enhanced by applying reverb processing.
- the system on the R (right) side is illustrated, but it is assumed that the L (left) side also has a system configuration symmetrical to that in FIG. 10.
- the playback devices on the L side and R side are independent, and both are not connected by wire.
- the L-side and R-side sound output devices 100 are connected by a wireless communication unit 412 so that bidirectional communication is possible.
- the L-side and R-side acoustic output devices 100 may be capable of bidirectional communication using a smartphone or the like as a relay.
- the reverb process in FIG. 10 realizes stereo reverb.
- different reverb processes are applied to the microphone signals of the right microphone 400 and the left microphone 400, respectively, and the addition is used as a reproduction output.
- the sound heard by the L-side microphone 400 is received by the R-side wireless communication unit 412 and reverberated by the DSP 404b.
- the sound picked up by the R-side microphone 400 is amplified and AD-converted by the microphone amplifier / ADC 402, and then reverb-processed by the DSP 404a.
- the left and right microphone signals subjected to the reverb process are added by an adding unit (superimposing unit) 414.
- the transmission and reception of the microphone signals between the L side and the R side can be performed by communication methods such as Bluetooth (registered trademark) LE, WiFi, a proprietary 900 MHz scheme, NFMI (near-field electromagnetic induction, used in hearing aids), or infrared, but the signals may also be transmitted and received by wire. In addition to the microphone signal, it is desirable to share (synchronize) between L and R the information about the reverb type selected by the user.
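The left/right exchange described above can be sketched roughly as follows (the helper function is hypothetical and NumPy is assumed): each side reverb-processes its local microphone signal and the microphone signal received from the opposite side with different reverbs, then sums the two for playback, as done by the adding unit 414:

```python
import numpy as np

def stereo_reverb_mix(local_mic, remote_mic, ir_local, ir_remote):
    """Apply a different reverb (impulse response) to the local and
    remote microphone signals, then sum them for reproduction.
    The per-channel IR names are illustrative assumptions."""
    wet_local = np.convolve(local_mic, ir_local)
    wet_remote = np.convolve(remote_mic, ir_remote)
    n = max(len(wet_local), len(wet_remote))
    out = np.zeros(n)
    out[:len(wet_local)] += wet_local    # own-side reverberated signal
    out[:len(wet_remote)] += wet_remote  # opposite-side reverberated signal
    return out
```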
- HMD: head-mounted display
- the content is stored in, for example, a medium (disc, memory, etc.), including the case where the content is sent from the cloud and temporarily stored in the local device. The content includes highly interactive content such as games.
- the video portion is displayed on the HMD 600 via the video processing unit 420.
- for the voices and sounds of the people appearing in the scene, reverb processing is performed offline when the content is created.
- reverberation processing (rendering) on the playback device side.
- otherwise, the sense of immersion in the content is broken at once.
- the voice uttered by the user himself or herself and the real surrounding sounds are adapted to the sound field environment corresponding to the scene.
- the scene control information generation unit 422 generates scene control information corresponding to the estimated sound field environment or the sound field environment specified by the metadata.
- the reverb type closest to the sound field environment is selected from the reverb type database 408 according to the scene control information, and the DSP 404 performs reverberation processing based on the selected reverb type.
- the microphone signal subjected to the reverb processing is input to the adding unit 426, superimposed on the audio of the content processed by the voice / audio processing unit 424, and reproduced by the acoustic output device 100.
- the signal superimposed on the audio of the content is thus a microphone signal that has been reverb-processed according to the sound field environment of the content.
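A minimal sketch of the reverb-type selection step (the database contents and keys below are invented for illustration; the real reverb type database 408 is not specified at this level of detail):

```python
# Hypothetical reverb-type database keyed by sound-field labels.
REVERB_TYPE_DB = {
    "church": {"decay_s": 4.0, "predelay_ms": 40},
    "hall":   {"decay_s": 2.0, "predelay_ms": 25},
    "room":   {"decay_s": 0.5, "predelay_ms": 5},
}

def select_reverb_type(scene_control_info, db=REVERB_TYPE_DB):
    """Pick the reverb type closest to the sound-field environment
    named in the scene control information; fall back to 'room'
    when the scene names an environment not in the database."""
    return db.get(scene_control_info.get("sound_field"), db["room"])
```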
- FIG. 11 assumes a case where previously created content including a game or the like is displayed on the HMD 600.
- in a use case close to FIG. 11, by combining a camera or the like with the HMD 600, or by using a half mirror, the actual surrounding scene (environment) can be displayed on the HMD 600 together with objects made of CG.
- when a sound field environment different from the actual place is desired while the video remains based on the surrounding situation, the sound field can be constructed by a system similar to that of FIG. 11.
- the "surrounding situation" here includes the ambient environment and real sounds such as an object being dropped or someone talking.
- the sound field expression is obtained together with the visuals, making the experience more realistic.
- in other respects, the configurations of FIG. 11 and FIG. 12 are the same.
- FIG. 13 is a schematic diagram illustrating a case in which a call is made while sharing the acoustic environment of the other party to be called. This function can be set to ON / OFF by user selection.
- the reverb type is specified by the user himself / herself, or specified / estimated by the content.
- the caller experiences the other party's sound field environment in a realistic way.
- the sound field environment information of the other party is required. This may be obtained by analyzing the microphone signal picked up by the other party's microphone 400, or the other party's location and building may be estimated from map information via GPS to determine the degree of reverberation.
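The GPS/map-based estimate might look like the following sketch (the category names and reverberation degrees are purely illustrative assumptions, not values from the patent):

```python
# Hypothetical mapping from a GPS/map-derived building category to a
# rough reverberation degree on a 0..1 scale.
BUILDING_REVERB = {
    "cathedral": 0.9,     # long, dense reverberation
    "concert_hall": 0.7,
    "office": 0.3,
    "outdoors": 0.1,      # almost no reflections
}

def estimate_reverb_degree(building_category, default=0.3):
    """Map the other party's estimated building category to a
    reverberation degree, with a neutral default for unknowns."""
    return BUILDING_REVERB.get(building_category, default)
```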
- the two parties that communicate with each other transmit information indicating the acoustic environment around them to the other party separately from the call voice.
- by reverberating the echo of his or her own voice based on the acoustic environment acquired from the other user, one user can feel as if uttering that voice in the sound field where the other user (the calling party) is.
- the acoustic environment acquisition unit (acoustic environment information acquisition unit) 430 obtains the degree of reverberation by estimating the other party's location and building from map information via GPS, and acquires it as acoustic environment information.
- the wireless communication unit 412 transmits the acoustic environment information acquired by the acoustic environment acquisition unit 430 to the partner side together with the microphone signal.
- the side receiving the microphone signal selects a reverb type from the reverb type database 408 based on the acoustic environment information received together with the microphone signal, and performs reverb processing on its own microphone signal in the left and right DSPs 404L and 404R.
- the microphone signal received from the other party is convolved with the signal after reverberation by the adders 428R and 428L.
- one user performs reverb processing according to the other party's acoustic environment, based on the other party's acoustic environment information, on the ambient sounds including his or her own voice, while the other party's voice, to which sound corresponding to the other party's acoustic environment has been added, arrives through the adders 428R and 428L; the user can thus feel as if he or she were in the same sound field environment (e.g., a church or a hall) as the other party.
- the connections between the wireless communication unit 412 and the microphone amplifiers / ADCs 402L and 402R, and between the wireless communication unit 412 and the adding units 428L and 428R, may be either wireless or wired.
- for a wireless connection, for example, short-range wireless such as Bluetooth (registered trademark) LE or NFMI may be used, and the short-range wireless link may include a relay.
- the user's own voice for transmission may be extracted as a monaural signal in a voice-specific manner, for example by using a beamforming technique.
- Beam forming is performed by a beam forming unit (BF) 432.
- HRTF: head-related transfer function
- by localizing a virtual sound image at an arbitrary position, the sound image can also be localized outside the head.
- the other party's sound image position may be preset, may be set arbitrarily by the user, or may be linked to the video. As a result, for example, the user can experience the sound image of the other party on the phone as being right next to him or her; of course, this may be accompanied by a video presentation as if the other party were at the user's side.
- the audio signal after virtual sound image localization is added to the microphone signal by the adders 428L and 428R, and reverb processing is then performed. This makes it possible to turn the sound whose virtual sound image has been localized into sound matching the acoustic environment of the communication partner.
- the audio signal after the virtual sound image localization is added to the microphone signal after the reverberation processing by the adders 428L and 428R.
- in this case, the sound whose virtual sound image has been localized does not match the acoustic environment of the communication partner, but the communication partner's voice can be clearly distinguished because its sound image is localized at the desired position.
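Virtual sound image localization of the partner's monaural voice can be sketched as a convolution with a left/right pair of head-related impulse responses (HRIRs); the toy HRIRs below, which encode only an interaural delay and level difference, are illustrative stand-ins for measured data:

```python
import numpy as np

def localize(mono_voice, hrir_left, hrir_right):
    """Convolve a monaural voice with a left/right HRIR pair so the
    voice is heard from a virtual position (e.g., beside the listener)."""
    return np.convolve(mono_voice, hrir_left), np.convolve(mono_voice, hrir_right)

# Toy HRIR pair for a source on the listener's right: the left ear
# receives the sound slightly later (2-sample delay) and attenuated.
hrir_right_ear = np.array([1.0])
hrir_left_ear = np.array([0.0, 0.0, 0.6])
voice = np.array([1.0, 0.5])
left, right = localize(voice, hrir_left_ear, hrir_right_ear)
```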
- FIGS. 14 and 15 are schematic diagrams illustrating an example of a telephone call with a large number of people.
- the participants comprise an "environment handle" user and users A to G.
- the sound field set here does not necessarily have to be the sound field of someone to be called, and may be a sound field of a completely artificial virtual space.
- each person may set an avatar and use a video auxiliary expression such as HMD.
- communication by the wireless communication unit 436 can be performed using an electronic device 700 such as a smartphone.
- the environment handle user sends acoustic environment information for setting the acoustic environment to the wireless communication unit 440 of the electronic device 700 of each of the users A, B, C, and so on.
- the electronic device 700 of user A, having received the acoustic environment information, sets an optimal acoustic environment from the reverb type database 408 based on the acoustic environment information, and the reverb processing units 404L and 404R reverb-process the microphone signals picked up by the left and right microphones 400.
- the electronic devices 700 of the users A, B, C,... Communicate with each other via the wireless communication unit 436.
- in the electronic device 700 of user A, the acoustic environment transfer functions (HRTF L, R) are convolved by a filter (acoustic environment adjustment unit) 438 into the sound of the other users received by the wireless communication unit 436.
- the sound source information of the sound source 406 can be arranged in a virtual space, and the sound can be arranged in a space so that the sound source information exists in the same space as in reality.
- the acoustic environment transfer functions L and R mainly contain information on reflected sound and reverberation; ideally, the actual reproduction environment, or an environment close to it, is assumed.
- since the acoustic environment transfer functions L and R are convolved by the filter 438, the users A, B, C, and so on can listen to the sound as if they were holding a meeting in one room, even if they are in remote locations.
- the sounds of the other users B, C, and so on are added by the adder 442, the ambient sound after the reverb processing is added to them, and the result is amplified by the amplifier 444 and output from the acoustic output device 100 to the ear of user A.
- the same processing is performed in the electronic devices 700 of other users B, C,.
- each of the users A, B, C, and so on can thus talk in the acoustic environment set by the filter 438, and moreover his or her own voice and the sound of the surrounding environment are heard as sound in the specific acoustic environment set by the environment handle user.
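Putting the pieces together for one user, a rough sketch of the mixing done around the filter 438 and the adder 442 (the function below is a hypothetical illustration, NumPy assumed): each remote voice is convolved with its own HRIR pair to place it in the shared virtual room, and the results are summed with the user's own reverb-processed ambient signal:

```python
import numpy as np

def mix_conference(own_ambient_wet, other_voices, hrir_pairs):
    """Sum the other users' voices, each convolved with its own L/R
    HRIR pair (placing users around a virtual room), onto the user's
    own reverb-processed ambient signal, as done by adder 442."""
    left = np.array(own_ambient_wet, dtype=float)
    right = np.array(own_ambient_wet, dtype=float)
    for voice, (hl, hr) in zip(other_voices, hrir_pairs):
        vl = np.convolve(voice, hl)[:len(left)]
        vr = np.convolve(voice, hr)[:len(right)]
        left[:len(vl)] += vl
        right[:len(vr)] += vr
    return left, right
```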
- an acoustic acquisition unit that acquires an audio signal based on surrounding sounds
- a reverb processing unit for performing reverb processing on the audio signal
- An acoustic output unit that outputs the sound of the audio signal subjected to the reverberation process in the vicinity of a listener's ear
- An acoustic output device comprising the above.
- (2) The sound output device according to (1), wherein the reverb processing unit performs the reverb processing with the direct sound component of the impulse response removed.
- (3) The sound output device according to (1) or (2), wherein the sound output unit outputs sound to the other end of a hollow sound guide portion whose one end is placed near the entrance of the listener's ear canal.
- (4) The sound output device according to (1) or (2), wherein the sound output unit outputs sound with the listener's ear sealed from the outside.
- (5) the audio signal is acquired on each of the left and right sides of the listener's ears; the sound output device includes a first reverb processing unit that reverb-processes the audio signal acquired on one side of the listener's left and right ears, and
- a second reverb processing unit that reverb-processes the audio signal acquired on the other side of the listener's left and right ears, together with a superimposing unit that superimposes the audio signal reverb-processed by the first reverb processing unit and the audio signal reverb-processed by the second reverb processing unit;
- the sound output device according to any one of (1) to (4), wherein the sound output unit outputs sound based on the audio signal superimposed by the superimposing unit.
- the sound output unit outputs the sound of the content to the viewer's ear,
- the sound output device according to any one of (1) to (5), wherein the reverb processing unit performs the reverb processing in accordance with an acoustic environment of the content.
- the sound output device according to (6), wherein the reverb processing unit performs the reverb processing based on a reverb type selected based on the acoustic environment of the content.
- the sound output device according to (6), further including a superimposing unit that superimposes the audio signal of the content on the audio signal after the reverb processing.
- an acoustic environment information acquisition unit that acquires acoustic environment information representing the acoustic environment around the communication partner is provided, and the reverb processing unit performs the reverb processing based on the acoustic environment information; the sound output device according to (1).
- the sound output device further including a superimposing unit that superimposes the audio signal received from the communication partner on the audio signal after the reverberation process.
- an acoustic environment adjustment unit that adjusts a sound image position of an audio signal received from a communication partner;
- a superimposing unit that superimposes a signal whose sound image position is adjusted by the acoustic environment adjusting unit on the audio signal acquired by the acoustic acquiring unit;
- the sound output device according to (9), wherein the reverb processing unit performs reverb processing on the audio signal superimposed by the superimposing unit.
- an acoustic environment adjustment unit that adjusts a sound image position of a monaural audio signal received from a communication partner;
- the acoustic output device according to (9), further including a superimposing unit that superimposes the signal whose sound image position has been adjusted by the acoustic environment adjustment unit on the audio signal after the reverb processing.
- (13) An acoustic output method comprising: acquiring an audio signal based on surrounding sound; performing reverb processing on the audio signal; and outputting the sound of the reverb-processed audio signal in the vicinity of a listener's ear.
- (14) A program for causing a computer to function as: means for acquiring an audio signal based on surrounding sound; means for performing reverb processing on the audio signal; and means for outputting the sound of the reverb-processed audio signal in the vicinity of a listener's ear.
- (15) An acoustic system comprising:
- a first sound output device including an acoustic acquisition unit that acquires acoustic environment information representing the surrounding acoustic environment, an acoustic environment information acquisition unit that acquires, from a second sound output device serving as the communication partner, acoustic environment information representing the acoustic environment around the second sound output device, a reverb processing unit that performs reverb processing on the audio signal acquired by the acoustic acquisition unit in accordance with the acoustic environment information, and a sound output unit that outputs the sound of the reverb-processed audio signal to a listener's ear; and
- the second sound output device including an acoustic acquisition unit that acquires acoustic environment information representing the surrounding acoustic environment, an acoustic environment information acquisition unit that acquires acoustic environment information representing the acoustic environment around the first sound output device serving as the communication partner, a reverb processing unit that performs reverb processing on the audio signal acquired by the acoustic acquisition unit in accordance with the acoustic environment information, and a sound output unit that outputs the sound of the reverb-processed audio signal to a listener's ear.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- General Health & Medical Sciences (AREA)
- Stereophonic System (AREA)
- Reverberation, Karaoke And Other Acoustics (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
Description
Note that the above-described effects are not necessarily limiting; together with, or in place of, the above effects, any of the effects shown in this specification, or other effects that can be understood from this specification, may be achieved.
1. Configuration example of the acoustic output device
2. Reverb processing in the present embodiment
3. Application examples of the system according to the present embodiment
First, a schematic configuration of an acoustic output device according to an embodiment of the present disclosure will be described with reference to FIG. 1. FIGS. 1 and 2 are schematic diagrams showing the configuration of an acoustic output device 100 according to an embodiment of the present disclosure. FIG. 1 is a front view of the acoustic output device 100, and FIG. 2 is a perspective view of the acoustic output device 100 seen from the left side. The acoustic output device 100 shown in FIGS. 1 and 2 is configured to be worn on the left ear, while an acoustic output device for the right ear (not shown) is configured symmetrically to it.
Next, the reverb processing in the present embodiment will be described in detail. First, based on FIGS. 6 and 7, a processing system that provides the user experience using ordinary "sealed" headphones 500, such as canal-type headphones, together with microphones 400 will be described. The headphones 500 shown in FIG. 6 are configured in the same way as the acoustic output device 100 shown in FIG. 4 except for being "sealed", and a microphone 400 is provided near each of the left and right headphones 500. The sealed headphones 500 are assumed to have high sound insulation. Here, in order to simulate a particular sound field space, an impulse response IR as shown in FIG. 6 is assumed to have been measured in advance. As shown in FIG. 6, the sound emitted by the sound source 600 is picked up by the microphone 400, and this IR itself, including its direct sound component, is convolved as reverb processing into the microphone signal from the microphone 400 in the DSP 404, so that the user can feel that particular sound field space. In FIG. 6, the microphone amplifier / ADC 402 and the DAC / amplifier 406 are omitted from the illustration.
Next, application examples of the system according to the present embodiment will be described. FIG. 10 shows an example in which the sense of presence is further enhanced by applying reverb processing. In FIG. 10, the system on the R (right) side is illustrated, but the L (left) side is assumed to have a system configuration symmetrical to that of FIG. 10. Normally, the L-side and R-side playback devices are independent, and the two are not connected by wire. In the configuration example shown in FIG. 10, the L-side and R-side acoustic output devices 100 are connected by a wireless communication unit 412 so that bidirectional communication is possible. The L-side and R-side acoustic output devices 100 may also be capable of bidirectional communication using a smartphone or the like as a relay.
(1) An acoustic output device comprising: an acoustic acquisition unit that acquires an audio signal based on surrounding sound; a reverb processing unit that performs reverb processing on the audio signal; and an acoustic output unit that outputs the sound of the reverb-processed audio signal in the vicinity of a listener's ear.
(2) The acoustic output device according to (1), wherein the reverb processing unit performs the reverb processing with the direct sound component of the impulse response removed.
(3) The acoustic output device according to (1) or (2), wherein the acoustic output unit outputs sound to the other end of a hollow sound guide portion whose one end is placed near the entrance of the listener's ear canal.
(4) The acoustic output device according to (1) or (2), wherein the acoustic output unit outputs sound with the listener's ear sealed from the outside.
(5) The acoustic output device according to any one of (1) to (4), wherein the audio signal is acquired on each of the left and right sides of the listener's ears; the reverb processing unit includes a first reverb processing unit that reverb-processes the audio signal acquired on one side of the listener's left and right ears, and a second reverb processing unit that reverb-processes the audio signal acquired on the other side; a superimposing unit superimposes the audio signal reverb-processed by the first reverb processing unit and the audio signal reverb-processed by the second reverb processing unit; and the acoustic output unit outputs sound based on the audio signal superimposed by the superimposing unit.
(6) The acoustic output device according to any one of (1) to (5), wherein the acoustic output unit outputs the sound of content to a viewer's ear, and the reverb processing unit performs the reverb processing in accordance with the acoustic environment of the content.
(7) The acoustic output device according to (6), wherein the reverb processing unit performs the reverb processing based on a reverb type selected based on the acoustic environment of the content.
(8) The acoustic output device according to (6), further comprising a superimposing unit that superimposes the audio signal of the content on the audio signal after the reverb processing.
(9) The acoustic output device according to (1), further comprising an acoustic environment information acquisition unit that acquires acoustic environment information representing the acoustic environment around a communication partner, wherein the reverb processing unit performs the reverb processing based on the acoustic environment information.
(10) The acoustic output device according to (9), further comprising a superimposing unit that superimposes an audio signal received from the communication partner on the audio signal after the reverb processing.
(11) The acoustic output device according to (9), further comprising an acoustic environment adjustment unit that adjusts the sound image position of an audio signal received from the communication partner, and a superimposing unit that superimposes the signal whose sound image position has been adjusted by the acoustic environment adjustment unit on the audio signal acquired by the acoustic acquisition unit, wherein the reverb processing unit reverb-processes the audio signal superimposed by the superimposing unit.
(12) The acoustic output device according to (9), further comprising an acoustic environment adjustment unit that adjusts the sound image position of a monaural audio signal received from the communication partner, and a superimposing unit that superimposes the signal whose sound image position has been adjusted by the acoustic environment adjustment unit on the audio signal after the reverb processing.
(13) An acoustic output method comprising: acquiring an audio signal based on surrounding sound; performing reverb processing on the audio signal; and outputting the sound of the reverb-processed audio signal in the vicinity of a listener's ear.
(14) A program for causing a computer to function as: means for acquiring an audio signal based on surrounding sound; means for performing reverb processing on the audio signal; and means for outputting the sound of the reverb-processed audio signal in the vicinity of a listener's ear.
(15) An acoustic system comprising: a first acoustic output device including an acoustic acquisition unit that acquires acoustic environment information representing the surrounding acoustic environment, an acoustic environment information acquisition unit that acquires, from a second acoustic output device serving as the communication partner, acoustic environment information representing the acoustic environment around the second acoustic output device, a reverb processing unit that performs reverb processing on the audio signal acquired by the acoustic acquisition unit in accordance with the acoustic environment information, and an acoustic output unit that outputs the sound of the reverb-processed audio signal to a listener's ear; and
the second acoustic output device including an acoustic acquisition unit that acquires acoustic environment information representing the surrounding acoustic environment, an acoustic environment information acquisition unit that acquires acoustic environment information representing the acoustic environment around the first acoustic output device serving as the communication partner, a reverb processing unit that performs reverb processing on the audio signal acquired by the acoustic acquisition unit in accordance with the acoustic environment information, and an acoustic output unit that outputs the sound of the reverb-processed audio signal to a listener's ear.
110 Sound generation unit
120 Sound guide unit
400 Microphone
404 DSP
414, 426, 428L, 428R Adding unit (superimposing unit)
430 Acoustic environment acquisition unit
438 Filter
Claims (15)
- An acoustic output device comprising: an acoustic acquisition unit that acquires an audio signal based on surrounding sound; a reverb processing unit that performs reverb processing on the audio signal; and an acoustic output unit that outputs the sound of the reverb-processed audio signal in the vicinity of a listener's ear.
- The acoustic output device according to claim 1, wherein the reverb processing unit performs the reverb processing with the direct sound component of the impulse response removed.
- The acoustic output device according to claim 1, wherein the acoustic output unit outputs sound to the other end of a hollow sound guide portion whose one end is placed near the entrance of the listener's ear canal.
- The acoustic output device according to claim 1, wherein the acoustic output unit outputs sound with the listener's ear sealed from the outside.
- The acoustic output device according to claim 1, wherein the audio signal is acquired on each of the left and right sides of the listener's ears; the reverb processing unit includes a first reverb processing unit that reverb-processes the audio signal acquired on one side of the listener's left and right ears, and a second reverb processing unit that reverb-processes the audio signal acquired on the other side; a superimposing unit superimposes the audio signal reverb-processed by the first reverb processing unit and the audio signal reverb-processed by the second reverb processing unit; and the acoustic output unit outputs sound based on the audio signal superimposed by the superimposing unit.
- The acoustic output device according to claim 1, wherein the acoustic output unit outputs the sound of content to a viewer's ear, and the reverb processing unit performs the reverb processing in accordance with the acoustic environment of the content.
- The acoustic output device according to claim 6, wherein the reverb processing unit performs the reverb processing based on a reverb type selected based on the acoustic environment of the content.
- The acoustic output device according to claim 6, further comprising a superimposing unit that superimposes the audio signal of the content on the audio signal after the reverb processing.
- The acoustic output device according to claim 1, further comprising an acoustic environment information acquisition unit that acquires acoustic environment information representing the acoustic environment around a communication partner, wherein the reverb processing unit performs the reverb processing based on the acoustic environment information.
- The acoustic output device according to claim 9, further comprising a superimposing unit that superimposes an audio signal received from the communication partner on the audio signal after the reverb processing.
- The acoustic output device according to claim 9, further comprising an acoustic environment adjustment unit that adjusts the sound image position of an audio signal received from the communication partner, and a superimposing unit that superimposes the signal whose sound image position has been adjusted by the acoustic environment adjustment unit on the audio signal acquired by the acoustic acquisition unit, wherein the reverb processing unit reverb-processes the audio signal superimposed by the superimposing unit.
- The acoustic output device according to claim 9, further comprising an acoustic environment adjustment unit that adjusts the sound image position of a monaural audio signal received from the communication partner, and a superimposing unit that superimposes the signal whose sound image position has been adjusted by the acoustic environment adjustment unit on the audio signal after the reverb processing.
- An acoustic output method comprising: acquiring an audio signal based on surrounding sound; performing reverb processing on the audio signal; and outputting the sound of the reverb-processed audio signal in the vicinity of a listener's ear.
- A program for causing a computer to function as: means for acquiring an audio signal based on surrounding sound; means for performing reverb processing on the audio signal; and means for outputting the sound of the reverb-processed audio signal in the vicinity of a listener's ear.
- An acoustic system comprising: a first acoustic output device including an acoustic acquisition unit that acquires acoustic environment information representing the surrounding acoustic environment, an acoustic environment information acquisition unit that acquires, from a second acoustic output device serving as the communication partner, acoustic environment information representing the acoustic environment around the second acoustic output device, a reverb processing unit that performs reverb processing on the audio signal acquired by the acoustic acquisition unit in accordance with the acoustic environment information, and an acoustic output unit that outputs the sound of the reverb-processed audio signal to a listener's ear; and
the second acoustic output device including an acoustic acquisition unit that acquires acoustic environment information representing the surrounding acoustic environment, an acoustic environment information acquisition unit that acquires acoustic environment information representing the acoustic environment around the first acoustic output device serving as the communication partner, a reverb processing unit that performs reverb processing on the audio signal acquired by the acoustic acquisition unit in accordance with the acoustic environment information, and an acoustic output unit that outputs the sound of the reverb-processed audio signal to a listener's ear.
Priority Applications (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2017565437A JP7047383B2 (ja) | 2016-02-01 | 2017-01-05 | 音響出力装置、音響出力方法、プログラム |
| US16/069,631 US10685641B2 (en) | 2016-02-01 | 2017-01-05 | Sound output device, sound output method, and sound output system for sound reverberation |
| EP19200583.3A EP3621318B1 (en) | 2016-02-01 | 2017-01-05 | Sound output device and sound output method |
| EP17747137.2A EP3413590B1 (en) | 2016-02-01 | 2017-01-05 | Audio output device, audio output method, program, and audio system |
| CN201780008155.1A CN108605193B (zh) | 2016-02-01 | 2017-01-05 | 声音输出设备、声音输出方法、计算机可读存储介质和声音系统 |
| US16/791,083 US11037544B2 (en) | 2016-02-01 | 2020-02-14 | Sound output device, sound output method, and sound output system |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2016017019 | 2016-02-01 | ||
| JP2016-017019 | 2016-02-01 |
Related Child Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/069,631 A-371-Of-International US10685641B2 (en) | 2016-02-01 | 2017-01-05 | Sound output device, sound output method, and sound output system for sound reverberation |
| US16/791,083 Continuation US11037544B2 (en) | 2016-02-01 | 2020-02-14 | Sound output device, sound output method, and sound output system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017134973A1 true WO2017134973A1 (ja) | 2017-08-10 |
Family
ID=59501022
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2017/000070 Ceased WO2017134973A1 (ja) | 2016-02-01 | 2017-01-05 | 音響出力装置、音響出力方法、プログラム、音響システム |
Country Status (5)
| Country | Link |
|---|---|
| US (2) | US10685641B2 (ja) |
| EP (2) | EP3413590B1 (ja) |
| JP (1) | JP7047383B2 (ja) |
| CN (1) | CN108605193B (ja) |
| WO (1) | WO2017134973A1 (ja) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019053996A1 (ja) * | 2017-09-13 | 2019-03-21 | ソニー株式会社 | ヘッドホン装置 |
| WO2019053993A1 (ja) * | 2017-09-13 | 2019-03-21 | ソニー株式会社 | 音響処理装置及び音響処理方法 |
| JP2022538714A (ja) * | 2019-06-24 | 2022-09-06 | メタ プラットフォームズ テクノロジーズ, リミテッド ライアビリティ カンパニー | 人工現実環境のためのオーディオシステム |
Families Citing this family (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108605193B (zh) | 2016-02-01 | 2021-03-16 | 索尼公司 | 声音输出设备、声音输出方法、计算机可读存储介质和声音系统 |
| AU2018353008B2 (en) | 2017-10-17 | 2023-04-20 | Magic Leap, Inc. | Mixed reality spatial audio |
| CN116781827A (zh) | 2018-02-15 | 2023-09-19 | 奇跃公司 | 混合现实虚拟混响 |
| CN112534498B (zh) | 2018-06-14 | 2024-12-31 | 奇跃公司 | 混响增益归一化 |
| CN111045635B (zh) * | 2018-10-12 | 2021-05-07 | 北京微播视界科技有限公司 | 音频处理方法和装置 |
| KR102790631B1 (ko) * | 2019-03-19 | 2025-04-04 | 소니그룹주식회사 | 음향 처리 장치, 음향 처리 방법, 및 음향 처리 프로그램 |
| US11523244B1 (en) * | 2019-06-21 | 2022-12-06 | Apple Inc. | Own voice reinforcement using extra-aural speakers |
| CN114424583A (zh) | 2019-09-23 | 2022-04-29 | 杜比实验室特许公司 | 混合近场/远场扬声器虚拟化 |
| EP4049466B1 (en) * | 2019-10-25 | 2025-04-30 | Magic Leap, Inc. | Methods and systems for determining and processing audio information in a mixed reality environment |
| JP7712061B2 (ja) * | 2020-02-19 | 2025-07-23 | ヤマハ株式会社 | 音信号処理方法および音信号処理装置 |
| JP7524614B2 (ja) * | 2020-06-03 | 2024-07-30 | ヤマハ株式会社 | 音信号処理方法、音信号処理装置および音信号処理プログラム |
| WO2022113289A1 (ja) | 2020-11-27 | 2022-06-02 | ヤマハ株式会社 | ライブデータ配信方法、ライブデータ配信システム、ライブデータ配信装置、ライブデータ再生装置、およびライブデータ再生方法 |
| WO2022113288A1 (ja) | 2020-11-27 | 2022-06-02 | ヤマハ株式会社 | ライブデータ配信方法、ライブデータ配信システム、ライブデータ配信装置、ライブデータ再生装置、およびライブデータ再生方法 |
| US12283265B1 (en) * | 2021-04-09 | 2025-04-22 | Apple Inc. | Own voice reverberation reconstruction |
| US11140469B1 (en) | 2021-05-03 | 2021-10-05 | Bose Corporation | Open-ear headphone |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH06245299A (ja) * | 1993-02-15 | 1994-09-02 | Sony Corp | 補聴器 |
| JP2007202020A (ja) * | 2006-01-30 | 2007-08-09 | Sony Corp | 音声信号処理装置、音声信号処理方法、プログラム |
| US20140126756A1 (en) * | 2012-11-02 | 2014-05-08 | Daniel M. Gauger, Jr. | Binaural Telepresence |
Family Cites Families (25)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5371799A (en) * | 1993-06-01 | 1994-12-06 | Qsound Labs, Inc. | Stereo headphone sound source localization system |
| US6681022B1 (en) | 1998-07-22 | 2004-01-20 | Gn Resound North Amerca Corporation | Two-way communication earpiece |
| JP3975577B2 (ja) | 1998-09-24 | 2007-09-12 | ソニー株式会社 | インパルス応答の収集方法および効果音付加装置ならびに記録媒体 |
| GB2361395B (en) | 2000-04-15 | 2005-01-05 | Central Research Lab Ltd | A method of audio signal processing for a loudspeaker located close to an ear |
| JP3874099B2 (ja) * | 2002-03-18 | 2007-01-31 | ソニー株式会社 | 音声再生装置 |
| US7949141B2 (en) * | 2003-11-12 | 2011-05-24 | Dolby Laboratories Licensing Corporation | Processing audio signals with head related transfer function filters and a reverberator |
| CN2681501Y (zh) | 2004-03-01 | 2005-02-23 | 上海迪比特实业有限公司 | 一种具有混响功能的手机 |
| WO2006033058A1 (en) * | 2004-09-23 | 2006-03-30 | Koninklijke Philips Electronics N.V. | A system and a method of processing audio data, a program element and a computer-readable medium |
| US7184557B2 (en) | 2005-03-03 | 2007-02-27 | William Berson | Methods and apparatuses for recording and playing back audio signals |
| CN101138273B (zh) | 2005-03-10 | 2013-03-06 | 唯听助听器公司 | 一种用于助听器的耳塞 |
| US20070127750A1 (en) * | 2005-12-07 | 2007-06-07 | Phonak Ag | Hearing device with virtual sound source |
| US8036767B2 (en) * | 2006-09-20 | 2011-10-11 | Harman International Industries, Incorporated | System for extracting and changing the reverberant content of an audio input signal |
| US20080273708A1 (en) * | 2007-05-03 | 2008-11-06 | Telefonaktiebolaget L M Ericsson (Publ) | Early Reflection Method for Enhanced Externalization |
| EP2337375B1 (en) * | 2009-12-17 | 2013-09-11 | Nxp B.V. | Automatic environmental acoustics identification |
| CN202514043U (zh) | 2012-03-13 | 2012-10-31 | 贵州奥斯科尔科技实业有限公司 | 一种便携式个人唱歌话筒 |
| WO2015031080A2 (en) * | 2013-08-30 | 2015-03-05 | Gleim Conferencing, Llc | Multidimensional virtual learning audio programming system and method |
| US9479859B2 (en) * | 2013-11-18 | 2016-10-25 | 3M Innovative Properties Company | Concha-fit electronic hearing protection device |
| US10148240B2 (en) * | 2014-03-26 | 2018-12-04 | Nokia Technologies Oy | Method and apparatus for sound playback control |
| US9648436B2 (en) * | 2014-04-08 | 2017-05-09 | Doppler Labs, Inc. | Augmented reality sound system |
| US9892721B2 (en) | 2014-06-30 | 2018-02-13 | Sony Corporation | Information-processing device, information processing method, and program |
| EP3441966A1 (en) * | 2014-07-23 | 2019-02-13 | PCMS Holdings, Inc. | System and method for determining audio context in augmented-reality applications |
| HUE056176T2 (hu) * | 2015-02-12 | 2022-02-28 | Dolby Laboratories Licensing Corp | Fejhallgató virtualizálás |
| US9565491B2 (en) * | 2015-06-01 | 2017-02-07 | Doppler Labs, Inc. | Real-time audio processing of ambient sound |
| EP3657822A1 (en) | 2015-10-09 | 2020-05-27 | Sony Corporation | Sound output device and sound generation method |
| CN108605193B (zh) | 2016-02-01 | 2021-03-16 | 索尼公司 | 声音输出设备、声音输出方法、计算机可读存储介质和声音系统 |
-
2017
- 2017-01-05 CN CN201780008155.1A patent/CN108605193B/zh active Active
- 2017-01-05 EP EP17747137.2A patent/EP3413590B1/en active Active
- 2017-01-05 JP JP2017565437A patent/JP7047383B2/ja active Active
- 2017-01-05 EP EP19200583.3A patent/EP3621318B1/en active Active
- 2017-01-05 WO PCT/JP2017/000070 patent/WO2017134973A1/ja not_active Ceased
- 2017-01-05 US US16/069,631 patent/US10685641B2/en active Active
-
2020
- 2020-02-14 US US16/791,083 patent/US11037544B2/en active Active
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH06245299A (ja) * | 1993-02-15 | 1994-09-02 | Sony Corp | 補聴器 |
| JP2007202020A (ja) * | 2006-01-30 | 2007-08-09 | Sony Corp | 音声信号処理装置、音声信号処理方法、プログラム |
| US20140126756A1 (en) * | 2012-11-02 | 2014-05-08 | Daniel M. Gauger, Jr. | Binaural Telepresence |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019053996A1 (ja) * | 2017-09-13 | 2019-03-21 | ソニー株式会社 | ヘッドホン装置 |
| WO2019053993A1 (ja) * | 2017-09-13 | 2019-03-21 | ソニー株式会社 | 音響処理装置及び音響処理方法 |
| JPWO2019053993A1 (ja) * | 2017-09-13 | 2020-08-27 | ソニー株式会社 | 音響処理装置及び音響処理方法 |
| JP7070576B2 (ja) | 2017-09-13 | 2022-05-18 | ソニーグループ株式会社 | 音響処理装置及び音響処理方法 |
| US11350203B2 (en) | 2017-09-13 | 2022-05-31 | Sony Corporation | Headphone device |
| US11445289B2 (en) | 2017-09-13 | 2022-09-13 | Sony Corporation | Audio processing device and audio processing method |
| JP2022538714A (ja) * | 2019-06-24 | 2022-09-06 | メタ プラットフォームズ テクノロジーズ, リミテッド ライアビリティ カンパニー | 人工現実環境のためのオーディオシステム |
| JP7482147B2 (ja) | 2019-06-24 | 2024-05-13 | メタ プラットフォームズ テクノロジーズ, リミテッド ライアビリティ カンパニー | 人工現実環境のためのオーディオシステム |
Also Published As
| Publication number | Publication date |
|---|---|
| CN108605193B (zh) | 2021-03-16 |
| EP3413590A4 (en) | 2018-12-19 |
| JPWO2017134973A1 (ja) | 2018-11-22 |
| CN108605193A (zh) | 2018-09-28 |
| EP3621318A1 (en) | 2020-03-11 |
| JP7047383B2 (ja) | 2022-04-05 |
| US10685641B2 (en) | 2020-06-16 |
| US20190019495A1 (en) | 2019-01-17 |
| US11037544B2 (en) | 2021-06-15 |
| US20200184947A1 (en) | 2020-06-11 |
| EP3413590B1 (en) | 2019-11-06 |
| EP3413590A1 (en) | 2018-12-12 |
| EP3621318B1 (en) | 2021-12-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11037544B2 (en) | Sound output device, sound output method, and sound output system | |
| US11902772B1 (en) | Own voice reinforcement using extra-aural speakers | |
| CN114125661B (zh) | 声音再现系统和头戴式设备 | |
| Ranjan et al. | Natural listening over headphones in augmented reality using adaptive filtering techniques | |
| US10812926B2 (en) | Sound output device, sound generation method, and program | |
| JP5894634B2 (ja) | 個人ごとのhrtfの決定 | |
| CN107852563B (zh) | 双耳音频再现 | |
| JP3435141B2 (ja) | 音像定位装置、並びに音像定位装置を用いた会議装置、携帯電話機、音声再生装置、音声記録装置、情報端末装置、ゲーム機、通信および放送システム | |
| JPH10500809A (ja) | バイノーラル信号合成と頭部伝達関数とその利用 | |
| EP2243136B1 (en) | Mediaplayer with 3D audio rendering based on individualised HRTF measured in real time using earpiece microphones. | |
| CN112956210B (zh) | 基于均衡滤波器的音频信号处理方法及装置 | |
| WO2020176532A1 (en) | Method and apparatus for time-domain crosstalk cancellation in spatial audio | |
| Kates et al. | Integrating a remote microphone with hearing-aid processing | |
| JP6147603B2 (ja) | 音声伝達装置、音声伝達方法 | |
| JP6389080B2 (ja) | ボイスキャンセリング装置 | |
| JP6637992B2 (ja) | 音響再生装置 | |
| CN117294980A (zh) | 用于声学透传的方法和系统 | |
| JP2022128177A (ja) | 音声生成装置、音声再生装置、音声再生方法、及び音声信号処理プログラム | |
| JP6972858B2 (ja) | 音響処理装置、プログラム及び方法 | |
| Watanabe et al. | Effects of Acoustic Transparency of Wearable Audio Devices on Audio AR | |
| US20250016519A1 (en) | Audio device with head orientation-based filtering and related methods | |
| JP2020099094A (ja) | 信号処理装置 | |
| JP2006352728A (ja) | オーディオ装置 | |
| Møller et al. | Directional characteristics for different in-ear recording points | |
| CN120380783A (zh) | 目标响应曲线数据、目标响应曲线数据的生成方法、放音装置、声音处理装置、声音数据、声学系统、目标响应曲线数据的生成系统、程序、以及记录介质 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17747137 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2017565437 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2017747137 Country of ref document: EP |
|
| ENP | Entry into the national phase |
Ref document number: 2017747137 Country of ref document: EP Effective date: 20180903 |