
EP4026353A1 - System and method for spatially projected audio communication - Google Patents

System and method for spatially projected audio communication

Info

Publication number
EP4026353A1
Authority
EP
European Patent Office
Prior art keywords
user
ego
group
audio
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP20861623.5A
Other languages
German (de)
English (en)
Inventor
Eric Tammam
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anachoic Ltd
Original Assignee
Anachoic Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/559,899 (external priority: US10917736B2)
Application filed by Anachoic Ltd
Publication of EP4026353A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S7/00: Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30: Control circuits for electronic adaptation of the sound field
    • H04S7/302: Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303: Tracking of listener position or orientation
    • H04S7/304: For headphones
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M1/72412: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72457: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to geographic location
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00: Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/07: Use of position data from wide-area or local-area positioning systems in hearing devices, e.g. program or information selection
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00: Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10: General applications
    • H04R2499/13: Acoustic transducers and sound field adaptation in vehicles
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S2400/00: Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/11: Positioning of individual sound objects, e.g. moving airplane, within a sound field

Definitions

  • the present invention generally relates to audio communication among multiple users in a group, such as a group of riders.
  • intercommunication systems have been developed that convert sound from a user to an inaudible signal (e.g., ultrasound, electromagnetic, and the like) that is then projected or transmitted to another system that converts the signal back to an audible one and relays it to a second user.
  • This process is bidirectional, allowing for the second user to communicate with the first user, and in some implementations can be done simultaneously.
  • This process can also be applied to numerous users allowing for intercommunication between members of a group.
  • An example of such systems are motorcycle helmet mounted intercommunication systems that allow for groups of riders to communicate while riding at speed.
  • a system for providing spatially projected audio communication between members of a group of a plurality of users, where each such system is associated with a respective ego-user of the group.
  • the system includes a positioning unit, configured to determine a user position of the ego-user.
  • the system further includes a head orientation measurement unit, configured to determine a head position and orientation of the ego-user.
  • the system further includes a communication unit, configured to transmit at least a determined user position of the ego-user, a user identifier of the ego-user, and audio information of the ego-user, to at least one other system associated with at least one other user of the group.
  • the communication unit is further configured to receive at least a user position of at least one other user of the group, a user identifier of the other user, and audio information of the other user, from at least one other system associated with the other user.
  • the system further includes a processing unit, configured to track the user position of at least one other user of the group and to establish a relative position of the other user relative to the ego-user, based on the received user position and the received user identifier of the other user, and to synthesize a spatially resolved audio signal of the received audio information of the other user, based on the established relative position of the other user and based on the determined head position and orientation of the ego-user.
  • the system further includes an audio interface unit, configured to audibly convey the synthesized spatially resolved audio signal to the ego-user. At least one of the system units may be mounted on the head of the ego-user.
  • the system may further include a detection unit, including a plurality of sensors. The detection unit is configured to detect a user position of at least one other user of the group, where the processing unit is configured to track the user position of the other user, by correlating the detected user position of the other user obtained from the detection unit with the received user position of the other user received by the communication unit.
  • the detection unit may be further configured to detect a relative velocity between the ego-user and at least one other user of the group, where the processing unit is configured to impart a Doppler shift onto the synthesized spatially resolved audio signal, based on the detected relative velocity.
  • the detection unit may include at least one simultaneous localization and mapping (SLAM) sensor.
  • the detection unit may be integrated with the communication unit and configured to transmit and receive information via a radar-communication (RadCom) technique.
  • a method for providing spatially projected audio communication between members of a group of a plurality of users includes the procedure of determining a user position of an ego-user of the group, using a positioning unit associated with the ego-user.
  • the method further includes the procedure of determining a head position and orientation of the ego-user, using a head orientation measurement unit associated with the ego-user.
  • the method further includes the procedures of transmitting at least a determined user position of the ego-user, a user identifier of the ego-user, and audio information of the ego-user, to at least one other system associated with at least one other user of the group, and receiving at least a user position of at least one other user of the group, a user identifier of the other user, and audio information of the other user, from at least one other system associated with the other user.
  • the method further includes the procedure of tracking the user position of at least one other user of the group and establishing a relative position of the other user relative to the ego-user, based on the received user position and the received user identifier of the other user.
  • the method further includes the procedures of synthesizing a spatially resolved audio signal of the received audio information of the other user, based on the established relative position of the other user and based on the determined head position and orientation of the ego-user, and audibly conveying the synthesized spatially resolved audio signal to the ego-user via an audio interface unit.
  • At least one of the system units may be mounted on the head of the ego-user.
  • the method may further include the procedures of detecting a user position of at least one other user of the group, with a detection unit including a plurality of sensors, and tracking the user position of the other user, by correlating the detected user position of the other user obtained from the detection unit with the received user position of the other user received by the communication unit.
  • the method may further include the procedures of detecting a relative velocity between the ego-user and at least one other user of the group, with the detection unit, and imparting a Doppler shift onto the synthesized spatially resolved audio signal, based on the detected relative velocity.
  • Figure 1 is a schematic illustration of a high-level topology of a system for spatially projected audio communication between members of a group, constructed and operative in accordance with an embodiment of the present invention
  • Figure 2 is a schematic illustration of the subsystems of a system for spatially projected audio communication between members of a group, constructed and operative in accordance with an embodiment of the present invention
  • Figure 3 is an illustration of an exemplary projection of information from multiple surrounding users to an individual ego-user, operative in accordance with an embodiment of the present invention
  • Figure 4 is a flow diagram of a RadCom mode operation of the system for spatially projected audio communication, operative in accordance with an embodiment of the present invention
  • Figure 5 is a flow diagram of audio intercommunication between selected users of a group, operative in accordance with an embodiment of the present invention
  • Figure 6 is an illustration of the detection of surrounding objects by the system for spatially projected audio communication of an ego-user, operative in accordance with an embodiment of the present invention
  • Figure 7 is an illustration of the detection of identification and location data by the communication unit of the system for spatially projected audio communication, operative in accordance with an embodiment of the present invention
  • Figure 8 is an illustration of the association of detected objects with the user position by the system for spatially projected audio communication of an ego-user, operative in accordance with an embodiment of the present invention
  • Figure 9 is an illustration of the spatial projection of audio by the system for spatially projected audio communication of an ego-user, operative in accordance with an embodiment of the present invention.
  • Figure 10 is an illustration of the tracking of user positions by the system for spatially projected audio communication of an ego-user, operative in accordance with an embodiment of the present invention
  • Figure 11 is an illustration of an exemplary operation of systems for spatially projected audio communication of respective motorcycle riders, operative in accordance with an embodiment of the present invention
  • Figure 12 is an illustration of the overlapping fields of view of the detection units and interconnecting communication channels among users in a group, operative in accordance with an embodiment of the present invention
  • Figure 13 is an illustration of a three-way conversation between motorcyclists in a group by the respective systems for spatially projected audio communication, operative in accordance with an embodiment of the present invention.
  • the present invention overcomes the disadvantages of the prior art by providing a system and method for spatially projected audio communication between members of a group, such as a group of motorcycle riders.
  • spatial information is important in developing a spatial awareness of the surrounding users, and can help avoid unwanted collisions, navigational errors, and misidentification.
  • various scenarios can be improved with spatial audio. For example, individual riders can issue audible directions based on the perceived position of other riders without necessitating eye contact (i.e., without removing the user's eyes from the direction of forward movement); riders can coordinate movements based on the perceived position of other riders to avoid potential collisions; and riders can improve their response to hazards that are encountered and communicated among members of the group.
  • the communication system of the present invention allows for communication between two or more individuals in a group.
  • the system acquires the relative position (localization) of each participant communicating in the group and is able to uniquely identify each participant in the group with an associated identifier (ID).
  • the system transmits the initial position and unique identification of a respective user in a group to other participants in the group. Once the relative position and identification of the user is established, the user’s position and identity can be tracked by the systems associated with other members within the group. All audible communications emanating from the user may then be spatially mapped to other members in the group.
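The first step of this process, turning another user's transmitted global position into a position relative to the ego-user, can be sketched as follows. This is an illustrative flat-earth (equirectangular) approximation, not a method specified by the patent, and the function name is an assumption:

```python
import math

def relative_position(ego_lat, ego_lon, other_lat, other_lon):
    """East/north offset (in metres) of another user relative to the
    ego-user, using a flat-earth approximation that is adequate at the
    short ranges over which group members communicate."""
    R = 6371000.0  # mean Earth radius, metres
    d_lat = math.radians(other_lat - ego_lat)
    d_lon = math.radians(other_lon - ego_lon)
    east = R * d_lon * math.cos(math.radians(ego_lat))
    north = R * d_lat
    return east, north
```

Each system could run this against every (ID, position) report it receives to seed the tracks that are subsequently maintained.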
  • Figure 1 is a schematic illustration of a high-level topology of a system for spatially projected audio communication between members of a group, constructed and operative in accordance with an embodiment of the present invention.
  • the system includes at least some of the following subsystems: a detection (sensing) unit, a positioning unit, a head orientation measurement unit, a communication unit (optional), a processing unit, a power unit and an audio unit.
  • the detection unit may include one or more Simultaneous Localization and Mapping (SLAM) sensors, such as at least one of: a radar sensor, a LIDAR sensor, an ultrasound sensor, a camera, a field camera, and a time of flight camera.
  • the sensors may be arranged in a configuration so as to provide 360° (360 degree) coverage around the user and be capable of tracking individuals in different environments.
  • the sensor module is a radar module.
  • a system on chip millimeter wave radar transceiver (such as the Texas Instruments IWR1243 or the NXP TEF8101 chips) can provide the necessary detection functionality while allowing for a compact and low power design, which may be an advantage in mobile applications.
  • the transceiver chip is integrated on an electronics board with a patch antenna design.
  • the sensor module may provide reliable detection of persons for distances of up to 30m, motorcycles of up to 50m, and automobiles of up to 80m, with a range resolution of up to 40 cm.
  • the sensor module may provide up to a 120° azimuthal field of view (FoV) with a resolution of 15°. Three modules can provide a full 360° azimuthal FoV, though in some applications it may be possible to use two modules or even a single module.
  • the radar module in its basic mode of operation can detect objects in the proximity of the sensor but has limited identification capabilities.
  • Lidar sensors and ultrasound sensors may suffer from the same limitations.
  • Optical cameras and their variants can provide identification capabilities, but such identification may require considerable computational resources, may not be entirely reliable and may not readily provide distance information.
  • Spatially projected communication requires the determination of the spatial position of the communicating parties, to allow for accurately and uniquely representing their audio information to a user in three-dimensional (3D) space.
  • Some types of sensors, such as radar and ultrasound, can provide the instantaneous relative velocity of the detected objects in the vicinity of the user. The relative velocity information of the detected objects can be used to provide a Doppler effect on the audio representation of those detected objects.
  • a positioning unit is used to determine the position of the users.
  • Such positioning unit may include localization sensors or systems, such as a global navigation satellite system (GNSS), a global positioning system (GPS), GLONASS, and the like, for outdoor applications.
  • an indoor positioning sensor that is used as part of an indoor localization system may be used for indoor applications.
  • the position of each user is acquired by the respective positioning unit of the user, and the acquired position and the unique user ID are transmitted by the respective communication unit of the user to the group.
  • the other members of the group reciprocate with the same process.
  • Each member of the group now has the location information and the accompanying unique ID of each user.
  • the user systems can continuously transmit, over the respective communication units, their acquired position to other members of the group, and/or the detection units can track the position of other members independently of the transmission of the other members' positions.
  • Using the detection unit for tracking may provide lower latency (receiving the other members' positions through the communications channel is no longer necessary) as well as the relative velocity of the other members relative to the user. Lower latency translates to better positioning accuracy in dynamic situations, since between the time of transmission and the time of reception the position of the transmitter may have changed.
  • a discrepancy between the system's representation of the audio source position and the actual position of the audio source reduces the ability of the user to “believe” or to accurately perceive the spatial audio effect being generated. Both positioning accuracy and relative velocity are important to emulate natural human hearing.
  • the head orientation measurement unit provides continuous tracking of the user’s head position. Knowing the user’s head position is critical to providing the audio information in the correct position in 3D space relative to the user’s head, since the perceived location of the audio information is head position dependent and the user’s head can swivel rapidly.
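The dependence of the perceived source direction on head orientation can be illustrated with a simple yaw rotation; pitch and roll are omitted for brevity, and all names are illustrative, not taken from the patent:

```python
import math

def to_head_frame(east, north, head_yaw_deg):
    """Rotate a world-frame east/north offset into the listener's head
    frame, so the sound stays anchored in world space as the head turns.
    head_yaw_deg is the compass heading of the nose (0 = due north)."""
    yaw = math.radians(head_yaw_deg)
    right = east * math.cos(yaw) - north * math.sin(yaw)
    ahead = east * math.sin(yaw) + north * math.cos(yaw)
    azimuth = math.degrees(math.atan2(right, ahead))  # + = to the right
    return right, ahead, azimuth
```

For example, a user 10 m due east is heard at +90° while the listener faces north, and directly ahead once the head swivels to face east.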
  • the head orientation measurement unit may include a dedicated inertial measurement unit (IMU) or magnetic compass (magnetometer) sensor, such as the Bosch BMI160X. Alternatively, the head position can be measured and extracted through a head mounted detection system located on the head of the user.
  • the communication unit may include one or more communication channels for conveying information between internal subsystems or to/from external users.
  • the communication channel may use any type of channel model (digital or analog) and any suitable transmission protocol (e.g., Wi-Fi, Wi-Fi Direct, Bluetooth, GSM, GMRS, FRS, FM, and the like).
  • the communication channel may be used as the medium to communicate the GPS coordinates of each individual in the group and/or relay the audio information to and from the members of the group and the user.
  • the transmitted information may include, but is not limited to: audio, video, relative and/or global user location data, user identification, and other forms of information.
  • the communication unit may employ more than one communications channel to allow for coverage of a larger area for group activity while maintaining low latency and good performance for users in close proximity to each other.
  • the communication unit may be configured to use Wi-Fi for users up to 100m apart, and use the cellular network for communications at distances greater than 100m. Functionally, this configuration may work well since the added latency of the cellular network is less important at greater distances.
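The distance-based channel choice described above amounts to a one-line policy; the 100 m threshold and the labels are illustrative assumptions:

```python
def pick_channel(distance_m, wifi_range_m=100.0):
    """Choose a transport per the distance heuristic above: low-latency
    Wi-Fi while users are close, cellular beyond the assumed Wi-Fi
    range, where the extra latency matters less."""
    return "wifi" if distance_m <= wifi_range_m else "cellular"
```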
  • the communication unit may be installed as part of a software application running on a mobile device (e.g., a cellular phone, or a tablet computer).
  • the communication unit may be part of a spatial communications system, which may contain dedicated inter-device communications components configured to operate under a suitable transmission protocol (e.g., Wi-Fi, Wi-Fi Direct, Bluetooth, GSM, GMRS, FRS, FM, sub-Giga, and the like).
  • One example is the Texas Instruments CC1352P chip, which can be configured in numerous configurations for group communications using different communication stacks (e.g., smart objects, M-Bus, Thread, Zigbee, KNX-RF, Wi-SUN, and the like). These communication stacks include mesh network architectures that may improve communication reliability and range.
  • the detection unit can be configured to transmit information between users in the group, such as via a technique known in the art as "radar communication" or "RadCom" (as described, for example, in Hassanein et al.
  • the processing unit receives the positioning data of the surrounding users from the detection unit and the communication unit.
  • the processing unit tracks the users in the surrounding area using suitable tracking techniques, such as those based on extended Kalman filters, unscented Kalman filters, and the like.
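As a lightweight illustration of such tracking, the sketch below uses an alpha-beta filter, a simpler relative of the Kalman filters mentioned above; the gains and names are assumptions, not taken from the patent:

```python
class AlphaBetaTrack:
    """Minimal alpha-beta tracker for one coordinate; run one instance
    per axis (east/north). A lightweight stand-in for the extended or
    unscented Kalman filters the processing unit may use."""

    def __init__(self, x0, dt=0.1, alpha=0.5, beta=0.1):
        self.x = x0      # position estimate
        self.v = 0.0     # velocity estimate
        self.dt = dt
        self.alpha = alpha
        self.beta = beta

    def update(self, measurement):
        # Predict one step ahead, then blend in the new measurement.
        predicted = self.x + self.v * self.dt
        residual = measurement - predicted
        self.x = predicted + self.alpha * residual
        self.v = self.v + (self.beta / self.dt) * residual
        return self.x, self.v
```

Fed a stream of position reports, the velocity estimate converges toward the target's true velocity, which is also useful for the Doppler processing described below.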
  • the processing unit correlates between the tracks of the detection unit and the positions of other users received by the communication unit. If the correlation is high, the detection track is assigned the user ID, and the system continuously verifies that the high correlation is maintained over time.
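The correlation step can be illustrated with a greedy nearest-neighbour gate: an anonymous detection track is assigned a user ID only when a received position falls within a distance gate of the track. The 5 m gate and all names are illustrative assumptions:

```python
import math

def associate(tracks, reports, gate_m=5.0):
    """Greedy nearest-neighbour association of anonymous detection
    tracks {track_id: (x, y)} with {user_id: (x, y)} position reports
    received over the communication unit."""
    assignments = {}
    used = set()
    for track_id, (tx, ty) in tracks.items():
        best, best_d = None, gate_m
        for user_id, (rx, ry) in reports.items():
            if user_id in used:
                continue
            d = math.hypot(tx - rx, ty - ry)
            if d < best_d:
                best, best_d = user_id, d
        if best is not None:
            assignments[track_id] = best
            used.add(best)
    return assignments
```

A production system would re-run this continuously and drop an assignment whose residual grows, matching the "continuously verifies" behaviour described above.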
  • the processing unit acquires the other users' information from the communication unit. If a user ID is included in the data, the data is attached to the track with the corresponding user ID.
  • the processing unit takes the spatial information of the corresponding user and the head orientation information of the ego-user, and applies head-related transfer function (HRTF) algorithms to generate the spatially representative audio stream to be transmitted to the audio interface. This may be done for a multitude of users simultaneously to generate a single spatially representative audio stream containing all the users' audio representations.
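A full HRTF implementation convolves the voice stream with measured ear responses; as a much-simplified stand-in, the sketch below places a mono stream in the horizontal plane using interaural level differences and a Woodworth-style interaural time delay. The head radius, gain law, and names are assumptions for illustration only:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, approximate average human head (assumed)

def spatialize(samples, azimuth_deg, fs=48000):
    """Pan a mono sample list to a (left, right) stereo pair.
    Azimuth 0 is straight ahead, positive to the listener's right."""
    az = math.radians(max(-90.0, min(90.0, azimuth_deg)))
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (az + math.sin(az))  # Woodworth
    delay = int(round(abs(itd) * fs))        # far-ear lag in samples
    pan = math.sin(az)
    left_gain = math.sqrt(0.5 * (1.0 - pan))
    right_gain = math.sqrt(0.5 * (1.0 + pan))
    left = [s * left_gain for s in samples]
    right = [s * right_gain for s in samples]
    if delay:
        pad = [0.0] * delay
        if az > 0:    # source on the right: left ear hears it later
            left = pad + left[:len(left) - delay]
        elif az < 0:
            right = pad + right[:len(right) - delay]
    return left, right
```

Summing the stereo pairs produced for several users yields the single combined stream described above.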
  • the relative velocity can also be acquired by the detection unit and/or the positioning unit and/or other user’s velocity and heading from the communication unit.
  • the relative velocity between the ego-user and the other users can be used to impart a Doppler shift onto the generated audio stream. Doppler shifted audio may provide intuitive information to the ego-user regarding the relative velocity of the surrounding users.
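The Doppler shift can be imparted by resampling the voice stream by a pitch factor derived from the radial (closing or receding) velocity; the sign convention below is an assumption made for this sketch:

```python
SPEED_OF_SOUND = 343.0  # m/s

def doppler_factor(radial_velocity_mps):
    """Pitch factor to apply to a user's voice stream: greater than 1
    for an approaching user (negative radial velocity, range closing),
    less than 1 for a receding one. Resampling the stream by this
    factor imparts the perceived shift."""
    return SPEED_OF_SOUND / (SPEED_OF_SOUND + radial_velocity_mps)
```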
  • the processing unit could be either a dedicated unit located on the platform or a software application running on an off-board, general processing platform (such as a smartphone or tablet computer).
  • the processing unit provides real-time, mission critical computational resources and/or can incorporate other audio and/or data input elements such as a GPS or telephone.
  • the processing unit is optionally connected to the helmet-mounted audio interface unit by either a cable or wireless protocol that can ensure real-time, mission critical data transfer.
  • the processing unit is responsible for managing the user IDs currently in the vicinity of the user and cross-referencing those user IDs with the ego-user’s defined group preferences and/or other information available on an on-line database regarding the user IDs.
  • the processing unit is also responsible for the process of inclusion and exclusion into the ego-user’s group and the connection protocol with other users.
  • the audio unit is required to convey the spatially resolved audio information to the user.
  • the audio unit thus includes at least two transducers (one for each ear) and can use different methods of conducting the audio information to the respective ear. Such methods include, but are not limited to: multichannel systems, stereo headphones, bone conduction headphones, and crosstalk cancellation speakers.
  • the audio transmissions can be overlaid on top of ambient sound that has been filtered through the system using microphones together with known analog and/or signal processing. A separate microphone may also be included for the user’s voice transmission.
  • the system can be implemented with ambient audio directly to the audio stream or alternatively by feeding the ambient audio through adaptive/active noise cancellation filtering (implemented in software or in hardware) prior to providing the user with the audio display.
  • the audio transmissions can also be overlaid on top of other audio information being transmitted to the user such as: music, synthesized spatial audio representations of surrounding objects, phone calls, and the like.
  • Figure 3 is an illustration of an exemplary projection of information from multiple surrounding users to an individual ego-user, operative in accordance with an embodiment of the present invention.
  • the users may choose to send meta-data including the ID of the user and/or position in addition to any audio or other data for transmission.
  • Figure 4 is a flow diagram of a RadCom mode operation of the system for spatially projected audio communication, operative in accordance with an embodiment of the present invention.
  • No handshaking or group definitions are required to initiate and maintain spatially mapped communication except for a definition of the allowed range for communication.
  • Figure 5 is a flow diagram of audio intercommunication between selected users of a group, operative in accordance with an embodiment of the present invention.
  • the users can be predefined or they can be added dynamically as described in this flow diagram.
  • Figure 6 is an illustration of the detection of surrounding objects by the system for spatially projected audio communication of an ego-user, operative in accordance with an embodiment of the present invention.
  • the objects are not identified until ID and position information is acquired through the communication unit.
  • Figure 7 is an illustration of the detection of identification and location data by the communication unit of the system for spatially projected audio communication, operative in accordance with an embodiment of the present invention.
  • the identification and location data is either directly transmitted from the users via a short range communication protocol (e.g., Wi-Fi, Wi-Fi Direct, Bluetooth) or from a centralized database accessed via the Internet.
  • Figure 8 is an illustration of the association of detected objects with the user position by the system for spatially projected audio communication of an ego-user, operative in accordance with an embodiment of the present invention.
  • the system now tracks the users while they are in range.
  • the users may continue to update their position via the communication unit in parallel to being tracked.
  • Figure 9 is an illustration of the spatial projection of audio by the system for spatially projected audio communication of an ego-user, operative in accordance with an embodiment of the present invention.
  • After receiving the user position and ID information, the ego-user system has the spatial and identification information needed to spatially project the audio conversations.
  • Figure 10 is an illustration of the tracking of user positions by the system for spatially projected audio communication of an ego-user, operative in accordance with an embodiment of the present invention.
  • the ego-user system continuously tracks the position of the users and updates the audio display according to the new positions of the users.
  • the system is installed on the helmets of respective motorcycle riders traveling in a group.
  • Figure 11 is an illustration of an exemplary operation of systems for spatially projected audio communication of respective motorcycle riders, operative in accordance with an embodiment of the present invention.
  • Each rider in the group has a system installed and operational.
  • the detection unit may include a plurality of radar modules (e.g., three) that are mounted on the helmet of each user in such a manner as to provide 360 degree coverage.
  • the detection unit may inherently provide the low latency head tracked spatial data that is necessary to create the effect of externalization and localization for spatially projecting the synthesized sound being produced by the communication unit.
  • the detection unit provides detections of objects surrounding an ego-user with low latency. Spatially accurate head tracked data is especially important in demanding high-performance applications such as motorcycle riding due to the high absolute and relative speed of the riders.
  • the system of the ego-user detects the relative position of other riders in the vicinity.
  • the system acquires the ID and GPS coordinates of the users within the detection range of the system and correlates the GPS position to that provided by the detection unit. Once a successful correlation is found, the detection track of the user as identified by the detection unit now has a corresponding unique ID to associate the data being transmitted from that ID through the communication unit. The system can now project the audio information of the user with a particular ID from the point in space of the detected track. This process is done on each rider’s system to allow for spatial representation of each and all users in the group that are within the detection range of the system.
  • the detection unit also provides information on additional objects (other than the other users) near the first user, and the spatial audio display generated by the processing unit in conjunction with the detection system and microphones can be synthesized with the spatial communications data being provided to the user (as described for example in U.S. Patent Application 15/531,563 to Tammam et al.).
  • the communication unit can also interface and simultaneously display audio data from other sources, such as cellular telephones, computers, or other wireless communication devices.
  • RadCom may be implemented on the detection unit located on the riders’ helmets, thereby providing a communication unit as part of the detection unit with the advantages discussed hereinabove.
  • Figure 12 is an illustration of the overlapping fields of view of the detection subsystems and interconnecting communication channels among users in a group, operative in accordance with an embodiment of the present invention.
  • Figure 12 demonstrates the ability to use a mesh framework to allow for extended range and/or redundancy.
  • Figure 13 is an illustration of a three-way conversation between motorcyclists in a group by the respective systems for spatially projected audio communication, operative in accordance with an embodiment of the present invention.
  • Figure 13 illustrates a three-way conversation held between motorcyclists in a group and the perceived direction the conversation is being heard from.
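The perceived direction of each speaker in such a conversation comes from rendering that speaker's voice with direction-dependent interaural cues. As a minimal sketch (a real system would use measured HRTFs rather than these closed-form approximations), the Woodworth ITD formula and a constant-power pan give plausible cues for a source at a given azimuth; the head-radius value is an assumed average:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, assumed average head radius

def interaural_cues(azimuth_deg):
    """Return (itd_seconds, left_gain, right_gain) for a source at the given
    azimuth (0 = straight ahead, positive = to the listener's right).

    Woodworth ITD approximation plus a constant-power pan law; a production
    system would substitute measured HRTFs for both.
    """
    az = math.radians(azimuth_deg)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))
    # Constant-power pan: right channel louder for positive azimuth.
    pan = max(-1.0, min(1.0, azimuth_deg / 90.0))
    left = math.cos((pan + 1.0) * math.pi / 4.0)
    right = math.sin((pan + 1.0) * math.pi / 4.0)
    return itd, left, right
```

A source dead ahead yields zero ITD and equal channel gains; a source at 90° to the right yields a positive ITD and a dominant right channel, matching the perceived directions illustrated in the figure.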
  • the system may be applied to users in a group engaged in an activity, such as skiing, in which the application is cost sensitive and has lower performance requirements.
  • the system in such an embodiment may include: a head orientation measurement unit, a positioning unit, a communication unit, a processing unit, a power unit and an audio interface unit, but does not necessarily contain a detection unit.
  • the audio compression and decompression may (but not necessarily) be performed on a codec separate from the processing unit.
  • the processing unit may be a dedicated unit or a software application running on a mobile device (e.g., smartphone, tablet or laptop computer).
  • the GPS position of the users in the group, combined with the head tracking motion sensors, can provide a less accurate and higher latency system that is nonetheless sufficient to maintain the spatial audio effect in applications where the absolute and relative motion between users is relatively slow and spatial accuracy is less critical.
  • High accuracy GPS units may be used, such as GNSS Precise Point Positioning (PPP), GNSS in conjunction with inertial navigation systems (INS), continuously operated reference station (CORS) corrected GPS, or other positioning techniques that can provide positioning accuracy within 10 meters.
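Given such positioning data, the relative bearing that drives the spatial audio effect can be computed by differencing the two users' coordinates and subtracting the ego-user's head yaw. The sketch below uses a flat-earth (equirectangular) approximation, which is adequate at the short ranges involved; the function and variable names are illustrative:

```python
import math

EARTH_RADIUS = 6371000.0  # metres, mean Earth radius

def relative_azimuth(ego_lat, ego_lon, ego_heading_deg, other_lat, other_lon):
    """Bearing of another user relative to the ego-user's head orientation,
    using a flat-earth approximation valid at short range.

    Returns degrees in (-180, 180]; 0 = straight ahead, positive = right.
    """
    dlat = math.radians(other_lat - ego_lat)
    dlon = math.radians(other_lon - ego_lon)
    north = dlat * EARTH_RADIUS
    east = dlon * EARTH_RADIUS * math.cos(math.radians(ego_lat))
    bearing = math.degrees(math.atan2(east, north))  # compass bearing to other user
    # Rotate into the ego-user's head frame and wrap to (-180, 180].
    return (bearing - ego_heading_deg + 180.0) % 360.0 - 180.0
```

The result feeds directly into the spatial audio synthesis stage, so positioning error translates into angular error in the rendered sound — which is why the slower, less accuracy-critical applications described here can tolerate GPS-grade data.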
  • Each message conveyed to the group must contain the position information together with the other transmitted data, but the ID of the user is optional in this embodiment. The ID may optionally be used to allow for an exclusive group of selected participants, rather than open communications with any users within a certain range.
  • the system may also allow for a combination of open communications and exclusive communications such as allowing open communications with any user up to a predefined range and exclusive communications with certain users up to a larger range.
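The combined open/exclusive policy can be reduced to a range-gated decision per received transmission. A hypothetical sketch (the range values and parameter names are assumptions, not from the source):

```python
def should_play(sender_id, distance_m, open_range_m=200.0,
                exclusive_ids=frozenset(), exclusive_range_m=1000.0):
    """Decide whether a received transmission is rendered to the ego-user.

    Open policy: any sender within open_range_m is played.
    Exclusive policy: whitelisted senders are played out to exclusive_range_m.
    Range values are illustrative defaults.
    """
    if distance_m <= open_range_m:
        return True
    return sender_id in exclusive_ids and distance_m <= exclusive_range_m
```

With these defaults, any user within 200 m is heard, while designated group members remain audible out to 1 km, matching the combined policy described above.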
  • an indoor positioning system (IPS) may alternatively be used to determine user positions in indoor applications.
  • Such applications may include football players or groups of people spatially dispersed amongst other groups of people that would like to have a conversation.
  • the output of the system can be recorded and integrated into or interface with video devices that are either head mounted, vehicle mounted, stationary or otherwise, to provide a spatially mapped audio track to a video stream.
  • This may be particularly beneficial in 3D video applications that would provide observers of the video with an immersive spatial audio experience.
  • each user with a spatially projected audio communication system during video streaming and/or recording can have their audio channel spatially mapped and integrated onto the video.
  • the integration of the spatially mapped audio source can be done either off-line through video editing software or on-line.
  • the on-line integration can be done through software, either on a mobile device or on the video camera itself.
  • the system can be installed on vehicles or users of vehicles allowing for communication of operators of the vehicles with spatially mapped audio information being transmitted to the vehicle operators.
  • the system can be installed on remotely operated vehicles and transmit the communications between operators in a spatially mapped manner that relates to the position of the remotely operated vehicles. Such a system would allow for more efficient coordination between drone operators flying in formation or coordinating maneuvers.
  • system components can be implemented into a monolithic integrated circuit or System on Chip (SoC) design.
  • the integrated functionalities may include, but are not limited to: the SLAM sensor radio frequency interface, a high accuracy GNSS/GPS sensor, a communication channel, a magnetic compass, an inertial measurement unit (IMU), a processing unit and an audio driver.
  • An alternative embodiment may use SLAM sensors to register the relative position of one user with respect to other users using landmarks in the vicinity of the group.
  • the distance of the user from the landmarks can be used to triangulate the location of the user with respect to the other users in the vicinity of that user, thereby obviating the need for absolute coordinates from a GPS system.
  • This configuration can be useful in indoor applications where GPS signals cannot penetrate the structure the user is located within.
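The triangulation from landmark distances mentioned above is, in 2-D, a standard trilateration problem: differencing the squared range equations against one reference landmark yields a linear system solvable by least squares. A self-contained sketch under those assumptions (pure Python, no external solver):

```python
def trilaterate(landmarks, distances):
    """Estimate a 2-D position from >= 3 landmark positions and measured ranges.

    Linearizes the range equations against the first landmark and solves the
    resulting 2x2 normal equations by hand (least squares).

    landmarks: [(x, y), ...]; distances: [d0, d1, ...] in the same units.
    """
    (x0, y0), d0 = landmarks[0], distances[0]
    # Differencing (x-xi)^2+(y-yi)^2=di^2 against landmark 0 gives A p = b.
    A, b = [], []
    for (xi, yi), di in zip(landmarks[1:], distances[1:]):
        A.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Normal equations: (A^T A) p = A^T b, expanded for the 2x2 case.
    s11 = sum(a[0] * a[0] for a in A)
    s12 = sum(a[0] * a[1] for a in A)
    s22 = sum(a[1] * a[1] for a in A)
    t1 = sum(a[0] * bi for a, bi in zip(A, b))
    t2 = sum(a[1] * bi for a, bi in zip(A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
```

Because only ranges to shared landmarks are needed, each user's position can be expressed in a common landmark-relative frame, which is what removes the need for absolute GPS coordinates indoors.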
  • the communication between users is implemented through a cellular phone with a specific software application used to log and track the users in the group.
  • the users join a group through the application and the system begins transmitting the positioning and audio data from the user to the group.
  • Additional applications for the system and method of the present invention include spatial communications for bicycle riders, skiers, snowboarders, surfers, infantry, drone operators or other groups of intercommunicating individuals where spatially locating the participants may be of importance.
  • the system of the present invention may provide an audio display including audio spatial information to a first user, and detect an audio signature of an approaching threat and update the audio interface unit of the first user with approaching threat information, including spatial information.
  • the system may track the approaching threat and continuously update the audio interface unit until the approaching threat is out of range.
  • the approaching threat may be a second user, which may transmit an informational message, including spatial information, to the first user such that the first user’s system updates the audio interface unit to include the second user based on the informational message.
  • the informational message may include at least one of: position, relative velocity, direction of travel, relative orientation to the first user, and acceleration.
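One plausible wire format for such an informational message, carrying the listed fields, might look like the following. The field names and JSON encoding are illustrative assumptions, since the source only enumerates the kinds of content:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class InformationalMessage:
    """Hypothetical wire format for the informational message described above."""
    sender_id: str
    position: tuple           # (lat, lon) in degrees
    relative_velocity: float  # m/s, closing speed toward the receiver
    heading_deg: float        # direction of travel, compass degrees
    acceleration: float       # m/s^2 along the direction of travel

    def to_wire(self) -> str:
        return json.dumps(asdict(self))

    @staticmethod
    def from_wire(payload: str) -> "InformationalMessage":
        d = json.loads(payload)
        d["position"] = tuple(d["position"])  # JSON decodes tuples as lists
        return InformationalMessage(**d)
```

On receipt, the first user's system would use the position (and optionally velocity and heading) fields to place or update the sender in the spatial audio display.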
  • The terms "comprises", "comprising", "includes", "including", "having" and their conjugates mean "including but not limited to".
  • The term "consisting of" means "including and limited to".
  • The term "consisting essentially of" means that the composition, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
  • The term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Telephone Function (AREA)

Abstract

Disclosed are a system and method for providing spatially projected audio communication between members of a group of users. A user position, and a head position and orientation of a subject user, are determined with a positioning unit and a head orientation measurement unit of the system associated with the subject user. The subject user's system receives a user position of another user, together with an associated user identifier and audio information, from the other user's system. A processing unit of the subject user's system tracks the user position of the other user and establishes a relative position of the other user with respect to the subject user, based on the received user position and user identifier, and synthesizes a spatially resolved audio signal associated with the received audio information, based on the established relative position and the determined position and orientation of the subject user. An audio interface unit audibly conveys the synthesized spatially resolved audio signal to the subject user.
EP20861623.5A 2019-09-04 2020-09-03 Système et procédé pour une communication audio projetée dans l'espace Withdrawn EP4026353A1 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16/559,899 US10917736B2 (en) 2018-09-04 2019-09-04 System and method for spatially projected audio communication
IL271810A IL271810A (en) 2020-01-02 2020-01-02 System and method for spatially projected audio communication
PCT/IL2020/050959 WO2021044419A1 (fr) 2019-09-04 2020-09-03 Système et procédé pour une communication audio projetée dans l'espace

Publications (1)

Publication Number Publication Date
EP4026353A1 true EP4026353A1 (fr) 2022-07-13

Family

ID=74852189

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20861623.5A Withdrawn EP4026353A1 (fr) 2019-09-04 2020-09-03 Système et procédé pour une communication audio projetée dans l'espace

Country Status (3)

Country Link
EP (1) EP4026353A1 (fr)
IL (1) IL271810A (fr)
WO (1) WO2021044419A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118138824B (zh) * 2024-02-22 2025-01-24 中央广播电视总台 一种交互方法、移动终端、播放终端及交互系统
GB2639006A (en) * 2024-03-06 2025-09-10 Nokia Technologies Oy A multi-participant, spatial audio service

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030223602A1 (en) * 2002-06-04 2003-12-04 Elbit Systems Ltd. Method and system for audio imaging
US8718301B1 (en) * 2004-10-25 2014-05-06 Hewlett-Packard Development Company, L.P. Telescopic spatial radio system
EP2005793A2 (fr) * 2006-04-04 2008-12-24 Aalborg Universitet Procede de technologie binaurale avec suivi de position
DE102009050667A1 (de) * 2009-10-26 2011-04-28 Siemens Aktiengesellschaft System zur Notifikation verorteter Information
EP2936829A4 (fr) * 2012-12-18 2016-08-10 Nokia Technologies Oy Appareil audio spatial
US10166466B2 (en) * 2014-12-11 2019-01-01 Elwha Llc Feedback for enhanced situational awareness
EP3870991A4 (fr) * 2018-10-24 2022-08-17 Otto Engineering Inc. Système de communication audio à sensibilité directionnelle
US10708706B1 (en) * 2019-05-07 2020-07-07 Facebook Technologies, Llc Audio spatialization and reinforcement between multiple headsets

Also Published As

Publication number Publication date
IL271810A (en) 2021-07-29
WO2021044419A1 (fr) 2021-03-11

Similar Documents

Publication Publication Date Title
US10917736B2 (en) System and method for spatially projected audio communication
CN113196795B (zh) 与设备外部的所选目标对象相关联的声音的呈现
US11586280B2 (en) Head motion prediction for spatial audio applications
US10324474B2 (en) Spatial diversity for relative position tracking
CN113168307B (zh) 设备之间的媒体交换
US11582573B2 (en) Disabling/re-enabling head tracking for distracted user of spatial audio application
KR102235902B1 (ko) 초근접 위치 검출을 위한 uwb를 활용한 증강현실 시뮬레이터 시스템
CN105987694B (zh) 识别移动设备的用户的方法和装置
US20160238692A1 (en) Accurate geographic tracking of mobile devices
WO2007112756A3 (fr) procédé de technologie binaurale avec suivi de position
US9832587B1 (en) Assisted near-distance communication using binaural cues
US11047965B2 (en) Portable communication device with user-initiated polling of positional information of nodes in a group
US11132004B2 (en) Spatial diveristy for relative position tracking
US12079006B2 (en) Spatial diversity for relative position tracking
EP4026353A1 (fr) Système et procédé pour une communication audio projetée dans l'espace
US12278919B2 (en) Voice call method and apparatus, electronic device, and computer-readable storage medium
CN102546680A (zh) 一种室内人员定位跟踪系统
US10667073B1 (en) Audio navigation to a point of interest
US20130051560A1 (en) 3-Dimensional Audio Projection
US10448193B2 (en) Providing an audio environment based on a determined loudspeaker position and orientation
US10225638B2 (en) Ear piece with pseudolite connectivity
EP4284027B1 (fr) Procédé de traitement de signal audio et dispositif électronique
CN112098949B (zh) 一种定位智能设备的方法和装置
FR3066829B1 (fr) Systeme bidirectionnel de navigation sous-marine
WO2021128287A1 (fr) Procédé et dispositif de génération de données

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220301

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20230401