WO2025207634A1 - Relay operation for multi-channel audio broadcast - Google Patents
Relay operation for multi-channel audio broadcast
- Publication number
- WO2025207634A1 (PCT/US2025/021345)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- audio
- channel
- datagram
- given
- broadcast
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/80—Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
- H04L65/611—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for multicast or broadcast
Definitions
- the present disclosure relates to wireless communication of audio from an audio source and receipt and rendering of the audio by one or more audio presentation devices.
- each audio packet may contain a respective PDU with an appended header (e.g., 16 bits) carrying useful overhead information.
- the receiving device may then read the PDUs in sequence and decode the audio for playout.
- Broadcast audio communication involves an audio source (SRC) more generally broadcasting the audio for receipt and playout by potentially multiple receiving devices, or sinks (SNKs) that are in range to receive the broadcast, which facilitates playout of the same audio by potentially multiple SNKs at once.
- Example SNK devices include a pair of earbuds (e.g., conventional earbuds, or canalphones), a set of headphones, another personal listening device, or a pair of speakers, among other possibilities.
- broadcasting of audio can allow numerous new services, such as audio communication of public-service announcements to multiple people wearing compatible earbuds within public areas (e.g., airports, train stations, or gyms), sharing of performance audio to audience members wearing compatible earbuds in a theater or other performance venue, and sharing of audio from a personal device such as a smartphone, tablet, or computer to earbuds worn by multiple friends, colleagues, or family members within range of the personal device.
- a personal device such as a smartphone, tablet, or computer to earbuds worn by multiple friends, colleagues, or family members within range of the personal device.
- Broadcasting of multi-channel audio under an example protocol occurs on one or more broadcast isochronous streams (BISs), each divided into recurring sub-intervals for carrying digitized audio with associated header information, among other data.
- the SRC may broadcast time-division-multiplexed BISs respectively for the left and right channels or may broadcast a single BIS carrying both the left and right channels, among other possibilities.
- the left-channel BIS carries a sequence of left-channel audio packets each including a left-channel PDU defining a left-audio-channel datagram (i.e., a data segment of the left channel of audio), and the right-channel BIS can carry a sequence of right-channel audio packets each including a right-channel PDU defining a respective right-audio-channel datagram (i.e., a data segment of the right channel of audio).
- that BIS carries a sequence of dual-channel audio packets, each including a PDU having a combination (e.g., concatenation) of a left-audio channel datagram and a right-audio-channel datagram.
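The dual-channel packaging just described can be illustrated with a small, hypothetical sketch. The framing (a little-endian 2-byte length prefix) and the function names below are assumptions for illustration only, not the actual PDU layout defined by the Bluetooth specifications:

```python
import struct

def pack_dual_channel_pdu(left: bytes, right: bytes) -> bytes:
    """Concatenate the left and right datagrams behind a 2-byte left-length field."""
    return struct.pack("<H", len(left)) + left + right

def unpack_dual_channel_pdu(pdu: bytes) -> tuple[bytes, bytes]:
    """Recover the (left, right) datagrams from a dual-channel PDU body."""
    (left_len,) = struct.unpack_from("<H", pdu, 0)
    return pdu[2:2 + left_len], pdu[2 + left_len:]
```

A single shared framing for the combined payload is what lets the aggregated configuration spend less airtime on overhead than two separately framed channel packets.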
- An example protocol further defines a process that enables a SNK to discover the presence of an audio broadcast stream and to determine how to receive and decode the audio data carried by that stream.
- the SRC may broadcast various interrelated advertising-control messages including one or more such messages that provide information that indicates audio stream type (e.g., context type) and BIS structure, coding, and timing.
- a SNK can thus regularly scan for and discover presence of these advertising messages and can thereby learn of the existence of a broadcast audio stream of a desired type, determine the associated BIS structure, coding, and timing, and accordingly receive, decode, and play out the broadcast audio.
- An example protocol further provides a mechanism to help address this deficiency with broadcasting of audio.
- the SRC is configured to automatically transmit each audio packet multiple times, in an effort to ensure successful receipt of the audio packet by each recipient SNK.
- This mechanism may help to achieve what an acknowledgement and retransmission scheme would otherwise achieve, but without the acknowledgment messaging.
- this mechanism is also inefficient, as the automatic retransmissions will require valuable transmission time intervals and can therefore reduce throughput.
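The trade-off just noted can be made concrete with a hypothetical calculation (the function names and example numbers are illustrative, not taken from the disclosure): with an independent per-transmission loss probability p, broadcasting each packet n times drives the probability of total loss down to p**n, but multiplies the airtime spent per packet by n.

```python
def miss_probability(p_loss: float, n_tx: int) -> float:
    """Probability that a SNK misses every one of the n_tx blind retransmissions."""
    return p_loss ** n_tx

def airtime_cost_us(n_tx: int, tx_duration_us: int) -> int:
    """Total transmission time consumed for one packet, in microseconds."""
    return n_tx * tx_duration_us
```

For example, at a 10% per-copy loss rate, three blind copies leave roughly a 0.1% miss rate while tripling the airtime, which is the inefficiency noted above.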
- when the SRC is a phone or the like and a user puts the SRC in a pocket, body absorption or other factors may result in one node of the multi-node SNK experiencing a blocked signal or otherwise poor receive audio quality while another node of the multi-node SNK experiences good receive audio quality. Issues can arise from temporary signal blockage to one of the nodes, one of the nodes having a shorter distance/path from the SRC than another node, and/or other conditions.
- the multi-node SNK will be configured to implement a relay mechanism that helps its nodes successfully receive and play out broadcast audio even when one of the nodes has poor receive quality from the SRC providing the audio broadcast.
- the disclosed mechanism will be described at times in the context of a two-node SNK, where the audio is stereo audio having left and right audio channels, and where one node of the SNK is configured to receive and play out the left channel of audio and the other node of the SNK is configured to receive and play out the right channel of audio.
- each node of the SNK is configured to wirelessly receive over the air from the SRC the SRC’s broadcast of both the left and right audio channels, and the nodes are further configured to engage in signaling with each other to provide missing audio data when one of the nodes fails to successfully receive an expected audio datagram of its channel of audio.
- the nodes may be configured such that, if one node fails to successfully receive a given audio datagram of its respective audio channel, that node may inform the other node of the failure, and the other node may respond by providing the reporting node with the missing audio datagram.
- this relay arrangement can be unidirectional or bidirectional, and the relaying can work with various BIS configurations.
- the relay arrangement can involve one of the nodes being configured to provide the other node with a missing audio datagram when necessary, or the relay arrangement can involve each of the nodes being configured to provide the other of the nodes with a missing audio datagram when necessary.
- this relaying can work in a scenario where the left and right audio channels are broadcast in separate BISs, and also in a scenario where the left and right audio channels are aggregated together and broadcast in a single BIS. Other variations are possible as well.
- the method can apply for a given one of the audio-channel-datagram groups including a given first-audio-channel datagram and a given second-audio-channel datagram.
- the method involves a first-audio-channel device attempting to receive from the multi-channel-audio broadcast source both the given first-audio-channel datagram and the given second-audio-channel datagram. Further, the method involves a second-audio-channel device attempting to receive from the multi-channel-audio broadcast source at least the given second-audio-channel datagram.
- the method involves, upon expiration of a predefined time period for multiple automated broadcasts of both the given first-audio-channel datagram and the given second-audio-channel datagram, the first-audio-channel device and second-audio-channel device engaging in a handshake process with each other according to which the first-audio-channel device is configured to determine whether the second-audio-channel device failed to successfully receive from the multi-channel-audio broadcast source the given second-audio-channel datagram and, if so, responsively provide to the second-audio-channel device the given second-audio-channel datagram.
- a multi-device system configured to process multi-channel audio broadcast from a multi-channel-audio broadcast source serially as a sequence of audio-channel-datagram groups each including a respective first-audio-channel datagram and a respective second-audio-channel datagram, a given one of the audio-channel-datagram groups including a given first-audio-channel datagram and a given second-audio-channel datagram.
- the multi-device system includes a first-audio-channel device that is configured to receive and play out a first audio channel of the multi-channel audio broadcast from the multi-channel-audio broadcast source and a second-audio-channel device that is configured to receive and play out a second audio channel of the multi-channel audio broadcast from the multi-channel-audio broadcast source.
- the first-audio-channel device is configured to attempt to receive from the multi-channel-audio broadcast source both the given first-audio-channel datagram and the given second-audio-channel datagram.
- the second-audio-channel device is configured to attempt to receive from the multi-channel-audio broadcast source at least the given second-audio-channel datagram.
- the first-audio-channel device and second-audio-channel device are configured to engage in a handshake process with each other upon expiration of a predefined time period for multiple automated broadcasts of both the given first-audio-channel datagram and the given second-audio-channel datagram.
- the first-audio-channel device determines whether the second-audio-channel device failed to successfully receive from the multi-channel-audio broadcast source the given second-audio-channel datagram and, if so, responsively provides to the second-audio-channel device the given second-audio-channel datagram.
- a first-audio-channel device configured to process multi-channel audio broadcast from a multi-channel-audio broadcast source serially as a sequence of audio-channel-datagram groups each including a respective first-audio-channel datagram and a respective second-audio-channel datagram, a given one of the audio-channel-datagram groups including a given first-audio-channel datagram and a given second-audio-channel datagram.
- the first-audio-channel device is configured to receive and play out a first audio channel of the multi-channel audio broadcast from the multi-channel-audio broadcast source and is further configured to interwork with a second-audio-channel device that is configured to receive and play out a second audio channel of the multi-channel audio broadcast from the multi-channel-audio broadcast source.
- the first-audio-channel device includes (i) a wireless communication interface, (ii) an audio-presentation interface, (iii) a processor, (iv) non-transitory data storage, and (v) program instructions stored in the non-transitory data storage and executable by the processor to cause the first-audio-channel device to carry out operations.
- the first-audio-channel device determines whether the second-audio-channel device failed to successfully receive from the multi-channel-audio broadcast source the given second-audio-channel datagram and, if so, responsively provides to the second-audio-channel device the given second-audio-channel datagram.
- Figure 2 is a simplified illustration of how broadcast-receiver devices can have a communication link with each other.
- Figure 3 is a simplified illustration of broadcast isochronous group timing.
- Figure 4 is a message flow diagram illustrating example operations.
- Figure 5 is a flow chart illustrating an example method.
- Figure 6 is a simplified block diagram of an example broadcast receiver device.
- Figure 7 is a simplified block diagram of an example broadcast source device.
- Figure 1 is a simplified illustration of an example scenario in which features of the present disclosure can be implemented.
- Figure 1 illustrates two example pairs of earbuds 100, 102, with each pair of earbuds functioning as a respective multi-node broadcast SNK to receive and play out a stereo audio broadcast 104 from a smartphone 106 as an example SRC.
- the audio broadcast 104 from the smartphone 106 includes a left audio channel 108 and a right audio channel 110.
- Each of the illustrated pairs of earbuds 100, 102 may be configured to be worn by a respective person and to receive from the smartphone 106 and play out the left and right audio channels 108, 110 of the stereo audio broadcast 104 for listening by the person.
- earbuds 100 include a left earbud 112 that may be configured to receive from the smartphone 106 and play out the left channel 108 of the audio broadcast 104 and a right earbud 114 that may be configured to receive from the smartphone 106 and play out the right channel 110 of the audio broadcast 104.
- earbuds 102 include a left earbud 116 that may be configured to receive from the smartphone and play out the left channel of the audio broadcast 104 and a right earbud 118 that may be configured to receive from the smartphone 106 and play out the right channel 110 of the audio broadcast 104.
- Smartphone 106 can be configured to provide the stereo audio broadcast 104 in accordance with an example audio broadcast protocol such as Bluetooth Low Energy (BLE) with the Basic Audio Profile (BAP), for receipt and play out by interested SNKs in wireless range of the smartphone 106, such as by earbuds 100 and earbuds 102.
- Each BIS defines a physical layer timing structure for carrying the audio data transmissions spaced by a constant time interval, or isochronous interval.
- the SRC divides the audio stream into a sequence of audio packets as noted above and, in each successive isochronous interval, transmits a next one of the audio packets.
- the BIG may also define spacing between these sub-intervals, at least in part to enable a receiving SNK to switch between BISs.
- a BIG may thus likewise have some maximum amount of time that it can take for the SRC to engage in the multiple transmissions of a stereo pair of audio packets, or of an audio packet of a given audio channel.
- BLE supports broadcasting of stereo audio with at least either of two defined audio configurations: (i) audio configuration 13 (AC13), according to which the SRC transmits the left and right audio channels in separate but time-division-multiplexed BISs, or (ii) audio configuration 14 (AC14), according to which the SRC transmits the left and right audio channels aggregated together in a single BIS.
- the SRC may transmit the left channel of audio as a sequence of left-channel packets each containing a left-channel audio datagram and may transmit the right channel of audio as a sequence of right-channel packets each containing a respective right-channel audio datagram.
- the SRC may transmit a single sequence of packets each containing both a left-channel audio datagram and a right-channel audio datagram.
- AC 14 may be more efficient, as it can reduce the level of overhead by aggregating left-channel and right-channel audio data together with a shared header rather than requiring separate headers for the left audio data and the right audio data.
- the nodes can have a defined synchronization point per isochronous interval, which is a common point in time by which every node of the SNK would have had an opportunity to receive the audio packet destined to it in that interval.
- the SRC can configure a presentation delay, as a time delay that all of the nodes of the SNK should wait after the synchronization point before playing out their respectively received audio packet, allowing enough time for each node to decode its received audio PDU so that the nodes can play out their associated audio at the same time as each other.
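A minimal sketch of that playout-timing rule, with assumed names (the disclosure does not define these functions): each node schedules playout at the common synchronization point plus the configured presentation delay, so the nodes render the same interval's audio simultaneously regardless of which sub-interval each one decoded.

```python
def playout_time_us(sync_point_us: int, presentation_delay_us: int) -> int:
    """Common playout instant shared by every node of the SNK."""
    return sync_point_us + presentation_delay_us

def can_play_in_time(decode_done_us: int, sync_point_us: int,
                     presentation_delay_us: int) -> bool:
    """True if a node finishes decoding before the shared playout instant."""
    return decode_done_us <= playout_time_us(sync_point_us, presentation_delay_us)
```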
- the SRC can periodically broadcast extended advertising (ADV_EXT) messages, auxiliary advertising (AUX_ADV_IND) messages, and periodic advertising (PA) messages.
- ADV_EXT messages can contain a pointer pointing to and thus allowing a SNK to find the AUX_ADV_IND messages.
- AUX_ADV_IND messages can include a universally unique identifier (UUID) of a broadcast audio announcement service and a pointer pointing to and thus allowing the SNK to find the PA messages.
- the PA messages may then carry additional controller advertising data (ACAD) that includes BIG information (BIGInfo), which defines the BIG structure including one or more BISs, indicates the interval structure of each BIS, defines an applicable audio configuration (e.g., AC13 or AC14), and points to and thus allows the SNK to find a next periodic BIS interval per BIS. The ACAD also includes broadcast audio stream endpoint (BASE) information defining details about each BIS in the BIG, such as presentation delay, codec configuration, and metadata regarding the content in the audio stream.
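The discovery chain above can be pictured with a toy data model (dictionaries standing in for parsed advertising messages; this is not a Bluetooth API): the SNK follows the pointer in an ADV_EXT message to the AUX_ADV_IND message, then follows that message's pointer to the PA train, whose ACAD carries the BIGInfo and BASE data.

```python
def discover_acad(adv_ext: dict) -> dict:
    """Follow ADV_EXT -> AUX_ADV_IND -> PA and return the PA's ACAD payload."""
    aux_adv_ind = adv_ext["aux_ptr"]     # pointer to the auxiliary advertisement
    pa = aux_adv_ind["sync_info"]        # pointer to the periodic advertising train
    return pa["acad"]                    # BIGInfo + BASE live here
```

With the ACAD in hand, the SNK has the BIS structure, coding, and timing it needs to receive and decode the broadcast.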
- a multi-node SNK will be configured to apply an intra-SNK relay function, to help increase the likelihood of success overall of the SNK properly playing out a multi-channel audio broadcast from a SRC.
- the relay function applies in a scenario where one node of the SNK is configured to receive and play out one audio channel of the audio broadcast and another node of the SNK is configured to receive and play out another audio channel of the audio broadcast: if one of the nodes does not successfully receive from the SRC a given audio packet of its audio channel, another node that successfully received that audio packet from the SRC can forward the audio packet to the node that failed to receive it, in order to facilitate timely playout of the audio packet.
- the left earbud 112 may be configured to receive from the smartphone 106 and play out the left audio channel 108 of the audio broadcast 104
- the right earbud 114 may be configured to receive from the smartphone 106 and play out the right audio channel 110 of the audio broadcast 104.
- left and right earbuds 112, 114 may have an established connection with each other through which they can exchange control signaling and data.
- Figure 2 illustrates this connection as a wireless connection 120.
- This wireless connection 120 between the earbuds 112, 114 may be a Bluetooth asynchronous connection-oriented logical transport session (ACL) link or another form of wireless connection.
- the earbuds may be pre-configured with parameters defining this connection, or the earbuds may engage in control signaling with each other to dynamically create the connection.
- one of the earbuds may broadcast an inquiry message requesting to connect, the other earbud may discover the inquiry message and send an inquiry response, and the earbuds may then engage in further signaling with each other to establish the ACL link.
- one of the two earbuds may control this process, to discover the presence of the audio broadcast 104, and may inform the other earbud of the audio broadcast 104 so that the two earbuds can then proceed to receive and play out audio of the audio broadcast.
- one earbud may discover the advertising messaging and provide the other earbud with a pointer to the PA messaging, and both earbuds may then read the PA messaging to learn the operational parameters of the audio broadcast 104.
- the earbuds 100 can then start to receive and play out their respective audio channels of the audio broadcast 104 in accordance with the determined operational parameters of the audio broadcast 104.
- the left earbud 112 can receive and play out the left audio channel 108 of the broadcast and the right earbud 114 can receive and play out the right audio channel 110 of the broadcast.
- the audio broadcast 104 may provide each successive pair of audio-channel datagrams, i.e., left-channel datagram plus right-channel datagram, in a respective isochronous interval, with specifics depending on various factors, such as whether the broadcast provides the audio channels in separate respective BISs or rather together in a single BIS as noted above. For instance, in each isochronous interval, the audio broadcast 104 may provide a left-channel packet containing a left-channel datagram and a right-channel packet containing a right-channel datagram, or the audio broadcast may provide a single packet containing a combination of left-channel datagram and right-channel datagram.
- the audio broadcast may be configured to include multiple transmissions of each pair of audio-channel datagrams per isochronous interval and, as noted above, the BIG may define a total duration through completion of this transmission per isochronous interval.
- FIG. 3 illustrates this BIG timing by way of example.
- an example BIG configuration divides time into a series of isochronous intervals 300 and divides each isochronous interval 300 into a number of sub-intervals 302 (with possible inter-sub-interval spacing, not shown).
- each of multiple sub-intervals would carry a respective instance of an audio packet, which may be a left-audio-channel packet, a right-audio-channel packet, or a combined left+right audio packet, among other possibilities.
- the BIG configuration would thus define a time period P by which the transmission of audio (e.g., both channels of audio, or each channel of audio) would be finished per isochronous interval, possibly measured from the start of the isochronous interval, among other possibilities.
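As a hypothetical arithmetic sketch of the period P shown in Figure 3 (parameter names assumed): if an isochronous interval carries n_tx sub-intervals of duration sub_us each, separated by gap_us of spacing, the transmissions finish at the following offset from the interval start.

```python
def transmissions_done_us(n_tx: int, sub_us: int, gap_us: int) -> int:
    """Offset P from the interval start by which all n_tx packet copies are sent."""
    return n_tx * sub_us + (n_tx - 1) * gap_us
```

For instance, three 500 µs sub-intervals with 100 µs spacing would complete 1700 µs into the interval, leaving the remainder of the isochronous interval free for other transmissions.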
- the left and right earbuds 112, 114 may thus each receive, decode, and play out their respective audio channel from the audio broadcast 104, possibly using the automatic retransmissions to help ensure successful receipt.
- the left earbud 112 may read the first instance of an audio packet that includes left-channel audio and may engage in a CRC analysis to determine if the left earbud 112 successfully received that audio packet. If the left earbud 112 thereby concludes that it did not successfully receive that instance of the audio packet, the left earbud 112 may then repeat this process for the next instance of the audio packet in the isochronous interval, and so forth for as many transmissions of the audio packet as the BIG configuration specifies per isochronous interval.
- upon successfully receiving an instance of the audio packet, the left earbud 112 may then decode the data in that packet and play out the left-channel audio of that audio packet. The left earbud 112 may then proceed to the next isochronous interval and repeat this process.
- the right earbud 114 may read the first instance of an audio packet that includes right-channel audio and may engage in a CRC analysis to determine if the right earbud 114 successfully received that audio packet. If the right earbud 114 thereby concludes that it did not successfully receive that instance of the audio packet, the right earbud 114 may similarly repeat this process for the next instance of the audio packet in the isochronous interval, and so forth for as many transmissions of the audio packet as the BIG configuration specifies per isochronous interval.
- the right earbud 114 may then decode the data in that packet and play out the right-channel audio of that audio packet. The right earbud 114 may then proceed to the next isochronous interval and repeat this process.
- as the earbuds engage in this process, there may be isochronous intervals in which one earbud does not successfully receive the audio of its audio channel by the expiration of the time period (e.g., P) allowed for the multiple transmissions of the audio in that isochronous interval.
- the present relay arrangement may help to address that issue, by enabling the other earbud to provide the missing audio to the earbud that failed to receive the audio.
- the relaying can be either unidirectional or bidirectional, and can work with either a multi-BIS configuration or single-BIS configuration.
- At least one of the two earbuds 112, 114, operating as a primary earbud, is configured to receive both the left and right audio channels 108, 110 broadcast from the smartphone 106, and the other earbud, operating as a secondary earbud, is configured to receive at least its own audio channel broadcast from the smartphone 106.
- in an example multi-BIS scenario, (i) for the left channel, the left earbud 112 reads the left-channel-BIS sub-intervals and performs CRC analysis as discussed above, until hopefully successfully receiving the left-audio-channel datagram and (ii) for the right channel, the left earbud 112 reads the right-channel-BIS sub-intervals and performs CRC analysis as discussed above, until hopefully successfully receiving the right-audio-channel datagram.
- whereas, in an example single-BIS scenario, the left earbud 112 reads each BIS sub-interval and performs CRC analysis as discussed above, until hopefully successfully receiving both the left-audio-channel datagram and the right-audio-channel datagram.
- the right earbud 114 (the example non-primary earbud) can attempt to receive at least its own channel’s datagram, i.e., the right-audio-channel datagram. For instance, in an example multi-BIS scenario, the right earbud 114 reads the right-channel-BIS sub-intervals and performs CRC analysis as discussed above, until hopefully successfully receiving the right-audio-channel datagram. Whereas, in an example single-BIS scenario, the right earbud 114 reads each BIS sub-interval and performs CRC analysis as discussed above, until hopefully successfully receiving at least the right-audio-channel datagram.
- the earbuds may instead proceed to the handshake process without waiting for expiration of this time period.
- the time period P can be defined in another manner, such as specifically in relation to the later of the two channels of audio to be received, among other possibilities.
- the left earbud 112 can thus determine whether the right earbud 114 failed to successfully receive the right-audio-channel datagram. And in response to thereby determining that the right earbud 114 failed to successfully receive the right-audio-channel datagram, the left earbud 112 can then provide that right-audio-channel datagram to the right earbud 114 to enable the right earbud 114 to process playout of that right-audio-channel datagram. For instance, the left earbud can send over the wireless connection 120 to the right earbud a copy of a packet successfully received by the left earbud 112 that contains the right-audio-channel datagram. This process assumes of course that the left earbud 112 successfully receives the right-audio-channel datagram.
- FIG. 4 is a message flow diagram illustrating how this process can work in practice.
- the left earbud 112 attempts to receive both left and right audio datagrams in a given isochronous interval and the right earbud 114 attempts to receive at least the right audio datagram in that isochronous interval.
- the earbuds 112, 114 engage in the handshake process with each other, according to which, if the right earbud 114 failed to receive the right audio datagram, the left earbud 112 provides the right earbud 114 with that right audio datagram.
- the left earbud 112 attempts to receive both the left-audio-channel datagram and the right-audio-channel datagram as described above, and the right earbud 114 also attempts to receive both the left-audio-channel datagram and the right-audio-channel datagram in the same manner.
- upon expiration of the predefined time period allowed for multiple automated broadcasts of both the left-audio-channel datagram and the right-audio-channel datagram (if applicable), the earbuds then engage in a bidirectional handshake process with each other.
- the left earbud 112 operates as discussed above to determine whether the right earbud 114 failed to successfully receive the right-audio-channel datagram and, if so, provides the right earbud 114 with that right-audio-channel datagram (assuming the left earbud 112 successfully received that right-audio-channel datagram). Further, as part of the example handshake process, the right earbud 114 also determines whether the left earbud 112 failed to successfully receive the left-audio-channel datagram and, if so, provides the left earbud 112 with that left-audio-channel datagram (assuming the right earbud 114 successfully received that left-audio-channel datagram).
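A minimal sketch of that bidirectional exchange (data shapes and names assumed, not from the disclosure): each side's receive results are modeled as a mapping from channel to the datagram bytes it captured, with None marking a failed receive, and each earbud fills its missing channel from the peer's copy when one exists.

```python
def relay_exchange(left_rx: dict, right_rx: dict) -> tuple:
    """Return the (left, right) datagrams the two earbuds end up with for playout."""
    left_out = left_rx.get("L")            # left earbud's own-channel result
    right_out = right_rx.get("R")          # right earbud's own-channel result
    if left_out is None:                   # left earbud missed its datagram...
        left_out = right_rx.get("L")       # ...so the right earbud relays a copy
    if right_out is None:                  # and symmetrically for the right earbud
        right_out = left_rx.get("R")
    return left_out, right_out
```

Either relay only happens when the peer actually holds a good copy; if both earbuds missed a datagram, it simply stays missing, as in the broadcast-only case.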
- the present relaying to help facilitate multi-channel broadcast audio can be applied for any of various audio services, including but not limited to voice call communication (e.g., hands-free-profile (HFP) communication) and media playback (e.g., advanced audio distribution profile (A2DP) communication).
- Figure 5 is a flow chart illustrating more specifically an example method that can be carried out in accordance with the present disclosure to help process multi-channel audio broadcast from a multi-channel-audio broadcast source serially as a sequence of audio-channel-datagram groups each including a respective first-audio-channel datagram and a respective second-audio-channel datagram.
- the method can be carried out for instance, as to a given one of the audio-channel-datagram groups (e.g., in a given broadcast isochronous interval) that includes a given first-audio-channel datagram and a given second-audio-channel datagram.
- the method includes a first-audio-channel device attempting to receive from the multi-channel-audio broadcast source both the given first-audio-channel datagram and the given second-audio-channel datagram. Further, at block 502, the method includes a second-audio-channel device attempting to receive from the multi-channel-audio broadcast source at least the given second-audio-channel datagram. As shown, these operations occur in parallel.
- the method then includes, upon expiration of a predefined time period for multiple automated broadcasts of both the given first-audio-channel datagram and the given second-audio-channel datagram, the first-audio-channel device and second-audio-channel device engaging in a handshake process with each other according to which the first-audio-channel device is configured to determine whether the second-audio-channel device failed to successfully receive from the multi-channel-audio broadcast source the given second-audio-channel datagram and, if so, to responsively provide to the second-audio-channel device the given second-audio-channel datagram.
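The method's blocks can be sketched end-to-end for one datagram group as follows (all names are illustrative stand-ins): the second-audio-channel device keeps its own successfully received datagram, and otherwise the first-audio-channel device supplies its copy, if any, during the handshake.

```python
def resolve_second_channel(first_rx: dict, second_rx: dict):
    """
    first_rx: the first-audio-channel device's results, e.g.
              {"first": b"...", "second": b"..."} with None marking failures.
    second_rx: the second-audio-channel device's result for its own channel.
    Returns the second-audio-channel datagram the second device plays out.
    """
    own_copy = second_rx.get("second")
    if own_copy is not None:
        return own_copy                    # the device's own reception succeeded
    return first_rx.get("second")          # handshake relay from the first device
```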
- the multi-channel audio broadcast from the multi-channel-audio broadcast source can be in a single BIS or can be in multiple BISs, with one BIS per audio channel, among other possibilities.
- the multi-channel audio can comprise voice-call audio and/or media-playback audio, among other possibilities.
- Figure 6 is a simplified block diagram illustrating components of an example broadcast-receiver device that can be configured to carry out various operations described herein.
- where the broadcast-receiver device is one earbud of a pair of earbuds, this block diagram may represent components of a given earbud of the pair.
- where the broadcast-receiver device is one speaker of a pair of speakers, this block diagram may represent components of a given speaker of the pair.
- Other examples are possible as well.
- the wireless communication interface 600 can comprise one or more modules (e.g., one or more chipsets) supporting wireless communication between the device and one or more other devices, such as wireless communication with another device (such as another earbud for instance) and wireless communication with the audio broadcast source.
- the processor 604 can comprise one or more general purpose processors (e.g., one or more microprocessors, etc.) and/or one or more special-purpose processors (e.g., digital signal processors, application-specific integrated circuits, etc.), possibly including processors of the wireless communication interface 600 and the audio-presentation interface 602, among other possibilities.
- the non-transitory data storage 606 can comprise one or more volatile and/or non-volatile storage components (e.g., optical, magnetic, or flash storage, RAM, ROM, EPROM, EEPROM, cache memory, and/or other computer-readable media, etc.), possibly integrated in whole or in part with the processor 604. As shown, the non-transitory data storage 606 may then store program instructions 614, which may be executable by the processor 604 to carry out various operations described herein.
- the present disclosure also contemplates a multi-device system, which may include a group of multiple such devices, such as a pair of earbuds, speakers, or the like.
- Figure 7 is a simplified block diagram of an example multi-channel-audio broadcast-source device, such as but not limited to a smartphone.
- the example device includes a wireless communication interface 700, a processor 702 and non-transitory data storage 704. These components can be integrated together and/or communicatively linked together in various ways. For instance, the components can be linked together through a system bus, network, or other connection mechanism 706. Alternatively, various integrations and other arrangements are possible.
- the wireless communication interface 700 can comprise one or more modules (e.g., one or more chipsets) supporting wireless communication according to a suitable wireless audio broadcast communication protocol.
- the communication protocol can be Bluetooth, including BLE with BAP as discussed above, so the wireless communication interface 700 can comprise a chipset configured to support Bluetooth communication and particularly BLE communication with BAP.
- the wireless communication interface 700 can include a radio 708 configured to encode and modulate outgoing data communications for air-interface transmission and to demodulate and decode incoming data communications as well as an antenna structure 710 supporting air interface transmission and reception, among other components.
Abstract
A method for processing of multi-channel audio broadcast from a multi-channel-audio broadcast source serially as a sequence of audio-channel-datagram groups each including a respective first-audio-channel datagram and a respective second-audio-channel datagram. As to a given group including a given first-audio-channel datagram and a given second-audio-channel datagram, a first-audio-channel device attempts to receive both the given first-audio-channel datagram and the given second-audio-channel datagram, and a second-audio-channel device attempts to receive at least the second-audio-channel datagram. Further, upon expiration of a predefined time period for multiple automated broadcasts of both the first-audio-channel datagram and the second-audio-channel datagram, the devices engage in a handshake process with each other according to which the first device is configured to determine whether the second device failed to successfully receive from the multi-channel-audio broadcast source the second-audio-channel datagram and, if so, to responsively provide to the second device the second-audio-channel datagram.
Description
Relay Operation for Multi-Channel Audio Broadcast
REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to U.S. Provisional Patent Application No. 63/571,188, filed March 28, 2024, the entirety of which is hereby incorporated by reference.
BACKGROUND
[0002] The present disclosure relates to wireless communication of audio from an audio source and receipt and rendering of the audio by one or more audio presentation devices.
[0003] Audio communication can be carried out largely in accordance with any of various wireless communication protocols. Without limitation, an example protocol is BLUETOOTH™, and more particularly Bluetooth Low Energy (BLE) with the Basic Audio Profile (BAP), as defined by the Bluetooth Special Interest Group (SIG). Other examples, including but not limited to WI-FI and ZIGBEE, are possible as well.
[0004] In order to communicate a stream of audio under an example protocol, an audio source encodes the audio using an audio codec (e.g., an LC3 codec), divides the encoded audio into a sequence of service data units (SDUs), translates the SDUs into protocol data units (PDUs), and transmits the sequence of PDUs over a radio-frequency (RF) air interface for receipt, decoding, and playout of the audio by one or more receiving devices. Further, under an example protocol, the transport of the PDU sequence occurs on an “isochronous stream,” which is divided over time into defined intervals and sub-intervals for carrying audio packets and associated information. In particular, each audio packet may contain a respective PDU with an appended header (e.g., 16-bits) carrying useful overhead information. As a receiving device receives this sequence of audio packets, the receiving device may then read the PDUs in sequence and decode the audio for playout.
SUMMARY
[0005] Disclosed aspects relate to multi-channel (e.g., stereo) audio communication where an audio source wirelessly transmits multi-channel audio and where each of one or more receiving devices wirelessly receives and plays out the audio in real time.
[0006] This communication of audio can be unicast or broadcast.
[0007] Unicast audio communication involves an audio source transmitting the audio specifically to an intended receiving device for playout. In this unicast arrangement, the audio source and receiving device typically have established control communication with each
other, to support the ongoing audio transmission. For instance, the audio source and receiving device regularly engage in an acknowledgement and retransmission scheme to help ensure that the receiving device successfully receives audio packets. With such a scheme, the receiving device sends to the audio source a positive or negative acknowledgement respectively for each audio packet transmission from the audio source (e.g., based on a cyclic-redundancy-check (CRC) analysis), and the audio source retransmits an audio packet in response to receiving a negative acknowledgement (or not receiving a positive acknowledgement) for the audio packet.
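As a non-limiting illustration (not part of any claimed protocol), the unicast acknowledgement-and-retransmission loop described above can be sketched in Python. The CRC-32 check and the `channel` callable that models corruption are assumptions for demonstration only; a real link layer uses the CRC defined by its own specification:

```python
import zlib

def crc_ok(received: bytes, expected_crc: int) -> bool:
    # CRC-32 stands in here for the protocol's own CRC check.
    return zlib.crc32(received) == expected_crc

def deliver(packet: bytes, channel) -> int:
    """Model the ACK/NACK loop: the source resends the packet until the
    sink's CRC check passes (i.e., until a positive acknowledgement),
    and returns the number of transmissions that were needed."""
    expected = zlib.crc32(packet)
    attempts = 0
    while True:
        attempts += 1
        received = channel(packet)      # the channel may corrupt the packet
        if crc_ok(received, expected):  # sink would send ACK
            return attempts
        # otherwise the sink would send NACK, prompting a retransmission
```

With a clean channel, `deliver` returns after a single transmission; with a channel that corrupts the first copies, it keeps retransmitting until one copy survives.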
[0009] Broadcast audio communication, on the other hand, involves an audio source (SRC) more generally broadcasting the audio for receipt and playout by potentially multiple receiving devices, or sinks (SNKs), that are in range to receive the broadcast, which facilitates playout of the same audio by potentially multiple SNKs at once. Example SNK devices include a pair of earbuds (e.g., conventional earbuds, or canalphones), a set of headphones, another personal listening device, or a pair of speakers, among other possibilities.
[0009] As such, broadcasting of audio can allow numerous new services, such as audio communication of public-service announcements to multiple people wearing compatible earbuds within public areas (e.g., airports, train stations, or gyms), sharing of performance audio to audience members wearing compatible earbuds in a theater or other performance venue, and sharing of audio from a personal device such as a smartphone, tablet, or computer to earbuds worn by multiple friends, colleagues, or family members within range of the personal device.
[0010] Broadcasting of multi-channel audio under an example protocol occurs on one or more broadcast isochronous streams (BISs), each divided into recurring sub-intervals for carrying digitized audio with associated header information, among other data.
[0011] Considering stereo audio for instance, to facilitate broadcasting of left and right audio channels concurrently for playout by one or more recipient SNKs, the SRC may broadcast time-division-multiplexed BISs respectively for the left and right channels or may broadcast a single BIS carrying both the left and right channels, among other possibilities. With separate BISs, the left-channel BIS carries a sequence of left-channel audio packets each including a left-channel PDU defining a left-audio-channel datagram (i.e., a data segment of the left channel of audio), and the right-channel BIS can carry a sequence of right-channel audio packets each including a right-channel PDU defining a respective right-audio-channel datagram (i.e., a data segment of the right channel of audio). Whereas, if there is a single BIS, that BIS carries a sequence of dual-channel audio packets, each including a PDU having a
combination (e.g., concatenation) of a left-audio-channel datagram and a right-audio-channel datagram.
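For illustration only, the single-BIS combination of per-channel datagrams can be sketched as a pack/unpack pair. The 2-byte length prefix used here to delimit the two datagrams is purely a hypothetical framing choice; the actual PDU layout is defined by the Bluetooth specification:

```python
import struct

def pack_dual_channel_pdu(left: bytes, right: bytes) -> bytes:
    """Concatenate a left and a right audio datagram into one PDU,
    prefixed by the left datagram's length so a sink can split them.
    (Illustrative framing only, not actual BAP framing.)"""
    return struct.pack("<H", len(left)) + left + right

def unpack_dual_channel_pdu(pdu: bytes):
    """Recover the two per-channel datagrams from a combined PDU."""
    (left_len,) = struct.unpack_from("<H", pdu)
    return pdu[2:2 + left_len], pdu[2 + left_len:]
```

A round trip through these two functions returns the original pair of datagrams unchanged.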
[0012] An example protocol further defines a process that enables a SNK to discover the presence of an audio broadcast stream and to determine how to receive and decode the audio data carried by that stream. For instance, in accordance with the protocol, the SRC may broadcast various interrelated advertising-control messages including one or more such messages that provide information that indicates audio stream type (e.g., context type) and BIS structure, coding, and timing. A SNK can thus regularly scan for and discover presence of these advertising messages and can thereby learn of the existence of a broadcast audio stream of a desired type, determine the associated BIS structure, coding, and timing, and accordingly receive, decode, and play out the broadcast audio.
[0013] With broadcasting of audio, in contrast with unicasting of audio, there may be no control communication to the SRC from each recipient SNK, since the SRC would simply advertise and broadcast audio in a standardized manner for receipt and playout by any and all interested SNKs within range.
[0014] Unfortunately, however, this presents a technical challenge in terms of ensuring successful receipt of the broadcast audio by each SNK, as it would not be possible to apply an acknowledgement and retransmission scheme.
[0015] An example protocol further provides a mechanism to help address this deficiency with broadcasting of audio. In particular, according to the example protocol, the SRC is configured to automatically transmit each audio packet multiple times, in an effort to ensure successful receipt of the audio packet by each recipient SNK. This mechanism may help to achieve what an acknowledgement and retransmission scheme would otherwise achieve, but without the acknowledgment messaging. On the other hand, this mechanism is also inefficient, as the automatic retransmissions will require valuable transmission time intervals and can therefore reduce throughput.
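The automatic-repetition mechanism described above can be sketched, under simplifying assumptions, as the SRC emitting the same packet once per sub-interval and a SNK succeeding if any one copy gets through; the function names and loss model are illustrative:

```python
def broadcast_interval(packet: bytes, num_subevents: int) -> list:
    """The SRC automatically transmits the same audio packet once per
    sub-interval, giving every SNK num_subevents chances to receive it."""
    return [packet] * num_subevents

def try_receive(transmissions, corrupted_copies):
    """A SNK succeeds if at least one repeated transmission survives;
    corrupted_copies models which copies the SNK failed to decode."""
    for index, copy in enumerate(transmissions):
        if index not in corrupted_copies:
            return copy
    return None  # all repetitions lost: no further recourse at this layer
```

This also makes the stated inefficiency concrete: every repetition occupies a sub-interval on air whether or not any SNK actually needed it.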
[0016] With or without this automatic retransmission of audio packets, there is also a related technical issue when it comes to a multi-node SNK device, such as a pair of earbuds, receiving and processing a multi-channel audio broadcast. A multi-node SNK device includes multiple nodes, e.g., as separate sub-devices (e.g., separate earbuds), each arranged to receive and playout a respective audio channel of multiple audio channels broadcast from a SRC. In many cases, as with earbuds for instance, the nodes of a multi-node SNK are very small and have relatively poor antenna performance. Also, in a scenario where the SRC is a phone or the
like and a user puts the SRC in a pocket, body absorption or other factors may result in one node of the multi-node SNK experiencing a blocked signal or otherwise poor receive audio quality and another node of the multi-node SNK experiencing good receive audio quality. Issues can arise from temporary signal blockage to one of the nodes, one of the nodes having a shorter distance/path from the SRC than another node, and/or other conditions.
[0017] The present disclosure provides a technical advance that may help to address these technical issues. In accordance with the disclosure, the multi-node SNK will be configured to implement a relay mechanism that helps its nodes successfully receive and play out broadcast audio even when one of the nodes has poor receive quality from the SRC providing the audio broadcast.
[0018] For simplicity, the disclosed mechanism will be described at times in the context of a two-node SNK, where the audio is stereo audio having left and right audio channels, and where one node of the SNK is configured to receive and play out the left channel of audio and the other node of the SNK is configured to receive and play out the right channel of audio.
[0019] In that example context, each node of the SNK is configured to wirelessly receive over the air from the SRC the SRC’s broadcast of both the left and right audio channels, and the nodes are further configured to engage in signaling with each other to provide missing audio data when one of the nodes fails to successfully receive an expected audio datagram of its channel of audio. In particular, the nodes may be configured such that, if one node fails to successfully receive a given audio datagram of its respective audio channel, that node may inform the other node of the failure, and the other node may respond by providing the reporting node with the missing audio datagram.
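As a rough, non-limiting sketch of the relay logic just described, each node can be modeled as holding the datagrams it received for its own channel plus any copies it overheard of the peer's channel; the class and function names here are hypothetical:

```python
class Node:
    """Minimal model of one earbud: datagrams received directly for its
    own channel, plus overheard copies of the peer node's channel."""
    def __init__(self):
        self.own = {}   # sequence number -> datagram for this node's channel
        self.peer = {}  # sequence number -> overheard datagram for the peer's channel

def handshake(first: Node, second: Node, seq: int) -> str:
    """After the broadcast window for datagram `seq` closes, the first
    node checks whether the second node is missing that datagram and
    relays its overheard copy over the inter-node link when it can."""
    if seq in second.own:
        return "ok"                        # second node already has it
    if seq in first.peer:
        second.own[seq] = first.peer[seq]  # relay the missing datagram
        return "relayed"
    return "lost"                          # neither node received it
```

The "lost" branch corresponds to the residual case where the relay cannot help because the relaying node also failed to receive the datagram from the SRC.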
[0020] Furthermore, this relay arrangement can be unidirectional or bidirectional, and the relaying can work with various BIS configurations. For instance, the relay arrangement can involve one of the nodes being configured to provide the other node with a missing audio datagram when necessary, or the relay arrangement can involve each of the nodes being configured to provide the other of the nodes with a missing audio datagram when necessary. Further, this relaying can work in a scenario where the left and right audio channels are broadcast in separate BISs, and also in a scenario where the left and right audio channels are aggregated together and broadcast in a single BIS. Other variations are possible as well.
[0021] Accordingly, in one respect, disclosed is a method for processing of multi-channel audio broadcast from a multi-channel-audio broadcast source serially as a sequence of
audio-channel-datagram groups each including a respective first-audio-channel datagram and a respective second-audio-channel datagram. The method can apply for a given one of the audio-channel-datagram groups including a given first-audio-channel datagram and a given second-audio-channel datagram.
[0022] In example implementations, the method involves a first-audio-channel device attempting to receive from the multi-channel-audio broadcast source both the given first-audio-channel datagram and the given second-audio-channel datagram. Further, the method involves a second-audio-channel device attempting to receive from the multi-channel-audio broadcast source at least the given second-audio-channel datagram. Still further, the method involves, upon expiration of a predefined time period for multiple automated broadcasts of both the given first-audio-channel datagram and the given second-audio-channel datagram, the first-audio-channel device and second-audio-channel device engaging in a handshake process with each other according to which the first-audio-channel device is configured to determine whether the second-audio-channel device failed to successfully receive from the multi-channel-audio broadcast source the given second-audio-channel datagram and, if so, responsively provide to the second-audio-channel device the given second-audio-channel datagram.
[0023] Further, in another respect, disclosed is a multi-device system configured to process multi-channel audio broadcast from a multi-channel-audio broadcast source serially as a sequence of audio-channel-datagram groups each including a respective first-audio-channel datagram and a respective second-audio-channel datagram, a given one of the audio-channel-datagram groups including a given first-audio-channel datagram and a given second-audio-channel datagram. The multi-device system includes a first-audio-channel device that is configured to receive and play out a first audio channel of the multi-channel audio broadcast from the multi-channel-audio broadcast source and a second-audio-channel device that is configured to receive and play out a second audio channel of the multi-channel audio broadcast from the multi-channel-audio broadcast source.
[0024] In this multi-device system, the first-audio-channel device is configured to attempt to receive from the multi-channel-audio broadcast source both the given first-audio-channel datagram and the given second-audio-channel datagram. Further, the second-audio-channel device is configured to attempt to receive from the multi-channel-audio broadcast source at least the given second-audio-channel datagram.
[0025] In addition, the first-audio-channel device and second-audio-channel device are configured to engage in a handshake process with each other upon expiration of a predefined time period for multiple automated broadcasts of both the given first-audio-channel datagram and the given second-audio-channel datagram. According to the handshake process, the first-audio-channel device determines whether the second-audio-channel device failed to successfully receive from the multi-channel-audio broadcast source the given second-audio-channel datagram and, if so, responsively provides to the second-audio-channel device the given second-audio-channel datagram.
[0026] Still further, in another respect, disclosed is a first-audio-channel device configured to process multi-channel audio broadcast from a multi-channel-audio broadcast source serially as a sequence of audio-channel-datagram groups each including a respective first-audio-channel datagram and a respective second-audio-channel datagram, a given one of the audio-channel-datagram groups including a given first-audio-channel datagram and a given second-audio-channel datagram.
[0027] In particular, the first-audio-channel device is configured to receive and play out a first audio channel of the multi-channel audio broadcast from the multi-channel-audio broadcast source and is further configured to interwork with a second-audio-channel device that is configured to receive and play out a second audio channel of the multi-channel audio broadcast from the multi-channel-audio broadcast source. And the first-audio-channel device includes (i) a wireless communication interface, (ii) an audio-presentation interface, (iii) a processor, (iv) non-transitory data storage, and (v) program instructions stored in the non-transitory data storage and executable by the processor to cause the first-audio-channel device to carry out operations.
[0028] The operations include attempting to receive from the multi-channel-audio broadcast source both the given first-audio-channel datagram and the given second-audio-channel datagram, while the second-audio-channel device attempts to receive from the multi-channel-audio broadcast source at least the given second-audio-channel datagram. Further, the operations include engaging in a handshake process with the second-audio-channel device upon expiration of a predefined time period for multiple automated broadcasts of both the given first-audio-channel datagram and the given second-audio-channel datagram. According to the handshake process, the first-audio-channel device determines whether the second-audio-channel device failed to successfully receive from the multi-channel-audio broadcast source
the given second-audio-channel datagram and, if so, responsively provides to the second-audio-channel device the given second-audio-channel datagram.
[0029] These as well as other aspects, advantages, and alternatives will become apparent to those of ordinary skill in the art by reading the following detailed description, with reference where appropriate to the accompanying drawings. Further, it should be understood that the descriptions provided in this summary and below are intended to illustrate the invention by way of example only and not by way of limitation.
BRIEF DESCRIPTION OF THE DRAWINGS
[0030] Figure 1 is a simplified illustration of an example scenario in which various features can be implemented.
[0031] Figure 2 is a simplified illustration of how broadcast-receiver devices can have a communication link with each other.
[0032] Figure 3 is a simplified illustration of broadcast isochronous group timing.
[0033] Figure 4 is a message flow diagram illustrating example operations.
[0034] Figure 5 is a flow chart illustrating an example method.
[0035] Figure 6 is a simplified block diagram of an example broadcast receiver device.
[0036] Figure 7 is a simplified block diagram of an example broadcast source device.
DETAILED DESCRIPTION
[0037] The present disclosure will discuss example implementations in the context of a SRC being a smartphone and a multi-node SNK being a pair of earbuds, and further using BLE with BAP as an example wireless audio communication protocol. It will be understood, however, that various principles disclosed can apply in any of a variety of other contexts, such as where the SRC is another type of audio source device, where the SNK is another type of device, and/or where the SNK has more than two nodes. Further, the disclosed principles can apply as well with respect to other wireless audio communication protocols, not limited to BLE with BAP.
[0038] More generally, it will be understood that the disclosed arrangements and processes are set forth for purposes of example only and may take various other forms. For instance, elements and operations can be re-ordered, distributed, replicated, combined, omitted,
added, or otherwise modified. In addition, it will be understood that functions described herein as being carried out by one or more components can be implemented by and/or on behalf of those components, through hardware, firmware, and/or software, such as by one or more processing units executing program instructions or the like.
[0039] Referring to the drawings, as noted above, Figure 1 is a simplified illustration of an example scenario in which features of the present disclosure can be implemented. Namely, Figure 1 illustrates two example pairs of earbuds 100, 102, with each pair of earbuds functioning as a respective multi-node broadcast SNK to receive and play out a stereo audio broadcast 104 from a smartphone 106 as an example SRC. As shown, the audio broadcast 104 from the smartphone 106 includes a left audio channel 108 and a right audio channel 110.
[0040] Each of the illustrated pairs of earbuds 100, 102 may be configured to be worn by a respective person and to receive from the smartphone 106 and play out the left and right audio channels 108, 110 of the stereo audio broadcast 104 for listening by the person. Thus, earbuds 100 include a left earbud 112 that may be configured to receive from the smartphone 106 and play out the left channel 108 of the audio broadcast 104 and a right earbud 114 that may be configured to receive from the smartphone 106 and play out the right channel 110 of the audio broadcast 104. Likewise, earbuds 102 include a left earbud 116 that may be configured to receive from the smartphone and play out the left channel of the audio broadcast 104 and a right earbud 118 that may be configured to receive from the smartphone 106 and play out the right channel 110 of the audio broadcast 104.
[0041] Smartphone 106 can be configured to provide the stereo audio broadcast 104 in accordance with an example audio broadcast protocol such as BLE with BAP, for receipt and play out by interested SNKs in wireless range of the smartphone 106, such as by earbuds 100 and earbuds 102.
[0042] As noted above, broadcasting of audio according to an example protocol such as BLE with BAP makes use of one or more broadcast isochronous streams (BISs) that carry broadcast audio transmission from the SRC for receipt and playout by any applicable SNKs.
[0043] Each BIS defines a physical layer timing structure for carrying the audio data transmissions spaced by a constant time interval, or isochronous interval. In particular, the SRC divides the audio stream into a sequence of audio packets as noted above and, in each successive isochronous interval, transmits a next one of the audio packets.
[0044] Since example broadcasting of audio does not have an associated control connection between the SRC and the one or more recipient SNKs and therefore does not
support an acknowledgement and retransmission scheme, as noted above, each BIS can be further configured to include one or more automatic retransmissions of each audio packet. Namely, each BIS isochronous interval can be divided over time into sub-intervals, and, for an audio packet to be transmitted in a given isochronous interval, the SRC can be configured to transmit the audio packet repeatedly, once per sub-interval. Further, the BIS can define particular time spacing between sub-intervals. And the BIS can define the duration of each sub-interval and the number of sub-intervals per interval, thus establishing a maximum amount of time that it can take for the SRC to engage in the multiple transmissions of the audio packet of that interval.
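The maximum-time bound just described reduces to simple arithmetic. As an illustrative sketch (the figures below are hypothetical, not values from any specification), assuming one transmission per sub-interval with fixed spacing:

```python
def max_transmission_time_us(num_subevents: int, sub_interval_us: int) -> int:
    """Upper bound on the time the SRC can spend on the repeated
    transmissions of one audio packet within an isochronous interval,
    assuming one transmission per sub-interval with the given spacing."""
    return num_subevents * sub_interval_us

# e.g., 3 sub-intervals spaced 2,500 microseconds apart (illustrative numbers)
bound_us = max_transmission_time_us(3, 2500)
```

Here the bound works out to 7,500 microseconds, after which a SNK knows no further copies of that interval's packet will arrive.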
[0045] BLE also defines a broadcast isochronous group (BIG) construct that can be made up of one or more BISs to be broadcast from a SRC, which can support broadcast of stereo or other multi-channel audio. With a BIG that includes multiple BISs, each isochronous interval can contain separate sub-intervals for respective BISs. For instance, there can be multiple sub-intervals to carry respectively multiple transmissions of an audio packet of one audio channel, followed by multiple sub-intervals to carry respectively multiple transmissions of an audio packet of another audio channel. Or the sub-intervals of respective audio channels can be interleaved with each other over time within the isochronous interval. The BIG may also define spacing between these sub-intervals, at least in part to enable a receiving SNK to switch between BISs. A BIG may thus likewise have some maximum amount of time that it can take for the SRC to engage in the multiple transmissions of a stereo pair of audio packets, or of an audio packet of a given audio channel.
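The two sub-interval orderings described for a multi-BIS BIG (all of one BIS's repetitions first, versus interleaved across BISs) can be sketched as follows; this is an illustrative model of ordering only, not actual BIG scheduling logic:

```python
def schedule(num_bis: int, subevents_per_bis: int, interleaved: bool):
    """Order of (bis, subevent) transmissions within one isochronous
    interval: sequential groups all of one BIS's repetitions together,
    while interleaved alternates between BISs over time."""
    if interleaved:
        return [(bis, sub) for sub in range(subevents_per_bis)
                           for bis in range(num_bis)]
    return [(bis, sub) for bis in range(num_bis)
                       for sub in range(subevents_per_bis)]
```

For two BISs with two repetitions each, the sequential ordering is BIS 0, BIS 0, BIS 1, BIS 1, whereas the interleaved ordering alternates BIS 0, BIS 1, BIS 0, BIS 1.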
[0046] Further, BLE supports broadcasting of stereo audio with at least either of two defined audio configurations: (i) audio configuration 13 (AC13), according to which the SRC transmits the left and right audio channels in separate but time-division multiplexed BISs or (ii) audio configuration 14 (AC14), according to which the SRC transmits the left and right audio channels aggregated together in a single BIS.
[0047] With AC13, for instance, the SRC may transmit the left channel of audio as a sequence of left-channel packets each containing a left-channel audio datagram and may transmit the right channel of audio as a sequence of right-channel packets each containing a respective right-channel audio datagram. Whereas, with AC14, the SRC may transmit a single sequence of packets each containing both a left-channel audio datagram and a right-channel audio datagram. (Generally, AC14 may be more efficient, as it can reduce the level of overhead
by aggregating left-channel and right-channel audio data together with a shared header rather than requiring separate headers for the left audio data and the right audio data.)
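The header-overhead saving of the single-BIS layout can be made concrete with a back-of-the-envelope comparison. The datagram and header sizes below are hypothetical placeholders chosen purely for illustration:

```python
def air_bytes(datagram_len: int, header_len: int, separate_bis: bool) -> int:
    """Bytes on air per stereo datagram pair: two headers when the
    channels ride in separate BISs (AC13-style), versus one shared
    header when they are concatenated in a single BIS (AC14-style)."""
    if separate_bis:
        return 2 * (header_len + datagram_len)
    return header_len + 2 * datagram_len

# Hypothetical 100-byte datagrams with a 2-byte header: the single-BIS
# layout saves exactly one header per stereo pair.
saving = air_bytes(100, 2, separate_bis=True) - air_bytes(100, 2, separate_bis=False)
```

The saving equals one header length per interval, which accumulates over a long audio stream.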
[0048] In addition, to facilitate synchronized playback of audio by multiple nodes of a multi-node SNK, such as playback of stereo audio by a pair of earbuds, the nodes can have a defined synchronization point per isochronous interval, which is a common point in time by which every node of the SNK would have had an opportunity to receive the audio packet destined to it in that interval. Further, the SRC can configure a presentation delay, as a time delay that all of the nodes of the SNK should wait after the synchronization point before playing out their respectively received audio packets, allowing enough time for each node to decode its received audio PDU so that the nodes can play out their associated audio at the same time as each other.
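The shared playout instant follows directly from the two quantities above. A minimal sketch (with illustrative microsecond values, not figures from any specification):

```python
def playout_time_us(sync_point_us: int, presentation_delay_us: int) -> int:
    """Common playout instant for every node of the SNK: the interval's
    synchronization point plus the SRC-configured presentation delay,
    which leaves each node time to decode its PDU before playout."""
    return sync_point_us + presentation_delay_us

# Both nodes compute the same instant from the same two inputs,
# so left and right playout stay aligned.
left_playout = playout_time_us(10_000, 40_000)
right_playout = playout_time_us(10_000, 40_000)
```

Because each node derives the instant from the shared synchronization point rather than from its own reception time, differing reception times across nodes do not skew playout.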
[0049] BLE also defines a process for a SNK to discover the presence of a broadcast stream from a SRC and to determine how to receive and decode that broadcast stream. In particular, the SRC can broadcast a hierarchical set of advertising messages that would ultimately carry BIG configuration information for a given audio broadcast from the SRC, and each applicable SNK can scan for and detect this advertising in order to learn of the presence of an audio broadcast of interest and to then receive and play out that audio broadcast.
[0050] More specifically, according to BLE, the SRC can periodically broadcast extended advertising (ADV_EXT) messages, auxiliary advertising (AUX_ADV_IND) messages, and periodic advertising (PA) messages. The ADV_EXT messages can contain a pointer pointing to and thus allowing a SNK to find the AUX_ADV_IND messages. And the AUX_ADV_IND messages can include a universally unique identifier (UUID) of a broadcast audio announcement service and a pointer pointing to and thus allowing the SNK to find the PA messages. The PA messages (which the SRC may broadcast on the order of every 100 or 200 milliseconds (ms)) may then carry additional controller advertising information (ACAD) that includes BIG information (BIGInfo), which defines the BIG structure including one or more BISs, indicating the interval structure of the BIS, defining an applicable audio configuration (e.g., AC13 or AC14), and pointing to and thus allowing the SNK to find a next periodic BIS interval per BIS. The ACAD also includes broadcast audio stream endpoint (BASE) information defining details about each BIS in the BIG, such as presentation delay, codec configuration, and metadata regarding the content in the audio stream.
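As a rough model of this pointer-following discovery chain (the dictionary keys and the "broadcast-audio" label are illustrative stand-ins, not actual BLE PDU field names):

```python
def discover_big(adv_ext: dict) -> dict:
    """Follow the advertising chain ADV_EXT -> AUX_ADV_IND -> PA and
    pull out the BIG parameters a SNK needs to join the broadcast."""
    aux_adv_ind = adv_ext["aux_ptr"]                 # ADV_EXT points to AUX_ADV_IND
    assert aux_adv_ind["uuid"] == "broadcast-audio"  # announcement-service UUID check
    pa = aux_adv_ind["sync_info"]                    # AUX_ADV_IND points to PA
    acad = pa["acad"]                                # PA carries the ACAD
    return {"big_info": acad["big_info"], "base": acad["base"]}
```

A SNK modeled this way scans until it sees an ADV_EXT message, then dereferences two pointers to reach the BIGInfo and BASE data it needs to receive, decode, and play out the stream.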
[0051] As noted above, a multi-node SNK according to the present disclosure will be configured to apply an intra-SNK relay function, to help increase the likelihood of success
overall of the SNK properly playing out a multi-channel audio broadcast from a SRC. In particular, according to the relay function, in a scenario where one node of the SNK is configured to receive and play out one audio channel of the audio broadcast and another node of the SNK is configured to receive and play out another audio channel of the audio broadcast, if one of the nodes does not successfully receive from the SRC a given audio packet of its audio channel, another node that successfully received that audio packet from the SRC can forward the audio packet to the node that failed to receive it, in order to facilitate timely playout of the audio packet.
[0052] This arrangement will now be described in the context of Figure 1 for example, with respect to earbuds 100 receiving the stereo audio broadcast 104 from the smartphone 106. In this scenario, as noted above, the left earbud 112 may be configured to receive from the smartphone 106 and play out the left audio channel 108 of the audio broadcast 104, and the right earbud 114 may be configured to receive from the smartphone 106 and play out the right audio channel 110 of the audio broadcast 104.
[0053] To facilitate application of the intra-SNK relay function in relation to this stereo audio broadcast, left and right earbuds 112, 114 may have an established connection with each other through which they can exchange control signaling and data. Figure 2 illustrates this connection as a wireless connection 120. This wireless connection 120 between the earbuds 112, 114 may be a Bluetooth asynchronous connection-oriented logical transport (ACL) link or another form of wireless connection. Further, the earbuds may be pre-configured with parameters defining this connection, or the earbuds may engage in control signaling with each other to dynamically create the connection. If the connection is an ACL link, for instance, one of the earbuds may broadcast an inquiry message requesting to connect, the other earbud may discover the inquiry message and send an inquiry response, and the earbuds may then engage in further signaling with each other to establish the ACL link.
[0054] In order for earbuds 100 to start receiving the audio broadcast 104 from the smartphone 106, the earbuds 100 may first discover presence of the audio broadcast 104 by discovering associated advertising messaging broadcast from the smartphone 106. (In some implementations, the earbuds 100 may interact with a broadcast assistant device that can help the earbuds 100 to discover this advertising messaging.) For instance, the earbuds 100 can scan for and detect the hierarchical set of advertising messages from the smartphone 106, ultimately reading the BIGInfo and BASE data from the smartphone’s PA messaging in order to determine the operational parameters of the audio broadcast 104.
[0055] In practice, one of the two earbuds may control this process, to discover the presence of the audio broadcast 104, and may inform the other earbud of the audio broadcast 104 so that the two earbuds can then proceed to receive and play out audio of the audio broadcast. For instance, one earbud may discover the advertising messaging and provide the other earbud with a pointer to the PA messaging, and both earbuds may then read the PA messaging to learn the operational parameters of the audio broadcast 104.
[0056] Once the earbuds 100 have discovered the audio broadcast 104, they can then start to receive and play out their respective audio channels of the audio broadcast 104 in accordance with the determined operational parameters of the audio broadcast 104. In particular, the left earbud 112 can receive and play out the left audio channel 108 of the broadcast and the right earbud 114 can receive and play out the right audio channel 110 of the broadcast.
[0057] In accordance with the BIG configuration, the audio broadcast 104 may provide each successive pair of audio-channel datagrams, i.e., left-channel datagram plus right-channel datagram, in a respective isochronous interval, with specifics depending on various factors, such as whether the broadcast provides the audio channels in separate respective BISs or rather together in a single BIS as noted above. For instance, in each isochronous interval, the audio broadcast 104 may provide a left-channel packet containing a left-channel datagram and a right-channel packet containing a right-channel datagram, or the audio broadcast may provide a single packet containing a combination of left-channel datagram and right-channel datagram.
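The two packaging options just described can be sketched as follows. The function name and byte-string payload model are illustrative assumptions, not actual BIS framing:

```python
# Hedged sketch of one isochronous interval's payload organization in the two
# layouts described above (separate BISs per channel vs. one combined BIS).

def interval_payloads(left_datagram, right_datagram, single_bis):
    """Return the list of packet payloads broadcast in one isochronous interval."""
    if single_bis:
        # One packet carrying both channel datagrams together.
        return [left_datagram + right_datagram]
    # One packet per channel, carried in that channel's own BIS.
    return [left_datagram, right_datagram]
```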
[0058] Regardless of whether the channels are provided in a single BIS or rather in separate respective BISs, though, the audio broadcast may be configured to include multiple transmissions of each pair of audio-channel datagrams per isochronous interval and, as noted above, the BIG may define a total duration through completion of this transmission per isochronous interval.
[0059] Figure 3 illustrates this BIG timing by way of example. As shown in Figure 3, an example BIG configuration divides time into a series of isochronous intervals 300 and divides each isochronous interval 300 into a number of sub-intervals 302 (with possible inter-sub-interval spacing, not shown). In accordance with the BIG configuration, each of multiple sub-intervals would carry a respective instance of an audio packet, which may be a left-audio-channel packet, a right-audio-channel packet, or a combined left+right audio packet, among other possibilities. Given specifically defined timing of these sub-intervals and given that the
BIG configuration may establish multiple transmissions of each audio packet per isochronous interval, the BIG configuration would thus define a time period P by which the transmission of audio (e.g., both channels of audio, or each channel of audio) would be finished per isochronous interval, possibly measured from the start of the isochronous interval, among other possibilities.
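As a hedged illustration, under the simplifying assumption of equally spaced sub-intervals with the last transmission completing within its own sub-interval (actual BIG timing involves more parameters), the time period P measured from the start of the isochronous interval could be derived as:

```python
# Illustrative derivation of the period P discussed above: the offset from the
# start of the isochronous interval by which the last scheduled transmission of
# the audio completes. All parameters and units here are assumptions.

def transmission_end_offset(num_subevents, sub_interval_us, packet_air_time_us):
    """Offset P in microseconds, assuming num_subevents equally spaced
    sub-intervals, each starting one sub_interval_us after the previous."""
    # The last sub-interval begins at (num_subevents - 1) * sub_interval_us,
    # and its packet occupies packet_air_time_us on the air.
    return (num_subevents - 1) * sub_interval_us + packet_air_time_us
```

For example, with four sub-intervals spaced 2500 microseconds apart and an 800-microsecond packet, P would be 8300 microseconds into the interval under these assumptions.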
[0060] As the smartphone 106 provides the stereo audio broadcast 104 with a defined BIG configuration, the left and right earbuds 112, 114 may thus each receive, decode, and play out their respective audio channel from the audio broadcast 104, possibly using the automatic retransmissions to help ensure successful receipt.
[0061] For instance, per the BIG configuration, for each isochronous interval, the left earbud 112 may read the first instance of an audio packet that includes left-channel audio and may engage in a CRC analysis to determine if the left earbud 112 successfully received that audio packet. If the left earbud 112 thereby concludes that it did not successfully receive that instance of the audio packet, the left earbud 112 may then repeat this process for the next instance of the audio packet in the isochronous interval, and so forth for as many transmissions of the audio packet as the BIG configuration specifies per isochronous interval. Whereas, if and when the left earbud 112 concludes from its CRC analysis that it successfully received an instance of the audio packet that includes left-channel audio, then the left earbud 112 may decode the data in that packet and play out the left-channel audio of that audio packet. The left earbud 112 may then proceed to the next isochronous interval and repeat this process.
[0062] Likewise, for each isochronous interval, the right earbud 114 may read the first instance of an audio packet that includes right-channel audio and may engage in a CRC analysis to determine if the right earbud 114 successfully received that audio packet. If the right earbud 114 thereby concludes that it did not successfully receive that instance of the audio packet, the right earbud 114 may similarly repeat this process for the next instance of the audio packet in the isochronous interval, and so forth for as many transmissions of the audio packet as the BIG configuration specifies per isochronous interval. Whereas, if and when the right earbud 114 concludes from its CRC analysis that it successfully received an instance of the audio packet that includes right-channel audio, then the right earbud 114 may decode the data in that packet and play out the right-channel audio of that audio packet. The right earbud 114 may then proceed to the next isochronous interval and repeat this process.
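The per-interval receive-and-CRC-check loop that each earbud applies, per the two preceding paragraphs, can be sketched as follows. The `read_subevent` and `crc_ok` callables are hypothetical stand-ins for radio and controller functions, not actual APIs:

```python
# Hedged sketch of the retry loop described above: read successive instances
# of the channel's audio packet within one isochronous interval until one
# passes CRC, or give up after the configured number of transmissions.

def receive_channel_datagram(read_subevent, crc_ok, num_transmissions):
    """Return the first packet instance that passes CRC in this isochronous
    interval, or None if every configured transmission failed."""
    for attempt in range(num_transmissions):
        packet = read_subevent(attempt)     # raw packet from the attempt-th sub-interval
        if packet is not None and crc_ok(packet):
            return packet                   # success: this instance can be decoded and played
    return None                             # all instances failed; the relay function may fill in
```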
[0063] As the earbuds engage in this process, there may be isochronous intervals when one earbud does not successfully receive the audio of its audio channel by the expiration of the time period (e.g., P) allowed for the multiple transmissions of the audio in that
isochronous interval. In some cases, that can result in the earbud failing to play out the audio of that interval, which can create user-experience issues. The present relay arrangement may help to address that issue, by enabling the other earbud to provide the missing audio to the earbud that failed to receive the audio.
[0064] As noted above, the relaying can be either unidirectional or bidirectional, and can work with either a multi-BIS configuration or single-BIS configuration.
[0065] For unidirectional relaying of stereo broadcast audio, one of the two earbuds 112, 114, operating as a primary earbud, is configured to receive both the left and right audio channels 108, 110 broadcast from the smartphone 106, and the other earbud, operating as a secondary earbud, is configured to receive at least its own audio channel broadcast from the smartphone 106.
[0066] Further, the earbuds 112, 114 can be configured to work with each other to designate as the primary earbud the earbud that has the better receive quality of the two earbuds, and to switch this designation as receive quality changes over time. For instance, each earbud may monitor the quality of its wireless connection, such as signal-to-noise ratio (SNR), signal-to-interference-plus-noise ratio (SINR), or packet error rate (PER), and the earbuds can engage in signaling over their established wireless connection 120 to agree on which earbud will be the primary earbud. The earbuds may also change this designation from time to time as their wireless conditions change.
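A minimal sketch of the primary-earbud designation based on monitored link quality, here using packet error rate alone as the metric (the choice of PER and the tie-breaking rule are arbitrary assumptions for illustration):

```python
# Hedged sketch: designate the earbud with the better receive quality as
# primary, per the paragraph above. Lower packet error rate = better quality.

def choose_primary(left_per, right_per):
    """Return 'left' or 'right' per packet error rate; ties default to 'left'
    (an arbitrary assumption, not mandated by the disclosure)."""
    return "left" if left_per <= right_per else "right"
```

In practice the earbuds would re-run a comparison like this periodically and signal any change of designation over their wireless connection 120.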
[0067] In example operation, the left earbud 112 can be the primary earbud. With this arrangement, for each given isochronous interval carrying a pair of left-audio-channel datagram and right-audio-channel datagram, the left earbud 112 can attempt to receive both of those datagrams. In particular, the left earbud 112 can attempt to successfully receive from the audio broadcast 104 both the left-audio-channel datagram and the right-audio-channel datagram.
[0068] In an example multi-BIS scenario, (i) for the left channel, the left earbud 112 reads the left-channel-BIS sub-intervals and performs CRC analysis as discussed above, until hopefully successfully receiving the left-audio-channel datagram and (ii) for the right channel, the left earbud 112 reads the right-channel-BIS sub-intervals and performs CRC analysis as discussed above, until hopefully successfully receiving the right-audio-channel datagram. Whereas, in an example single-BIS scenario, the left earbud 112 reads each BIS sub-interval and performs CRC analysis as discussed above, until hopefully
successfully receiving both the left-audio-channel datagram and the right-audio-channel datagram.
[0069] Meanwhile, for the same isochronous interval, the right earbud 114 (the example non-primary earbud) can attempt to receive at least its own channel's datagram, i.e., the right-audio-channel datagram. For instance, in an example multi-BIS scenario, the right earbud 114 reads the right-channel-BIS sub-intervals and performs CRC analysis as discussed above, until hopefully successfully receiving the right-audio-channel datagram. Whereas, in an example single-BIS scenario, the right earbud 114 reads each BIS sub-interval and performs CRC analysis as discussed above, until hopefully successfully receiving at least the right-audio-channel datagram.
[0070] In a scenario where the smartphone 106 automatically broadcasts each audio-channel-datagram group multiple times, the present relay function can then allow time for the associated sub-intervals to pass, as the repeat transmissions themselves may suffice to ensure that both earbuds 112, 114 receive their respective audio. Thus, while both earbuds may attempt to receive audio as noted above, they may also wait for that predefined time to pass before they then engage in the present handshake process with each other. For instance, if, according to the BIG configuration, the time period for the multiple automated broadcasts would be time P measured from the start of the isochronous interval, they can wait until that time P has passed before then engaging in the handshake process.
[0071] In alternative embodiments, such as if there are no automated retransmissions of each packet, or if otherwise desired, the earbuds may instead proceed to the handshake process without waiting for expiration of this time period. Further, the time period P can be defined in another manner, such as specifically in relation to the later of the two channels of audio to be received, among other possibilities.
[0072] In example implementations, the handshake process involves the earbuds engaging in signaling with each other over their wireless connection 120 to find out whether the right earbud 114 failed to successfully receive the right-audio-channel datagram. For instance, the left earbud 112 may send to the right earbud 114 a query message asking if the right earbud 114 successfully received the right-audio-channel datagram, and the right earbud 114 can send a response message to the left earbud 112 indicating either that the right earbud 114 successfully received the right-audio-channel datagram or that the right earbud 114 failed to successfully receive the right-audio-channel datagram. Or the right earbud 114 may send this indication to the left earbud 112 without being asked.
[0073] Through this or another such process, the left earbud 112 can thus determine whether the right earbud 114 failed to successfully receive the right-audio-channel datagram. And in response to thereby determining that the right earbud 114 failed to successfully receive the right-audio-channel datagram, the left earbud 112 can then provide that right-audio-channel datagram to the right earbud 114 to enable the right earbud 114 to play out that right-audio-channel datagram. For instance, the left earbud can send over the wireless connection 120 to the right earbud a copy of a packet successfully received by the left earbud 112 that contains the right-audio-channel datagram. This process assumes of course that the left earbud 112 successfully receives the right-audio-channel datagram.
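The unidirectional relay decision described above can be sketched as follows, with the primary earbud's received datagrams modeled as a simple dictionary and the inter-earbud transfer as a callback; both are illustrative assumptions, not an actual inter-earbud protocol:

```python
# Hedged sketch of the unidirectional relay: after the handshake, if the
# secondary earbud failed to receive its own channel's datagram, the primary
# forwards its successfully received copy over the inter-earbud connection.

def unidirectional_relay(primary_rx, secondary_rx, send_to_secondary):
    """primary_rx: dict of datagrams the primary received ('left'/'right');
    secondary_rx: the secondary's own-channel datagram, or None on failure.
    Returns the datagram the secondary ends up with, or None if unrecoverable."""
    if secondary_rx is not None:
        return secondary_rx                 # secondary received it; nothing to relay
    relayed = primary_rx.get("right")       # primary's copy of the secondary's channel
    if relayed is not None:
        send_to_secondary(relayed)          # forward over the inter-earbud link
        return relayed
    return None                             # neither earbud received it this interval
```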
[0074] Figure 4 is a message flow diagram illustrating how this process can work in practice. As shown in Figure 4, at step 400, the left earbud 112 attempts to receive both left and right audio datagrams in a given isochronous interval and the right earbud 114 attempts to receive at least the right audio datagram in that isochronous interval. At step 402, upon expiration of a time period to allow for multiple repeated broadcasts of the left and right audio datagrams, the earbuds 112, 114 engage in the handshake process with each other, according to which, if the right earbud 114 failed to receive the right audio datagram, the left earbud 112 provides the right earbud 114 with that right audio datagram.
[0075] With the benefit of this process, the left and right earbuds 112, 114 can then play out their respective audio-channel datagrams as desired, even though the right earbud 114 had failed to receive the broadcast of the right-audio-channel datagram from the smartphone 106. Further, as this addresses broadcast rather than unicast, one or more other pairs of earbuds, such as earbud 102 for instance, may also carry out similar functionality as they work to receive and play out the same audio broadcast from the smartphone 106.
[0076] Bidirectional relaying can add further functionality to this process. With bidirectional relaying, not only can the left earbud 112 fill in the blank when necessary for the right earbud 114, but the right earbud 114 can also fill in the blank when necessary for the left earbud 112.
[0077] In an example implementation of this process, for each isochronous interval, the left earbud 112 attempts to receive both the left-audio-channel datagram and the right-audio-channel datagram as described above, and the right earbud 114 also attempts to receive both the left-audio-channel datagram and the right-audio-channel datagram in the same manner. Upon expiration of the predefined time period to allow for multiple automated broadcasts of both the left-audio-channel datagram and the right-audio-channel
datagram (if applicable), the earbuds then engage in a bidirectional handshake process with each other.
[0078] In an example of the bidirectional handshake process, the earbuds engage in signaling with each other, with or without querying, to have each earbud determine whether the other earbud successfully received the audio-channel datagram that the other earbud was supposed to receive, and with either earbud being able to provide the other earbud with a missing audio datagram as noted above.
[0079] For instance, through an example of this process, the left earbud 112 operates as discussed above to determine whether the right earbud 114 failed to successfully receive the right-audio-channel datagram and, if so, provides the right earbud 114 with that right-audio-channel datagram (assuming the left earbud 112 successfully received that right-audio-channel datagram). Further, as part of the example handshake process, the right earbud 114 also determines whether the left earbud 112 failed to successfully receive the left-audio-channel datagram and, if so, provides the left earbud 112 with that left-audio-channel datagram (assuming the right earbud 114 successfully received that left-audio-channel datagram).
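A compact sketch of this bidirectional fill-in logic, under an illustrative data model in which each earbud's received datagrams are a dictionary and a missing key models a failed receive (an assumption for exposition, not an actual protocol):

```python
# Hedged sketch of the bidirectional relay outcome: each earbud's own-channel
# datagram is filled in from the peer's copy when its own receive failed.

def bidirectional_relay(left_rx, right_rx):
    """left_rx / right_rx: dicts mapping 'left'/'right' to received datagrams.
    Returns (left_playout, right_playout), each taken from the owning earbud
    when available, else from the peer's successfully received copy."""
    left_playout = left_rx.get("left", right_rx.get("left"))
    right_playout = right_rx.get("right", left_rx.get("right"))
    return left_playout, right_playout
```

Either value can still be None for an interval in which neither earbud received that channel's datagram, in which case playout of that datagram may simply be missed.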
[0080] The present relaying to help facilitate multi-channel broadcast audio can be applied for any of various audio services, including but not limited to voice call communication (e.g., hands-free-profile (HFP) communication) and media playback (e.g., advanced audio distribution profile (A2DP) communication).
[0081] Figure 5 is a flow chart illustrating more specifically an example method that can be carried out in accordance with the present disclosure to help process multi-channel audio broadcast from a multi-channel-audio broadcast source serially as a sequence of audio-channel-datagram groups each including a respective first-audio-channel datagram and a respective second-audio-channel datagram. The method can be carried out, for instance, as to a given one of the audio-channel-datagram groups (e.g., in a given broadcast isochronous interval) that includes a given first-audio-channel datagram and a given second-audio-channel datagram.
[0082] As shown in Figure 5, at block 500, the method includes a first-audio-channel device attempting to receive from the multi-channel-audio broadcast source both the given first-audio-channel datagram and the given second-audio-channel datagram. Further, at block 502, the method includes a second-audio-channel device attempting to receive from the multi-channel-audio broadcast source at least the given second-audio-channel datagram. As shown, these operations occur in parallel.
[0083] At block 504, the method then includes, upon expiration of a predefined time period for multiple automated broadcasts of both the given first-audio-channel datagram and the given second-audio-channel datagram, the first-audio-channel device and second-audio-channel device engaging in a handshake process with each other according to which the first-audio-channel device is configured to determine whether the second-audio-channel device failed to successfully receive from the multi-channel-audio broadcast source the given second-audio-channel datagram and, if so, to responsively provide to the second-audio-channel device the given second-audio-channel datagram.
[0084] In line with the discussion above, the act of the second-audio-channel device attempting to receive from the multi-channel-audio broadcast source at least the given second-audio-channel datagram can involve the second-audio-channel device attempting to receive from the multi-channel-audio broadcast source both the given first-audio-channel datagram and the given second-audio-channel datagram. And in this case, additionally according to the handshake process, the second-audio-channel device is configured to determine whether the first-audio-channel device failed to successfully receive from the multi-channel-audio broadcast source the given first-audio-channel datagram and, if so, to responsively provide to the first-audio-channel device the given first-audio-channel datagram.
[0085] As further discussed above, the first-audio-channel device and the second-audio-channel device can be cooperatively a pair of devices such as a pair of earbuds or a pair of speakers (e.g., room speakers or portable speakers), and as noted above, the broadcast source can be a smartphone.
[0086] As additionally discussed above, the multi-channel audio broadcast from the multi-channel-audio broadcast source can be in a single BIS or can be in multiple BISs, with one BIS per audio channel, among other possibilities. Further, the multi-channel audio can comprise voice-call audio and/or media-playback audio, among other possibilities.
[0087] Figure 6 is a simplified block diagram illustrating components of an example broadcast-receiver device that can be configured to carry out various operations described herein. As to a pair of earbuds, for instance, this block diagram may represent components of a given earbud of the pair. Likewise, as to a pair of speakers, this block diagram may represent components of a given speaker of the pair. Other examples are possible as well.
[0088] As shown in Figure 6, the example device includes a wireless communication interface 600, an audio-presentation interface 602, a processor 604, and non-transitory data storage 606. These components can be integrated together and/or communicatively linked
together in various ways. For instance, the components can be linked together through a system bus, network, or other connection mechanism 608. Alternatively, various integrations and other arrangements are possible.
[0089] The wireless communication interface 600 can comprise one or more modules (e.g., one or more chipsets) supporting wireless communication between the device and one or more other devices, such as wireless communication with another device (such as another earbud for instance) and wireless communication with the audio broadcast source.
[0090] As such, the wireless communication interface 600 can comprise one or more modules (e.g., one or more chipsets) supporting wireless communication according to a suitable wireless audio broadcast communication protocol. Without limitation, the communication protocol can be Bluetooth, including BLE with BAP as discussed above, so the wireless communication interface 600 can comprise a chipset configured to support Bluetooth communication and particularly BLE communication with BAP, though other examples are possible as well. As shown, the wireless communication interface 600 can include a radio 610 configured to encode and modulate outgoing data communications for air-interface transmission and to demodulate and decode incoming data communications, as well as an antenna structure 612 supporting air-interface transmission and reception, among other components.
[0091] The audio-presentation interface 602 can comprise one or more modules configured to provide acoustic sound output, such as to play a stream of audio being received from a broadcast source. The audio-presentation interface 602 may comprise or interwork with a digital signal processor that processes a received digital audio stream, a digital-to-analog converter that converts the processed digital audio to analog form, and one or more sound speakers, which may comprise dynamic drivers and/or balanced armatures, among other possibilities.
[0092] The processor 604 can comprise one or more general purpose processors (e.g., one or more microprocessors, etc.) and/or one or more special-purpose processors (e.g., digital signal processors, application-specific integrated circuits, etc.), possibly including processors of the wireless communication interface 600 and the audio-presentation interface 602, among other possibilities. Further, the non-transitory data storage 606 can comprise one or more volatile and/or non-volatile storage components (e.g., optical, magnetic, or flash storage, RAM, ROM, EPROM, EEPROM, cache memory, and/or other computer-readable media, etc.), possibly integrated in whole or in part with the processor 604. As shown, the non-transitory data storage 606 may then store program instructions 614, which may be executable by the processor 604 to carry out various operations described herein.
[0093] As noted above, the present disclosure also contemplates a multi-device system, which may include a group of multiple such devices, such as a pair of earbuds, speakers, or the like.
[0094] Figure 7 is next a simplified block diagram of an example multi-channel-audio broadcast-source device, such as but not limited to a smartphone.
[0095] As shown in Figure 7, the example device includes a wireless communication interface 700, a processor 702 and non-transitory data storage 704. These components can be integrated together and/or communicatively linked together in various ways. For instance, the components can be linked together through a system bus, network, or other connection mechanism 706. Alternatively, various integrations and other arrangements are possible.
[0096] The wireless communication interface 700 can support wireless communication between the device and one or more other devices, such as to support providing an audio broadcast from the device for receipt by one or more SNKs, and to support control signaling with each of one or more ASTs serving respective SNKs.
[0097] As such, the wireless communication interface 700 can comprise one or more modules (e.g., one or more chipsets) supporting wireless communication according to a suitable wireless audio broadcast communication protocol. Without limitation, the communication protocol can be Bluetooth, including BLE with BAP as discussed above, so the wireless communication interface 700 can comprise a chipset configured to support Bluetooth communication and particularly BLE communication with BAP, though other examples are possible as well. As shown, the wireless communication interface 700 can include a radio 708 configured to encode and modulate outgoing data communications for air-interface transmission and to demodulate and decode incoming data communications, as well as an antenna structure 710 supporting air-interface transmission and reception, among other components.
[0098] The processor 702, which may be a processor of the wireless communication interface 700 and/or a host processor or other processor of the device, among other possibilities, can comprise one or more general purpose processors (e.g., one or more microprocessors, etc.) and/or one or more special-purpose processors (e.g., digital signal processors, application-specific integrated circuits, etc.). Further, the non-transitory data storage 704 can comprise one or more volatile and/or non-volatile storage components (e.g., optical, magnetic, or flash
storage, RAM, ROM, EPROM, EEPROM, cache memory, and/or other computer-readable media, etc.), possibly integrated in whole or in part with the processor 702.
[0099] As shown, the non-transitory data storage 704 may then store program instructions 712, which may be executable by the processor 702 to carry out various operations described herein. In accordance with the examples above, for instance, these program instructions may define SRC logic, so that the processor 702 executing these instructions can cause the device to carry out various audio-source operations described herein.
[00100] In addition, the present disclosure also contemplates a non-transitory computer-readable medium (e.g., optical, magnetic, or flash storage, RAM, ROM, EPROM, EEPROM, etc.) having stored thereon program instructions executable by a processor of a device to cause the device to carry out various operations described herein.
[00101] Example embodiments have been described above. Those skilled in the art will understand, however, that changes and modifications may be made to these embodiments without departing from the true scope and spirit of the invention.
Claims
1. A method for processing of multi-channel audio broadcast from a multi-channel-audio broadcast source serially as a sequence of audio-channel-datagram groups each including a respective first-audio-channel datagram and a respective second-audio-channel datagram, the method comprising, for a given one of the audio-channel-datagram groups including a given first-audio-channel datagram and a given second-audio-channel datagram: attempting to receive from the multi-channel-audio broadcast source, by a first-audio-channel device, both the given first-audio-channel datagram and the given second-audio-channel datagram; attempting to receive from the multi-channel-audio broadcast source, by a second-audio-channel device, at least the given second-audio-channel datagram; and upon expiration of a predefined time period for multiple automated broadcasts of both the given first-audio-channel datagram and the given second-audio-channel datagram, engaging by the first-audio-channel device and second-audio-channel device in a handshake process with each other according to which the first-audio-channel device is configured to determine whether the second-audio-channel device failed to successfully receive from the multi-channel-audio broadcast source the given second-audio-channel datagram and, if so, to responsively provide to the second-audio-channel device the given second-audio-channel datagram.
2. The method of claim 1, wherein attempting to receive from the multi-channel-audio broadcast source, by the second-audio-channel device, at least the given second-audio-channel datagram comprises attempting to receive from the multi-channel-audio broadcast source, by the second-audio-channel device, both the given first-audio-channel datagram and the given second-audio-channel datagram, wherein additionally according to the handshake process, the second-audio-channel device is configured to determine whether the first-audio-channel device failed to successfully receive from the multi-channel-audio broadcast source the given first-audio-channel datagram and, if so, to responsively provide to the first-audio-channel device the given first-audio-channel datagram.
3. The method of claim 1, wherein the first-audio-channel device and the second- audio-channel device are cooperatively a pair of devices selected from the group consisting of earbuds and speakers.
4. The method of claim 1, wherein the multi-channel audio broadcast from the multi-channel-audio broadcast source is in a single broadcast isochronous stream (BIS).
5. The method of claim 1, wherein the multi-channel audio broadcast from the broadcast source is in multiple broadcast isochronous streams (BISs), with one BIS per audio channel.
6. The method of claim 1, wherein the multi-channel audio comprises audio selected from the group consisting of voice-call audio and media-playback audio.
7. A multi-device system configured to process multi-channel audio broadcast from a multi-channel-audio broadcast source serially as a sequence of audio-channel-datagram groups each including a respective first-audio-channel datagram and a respective second-audio-channel datagram, a given one of the audio-channel-datagram groups including a given first-audio-channel datagram and a given second-audio-channel datagram, the multi-device system comprising: a first-audio-channel device configured to receive and play out a first audio channel of the multi-channel audio broadcast from the multi-channel-audio broadcast source; and a second-audio-channel device that is configured to receive and play out a second audio channel of the multi-channel audio broadcast from the multi-channel-audio broadcast source, wherein the first-audio-channel device is configured to attempt to receive from the multi-channel-audio broadcast source both the given first-audio-channel datagram and the given second-audio-channel datagram, wherein the second-audio-channel device is configured to attempt to receive from the multi-channel-audio broadcast source at least the given second-audio-channel datagram, and wherein the first-audio-channel device and second-audio-channel device are configured to engage in a handshake process with each other upon expiration of a predefined
time period for multiple automated broadcasts of both the given first-audio-channel datagram and the given second-audio-channel datagram, wherein the handshake process involves the first-audio-channel device determining whether the second-audio-channel device failed to successfully receive from the multi-channel-audio broadcast source the given second-audio- channel datagram and, if so, responsively providing to the second-audio-channel device the given second-audio-channel datagram.
8. The multi-device system of claim 7, wherein the second-audio-channel device attempting to receive from the multi-channel-audio broadcast source at least the given second-audio-channel datagram comprises the second-audio-channel device attempting to receive from the multi-channel-audio broadcast source both the given first-audio-channel datagram and the given second-audio-channel datagram, wherein the handshake process additionally involves the second-audio-channel device determining whether the first-audio-channel device failed to successfully receive from the multi-channel-audio broadcast source the given first-audio-channel datagram and, if so, responsively providing to the first-audio-channel device the given first-audio-channel datagram.
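Outside the claim language itself, the bidirectional handshake of claims 7 and 8 can be illustrated with a minimal Python sketch. All names here (`ChannelDevice`, `hear_broadcast`, `relay_missing`, the "L"/"R" labels) are illustrative assumptions, not terms from the claims or from any Bluetooth API; the sketch only models the decision logic: after the predefined broadcast-retry window expires, each device checks whether its peer holds the datagram for the peer's channel and, if not, relays its own copy over the device-to-device link.

```python
from dataclasses import dataclass, field

@dataclass
class ChannelDevice:
    """One audio-presentation device (e.g., an earbud); illustrative model only."""
    own_channel: str                                   # channel this device plays out
    received: dict = field(default_factory=dict)       # channel label -> datagram payload

    def hear_broadcast(self, datagrams: dict) -> None:
        """Attempt to receive datagrams of the current group from the broadcast
        source; in practice reception may be partial or fail entirely."""
        self.received.update(datagrams)

    def relay_missing(self, peer: "ChannelDevice", label: str) -> None:
        """Handshake step: if the peer lacks a datagram that this device holds,
        responsively provide it over the device-to-device connection."""
        if label not in peer.received and label in self.received:
            peer.received[label] = self.received[label]

# After the predefined time period for the automated broadcast retries:
left = ChannelDevice("L")
right = ChannelDevice("R")
left.hear_broadcast({"L": b"left-pcm", "R": b"right-pcm"})  # left heard both datagrams
right.hear_broadcast({})                                    # right missed the whole group

left.relay_missing(right, "R")   # claim 7 direction: relay the peer's channel
right.relay_missing(left, "L")   # claim 8 direction: reciprocal relay (no-op here)
```

In this toy run the right device ends up with its channel's datagram despite missing every broadcast attempt, which is the recovery behavior the handshake is claimed to provide.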
9. The multi-device system of claim 7, wherein the first-audio-channel device and the second-audio-channel device are cooperatively a pair of devices selected from the group consisting of earbuds and speakers.
10. The multi-device system of claim 7, wherein the multi-channel audio broadcast from the multi-channel-audio broadcast source is in a single broadcast isochronous stream (BIS).
11. The multi-device system of claim 7, wherein the multi-channel audio broadcast from the multi-channel-audio broadcast source is in multiple broadcast isochronous streams (BISs), with one BIS per audio channel.
12. The multi-device system of claim 7, wherein the multi-channel audio comprises audio selected from the group consisting of voice-call audio and media-playback audio.
13. A first-audio-channel device configured to process multi-channel audio broadcast from a multi-channel-audio broadcast source serially as a sequence of audio-channel-datagram groups each including a respective first-audio-channel datagram and a respective second-audio-channel datagram, a given one of the audio-channel-datagram groups including a given first-audio-channel datagram and a given second-audio-channel datagram, the first-audio-channel device being configured to receive and play out a first audio channel of the multi-channel audio broadcast from the multi-channel-audio broadcast source, and the first-audio-channel device being configured to interwork with a second-audio-channel device that is configured to receive and play out a second audio channel of the multi-channel audio broadcast from the multi-channel-audio broadcast source, the first-audio-channel device comprising (i) a wireless communication interface, (ii) an audio-presentation interface, (iii) a processor, (iv) non-transitory data storage, and (v) program instructions stored in the non-transitory data storage and executable by the processor to cause the first-audio-channel device to carry out operations including: attempting to receive from the multi-channel-audio broadcast source both the given first-audio-channel datagram and the given second-audio-channel datagram, while the second-audio-channel device attempts to receive from the multi-channel-audio broadcast source at least the given second-audio-channel datagram, and engaging in a handshake process with the second-audio-channel device upon expiration of a predefined time period for multiple automated broadcasts of both the given first-audio-channel datagram and the given second-audio-channel datagram, wherein, according to the handshake process, the first-audio-channel device determines whether the second-audio-channel device failed to successfully receive from the multi-channel-audio broadcast source the given second-audio-channel datagram and, if so, responsively provides to the second-audio-channel device the given second-audio-channel datagram.
14. The first-audio-channel device of claim 13, wherein the handshake process additionally involves, if the first-audio-channel device did not successfully receive the given first-audio-channel datagram from the multi-channel-audio broadcast source, the first-audio-channel device receiving from the second-audio-channel device the given first-audio-channel datagram.
15. The first-audio-channel device of claim 13, wherein the first-audio-channel device is an earbud or a speaker.
16. The first-audio-channel device of claim 13, wherein the multi-channel audio broadcast from the multi-channel-audio broadcast source is in a single broadcast isochronous stream (BIS).
17. The first-audio-channel device of claim 13, wherein the multi-channel audio broadcast from the multi-channel-audio broadcast source is in multiple broadcast isochronous streams (BISs), with one BIS per audio channel.
18. The first-audio-channel device of claim 13, wherein the multi-channel audio comprises audio selected from the group consisting of voice-call audio and media-playback audio.
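The claims distinguish two carriage options for the broadcast: a single broadcast isochronous stream carrying both channels' datagrams (claims 4, 10, 16), or one BIS per audio channel (claims 5, 11, 17). As a rough illustration, assuming hypothetical stream labels ("BIS0", "BIS_L", "BIS_R") and "L"/"R" channel names that do not appear in the claims, the two layouts can be sketched as schedules of (stream, channel, group-index) tuples:

```python
def single_bis_schedule(num_groups):
    """Single-BIS option: one stream carries both channels' datagrams,
    audio-channel-datagram group by group."""
    return [("BIS0", ch, n) for n in range(num_groups) for ch in ("L", "R")]

def per_channel_bis_schedule(num_groups):
    """Per-channel option: one BIS per audio channel, each stream carrying
    only its own channel's datagrams."""
    return ([("BIS_L", "L", n) for n in range(num_groups)] +
            [("BIS_R", "R", n) for n in range(num_groups)])
```

Under either layout, a receiving device still faces the same per-group question the handshake addresses: whether both datagrams of a given group arrived before the retry window closed.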
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463571188P | 2024-03-28 | 2024-03-28 | |
| US63/571,188 | 2024-03-28 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025207634A1 (en) | 2025-10-02 |
Family
ID=95559021
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2025/021345 (WO2025207634A1, pending) | Relay operation for multi-channel audio broadcast | 2024-03-28 | 2025-03-25 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025207634A1 (en) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20210288764A1 (en) * | 2020-03-10 | 2021-09-16 | Qualcomm Incorporated | Broadcast relay piconet for low energy audio |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US8159990B2 (en) | Wireless audio data distribution using broadcast and bidirectional communication channels | |
| US20250007973A1 (en) | Adaptive audio processing method, device, computer program, and recording medium thereof in wireless communication system | |
| CN114760616B (en) | Wireless communication method and wireless audio playing assembly | |
| US9973839B2 (en) | Method and apparatus for communicating audio data | |
| US20250016526A1 (en) | Method, apparatus and computer program for broadcast discovery service in wireless communication system, and recording medium therefor | |
| US20220321368A1 (en) | Method, device, computer program, and recording medium for audio processing in wireless communication system | |
| US11907613B2 (en) | Method, device, and computer program for audio routing in wireless communication system, and recording medium therefor | |
| JP2021153304A (en) | New method of wireless transmission of digital audio | |
| US20220229628A1 (en) | Method, device and computer program for controlling audio data in wireless communication system, and recording medium therefor | |
| CN115669051A (en) | Method, device and computer program for channel selection in wireless communication system and recording medium thereof | |
| US12464320B2 (en) | Wireless stereo headset group communications | |
| JP2003534712A (en) | Radio system and station for broadcast communication, and broadcast method | |
| WO2019243078A1 (en) | Infrastructure equipment, communications device and methods | |
| JP2017076956A (en) | Method for exchanging data packages of different sizes between first portable communication device and second portable communication device | |
| KR20150130894A (en) | Method and apparatus for communicating audio data | |
| WO2025207634A1 (en) | Relay operation for multi-channel audio broadcast | |
| CN114979900B (en) | Wireless headset and audio sharing method | |
| CN118473439A (en) | Wireless audio data transmission method, expandable controller and receiving device | |
| CN102415018B (en) | Method for interrupting voice transmissions within a multi site communication system | |
| WO2025207609A1 (en) | Controlling audio broadcast configuration based on sink capability | |
| US12483357B2 (en) | Retry mechanism for low energy communications | |
| WO2025207611A1 (en) | Dynamic configuration of audio broadcast channel map based on sink monitoring of channel quality | |
| US20250310975A1 (en) | Wireless Audio Data Transmission Method, System, and Device | |
| JP2020167590A (en) | Communication devices, communication systems, communication methods, and programs | |
| TWI710225B (en) | Controller of wireless bluetooth device and method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25721985; Country of ref document: EP; Kind code of ref document: A1 |