US20060055771A1 - System and method for optimizing audio and video data transmission in a wireless system - Google Patents
- Publication number: US20060055771A1 (application No. 10/924,687)
- Authority: US (United States)
- Prior art keywords: speaker, video, audio, wireless device, wireless
- Prior art date
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N7/15 — Television systems; systems for two-way working; conference systems
- H04M3/567 — Telephonic communication; automatic or semi-automatic exchanges; conference facilities; multimedia conference systems
- H04L12/1822 — Data switching networks; broadcast or conference services, e.g. multicast, for computer conferences, e.g. chat rooms; conducting the conference, e.g. admission, detection, selection or grouping of participants, correlating users to one or more conference sessions, prioritising transmission
- H04M3/568 — Telephonic communication; conference facilities; audio processing specific to telephonic conferencing, e.g. spatial distribution, mixing of participants
- H04N7/14 — Television systems; systems for two-way working
- H04N7/148 — Systems for two-way working between two video terminals, e.g. videophone; interfacing a video terminal to a particular transmission medium, e.g. ISDN
- H04N7/152 — Conference systems; multipoint control units therefor
- H04L12/189 — Broadcast or conference services, e.g. multicast, in combination with wireless systems
- H04M2207/18 — Type of exchange or network in which the telephonic communication takes place: wireless networks
- H04M3/42187 — Systems providing special services or facilities to subscribers; lines and connections with preferential service
Abstract
A system and method for transmitting video and audio information among communicating wireless devices during a video conference in a wireless communications system. The audio information from all participants is received by a server, which selects a speaker from among the participants. The speaker's audio and video data are transmitted to all participants according to a predefined criterion.
  Description
-  1. Field of the Invention
-  The present invention generally relates to wireless telecommunications, and more specifically, relates to a system and method for optimizing video and audio data transmission during a video/audio conference in a wireless network.
-  2. Description of the Related Art
-  Technology advancement has made mobile telephones and wireless communications devices cheap and affordable to almost everyone. As wireless telephones are manufactured with greater processing ability and storage, they also become more versatile and incorporate many features, including the ability to support real-time video and audio conferencing. A wireless telephone can be equipped with a resident video camera and can send images from the camera for display on other devices on the wireless network. During a video conference, a user may see images of participants and, at the same time, listen to audio from the same participants.
-  During a video conference, the speaker's audio and video data are transmitted from the speaker's wireless device to a server, and then from the server to all participating wireless telephones. The video and audio data from listeners (non-speakers) may also be transmitted from their respective wireless devices to the server and then transmitted to the participants. However, because of bandwidth limitations, the stream of media between all the devices is difficult to maintain, and the resulting quality of video is often poor and the audio is often interrupted.
-  The bandwidth in a wireless communication network is limited by the technology and the environment through which radio signals have to travel. The system and method according to the invention optimize transmission of video and audio information during a video conference in the wireless network. During a video conference, the speaker's video and audio data are received from the speaker and transmitted to all non-speakers (listeners). The speaker's audio and video data are transmitted according to a predefined criterion; for example, the audio data is given a higher priority than the video data. The listeners' audio data are received at the server and used to determine whether to assign a new speaker. In this manner, the available resources are utilized to ensure that the speaker's more critical data is maintained in the conference. The new speaker may also be determined through a priority list, where each member is pre-assigned a priority.
-  In one embodiment, the invention is a method for transmitting audio and video information from a server to a plurality of wireless devices during a video conference through a wireless telecommunication network. The method comprises the steps of receiving at the server a plurality of video data from the plurality of the wireless devices, receiving at the server a plurality of audio data from the plurality of the wireless devices, selecting a speaker from the plurality of the wireless devices, and transmitting the video and the audio data of the speaker to the plurality of the wireless devices except the wireless device of the speaker. Each item of audio and video data is associated with a wireless device, and each item of audio data is also associated with a volume. The audio and video data of the speaker are transmitted according to a predefined criterion.
-  In another embodiment, the invention further includes a method for transmitting and receiving video and audio information at a wireless device during a video conference, wherein the wireless device has an audio device and a display device. The method comprises the steps of, if the wireless device is assigned as a speaker, transmitting video and audio information to a remote server, and if the wireless device is not assigned as the speaker, transmitting audio information to the remote server. The method further includes the steps of receiving the speaker's video and audio information from the remote server, playing the audio information received from the remote server on the audio device, and displaying the video information received from the remote server on the display device.
-  In another embodiment, the system for transmitting and displaying video and audio information during a video conferencing session in a wireless communication network includes a server in communication with the wireless communication network, wherein the server includes video and audio transmission criteria, and a plurality of wireless communication devices capable of communicating with the server through the wireless communication network, wherein each wireless communication device is capable of transmitting the audio and video information to, and receiving it from, the server according to the video and audio data transmission criteria.
-  The system also includes an apparatus for enabling the transmission and playing of video and audio information on a wireless telecommunication device in a wireless communication network. The apparatus includes a transceiver for transmitting and receiving audio and video information from a remote server, a storage unit for storing the audio and video information, a display unit for displaying the video information to a user, a speaker unit for playing the audio information to the user, a user interface unit for receiving the audio information from the user, a push-to-talk interface for receiving a floor request from the user, and a controller for controlling the display unit based on speaker information received from the remote server.
-  The present system and methods are therefore advantageous as they optimize transmission of video and audio information during a video conference in a wireless communications network.
-  Other advantages and features of the present invention will become apparent after review of the hereinafter set forth Brief Description of the Drawings, Detailed Description of the Invention, and the Claims.
-  FIG. 1 is a wireless network architecture that supports video conferencing in a wireless system.
-  FIG. 2 is a block diagram of a wireless device that supports the transmission of alert tone information in a push-to-talk system.
-  FIG. 3 is a diagram representing interactions between a server and remote wireless devices during a video conference.
-  FIG. 4 is an illustration of a wireless device displaying a video of a speaker during a video conference.
-  FIG. 5 is a flow chart for a server process that distributes video and audio information.
-  FIG. 6 is a flow chart for a device process for receiving and transmitting audio and video information.
-  FIGS. 7A and 7B are examples of video/audio transmission criteria.
-  FIG. 8 is a flow chart for a server process according to an alternative embodiment.
-  FIG. 9 is a flow chart for a server process according to yet another alternative embodiment.
-  In this description, the terms “communication device,” “wireless device,” “wireless communications device,” “wireless handset,” “handheld device,” and “handset” are used interchangeably, and the term “application” as used herein is intended to encompass executable and non-executable software files, raw data, aggregated data, patches, and other code segments. Further, like numerals refer to like elements throughout the several views, and the articles “a” and “the” include plural references, unless otherwise specified in the description.
-  FIG. 1 depicts a communication network 100 used according to the present invention. The communication network 100 includes one or more communication towers 106, each connected to a base station (BS) 110 and serving users with communication devices 102. The communication devices 102 can be cellular telephones, pagers, personal digital assistants (PDAs), laptop computers, or other hand-held, stationary, or portable communication devices that support push-to-talk (PTT) communications. The commands and data input by each user are transmitted as digital data to a communication tower 106. The communication between a user using a communication device 102 and the communication tower 106 can be based on different technologies, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), the global system for mobile communications (GSM), or other protocols that may be used in a wireless communications network or a data communications network. The data from each user is sent from the communication tower 106 to a base station (BS) 110, and forwarded to a mobile switching center (MSC) 114, which may be connected to a public switched telephone network (PSTN) 118 and the Internet 120. The MSC 114 may be connected to a server 116 that supports the video conferencing feature in the communications network 100. The server 116 includes an application that supports the video conferencing feature and stores a predefined criterion that assigns different priorities to video and audio data transmission. Optionally, the server 116 may be part of the MSC 114.
-  FIG. 2 illustrates a block diagram 200 of a wireless handset 102. The wireless handset 102 includes a controller 202, a storage unit 204, a display unit 206, an external interface unit 208, a user interface unit 212, a push-to-talk activation unit 209, a transceiver 214, and an antenna 216. The controller 202 can be hardware, software, or a combination thereof. The display unit 206 may display graphical images or other digital information to the user. The external interface unit 208 controls hardware, such as the speaker, microphone, and display unit, used for communication with the user. The user interface unit 212 controls hardware, such as the keypad and the push-to-talk activation unit 209. The push-to-talk activation unit 209 may be used during a video conference to make a floor request, i.e., to request a speaking opportunity while another user is speaking. The transceiver 214 transmits and receives radio signals to and from a communication tower 106. The controller 202 interprets commands and data received from the user and the communication network 100.
-  During a video conference, when a user does not have the floor, i.e., the user is not the current speaker, the wireless device 102 receives the speaker's audio and video information from a remote server, displays the video data on a screen, and plays the audio data on a phone speaker. If the user wants to speak, he may push the push-to-talk button 209 if the wireless device is equipped with one. Alternatively, he may speak in a louder voice, and this increase in volume would be interpreted by the remote server as a request to become the speaker. If the user is not the speaker, his video information is not transmitted to the remote server, thereby saving bandwidth. Generally, the audio information is considered more important than the video information during a video conference; therefore, the wireless device 102 may request retransmission of lost audio packets, but not lost video packets, from the remote server.
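-  The retransmission preference described above can be pictured with a short sketch. The following Python fragment is not from the patent: the MediaPacket structure, the per-stream sequence numbers, and the request_retransmission callback are assumptions made only to show how lost audio, but not lost video, might be re-requested.

```python
from dataclasses import dataclass

@dataclass
class MediaPacket:
    media_type: str   # "audio" or "video"
    seq: int          # per-stream sequence number
    payload: bytes = b""

def handle_incoming(packet: MediaPacket, last_seq: dict, request_retransmission) -> None:
    """Track the last sequence number per stream and re-request only missing audio."""
    prev = last_seq.get(packet.media_type)
    if prev is not None and packet.seq > prev + 1:
        missing = list(range(prev + 1, packet.seq))
        if packet.media_type == "audio":
            # Audio is treated as the more important stream during the conference.
            request_retransmission("audio", missing)
        # Missing video packets are not re-requested, which saves bandwidth.
    last_seq[packet.media_type] = packet.seq

# Example: audio packets 1 and 4 arrive, so packets 2 and 3 are re-requested.
state: dict = {}
handle_incoming(MediaPacket("audio", 1), state, print)
handle_incoming(MediaPacket("audio", 4), state, print)   # prints: audio [2, 3]
```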
-  FIG. 3 is a diagram 300 representing interactions between the server (also known as the group communication server) and user devices during a video conference. During a video conference, a user is assigned as the speaker and that user has the “floor.” The video and audio data from the speaker 302 are transmitted to the server 304, and the server 304 broadcasts the speaker's video and audio data to all non-speakers in the video conference. When broadcasting the video and audio data, the server 304 may assign a higher priority to audio transmission and a lower priority to video transmission. The audio transmission may have a higher bandwidth than the video transmission. This preferred criterion results in better audio quality. The server 304 may also assign, for example when transmitting video and audio data to non-speakers, 60% of the bandwidth to audio data and 40% of the bandwidth to video data. The images from the non-speakers are not transmitted to the server 304, so that bandwidth can be saved. Though the audio data from the non-speakers are not transmitted from the server 304 to every participating user, the non-speakers' audio data are transmitted to the server 304. The non-speakers' audio data may be used to determine the next speaker.
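-  The 60/40 example above amounts to a simple split of the per-listener budget; a minimal sketch follows. The 64 kbps figure in the usage example is an assumption; the patent only states that audio may receive the larger share (e.g. 60%) and video the smaller share (e.g. 40%).

```python
def split_bandwidth(total_kbps: float, audio_share: float = 0.6) -> dict:
    """Divide the budget for the speaker's stream between audio and video."""
    audio_kbps = total_kbps * audio_share
    return {"audio_kbps": audio_kbps, "video_kbps": total_kbps - audio_kbps}

# Example: a 64 kbps channel to a non-speaker would carry roughly 38.4 kbps of audio
# and 25.6 kbps of video under the 60/40 criterion.
print(split_bandwidth(64.0))
```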
-  A new speaker in a video conference may be determined in several ways. One way to select a new speaker is to compare the volume of the audio received from all participants; the participant with the highest audio volume will be assigned as the new speaker. Another way to select a new speaker is to wait for a “floor” request from a user. A user may request the floor by using the PTT button, and if the current speaker is idle for a predefined period, the requesting user will be assigned as the new speaker.
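-  The first selection method, choosing the loudest participant, reduces to a maximum over reported volume levels. The sketch below assumes a normalized volume value per device; the patent does not specify how the volume is measured, and the device names are illustrative.

```python
def select_speaker_by_volume(volume_by_device: dict) -> str:
    """Return the device id whose current audio volume is highest."""
    return max(volume_by_device, key=volume_by_device.get)

# Example: "handset-2" would be assigned the floor.
print(select_speaker_by_volume({"handset-1": 0.21, "handset-2": 0.78, "handset-3": 0.05}))
```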
-  FIG. 4 illustrates a wireless communication device 400 displaying a video image on a display screen 404 and playing an audio message on a speaker 402. A user may request the floor by activating a push-to-talk button 406 or by speaking in a louder voice into a microphone 408.
-  FIG. 5 is a flow chart for a server process 500. During a video conference with many participants, the server 116 receives audio data from all parties, step 502, and compares their volumes, step 504. The participant with the loudest voice will be assigned as the new speaker. The server 116 checks whether the new speaker is the same as the previous speaker, step 506. If there is a new speaker, the identity of the new speaker is stored in the server 116, step 508. The server 116 calculates the video and audio priorities, step 510, for the audio and video data transmission. The video and audio priorities may be the same as the ones set up for the previous speaker or may be a new set of priorities. The server 116 proceeds to “freeze” the video and audio data transmission to the new speaker, step 512. When the server 116 stops transmitting the video information to the speaker's wireless handset 102, the server 116 may send a special command instructing the speaker's wireless handset 102 to “freeze” its last displayed image. Alternatively, the server 116 may transmit a single picture of the speaker himself back to the speaker's wireless handset 102, and this picture will be displayed to the speaker, identifying him as the current speaker. Generally, the speaker need not see his own image nor hear his own voice retransmitted back to him. The server 116 proceeds to send the speaker's video and audio information to all non-speakers, steps 514 and 516. The server 116 continues to monitor the video conference until it ends (not shown).
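-  One round of the FIG. 5 process can be sketched as below. The conference dictionary, the send_freeze and send_media callables, and the device names are illustrative assumptions; the patent describes steps 502-516 but no concrete data structures or API.

```python
def server_round(volumes: dict, conf: dict, send_freeze, send_media) -> None:
    """One pass of the FIG. 5 server loop (helper callables are assumed)."""
    new_speaker = max(volumes, key=volumes.get)                  # steps 502-504
    if new_speaker != conf.get("speaker"):                       # step 506
        conf["speaker"] = new_speaker                            # step 508
        conf["priorities"] = {"audio": "high", "video": "low"}   # step 510
        # Step 512: stop sending conference media back to the new speaker and tell its
        # handset to freeze its display (or send back a single still picture of the speaker).
        send_freeze(new_speaker)
    for device in volumes:                                       # steps 514 and 516
        if device != conf["speaker"]:
            send_media(device, conf["speaker"], conf["priorities"])

# Example with print stubs standing in for the transmission functions.
conference: dict = {}
server_round({"handset-1": 0.2, "handset-2": 0.9}, conference,
             send_freeze=lambda d: print("freeze", d),
             send_media=lambda d, s, p: print("send", s, "->", d, p))
```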
-  FIG. 6 is a flow chart for a device process 600. A wireless device 102 receives a speaker's audio and video information from the server 116 during a video conference, step 602, and plays the audio and video data on the wireless device 102, step 604. Because the wireless device 102 is not the current speaker, it only sends the user's audio data to the server 116, step 606, and does not send any video to the server 116. The user may request the floor during the video conference by raising his voice or by activating a PTT button. The user's audio data and a signal relating to the activation of the PTT button are sent to the server 116, where the decision to assign a new speaker is made. The server 116 sends a signal or message to the wireless device 102 informing it that it is the current speaking device. The wireless device 102 checks for incoming messages to see if it is assigned as the new speaker, step 608. If the wireless device 102 is assigned as the new speaker, it starts to send the user's video to the server 116, step 610, and freezes the video display, step 612. The wireless device 102 continuously checks whether a different new speaker has been assigned, step 614.
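-  The handset side of FIG. 6 can be summarized in a few lines. Everything named below (the play, send_audio, send_video, and freeze_display callables, and the frame placeholders) is assumed for illustration only.

```python
def device_round(is_speaker: bool, incoming, mic_frame, cam_frame,
                 play, send_audio, send_video, freeze_display) -> None:
    """One pass of the FIG. 6 handset behavior (helper callables are assumed)."""
    if not is_speaker and incoming is not None:
        play(incoming)            # steps 602-604: render the current speaker's audio and video
    send_audio(mic_frame)         # step 606: the user's audio is always sent to the server
    if is_speaker:
        send_video(cam_frame)     # step 610: video is uploaded only while holding the floor
        freeze_display()          # step 612: the speaker's own display is frozen

# Example: a non-speaker plays the incoming stream and uploads only audio; a speaker
# also uploads video and freezes its display.
device_round(False, "speaker-stream", "mic", "cam", print, print, print, lambda: print("frozen"))
device_round(True, None, "mic", "cam", print, print, print, lambda: print("frozen"))
```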
-  FIG. 7A is one embodiment of audio and video data transmission criteria. For a wireless device assigned as the speaker, no inbound video or audio data are handled, and its outbound audio data is given a higher priority while its outbound video is given a lower priority. When a wireless device is not assigned as the speaker, its outbound video data is disabled and its outbound audio data is transmitted with low priority; its inbound audio data arrives with a higher priority than its inbound video. Another way of handling audio and video data is to assign them different bandwidths, and FIG. 7B illustrates one example of such audio and video data transmission criteria. A preference may be given to the audio data transmission since more information may be transmitted through audio data during a video conference. Although in FIG. 7B audio data is given 60% of the bandwidth and video data is given 40% of the bandwidth, other distributions are possible. In the same example, when a wireless device is not the current speaker, its outbound video is disabled (0%) and its outbound audio is given a low 10% of the bandwidth.
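-  Read together, FIGS. 7A and 7B amount to two small lookup tables, sketched below in Python. The values quoted in the text (60%, 40%, 0%, 10%) are reproduced directly; reading the 10% figure as the non-speaker's outbound audio share is an inference from context, and the table keys are assumptions.

```python
PRIORITY_CRITERIA = {                     # FIG. 7A style: relative priorities
    "speaker": {
        "inbound": None,                  # the speaker's handset receives no conference media
        "outbound": {"audio": "high", "video": "low"},
    },
    "non_speaker": {
        "inbound": {"audio": "high", "video": "low"},
        "outbound": {"audio": "low", "video": "disabled"},
    },
}

BANDWIDTH_CRITERIA = {                    # FIG. 7B style: share of the available bandwidth
    "speaker_media_to_non_speakers": {"audio": 0.60, "video": 0.40},
    "non_speaker_outbound":          {"audio": 0.10, "video": 0.00},
}
```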
-  FIG. 8 is an alternative server process 800 in which a signal is used to indicate a floor request. The floor request signal may be transmitted from a wireless device after a user pushes a PTT button during a video conference. The server 116 checks whether a floor request is received from any of the wireless devices 102, step 802. If a floor request is received, the server 116 checks whether the current speaker is “idle,” step 806. The current speaker may be idle if there is no audio information coming from the speaker's wireless device for a predefined period, for example two seconds. The server 116 may adjust this idle period. If the current speaker is idle, the server 116 sets the requesting wireless device as the current speaker, step 808, and proceeds to calculate video and audio priorities and send out audio and video information as previously described in FIG. 5.
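-  The idle test of FIG. 8 can be sketched as a single check against a configurable threshold. Timestamps in seconds, the field names, and the two-second default (taken from the example above) are assumptions.

```python
import time
from typing import Optional

def maybe_grant_floor(requester: str, conf: dict, idle_period_s: float = 2.0,
                      now: Optional[float] = None) -> bool:
    """Grant the floor to the requester only if the current speaker is idle."""
    now = time.monotonic() if now is None else now
    idle = now - conf.get("last_audio_from_speaker", now)
    if idle >= idle_period_s:                 # step 806: no recent audio from the speaker
        conf["speaker"] = requester           # step 808: the requester takes the floor
        return True
    return False

# Example: the current speaker last sent audio 3 seconds ago, so the request is granted.
print(maybe_grant_floor("handset-3",
                        {"speaker": "handset-1", "last_audio_from_speaker": 100.0},
                        now=103.0))
```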
-  FIG. 9 is yet another alternative server process 900 in which each wireless device is assigned a priority. The priority may be assigned by the server 116 or by the party who set up the video conference. The host of the video conference may be given the highest priority by default. The server 116 checks whether a floor request is received from any of the wireless devices 102, step 902. If a floor request is received, the server 116 compares the priority of the requesting wireless device against the priority of the current speaker, step 904. If the requesting wireless device has a higher priority, then the server 116 assigns it as the new speaker. If the requesting wireless device has a lower priority, then the server 116 may wait until the speaker is idle before assigning the requesting wireless device as the new speaker. When there is a new speaker, the server 116 sets the requesting wireless device as the current speaker, step 908, and proceeds to calculate video and audio priorities and send out audio and video information as previously described in FIG. 5.
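-  The priority rule of FIG. 9 compares the requester against the current speaker and otherwise falls back to the idle rule. The numeric priority table below (larger means more important) and the participant names are illustrative assumptions matching the scenario described next.

```python
PRIORITY = {"host": 3, "handset-2": 2, "wireline-phone": 1}   # assumed priority list

def handle_floor_request(requester: str, conf: dict, speaker_is_idle: bool) -> bool:
    """Pre-empt the current speaker only for a higher-priority requester; else wait for idle."""
    current = conf["speaker"]
    if PRIORITY.get(requester, 0) > PRIORITY.get(current, 0):   # step 904
        conf["speaker"] = requester                             # step 908: immediate hand-over
        return True
    if speaker_is_idle:                                         # lower priority: wait for idle
        conf["speaker"] = requester
        return True
    return False

# Example: handset-2 has a lower priority than the host, so the request is deferred
# until the host goes idle.
print(handle_floor_request("handset-2", {"speaker": "host"}, speaker_is_idle=False))  # False
print(handle_floor_request("handset-2", {"speaker": "host"}, speaker_is_idle=True))   # True
```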
-  The following is a description of one use scenario according to one embodiment of the invention. When a user wants to have a video conference with two associates, the user may set up the video conference request using his computer. He enters his wireless device information as the host. A second participant may use a second wireless device, and a third participant may use a wireline-based video telephone. The user may assign the highest priority to himself, the next priority to the second participant, and the lowest priority to the wireline-based participant. The user may make these assignments using either his wireless device or his computer prior to the video conference. During the video conference, when the user has the floor, the server sends the user's video and audio data to the second and third participants. The video data is sent with a lower priority than the audio data.
-  While the user has the floor, the second participant presses a PTT button to request the floor so he can add a comment. The wireless device of the second participant sends a request to the server. The server receives the request and checks the second participant's priority. Because the second participant has a lower priority than the current speaker, the server does not interrupt the current speaker. Instead, the server waits until the current speaker is idle and then assigns the second participant as the new speaker. When the second participant becomes the speaker, he may want to share a picture with the other two participants. He may direct his wireless handset to send a picture stored in his wireless handset, instead of his image, to the server. The server will send the picture to the other participants along with the audio data from the second participant. Whenever the second participant wants to share information with the other two participants, he pushes the PTT button and a floor request is sent from his wireless device to the server.
-  In view of the method being executable on a wireless service provider's computer device or a wireless communications device, the method can be performed by a program resident in a computer readable medium, where the program directs a server or other computer device having a computer platform to perform the steps of the method. The computer readable medium can be the memory of the server, or can be in a connective database. Further, the computer readable medium can be in a secondary storage media that is loadable onto a wireless communications device computer platform, such as a magnetic disk or tape, optical disk, hard disk, flash memory, or other storage media as is known in the art.
-  In the context of FIGS. 5-9, the method may be implemented, for example, by operating portion(s) of the wireless network, such as a wireless communications device or the server, to execute a sequence of machine-readable instructions. The instructions can reside in various types of signal-bearing or data storage primary, secondary, or tertiary media. The media may comprise, for example, RAM (not shown) accessible by, or residing within, the components of the wireless network. Whether contained in RAM, a diskette, or other secondary storage media, the instructions may be stored on a variety of machine-readable data storage media, such as DASD storage (e.g., a conventional “hard drive” or a RAID array), magnetic tape, electronic read-only memory (e.g., ROM, EPROM, or EEPROM), flash memory cards, an optical storage device (e.g., CD-ROM, WORM, DVD, digital optical tape), paper “punch” cards, or other suitable data storage media including digital and analog transmission media.
-  While the invention has been particularly shown and described with reference to a preferred embodiment thereof, it will be understood by those skilled in the art that various changes in form and detail may be made without departing from the spirit and scope of the present invention as set forth in the following claims. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
Claims (28)
 1. A method for transmitting a speaker's wireless device audio and video information from a server to a plurality of wireless devices during a video conference over a wireless telecommunication network, comprising the steps of: 
  receiving at the server a plurality of video data from the plurality of the wireless devices, each video data associated with a wireless device; 
 receiving at the server a plurality of audio data from the plurality of the wireless devices, each audio data having a volume level and associated with a wireless device; 
 selecting a speaker from the plurality of wireless devices; and 
 transmitting the video and the audio data of the speaker to the plurality of the wireless devices except to the wireless device of the speaker, wherein the video data and the audio data of the speaker are transmitted based upon a predefined criteria, wherein the speaker's video and audio data has a priority over non-speakers' audio and video data. 
  2. The method of claim 1 , wherein the step of selecting a speaker further comprising the steps of: 
  comparing volume levels of the plurality of the audio data received; 
 selecting an audio data with a highest volume level; and 
 assigning as the speaker the wireless device associated with the selected audio data. 
  3. The method of claim 1 , wherein the step of selecting a speaker further comprising the steps of: 
  receiving a speaking request from one of the wireless devices; and 
 assigning as the speaker the wireless device associated with the speaking request. 
  4. The method of claim 3 , wherein the step of assigning the speaker further comprising the step of, if the audio data from the speaker is silent for a predefined period, assigning as the speaker the wireless device associated with the speaking request. 
   5. The method of claim 1 , wherein the step of selecting a speaker further comprising the steps of: 
  receiving a speaking request from a requesting wireless device; 
 obtaining a priority associated with the requesting wireless device; 
 comparing the priority of the requesting wireless device with a priority of a current wireless device; and 
 if the priority of the requesting wireless device is higher than the priority of the current wireless device, assigning as the speaker the requesting wireless device. 
  6. The method of claim 1 , wherein the criteria further comprising transmitting the audio data with a high priority and transmitting the video data with a low priority. 
   7. A method for transmitting and receiving video and audio information at a wireless device during a video conference, the wireless device having an audio device and a display device, comprising the steps of: 
  if the wireless device is assigned as a speaker, transmitting video and audio information to a remote server; 
 if the wireless device is not assigned as the speaker, 
 transmitting audio information to the remote server, and 
receiving the speaker's video and audio information from the remote server; 
playing the audio information received from the remote server on the audio device; and 
 displaying the video information received from the remote server on the display device. 
  8. The method of claim 7 , further comprising the steps of: 
  receiving a floor request from a wireless device; and 
 transmitting the floor request to the remote server. 
  9. The method of claim 7 , further comprising the step of receiving a speaker assignment from the remote device. 
   10. The method of claim 7 , wherein the step of displaying the video information further comprising the step of, if the wireless device is assigned as the speaker, freezing the video information. 
   11. An apparatus for enabling transmission and playing video and audio information on a wireless telecommunication device in wireless communication network, comprising: 
  a transceiver for transmitting and receiving audio and video information from a remote server; 
 a storage unit for storing the audio and video information; 
 a display unit for displaying the video information; 
 a speaker unit for playing the audio information; 
 an interface unit for receiving audio information; 
 a push-to-talk interface for receiving a floor request during a video conference; and 
 a controller for controlling the display unit based on speaker information received from the remote server. 
  12. An apparatus for enabling transmission and playing video and audio information on a wireless telecommunication device in wireless communication network, comprising: 
  means for transmitting and receiving audio and video information from a remote server; 
 means for storing the audio and video information; 
 means for displaying the video information; 
 means for playing the audio information; 
 means for receiving audio information; 
 means for receiving a floor request during a video conference; and 
 means for controlling the means for displaying the video information based on a speaker information received from the remote server. 
  13. A computer-readable medium on which is stored a computer program for transmitting a speaker's wireless device audio and video information from a server to a plurality of wireless devices during a video conference over a wireless telecommunication network, the computer program comprising computer instructions that when executed by a computer performs the steps of: 
  receiving at the server a plurality of video data from the plurality of the wireless devices, each video data associated with a wireless device; 
 receiving at the server a plurality of audio data from the plurality of the wireless devices, each audio data having a volume level and associated with a wireless device; 
 selecting a speaker from the plurality of wireless devices; and 
 transmitting the video and the audio data of the speaker to the plurality of the wireless devices except to the wireless device of the speaker, 
 wherein the video data and the audio data of the speaker are transmitted based upon a predefined criteria, wherein the speaker's video and audio data has a priority over non-speakers' audio and video data. 
  14. The computer program of claim 13 , wherein the step of selecting a speaker further comprising the steps of: 
  comparing volume levels of the plurality of the audio data received; 
 selecting the audio data with the highest volume level; and
 assigning as the speaker the wireless device associated with the selected audio data. 
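The loudest-talker rule of claim 14 (echoed in claims 27-28) reduces to a single comparison. The sketch below continues the illustrative ConferenceServer/MediaFrame sketch above and is only an illustration of that comparison, not the claimed implementation.

```python
from typing import Dict, Optional


def select_loudest_speaker(frames: Dict[str, "MediaFrame"]) -> Optional[str]:
    """Return the device_id whose audio has the highest volume level, if any."""
    if not frames:
        return None
    return max(frames.values(), key=lambda f: f.volume_level).device_id
```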
15. The computer program of claim 13, wherein the step of selecting a speaker further comprises the steps of:
  receiving a speaking request from one of the wireless devices; and 
 assigning as the speaker the wireless device associated with the speaking request. 
16. The computer program of claim 15, wherein the step of assigning the speaker further comprises the step of, if the audio data from the speaker is inactive for a predefined period, assigning as the speaker the wireless device associated with the speaking request.
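Claims 15 and 16 hand the floor to a device that sent a speaking (floor) request, in claim 16 only once the current speaker's audio has been inactive for a predefined period. A minimal sketch, continuing the server sketch above, follows; the three-second window and the helper name handle_floor_request are assumptions made purely for illustration.

```python
import time

SPEAKER_INACTIVITY_TIMEOUT = 3.0  # "predefined period" of claim 16 (assumed value)


def handle_floor_request(server: "ConferenceServer", requester_id: str,
                         last_speaker_audio_time: float) -> bool:
    """Grant the floor if it is free (claim 15) or once the speaker goes silent (claim 16)."""
    if server.current_speaker is None:
        server.current_speaker = requester_id
        return True
    if time.time() - last_speaker_audio_time >= SPEAKER_INACTIVITY_TIMEOUT:
        server.current_speaker = requester_id
        return True
    return False  # request stays pending until the current speaker yields
```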
17. The computer program of claim 13, wherein the step of selecting a speaker further comprises the steps of:
  receiving a speaking request from a requesting wireless device; 
 obtaining a priority associated with the requesting wireless device; 
 comparing the priority of the requesting wireless device with a priority of a current wireless device; and 
 if the priority of the requesting wireless device is higher than the priority of the current wireless device, assigning as the speaker the requesting wireless device. 
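Claim 17 (together with the priority table of claims 24-25) arbitrates competing requests by rank: a requester preempts the current speaker only when its priority is higher. The sketch below continues the server sketch above; the table contents and the function name arbitrate_floor are hypothetical.

```python
from typing import Dict

# Hypothetical per-device priorities (claim 24); larger numbers outrank smaller ones.
PRIORITY_TABLE: Dict[str, int] = {"device-A": 3, "device-B": 2, "device-C": 1}


def arbitrate_floor(server: "ConferenceServer", requester_id: str) -> bool:
    """Reassign the speaker only if the requester outranks the current speaker."""
    current = server.current_speaker
    if current is None or PRIORITY_TABLE.get(requester_id, 0) > PRIORITY_TABLE.get(current, 0):
        server.current_speaker = requester_id
        return True
    return False
```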
18. The computer program of claim 13, wherein the criteria further comprise transmitting the audio data with a high priority and transmitting the video data with a low priority.
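Claim 18 states the criteria only qualitatively: the speaker's audio is sent at high priority and the video at low priority. One way a sender might honor that, sketched here under the assumption that outbound packets are queued as (priority, sequence, payload) tuples, is to drain a priority heap so audio never waits behind video.

```python
import heapq
from typing import Iterator, List, Tuple

AUDIO_PRIORITY, VIDEO_PRIORITY = 0, 1  # assumed encoding: lower value is sent first


def drain_outbound_queue(packets: List[Tuple[int, int, bytes]]) -> Iterator[bytes]:
    """Yield payloads with all high-priority (audio) packets ahead of video packets."""
    heapq.heapify(packets)
    while packets:
        _, _, payload = heapq.heappop(packets)
        yield payload
```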
19. A computer-readable medium on which is stored a computer program for transmitting and receiving video and audio information at a wireless device during a video conference, the wireless device having an audio device and a display device, the computer program comprising computer instructions that, when executed by a computer, perform the steps of:
  if the wireless device is assigned as a speaker, transmitting video and audio information to a remote server; 
 if the wireless device is not assigned as the speaker, 
 transmitting audio information to the remote server, and 
receiving the speaker's video and audio information from the remote server; 
playing the audio information received from the remote server on the audio device; and 
 displaying the video information received from the remote server on the display device. 
20. The computer program of claim 19, further comprising the steps of:
  receiving a floor request; and 
 transmitting the floor request to the remote server. 
21. The computer program of claim 19, further comprising the step of receiving a speaker assignment from the remote server.
22. The computer program of claim 19, wherein the step of displaying the video information further comprises the step of, if the wireless device is assigned as the speaker, freezing the video information.
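Claims 19-22 (mirroring the method of claims 7-10) describe the handset's side of the exchange: the assigned speaker uploads both streams and may freeze its own display, while every other device uploads audio only and renders the speaker's streams. The sketch below assumes hypothetical device and server_link objects standing in for the handset hardware and the wireless link to the remote server; it is illustrative only.

```python
def run_conference_step(device, server_link) -> None:
    """One device-side iteration: the speaker sends A/V, non-speakers send audio only."""
    if device.is_assigned_speaker:
        server_link.send(audio=device.capture_audio(), video=device.capture_video())
        device.display.freeze()                    # claims 10 and 22: freeze local video
    else:
        server_link.send(audio=device.capture_audio())
        frame = server_link.receive()              # the speaker's audio and video
        device.speaker_unit.play(frame.audio)      # audio device of claim 19
        device.display.show(frame.video)           # display device of claim 19
```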
   23. A system for transmitting and displaying priority video and audio information at a plurality of wireless devices engaging in a video conferencing session in a wireless communication network, comprising: 
 a server in communication with the wireless communication network, the server including video and audio data transmission priority criteria, wherein the wireless device of a current speaker is given a high priority; and
 a plurality of wireless communication devices capable of communicating with the server through the wireless communication network, each wireless communication device capable of transmitting the audio and video information to, and receiving it from, the server according to the video and audio data transmission priority criteria.
24. The system of claim 23, wherein the server further includes a predefined priority table with a plurality of entries, wherein each entry is assigned to a wireless communication device.
25. The system of claim 24, wherein the server assigns a wireless communication device as a current speaker based on the predefined priority table.
26. The system of claim 24, wherein the video and audio transmission criteria assign a high priority to audio information and a low priority to video information from a wireless communication device assigned as a current speaker.
27. The system of claim 23, wherein the server receives audio from the plurality of wireless communication devices and assigns a wireless communication device as a current speaker.
28. The system of claim 27, wherein the server assigns a wireless communication device as a current speaker based on a volume associated with the audio information.
  Priority Applications (15)
| Application Number | Priority Date | Filing Date | Title | 
|---|---|---|---|
| US10/924,687 US20060055771A1 (en) | 2004-08-24 | 2004-08-24 | System and method for optimizing audio and video data transmission in a wireless system | 
| MX2007002295A MX2007002295A (en) | 2004-08-24 | 2005-08-24 | System and method for optimizing audio and video data transmission in a wireless system. | 
| PCT/US2005/030077 WO2006023961A2 (en) | 2004-08-24 | 2005-08-24 | System and method for optimizing audio and video data transmission in a wireless system | 
| PE2005000976A PE20060753A1 (en) | 2004-08-24 | 2005-08-24 | SYSTEM AND METHOD TO OPTIMIZE THE TRANSMISSION OF AUDIO AND VIDEO DATA IN A WIRELESS SYSTEM | 
| CNA2005800349420A CN101040524A (en) | 2004-08-24 | 2005-08-24 | System and method for optimizing audio and video data transmission in a wireless system | 
| JP2007530076A JP2008511263A (en) | 2004-08-24 | 2005-08-24 | System and method for optimizing audio and video data transmission in a wireless system | 
| EP05790311A EP1787469A2 (en) | 2004-08-24 | 2005-08-24 | System and method for optimizing audio and video data transmission in a wireless system | 
| RU2007110835/09A RU2007110835A (en) | 2004-08-24 | 2005-08-24 | SYSTEM AND METHOD FOR OPTIMIZATION OF TRANSFER OF AUDIO AND VIDEO DATA IN A WIRELESS SYSTEM | 
| BRPI0514566-0A BRPI0514566A (en) | 2004-08-24 | 2005-08-24 | system and method for optimizing audio and video data transmission in a wireless system | 
| TW094129023A TW200623879A (en) | 2004-08-24 | 2005-08-24 | System and method for optimizing audio and video data transmission in a wireless system | 
| KR1020077006703A KR20070040850A (en) | 2004-08-24 | 2005-08-24 | System and method for optimizing voice and video data transmission in wireless systems | 
| CA002578218A CA2578218A1 (en) | 2004-08-24 | 2005-08-24 | System and method for optimizing audio and video data transmission in a wireless system | 
| KR1020087031957A KR20090016004A (en) | 2004-08-24 | 2005-08-24 | System and method for optimizing voice and video data transmission in wireless systems | 
| ARP050103576A AR050380A1 (en) | 2004-08-24 | 2005-08-26 | SYSTEM AND METHOD TO OPTIMIZE TRANSMISSION OF AUDIO AND VIDEO DATA IN A WIRELESS SYSTEM. | 
| IL181537A IL181537A0 (en) | 2004-08-24 | 2007-02-25 | System and method for optimizing audio and video data transmission in a wireless system | 
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title | 
|---|---|---|---|
| US10/924,687 US20060055771A1 (en) | 2004-08-24 | 2004-08-24 | System and method for optimizing audio and video data transmission in a wireless system | 
Publications (1)
| Publication Number | Publication Date | 
|---|---|
| US20060055771A1 true US20060055771A1 (en) | 2006-03-16 | 
Family
ID=35968308
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date | 
|---|---|---|---|
| US10/924,687 Abandoned US20060055771A1 (en) | 2004-08-24 | 2004-08-24 | System and method for optimizing audio and video data transmission in a wireless system | 
Country Status (14)
| Country | Link | 
|---|---|
| US (1) | US20060055771A1 (en) | 
| EP (1) | EP1787469A2 (en) | 
| JP (1) | JP2008511263A (en) | 
| KR (2) | KR20090016004A (en) | 
| CN (1) | CN101040524A (en) | 
| AR (1) | AR050380A1 (en) | 
| BR (1) | BRPI0514566A (en) | 
| CA (1) | CA2578218A1 (en) | 
| IL (1) | IL181537A0 (en) | 
| MX (1) | MX2007002295A (en) | 
| PE (1) | PE20060753A1 (en) | 
| RU (1) | RU2007110835A (en) | 
| TW (1) | TW200623879A (en) | 
| WO (1) | WO2006023961A2 (en) | 
Families Citing this family (18)
| Publication number | Priority date | Publication date | Assignee | Title | 
|---|---|---|---|---|
| CN100493038C (en) | 2006-05-26 | 2009-05-27 | 华为技术有限公司 | Method and system for media stream replacement during terminal call | 
| JP2008227693A (en) * | 2007-03-09 | 2008-09-25 | Oki Electric Ind Co Ltd | Speaker video display control system, speaker video display control method, speaker video display control program, communication terminal, and multipoint video conference system | 
| NO20071451L (en) | 2007-03-19 | 2008-09-22 | Tandberg Telecom As | System and method for controlling conference equipment | 
| CN101080000A (en) * | 2007-07-17 | 2007-11-28 | 华为技术有限公司 | Method, system, server and terminal for displaying speaker in video conference | 
| CN101789871B (en) * | 2009-01-23 | 2012-10-03 | 国际商业机器公司 | Method, server device and client device for supporting plurality of simultaneous online conferences | 
| CN101489091A (en) * | 2009-01-23 | 2009-07-22 | 深圳华为通信技术有限公司 | Audio signal transmission processing method and apparatus | 
| US9277021B2 (en) * | 2009-08-21 | 2016-03-01 | Avaya Inc. | Sending a user associated telecommunication address | 
| KR101636716B1 (en) | 2009-12-24 | 2016-07-06 | 삼성전자주식회사 | Apparatus of video conference for distinguish speaker from participants and method of the same | 
| WO2011087356A2 (en) * | 2010-01-15 | 2011-07-21 | Mimos Berhad | Video conferencing using single panoramic camera | 
| CN101867768B (en) * | 2010-05-31 | 2012-02-08 | 杭州华三通信技术有限公司 | Picture control method and device for video conference place | 
| CN102404542B (en) * | 2010-09-09 | 2014-06-04 | 华为终端有限公司 | Method and device for adjusting participant's image display in multi-screen video conference | 
| CN104038725B (en) * | 2010-09-09 | 2017-12-29 | 华为终端有限公司 | The method and device being adjusted is shown to participant's image in multi-screen video conference | 
| US9118940B2 (en) * | 2012-07-30 | 2015-08-25 | Google Technology Holdings LLC | Video bandwidth allocation in a video conference | 
| US9607630B2 (en) | 2013-04-16 | 2017-03-28 | International Business Machines Corporation | Prevention of unintended distribution of audio information | 
| CN103716171B (en) * | 2013-12-31 | 2017-04-05 | 广东公信智能会议股份有限公司 | A kind of audio data transmission method and main frame, terminal | 
| CN106488173B (en) * | 2015-08-26 | 2019-10-11 | 宇龙计算机通信科技(深圳)有限公司 | Implementation method, device and related equipment of mobile terminal video conferencing | 
| CN105930322B (en) * | 2016-07-14 | 2018-11-20 | 无锡科技职业学院 | A kind of conversion of long distance high efficiency is without original text synchronous translation apparatus system | 
| WO2019114911A1 (en) | 2017-12-13 | 2019-06-20 | Fiorentino Ramon | Interconnected system for high-quality wireless transmission of audio and video between electronic consumer devices | 
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title | 
|---|---|---|---|---|
| JPH03184489A (en) * | 1989-12-13 | 1991-08-12 | Fujitsu Ltd | Data switching controller for multi-spot television conference | 
| JPH03283982A (en) * | 1990-03-30 | 1991-12-13 | Nec Corp | Television conference system | 
| US6314302B1 (en) * | 1996-12-09 | 2001-11-06 | Siemens Aktiengesellschaft | Method and telecommunication system for supporting multimedia services via an interface and a correspondingly configured subscriber terminal | 
| JPH11220711A (en) * | 1998-02-03 | 1999-08-10 | Fujitsu Ltd | Multipoint conference system and conference terminal device | 
| US20020106998A1 (en) * | 2001-02-05 | 2002-08-08 | Presley Herbert L. | Wireless rich media conferencing | 
| US6894715B2 (en) * | 2001-06-16 | 2005-05-17 | Eric Harold Henrikson | Mixing video signals for an audio and video multimedia conference call | 
| US7096037B2 (en) * | 2002-01-29 | 2006-08-22 | Palm, Inc. | Videoconferencing bandwidth management for a handheld computer system and method | 
| JP2003299051A (en) * | 2002-03-29 | 2003-10-17 | Matsushita Electric Ind Co Ltd | Information output device and information output method | 
- 2004
  - 2004-08-24 US US10/924,687 patent/US20060055771A1/en not_active Abandoned
- 2005
  - 2005-08-24 PE PE2005000976A patent/PE20060753A1/en not_active Application Discontinuation
  - 2005-08-24 KR KR1020087031957A patent/KR20090016004A/en not_active Ceased
  - 2005-08-24 JP JP2007530076A patent/JP2008511263A/en active Pending
  - 2005-08-24 WO PCT/US2005/030077 patent/WO2006023961A2/en active Application Filing
  - 2005-08-24 CN CNA2005800349420A patent/CN101040524A/en active Pending
  - 2005-08-24 MX MX2007002295A patent/MX2007002295A/en not_active Application Discontinuation
  - 2005-08-24 BR BRPI0514566-0A patent/BRPI0514566A/en not_active Application Discontinuation
  - 2005-08-24 CA CA002578218A patent/CA2578218A1/en not_active Abandoned
  - 2005-08-24 KR KR1020077006703A patent/KR20070040850A/en not_active Ceased
  - 2005-08-24 TW TW094129023A patent/TW200623879A/en unknown
  - 2005-08-24 RU RU2007110835/09A patent/RU2007110835A/en not_active Application Discontinuation
  - 2005-08-24 EP EP05790311A patent/EP1787469A2/en not_active Withdrawn
  - 2005-08-26 AR ARP050103576A patent/AR050380A1/en unknown
- 2007
  - 2007-02-25 IL IL181537A patent/IL181537A0/en unknown
Patent Citations (14)
| Publication number | Priority date | Publication date | Assignee | Title | 
|---|---|---|---|---|
| US20020154210A1 (en) * | 1993-10-01 | 2002-10-24 | Lester F. Ludwig | Videoconferencing hardware | 
| US6396816B1 (en) * | 1994-12-20 | 2002-05-28 | Intel Corporation | Method and apparatus for multiple applications on a single ISDN line | 
| US5862329A (en) * | 1996-04-18 | 1999-01-19 | International Business Machines Corporation | Method system and article of manufacture for multi-casting audio visual material | 
| US6697614B2 (en) * | 2001-02-27 | 2004-02-24 | Motorola, Inc. | Method and apparatus for distributed arbitration of a right to speak among a plurality of devices participating in a real-time voice conference | 
| US20020181686A1 (en) * | 2001-05-03 | 2002-12-05 | Howard Michael D. | Teleconferencing system | 
| US20020191071A1 (en) * | 2001-06-14 | 2002-12-19 | Yong Rui | Automated online broadcasting system and method using an omni-directional camera system for viewing meetings over a computer network | 
| US20030041165A1 (en) * | 2001-08-24 | 2003-02-27 | Spencer Percy L. | System and method for group video teleconferencing using a bandwidth optimizer | 
| US6906741B2 (en) * | 2002-01-29 | 2005-06-14 | Palm, Inc. | System for and method of conferencing with a handheld computer using multiple media types | 
| US20050239485A1 (en) * | 2002-05-24 | 2005-10-27 | Gorachund Kundu | Dispatch service architecture framework | 
| US7130282B2 (en) * | 2002-09-20 | 2006-10-31 | Qualcomm Inc | Communication device for providing multimedia in a group communication network | 
| US7107017B2 (en) * | 2003-05-07 | 2006-09-12 | Nokia Corporation | System and method for providing support services in push to talk communication platforms | 
| US20050062843A1 (en) * | 2003-09-22 | 2005-03-24 | Bowers Richard D. | Client-side audio mixing for conferencing | 
| US20050237377A1 (en) * | 2004-04-22 | 2005-10-27 | Insors Integrated Communications | Audio data control | 
| US20060030344A1 (en) * | 2004-07-28 | 2006-02-09 | Lg Electronics Inc. | Talk burst allocation in a PTT communication network | 
Cited By (75)
| Publication number | Priority date | Publication date | Assignee | Title | 
|---|---|---|---|---|
| US20070206518A1 (en) * | 1999-09-29 | 2007-09-06 | Kabushiki Kaisha Toshiba | Radio communication terminal | 
| US7933630B2 (en) | 1999-09-29 | 2011-04-26 | Fujitsu Toshiba Mobile Communications Limited | Radio communication terminal | 
| US7224999B1 (en) * | 1999-09-29 | 2007-05-29 | Kabushiki Kaisha Toshiba | Radio communication terminal with simultaneous radio communication channels | 
| US20060082641A1 (en) * | 2004-10-15 | 2006-04-20 | Ganesan Rengaraju | Image and audio controls for a communication device in push-to-video services | 
| US7692681B2 (en) * | 2004-10-15 | 2010-04-06 | Motorola, Inc. | Image and audio controls for a communication device in push-to-video services | 
| US7574228B2 (en) * | 2004-11-10 | 2009-08-11 | Nec Corporation | Multi-spot call system, sound volume adjustment device, portable terminal device, and sound volume adjustment method used therefor and program thereof | 
| US20060098591A1 (en) * | 2004-11-10 | 2006-05-11 | Nec Corporation | Multi-spot call system, sound volume adjustment device, portable terminal device, and sound volume adjustment method used therefor and program thereof | 
| US7596102B2 (en) * | 2004-12-06 | 2009-09-29 | Sony Ericsson Mobile Communications Ab | Image exchange for image-based push-to-talk user interface | 
| US20060120308A1 (en) * | 2004-12-06 | 2006-06-08 | Forbes Stephen K | Image exchange for image-based push-to-talk user interface | 
| US20060215585A1 (en) * | 2005-02-28 | 2006-09-28 | Sony Corporation | Conference system, conference terminal, and mobile terminal | 
| US20180367773A1 (en) * | 2005-08-31 | 2018-12-20 | Rah Color Technologies Llc | Color calibration of color image rendering devices | 
| US10560676B2 (en) * | 2005-08-31 | 2020-02-11 | Rah Color Technologies Llc | Color calibration of color image rendering devices | 
| US9331887B2 (en) * | 2006-03-29 | 2016-05-03 | Microsoft Technology Licensing, Llc | Peer-aware ranking of voice streams | 
| US20070230372A1 (en) * | 2006-03-29 | 2007-10-04 | Microsoft Corporation | Peer-aware ranking of voice streams | 
| US20090187400A1 (en) * | 2006-09-30 | 2009-07-23 | Huawei Technologies Co., Ltd. | System, method and multipoint control unit for providing multi-language conference | 
| US9031849B2 (en) | 2006-09-30 | 2015-05-12 | Huawei Technologies Co., Ltd. | System, method and multipoint control unit for providing multi-language conference | 
| WO2008115334A3 (en) * | 2007-02-23 | 2008-12-31 | John A Sachau | System and methods for mobile videoconferencing | 
| US8179821B2 (en) * | 2007-06-25 | 2012-05-15 | Comverse, Ltd. | Identifying participants of an audio conference call | 
| US20080316944A1 (en) * | 2007-06-25 | 2008-12-25 | Comverse, Ltd. | Identifying participants of an audio conference call | 
| US8649300B2 (en) | 2007-12-28 | 2014-02-11 | Huawei Technologies Co., Ltd. | Audio processing method, system, and control server | 
| US8531994B2 (en) | 2007-12-28 | 2013-09-10 | Huawei Technologies Co., Ltd. | Audio processing method, system, and control server | 
| US8269817B2 (en) * | 2008-07-16 | 2012-09-18 | Cisco Technology, Inc. | Floor control in multi-point conference systems | 
| US20100013905A1 (en) * | 2008-07-16 | 2010-01-21 | Cisco Technology, Inc. | Floor control in multi-point conference systems | 
| US9357169B2 (en) | 2008-11-24 | 2016-05-31 | Shindig, Inc. | Multiparty communications and methods that utilize multiple modes of communication | 
| US8390670B1 (en) | 2008-11-24 | 2013-03-05 | Shindig, Inc. | Multiparty communications systems and methods that optimize communications based on mode and available bandwidth | 
| US8405702B1 (en) | 2008-11-24 | 2013-03-26 | Shindig, Inc. | Multiparty communications systems and methods that utilize multiple modes of communication | 
| US10542237B2 (en) | 2008-11-24 | 2020-01-21 | Shindig, Inc. | Systems and methods for facilitating communications amongst multiple users | 
| US9215412B2 (en) | 2008-11-24 | 2015-12-15 | Shindig, Inc. | Multiparty communications systems and methods that optimize communications based on mode and available bandwidth | 
| US9041768B1 (en) | 2008-11-24 | 2015-05-26 | Shindig, Inc. | Multiparty communications systems and methods that utilize multiple modes of communication | 
| US8917310B2 (en) | 2008-11-24 | 2014-12-23 | Shindig, Inc. | Multiparty communications systems and methods that optimize communications based on mode and available bandwidth | 
| US9782675B2 (en) | 2008-11-24 | 2017-10-10 | Shindig, Inc. | Systems and methods for interfacing video games and user communications | 
| US8902272B1 (en) | 2008-11-24 | 2014-12-02 | Shindig, Inc. | Multiparty communications systems and methods that employ composite communications | 
| US9661270B2 (en) | 2008-11-24 | 2017-05-23 | Shindig, Inc. | Multiparty communications systems and methods that optimize communications based on mode and available bandwidth | 
| US9401937B1 (en) | 2008-11-24 | 2016-07-26 | Shindig, Inc. | Systems and methods for facilitating communications amongst multiple users | 
| US9947366B2 (en) | 2009-04-01 | 2018-04-17 | Shindig, Inc. | Group portraits composed using video chat systems | 
| US9712579B2 (en) | 2009-04-01 | 2017-07-18 | Shindig. Inc. | Systems and methods for creating and publishing customizable images from within online events | 
| US9779708B2 (en) | 2009-04-24 | 2017-10-03 | Shinding, Inc. | Networks of portable electronic devices that collectively generate sound | 
| US20100287251A1 (en) * | 2009-05-06 | 2010-11-11 | Futurewei Technologies, Inc. | System and Method for IMS Based Collaborative Services Enabling Multimedia Application Sharing | 
| US8330794B2 (en) * | 2009-06-10 | 2012-12-11 | Microsoft Corporation | Implementing multiple dominant speaker video streams with manual override | 
| US20100315484A1 (en) * | 2009-06-10 | 2010-12-16 | Microsoft Corporation | Implementing multiple dominant speaker video streams with manual override | 
| EP2448173A4 (en) * | 2009-07-16 | 2015-04-29 | Zte Corp | System and method for realizing wireless video conference | 
| US20110159860A1 (en) * | 2009-12-30 | 2011-06-30 | Shenzhen Futaihong Precision Industry Co., Ltd. | Method for managing appointments in a communication device | 
| US9247205B2 (en) * | 2010-08-31 | 2016-01-26 | Fujitsu Limited | System and method for editing recorded videoconference data | 
| US20120051719A1 (en) * | 2010-08-31 | 2012-03-01 | Fujitsu Limited | System and Method for Editing Recorded Videoconference Data | 
| US8630854B2 (en) | 2010-08-31 | 2014-01-14 | Fujitsu Limited | System and method for generating videoconference transcriptions | 
| US8791977B2 (en) | 2010-10-05 | 2014-07-29 | Fujitsu Limited | Method and system for presenting metadata during a videoconference | 
| US20160088259A1 (en) * | 2011-01-17 | 2016-03-24 | Eric C. Anderson | System and method for interactive internet video conferencing | 
| US11546548B2 (en) | 2011-02-28 | 2023-01-03 | Ricoh Company, Ltd. | Transmission management apparatus | 
| US10735689B2 (en) * | 2011-02-28 | 2020-08-04 | Ricoh Company, Ltd. | Transmission management apparatus | 
| US20170171511A1 (en) * | 2011-02-28 | 2017-06-15 | Yoshinaga Kato | Transmission management apparatus | 
| US8755310B1 (en) * | 2011-05-02 | 2014-06-17 | Kumar C. Gopalakrishnan | Conferencing system | 
| US9253532B2 (en) | 2011-05-23 | 2016-02-02 | Broadcom Corporation | Two-way audio and video communication utilizing segment-based adaptive streaming techniques | 
| US8860779B2 (en) * | 2011-05-23 | 2014-10-14 | Broadcom Corporation | Two-way audio and video communication utilizing segment-based adaptive streaming techniques | 
| US20120300015A1 (en) * | 2011-05-23 | 2012-11-29 | Xuemin Chen | Two-way audio and video communication utilizing segment-based adaptive streaming techniques | 
| US8681203B1 (en) * | 2012-08-20 | 2014-03-25 | Google Inc. | Automatic mute control for video conferencing | 
| US9247204B1 (en) * | 2012-08-20 | 2016-01-26 | Google Inc. | Automatic mute control for video conferencing | 
| US9554389B2 (en) * | 2012-08-31 | 2017-01-24 | Qualcomm Incorporated | Selectively allocating quality of service to support multiple concurrent sessions for a client device | 
| US20140064210A1 (en) * | 2012-08-31 | 2014-03-06 | Qualcomm Incorporated | Selectively allocating quality of service to support multiple concurrent sessions for a client device | 
| US20140098206A1 (en) * | 2012-10-04 | 2014-04-10 | Cute Circuit LLC | Multimedia communication and display device | 
| US10356356B2 (en) * | 2012-10-04 | 2019-07-16 | Cute Circuit LLC | Multimedia communication and display device | 
| US20140153410A1 (en) * | 2012-11-30 | 2014-06-05 | Nokia Siemens Networks Oy | Mobile-to-mobile radio access network edge optimizer module content cross-call parallelized content re-compression, optimization, transfer, and scheduling | 
| US11144171B2 (en) | 2012-12-28 | 2021-10-12 | Glide Talk Ltd. | Reduced latency server-mediated audio-video communication | 
| US10599280B2 (en) * | 2012-12-28 | 2020-03-24 | Glide Talk Ltd. | Dual mode multimedia messaging | 
| US20160224193A1 (en) * | 2012-12-28 | 2016-08-04 | Glide Talk Ltd. | Dual mode multimedia messaging | 
| US10271010B2 (en) | 2013-10-31 | 2019-04-23 | Shindig, Inc. | Systems and methods for controlling the display of content | 
| US9952751B2 (en) | 2014-04-17 | 2018-04-24 | Shindig, Inc. | Systems and methods for forming group communications within an online event | 
| US9733333B2 (en) | 2014-05-08 | 2017-08-15 | Shindig, Inc. | Systems and methods for monitoring participant attentiveness within events and group assortments | 
| CN105208262A (en) * | 2014-06-26 | 2015-12-30 | 爱玛丽欧有限公司 | Network photographic data management system and method | 
| US9711181B2 (en) | 2014-07-25 | 2017-07-18 | Shindig. Inc. | Systems and methods for creating, editing and publishing recorded videos | 
| US9734410B2 (en) | 2015-01-23 | 2017-08-15 | Shindig, Inc. | Systems and methods for analyzing facial expressions within an online classroom to gauge participant attentiveness | 
| US10133916B2 (en) | 2016-09-07 | 2018-11-20 | Steven M. Gottlieb | Image and identity validation in video chat events | 
| US20230078451A1 (en) * | 2020-02-20 | 2023-03-16 | Shenzhen Hollyland Technology Co., Ltd. | Audio and video transmission devices and audio and video transmission systems | 
| US11997420B2 (en) * | 2020-02-20 | 2024-05-28 | Shenzhen Hollyland Technology Co., Ltd. | Audio and video transmission devices and audio and video transmission systems | 
| WO2022026842A1 (en) * | 2020-07-30 | 2022-02-03 | T1V, Inc. | Virtual distributed camera, associated applications and system | 
| US20230171379A1 (en) * | 2020-07-30 | 2023-06-01 | T1V, Inc. | Virtual distributed camera, associated applications and system | 
Also Published As
| Publication number | Publication date | 
|---|---|
| JP2008511263A (en) | 2008-04-10 | 
| AR050380A1 (en) | 2006-10-18 | 
| TW200623879A (en) | 2006-07-01 | 
| KR20070040850A (en) | 2007-04-17 | 
| MX2007002295A (en) | 2007-05-11 | 
| IL181537A0 (en) | 2007-07-04 | 
| CN101040524A (en) | 2007-09-19 | 
| PE20060753A1 (en) | 2006-08-12 | 
| BRPI0514566A (en) | 2008-06-17 | 
| CA2578218A1 (en) | 2006-03-02 | 
| KR20090016004A (en) | 2009-02-12 | 
| WO2006023961A3 (en) | 2006-07-06 | 
| WO2006023961A2 (en) | 2006-03-02 | 
| EP1787469A2 (en) | 2007-05-23 | 
| RU2007110835A (en) | 2008-10-10 | 
Similar Documents
| Publication | Publication Date | Title | 
|---|---|---|
| US20060055771A1 (en) | System and method for optimizing audio and video data transmission in a wireless system | |
| US8700080B2 (en) | System and method for multiple simultaneous communication groups in a wireless system | |
| US8406797B2 (en) | System and method for transmitting and playing alert tones in a push-to-talk system | |
| CA2613786A1 (en) | System and method for resolving conflicts in multiple simultaneous communications in a wireless system | |
| CN101517906B (en) | Apparatus and method for identifying last speaker in a push-to-talk system | |
| US7725119B2 (en) | System and method for transmitting graphics data in a push-to-talk system | |
| HK1107210A (en) | System and method for optimizing audio and video data transmission in a wireless system | |
| HK1107457A (en) | System and method for transmitting and playing alert tones in a push-to-talk system | |
| HK1107199A (en) | System and method for transmitting graphics data in a push-to-talk system | 
Legal Events
| Date | Code | Title | Description | 
|---|---|---|---|
| AS | Assignment | Owner name: QUALCOMM INCORPORATED A DELAWARE CORPORATION, CALI Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KIES, JONATHAN K.;REEL/FRAME:015479/0517 Effective date: 20041201 | |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |