EP1639821A1 - Systeme de visioconference (Video conferencing system) - Google Patents
Systeme de visioconference
Info
- Publication number
- EP1639821A1 (application EP05702182A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- video
- data streams
- video data
- accordance
- compressed video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H04N7/152—Multipoint control units therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/147—Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
Definitions
- the present invention relates generally to multimedia communications. More particularly, the present invention relates to multi-user video conferencing systems.
- Video conferencing systems permit multiple users to communicate with each other over a distributed communications network.
- video conferencing systems utilizing commonly available technology, such as personal computers, inevitably have relatively poor audio and video quality. This is in large part because the standards underlying such video conferencing systems (such as the H.323 codec format) were developed at a time when the widely available communication systems had relatively limited bandwidth and personal computers had modest processing power and ability to process video data in real-time.
- although higher quality video conferencing systems have been developed, they require the use of communications networks with a relatively large amount of dedicated bandwidth (such as T-1 lines or ISDN networks) and/or specialized conferencing equipment.
- UDP User Datagram Protocol
- RTP Real-time Transport Protocol
- MCU multi-point control unit
- the MCU receives the incoming video signal from the camera of each conference participant, processes the received incoming video signals and develops a single composite signal that is distributed to all of the participants.
- This video signal typically contains the video signals of a combination of the conference participants and the audio signal of one participant.
- processing is centralized at the MCU, a participant has limited capability to alter the signal that it receives so that it, for example, can receive the video signals for a different combination of participants. This reliance on central processing of the incoming video signals also limits the number of conference participants since the MCU has to simultaneously process the incoming video signals for all of the participants.
- a further object of a preferred embodiment of the invention is to provide a high quality video conference system that can be easily implemented over the Internet using the Transmission Control Protocol and can be easily installed as a high-end software system at a widely available user terminal, such as a personal computer. It is an object of the preferred embodiments of the invention to provide a convenient user interface that permits the user to alter the audio/video signal that they receive. It is a further object of the invention for the user to be able to alter the combination of participants for which they receive audio/video signals and to change the display resolution of received video signals.
- Fig. 1 illustrates an exemplary video conferencing system according to a preferred embodiment of the invention.
- Fig. 2 illustrates the video media stream structure in the preferred embodiment.
- Fig. 3 shows the processing of the macroblock of a video frame in a preferred embodiment.
- Fig. 4 is a block diagram showing the processing of coding interframes in a preferred embodiment of the invention.
- Fig. 5 shows the improved motion estimation used in a preferred embodiment of the invention.
- Fig. 6 illustrates an example of image rotation addressed in the improved motion estimation of the preferred embodiment of the invention.
- Fig. 7 illustrates 16 different patterns used to describe the movement of an object in a preferred embodiment of the invention.
- Fig. 8 is an example of the bit stream structure of the outgoing video stream from a client terminal in a preferred embodiment of the invention.
- Fig. 9 is an illustration of the multi-queue and multi-channel architecture utilized in the network connection in a preferred embodiment of the invention.
- Fig. 10 is a display screen of a client terminal while in main screen only mode according to a preferred embodiment of the invention.
- Fig. 11 is a display screen of a client terminal while in main screen plus 4 sub-screen mode according to a preferred embodiment of the invention.
- Fig. 12 is a display screen of a client terminal while in main screen plus 8 sub-screen mode according to a preferred embodiment of the invention.
- Fig. 13 is a display screen of a client terminal while in full screen having 1 main screen plus 10 sub-screens according to a preferred embodiment of the invention.
- Fig. 14 is a display screen for a client terminal to connect to a video conference according to a preferred embodiment of the invention.
- Fig. 15 is a video setting display window in a preferred embodiment of the invention.
- Fig. 16 is an audio setting display window in a preferred embodiment of the invention.
- FIG. 1 is a diagram of the architecture and environment of an exemplary real-time video conferencing system according to a preferred embodiment of the invention.
- the system includes what is referred to as a multi-point control unit (MCU), but as described hereafter this MCU is significantly different in its functionality than the MCU of conventional video conferencing systems.
- MCU multi-point control unit
- the conference system has a plurality of user client terminals. Although an administrator's terminal and a certain number of user client terminals are shown as being connected to the MCU in Fig. 1, this is for illustration purposes only. There may be any number of connected administrator and user client terminals. Indeed, as described hereafter, the number of connected user client terminals may vary during a video conference, as the users have the ability to join and drop from a video conference at their own control. Furthermore, the connections between the terminals shown in Fig. 1 are not fixed connections. They are switched network connections over open communication networks. Preferably, the network connections are broadband connections through an Internet Service Provider (ISP) of the client's choice using the Transmission Control Protocol and Internet Protocol (TCP/IP) of the ISO network model.
- ISP Internet Service Provider
- TCP/IP Transmission Control Protocol/Internet Protocol
- various access networks, firewalls and routers can be set up in a variety of different network configurations, including, for example, Ethernet local area networks.
- in a local area network, one of a certain number of ports, such as ports above 2000, should be opened/forwarded.
- the video conference system is designed and optimized to work with broadband connections (i.e., connections providing upload/download speeds of at least 128 kbps) at the user client terminals.
- broadband connections i.e., connections providing upload/download speeds of at least 128 kbps
- it does not require a fixed bandwidth, and may suitably operate at upload/download speeds of 256 kbps, 512 kbps or more at the user client terminals.
- Each client terminal is preferably a personal computer (PC) with an SVGA display monitor capable of a display resolution of 800 x 600 or better, a set of attached speakers or headphones, a microphone, and a full-duplex sound card.
- the display monitor may need to display a video signal in a large main screen at a normal resolution mode of 320 x 240 @ 25 fps or a high resolution mode of 640 x 480 @ 25 fps. It must also be able to simultaneously display a plurality of small sub-screens, each having a display resolution of 160 x 120 @ 25 fps.
- Each PC has a camera associated therewith to provide a video signal at the location of the client terminal (typically a video signal of the user at the location).
- the camera may be a USB 1.0 or 2.0 compatible camera providing a video signal directly to the client terminal or a professional CCD camera combined with a dedicated video capture card to generate a video signal that can be received by the client terminal.
- the video conferencing system preferably utilizes client terminals having the processing capabilities of a high-speed Intel Pentium 4 microprocessor with 256 MB of system memory, or better.
- the client terminals must have Microsoft Windows or other operating system software that permits it to receive and store a computer program in such a manner that allows it to utilize a low level language associated with the microprocessor and/or other hardware elements and having an extended instruction set appropriate to the processing of video. While computationally powerful and able to process video conferencing data in real-time, such personal computers are now commonly available.
- Each one of the client terminals performs processing of its outgoing video signals and incoming video signals and other processing related to operation of the video conferencing system.
- the MCU of the preferred embodiments thus needs to perform relatively little video processing since the video processing is carried out in the client terminals.
- the MCU captures audio/video data streams from all client terminals in real-time and then redistributes the streams back to any client terminal upon request.
- the MCU closely approximates the functionality of a video switch unit - needing only a satisfactory network connection sufficient to support the total bandwidth of all connected user terminals. This makes it relatively easy to install and support video conferences managed by the MCU at locations that do not have a great deal of network infrastructure.
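As a rough illustration of this sizing claim (not taken from the patent; the per-stream rate and the one-stream-per-client assumption are illustrative), the switch-like MCU's link capacity can be estimated as the sum of all inbound streams plus all requested outbound streams:

```python
def mcu_bandwidth_kbps(n_clients, streams_per_client=1, per_stream_kbps=512):
    # The switching MCU relays each client's compressed stream rather than
    # re-encoding a composite, so its network link must carry roughly:
    #   inbound:  one compressed stream uploaded by every client
    #   outbound: the combination of streams each client has selected
    # 512 kbps per stream is an illustrative figure within the broadband
    # range the text mentions (128 to 512+ kbps).
    inbound = n_clients * per_stream_kbps
    outbound = n_clients * streams_per_client * per_stream_kbps
    return inbound + outbound
```

For example, ten clients each viewing one remote stream would need on the order of 10 Mbps at the MCU, which is why an ordinary broadband uplink can suffice.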
- Intraframes are utilized as key frames.
- the I-frames may be compressed according to the JPEG (Joint Photographic Experts Group) standard with additional dynamic macroblock vector memory analysis technology.
- each interframe is coded based on the difference between it and the predicted I-frame.
- Each frame is divided into a plurality of macroblocks, each macroblock preferably consisting of a block of 16 x 16 pixels.
- the system does not use the conventional 4:2:0 format in which the color information in the frame is downsampled by determining the average of the respective color values in each 2 x 2 subblock of four pixels.
- the color components in the I-frames, or the color components in both the I-frames and the P-frames, are preferably downsampled to a ratio for Y-Cr-Cb of 4:2:2.
- a macroblock is divided into four 8 x 8 Y-blocks (luminance), two 8 x 8 Cr-blocks (chrominance-red) and two 8 x 8 Cb-blocks (chrominance-blue). These are sampled in the stream sequence of Y-Cr-Y-Cb-Y-Cr-Y-Cb.
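A minimal sketch of this 4:2:2 macroblock layout (pure Python; the pairwise-averaging downsample is illustrative, since the patent does not specify the downsampling filter):

```python
def downsample_422(chroma16):
    # 4:2:2 keeps full vertical chroma resolution and halves only the
    # horizontal one (4:2:0 would halve both), so a 16x16 chroma plane
    # becomes 16x8. Averaging adjacent pixel pairs is one simple filter.
    return [[(row[x] + row[x + 1]) // 2 for x in range(0, 16, 2)]
            for row in chroma16]

def split_8x8(plane):
    # Cut a plane into 8x8 blocks, scanning left-to-right, top-to-bottom.
    h, w = len(plane), len(plane[0])
    return [[r[x:x + 8] for r in plane[y:y + 8]]
            for y in range(0, h, 8) for x in range(0, w, 8)]

def macroblock_422(y16, cr16, cb16):
    # A 16x16 macroblock yields four 8x8 Y blocks plus, after 4:2:2
    # downsampling, two 8x8 Cr blocks and two 8x8 Cb blocks.
    return (split_8x8(y16),
            split_8x8(downsample_422(cr16)),
            split_8x8(downsample_422(cb16)))
```

Compared with 4:2:0, this retains twice the chroma samples per macroblock, which is the source of the color-fidelity advantage claimed in the text.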
- the color loss introduced through compression is reduced to a minimal level, which in comparison to the conventional 4:2:0 format, yields superior video quality.
- although retaining this additional color detail is conventionally avoided, when used in conjunction with the other features of the video conference system described in this application, which improve the transport of the data through a TCP/IP network, the result is high quality video.
- the data from the frame is then processed, in groups of 2 x 2 luminance blocks with two 2 x 1 chrominance blocks, before being passed to the unique context-based adaptive arithmetic coder (CABAC) of the preferred embodiments.
- a discrete cosine transformation (DCT) is performed and then quantization coefficients are determined as known to one of ordinary skill in the art.
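For clarity, the DCT and quantization steps can be sketched as a direct (non-optimized) 8 x 8 DCT-II followed by uniform quantization; the quantization step of 16 is an illustrative value, not taken from the patent:

```python
import math

def dct2_8x8(block):
    # 2-D DCT-II of an 8x8 block in the direct O(n^4) separable form.
    # Real codecs use fast factorizations; this version favors clarity.
    def c(k):
        return math.sqrt(0.5) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(8) for y in range(8))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

def quantize(coeffs, q=16):
    # Uniform quantization: divide and round. Most high-frequency
    # coefficients collapse to zero, which the entropy coder exploits.
    return [[round(v / q) for v in row] for row in coeffs]
```

A flat block concentrates all its energy in the DC coefficient, which is what makes the subsequent arithmetic coding stage effective.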
- Huffman coding is used at this point.
- the unique context-based adaptive arithmetic coder (CABAC) is used instead in the preferred embodiments to obtain a higher video compression ratio.
- the preferred method of coding the P-frames is shown in Fig.4.
- the I-frame which serves as the reference image is compressed, coded and stored in memory.
- For each macroblock in the P-frame containing the target image to be coded with respect to the reference image, a motion estimation process is performed that searches for the macroblock in the reference image that provides the best match. Depending upon the amount of motion that has occurred, the macroblock in the reference image that provides the best match may not be at the same location within the frame as the macroblock being coded in the target image of the P-frame. Fig. 4 shows an example where this is the case. If the search finds a suitable match for the macroblock, then only a relative movement vector will be coded. If the CPU load of the system approaches its limit, a coding method similar to intraframe coding will be used instead. If no suitable match is found, then a comparison with the background image in the P-frame is performed to determine if a new object is identified.
- the macroblock will be coded and stored in memory and will be sent through the decoder for the next object search.
- This coding process has the advantages that there is a smaller final data matrix and a minimal number of bits is needed for coding.
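The search step above can be sketched as a generic full-search SAD block match; the block size, search window, and any acceptance threshold are illustrative, as the patent does not give those values here:

```python
def sad(ref, cur, x, y, k0, l0, n=4):
    # Sum of absolute differences between the n x n target block at
    # (k0, l0) in the current frame and the candidate block at (x, y)
    # in the reference frame.
    return sum(abs(cur[k0 + i][l0 + j] - ref[x + i][y + j])
               for i in range(n) for j in range(n))

def motion_search(ref, cur, k0, l0, n=4, r=2):
    # Exhaustive search over a +/-r window. Returns the displacement
    # vector and its SAD; when the SAD is small enough, the encoder can
    # transmit only this vector instead of the whole block.
    best = (0, 0, sad(ref, cur, k0, l0, k0, l0, n))
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            x, y = k0 + dx, l0 + dy
            if 0 <= x and x + n <= len(ref) and 0 <= y and y + n <= len(ref[0]):
                s = sad(ref, cur, x, y, k0, l0, n)
                if s < best[2]:
                    best = (dx, dy, s)
    return best
```

A frame whose content has shifted by one row is recovered exactly as a single motion vector, which is the small-matrix advantage the text describes.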
- Many conventional video compression algorithms do not perform vector analysis on video images; they do not record the same or similar objects in the sequential image frames and the key frames.
- the object image is transmitted in conventional motion estimation techniques regardless of whether the object is undergoing translation or rotation.
- the improved motion estimation of the Context-Based Adaptive Arithmetic Coder (CABAC) used for video compression in the preferred embodiments is shown in Figs. 5-7. In this scheme, rotation, mirror and other matching methods are added to improve the precision of motion estimation.
- CABAC Context-Based Adaptive Arithmetic Coder
- the software utilizes and leverages the low level language advantageously made available for use with modern central processing units, such as the Intel Pentium 4, supporting, for example, MMX, SSE, SSE2 and similar extended instruction sets to meet the demands of general video image processing.
- modern central processing units such as the Intel Pentium 4
- ITU H.263 motion estimation does not give a motion vector analysis solution for an object going through rotation such as shown in Fig. 6, but the improved motion estimation method of the preferred embodiment gives a very simple solution.
- the ITU H.263 standard uses the following formula to compute motion estimation, where Fo and Fi represent the current frame and the reference frame; k, l are coordinates of the current frame; x, y are coordinates of the reference frame; and N is the size of the macroblocks.
- the improved motion estimation formula of the preferred embodiments can be expressed by the following equation, where T represents the transformation of one of the 16 different patterns shown in Fig. 7:
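The two cost functions referenced above did not survive extraction. A reconstruction consistent with the standard H.263 sum-of-absolute-differences criterion and with the symbols defined in the text (Fo the current frame, Fi the reference frame, N the macroblock size, T one of the 16 patterns of Fig. 7) would be as follows; the exact placement of T on the current-frame block is an assumption:

```latex
% H.263-style block-matching cost (standard SAD form):
\mathrm{SAD}(x, y) =
  \sum_{k=0}^{N-1} \sum_{l=0}^{N-1}
  \bigl| F_o(k, l) - F_i(x + k,\; y + l) \bigr|

% Extended cost of the preferred embodiment, with T one of the
% 16 candidate transformations (rotation, mirror, ...):
\mathrm{SAD}_T(x, y) =
  \sum_{k=0}^{N-1} \sum_{l=0}^{N-1}
  \bigl| T(F_o)(k, l) - F_i(x + k,\; y + l) \bigr|
```

Minimizing over T as well as over (x, y) is what allows a rotated or mirrored object to be coded as a vector plus a pattern index rather than as new image data.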
- the resulting data for a macroblock is preferably arranged into a bit stream having the structure illustrated in Fig. 8.
- the Move header contains the motion data for the macroblock (sequence number, coordinates, angle).
- the Type header indicates the motion type, preferably by reference to one of the sixteen types illustrated in Fig. 7.
- the Quant header contains the Macroblock sequential number.
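A hypothetical packing of the Move/Type/Quant headers might look like the following; the patent gives the header order and contents but not the field widths, so the byte layout and field sizes here are assumptions for illustration:

```python
import struct

# Assumed layout (big-endian, whole-byte fields for readability):
#   Move header: sequence number, x/y coordinates, rotation angle
#   Type header: one of the 16 motion patterns (0..15)
#   Quant header: macroblock sequential number (as the text states)
_FMT = '>HHHhBH'

def pack_macroblock(seq, x, y, angle, motion_type, quant, payload):
    # Serialize the headers followed by the coded macroblock payload.
    header = struct.pack(_FMT, seq, x, y, angle, motion_type, quant)
    return header + payload

def unpack_macroblock(data):
    # Recover the header fields and the remaining payload bytes.
    fields = struct.unpack_from(_FMT, data)
    return fields + (data[struct.calcsize(_FMT):],)
```

Packing motion data this way, instead of retransmitting pixel data, is what keeps the per-macroblock bit count low.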
- typical video conferencing protocols such as UDP
- protocols such as UDP need specified protocol descriptors that may substantially increase the volume of data to be transmitted and the bandwidth that is necessary.
- the data volume generated by the video encoder of the preferred embodiments is only about 50% of the data that would be necessary if the video were coded according to the ITU H.263 standard.
- this reduction in data is obtained while providing more flexibility over the frame sizes, and still delivering better video quality in terms of reduced mosaic artifacts, color accuracy and image loss.
- the bit stream structure of the preferred embodiments is optimized for transmission utilizing the TCP/IP protocol, which is one of the most common protocols for many data networks, including the Internet.
- video conferencing systems typically avoid transmission over TCP/IP networks, even though TCP/IP utilizes less overhead in terms of data block headers, etc., because the transmission of packets often incurs delay and the resulting latency is unacceptable in a video conferencing system.
- the preferred embodiments utilize a unique technique for holding the data stream in a buffer and transmitting it over a TCP/IP network that results in a video conferencing system free from undesirable latency effects. According to this technique, after a point-to-point connection is established between the two devices, multiple sockets are opened (called A, B, C, and D herein for simplicity), which correspond to an equal number of channels.
- the data buffer is configured to store a number of data blocks equal to the number of channels, and these buffered data blocks are then duplicated as necessary to produce multiple copies of each of the data blocks. The data blocks are then ordered into different internal sequences according to the number of channels.
- Fig. 9 illustrates the transmission architecture utilized in the preferred embodiment to deliver higher realized bandwidth and connection reliability over TCP/IP networks through the combination of a concurrent multi-queue and multi-channel transmission architecture.
- multiple queues are used to control the transmission of data over TCP/IP networks. Suppose there are "N" queues and "M" logical channels, and that each queue of data blocks is duplicated, sequentially numbered and fed to all channels as described above; the total number of queued block copies will then be N x M.
- the data blocks are preferably prioritized based on their importance to providing real-time video communications. From top to bottom of prioritization, there are four preferred levels: 1st - control data (ring, camera control, etc.); 2nd - audio data; 3rd - video data; 4th - other data (file transfer, etc.).
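The multi-queue/multi-channel idea with the four priority levels can be sketched as follows; this is an illustrative model with in-memory lists standing in for the sockets, not the patent's implementation:

```python
import heapq

# Priority order from the text: control > audio > video > other data.
PRIORITY = {'control': 0, 'audio': 1, 'video': 2, 'other': 3}

class MultiQueueSender:
    """Blocks are prioritized, then every block is duplicated onto all
    channels; the receiver keeps whichever copy arrives first."""
    def __init__(self, channels):
        self.channels = channels   # e.g. four lists standing in for sockets A-D
        self.queue = []            # priority queue of (priority, seq, data)
        self.seq = 0

    def enqueue(self, kind, data):
        heapq.heappush(self.queue, (PRIORITY[kind], self.seq, data))
        self.seq += 1

    def flush(self):
        # Send highest-priority blocks first, duplicated on every channel.
        while self.queue:
            _prio, seq, data = heapq.heappop(self.queue)
            for ch in self.channels:
                ch.append((seq, data))

def receive(channels):
    # Keep only the first arriving copy of each sequence number.
    seen, out = set(), []
    for ch in channels:
        for seq, data in ch:
            if seq not in seen:
                seen.add(seq)
                out.append((seq, data))
    return sorted(out)
```

Because every block travels on all channels, a stalled TCP connection on one channel no longer stalls the whole stream, which is the claimed source of the smoother data flow.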
- This concurrent multi-queue and multi-channel transmission architecture delivers a much more reliable connection and smoother data flow over TCP/IP channels than was previously known. On average, the realized bandwidth is increased by 50%, which results in significant improvement in the quality of the video conferencing system.
- Fig. 14 illustrates a display window from which a user may select the remote client conferencing site with which they wish to connect and view from a listing of conferences.
- the window may be provided automatically upon launching a software application or, e.g., when the user right clicks on a display screen they are viewing.
- An alternative log-on screen may also be provided in which a registered user enters information identifying a conference center by number and/or name, along with their username and password, and then click on a button to connect to the conference.
- the screen may have save password and auto logon features utilized in the logon screen, in the same manner that is known for other types of applications.
- the user may select from among many screens, including the examples shown in Figs. 10-13.
- Fig. 10 shows the display in a main screen only mode.
- Fig. 11 shows the display in a main screen + 4 sub-screens mode.
- Fig. 12 shows the display in a main screen + 8 sub- screens mode.
- Fig. 13 shows the display in a full screen mode with one main screen and 10 sub screens.
- the user is not limited to these examples, but may view any number of screens simultaneously, up to the maximum number of users.
- the video on the main screen can be switched back and forth with any sub-screen by a simple left click on any live sub-screen to switch it with the main screen.
- the high efficiency transport picture smoothing algorithm described above greatly improves system resource utilization to make this possible.
- These screens also provide various icons or buttons to enable user selection of various functions. The user may click on the record icon to start capture of the conference video.
- the user may select a site from the site list in the message selection to start private message chat. All messages are invisible to other users.
- a public message may be sent by selecting "All" in the say-to selection to send messages to all sites (users, clients) in the conference.
- the user may click on the mute icon to activate a mute function muting the sound coming through the conference site.
- the screen may also indicate the current status of listed online meeting groups and users.
- a (V A S L) status system may be used, where the letters mean the following: V - the site is sending video; A - the site is sending audio; S - the other site is receiving the user's audio; L - the other site is receiving the user's video.
- the screens also preferably display the connection status.
- client (user)
- mode (chaired or free mode)
- data in speed (inbound data in kbps)
- data out speed (outbound data in kbps)
- session time (in format hh:mm:ss)
- in free mode, every client user works the same as in a non-chaired conference.
- in chaired mode, each client user should ring the bell icon to get permission to speak, and none of the users can switch screens or use a whiteboard.
- the chairperson will open the site, then click on the sync button to broadcast the site to all client users.
- Figs. 15 and 16 show the video and audio settings available at the user terminal.
- Fig. 15 shows the video setting.
- There is a video device driver drop down menu which can be highlighted to select the appropriate video driver.
- There is a resolution section or check box which enables the user to set the resolution at either 640 x 480 or 320 x 240.
- the video input device hardware equipment may be selected through a drop down menu or other interactive feature.
- a video format feature such as the button shown in Fig. 15, allows the appropriate video format (PAL or NTSC) to be selected.
- a video source feature such as the button shown in Fig. 15, allows the appropriate video source to be selected.
- Fig. 16 shows the user audio setting.
- There is an audio input device driver drop down menu which can be highlighted to select the appropriate audio input device.
- There is an audio output device driver drop down menu which can be highlighted to select the appropriate audio output device.
- There is an audio input volume feature to adjust the volume of the microphone and an audio output volume feature to adjust the volume of the speakers/headphone.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention relates to a video conferencing method using video data from cameras located at the respective locations of user terminals. The video data from each camera is supplied to the user terminal, where it is processed into a compressed video data stream by software installed and executed in the user terminal. The compressed video data streams are transmitted to a multi-point control unit, which switches them into outgoing video data streams without decompressing them. Each user terminal receives, decompresses and displays a selected combination of said outgoing video data streams in accordance with a selection made by the user of the user terminal.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US10/755,067 US20050151836A1 (en) | 2004-01-09 | 2004-01-09 | Video conferencing system |
| PCT/IB2005/000011 WO2005069619A1 (fr) | 2004-01-09 | 2005-01-06 | Systeme de visioconference |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| EP1639821A1 true EP1639821A1 (fr) | 2006-03-29 |
| EP1639821A4 EP1639821A4 (fr) | 2007-05-23 |
Family
ID=34739498
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP05702182A Withdrawn EP1639821A4 (fr) | 2004-01-09 | 2005-01-06 | Systeme de visioconference |
Country Status (3)
| Country | Link |
|---|---|
| US (2) | US20050151836A1 (fr) |
| EP (1) | EP1639821A4 (fr) |
| WO (1) | WO2005069619A1 (fr) |
Families Citing this family (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7349000B2 (en) * | 2002-04-30 | 2008-03-25 | Tandberg Telecom As | Method and system for display of video device status information |
| IL158276A (en) * | 2003-10-02 | 2010-04-29 | Radvision Ltd | Method for dynamically optimizing bandwidth allocation in variable bitrate (multi-rate) conferences |
| GB0408457D0 (en) * | 2004-04-15 | 2004-05-19 | Ring 2 Comm Ltd | Various improvements relating to telephone and conference calls between telephony devices |
| US7400340B2 (en) * | 2004-11-15 | 2008-07-15 | Starent Networks, Corp. | Data mixer for portable communications devices |
| JP4434973B2 (ja) * | 2005-01-24 | 2010-03-17 | 株式会社東芝 | 映像表示装置、映像合成配信装置、プログラム、システム及び方法 |
| JP4695474B2 (ja) * | 2005-09-21 | 2011-06-08 | 株式会社東芝 | 合成映像制御装置、合成映像制御方法およびプログラム |
| US7932919B2 (en) * | 2006-04-21 | 2011-04-26 | Dell Products L.P. | Virtual ring camera |
| KR101234156B1 (ko) * | 2006-06-14 | 2013-02-15 | 삼성전자주식회사 | 항목 그룹화를 이용한 외부입력 리스트 제공방법 및 이를적용한 영상기기 |
| KR101405933B1 (ko) * | 2007-07-12 | 2014-06-12 | 엘지전자 주식회사 | 휴대 단말기 및 휴대 단말기의 위치 정보 표시 방법 |
| US8477177B2 (en) * | 2007-08-10 | 2013-07-02 | Hewlett-Packard Development Company, L.P. | Video conference system and method |
| CN101835019A (zh) * | 2007-09-13 | 2010-09-15 | 株式会社日立制作所 | 传送方法、传送装置、影像设备和显示装置 |
| JP4913097B2 (ja) * | 2008-06-16 | 2012-04-11 | 株式会社日立国際電気 | 映像再生方法及び映像再生装置並びに映像配信システム |
| US8145770B2 (en) * | 2008-10-31 | 2012-03-27 | Radvision Ltd. | Devices, methods, and media for determining and assigning optimal media characteristics in communications sessions |
| US9900280B2 (en) * | 2009-03-30 | 2018-02-20 | Avaya Inc. | System and method for managing incoming requests for a communication session using a graphical connection metaphor |
| US9893902B2 (en) * | 2011-05-31 | 2018-02-13 | Google Llc | Muting participants in a communication session |
| EP2679015A4 (fr) * | 2011-06-07 | 2014-05-21 | Huawei Tech Co Ltd | Dispositif et procédé de contrôle de session vidéo dans un réseau de données |
| FR2998995A1 (fr) * | 2012-12-03 | 2014-06-06 | France Telecom | Procede de communication entre plusieurs utilisateurs munis de terminaux de communication, par l'intermediaire d'une espace virtuel de communication |
| CN104022941A (zh) * | 2014-06-23 | 2014-09-03 | 国家电网公司 | 一种会议即时通讯系统及其实现方法 |
| CN106230608A (zh) * | 2016-07-21 | 2016-12-14 | 山东共达信息技术有限公司 | 一种云视声智能交互系统 |
| CN112118471A (zh) * | 2020-07-21 | 2020-12-22 | 北京博睿维讯科技有限公司 | 一种基于多信源的显示方法、显示装置、多信源共享系统及计算机可读存储介质 |
Family Cites Families (29)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US178451A (en) * | 1876-06-06 | Improvement in combined watch-chain bars and pencils | ||
| US181462A (en) * | 1876-08-22 | Improvement in grain-separators | ||
| US87645A (en) * | 1869-03-09 | Improvement in portable key-hole guards | ||
| US123488A (en) * | 1872-02-06 | Improvement in fences | ||
| US114142A (en) * | 1871-04-25 | Improvement in combined heater and condenser | ||
| US103830A (en) * | 1870-06-07 | Improved machine for paring fruit and vegetables | ||
| US160810A (en) * | 1875-03-16 | Improvement in barrel-barrows | ||
| US92444A (en) * | 1869-07-13 | Improvement in feed-rack for addressing-machines | ||
| US131395A (en) * | 1872-09-17 | Improvement in tobacco-presses | ||
| US188954A (en) * | 1877-03-27 | Improvement in revolving advertising-cases | ||
| US3656178A (en) * | 1969-09-15 | 1972-04-11 | Research Corp | Data compression and decompression system |
| US5315633A (en) * | 1991-12-20 | 1994-05-24 | Unisys Corporation | Digital video switch for video teleconferencing |
| US5886744A (en) * | 1995-09-08 | 1999-03-23 | Intel Corporation | Method and apparatus for filtering jitter from motion estimation video data |
| JPH09149396A (ja) * | 1995-11-27 | 1997-06-06 | Fujitsu Ltd | 多地点テレビ会議システム |
| KR970055857A (ko) * | 1995-12-30 | 1997-07-31 | 김광호 | 디지탈 셀룰라 시스템에서 기지국시스템의 로딩방법 |
| US6122259A (en) * | 1996-02-27 | 2000-09-19 | Hitachi, Ltd. | Video conference equipment and multipoint video conference system using the same |
| ATE491303T1 (de) * | 1997-09-04 | 2010-12-15 | Comcast Ip Holdings I Llc | Vorrichtung für videozugang und kontrolle über ein rechnernetzwerk mit bildkorrektur |
| US6091777A (en) * | 1997-09-18 | 2000-07-18 | Cubic Video Technologies, Inc. | Continuously adaptive digital video compression system and method for a web streamer |
| US5941951A (en) * | 1997-10-31 | 1999-08-24 | International Business Machines Corporation | Methods for real-time deterministic delivery of multimedia data in a client/server system |
| US6731734B1 (en) * | 1999-08-19 | 2004-05-04 | Siemens Information & Communication Networks, Inc. | Apparatus and method for intelligent conference call codec selection |
| JP2001069474A (ja) * | 1999-08-25 | 2001-03-16 | Nec Corp | 多地点制御装置及びそれに用いる映像表示方法 |
| US6366577B1 (en) * | 1999-11-05 | 2002-04-02 | Mci Worldcom, Inc. | Method for providing IP telephony with QoS using end-to-end RSVP signaling |
| US6590604B1 (en) * | 2000-04-07 | 2003-07-08 | Polycom, Inc. | Personal videoconferencing system having distributed processing architecture |
| CA2344595A1 (fr) * | 2000-06-08 | 2001-12-08 | International Business Machines Corporation | Systeme et methode de visualisation et/ou d'ecoute simultanees de plusieurs trains de donnees multimedia transmis par l'intermediaire d'un espace de traitement centralise |
| US6603501B1 (en) * | 2000-07-12 | 2003-08-05 | Onscreen24 Corporation | Videoconferencing using distributed processing |
| JP2004507145A (ja) * | 2000-08-15 | 2004-03-04 | シーゲイト テクノロジー エルエルシー | オペレーティング・コードのデュアル・モード圧縮 |
| JP4581210B2 (ja) * | 2000-09-29 | 2010-11-17 | 日本電気株式会社 | テレビ会議システム |
| US6535240B2 (en) * | 2001-07-16 | 2003-03-18 | Chih-Lung Yang | Method and apparatus for continuously receiving frames from a plurality of video channels and for alternately continuously transmitting to each of a plurality of participants in a video conference individual frames containing information concerning each of said video channels |
| US20030235338A1 (en) * | 2002-06-19 | 2003-12-25 | Meetrix Corporation | Transmission of independently compressed video objects over internet protocol |
-
2004
- 2004-01-09 US US10/755,067 patent/US20050151836A1/en not_active Abandoned
-
2005
- 2005-01-06 WO PCT/IB2005/000011 patent/WO2005069619A1/fr not_active Ceased
- 2005-01-06 EP EP05702182A patent/EP1639821A4/fr not_active Withdrawn
-
2006
- 2006-02-01 US US11/346,866 patent/US20060192848A1/en not_active Abandoned
Also Published As
| Publication number | Publication date |
|---|---|
| US20060192848A1 (en) | 2006-08-31 |
| US20050151836A1 (en) | 2005-07-14 |
| WO2005069619A1 (fr) | 2005-07-28 |
| EP1639821A4 (fr) | 2007-05-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20050151836A1 (en) | Video conferencing system | |
| US8471890B1 (en) | Adaptive video communication channel | |
| US8643695B2 (en) | Videoconferencing endpoint extension | |
| US8319814B2 (en) | Video conferencing system which allows endpoints to perform continuous presence layout selection | |
| US6590603B2 (en) | System and method for managing streaming data | |
| US7561179B2 (en) | Distributed real-time media composer | |
| US8139100B2 (en) | Virtual multiway scaler compensation | |
| US8456510B2 (en) | Virtual distributed multipoint control unit | |
| JP4921488B2 (ja) | スケーラブルなビデオ符号化を用いて、またスケーラブルなテレビ会議サーバを複合してテレビ会議を行うためのシステムおよび方法 | |
| US10455196B2 (en) | Method and system for conducting video conferences of diverse participating devices | |
| US8514265B2 (en) | Systems and methods for selecting videoconferencing endpoints for display in a composite video image | |
| US7627629B1 (en) | Method and apparatus for multipoint conferencing | |
| EP1868348B1 (fr) | Contrôle de présentation de conférence et protocole de contrôle | |
| EP1868347A2 (fr) | Sources multimédia indépendantes associées dans un appel en conférence | |
| US20100066804A1 (en) | Real time video communications system | |
| JP2004506347A (ja) | 分散処理構造を有するパーソナル・テレビ会議システム | |
| CN114600468A (zh) | 将复合视频流中的视频流与元数据组合 | |
| JP2012525076A (ja) | デジタルビデオ配信システムにおける即時マルチチャネルビデオコンテンツブラウジングのためのシステム、方法、およびコンピュータ可読媒体 | |
| JP2015192230A (ja) | 会議システム、会議サーバ、会議方法及び会議プログラム | |
| US7884844B2 (en) | System for conducting videoconferencing session over television network | |
| US20050195275A1 (en) | Arrangement and method for generating CP images | |
| JP2013042492A (ja) | 常駐表示式ビデオ会議においてビデオストリームを切替える方法およびシステム | |
| CN102957893B (zh) | 用于在连续呈现会议中的视频流之间切换的方法和系统 | |
| EP2227013A2 (fr) | Unité de commande multipoint avec distribution virtuelle | |
| Johanson | Designing an environment for distributed real-time collaboration |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
| 17P | Request for examination filed |
Effective date: 20060130 |
|
| AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR |
|
| DAX | Request for extension of the european patent (deleted) | ||
| A4 | Supplementary search report drawn up and despatched |
Effective date: 20070425 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
|
| 18W | Application withdrawn |
Effective date: 20070727 |