WO2001067264A1 - Device and method for providing multimedia over the Internet - Google Patents
Device and method for providing multimedia over the Internet
- Publication number
- WO2001067264A1 (PCT/US2001/040264)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- packets
- congestion
- streaming
- computing device
- network
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/12—Avoiding congestion; Recovering from congestion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/26—Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
- H04L47/263—Rate modification at the source after receiving feedback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/29—Flow control; Congestion control using a combination of thresholds
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/32—Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
- H04L65/612—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/80—Responding to QoS
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/50—Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate
Definitions
- The present invention relates to multimedia streaming on the Internet and, more particularly, to an apparatus and method for predictable and differentiated delivery of multimedia streaming on the Internet.
- Multimedia streaming involves broadcast of audio and video information from content providers called streaming servers or senders via the Internet to clients or receivers on user computing devices called streaming players.
- Real-time delivery of the multimedia streaming on the Internet is inherently uncontrollable and consequently unpredictable.
- During congestion, packets of multimedia streaming are typically dropped, causing degraded audio and video playback at the streaming players.
- The manifested forms of congestion can be classified into two types: instantaneous and sustained (longer-term). Instantaneous congestion is caused by random variations in the arrival patterns of packets of multimedia streaming at different Internet routers and switches, and by the varying sizes of these packets.
- sustained congestion can last from tens of seconds up to a few minutes.
- the client or the streaming player can buffer incoming packets of multimedia streaming for a short duration before playback. This reduces the effect of short-term delay between packets of multimedia streaming. Additionally, by requesting that a lost packet be immediately retransmitted, the streaming player can counter the loss of packets of multimedia streaming due to instantaneous congestion.
- Several existing streaming systems use these techniques, i.e., short-term buffering along with retransmits of lost packets, to isolate the streaming player from the effects of instantaneous congestion.
- RSVP Resource Reservation Protocol
- IETF Internet Engineering Task Force
- The object of the present invention is to achieve a particular level of delivery isolation on particular streaming players. This level of delivery isolation may be based on factors like an importance "value" associated with particular streaming players, the price the users of the streaming players are charged, etc.
- the streaming server may place specific streaming players in the high "value” category based on the demographic information on their owners, while other streaming players may be placed in the low “value” category.
- Another object of the present invention is to ensure that the high value users of streaming players always receive consistent, predictable delivery of streaming content, even when there is excessive user demand. In a situation of sustained congestion, wherein not all users can be satisfied, the content provider might want to ensure that the high value users are promised predictable delivery.
- the present invention provides a method and an apparatus for predictable and differentiated streaming on the Internet.
- the system consists of software and random-access memory-based buffers installed at the streaming server and at the streaming players along with a service manager at a separate location.
- the method prescribes the manner in which the system is used to achieve predictable and differentiated delivery of streaming multimedia packets.
- the invention is based on streaming multimedia packets between the software on the streaming server and the software on the streaming player.
- the buffer sizes in the computing devices of the streaming player and of the streaming server are initialized based on the configuration information provided by the service manager software at the start of the execution of the inventive method.
- This configuration information includes: a) the expected duration of sustained congestion periods, b) the worst-case packet loss rate during the sustained congestion period, c) the encoded bit-rate of streaming source information, etc.
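As an illustration of how such configuration information might be represented, here is a minimal sketch; the class and field names (`SessionConfig`, `worst_loss_rate`, and so on) are hypothetical, not from the patent.

```python
# Hypothetical container for the configuration the service manager pushes
# to the server and player at session start. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class SessionConfig:
    congestion_duration_s: float   # a) expected worst-case sustained congestion
    worst_loss_rate: float         # b) worst-case packet loss during congestion
    encoded_bitrate_bps: int       # c) encoded bit-rate of the stream

cfg = SessionConfig(congestion_duration_s=60.0,
                    worst_loss_rate=0.2,
                    encoded_bitrate_bps=300_000)

# A buffer sized for the worst case must cover the bits drained during
# congestion: rate * loss rate * duration (see the buffer-sizing discussion).
drained_bits = (cfg.encoded_bitrate_bps * cfg.worst_loss_rate
                * cfg.congestion_duration_s)
assert drained_bits == 3_600_000
```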
- the software on the streaming server relays the streaming packets to the software on the streaming player, which stores the packets in a playout buffer before playing them out.
- the playout buffer takes care of delay variation that may arise due to instantaneous congestion on the network path.
- the software on streaming player immediately requests the server-end software to retransmit the lost information.
- the software on the streaming player uses estimation techniques based on probes to detect and confirm the onset of sustained congestion. Once the congestion status is verified, the streaming player software informs the streaming server software about this event. The streaming server software then modifies the manner in which it streams packets to the streaming player software. In particular, the streaming server software ensures that the transmission rate of packets never exceeds the encoded bit-rate of the streaming source. Furthermore, it gives preemptive priority to the retransmitted packets over new packets. This creates a backlog of new packets at one of the streaming server buffers.
- the streaming player software continuously monitors the status of received packets and detects the end of congestion through estimation techniques based on probes. Once the end of congestion is confirmed, the streaming player software informs the streaming server software about this event. The streaming server software then begins to stream its backlog of new packets to the streaming player software. The transmission rate of this stream is increased based on the estimation information from the streaming player software along with feedback obtained from other streaming players in the network. This rate is carefully selected to ensure that the network is not flooded back into congestion.
- the buffer sizing at the streaming player and at the streaming server is based on the configuration information and ensures that the user experiences an uninterrupted playback during sustained congestion. This assumes that the configuration information reflects the "worst-case" scenario.
- the streaming player software contacts the service manager after the streaming session.
- the configuration information at the service manager is now updated in a persistent manner with the actual worst-case parameters.
- this update happens as follows.
- the buffers are sized based on the updated configuration information. The method thus automatically learns the worst-case behavior from the actual behavior of the network each time. Eventually, the method can guarantee a predictable delivery based on a non-interrupted playout in most cases. Various other embodiments are also possible, wherein the buffer size is fixed, but the encoding bit-rate of the stream is modified to choose the largest encoding bit-rate that leads to an uninterrupted playback.
- the method for differentiated streaming involves identical operations as the predictable streaming method, except for the mechanism for streaming the backlog of packets from the streaming server software.
- the streaming server software transmits the stream based on the "value" associated with the particular user, in addition to the information used for predictable streaming.
- the methodology of the present invention is utilized to offer the inventive method as an "accountable delivery service" to a large number of content providers, i.e., streaming servers, and users, i.e., streaming players.
- Such a service may deliver performance contract-based services to users of the streaming players on the Internet.
- the service manager plays a central role in this service, by keeping and exchanging the state of several streaming players, delivery paths, and content providers/streaming servers in its database.
- the method of the present invention tracks the "actual" delivered viewing/listening experience of users of streaming players by computing the delivery statistics including but not restricted to metrics that quantify "pauses or interruptions", "content throughput", and "content loss".
- the delivery service can also answer the question of "service feasibility”.
- the service manager can estimate, based on the historical and instantaneous information, whether a particular user can receive an "expected quality experience". While the present invention does not describe the details of this service, it should be obvious to those skilled in the art to build such a service around the framework of the present invention.
- Figure 1 is a schematic diagram of the devices and programs comprising the invention.
- FIG. 2 is a schematic diagram of the components of the devices utilized by the invention.
- Figure 3 is a schematic diagram of the operation of the invention during normal streaming.
- Figure 4 is a schematic diagram of the operation of the invention during congestion.
- Figure 5 is a schematic diagram of the recovery scheduling operation of the invention after congestion.
- Figure 6 is a diagram of the modeling of the backlog operation of the invention during congestion.
- Figure 7 is a state transition diagram for the streaming player module (normal streaming).
- Figure 8 is a state transition diagram for the streaming server module (normal streaming).
- Figure 9 is a diagram of a congestion processing protocol at the streaming player module (during and after congestion).
- FIG. 10 is a diagram of a congestion processing protocol at the streaming server module (during and after congestion).
DETAILED DESCRIPTION OF THE INVENTION
Architecture
- the invention, shown in Figure 1, comprises at least one computing device 10 for executing a service manager process 12, at least one computing device 14 for executing streaming server process 16, and at least one computing device 18 for executing streaming player process 20.
- the service manager process 12 is shown as residing on the computing device 10 only for the clarity of presentation. It will be obvious to those skilled in the art that the service manager process 12 may reside on any of the computing devices 14 and 18.
- server processes 16 may be present on the computing devices 18 alongside player processes 20, and vice versa the player processes 20 may be present on the computing devices 14 alongside server processes 16.
- Each of the computing devices 10, 14 and 18 is connected to a network 22 and establishes data paths with the others. In the preferred embodiment, the network is the Internet.
- the streaming server process 16 transmits a single multimedia stream to each streaming player process 20 and the streaming player process 20 receives that single multimedia stream.
- the streaming server process consists of three buffers: a) Output buffer 24, used to transmit new packets from the server process 16 to the player process 20; b) Backup buffer 26 , for maintaining temporary copies of the packets transmitted to the player process 20; and c) Retransmit buffer 28, for maintaining retransmit requests received from the player process 20.
- the streaming player process 20 may comprise a player buffer 29. Before a streaming session begins, the Service Manager process 12 downloads configuration information via the network 22 to both the computing device 14 for the use of the streaming server process 16 and to the computing device 18 for the use of the streaming player process 20.
- the configuration information on the computing device 10 is updated for the future use of the Service Manager process 12 via the network 22 by both the streaming server process 16 on the computing device 14 and the streaming player process 20 on the computing device 18.
- the computing devices 10, 14, and 18 may take the configuration of any computer ranging from mainframes and personal computers (PCs) to digital telephones and hand-held devices, e.g., Palm Pilots™.
- such computing devices may comprise a bus 30, which is connected directly to each of the following:
- a central processing unit (CPU);
- the common bus 30 is further connected by the video interface 40 to a display 50;
- a storage device 52 which may illustratively take the form of memory gates, disks, diskettes, compact disks (CD), digital video disks (DVD), etc.;
- peripheral interface 38 to the peripherals 58, such as the keyboard, the mouse, navigational buttons, e.g., on a digital phone, a touch screen, and/or writing screen on full size and hand held devices, e.g., a palm pilot TM;
- by the communications interface 44, e.g., a plurality of modems, to a network connection 60, e.g., an Internet Service Provider (ISP) and to other services, which is in turn connected to the network 22, whereby a data path is provided between the network 22 and the computing devices 10, 14, and 18 (Figure 1) and, in particular, the common bus 30 of these computing devices; and
- In FIG. 3, the normal streaming operation is shown when there are either zero losses, i.e., no losses, or sporadic streaming losses, i.e., loss occurring due to an instantaneous burst of streaming packets.
- the overall streaming sequence works as follows. Packets streamed from the streaming server process 16 are queued up in the streaming server output buffer 24 for transmission.
- the server process 16 is responsible for transmitting streaming packets 70 to the computing device 18 executing the player process 20. Periodically, this process 16 checks the output buffer 24 and the retransmit buffer 28 to determine if there are streaming packets 70 awaiting transmission or any requests from the player process. Any remaining streaming packets in the output buffer 24 are then transmitted.
- any streaming packets 70 requested for retransmission and found in the backup buffer 26 are transmitted.
- streaming packets 70 are transmitted, a temporary backup copy of the transmitted streaming packets 70 is kept in the backup buffer.
- the streaming player process 20 receives the streaming packets 70 and stores them at the tail-end of the player buffer 29. Periodically after a user-defined interval, the player process 20 examines the received streaming packets 70 to see if any streaming packets 70 are missing. If packets are missing, the player process 20 sends a request 72 to the server process 16 to retransmit the missing packets. After assuring that all the streaming packets 70 are received, the received streaming packets 70 are forwarded from the head-end of the player buffer to the streaming player process 20.
- the invention detects congestion early on, and then may take steps so that the impact of congestion on the stream is minimized.
- a correctly configured framework can keep a stream completely insulated from sustained congestion, thus providing an effect equivalent to a "reservation." Two principles are at the basis of this framework: detection of and reaction to congestion, and congestion recovery.
- the invention detects congestion through probes launched by the player process 20. The probes are discussed in detail later.
- the server process 16 reacts by using a technique called "constant bit-rate emulation." This name derives from the fact that the server process 16 tries to maintain a constant upper bound on the rate at which the streaming packets 70 are streamed to the player process 20. Typically, the upper bound is set equal to the encoding bit-rate of the stream.
- the server process 16 emulates a constant bit-rate by substituting new server packets with packets that have been requested 72 for retransmission. Recall that during congestion, the server has a large number of retransmit requests 72 issued from the player process 20.
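Constant bit-rate emulation as described above can be sketched roughly as follows: within a fixed per-interval packet budget, retransmit requests preempt new packets, so new packets accumulate as backlog. The function and queue names below are illustrative, not from the patent.

```python
# Illustrative sketch of constant bit-rate emulation: retransmits get
# preemptive priority over new packets, under a fixed per-interval budget.
from collections import deque

def next_packets(output_buf, retransmit_buf, budget):
    """Pick up to `budget` packets for this interval, retransmits first."""
    sent = []
    while budget > 0 and retransmit_buf:
        sent.append(("retx", retransmit_buf.popleft()))
        budget -= 1
    while budget > 0 and output_buf:
        sent.append(("new", output_buf.popleft()))
        budget -= 1
    return sent  # anything left in output_buf becomes backlog

out = deque([10, 11, 12, 13])   # new packets awaiting transmission
retx = deque([5, 7])            # retransmit requests from the player
picked = next_packets(out, retx, budget=3)
assert picked == [("retx", 5), ("retx", 7), ("new", 10)]
assert list(out) == [11, 12, 13]   # backlog builds while congestion lasts
```

Because the budget never grows, the transmission rate never exceeds the encoded bit-rate, matching the upper bound described above.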
- Figure 4 shows the flow of streaming packets 70 during this congestion situation.
- the method demonstrates a "network-friendly" yet efficient behavior.
- the method is network-friendly, since it conserves bandwidth during congestion by lowering the upper bound of its streaming transmission rate. Additionally, the method is efficient for the following reasons.
- the upper bound on the transmission rate is set equal to the maximum encoding bit-rate of the source stream; this is not a restrictive setting.
- the present invention only reduces the transmission rate by the amount of incurred loss. This decision has an important implication in practice. Most practical congestion- processing algorithms will incur errors and delays in estimating and reacting to the start of congestion or end of congestion.
- the invention delivers a throughput better or equal to the throughput of any other scheme, if there is no loss of streaming packets 70.
- TCP Transmission Control Protocol
- the constant bit-rate of the invention in congestion allows the TCP traffic on the same path to accurately sense the available bandwidth (most Internet or Web traffic uses TCP).
- An alternative design with an adapting bit-rate would confuse most other TCP connections and cause each to keep oscillating their TCP windows. If all TCP connections maintain a steady window-size, the performance of the network is much smoother in congestion. This behavior can drain out the congestion faster and reduce the estimation errors.
- the invention detects the end of congestion through probes launched by the player process 20. Once the player process 20 confirms that congestion is over, the server process 16 enters a phase called "recovery scheduling".
- Figure 5 shows recovery scheduling after congestion. To understand this phase, recall that in the previous state of congestion, the server transmits packets at a constant rate. By substituting new packets with older lost packets, the server is in effect creating a backlog of new packets in its output buffer. The idea in recovery scheduling is to exhaust the substantial backlog that may build up at the server output queue. Clearly, this backlog must not be cleared by flooding the network. Such action may cause significant losses in the intermediate switch buffers.
- Recovery scheduling is a procedure that configures the rate of streaming the backlog based on the network parameters. Additionally, recovery scheduling can also take into account the "premium" placed on the particular player process 20. The latter consideration allows the server process 16 to stream the backlog of streaming packets at different priorities. "Higher value" users can receive backlog sooner than "lower value" users. This is the basic principle behind differentiated streaming.
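Differentiated draining of the backlog can be sketched in miniature: players with a higher "value" receive their backlog first. The weighting scheme below is purely illustrative.

```python
# Illustrative sketch of differentiated backlog draining: higher-"value"
# players get their backlog streamed first. Values are hypothetical weights.
def drain_order(backlogs):
    """backlogs: list of (player_id, value) -> player ids in drain order."""
    return [pid for pid, value in sorted(backlogs, key=lambda b: -b[1])]

# Three players waiting for backlog, with "value" weights 1, 3, and 2.
assert drain_order([("a", 1), ("b", 3), ("c", 2)]) == ["b", "c", "a"]
```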
- This configuration information is downloaded to the computing devices 14 and 18 at the beginning of each streaming session.
- This configuration information includes the following parameters:
- Averaged and minimum service rate, SR_avg and SR_min. This is the rate at which streaming packets 70 are actually transferred from the server process 16 to the player process 20.
- SR_min is the minimum rate across the path via the Internet 22 from the server process 16 to the player process 20.
- the parameters indicated in the list above may be obtained through historical measurements of the relevant values and through measurements made on streaming packets 70 at the computing device 14 of the server process 16. The parameters are in turn used to design the sizes of the buffers 24, 26, 28 and 29 of the server process 16 and the player process 20. One preferred embodiment of achieving this is described immediately below. That embodiment ignores the upper-bound limits on the buffer sizes related to the available random-access memory 34 (Figure 2) in the computing devices 14 and 18.
- Assume that the player buffer 29 on the computing device 18 used by the streaming player process 20 plays out the streaming packets 70 at the maximum rate of e_r bits per second.
- During sustained congestion, the packet loss is approximately P_loss and the maximum transmission rate from the server is e_r.
- Therefore, the player buffer 29 fill rate equals e_r * (1 - P_loss) bits per second.
- Since playout drains the buffer at e_r bits per second, the net drain on the player buffer 29 proceeds at the rate of e_r * P_loss bits per second.
- Over a sustained congestion period of x seconds, the buffer-size b of the player buffer 29 should satisfy the constraint defined as: b >= e_r * P_loss * x (Equation 1)
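A worked example of the player-buffer constraint of Equation 1 with illustrative numbers (a 300 kbit/s stream, 20% worst-case loss, 60 s of sustained congestion; the numbers are assumptions, not from the patent):

```python
# Worked example of Equation 1: b >= e_r * P_loss * x.
e_r = 300_000      # encoded/playout bit-rate, bits per second (illustrative)
p_loss = 0.2       # worst-case packet loss rate during congestion
x = 60             # expected duration of sustained congestion, seconds

b_min_bits = e_r * p_loss * x
assert b_min_bits == 3_600_000      # 3.6 Mbit of buffered media
assert b_min_bits / 8 == 450_000    # i.e., at least a 450 kB player buffer
```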
- Figure 6 shows the approximate amount of backlog as a function of time 82 during sustained congestion. As indicated, the backlog rate 80 after two retransmission intervals 84 is
- the backup buffer 26 (Figure 1) holds temporary copies of the server packets that have been transmitted to the player process 20 (Figure 1). The copies must be held until the server is sure that the packets have been successfully received in the player buffer 29 (Figure 1). To estimate the size of this backup buffer 26 (Figure 1), again an operational argument is used.
- the transmission rate in packets per second is defined as R_p.
- the approximate rate of loss of new streaming packets 70 (Figure 3) is P_loss * R_p. Assuming the loss affects new as well as retransmitted streaming packets 70 (Figure 3) equally, the approximate rate of loss for older (retransmitted) packets is also P_loss * R_p.
- over a sustained congestion period of x seconds, the minimum required buffer size may be defined as equal to R_p * x packets. Consequently, the size constraint of the backup buffer 26 (Figure 1) is b_backup >= R_p * x.
- Figures 7 and 8 show the detailed operational sequence at the streaming server process 16 (Figure 3) and the streaming player process 20 (Figure 3) during normal streaming of the streaming packets 70 (Figure 3).
- the operational sequences are described with the help of state diagrams.
- a state diagram shows various states and transitions between states. Each state captures a specific operational state, and the transitions between states capture the events that cause the operational states to change.
- Figure 7 shows the state diagram of events at the streaming player process 20 (Figure 1).
- Figure 8 shows the corresponding state diagram at the streaming server process 16.
- the streaming player process 20 is in a normal state 100.
- the player process 20 may keep track of the packet loss of a received stream at a periodic interval that may be user determined. If the amount of loss exceeds a threshold, e.g., Threshold1, across a time-window of some number of seconds which may also be user determined, e.g., TW1, the player process 20 moves into the Likely Congestion state 102.
- the streaming player process 20 (Figure 1) launches congestion probes to determine whether the congestion symptoms are verified. These probes form a part of a test called the confirm congestion test. If the test indicates that congestion is confirmed, the process moves to a Congestion Confirmed state 104. Immediately, the streaming player process 20 contacts its server counterpart and informs it about the new state change along with some other relevant information.
- the player process 20 may actually receive a Remote Congestion Confirmed report in state 106 from its server process 16 (Figure 1) indicating that some other player process 20 on a related network path is in the Congestion Confirmed state. If this happens, the player immediately moves to the Congestion Confirmed state 104 itself. If the player process 20 has already started conducting congestion probes before this notification, the player process 20 may halt them at the instant it receives the notification. Alternatively, the player process 20 may receive a Local Congestion Confirmed report in state 102 reporting that some other local player process 20 on the same streaming player module has verified congestion. If the Remote or Local Congestion Confirmed reports have verified congestion within a past time-window not exceeding the time defined as Tcc, the player process 20 moves to the confirmed congestion state 104.
- once the streaming player process 20 (Figure 1) enters the congestion confirmed state 104, it periodically monitors the loss of the received stream of streaming packets 70 (Figure 3). If the packet loss is less than a second user-defined threshold, e.g., Threshold2, across a second user-defined time-window, e.g., TW2 seconds, the player process 20 enters the likely end-of-congestion (EOC) state 106. At this point, the player process 20 launches an EOC probe test that either confirms or invalidates the EOC decision. If the EOC probe test confirms the end of congestion, the streaming player process 20 enters the EOC state 108 and sends an EOC report 110 to its server process 16 (Figure 1) counterpart. With this transition, the process re-enters the normal state 100. Note that during transitions from the EOC state 108 back to the Normal state 100, the streaming player process 20 also needs to continually process the arriving packets in step 120 (Figure 7) and check and request for retransmission of missing packets in step 122 (Figure 7).
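The player-side transitions described above can be sketched as a small state machine. The state names, numeric thresholds, and the boolean probe result below are simplified stand-ins for the states 100-108 and the Threshold1/TW1 and Threshold2/TW2 parameters; this is an illustration, not the patented protocol.

```python
# Minimal sketch of the player-side congestion state machine.
NORMAL, LIKELY_CC, CC, LIKELY_EOC = "normal", "likely_cc", "cc", "likely_eoc"

def step(state, loss_rate, probe_confirms,
         threshold1=0.1, threshold2=0.02):
    if state == NORMAL and loss_rate > threshold1:
        return LIKELY_CC                       # losses exceed Threshold1
    if state == LIKELY_CC:
        return CC if probe_confirms else NORMAL  # confirm-congestion probe
    if state == CC and loss_rate < threshold2:
        return LIKELY_EOC                      # losses fall below Threshold2
    if state == LIKELY_EOC:
        # EOC confirmed: report to the server and resume normal streaming
        return NORMAL if probe_confirms else CC
    return state

s = NORMAL
s = step(s, loss_rate=0.15, probe_confirms=False)
assert s == LIKELY_CC
s = step(s, loss_rate=0.15, probe_confirms=True)   # probe verifies congestion
assert s == CC
s = step(s, loss_rate=0.01, probe_confirms=False)
assert s == LIKELY_EOC
s = step(s, loss_rate=0.01, probe_confirms=True)   # EOC probe confirms
assert s == NORMAL
```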
- FIG. 10 shows the SCP state diagram of the server process 16.
- the server process 16 enters the normal state 130. If the server process 16 receives a report from its streaming player process 20 (Figure 1) indicating congestion, the server moves to the Confirmed Congestion state 132. At this point, the server process 16 shares the Confirmed Congestion status information with other server processes 16 running on the same computing device 14 (Figure 1) in the Propagate Confirmed Congestion state 134, and potentially also with other computing devices 14 (Figure 1) through interaction with the service manager 12 (Figure 1) via the network 22 (Figure 1). The idea behind this is to have selected server processes report the congestion to their player processes so that these player processes can avoid doing the Confirmed Congestion probing test.
- each server process that receives the Confirmed Congestion status in the Receive the Confirmed Congestion status state 140 from other server processes 16 first determines whether the Confirmed Congestion status is relevant to its target player process 20 (Figure 1). If this is the case, the server processes notify their player process 20 (Figure 1). Notification is done through a special signaling scheme involving Real-time Transport Protocol (RTP) messages, as discussed in Schulzrinne, Casner, Frederick, Jacobson, "RTP: A Transport Protocol for Real-Time Applications", RFC 1889, Internet Engineering Task Force.
- the streaming server process 16 will then make transitions from the Confirmed Congestion State 132.
- the server process 16 transmits streaming packets 70 ( Figure 3) at a constant bit-rate.
- the server process 16 moves to the Recovery Scheduling State 136 as soon as it receives an End-of-Congestion or EOC report from the player process 20 (Figure 1) targeted for the reception of its content.
- the server process 16 changes its mode of transmission, using recovery scheduling to drain its backlogged output buffer 24 ( Figure 1 ) in an intelligent way.
- the server process 16 moves back to the normal state 130.
- the server process 16 may receive and filter information from other server processes 16 by entering the Receive Confirmed Congestion State 140. It may also notify the player processes 20 ( Figure 1) about the confirmed congestion status through the in-band RTP signaling, by entering the Confirmed Congestion Notification state 138 from where the process 16 will return to the Normal state 130.
- the server process 16 also needs to continually process the departing packets in step 150 (Figure 8), check and process the requests for retransmission of missing packets in step 152 (Figure 8), and backup packets to the backup buffer 26 (Figure 1) for possible retransmission. These activities are indicated in Figure 8 and are not shown in Figure 10 for purposes of clarity.
- In Equation 4, all the terms are measured at the service bottleneck hop between the computing device 14 executing the server process 16 and the computing device 18 executing the player process 20.
- B is the bottleneck link capacity at the bottleneck hop.
- R_other is the aggregate packet rate carried at the hop from traffic not utilizing the inventive method. This rate can be estimated by obtaining the aggregate packet rate of the streams that do use the inventive method, Σ SR_i, and using the fact that R_other ≈ ρ * B − Σ SR_i, where ρ is the utilization of the bottleneck hop.
- the parameter n_i is defined as the number of priority "i" streams carried at the bottleneck hop.
- A hop is a section of a network between two network-computing devices such as routers. This parameter may be obtained from the service manager 12 (Figure 1).
- the logic behind this equation is as follows. The first term indicates that the new transmission rate should be set equal to the minimum service rate along the path, in order to recover from the backlog. The second term takes into account the result of increasing the transmission rate on the utilization at the bottleneck hop.
- each computing device 14 executing server process 16 must carefully weigh the effects of increasing the transmission rate.
- the best rate is the maximum rate that does not congest the bottleneck bandwidth during recovery; hence the min operator.
- the priorities within Equation 4 provide for differentiated streaming.
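The reasoning in the preceding bullets can be sketched in code. This is a minimal illustration, not the patent's Equation 4 itself (whose exact form is not reproduced in this excerpt): the function name `recovery_rate`, the priority weights `w`, and the proportional-share rule are assumptions; only the min-of-two-terms structure and the estimate λ_other ≈ ρ·B − λ_SME follow the text.

```python
def recovery_rate(sr_min, B, rho, lam_sme, n, w, i, rho_target=1.0):
    """Sketch of an Equation-4 style recovery rate choice (names illustrative).

    sr_min     : minimum service rate along the path (first term)
    B          : bottleneck link capacity
    rho        : measured utilization of the bottleneck hop
    lam_sme    : aggregate packet rate of the streams using the method
    n, w       : n[j] priority-j streams at the hop, w[j] their weights
    i          : priority class of this stream
    rho_target : utilization that recovery traffic may not exceed
    """
    # Traffic at the hop not using the method: lambda_other = rho*B - lambda_SME.
    lam_other = max(0.0, rho * B - lam_sme)
    # Capacity left at the bottleneck hop for streams using the method.
    headroom = max(0.0, rho_target * B - lam_other)
    # Share the headroom in proportion to priority weights
    # (an assumed differentiated-streaming rule, not the patent's exact formula).
    total_weight = sum(n[j] * w[j] for j in n)
    share = headroom * w[i] / total_weight if total_weight else 0.0
    # Best rate: fastest backlog drain that does not congest the bottleneck.
    return min(sr_min, share)

# Example: one priority-1 stream and three priority-2 streams at the hop.
r = recovery_rate(sr_min=120.0, B=1000.0, rho=0.6, lam_sme=200.0,
                  n={1: 1, 2: 3}, w={1: 2.0, 2: 1.0}, i=1)
```

Note how the min operator caps the rate at the path's minimum service rate even when the bottleneck has spare capacity, matching the stated logic.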
- congestion estimation tests as part of the SCP algorithm.
- congestion can be estimated in a variety of ways, and the invention is not restricted to any one particular method of congestion estimation.
- a particular embodiment of the invention for estimating congestion based on a mix of passive and active probes will now be described.
- This method comprises a set of rules configured around the results of the probes.
- a specific congestion test, e.g., the end-of-congestion test or the confirmed-congestion test mentioned in earlier sections, combines these rules in a particular way.
- the "Service Rate probe” computes the approximate service rate of transferring packets between the server and the player, as discussed in Van Jacobson, "pathchar - A tool to infer characteristics of Internet paths", MSRI, April 21, 1997; and R. L. Carter and M.E. Crovella, “Measuring Bottleneck Link Speed in Packet- switched Networks", TR- 96-006, Boston University Computer Science Department, March 15, 1996.
- the minimum service rate is the least service rate on the path, while the average service rate is the "expected" service rate on the path.
- the steps of this exemplary method are as follows:
- Periodically, the player initiates a cycle. In each cycle, the player sends n-1 queries, each spaced a fixed interval apart. The player forms timestamp differences between query responses from consecutive hops. The results are sent back to the server.
- the server uses results from several cycles along with the link capacities to calculate the service rate.
- a) Denote the timestamp difference between node m and node m-1 measured in cycle i by T_i(m,m-1), and its minimum over the n_max cycles by T_min(m,m-1) = min over 1 ≤ i ≤ n_max of T_i(m,m-1).
- b) Denote the maximum queuing delay between node m and node m-1 by τ_max(m,m-1), where a node is a computing device, e.g., a computer or a router connected to the network.
- c) Then the queuing delay observed in cycle i is τ_i(m,m-1) = T_i(m,m-1) - T_min(m,m-1).
- d) Denote the average queuing delay between node m and node m-1 by τ_avg(m,m-1).
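The probing steps above can be sketched as follows. The list-of-lists layout for the cycle measurements and the function name are assumptions; the arithmetic (per-hop minimum, per-cycle excess delay, then the average) follows steps a) through d).

```python
def queuing_delays(T):
    """Estimate per-hop queuing delays from probe cycles.

    T[i][m] is the timestamp difference between node m and node m-1
    measured in cycle i (one entry per hop, n-1 hops per cycle).
    Returns (tau, tau_avg): per-cycle queuing delays and their averages.
    """
    n_cycles = len(T)
    n_hops = len(T[0])
    # a) T_min[m]: minimum observed difference for hop m over all cycles.
    T_min = [min(T[i][m] for i in range(n_cycles)) for m in range(n_hops)]
    # c) tau[i][m] = T[i][m] - T_min[m]: queuing delay seen in cycle i.
    tau = [[T[i][m] - T_min[m] for m in range(n_hops)]
           for i in range(n_cycles)]
    # d) tau_avg[m]: average queuing delay for hop m.
    tau_avg = [sum(tau[i][m] for i in range(n_cycles)) / n_cycles
               for m in range(n_hops)]
    return tau, tau_avg

# Three cycles over two hops (timestamp differences in milliseconds).
tau, tau_avg = queuing_delays([[5.0, 8.0], [6.0, 8.5], [5.5, 9.5]])
```

The per-hop minimum acts as the zero-queuing baseline, so each excess over it is attributed to queuing, as in pathchar-style tools.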
- a robust but responsive congestion detection mechanism is an essential part of the inventive method. It is important to respond quickly to congestion; this can reduce the congestion period and hence make it easier for the recovery scheduling to do its job in a shorter period of time. Ultimately this can allow the method to work with a reduced buffer size and hence a reduced connect lag. It is equally important to robustly distinguish a sustained congestion condition. False detection can lead to reduced throughput and performance. Recall that the inventive method treats sustained congestion by reducing the throughput of the newer packets, so that the throughput of older retransmitted packets is maximized.
- the present invention uses the following congestion detection rule:
- 1. Packet losses exceeding a threshold for a consecutive time-window of duration T_s. T_s is set to 10 seconds by default.
- 2. If condition 1 is triggered, three short service rate probes are sent, spaced 2 seconds apart, from the player to the m-th hop, and the player process 20 ( Figure 1) computes the expected utilization of the bottleneck hop from the averaged responses.
- 3. If condition 1 is satisfied and the utilization from condition 2 is more than a specified utilization threshold, e.g., 95%, the player process 20 declares sustained congestion (Confirmed Congestion), and asks the server to enter the constant bit-rate emulation mode.
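A minimal sketch of this detection rule, under the assumption that loss sampling and the three service-rate probes are performed elsewhere and their results passed in; the function name and parameter layout are illustrative, not the patent's interface.

```python
def detect_confirmed_congestion(loss_windows, probe_utilizations,
                                T_s=10, util_threshold=0.95):
    """Congestion-detection rule sketch.

    loss_windows       : per-second booleans, True when losses exceeded
                         the threshold in that sampling window.
    probe_utilizations : bottleneck utilizations derived from the three
                         short service-rate probes (sent 2 seconds apart).
    Returns True when Confirmed Congestion should be declared.
    """
    # Condition 1: losses above threshold for T_s consecutive seconds.
    condition1 = len(loss_windows) >= T_s and all(loss_windows[-T_s:])
    if not condition1:
        return False
    # Condition 2: expected utilization from the averaged probe results.
    utilization = sum(probe_utilizations) / len(probe_utilizations)
    return utilization > util_threshold
```

For example, ten consecutive lossy windows plus probe utilizations averaging 97% would trigger Confirmed Congestion, while the same losses with 90% utilization would not.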
- An alternative method may consist of a simple rule: if packet losses exceed a threshold for N_1 consecutive sampling time-windows, the player process 20 ( Figure 1) declares sustained congestion and asks the server to enter the constant bit-rate emulation mode. By default, N_1 is set to 1, and the time-window is set equal to 1 second.
- the end of congestion rule is analogous to the congestion detection rule: 1. No packet losses for a consecutive time-window of duration T_e. T_e is set equal to 10 seconds by default. 2. If condition 1 is triggered, three short service rate probes are sent, spaced 2 seconds apart, from the player to the m-th hop. The responses are averaged to form SR_avg.
- the player computes the expected utilization of the bottleneck hop from SR_avg.
- 3. If condition 1 is satisfied and the utilization from condition 2 is less than a specified threshold, e.g., 90%, the player declares sustained congestion to be over, and asks the server to transition to the state of recovery scheduling.
- An alternative method may consist of a simple rule: if, in a state of sustained congestion, no packet losses are manifest for N_2 consecutive sampling time-windows, the player declares sustained congestion to be over and asks the server to transition to the state of recovery scheduling.
- N_2 may be set equal to five, and the sampling time-window is set equal to 1 second.
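Taken together, the two alternative rules amount to a small loss-window state machine. The sketch below assumes the one-second sampling window and the defaults N_1 = 1 and N_2 = 5 given above; the class and method names are illustrative.

```python
class CongestionTracker:
    """Alternative-rule sketch: declare sustained congestion after N1
    consecutive lossy one-second windows, and declare it over after
    N2 consecutive loss-free windows."""

    def __init__(self, n1=1, n2=5):
        self.n1, self.n2 = n1, n2
        self.lossy_run = 0    # consecutive windows with losses over threshold
        self.clean_run = 0    # consecutive windows with no losses
        self.congested = False

    def on_window(self, losses_exceeded_threshold):
        """Feed one sampling-window result; return the current state."""
        if losses_exceeded_threshold:
            self.lossy_run += 1
            self.clean_run = 0
            if not self.congested and self.lossy_run >= self.n1:
                self.congested = True   # ask server: constant bit-rate mode
        else:
            self.clean_run += 1
            self.lossy_run = 0
            if self.congested and self.clean_run >= self.n2:
                self.congested = False  # ask server: recovery scheduling
        return self.congested
```

With the defaults, one lossy window triggers sustained congestion and five clean windows end it, illustrating the asymmetry that makes detection responsive but recovery declaration robust.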
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2001251715A AU2001251715A1 (en) | 2000-03-08 | 2001-03-08 | Apparatus and method for predictable and differentiated delivery of multimedia streaming on the internet |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US52043400A | 2000-03-08 | 2000-03-08 | |
| US09/520,434 | 2000-03-08 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2001067264A1 true WO2001067264A1 (fr) | 2001-09-13 |
Family
ID=24072583
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2001/040264 WO2001067264A1 (fr) | 2000-03-08 | 2001-03-08 | Dispositif et procede de fourniture multimedia sur internet |
Country Status (2)
| Country | Link |
|---|---|
| AU (1) | AU2001251715A1 (fr) |
| WO (1) | WO2001067264A1 (fr) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5339392A (en) * | 1989-07-27 | 1994-08-16 | Risberg Jeffrey S | Apparatus and method for creation of a user definable video displayed document showing changes in real time data |
| US6014706A (en) * | 1997-01-30 | 2000-01-11 | Microsoft Corporation | Methods and apparatus for implementing control functions in a streamed video display system |
| US6018515A (en) * | 1997-08-19 | 2000-01-25 | Ericsson Messaging Systems Inc. | Message buffering for prioritized message transmission and congestion management |
| US6031818A (en) * | 1997-03-19 | 2000-02-29 | Lucent Technologies Inc. | Error correction system for packet switching networks |
2001
- 2001-03-08 WO PCT/US2001/040264 patent/WO2001067264A1/fr active Application Filing
- 2001-03-08 AU AU2001251715A patent/AU2001251715A1/en not_active Abandoned
Cited By (23)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2004002107A1 (fr) * | 2002-06-20 | 2003-12-31 | Essential Viewing Limited | Method, network, server and client for distributing data via a data communications network |
| US10225304B2 (en) | 2004-04-30 | 2019-03-05 | Dish Technologies Llc | Apparatus, system, and method for adaptive-rate shifting of streaming content |
| US10469554B2 (en) | 2004-04-30 | 2019-11-05 | DISH Technologies L.L.C. | Apparatus, system, and method for multi-bitrate content streaming |
| US8612624B2 (en) | 2004-04-30 | 2013-12-17 | DISH Digital L.L.C. | Apparatus, system, and method for multi-bitrate content streaming |
| US11991234B2 (en) | 2004-04-30 | 2024-05-21 | DISH Technologies L.L.C. | Apparatus, system, and method for multi-bitrate content streaming |
| US8868772B2 (en) | 2004-04-30 | 2014-10-21 | Echostar Technologies L.L.C. | Apparatus, system, and method for adaptive-rate shifting of streaming content |
| US11677798B2 (en) | 2004-04-30 | 2023-06-13 | DISH Technologies L.L.C. | Apparatus, system, and method for multi-bitrate content streaming |
| US9071668B2 (en) | 2004-04-30 | 2015-06-30 | Echostar Technologies L.L.C. | Apparatus, system, and method for multi-bitrate content streaming |
| US11470138B2 (en) | 2004-04-30 | 2022-10-11 | DISH Technologies L.L.C. | Apparatus, system, and method for multi-bitrate content streaming |
| US9407564B2 (en) | 2004-04-30 | 2016-08-02 | Echostar Technologies L.L.C. | Apparatus, system, and method for adaptive-rate shifting of streaming content |
| US10951680B2 (en) | 2004-04-30 | 2021-03-16 | DISH Technologies L.L.C. | Apparatus, system, and method for multi-bitrate content streaming |
| US8402156B2 (en) | 2004-04-30 | 2013-03-19 | DISH Digital L.L.C. | Apparatus, system, and method for multi-bitrate content streaming |
| US9571551B2 (en) | 2004-04-30 | 2017-02-14 | Echostar Technologies L.L.C. | Apparatus, system, and method for multi-bitrate content streaming |
| US10469555B2 (en) | 2004-04-30 | 2019-11-05 | DISH Technologies L.L.C. | Apparatus, system, and method for multi-bitrate content streaming |
| US8370514B2 (en) | 2005-04-28 | 2013-02-05 | DISH Digital L.L.C. | System and method of minimizing network bandwidth retrieved from an external network |
| US9344496B2 (en) | 2005-04-28 | 2016-05-17 | Echostar Technologies L.L.C. | System and method for minimizing network bandwidth retrieved from an external network |
| US8880721B2 (en) | 2005-04-28 | 2014-11-04 | Echostar Technologies L.L.C. | System and method for minimizing network bandwidth retrieved from an external network |
| US10165034B2 (en) | 2007-08-06 | 2018-12-25 | DISH Technologies L.L.C. | Apparatus, system, and method for multi-bitrate content streaming |
| US10116722B2 (en) | 2007-08-06 | 2018-10-30 | Dish Technologies Llc | Apparatus, system, and method for multi-bitrate content streaming |
| US8683066B2 (en) | 2007-08-06 | 2014-03-25 | DISH Digital L.L.C. | Apparatus, system, and method for multi-bitrate content streaming |
| US12375545B2 (en) | 2007-08-06 | 2025-07-29 | DISH Technologies L.L.C | Apparatus, system, and method for multi-bitrate content streaming |
| US9510029B2 (en) | 2010-02-11 | 2016-11-29 | Echostar Advanced Technologies L.L.C. | Systems and methods to provide trick play during streaming playback |
| US10075744B2 (en) | 2010-02-11 | 2018-09-11 | DISH Technologies L.L.C. | Systems and methods to provide trick play during streaming playback |
Also Published As
| Publication number | Publication date |
|---|---|
| AU2001251715A1 (en) | 2001-09-17 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: COMMUNICATION UNDER RULE 69 EPC (EPO FORM 1205A OF 03.01.2003) |
|
| 122 | Ep: pct application non-entry in european phase | ||
| NENP | Non-entry into the national phase |
Ref country code: JP |