
WO2021101610A1 - Latency guarantee for data packets in a network - Google Patents

Latency guarantee for data packets in a network

Info

Publication number
WO2021101610A1
WO2021101610A1 PCT/US2020/048244 US2020048244W WO2021101610A1 WO 2021101610 A1 WO2021101610 A1 WO 2021101610A1 US 2020048244 W US2020048244 W US 2020048244W WO 2021101610 A1 WO2021101610 A1 WO 2021101610A1
Authority
WO
WIPO (PCT)
Prior art keywords
data packets
node
data packet
time
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2020/048244
Other languages
English (en)
Inventor
Lijun Dong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FutureWei Technologies Inc
Original Assignee
FutureWei Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FutureWei Technologies Inc filed Critical FutureWei Technologies Inc
Publication of WO2021101610A1 publication Critical patent/WO2021101610A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/28Flow control; Congestion control in relation to timing considerations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2425Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2466Traffic characterised by specific attributes, e.g. priority or QoS using signalling traffic

Definitions

  • the disclosure generally relates to latency guarantee for packet delivery in a network.
  • Computer networks, such as the Internet, are in widespread use today. These networks provide network devices, namely devices connected to the network such as personal computers, servers, or the like, with the ability to communicate with each other. Network devices communicate with each other by converting the data to be communicated into data packets and transmitting the packets across the network. In a typical network, however, a direct physical connection between two devices is often not possible due to the large number of devices using the network. As such, the packets may pass through several intermediate network devices, such as routers, switches, etc., which direct and help deliver the packets to their intended destination network device.
  • a method for regulating delivery time of a data packet between communication devices comprising receiving a plurality of data packets at a node, the data packets each specifying a delivery time; storing, in one or more queues of the node, the data packets; generating, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time for each of the data packets; and sending, by the node, the data packets to a next hop node according to the ordered list.
  • the method further comprising sorting the data packets in the one or more queues in order of transmission time to form an ordered data packet set.
  • generating the ordered list further comprises calculating a total transmission time as a sum of the transmission time of each data packet in the plurality of data packets stored in the one or more queues; and determining a data packet with a largest transmission time, from one or more of the plurality of data packets having a delivery time indicating a deadline at a current hop that is greater than or equal to the total transmission time.
  • the generating the ordered list further comprises adding the determined data packet to a beginning of the ordered list such that the data packets are ordered beginning with a first of the data packets to be sent and ending with a last of the data packets to be sent in the sending step; and removing the determined data packet from the one or more queues.
  • the method further comprising dropping a current data packet when there is insufficient time to send the current data packet by a time required by the delivery time.
  • the data packets stored in the one or more queues have a highest scheduling priority compared to other received data packets without a specified delivery time.
  • the ordered list achieves a minimum average dwell time of the data packets in the queue and satisfies the delivery time for each of the data packets.
  • the plurality of data packets are received using a new internet protocol
  • each of the plurality of data packets comprise a header specification field, a shipping specification field, a contract specification field and a payload specification field
  • the contract specification field includes a contract specifying the delivery time
  • each contract includes one or more contract clauses, and each contract clause includes an action indicating the data packet has the delivery time and metadata identifying a budget of the data packet and a remaining number of hops.
  • the method further comprising calculating the deadline at the current hop using a remainder of the budget and the remaining number of hops.
  • the node includes multiple output ports and each output port is associated with the one or more queues.
  • the one or more queues are a latency guarantee queue.
  • a node for regulating delivery time of a data packet between communication devices comprising a non-transitory memory storage comprising instructions; and one or more processors in communication with the memory, wherein the one or more processors execute the instructions to receive a plurality of data packets at a node, the data packets each specifying a delivery time; store, in one or more queues of the node, the data packets; generate, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time for each of the data packets; and send, by the node, the data packets to a next hop node according to the ordered list.
  • a node for regulating delivery time of a data packet between communication devices comprising a non-transitory memory storage comprising instructions; and one or more processors in communication with the memory, wherein the one or more processors execute the instructions to receive a plurality of data packets at a node, the data packets each including a contract specifying a delivery time; store, in one or more queues of the node, the data packets; generate, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time included in the contract for each of the data packets; and send, by the node, the data packets to a next hop node according to the ordered list.
  • non-transitory computer-readable medium storing computer instructions for regulating delivery time of a data packet between communication devices that, when executed by one or more processors, cause the one or more processors to perform the steps of receiving a plurality of data packets at a node, the data packets each specifying a delivery time; storing, in one or more queues of the node, the data packets; generating, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time for each of the data packets; and sending, by the node, the data packets to a next hop node according to the ordered list.
  • a method for scheduling a plurality of data packets, each indicating a delivery time, between communication devices comprising determining whether a schedule satisfying the delivery time deadline for each of the plurality of data packets is feasible; ordering the plurality of data packets in a queue to reduce an average dwell time when the schedule is feasible and transmitting the plurality of packets in the order to satisfy the delivery time for each of the plurality of data packets; and dropping a data packet from the plurality of data packets when the schedule is not feasible.
  • the schedule is determined to be feasible.
  • the dropped data packet has a per-hop deadline less than a total transmission time of the plurality of data packets.
  • the reducing comprises minimizing the average dwell time for each of the ordered plurality of data packets as long as the delivery time for each of the plurality of data packets is satisfied.
  • FIG. 1 illustrates an example system in which embodiments of the disclosure may be implemented.
  • FIG. 2 illustrates an example New IP packet which may be used to implement the present technology.
  • FIG. 3 illustrates an example contract format for use in the New IP packet of FIG. 2.
  • FIG. 4 illustrates an example of actions that may be specified in a contract clause.
  • FIG. 5 illustrates an example network in which routers have a latency guarantee queue (LGQ).
  • LGQ latency guarantee queue
  • FIG. 6 illustrates an example scheduling algorithm in accordance with the present embodiments.
  • FIGS. 7A - 7F illustrate an example embodiment of implementing the TDMS scheduling algorithm.
  • FIGS. 8 and 9 illustrate flow diagrams of regulating delivery time of data packets in a network in accordance with the disclosed embodiments.
  • FIG. 10 illustrates an embodiment of a node in accordance with embodiments of the disclosure.
  • FIG. 11 shows an example embodiment of a computing system for implementing embodiments of the disclosure.
  • the present technology relates to quality of service (QoS) guarantees, and in particular, to latency guarantee for packet delivery in a network.
  • QoS quality of service
  • a “New Internet Protocol” (New IP) packet is part of an Internet framework that brings several capabilities to the present technology.
  • the New IP includes a contract that may be fulfilled by routers in a network. Through use of the contracts, service assurance requirements at a packet level, such as end-to-end latency guarantees, may be provided.
  • the technology provides a scheduling algorithm that regulates the transmission of the data packets stored at the router to minimize dwell time.
  • such services can also be requested in non-contract segments of a packet, and it will be understood that contracts are just one option for specifying the services.
  • FIG. 1 illustrates an example system in which embodiments of the disclosure may be implemented.
  • the system 100 includes routers 120, 122 and 124.
  • routers 122 are intermediate routers.
  • Computing devices 110 connect to network 130 via router 120 in order to access resources provided by network 130.
  • Computing devices 110 may be virtually any type of general- or specific-purpose computing device.
  • these computing devices 110 may be user devices such as desktop computers, laptop computers, tablet computers, display devices, cameras, printers, Internet of Things (IoT) devices, wearable computing devices, mobile devices or smartphones.
  • these computing devices may be server devices such as application server computers, virtual computing host computers, or file server computers.
  • computing devices 110 may be individually configured to provide computing, storage, and/or other suitable computing services.
  • Network 130 may include a plurality of network devices that facilitate the access of content by devices 110.
  • Each of the plurality of network devices may comprise one of a router, a switch, a server, a database server, a hub, a firewall, an intrusion detection/prevention (IDP) device and/or any other type of networking equipment or device that facilitates the transfer of data to and from devices 110.
  • Network 130 includes routers 120, 122 and 124, which communicate using various protocols, such as the Border Gateway Protocol and the Internet Control Message Protocol, in order to exchange routing, network configuration information, and other information. In one embodiment, the routers communicate using a new internet protocol (IP).
  • IP internet protocol
  • the network 130 may be a local area network (“LAN”), such as a token ring or Ethernet network, a virtual local area network (“VLAN”), or another type of network.
  • the network 130 may comprise one or more wired or wireless links.
  • network 130 may be an Ethernet network that comprises one or more Ethernet cables.
  • the network 130 may be a Wireless Fidelity ("Wi-Fi") network that uses wireless radio transmissions to communicate information.
  • network 130 may be a mobile network. Although shown as a single network 130, the network 130 may comprise any number of interconnected networks, either public or private, in which the various networks interconnect to form one or more virtual networks.
  • Network 130 provides a variety of resources that may be accessed by devices 110.
  • network 130 includes content server 126 that stores or otherwise sources content, which refers to any data commonly transmitted and/or stored within a network, such as web-based applications, images, documents, web pages, video data, audio data such as voice, web-based games, scripts, or any other type of network-based content.
  • Network 130 may support multicast techniques to improve the delivery efficiency of data transmitted with the network 130.
  • the network 130 supports the new IP to improve latency and guarantee data packet delivery, as discussed herein.
  • Network 130 may also connect to a variety of other types of devices (e.g., file servers, printers, telephones, and e-mail and other application servers).
  • Network 130 is also shown coupled to external network 140 (e.g., the Internet) via router 124.
  • Public network 140 may include, for example, one or more client computing devices.
  • Public network 140 may provide access to web servers, application servers, public databases, media servers, end-user devices, and many other types of network resource devices and content.
  • Network 130 may transmit content to devices 110 through router 120 using one or more packet-based protocols, such as an Internet Protocol (IP)/Transmission Control Protocol (TCP) or the new IP.
  • IP Internet Protocol
  • TCP Transmission Control Protocol
  • network 130 may support the transmission of data via discrete data units, often referred to as “packets.”
  • packets may be referred to as a “packet-based” or “packet switched” network.
  • Network traffic delivered by network 140 may be classified according to a number of categories. For instance, a content server (not shown) may stream live video to one of devices 110 through router 120. Packets that transmit such video may be classified as streaming multimedia packets and may require a guaranteed bandwidth to provide an acceptable user experience.
  • the content server may also send web pages to one of devices 110 using HTTP packets.
  • information exchanged by routers 120, 122 and 124 may be categorized as high precision communication (HPC) services.
  • Information categorized as HPC requires, for example, a particular quality of service (QoS), such as latency or throughput, which may be defined at the granularity of a packet, a group of packets or a flow.
  • QoS quality of service
  • a latency guarantee in HPC services refers to the network needing to ensure the delivery of a packet, a group of packets or all packets in a flow within a bounded time frame. When messages are specified with a same deadline, then the latency guarantee is ensured at the flow level. If each message is specified with an independent deadline, the latency guarantee is ensured at the data packet level.
  • various categories of network traffic may require varying levels of network performance.
  • Routers 120, 122 and 124 receive, analyze, and classify data packets to assign the data packets to a suitable priority level. In addition to classifying data packets, routers 120, 122 and 124 process the received and classified data packets according to their priority level. In this manner, routers 120, 122 and 124 implement aspects of the QoS guarantees provided by network 130. In addition, based on information received from other devices in system 100, routers 120, 122 and 124 determine the appropriate route through the system for each received data packet and forward the data packet accordingly. In one embodiment, as will be described below with reference to FIG. 2, the priority level of the data packets is determined using the new IP in which a contract specification details the guarantees, such as QoS guarantees, of the data packets. For example, the contract specification may detail an end-to-end latency requirement for a data packet that traverses the network 130.
  • Routers 120, 122 and 124 may each include one or more queues 125 configured to store data packets as they arrive at intermediate nodes, such as routers 122.
  • the queues 125 typically include a queuing algorithm, which is usually implemented at the router level.
  • output queues process the data packets that have been enqueued to await their turn to be forwarded or routed to the appropriate output port or interface.
  • a primary purpose of these queuing algorithms is to give preferential treatment to data packets having higher priority (which are enqueued at the router output port) over other enqueued data packets having lower priority.
  • the queue 125 is a latency guaranteed queue (LGQ) in which the data packets have a per-hop (e.g., router hop) deadline that is guaranteed, as well as a minimal dwell time at each hop.
  • the dwell time is defined as the time a packet spends at a router before being forwarded to the next hop, and includes: processing delay, queueing delay (affected by packet scheduling in the outgoing queue) and transmission delay (which is affected by, and proportional to, the size of the packet).
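  • As a rough formalization of this definition (the symbols below are assumed for illustration and are not taken from the disclosure), the dwell time can be written as the sum of its components, with the transmission delay proportional to the packet size:

```latex
% Assumed notation: S = packet size, R = average link rate
T_{dwell} = T_{processing} + T_{queueing} + T_{transmission},
\qquad T_{transmission} = \frac{S}{R}
```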
  • FIG. 2 illustrates an example IP packet which may be used to implement the present technology.
  • The New Internet Protocol (New IP) packet 200 is part of an Internet framework that brings several capabilities to the present technology.
  • New IP is a data plane technology that defines a new IP packet format, its specification, and corresponding capabilities in the network nodes.
  • the new IP packet format is shown in FIG. 2, which includes three types of characteristics, each solving a specific class of problem in the network 130, namely, a) addressing evolution, b) the contract inclusion, and c) the payload extension.
  • a new header specification is the start of the packet 200 that describes the offsets to the specifications for shipping, contract and payload.
  • the new IP address (shipping spec) evolution aims to replace the current fixed type of addressing in order to provide flexibility to include all types of addresses and fit different reachability scenarios.
  • the New IP shipping specification is backward compatible with the existing address schemes (e.g., IPv4 and IPv6).
  • the new IP contract (contract spec) inclusion enables a large selection of network capabilities, their functioning and regulative control at the finest packet-level granularity, as shown in FIG. 3.
  • the network 130 and routers 120, 122 and 124 fulfill the contract 300, assuming the contract has been agreed between the packet sender (e.g., sending computing device 110) and the packet receiver (e.g., receiving computing device 110) and the network 130.
  • the contract 300 describes a formal service-specific arrangement between two or more parties, which includes one or more contract clauses 210 to describe the type of network service capability, actions and accounting information. Through use of the contracts 300, service assurance requirements at a packet level are provided.
  • a contract 300 carries specific attributes associated with time-engineered services, high-throughput media services and mission-critical ultra-reliable services.
  • the structure of the contract 300 is defined in a Chomsky style. Compared to traditional QoS, contracts operate at a much lower level (per packet) and instruct the network in high-level abstract commands.
  • Each of the contract clauses 210 is a combination of an Event, Condition, and Action (ECA), and can optionally include Metadata 212.
  • an atomic contract ECA exists in which the event and condition are empty.
  • a contract clause 210 describes how the routers 120, 122 and 124 treat the packet as it traverses the network 130 based on the predefined triggering event and condition. Given a predefined triggering event and condition has occurred, various actions are processed by the intermediate routers 122. For example, in order to support ultra-reliable low latency (uRLLC) in 5G, two contracts C1 and C2 may be used, where the C1 contract clause indicates a BoundedLatency action and the C2 contract clause has a NoPktLoss action (i.e., the low latency and reliability are to be met).
  • uRLLC ultra-reliable low latency
  • the Metadata contains data about the packet, e.g. accounting information, customized statistics about the flow on intermediate hops, the contextual information about the user and application, etc.
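  • Purely as an illustration of this clause structure, a contract clause could be modeled as an event-condition-action tuple with optional metadata; the class and field names below are assumptions for the sketch and are not taken from the New IP specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Metadata:
    # Assumed fields for an InTimeGuarantee clause: residual time budget and residual hops.
    budget_ms: float = 0.0      # remaining time before the packet deadline expires
    remaining_hops: int = 0     # remaining number of hops towards the destination

@dataclass
class ContractClause:
    # Event-Condition-Action (ECA) structure; event and condition may be empty
    # (an "atomic" clause), in which case the action always applies.
    action: str                          # e.g., "InTimeGuarantee", "BoundedLatency", "NoPktLoss"
    event: Optional[str] = None
    condition: Optional[str] = None
    metadata: Optional[Metadata] = None

# Example: the uRLLC clause pair described above.
c1 = ContractClause(action="BoundedLatency", metadata=Metadata(budget_ms=10.0, remaining_hops=4))
c2 = ContractClause(action="NoPktLoss")
```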
  • the in-network node intelligence is naturally embedded and supported by the New IP framework.
  • the new IP Payload (payload spec) associates semantics to the data payload.
  • New IP payload provides options to the receiving computing device 110 to consume any residual information in the payload while allowing the network to drop portions of the payload when congestion occurs. This type of communication is referred to as a qualitative communication, which helps to mitigate re-transmission overheads and delays when faced with slow or congested conditions.
  • FIG. 4 illustrates an example of actions that may be specified in a contract clause.
  • An action set includes one or more actions 400 defined in the contract specification that are known and processed by all new IP nodes. For example, applications will insert operator-defined and/or application-defined actions. These actions 400 may be generally categorized into operations, monitoring, telemetry or signaling. New contracts as defined in the specification may be implemented across different hardware platforms using different methods or algorithms. However, the result of implementation leads to packet delivery guarantees between the sender and the receiver. Several actions are described below (some of which are shown in action 400):
  • InTimeGuarantee instructs the router to deliver a packet any time before time t (with a prescribed unit of time). It may use corresponding metadata to describe an end-to-end network latency or available latency since transmission starts from the sender.
  • An algorithm called latency-based forwarding (LBF) implements this action. Instead of a class of services, contract clauses embed exact parameters and objectives in the packet.
  • the LBF algorithm is disclosed in A. Clemm, T. Eckert, "High-Precision Latency Forwarding over Packet-Programmable Networks," IEEE/IFIP Network Operations and Management Symposium, 2020, which is incorporated by reference herein in its entirety.
  • Action OnTimeDelivery(t, t'), with metadata t and t' as the total end-to-end time and the elapsed time respectively, delivers a packet at a specific time in order to accommodate very low values of time-jitter.
  • Action PreferredPath may be a set of node addresses or other forms of path identifiers embedded in the packet to provide path guarantees.
  • PktTrace tracks the packet flow behavior across the network and may be used to understand the end-to-end service assurance and performance degradations in particular. Such actions add noise and, therefore, are subjected to specific events or conditions of interest. For example, in order to understand hop-by-hop latency, PktTrace action may capture a path in the network along with the time spent in each node.
  • Action PktMonitor helps gain visibility into the current state of the system. This action captures events in a network relating to queue thresholds, packet drops, etc. Such actions identify situations such as congestion before they occur by monitoring the thresholds. For example, in order to identify real-time congestion, if a queue is built up to 70%, then this action sets the corresponding metric value in the metadata.
  • Action ReportInsuringParty is an operator-driven action to be executed when a service objective violation occurs; the node in error is then required to report such violations to the insuring party. Operators use this for the assessment of damages due to service level objective violations, which may help build trust between different network systems.
  • This disclosure focuses on a new Action called InTimeGuarantee, which instructs the router to deliver a data packet with a latency guarantee.
  • the Action InTimeGuarantee uses deadline-aware scheduling with an algorithm (referred to herein as the Total Dwell Minimizing Schedule or “TDMS,” described below) such that the packets specified with InTimeGuarantee can meet corresponding end-to-end deadlines. It may use corresponding metadata to describe an end-to-end network latency, in which the metadata is defined by (1 ) a budget, which denotes the residual or remaining time before the packet deadline runs out and is considered unsuccessful, and (2) a hop, which denotes the residual or remaining number of hops towards destination.
  • TDMS Total Dwell Minimizing Schedule
  • the remaining number of hops may be determined when the routing path is configured, and the total number of hops between the source and destination is fixed.
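  • One simple way to derive the deadline at the current hop from this metadata, assuming the remaining budget is split evenly over the remaining hops (the disclosure does not prescribe this exact policy, so it is an illustrative assumption), is sketched below:

```python
def per_hop_deadline(budget_ms: float, remaining_hops: int) -> float:
    """Assumed policy: split the remaining time budget evenly over the remaining hops."""
    if remaining_hops <= 0:
        raise ValueError("packet has no remaining hops")
    return budget_ms / remaining_hops

# e.g., 12 ms of budget left and 3 hops to go -> a 4 ms deadline at the current hop
print(per_hop_deadline(12.0, 3))  # 4.0
```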
  • the aforementioned technology may be applied to many different networks, such as an Internet-of-Things (IoT) network, flexible addressing for smart homes, service-aware routing and more.
  • IoT Internet-of-Things
  • similar requests for a particular delivery time can be associated with packets in other forms, and neither the InTimeGuarantee action nor the contracts framework shall be limiting to this disclosure.
  • FIG. 5 illustrates an example network in which routers have a latency guarantee queue (LGQ).
  • the network 500 shows sending nodes 110, router 122 with a dedicated LGQ 125 and receiver nodes 110.
  • the LGQ is a queue that includes a scheduling mechanism to regulate transmission of data packets stored in a node of the network in which to minimize dwell time. Such a scheduling mechanism may provide a level or quality of service for delivery of the data packets. It is appreciated that network 500 is not limited to the depicted components, but may be a much larger network with additional components, such as network 100 in FIG. 1.
  • the LGQ 125 queues received data packets for latency guarantee according to the contract specified in the received data packets.
  • data packets received by the router 122 that include an InTimeGuarantee contract clause are stored in the LGQ 125. These data packets are assigned the highest scheduling priority compared to other received data packets that do not include the deadline constraints (i.e., do not include the Action InTimeGuarantee in the contract clause). Data packets that do not include the deadline constraints may be stored in other, non-LGQ queues (not shown).
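  • A minimal sketch of this enqueueing rule, assuming a packet object exposes its contract clauses through an attribute named contract_clauses (an assumption made for illustration only):

```python
from collections import deque

lgq = deque()          # latency guarantee queue: highest scheduling priority
best_effort = deque()  # queue for packets without a delivery-time constraint

def enqueue(packet) -> None:
    """Place packets carrying an InTimeGuarantee clause into the LGQ; all others go to best effort."""
    actions = {clause.action for clause in getattr(packet, "contract_clauses", [])}
    if "InTimeGuarantee" in actions:
        lgq.append(packet)
    else:
        best_effort.append(packet)
```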
  • the routers 122 may include multiple input interfaces, with each input interface having one or more queued packets waiting to be processed and routed by the router 122. Additionally, each queued packet may have a different associated priority level which specifies the particular Quality of Service (QoS) level to be used when handling that packet.
  • QoS Quality of Service
  • the packet is first dequeued from the input interface, and is then decapsulated from its data link frame. After decapsulation, the packet is classified with an associated priority level. The packet is then encapsulated and routed to its appropriate output interface queue within the output queuing structure.
  • a packet scheduling algorithm is implemented to achieve a minimal average dwell time at each router during packet forwarding.
  • the packet schedule guarantees the delivery of a data packet stored in the LGQ 125 within its designated time frame (i.e., satisfy the InTimeGuarantee).
  • a packet schedule is generally defined as the order in which the data packets stored in the queue are transmitted.
  • the average dwell time may be determined using the following parameters and equations:
  • $D_1, D_2, \ldots, D_i, \ldots, D_m$: the transmission time of each of the $m$ data packets in the queue, where the transmission time may be calculated as the packet size divided by the average link rate;
  • $T_1, T_2, \ldots, T_i, \ldots, T_m$: the dwell time of the packets, which includes the transmission time and the waiting time;
  • $L_1, L_2, \ldots, L_i, \ldots, L_m$: the deadline for each packet at the current hop along a respective path.
  • a deadline refers to the remaining time, but could be adjusted to accommodate an absolute time (described below).
  • a central controller may set a specific deadline or it may be dynamically set at each hop.
  • a per-hop deadline of the packets may be calculated as $L_i$ for each packet $i$ (for example, from the remaining budget and the remaining number of hops, as described above). Assuming the packets are transmitted in order, the dwell time of the $i$-th packet is $T_i = D_1 + D_2 + \cdots + D_i$.
  • the total dwell time of the packets in the LGQ is $T_{total} = T_1 + T_2 + \cdots + T_m$.
  • the average dwell time of the packets is $T_{avg} = T_{total} / m$.
  • the average dwell time determines how long an average packet stays in a router before its last bit gets transmitted.
  • the smaller the average dwell time the less the amount of end-to-end latency will be experienced by end users. That is, the smaller the average dwell time for a packet in the LGQ 125, the less amount of latency exists between sending nodes 110 and receiver nodes 110.
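  • To make these formulas concrete, the helper below (an assumed sketch, not part of the disclosure) computes the per-packet dwell times, the total dwell time and the average dwell time for packets transmitted in a given order:

```python
from typing import List, Tuple

def dwell_times(transmission_times_ms: List[float]) -> Tuple[List[float], float, float]:
    """For packets sent in the given order, T_i = D_1 + ... + D_i; returns (T, T_total, T_avg)."""
    dwell, elapsed = [], 0.0
    for d in transmission_times_ms:
        elapsed += d              # waiting time of this packet plus its own transmission time
        dwell.append(elapsed)
    total = sum(dwell)
    return dwell, total, total / len(dwell)

# e.g., three packets with transmission times 1, 3 and 5 ms sent in that order
print(dwell_times([1.0, 3.0, 5.0]))  # ([1.0, 4.0, 9.0], 14.0, 4.666...)
```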
  • FIG. 6 illustrates an example scheduling algorithm in accordance with the present embodiments.
  • the scheduling algorithm 600 referred to as the Total Dwell Minimizing Schedule (TDMS) algorithm, determines whether a minimal average dwell time of packets stored in a queue can be met in order to meet delivery time guarantees, as set forth in the contract clauses of the packets. Minimizing the average dwell time is equivalent to minimizing the total dwell time. The minimal average dwell time results in minimal average packet delay at each router, thus minimizing average end-to-end delay for packet deliveries in the network. Moreover, if a feasible schedule can be determined, then a schedule is generated and the data packets are scheduled for delivery according to the schedule. If a feasible schedule cannot be determined, the data packet is discarded.
  • TDMS Total Dwell Minimizing Schedule
  • in step 1 of the TDMS algorithm 600, the variables are set: $J$ is the packet set, $i$ is the $i$-th data packet in the set $J$, $m$ is the total number of packets in the LGQ 125, $D_i$ is the transmission time of packet $i$ and $L_i$ is the deadline of packet $i$.
  • the TDMS schedule is determined from the last to the first packet through steps 3 - 14 (referred to herein as the “outer loop” of the algorithm), where steps 6 - 9 of the algorithm (referred to herein as the “inner loop” of the algorithm) identify the last packet to be transmitted of the remaining packets to be placed in the TDMS schedule.
  • j * is a bookkeeping variable that is initially set to zero.
  • the total transmission time is calculated as a sum of the transmission time of each data packet stored in the queue.
  • the total transmission time is calculated as a sum of the transmission time of each data packet in the ordered (sorted) data packet set. It will be understood that the total transmission time can also be adjusted if the router will experience other delays (for example, for transmission of higher-priority packets or other operations).
  • Steps 6 - 9 then identify the last packet to be transmitted, of the remaining packets to be placed in the schedule. This may be accomplished by determining the data packet in the ordered data packet set with the largest transmission time whose deadline is greater than or equal to the total transmission time of packets in the current data packet set.
  • j* is set to the index of that packet, having the largest transmission time and an acceptable deadline.
  • the “deadline” is an amount of time remaining until the packet must be transmitted. Thus, if the deadline is greater than the total transmission time, the packet can be transmitted last and still meet its deadline.
  • the “deadline” can also be expressed in other ways, such as an absolute time when the packet must be sent (for example 12:34pm).
  • data packets may be dropped when not placed in the schedule.
  • data packets may be sent late (using another scheduling policy) when not placed in the schedule. Other actions can also be taken to make a schedule feasible, such as reducing a size of some packets.
  • the determined data packet is prepended to the schedule and removed from the ordered data packet set.
  • the process is repeated with the new ordered data packet set until no more data packets remain in the ordered data packet set. That is, at step 13 the new ordered data packet set $J_{l+1}$ is generated by removing $j^*$ from the previous set $J_l$, and $j^*$ is added to the front of the TDMS set at step 14. When no more data packets remain, the TDMS schedule is returned at step 15.
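  • A compact sketch of this back-to-front construction, written against the notation above (the drop policy applied when no feasible last packet exists is an assumption for illustration, since the disclosure only states that such a packet may be dropped or handled by another policy):

```python
from typing import Dict, List

def tdms_schedule(transmission_ms: Dict[str, float],
                  deadline_ms: Dict[str, float]) -> List[str]:
    """Build a TDMS transmission order from the last packet to the first.

    transmission_ms: per-packet transmission time D_i at this hop.
    deadline_ms: per-packet deadline L_i at this hop (remaining time).
    Packets that cannot be placed while meeting their deadline are dropped.
    """
    remaining = sorted(transmission_ms, key=transmission_ms.__getitem__, reverse=True)
    schedule: List[str] = []
    while remaining:
        total = sum(transmission_ms[p] for p in remaining)
        # The last slot goes to the packet with the largest transmission time whose
        # deadline is still met even if it is transmitted after all remaining packets.
        candidates = [p for p in remaining if deadline_ms[p] >= total]
        if not candidates:
            # No feasible last packet: drop one (here, the tightest deadline) and retry.
            remaining.remove(min(remaining, key=deadline_ms.__getitem__))
            continue
        last = max(candidates, key=transmission_ms.__getitem__)
        schedule.insert(0, last)   # prepend: the schedule is built back to front
        remaining.remove(last)
    return schedule
```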
  • FIGS. 7A - 7F illustrate an example embodiment of implementing the TDMS scheduling algorithm 600.
  • the queues (such as LGQ 125) of the intermediate routers 122 contain a packet set 700 that includes two properties — deadline and transmission time.
  • the deadline refers to the deadline at a current hop for an associated packet in the packet set
  • the transmission time refers to the amount of time it takes to transmit the packet from the queue to the next hop node.
  • packet P1 has a deadline of 10 and a transmission time of 5.
  • ms milliseconds
  • the TDMS scheduling algorithm 600 first orders the packets in the packet set 700 into a decreasing (descending) order of transmission time, as shown in 702.
  • the packets in the packet set 700 are rearranged into the following order: P1 , P4, P2, P3, as shown in FIG. 7B.
  • the packets arranged in decreasing order of transmission time will be used as the ordered packet set operated upon by the TDMS scheduling algorithm.
  • steps 3 - 13 of the TDMS scheduling algorithm 600 are executed four times (equal to the number of packets to be scheduled or placed in an ordered list), as shown in FIGS. 7C - 7F, with steps 6 - 9 responsible for identifying the last packet.
  • a total transmission time is calculated as a sum of the transmission time of each data packet in the ordered data packet set $J_1$.
  • the data packet in the ordered data packet set $J_1$ with a deadline that is greater than or equal to the total transmission time, and with a largest transmission time, will be selected.
  • the data packet selected for placement as the last data packet in the TDMS schedule is the data packet with the largest transmission time and that meets its deadline (e.g., the deadline is equal to or greater than the total transmission time), which is data packet P2.
  • the data packet may be transmitted last and still meet its deadline. In this case, only data packets P2 and P3 can meet their deadline (i.e., both data packets have a deadline that is equal to or greater than the total transmission time of 11 ms).
  • data packet P2 has the largest transmission time. Data packet P2 is therefore selected as the last entry into the TDMS scheduling algorithm, as shown in FIG. 7C. Once placed in the schedule, the data packet P2 is removed from the ordered data packet set to form a second data packet set $J_2$ with the remaining data packets P1, P3 and P4.
  • the data packet selected for placement as the last data packet in the TDMS schedule is the data packet with the largest transmission time that meets its deadline, which is data packet P1.
  • data packets P1 and P3 can meet their deadline (i.e., a deadline that is equal to or greater than the total transmission time of 9 ms). Since data packet P1 has a larger transmission time, it is therefore selected as the last open entry into the TDMS scheduling algorithm, as shown in FIG. 7D.
  • the data packet P1 is removed from the ordered data packet set $J_2$ to form a third data packet set $J_3$ with the remaining data packets P3 and P4.
  • the data packet selected for placement as the last data packet in the TDMS schedule is the data packet with the largest transmission time that meets its deadline, which is data packet P4.
  • data packets P3 and P4 can meet their deadline (i.e., a deadline that is equal to or greater than the total transmission time of 4 ms). Since data packet P4 has a larger transmission time, it is therefore selected as the last open entry into the TDMS scheduling algorithm, as shown in FIG. 7E.
  • the data packet P4 is removed from the ordered data packet set $J_3$ to form a fourth data packet set $J_4$ with the remaining data packet P3.
  • the total transmission time of set $J_4$ is 1 since the only data packet remaining is P3.
  • the data packet P3 is placed in the front of the queue.
  • the schedule has been generated in which data packet P3 is first in the queue and data packet P2 is last in the queue.
  • the dwell time for each of the data packets may also be calculated as the transmission time of the data packet plus any waiting time. For example, since data packet P3 is in the front of the queue, there is no waiting time. The dwell time is therefore equal to the transmission time of 1 ms.
  • Data packet P4 has a transmission time of 3 ms and a waiting time of 1 ms (the dwell time of packets prior to P4).
  • the TDMS scheduling algorithm results in a lower average dwell time.
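  • Running the sketch above on the FIG. 7 packet set reproduces this order. Only P1's deadline (10 ms), the transmission times of P1, P3 and P4 and the per-iteration totals are stated explicitly in the example, so the remaining values below are assumptions chosen to be consistent with the description:

```python
transmission = {"P1": 5.0, "P2": 2.0, "P3": 1.0, "P4": 3.0}   # ms; P2 derived from the 11 ms total
deadlines = {"P1": 10.0, "P2": 12.0, "P3": 11.0, "P4": 6.0}   # ms; P2-P4 assumed, consistent with the text
print(tdms_schedule(transmission, deadlines))                  # ['P3', 'P4', 'P1', 'P2']
# Dwell times in this order: P3 = 1, P4 = 4, P1 = 9, P2 = 11 ms -> average 6.25 ms
```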
  • FIGS. 8 and 9 illustrate flow diagrams of regulating delivery time of data packets in a network in accordance with the disclosed embodiments.
  • FIG. 8 illustrates a general overview of the process for regulating delivery time of data packets
  • FIG. 9 illustrates the process flow of the TDMS scheduling algorithm.
  • the intermediate routers or nodes perform the procedures. However, it is appreciated that any other functional unit or processing unit may implement the processes described herein, and the disclosure is not limited to implementation by the routers.
  • the delivery time of a data packet may be measured according to different embodiments.
  • the delivery time may be measured as a remaining time for delivery. For example, if a data packet arrives at an intermediary node with a delivery time of 10 µs, the data packet dwells at the intermediary node for 1 µs and the transmission time to the next node is 1 µs, then the remaining time for delivery is 8 µs.
  • the delivery time may be measured as an absolute amount of time.
  • for example, if the absolute delivery deadline is 11:47 am, the intermediary node can determine the amount of time remaining (e.g., 10 minutes) to meet that deadline regardless of the amount of time it takes to hop from one node to the next.
  • a plurality of data packets are received at a node, where each of the data packets is associated with a delivery time deadline.
  • the delivery time deadline may be specified in a contract clause of the data packet using the New IP.
  • the data packets are stored in one or more queues of the node at step 804.
  • a schedule is generated at step 806, and the data packets are sent to a next hop node according to the schedule at step 808.
  • FIG. 9 the TDMS schedule process of FIG. 6 is described.
  • data packets received at the intermediate nodes are optionally sorted in the LGQs 125 in decreasing order of transmission time.
  • the data packets are new IP packets 200 that include a contract clause 210 having an action and metadata.
  • the data packets form an ordered data packet set. For each of the data packets in the ordered data packet set (sorted) or stored in the queue (not sorted), a total transmission time is calculated as a sum of the transmission time of each data packet at step 812.
  • Step 814 determines the data packet in the ordered data packet set with the largest transmission time whose deadline at the current hop is greater than or equal to the total transmission time. If no such data packet is found (i.e., no remaining data packet has a deadline greater than or equal to the total transmission time), a data packet is dropped (or sent to a lower priority queue, or some other action is taken) at step 818. Otherwise, if a data packet is found to satisfy the conditions, then the data packet is stored in LGQ 125 at step 820.
  • the data packets are stored in an order beginning with a first of the data packets to be sent and ending with a last of the data packets to be sent, such that each new packet is placed at the beginning. The determined data packet is then removed from the data packet set at step 822. At step 824, if no more data packets exist in the data set, then the schedule is returned at step 826. Otherwise, the process returns to step 812 and is repeated.
  • FIG. 10 illustrates an embodiment of a node in accordance with embodiments of the disclosure.
  • the node 1000 (e.g., a router) may transmit and receive data (e.g., a new IP packet) to and from at least one electronic device and/or a server 110, etc., through a network (e.g., global network), such as network 130.
  • the node 1000 may transmit the new IP packet, which is received through the network, to another electronic device 110 through a local network.
  • the node 1000 may transmit a new IP packet, which is received from the other electronic device, to the electronic device or the server 110 through the network. It is appreciated that the node may also use other IP packets, such as IPv4 and IPv6 packets.
  • the node 1000 may comprise a plurality of input/output ports 1010/1030 and/or receivers (Rx) 1012 and transmitters (Tx) 1032 for receiving and transmitting data from other nodes, a processor 1020 including an address translation circuit to process data and determine which node to send the data to, and storage 1022 including cache 1024 and long-term storage 1026.
  • the processor 1020 is not so limited and may comprise multiple processors.
  • the processor 1020 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs.
  • the processor 1020 may be configured to implement any of the schemes described herein using any one or combination of steps described in the embodiments.
  • the processor 1020 may be implemented using hardware, software, or both.
  • the storage 1022 may include cache 1024 and long-term storage 1026, and may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein. Although illustrated as a single storage, storage 1022 may be implemented as a combination of read only memory (ROM), random access memory (RAM), or secondary storage (e.g., one or more disk drives or tape drives used for non-volatile storage of data).
  • ROM read only memory
  • RAM random access memory
  • secondary storage e.g., one or more disk drives or tape drives used for non-volatile storage of data.
  • the dwell time of data packets received by the node may be minimized by scheduling the data packets with the total dwell minimizing schedule (TDMS) module 1028 in accordance with embodiments of the disclosure.
  • the data packets may be stored in the storage 1022.
  • the processor 1020 may calculate a schedule of data packets to minimize dwell time.
  • Other electronic devices or servers 110 in the network may process the new IP packet.
  • the processor 1020 may generate a schedule based on delivery time deadlines embedded in the new IP packet.
  • FIG. 11 shows an example embodiment of a computing system for implementing embodiments of the disclosure.
  • Computer system 1100 includes a processor 1104 and a memory 1108 that communicate with each other, and with other components, via a bus 1112.
  • Bus 1112 may include any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures.
  • Memory 1108 may include various components (e.g., machine- readable media) including, but not limited to, a random access memory component, a read only component, and any combinations thereof.
  • a basic input/output system 1116 (BIOS), including basic routines that help to transfer information between elements within computer system 1100, such as during start-up, may be stored in memory 1108.
  • BIOS basic input/output system
  • Memory 1108 may also include (e.g., stored on one or more machine-readable media) instructions (e.g., software) 1120 embodying any one or more of the aspects and/or methodologies of the present disclosure.
  • memory 1108 may further include any number of program modules including, but not limited to, an operating system, one or more application programs, other program modules, program data, and any combinations thereof.
  • Computer system 1100 may also include a storage device 1124.
  • Examples of a storage device include, but are not limited to, a hard disk drive, a magnetic disk drive, an optical disc drive in combination with an optical medium, a solid-state memory device, and any combinations thereof.
  • Storage device 1124 may be connected to bus 1112 by an appropriate interface (not shown).
  • Example interfaces include, but are not limited to, SCSI, advanced technology attachment (ATA), serial ATA, universal serial bus (USB), IEEE 1394 (FIREWIRE), and any combinations thereof.
  • storage device 1124 (or one or more components thereof) may be removably interfaced with computer system 1100 (e.g., via an external port connector (not shown)).
  • storage device 1124 and an associated machine-readable medium 1128 may provide nonvolatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for computer system 1100.
  • software 1120 may reside, completely or partially, within machine-readable medium 1128.
  • software 1120 may reside, completely or partially, within processor 1104.
  • Computer system 1100 may also include an input device 1132.
  • a user of computer system 1100 may enter commands and/or other information into computer system 1100 via input device 1132.
  • Examples of an input device 1132 include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device, a joystick, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), a cursor control device (e.g., a mouse), a touchpad, an optical scanner, a video capture device (e.g., a still camera, a video camera), a touchscreen, and any combinations thereof.
  • an alpha-numeric input device e.g., a keyboard
  • a pointing device e.g., a joystick, a gamepad
  • an audio input device e.g., a microphone, a voice response system, etc.
  • a cursor control device e.g., a mouse
  • Input device 1132 may be interfaced to bus 1112 via any of a variety of interfaces (not shown) including, but not limited to, a serial interface, a parallel interface, a game port, a USB interface, a FIREWIRE interface, a direct interface to bus 1112, and any combinations thereof.
  • Input device 1132 may include a touch screen interface that may be a part of or separate from display 1136, discussed further below.
  • Input device 1132 may be utilized as a user selection device for selecting one or more graphical representations in a graphical interface as described above.
  • a user may also input commands and/or other information to computer system 1100 via storage device 1124 (e.g., a removable disk drive, a flash drive, etc.) and/or network interface device 1140.
  • a network interface device such as network interface device 1140, may be utilized for connecting computer system 1100 to one or more of a variety of networks, such as network 1144, and one or more remote devices 1148 connected thereto. Examples of a network interface device include, but are not limited to, a network interface card (e.g., a mobile network interface card, a LAN card), a modem, and any combination thereof.
  • Examples of a network include, but are not limited to, a wide area network (e.g., the Internet, an enterprise network), a local area network (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a data network associated with a telephone/voice provider (e.g., a mobile communications provider data and/or voice network), a direct connection between two computing devices, and any combinations thereof.
  • a network such as network 1144, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used.
  • Information (e.g., data, software 1120, etc.) may be communicated to and/or from computer system 1100 via network interface device 1140.
  • Computer system 1100 may further include a video display adapter 1152 for communicating a displayable image to a display device, such as display device 1136.
  • Examples of a display device include, but are not limited to, a liquid crystal display (LCD), a cathode ray tube (CRT), a plasma display, a light emitting diode (LED) display, and any combinations thereof.
  • Display adapter 1152 and display device 1136 may be utilized in combination with processor 1104 to provide graphical representations of aspects of the present disclosure.
  • computer system 1100 may include one or more other peripheral output devices including, but not limited to, an audio speaker, a printer, and any combinations thereof.
  • peripheral output devices may be connected to bus 1112 via a peripheral interface 1156. Examples of a peripheral interface include, but are not limited to, a serial port, a USB connection, a FIREWIRE connection, a parallel connection, and any combinations thereof.
  • the computer-readable non-transitory media includes all types of computer readable media, including magnetic storage media, optical storage media, and solid state storage media and specifically excludes signals.
  • the software can be installed in and sold with the device. Alternatively the software can be obtained and loaded into the device, including obtaining the software via a disc medium or from any manner of network or distribution system, including, for example, from a server owned by the software creator or from a server not owned but used by the software creator.
  • the software can be stored on a server for distribution over the Internet, for example.
  • Computer-readable storage media exclude propagated signals per se, can be accessed by a computer and/or processor(s), and include volatile and non-volatile internal and/or external media that are removable and/or non-removable.
  • the various types of storage media accommodate the storage of data in any suitable digital format. It should be appreciated by those skilled in the art that other types of computer readable medium can be employed such as zip drives, solid state drives, magnetic tape, flash memory cards, flash drives, cartridges, and the like, for storing computer executable instructions for performing the novel methods (acts) of the disclosed architecture.
  • each process associated with the disclosed technology may be performed continuously and by one or more computing devices.
  • Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

An apparatus and a method for regulating the delivery time of a data packet between communication devices are provided. A plurality of data packets are received at a node, the data packets each specifying a delivery time. The data packets are stored in one or more queues of the node, and an ordered list of the data packets is generated based on a transmission time for each of the data packets and the delivery time for each of the data packets. The data packets may then be sent to a next hop node according to the ordered list.
PCT/US2020/048244 2020-05-06 2020-08-27 Latency guarantee for data packets in a network Ceased WO2021101610A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063020970P 2020-05-06 2020-05-06
US63/020,970 2020-05-06

Publications (1)

Publication Number Publication Date
WO2021101610A1 true WO2021101610A1 (fr) 2021-05-27

Family

ID=72433046

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/048244 Ceased WO2021101610A1 (fr) 2020-05-06 2020-08-27 Garantie de latence pour des paquets de données dans un réseau

Country Status (1)

Country Link
WO (1) WO2021101610A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023130744A1 (fr) * 2022-01-04 2023-07-13 中兴通讯股份有限公司 Packet scheduling method, network device, storage medium, and computer program product
CN118035683A (zh) * 2024-02-21 2024-05-14 广州龙数科技有限公司 Time series data analysis method, system and device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018086558A1 (fr) * 2016-11-10 2018-05-17 Huawei Technologies Co., Ltd. Network latency scheduling

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018086558A1 (fr) * 2016-11-10 2018-05-17 Huawei Technologies Co., Ltd. Network latency scheduling

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A. Clemm, T. Eckert, "High-Precision Latency Forwarding over Packet-Programmable Networks", IEEE/IFIP Network Operations and Management Symposium, 2020
K. Makhijani, R. Li, and H. Elbakoury, "Using Big Packet Protocol Framework to Support Low Latency based Large Scale Networks", 6 February 2019 (2019-02-06), XP009524226, ISBN: 978-1-61208-711-5, Retrieved from the Internet <URL:http://ns2.thinkmind.org/index.php?view=article&articleid=icns_2019_1_20_10030> [retrieved on 20201123] *
K. Makhijani, H. Yousefi, K. K. Ramakrishnan, R. Li, "Extended Abstract: Coordinated Communications for Next-Generation Networks", 2019 IEEE 27th International Conference on Network Protocols (ICNP), 2019


Similar Documents

Publication Publication Date Title
US8665892B2 (en) Method and system for adaptive queue and buffer control based on monitoring in a packet network switch
US8971184B2 (en) Latency based random early discard for network packets
US8520520B2 (en) System and method for per flow guaranteed throughput, multiple TCP flow bandwidth provisioning, and elimination of packet drops for transmission control protocol (TCP) and TCP-friendly protocols
Kundel et al. OpenBNG: Central office network functions on programmable data plane hardware
US9042355B2 (en) Quality of service (QoS) for satellite communications network
US9059921B2 (en) Method, network, and computer product for flow based quality of service
BR112015004565B1 (pt) Method for staged traffic classification between terminal and aggregation nodes of a broadband communications system and apparatus for staged traffic classification between terminal and aggregation nodes of a broadband communications system
US11805071B2 (en) Congestion control processing method, packet forwarding apparatus, and packet receiving apparatus
WO2021174236A2 (fr) In-band signaling for latency guarantee service (LGS)
WO2021101610A1 (fr) Latency guarantee for data packets in a network
Sieber et al. Scalable application-and user-aware resource allocation in enterprise networks using end-host pacing
US20230057487A1 (en) Data packet format to communicate across different networks
CN113395612A (zh) Data forwarding method in optical fiber communication and related apparatus
Schmitt et al. Per-flow guarantees under class-based priority queueing
WO2017088460A1 (fr) Method, device and system for controlling service packet transmission
US7804773B2 (en) System and method of managing data flow in a network
Adami et al. TCP/IP-based multimedia applications and services over satellite links: experience from an ASI/CNIT project
WO2021101640A1 (fr) Method and apparatus of packet wash for in-time packet delivery
US12494943B1 (en) Handling interface clock rate mismatches between network devices
Al-Haddad et al. A Survey of Quality of Service (QoS) Protocols and Software-Defined Networks (SDN) From the Traditional to the Latest Network Architecture
Bozed et al. Investigation the Performance Effect of QOS in MPLS-TE Network
CN110063048A (zh) System for prioritizing computer applications implemented by a group of users
US7821933B2 (en) Apparatus and associated methodology of processing a network communication flow
US20230088536A1 (en) Network contracts in communication packets
US8159944B2 (en) Time based queuing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20768791

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20768791

Country of ref document: EP

Kind code of ref document: A1