WO2020160564A1 - Preferred path routing in ethernet networks - Google Patents
- Publication number
- WO2020160564A1 (PCT/US2020/023443)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- ppr
- network
- node
- path
- description information
- Prior art date
- Legal status
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/34—Source routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/02—Topology update or discovery
- H04L45/03—Topology update or discovery by updating link state protocols
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/66—Layer 2 routing, e.g. in Ethernet based MAN's
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/32—Flooding
Definitions
- the present disclosure relates to the field of routing in a network, and in particular, to construction of an end-to-end path between a source and a destination based on preferred path routing (PPR) information in an Ethernet network.
- Packet-switched networks are being deployed by telecommunications providers to service the growing demand for data services in the corporate and consumer markets.
- the architecture of packet-switched networks such as Ethernet-based networks is easy to deploy in smaller networks, but it does not scale easily to larger metropolitan area networks (MANs) or wide area networks (WANs), nor does it provide the standards of service associated with service providers. Therefore, Ethernet networking has traditionally been limited to Local Area Network (LAN) deployments.
- Use of Ethernet switches in carriers' networks has the advantages of interoperability (mappings between Ethernet and other frame/packet/cell data structures such as IP and ATM are well known) and economy (Ethernet switches are relatively inexpensive compared to IP routers, for example).
- a computer-implemented method of creating a data path comprising receiving, by a node in an Ethernet network, a message that includes preferred path routing (PPR) description information, the PPR description information including a path identifier (PPR-ID) and a plurality of sequentially ordered topological path description elements (PDEs); determining, by the node in the Ethernet network, a next hop node using the next topological PDE in the plurality of sequentially ordered topological PDEs listed in the PPR description information; storing, by the node in the Ethernet network, the PPR description information; and forwarding data in the Ethernet network using the stored PPR description information.
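The claimed sequence above (receive a message carrying the PPR description, determine the next hop from the next ordered PDE, store the description, and forward on it) can be sketched as follows. This is a minimal illustrative model; the names (`PPRDescription`, `Node`, `receive_ppr`) are assumptions for the sketch, not identifiers from the patent.

```python
# Illustrative sketch of the claimed method; names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PPRDescription:
    ppr_id: str   # data-plane path identifier (PPR-ID)
    pdes: list    # sequentially ordered topological PDEs

@dataclass
class Node:
    node_id: str
    ppr_table: dict = field(default_factory=dict)  # PPR-ID -> next hop

    def receive_ppr(self, desc: PPRDescription):
        """Process a message carrying PPR description information."""
        if self.node_id not in desc.pdes:
            return                      # this node is not on the path
        i = desc.pdes.index(self.node_id)
        if i + 1 < len(desc.pdes):
            # The next hop is the next topological PDE in the ordered list.
            self.ppr_table[desc.ppr_id] = desc.pdes[i + 1]

    def forward(self, ppr_id: str) -> str:
        """Return the stored next hop for a frame carrying this PPR-ID."""
        return self.ppr_table[ppr_id]
```

A node not named in the PDE list simply ignores the description, which matches the behavior described for intermediate nodes later in the disclosure.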
- the method further comprising constructing, by the node in the Ethernet network, a forwarding table entry including the PPR-ID.
- the method further comprising flooding, by the node in the Ethernet network, at least the PPR description information and the PPR-ID.
- the message is a link state message advertised using a link state protocol.
- the path identified by PPR-ID comprises a set of topological PDEs, each of which represents a segment of the data path from a source node to a destination node in the network.
- the PPR description information represents the data path from a source node to a destination in the network.
- each of the plurality of PDEs represents at least one topological element and at least one non-topological element on the PPR, wherein the topological element comprises at least one of a network element or a link, and wherein the non-topological element comprises at least one of a service, function, or context.
- the PPR-ID is a destination address in a Shortest Path Bridging (SPB)-MAC network.
- the PPR-ID is a destination address and a VLAN ID (VID) in a Shortest Path Bridging (SPB)-VID network.
- the PPR-ID is a Nickname in a Transparent Interconnection of Lots of Links (TRILL) network.
- the PPR-ID representing a graph may have one or more source nodes and a single destination node.
- the PPR-ID representing a graph may have one or more source nodes and a plurality of destination nodes.
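A PPR-ID representing a graph with several sources converging on one destination can be modeled as the merge of per-branch paths into a single next-hop structure. The sketch below is a hypothetical illustration (the function name and representation are not from the patent):

```python
# Merge several source-to-destination branches into one next-hop table,
# modeling a PPR "graph" shared under a single PPR-ID.
def graph_next_hops(branches):
    """branches: list of node sequences, each ending at the shared
    destination. Returns a dict mapping node -> next hop."""
    table = {}
    for path in branches:
        for a, b in zip(path, path[1:]):
            table[a] = b
    return table
```

Branches that share a tail segment naturally collapse into the same entries, so one PPR-ID can serve all sources.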
- a device for creating a data path comprising a non-transitory memory storage comprising instructions; and one or more processors in communication with the memory, wherein the one or more processors execute the instructions to receive, by the device in an Ethernet network, a message that includes preferred path routing (PPR) description information, the PPR description information including a path identifier (PPR-ID) and a plurality of sequentially ordered topological path description elements (PDEs); determine, by the device in the Ethernet network, a next hop node using the next topological PDE in the plurality of sequentially ordered topological PDEs listed in the PPR description information; store, in the device, the PPR description information; and forward data in the Ethernet network using the stored PPR description information.
- a non-transitory computer-readable medium storing computer instructions for creating a data path that, when executed by one or more processors, cause the one or more processors to perform the steps of receiving, by a node in an Ethernet network, a message that includes preferred path routing (PPR) description information, the PPR description information including a path identifier (PPR-ID) and a plurality of sequentially ordered topological path description elements (PDEs); determining, by the node in the Ethernet network, a next hop node using the next topological PDE in the plurality of sequentially ordered topological PDEs listed in the PPR description information; storing, by the node in the Ethernet network, the PPR description information; and forwarding data in the Ethernet network using the stored PPR description information.
- FIG. 1A illustrates a network configured to implement conventional preferred path routing.
- FIG. 1B illustrates PPR path description information with services.
- FIG. 2 illustrates a network showing shortest path routing and preferred path routing.
- FIGS. 3A and 3B illustrate example advertisements and fields within an IS-IS LSP.
- FIG. 4A illustrates an example Ethernet network configuration using a link state protocol.
- FIG. 4B illustrates an example of a PPR-ID for a link state protocol network.
- FIG. 5A illustrates an example of a link state protocol controlled Ethernet network.
- FIG. 5B illustrates an example of a PPR-ID for a link state protocol network.
- FIG. 5C illustrates an example of a PPR-ID for a link state protocol network.
- FIGS. 6A and 6B illustrate example flow diagrams for creating a data path in a network.
- FIG. 7 illustrates an embodiment of a node.
- FIG. 8 illustrates a schematic diagram of a general-purpose network component or computer system.
- the PPR is applied to the Ethernet network to introduce traffic path management and traffic engineering, and to support network slicing. In one embodiment, this is accomplished by introducing non-shortest path traffic steering into the Ethernet network in such a way that an operator can dynamically introduce new paths in response to customer and application needs.
- the paths themselves may be installed using a fully managed path approach via a network management system (NMS), or through the use of a link state routing protocol. Advertisements flooded in the network using the link state routing protocol may be extended to handle different data plane types. This may be accomplished by extending the PPR to add a Sub-type-length-value (Sub-TLV) that maps the PPR path description information to an Ethernet address in the different network types.
- the traffic engineering and path steering functionality is used in an Ethernet network (e.g., a Layer 2 environment or a media access control (MAC) level network).
- each node needs to be aware of the topological relationships (i.e., adjacencies) of all other nodes in the autonomous system (AS), such that all nodes may build a topological map (topology) of the AS.
- Nodes may learn about one another's adjacencies by distributing (i.e., flooding) link-state information throughout the network according to one or more Interior Gateway Protocols (IGPs) including, but not limited to, open shortest path first (OSPF) or intermediate system to intermediate system (IS-IS).
- IS-IS is a link-state routing protocol, which means that the routers exchange topology information with their nearest neighbors. This topology information is flooded throughout the area such that every router within the AS has a complete understanding of the topology of the AS. Once the topology is understood, end-to-end paths may be calculated in the AS, for example, using Dijkstra's algorithm or a variation thereof. Accordingly, a next hop address to which data is forwarded is determined by choosing the "best" end-to-end path to the eventual destination.
- Each IS-IS router distributes information about its local state (e.g., usable interfaces and reachable neighbors, and the cost of using each interface) to other routers using a Link State PDU (LSP) message.
- Each router uses the received messages to build up an identical database that describes the topology of the AS. From this database, each router calculates its own routing table using a Shortest Path First (SPF) or Dijkstra algorithm, as noted above.
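The SPF computation described above can be sketched as a minimal Dijkstra run over a link-state database. The graph representation and function name below are illustrative assumptions, not part of IS-IS itself:

```python
# Minimal Dijkstra SPF over a link-state database (node -> {neighbor: cost}),
# as each router would run against its own copy of the flooded topology.
import heapq

def spf(lsdb, source):
    """Return (distance, first-hop) tables from `source` to all nodes."""
    dist = {source: 0}
    first_hop = {}
    pq = [(0, source, None)]
    while pq:
        d, node, hop = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue          # stale queue entry
        for nbr, cost in lsdb[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                # When leaving the source, the first hop is the neighbor.
                first_hop[nbr] = nbr if node == source else hop
                heapq.heappush(pq, (nd, nbr, first_hop[nbr]))
    return dist, first_hop
```

The first-hop table is exactly what populates a routing table: for each destination, forward to `first_hop[dest]`.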
- a primary advantage of a link state routing protocol is that complete knowledge of the topology allows routers to calculate routes that satisfy particular criteria. This can be useful for traffic engineering purposes, where routes can be constrained to meet particular quality of service (QoS) requirements.
- FIG. 1A illustrates a network configured to implement conventional preferred path routing.
- the network 100 includes a central entity 103 (also referred to herein as a "controller") and two network elements (NEs) 150 and 154 (also referred to herein as "nodes"), which are interconnected by links 160.
- the central entity 103 may be a Path Computation Element (PCE), which is described in Internet Engineering Task Force (IETF) Request for Comments (RFC) 8281, entitled "Path Computation Element Communication Protocol (PCEP) Extensions for PCE-Initiated LSP Setup in a Stateful PCE Model," by E. Crabbe, dated December 2017, which is incorporated by reference in its entirety.
- Each of NEs 150 and 154 may be a physical device, such as a router, a bridge, a virtual machine, a network switch, or a logical device configured to perform switching and routing using the preferred path routing mechanisms disclosed herein.
- NEs 150 and 154 may be headend nodes positioned at an edge of the network 100.
- NE 150 may be an ingress node at which traffic (e.g., control packets and data packets) is received, and NE 154 may be an egress node from which traffic is transmitted.
- the links 160 may be wired or wireless links or interfaces interconnecting each of the NEs 150 and 154 together and interconnecting each of the NEs 150 and 154 to the central entity 103. While NEs 150 and 154 are shown in FIG. 1A as headend nodes, it should be appreciated that either of NEs 150 and 154 may otherwise be an intermediate node or any other type of NE. Although only two NEs 150 and 154 are shown in FIG. 1A, it should be appreciated that the network 100 shown in FIG. 1A may include any number of NEs.
- the central entity 103 and NEs 150 and 154 are configured to implement various packet forwarding protocols, such as, but not limited to, MPLS, IPv4, IPv6, and Big Packet Protocol.
- the NEs 150 and 154 may communicate with the central entity 103 in both directions. That is, the central entity 103 may send southbound communications from the central entity 103 to the NEs 150 and 154 using various protocols, such as OPENFLOW, Path Computation Element Protocol (PCEP), or NetConf/Restconf in conjunction with a YANG data model.
- the YANG data model is described by the LSR Working Group draft document, entitled "YANG data model for Preferred Path Routing," by Y.
- the NEs 150 and 154 may also send northbound communications from the NEs 150 and 154 to the central entity 103 using various protocols, such as Border Gateway Protocol (BGP) - Link State (LS) or IGP static adjacency. Additionally, it is appreciated that other well-known protocols may be used in the layer 2 domain for similar functionality.
- the central entity 103 generates paths between a source and a destination of the network using a network topology of the network 100 stored at the central entity 103.
- the central entity 103 may determine the network topology using advertisements sent by each of the NEs 150 and 154 in the network 100, where the advertisements may include prefixes, traffic engineering (TE) information, IDs of adjacent NEs, links, interfaces, ports, and routes.
- the central entity 103 determines a shortest path between a source and destination and one or more PPRs between the source and destination.
- a shortest path refers to a path between the source and the destination that is determined based on a metric, such as, for example, a cost or weight associated with each link on the path, a number of NEs on the path, a number of links on the path, etc.
- a shortest path may be computed for a destination using Dijkstra's Shortest Path First (SPF) algorithm.
- Alternatively, a non-shortest path may be computed; for example, a custom path may be created based on one or more application or service requirements. It is also appreciated that calculation of the shortest path may also be applied in a Layer 2 ("L2") routing environment, such as 802.1aq, as further described below.
- the PPR may be a path that deviates from the shortest path computed for a particular source and destination.
- the PPRs may be determined based on an application or server request for a path between a source and destination that satisfies one or more network characteristics (such as TE characteristics obtained by the central entity through BGP-LS or PCEP) or service requirements.
- the PPRs and the shortest paths may each comprise a sequential ordering of one or more NEs 150 and 154 on the PPR and/or one or more links 160 on the PPR, which may be identified by labels, addresses, or IDs.
- An example of the shortest path and the PPR path will be further described below with reference to FIG. 2.
- each network element in the network 100 may be configured to store PPR information 130 received from the central entity 103.
- the PPR information includes a PPR-ID, which identifies the PPR and one or more path description elements (PDEs), where the PDEs describe each of the network elements on the PPR in sequential order.
- PPR information may be sent by each network element using a link state message (e.g., LSP) to the other network elements in the network 100, as described below.
- the message may include PPR information 170 (i.e., the PPR path description), such as a PPR-ID 171 and one or more PDEs 173, each describing an element on the PPR path, an example of which is shown in FIG. 1B and described below. The PDEs 173 include flags 175 (a flood bit, a down bit, an attach bit and an ultimate fragment bit), which are described in the link state routing (LSR) Working Group draft document entitled "Preferred Path Routing (PPR) in IS-IS," dated January 9, 2020, by U. Chunduri, et al. (hereinafter, "Chunduri"), which is incorporated by reference herein in its entirety.
- each of the NEs 150 and 154 in the network 100 that receive the message first determine whether the NE 150 or 154 is identified in the PDEs 173. If so, then the NE 150 or 154 updates a locally stored forwarding database to indicate that data packets including this particular PPR-ID should be routed along the path identified by the PPR information instead of the predetermined shortest path, calculated using SPF.
- the NE 150 or 154 inspects the data packet to determine whether a PPR-ID is included in the data packet.
- the PPR-ID may be included in a header of the data packet. If a PPR-ID is included in the data packet, the NE 150 or 154 performs a lookup on the locally stored forwarding database to determine the next PDE associated with the PPR-ID identified in the data packet.
- the PDE in the locally stored forwarding database indicates a next hop (another network element, link, or segment) by which to forward the data packet.
- the NE 150 or 154 forwards the data packet to the next hop based on the PDE indicated in the locally stored forwarding database. In this way, the NEs 150 or 154 in the network are configured to transmit data packets via the PPR instead of the shortest path.
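The per-packet decision described in the preceding paragraphs can be sketched as a single lookup: forward along the PPR when the header carries a known PPR-ID, otherwise fall back to the SPF-computed next hop. The packet representation and table names below are illustrative assumptions:

```python
# Sketch of the forwarding decision at an NE: PPR lookup first, SPF fallback.
def next_hop(packet, ppr_table, spf_table):
    """packet: dict with a 'dest' field and an optional 'ppr_id' header.
    ppr_table maps PPR-ID -> next PDE; spf_table maps dest -> SPF next hop."""
    ppr_id = packet.get("ppr_id")
    if ppr_id is not None and ppr_id in ppr_table:
        return ppr_table[ppr_id]       # next hop on the preferred path
    return spf_table[packet["dest"]]   # default shortest-path forwarding
```

Note that an unknown PPR-ID degrades gracefully to shortest-path forwarding rather than dropping the packet, which is one plausible policy; the patent itself does not mandate this fallback.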
- FIG. 1A shows network 100 comprising a central entity 103 configured to determine and send PPR information 170 to the NE 150 or 154.
- the NE 150 or 154 may receive the PPR information 170 through other sources as well.
- an operator can store the information on one of the NEs 150 and 154 in the network 100 to include and store the traffic engineered source routed path information.
- the NE 150 or 154 can still be configured to send messages including the PPR information to the other NEs 150 and 154 in the network 100.
- FIG. 1B illustrates PPR path description information with services.
- the data plane identifier, PPR-ID 171, describes a path through the network 100.
- the data plane type and corresponding PPR-ID 171 can be specified in a link state packet (or link state message) advertised in the network that includes PPR path description information 170.
- the PPR-ID type allows data plane extensibility for PPR, and is currently defined for IPv4, IPv6, SR-MPLS and SRv6 data planes.
- This disclosure further extends the data plane types to include Transparent Interconnection of Lots of Links (TRILL), Shortest Path Bridging (SPB)-MAC (SPB-M) and SPB-VID (SPB-V), as described in more detail below.
- the path identified by the PPR-ID 171 is described as a set of Path Description Elements (PDEs) 173, each of which represents a segment of the path that is forwarded to the next segment/hop or label of the path description.
- Path attributes 1...n may be applied to ensure that particular service levels (i.e., QoS) are delivered across a path.
- Elements in the path description (i.e., the PDEs 173) may represent topological elements on the path, such as nodes or links.
- the PDEs 173 may also represent non-topological PDEs, such as a service (e.g., QoS), function (e.g., functional behavior of a node) or context (e.g., addresses, packets, fingerprints, etc.) on a particular node.
- at a particular node on the path, service-x is applied and function-x is executed.
- these services and functions are pre-provisioned on the particular nodes in the network 100, can be advertised in IGPs, and are known to the central entity 103 along with the link state database of the IGP that is used in the underlying network.
- the PPR path can be one of two types: a strict PPR or a loose PPR, described below.
- FIG. 2 illustrates a network showing shortest path routing and preferred path routing.
- Traditional routing in a network 200 is based on shortest path computations, using for example IGPs, for all prefixes in the network 200.
- An example of a shortest path in the network 200 is shown from network element Rs to network element Rd using dashed lines, the path of which is calculated as discussed above (for example, using Dijkstra's algorithm).
- route computation is based on a specific path described along with the prefix, as opposed to the shortest path towards the destination node (or egress node). Instead of using the next hop of the shortest path towards the destination, the next hop towards the next node in the path description (FIG. 1B) is used. This allows for explicit path and per-hop processing, and optionally allows QoS or resources to be reserved along the path (using the afore-mentioned PDEs).
- the PPR path may be advertised in IGP along with a data plane identifier (PPR-ID). Using this technique, any packet identified with the PPR-ID may use the PPR path instead of the IGP computed shortest path to the destination indicated by the PPR-ID.
- packets destined to the PPR-ID use the PPR path instead of the IGP computed shortest path. In general, this is accomplished by IGP network elements (nodes) processing the PPR path. If an IGP node finds itself in the PPR path, it sets the next hop towards the PPR-ID according to the PPR path.
- network 200 is similar to network 100 (FIG. 1A).
- Network 200 includes a central entity 103, network element Rs, network element Rd, and network elements (NEs) R1 - R20.
- Network element Rs is an ingress or head-end node
- network element Rd is an egress or another head-end node.
- Each of the bi-directional links between network elements are associated with a metric.
- the bi-directional link metric (e.g., R15 - R16) may be a cost associated with a timing to transmit a packet across a link, a distance that the packet is transmitted across, a physical cost of transmitting a packet across a link, a bandwidth proportion used by transmitting a packet across a link, a number of intermediary nodes present on a link between two end nodes, etc.
- the bi-directional link metric for all the links connecting any two network elements is a value of 1, except for those links identified with a value of 10. These metrics may then be used to compute a path in the network.
- Rs may be configured to receive a traffic engineered route or explicit source routed path information from a central entity 103 (PCE or Controller).
- the received path information from the central entity 103 includes PPR information, which is identified using a PPR-ID 171.
- the PPR path description information 170 is encoded as an ordered list of PPR-PDEs PDE-1 to PDE-n from a source node (e.g. Rs) to a destination node (e.g. Rd) in the network 200.
- the PPR-PDE information represents both topological and non-topological segments and specifies the actual path towards the Rd.
- the path Rs-R15-R7- R19-R20-R18-R13-R14-Rd can be attached with a PPR-ID (e.g. PPR-ID 100).
- the path and PPR-ID 100 are signaled to the network elements Rs, Rd and R1 - R20 of the network 200 in an underlying IGP as a PPR.
- only network elements Rs, Rd and R1 - R20 that find themselves in the path description have to act on the path.
- R15 finds that its node information is encoded as a PPR-PDE in the path.
- a loose path is described since not every node in the path from Rs to Rd is specified.
- between R15 and R7, there is another network element, R16 (in addition to R6), over which R7 could be reached.
- the path type (loose or strict) is explicitly indicated in the PPR-ID description.
- Network element R15 acts on the path type (set by a flag) and, in the case of a loose path, programs the local hardware with two labels/SIDs, using PPR-ID 100 as the bottom label and the node SID of R7 as the top label.
- Intermediate nodes like R16 do not need to be aware of the PPR or that data packets are being transported along a PPR path. Rather, they simply forward the packet based on the top label, in this case to R7.
- If the path described were a strict path, the actual data packet would require only a single label, i.e., PPR-ID 100.
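The loose-versus-strict label programming described above can be sketched directly: a loose path pushes the next explicit node's SID above the PPR-ID so that intermediate nodes (like R16) forward on the top label alone, while a strict path needs only the PPR-ID. The function name and list representation are illustrative assumptions:

```python
# Sketch of the label stack a head-end/waypoint programs for a PPR;
# the list is ordered top-of-stack first.
def ppr_label_stack(ppr_id_label, next_node_sid, loose):
    """For a loose path, push the SID of the next explicit node on top of
    the PPR-ID; intermediate nodes forward on the top label and never need
    to know about the PPR. A strict path carries only the PPR-ID."""
    if loose:
        return [next_node_sid, ppr_id_label]   # [top label, bottom label]
    return [ppr_id_label]
```

In the FIG. 2 example, R15 would program `[SID-of-R7, PPR-ID 100]`, and R16 would pop nothing, simply forwarding toward R7 on the top label.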
- FIGS. 3A and 3B illustrate example advertisements and fields within an IS-IS LSP.
- the advertisement 300 may be included in an existing advertisement message of IS-IS or may be a new message created for IS-IS that may be advertised using IS-IS LSPs.
- the IS-IS LSP is advertised in Layer 2 using a MAC level network. It is appreciated that while the disclosed embodiment specifically refers to IS-IS and LSPs, any link state protocol may be applied to flood the network using link state packets.
- the advertisement 300 has four logical sections (or Sub-TLVs): encoding of the PPR-Prefix (IS-IS prefix), encoding of the PPR-ID, encoding of the path description with an ordered PDE Sub-TLV, and a set of optional PPR attributes that can be used to describe one or more parameters of the path.
- multiple instances of the TLV may be advertised in IS-IS LSPs with different PPR-ID types (i.e., data plane types) and with corresponding PDE Sub-TLVs.
- the format of the advertisement 300 includes a type field 301, a length field 303, a PPR flags field 305, a Fragment ID field 307, an MT-ID field 309, an algorithm field 311, and the four Sub-TLVs: a PPR-Prefix Sub-TLV 313, a PPR-ID Sub-TLV 315, a PPR-PDE Sub-TLV 317, and a PPR-Attribute Sub-TLV 319.
- the type field 301 carries a value assigned by the Internet Assigned Numbers Authority (IANA), and the length field 303 includes the total length of the value field in bytes.
- the PPR flags field 305 includes flag bits as defined in Chunduri.
- Fragment ID field 307 is an 8-bit identifier value (0-255) of the TLV fragment.
- MT-ID field 309 is a multi-topology identifier that is defined in Network Working Group RFC 5120, entitled "M-ISIS: Multi Topology (MT) Routing in Intermediate System to Intermediate Systems (IS-ISs)," which is incorporated by reference herein in its entirety.
- the algorithm field 311 is a one-octet value representing the route computation algorithm (i.e., the computation towards the PPR-ID occurs per MT-ID/algorithm pair).
- the PPR-Prefix Sub-TLV 313 is a variable size Sub-TLV representing the destination of the path being described.
- the PPR-ID Sub-TLV 315 is a variable size Sub-TLV defining PPR-ID 171 of the PPR path, further described below with reference to FIG. 3B.
- the PPR-PDE Sub-TLV 317 includes a variable number of ordered PDE Sub-TLVs 173 representing the PPR path, and the PPR-Attribute Sub-TLV 319 represents a variable number of PPR attribute Sub-TLVs that describe the path attributes used to regulate traffic across network elements.
- the traffic accounting parameters are further described in the draft document entitled "Traffic Accounting for MPLS Segment Routing Paths," by S.
- While the PPR Sub-TLVs 313 - 319 are limited in the described embodiment to these fields, it should be appreciated that the PPR Sub-TLVs 313 - 319 may include additional fields as necessary to include information regarding the PPR within the network.
- the PPR-ID field 333 is a variable size Sub-TLV defining PPR-ID 171 of the PPR path.
- the PPR-ID field 333 is the data plane identifier in the packet header and may be any data plane defined in the PPR-ID type field 327.
- the PPR-ID Sub-TLV 315 includes a type field 321, a length field 323 and a PPR-ID flags field 325, each of which is described above. Further fields included in the PPR-ID Sub-TLV 315 are the PPR-ID type field 327, PPR-ID length field 329, PPR-ID Mask Length field 331, and PPR-ID field 333.
- the PPR-ID type field 327 includes a value indicating a data plane type of the PPR-ID 171 being advertised.
- a type of value 1 may indicate a data plane type of SR-MPLS SID Label
- a type of value 2 may indicate a data plane type of native IPv4 addresses or prefixes
- a type of value 3 may indicate a data plane type of native IPv6 addresses or prefixes
- a type of value 4 may indicate a data plane type of IPv6 SID in SRv6 with SRH.
- PPR-ID length field 329 includes a length of the PPR-ID Sub-TLV field 333 in octets and may depend on the PPR-ID type defined in the PPR-ID type field 327.
- the PPR-ID mask length field 331 is applicable for certain PPR-ID types 327, namely types 2, 3, and 4, and may include the length of the PPR-ID Prefix 313 in bits. However, the PPR-ID mask length field 331 is not applicable to PPR-ID Sub-TLV 315.
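The field layout of the PPR-ID Sub-TLV described above can be made concrete with a byte-packing sketch. The field widths (one octet each for flags, PPR-ID type, PPR-ID length and mask length) and the Sub-TLV type code point are assumptions for illustration only; the IS-IS draft defines the authoritative encoding.

```python
# Illustrative encoder for a PPR-ID Sub-TLV; field widths and the Sub-TLV
# type value (2) are assumed for the sketch, not taken from the draft.
import struct

def encode_ppr_id_subtlv(flags, ppr_id_type, mask_len, ppr_id: bytes):
    """Pack flags (325), PPR-ID type (327), PPR-ID length (329),
    mask length (331) and the PPR-ID value (333) behind a type/length header."""
    body = struct.pack("!BBBB", flags, ppr_id_type, len(ppr_id), mask_len)
    body += ppr_id
    SUBTLV_TYPE = 2                      # hypothetical code point
    # The length octet counts the value field only, per usual TLV practice.
    return struct.pack("!BB", SUBTLV_TYPE, len(body)) + body
```

For example, a type-2 (native IPv4) PPR-ID of 10.0.0.1/32 would carry a 4-byte PPR-ID value and a mask length of 32.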
- FIG. 4A illustrates an example Ethernet network configuration using a link state protocol.
- the link state protocol is a Transparent Interconnection of Lots of Links (TRILL) protocol that allows Ethernet switches (e.g., fabric switches) to function more like routing devices.
- TRILL allows routing bridges (RBridges) to be coupled in an arbitrary topology, without the risk of looping, by implementing routing functions in switches and including a hop count in a TRILL header.
- the RBridges implement a link state protocol, such as IS-IS, which allows the RBridge to calculate distribution trees for delivery of frames either to destinations that are unknown or to multicast/broadcast groups.
- the network implements a TRILL protocol in which a packet is encapsulated using a TRILL header that is forwarded towards its destination (indicated by the egress RBridge Nickname, discussed below) along the shortest path calculated by the link state protocol.
- TRILL applies network layer routing protocols to the link layer and, with knowledge of the entire network, uses that information to support Layer 2 multi-pathing.
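The loop-prevention role of the TRILL header's hop count, described above, can be sketched with a simplified packet model. Field sizes loosely follow RFC 6325 (16-bit nicknames, a small hop count), but this is a behavioral illustration, not a wire-accurate encoder:

```python
# Simplified model of TRILL forwarding: each transit RBridge decrements the
# hop count, so a packet caught in a transient loop is eventually dropped.
from dataclasses import dataclass

@dataclass
class TrillPacket:
    ingress_nickname: int   # 16-bit nickname of the ingress RBridge
    egress_nickname: int    # 16-bit nickname of the egress RBridge
    hop_count: int
    inner_frame: bytes

def transit_forward(pkt: TrillPacket) -> TrillPacket:
    """Forward at a transit RBridge; drop when the hop count is exhausted."""
    if pkt.hop_count <= 1:
        raise ValueError("hop count exhausted: packet dropped")
    return TrillPacket(pkt.ingress_nickname, pkt.egress_nickname,
                       pkt.hop_count - 1, pkt.inner_frame)
```

Forwarding itself is driven by the egress nickname, which is why the virtual RBridge nickname (e.g., 445 in FIG. 4A) suffices to steer frames toward the service device.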
- Fibre Channel over Ethernet (FCoE) enables FC traffic across existing high-speed Ethernet infrastructure and converges storage and IP protocols onto a single cable transport and interface. FCoE also reduces latency and improves overall network bandwidth utilization.
- a fabric switch 400 includes switches 412, 414 and 416.
- Fabric switch 400 also includes switches 406, 408, 422 and 424, each with a number of edge ports which can be coupled to external devices.
- switches 406 and 408 are coupled with end devices 462 and 464 via Ethernet edge ports.
- the switches 406 and 408 share the MAC addresses with the other switches in fabric switch 400.
- switches 422 and 424 are coupled to devices 402 and 404, also via Ethernet edge ports.
- Devices 402 and 404 provide service 401a to fabric switch 400.
- Examples of devices 402 and 404 include, but are not limited to, a firewall, load balancer, intrusion detection/protection device, network analyzer, and network virtualizer.
- the devices 402 and 404 can be a physical device or a virtual machine running on a physical device.
- devices 402 and 404 are typically deployed in the data path of upstream or downstream traffic (which can be referred to as "north-south traffic").
- a device can be deployed at an aggregation router.
- the switches in fabric switch 400 are TRILL RBridges and in communication with each other using a TRILL protocol (i.e., routing bridges that use TRILL protocol). These RBridges have TRILL-based inter-switch ports for connection with other TRILL RBridges in fabric switch 400.
- switches 406 and 408 in conjunction with each other, virtualize devices 402 and 404 as a virtual device 401 coupled to fabric switch 400 via a virtual member switch 440.
- virtual switch 440 is a TRILL RBridge and assigned a virtual RBridge identifier 445.
- RBridges 406 and 408 send notification messages to RBridges 406, 408, 412, 414 and 416.
- the notification message specifies that virtual RBridge 440, which is associated with virtual RBridge identifier 445, is reachable via RBridges 406 and 408.
- RBridges 406 and 408 specify, in the same message or in a different notification message, that virtual device 401 is associated with service 401a and coupled to virtual switch 440.
- end devices 462 and 464 belong to a subnet that requires service 401a.
- end device 462 sends a data frame (not shown) to end device 464.
- RBridge 406 receives the data frame and detects the requirement of service 401a for the data frame.
- RBridge 406 is aware of virtual device 401 and virtual RBridge 440.
- RBridge 406 encapsulates the data frame in a TRILL packet with virtual RBridge identifier 445 as the egress RBridge identifier and forwards the TRILL packet toward virtual RBridge 440.
- RBridge 422 receives the TRILL packet via intermediate RBridge 412 and recognizes that the TRILL packet is destined to virtual RBridge 440. Since virtual RBridge 440 is associated with service 401a, and the TRILL packet includes virtual RBridge identifier 445, RBridge 422 detects that the encapsulated data frame requires service 401a. RBridge 422 extracts the data frame from the TRILL packet and forwards the data frame to locally coupled device 402.
- Upon receiving back the data frame from device 402, RBridge 422 identifies the destination MAC address of end device 464, encapsulates the data frame in a TRILL packet with the RBridge identifier of RBridge 408 as the egress RBridge identifier, and forwards the TRILL packet toward RBridge 408.
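The service-insertion walkthrough above can be condensed into a small decision routine. This is an illustrative sketch, not behavior defined by the TRILL specification; the nickname values, the service mapping, and the placeholder `apply_service` helper are assumptions drawn from the example topology:

```python
# Decision logic at a service RBridge such as RBridge 422: packets whose
# egress nickname is the virtual RBridge are decapsulated, handed to the
# service device, and re-encapsulated toward the real egress RBridge.

SERVICE_MAP = {445: "service-401a"}  # virtual RBridge nickname -> service
REAL_EGRESS_NICKNAME = 408           # RBridge coupled to end device 464

def apply_service(frame, service):
    return frame  # placeholder for firewall/load-balancer processing

def handle_trill_packet(egress_nickname, inner_frame):
    """Return (action, nickname, frame) for a received TRILL packet."""
    if egress_nickname in SERVICE_MAP:
        serviced = apply_service(inner_frame, SERVICE_MAP[egress_nickname])
        return ("re-encapsulate", REAL_EGRESS_NICKNAME, serviced)
    # Not service-bound: forward along the shortest path unchanged.
    return ("forward", egress_nickname, inner_frame)
```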
- PPR is extended by adding a Sub-TLV to map the PPR path description information (FIG. 1 B) to an Egress RBridge Nickname of a TRILL network, such as the network in FIG. 4A.
- This enables the introduction of non-shortest path traffic steering into an Ethernet network such that an operator can dynamically introduce new paths in response to customer and application needs.
- this is accomplished by adding a new PPR-ID type to the PPR-ID Sub-TLV.
- Type 1 SR-MPLS SID/Label
- Type 2 Native IPv4 Address/Prefix
- Type 3 Native IPv6 Address/Prefix
- Type 4 IPv6 SID in SRv6 with SRH.
- a new type of PPR-ID Sub-TLV is created in which the PPR-ID Sub-TLV is an Egress RBridge Nickname of a TRILL network.
- the PPR-ID Sub-TLV 450 includes a type field 452, a length field 454 and a PPR-ID flags field 456. Additional fields in the PPR-ID Sub-TLV 450 include the PPR-ID type field 470, PPR-ID length field 472, PPR-ID Mask Length field 474, and PPR-ID field 476.
- the PPR-ID length field 472 includes a length of the PPR-ID field 476 in octets and depends on the PPR-ID type defined in the PPR-ID type field 470.
- the PPR-ID mask length field 474, while applicable for PPR-ID types 2, 3, and 4, is not applicable to the new PPR-ID type in PPR-ID Sub-TLV 450.
- the PPR-ID type field 470 is a new type (it does not include Types 1-4 described above).
- the PPR-ID type field 470 may include a new classification or value (e.g., a value of 5, or "Type 5") that indicates a data plane type of a TRILL network.
- While the PPR-ID field 476 remains the data plane identifier in the packet header, the value in this field is set to the Egress RBridge Nickname of the network.
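One possible encoding of the new Sub-TLV can be sketched as follows. The field widths (one octet for Type and Length, two octets for the flags, one octet each for the PPR-ID Type, PPR-ID Length, and Mask Length) and the outer Sub-TLV type value are illustrative assumptions; only the mapping of "Type 5" to a 16-bit TRILL egress nickname comes from the text above:

```python
import struct

PPR_ID_TYPE_TRILL = 5  # assumed "Type 5": TRILL egress RBridge Nickname

def encode_ppr_id_subtlv_trill(nickname, sub_tlv_type=0, flags=0):
    """Pack a PPR-ID Sub-TLV carrying a 16-bit TRILL egress nickname.

    Layout (assumed): Type (1), Length (1), PPR-ID Flags (2),
    PPR-ID Type (1), PPR-ID Length (1), PPR-ID Mask Length (1),
    then the PPR-ID value itself (field 476).
    """
    ppr_id = struct.pack("!H", nickname)    # field 476: egress nickname
    body = struct.pack("!HBBB", flags,      # field 456: PPR-ID flags
                       PPR_ID_TYPE_TRILL,   # field 470: PPR-ID type
                       len(ppr_id),         # field 472: length in octets
                       0)                   # field 474: mask, not applicable
    body += ppr_id
    return struct.pack("!BB", sub_tlv_type, len(body)) + body
```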
- end device 462 is an ingress node
- end device 464 is an egress node
- a Layer 2 frame (PDU) at the ingress node 462 uses TRILL encapsulation and MAC routing using the egress node's 464 nickname (i.e., the PPR-ID 476, which identifies the shortest path or graph).
- the control plane pre-programs the egress node’s 464 PPR-ID in all the nodes of the network (e.g., the network illustrated in FIG. 4A) described in the PPR description information.
- When an L2 frame with a traffic matching rule (e.g., set up by the operator) is received at the ingress node 462, the L2 frame is encapsulated with the corresponding PPR-ID 476 of the egress node 464, as opposed to the egress node's 464 shortest path/default nickname. After this is accomplished, the network routes the L2 frames per the PPR description information instead of the shortest path. In one embodiment, extending the PPR in this manner eliminates the need to pre-provision policies at the ingress node.
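The ingress behavior just described (match a rule, then encapsulate with the PPR-ID instead of the default nickname) can be sketched as follows; the frame representation and the VLAN-based matching rule are illustrative assumptions:

```python
# Sketch of ingress node 462: frames matching an operator-configured
# rule are encapsulated with the pre-programmed PPR-ID instead of the
# egress node's shortest-path (default) nickname.

DEFAULT_EGRESS_NICKNAME = 464   # egress node's default nickname
PPR_ID = 476                    # PPR-ID pre-programmed by the control plane

def matches_rule(frame):
    # Operator-defined classifier, e.g. on VLAN or destination MAC.
    return frame.get("vlan") == 100

def ingress_encapsulate(frame):
    egress = PPR_ID if matches_rule(frame) else DEFAULT_EGRESS_NICKNAME
    return {"egress_nickname": egress, "payload": frame}
```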
- FIG. 5A illustrates an example of a link state protocol controlled Ethernet network.
- One type of network technology is known as SPB, in which a link state protocol is used for advertising.
- a set of standards for implementing SPB is specified by the Institute of Electrical and Electronics Engineers (IEEE), and is identified as IEEE 802.1aq.
- 802.1aq allows for shortest path forwarding in an Ethernet mesh network context utilizing multiple equal cost paths, which in turn allows SPB to support Layer 2 topologies. This supports two Ethernet encapsulating data paths: 802.1ad (Provider Bridges, or PB) and 802.1ah (Provider Backbone Bridges, or PBB).
- SPB supports two modes of operation, SPB-VID (SPB-V) mode and SPB-MAC (SPB-M) mode, where MAC stands for "media access control" and VID stands for "VLAN identifier."
- In SPB-V, multiple VLANs can be used to distribute load on different shortest path trees, and SPB-V is used in PB networks implementing VLAN, PB, or PBB encapsulation.
- In SPB-M, service instances are delineated by I-SIDs (described above), but VLANs can be used to distribute load on different shortest path trees.
- SPB-M is conventionally used in a PBB network that implements PBB encapsulation.
- SPB uses the Intermediate System to Intermediate System (IS-IS) routing protocol, extensions of which for SPB are documented in RFC 6329, which is incorporated by reference herein in its entirety.
- IS-IS can be used to synchronize a common repository of information so as to condense SPB control and configuration into a single control protocol, where the provider B-MAC, the Virtual LAN Identifier (VID) for SPB-V, the Backbone VID (B-VID) for SPB-M, and the Service Identifier information in the form of I-SIDs are all global to the network.
- Connectivity can be constructed using the IS-IS distributed routing system where each node independently computes the forwarding paths and populates the local filtering database (FDB) based on the information in the routing system database.
- One example SPB network is illustrated in FIG. 5A.
- the example is an SPB-M network 500 and includes network elements 501 to 507.
- Each of the network elements is associated with a B-MAC address.
- the B-MAC address of each network element is "4455-6677-00xx," with the last nibble of the B-MAC address different for each element, as shown in Table I below.
- Links 511 are illustrated between the network elements 501 to 507, and interface indexes are shown as a number next to each of the links 511.
- an interface index of network element 504 is shown as a "2" on the link 511 between network elements 504 and 505.
- UNI ports "i1," which are the customer-to-SPB attachment point, are shown with the I-SID (Ethernet Services Instance Identifier used for logical grouping for E-LAN/LINE/TREE UNIs).
- an ECT-Algorithm (e.g., the default ECT algorithm 00-80-C2-01) is assigned to B-VID 100.
- the ECT-Algorithm picks the equal cost path with the lowest BridgeID.
- the 1-hop shortest paths are all direct, and the 2-hop shortest paths, which are symmetric, are: {501-502-503, 501-502-505, 501-502-507, 506-502-505, 504-502-507, 504-501-506, 505-502-507, 506-502-503, 504-502-503}.
- An example of the shortest path of 501-502-503 is shown by the dashed lines.
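The listed 2-hop paths can be reproduced with a small breadth-first search over the example topology. The edge set is inferred from the path list, and breaking ties by the lexicographically lowest sequence of node IDs stands in for the "lowest BridgeID" ECT tie-break; both are assumptions:

```python
# Breadth-first search reproducing the symmetric shortest paths listed
# above, with a lowest-ID tie-break between equal-cost paths.

EDGES = {(501, 502), (502, 503), (502, 504), (502, 505),
         (502, 506), (502, 507), (501, 504), (501, 506)}
NODES = sorted({n for edge in EDGES for n in edge})
ADJ = {n: sorted(m for a, b in EDGES
                 for m in ((b,) if a == n else (a,) if b == n else ()))
       for n in NODES}

def shortest_path(src, dst):
    """Return the lowest-ID equal-cost shortest path from src to dst."""
    best, frontier = None, [[src]]
    while frontier and best is None:
        nxt = []
        for path in frontier:
            for nbr in ADJ[path[-1]]:
                if nbr == dst:
                    cand = path + [nbr]
                    if best is None or cand < best:
                        best = cand
                elif nbr not in path:
                    nxt.append(path + [nbr])
        frontier = nxt
    return best
```

For example, `shortest_path(504, 506)` prefers 504-501-506 over the equal-cost 504-502-506 because 501 is the lower intermediate ID, matching the path list above.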
- the forwarding table of network element 501 directs unicast traffic toward B-MACs 4455-6677-0007, 4455-6677-0003, and 4455-6677-0005 via interface "2," while its single-hop paths are all direct, as shown in the FDB of Table I below.
- In the case where network element 501 is the head of the multicast distribution tree (MDT), network element 501 also originates a multicast to nodes 4455-6677-0003, 4455-6677-0005, and 4455-6677-0007 and is a transmitter of I-SID 1, which network elements 503, 505, and 507 all wish to receive.
- the full unicast (U) and multicast (M) table for network element 501 is shown in Table I below, which shows the incoming interface ("IN/IF"), the destination address, the B-VID, and the outgoing interfaces ("OUT/IF(s)").
- Network element 502, at the center of the network 500, has direct 1-hop paths to all other nodes 501 and 503-507.
- FDB entries are used depending on which member it is forwarding/replicating on behalf of.
- network element 502 is on the shortest path between network elements 501 and each of network elements 503, 505, 507.
- network element 502 replicates frames from network element 501 for I-SID 1 out on interfaces {if/2, if/3, and if/5} in order to reach nodes 503, 505, and 507.
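The replication decision at network element 502 can be sketched as follows; the interface map and receiver list mirror the example, while the `on_tree` callback (whether a receiver's distribution tree from the source crosses this node) is an assumed abstraction:

```python
# Sketch of how transit node 502 derives its multicast replication set
# for I-SID 1 sourced at network element 501.

IFACE = {503: "if/2", 505: "if/3", 507: "if/5"}  # next-hop interface at 502
RECEIVERS = {"I-SID 1": [503, 505, 507]}         # nodes wishing to receive

def replication_interfaces(isid, on_tree):
    """Return the OUT interfaces on which 502 must replicate frames of
    `isid`; `on_tree(r)` says whether receiver r is reached via 502."""
    return sorted(IFACE[r] for r in RECEIVERS[isid] if on_tree(r))
```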
- the PPR paths described above are a path structure which is an ordered linear list of PDEs starting with a sender PDE followed by zero or more transit PDEs and finishing with a destination PDE.
- a separate PPR-ID is required for every PPR path.
- an IS-IS PPR-Tree TLV may be encoded.
- This TLV includes encoding of the PPR-MAC address, encoding of a Preferred Path Graph (PPG)-ID, encoding of a path description with ordered PDE Sub-TLVs (belonging to one or more Branch-IDs), and a set of optional PPR attribute Sub-TLVs, which can be used to describe PPR Graph common parameters. Multiple instances of this TLV may be advertised in IS-IS LSPs with different PPG-ID Types and with corresponding Branch-ID/PDE Sub-TLVs.
- the PPG-ID places policies at the ingress nodes in the network that define how to classify the incoming traffic to the correct PPG.
- the PPR-ID remains as the forwarding identifier in the PPR graph.
- the PDE Sub-TLV has a flags field that includes flag bits: an "S" (source) bit and a "D" (destination) bit, among other reserved bits.
- The "D" bit allows for more than one PPR-ID in a PPR Graph.
- When the PPR graph uses one "D" bit, the PPR graph is a multi-point to point graph that uses one PPR-ID per path.
- When more than one "D" bit is set in the PDEs of the PPR description information, there is a corresponding PPR-ID for each PDE with a set flag ("S" or "D") bit.
- the graph in this case is a multi-point to multi-point graph, which allows multiple ingress nodes to deliver traffic on the traffic engineered path to the relevant destination as encoded with the PPR-ID.
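One way to read the flag-bit rules above is as a small classification routine; the PDE representation here is an assumed simplification of the Sub-TLV contents:

```python
# Classify a PPR graph from the "S"/"D" flag bits of its PDEs: one "D"
# bit gives a multi-point to point graph with one PPR-ID; more than one
# gives a multi-point to multi-point graph with a PPR-ID per flagged PDE.

def classify_ppr_graph(pdes):
    """pdes: list of dicts with 'id', 's' (source bit), 'd' (dest bit)."""
    dests = [p["id"] for p in pdes if p["d"]]
    if len(dests) == 1:
        return ("mp2p", dests)  # one PPR-ID for the whole graph
    # One PPR-ID for each PDE with a set "S" or "D" flag bit.
    return ("mp2mp", [p["id"] for p in pdes if p["s"] or p["d"]])
```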
- PPR is extended by adding a Sub-TLV to map the PPR path description information (FIG. 1B) to an Ethernet address in an SPB (SPB-M or SPB-V) network. Similar to the embodiment of FIG. 4B, this enables the introduction of non-shortest path traffic steering into an Ethernet network such that an operator can dynamically introduce new paths in response to customer and application needs. Additionally, by extending the PPR in this manner, pre-provisioning policies at the ingress node may be eliminated. In one embodiment, this is accomplished by adding a new PPR-ID type to the PPR-ID Sub-TLV.
- In the embodiment of FIG. 5B, the new type of PPR-ID Sub-TLV is created in which the PPR-ID Sub-TLV is a destination address of the PPR path in an SPB-M network.
- In the embodiment of FIG. 5C, the new type of PPR-ID Sub-TLV is created in which the PPR-ID Sub-TLV is a destination address and the VID of the PPR path in an SPB-V network.
- the PPR-ID Sub-TLVs 510 and 520 each include a type field 512, a length field 514, a PPR-ID flags field 516, a PPR-ID length field 520, and a PPR-ID mask length field 522 (which may be set to zero or not used). These fields are similar to those discussed above with reference to FIG. 4B. Additional fields in the PPR-ID Sub-TLV 510 include the PPR-ID type field 518 and the PPR-ID field 524.
- the PPR-ID type field 518 is a new type (it does not include Types 1-4 described above).
- the PPR-ID type field 518 may include a new classification or value (e.g., a value of 6, or "Type 6") that indicates a data plane type of SPB-M.
- While the PPR-ID field 524 remains the data plane identifier in the packet header, the value in this field is set to the destination address in the SPB-M network, such as illustrated in FIG. 5A.
- Additional fields in the PPR-ID Sub-TLV 520 include the PPR-ID type field 526 and PPR-ID field 528.
- the PPR-ID type field 526 is a new type (it does not include Types 1-4 described above).
- the PPR-ID type field 526 may include a new classification or value (e.g., a value of 7, or "Type 7") that indicates a data plane type of SPB-V.
- While the PPR-ID field 528 remains the data plane identifier in the packet header, the value in this field is set to the destination address plus the VLAN ID (VID) in the SPB-V network, such as illustrated in FIG. 5A.
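A minimal sketch of constructing the PPR-ID value for the two new types, assuming Type 6 carries the 48-bit destination B-MAC and Type 7 appends the 12-bit VID in two octets; the exact on-the-wire layout is an assumption:

```python
import struct

PPR_ID_TYPE_SPB_M = 6  # assumed "Type 6": destination B-MAC
PPR_ID_TYPE_SPB_V = 7  # assumed "Type 7": destination B-MAC + VID

def ppr_id_value(ppr_id_type, mac, vid=None):
    """Build the PPR-ID field (524 or 528). `mac` is 'aa:bb:...' text;
    for SPB-V the 12-bit VID is appended in two octets."""
    value = bytes(int(b, 16) for b in mac.split(":"))
    if ppr_id_type == PPR_ID_TYPE_SPB_V:
        value += struct.pack("!H", vid & 0x0FFF)  # mask to 12 bits
    return value
```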
- an incoming L2 frame is encapsulated with the PPR-ID 528 of the path (i.e., the corresponding MAC address and VLAN ID of the egress node 464), as opposed to the egress node's original MAC address and VLAN ID (which take the frame to the egress via the shortest path).
- the incoming L2 frame is encapsulated with the PPR-ID 524 of the path (i.e., the corresponding MAC address of the egress node 464), as opposed to the egress node's original MAC address (which takes the frame to the egress via the shortest path). Similar to the example of FIG. 4A, the control plane pre-programs the egress node's 464 PPR-ID in all the nodes of the network (e.g., the network illustrated in FIG. 5A) described in the PPR description information.
- the L2 frame is encapsulated with the corresponding PPR-ID 524 (SPB-M) or 528 (SPB-V) of the egress node 464, as opposed to the egress node's 464 shortest path/default address.
- the network routes the L2 frames per the PPR description information instead of the shortest path.
- the path 501-506-502-507-503 is advertised in the control plane (IS-IS) with the corresponding PPR-ID 524 (SPB-M) or 528 (SPB-V), and the PPR-ID is programmed with the next hop towards the egress node 464 via this path, as opposed to the shortest path.
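Using the advertised path as an ordered PDE list, each node's programmed next hop can be derived as below; the list-of-node-IDs representation is an illustrative simplification of the PDE Sub-TLVs:

```python
# Each node on the advertised path programs the PPR-ID with its next
# hop toward the egress along that explicit path, rather than the
# shortest-path next hop.

PATH = [501, 506, 502, 507, 503]   # ordered PDEs; 503 is the egress

def next_hop_for(node, path=PATH):
    """Next hop this node installs for the path's PPR-ID, or None if
    the node is the egress or does not appear on the path."""
    if node in path and node != path[-1]:
        return path[path.index(node) + 1]
    return None
```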
- FIGS. 6A and 6B illustrate an example flow diagram for creating a data path in a network.
- the flow diagram may be computer-implemented methods performed, at least partly, by hardware and/or software components illustrated in the various figures and as described herein.
- the disclosed process may be performed by the nodes disclosed in FIGS. 1A, 2, 4A and 5A.
- the process begins at step 602 with a sending node, which receives PPR description information from a central entity or operator input, along with the PPR-ID.
- the local Link State PDU (LSP) message is updated, and the PPR path description information is flooded to the network using a link state protocol, at step 604.
- the PPR path is computed, and the PPR description information is processed to install a database entry at the sending node.
- a message including the PPR description information is received at step 610.
- the PPR description information includes a path identifier (ID) and a plurality of sequentially ordered topological path description elements (PDEs).
- nodes in the network can forward frames to other nodes according to this description information.
- the forwarding table entry may be optionally forwarded to the next hop node in the network per the PPR path description information, at step 618.
- the next hop node may optionally download the forwarding table entry.
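The receive-side steps (610 through 618) can be condensed into a sketch like the following; the data structures and return values are illustrative assumptions:

```python
# Steps 610-618 condensed: on receiving PPR description information, a
# node installs a forwarding entry only if it appears in the ordered PDE
# list and is not itself the egress; duplicates are ignored.

def process_ppr_message(node_id, ppr_id, pdes, fdb, seen):
    """pdes is the sequentially ordered list of node IDs on the path."""
    if ppr_id in seen:
        return "duplicate"       # already flooded and processed
    seen.add(ppr_id)             # re-flooding to neighbors not shown
    if node_id not in pdes or node_id == pdes[-1]:
        return "not-on-path"
    fdb[ppr_id] = pdes[pdes.index(node_id) + 1]  # next hop toward egress
    return "installed"
```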
- FIG. 7 illustrates an embodiment of a node.
- the node 700 may be configured to implement and/or support the routing mechanisms described herein.
- the node 700 may be implemented in a single node or the functionality of node 700 may be implemented in a plurality of nodes.
- One skilled in the art will recognize that the term network element (NE) encompasses a broad range of devices, of which node 700 is merely an example. While node 700 is described as a physical device, such as a router or gateway, the node 700 may also be a virtual device implemented as a router or gateway running on a server or on generic routing hardware (whitebox).
- the node 700 may comprise a plurality of input/output ports 710/730 and/or receivers (Rx) 712 and transmitters (Tx) 732 for receiving and transmitting data from other nodes, a processor 720 to process data and determine which node to send the data to, and a memory.
- the node 700 may also generate and distribute LSAs to describe and flood the various topologies and/or area of a network.
- While illustrated as a single processor, the processor 720 is not so limited and may comprise multiple processors.
- the processor 720 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs. Moreover, the processor 720 may be implemented using hardware, software, or both.
- the processor 720 includes a network configuration module 722, which may perform processing functions of the central entity 103 or the network elements.
- the network configuration module 722 may also be configured to perform the steps of the methods discussed herein. As such, the inclusion of the network configuration module 722 and associated methods and systems provide improvements to the functionality of the node 700.
- network configuration module 722 effects a transformation of a particular article (e.g., the network) to a different state.
- network configuration module 722 may be implemented as instructions stored in the memory 760, which may be executed by the processor 720.
- the memory 760 stores the network configuration module as instructions, and the processor 720 executes those instructions.
- the memory 760 may comprise a cache for temporarily storing content, e.g., a random-access memory (RAM). Additionally, the memory 760 may comprise a long term storage for storing content relatively longer, e.g., a read-only memory (ROM). For instance, the cache and the long-term storage may include dynamic RAMs (DRAMs), solid-state drives (SSDs), hard disks, or combinations thereof.
- the memory 760 may be configured to store the PPR information 763, which includes PPR-IDs 765 and the PPR-PDEs 767.
- the memory 760 is configured to store the forwarding database 743. In an embodiment, the forwarding database 743 stores entries describing forwarding rules for how a particular network element should forward a data packet that includes a PPR-ID 765 and/or a destination address.
- FIG. 8 illustrates a schematic diagram of a general-purpose network component or computer system.
- the general-purpose network component or computer system 800 includes a processor 802 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 804, and memory, such as ROM 806 and RAM 808, input/output (I/O) devices 810, and a network 812, such as the Internet or any other well-known type of network, that may include network connectivity devices, such as a network interface.
- While illustrated as a single processor, the processor 802 is not so limited and may comprise multiple processors.
- the processor 802 may be implemented as one or more CPU chips, cores (e.g., a multi-core processor), FPGAs, ASICs, and/or DSPs, and/or may be part of one or more ASICs.
- the processor 802 may be configured to implement any of the schemes described herein.
- the processor 802 may be implemented using hardware, software, or both.
- the secondary storage 804 typically comprises one or more disk drives or tape drives and is used for non-volatile storage of data and as an overflow data storage device if the RAM 808 is not large enough to hold all working data.
- the secondary storage 804 may be used to store programs that are loaded into the RAM 808 when such programs are selected for execution.
- the ROM 806 is used to store instructions and perhaps data that are read during program execution.
- the ROM 806 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of the secondary storage 804.
- the RAM 808 is used to store volatile data and perhaps to store instructions. Access to both the ROM 806 and the RAM 808 is typically faster than to the secondary storage 804.
- At least one of the secondary storage 804 or RAM 808 may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein.
- the processor 802, the ROM 806, and the RAM 808 are changed, transforming the node 800 in part into a particular machine or apparatus, e.g., a router, having the novel functionality taught by the present disclosure.
- each process associated with the disclosed technology may be performed continuously and by one or more computing devices.
- Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.
Abstract
The disclosure relates to creating a data path using a link state protocol. A node in an Ethernet network receives a link state message that includes preferred path routing (PPR) description information. The PPR description information includes a path identifier (ID) and a plurality of sequentially ordered topological path description elements (PDEs). A next hop node is determined using the next topological PDE in the plurality of sequentially ordered topological PDEs listed in the PPR description information, and the PPR description information is stored by the node and used to forward data in the Ethernet network.
Description
PREFERRED PATH ROUTING IN ETHERNET NETWORKS
Inventor:
Stewart Bryant
Uma Chunduri
Toerless Eckert
Richard Li
Hesham EIBakoury
CLAIM FOR PRIORITY
[0001] This application claims the benefit of priority to U.S. Provisional App. 62/820,404, filed March 19, 2019, the entire contents of which are hereby incorporated by reference.
FIELD
[0002] The present disclosure relates to the field of routing in a network, and in particular, to construction of an end-to-end path between a source and a destination based on preferred path routing (PPR) information in an Ethernet network.
BACKGROUND
[0003] Packet-switched networks are being deployed by telecommunications providers to service the growing demand for data services in the corporate and consumer markets. The architecture of packet-switched networks such as Ethernet-based networks is easy to deploy in smaller networks, but it is not easily scalable to larger metropolitan area networks (MANs) or wide area networks (WANs), nor does it provide the standards of service associated with service providers. Therefore, Ethernet networking has traditionally been limited to local area network (LAN) deployments.
[0004] Use of Ethernet switches in carriers' networks has the advantages of interoperability (mappings between Ethernet and other frame/packet/cell data structures such as IP and ATM are well known) and economy (Ethernet switches are relatively inexpensive compared to IP routers, for example). However, the behavior of conventional switched Ethernet networks is incompatible with carriers' requirements for providing guaranteed services to customers and providing the ability to scale the network to a growing customer base. Carriers need networks to be meshed for load balancing and resiliency; i.e., there must be multiple paths across the network. In addition, any network must provide the ability to perform traffic engineering, i.e., the ability of the network operator to control the provisioning of explicitly routed variable bandwidth connections (or tunnels) through which traffic may be directed, and must provide the ability to easily add network capacity as required.
BRIEF SUMMARY
[0005] According to one aspect of the present disclosure, there is provided a computer-implemented method of creating a data path, comprising receiving, by a node in an Ethernet network, a message that includes preferred path routing (PPR) description information, the PPR description information including a path identifier (PPR-ID) and a plurality of sequentially ordered topological path description elements (PDEs); determining, by the node in the Ethernet network, a next hop node using the next topological PDE in the plurality of sequentially ordered topological PDEs listed in the PPR description information; storing, by the node in the Ethernet network, the PPR description information; and forwarding data in the Ethernet network using the stored PPR description information.
[0006] Optionally, in any of the preceding aspects, the method further comprising constructing, by the node in the Ethernet network, a forwarding table entry including the PPR-ID.
[0007] Optionally, in any of the preceding aspects, the method further comprising flooding, by the node in the Ethernet network, at least the PPR description information and the PPR-ID.
[0008] Optionally, in any of the preceding aspects, wherein the message is a link state message advertised using a link state protocol.
[0009] Optionally, in any of the preceding aspects, further comprising receiving, by the node in the Ethernet network, another message including second PPR description information; determining, by the node in the Ethernet network, that the node is identified in one of the plurality of PDEs; and updating, by the node in the Ethernet network, the stored PPR description information with the second PPR description information for a destination address corresponding to a destination node.
[0010] Optionally, in any of the preceding aspects, further comprising extracting, by the node in the Ethernet network, the PPR-ID from the message.
[0011] Optionally, in any of the preceding aspects, wherein the path identified by the PPR-ID comprises a set of topological PDEs, each of which represents a segment of the data path from a source node to a destination node in the network.
[0012] Optionally, in any of the preceding aspects, wherein the PPR description information represents the data path from a source node to a destination in the network.
[0013] Optionally, in any of the preceding aspects, wherein each of the plurality of PDEs represents at least one topological element and at least one non-topological element on the PPR, wherein the topological element comprises at least one of a network element or a link, and wherein the non-topological element comprises at least one of a service, function, or context.
[0014] Optionally, in any of the preceding aspects, wherein the PPR-ID is a destination address in a Shortest Path Bridging (SPB)-MAC network.
[0015] Optionally, in any of the preceding aspects, wherein the PPR-ID is a destination address and a VLAN ID (VID) in a Shortest Path Bridging (SPB)-VID network.
[0016] Optionally, in any of the preceding aspects, wherein the PPR-ID is a Nickname in a Transparent Interconnection of Lots of Links (TRILL) network.
[0017] Optionally, in any of the preceding aspects, wherein the PPR-ID representing a graph will have one or more source nodes to a single destination node.
[0018] Optionally, in any of the preceding aspects, wherein the PPR-ID representing a graph will have one or more source nodes to a plurality of destination nodes.
[0019] According to yet another aspect of the disclosure, there is provided a device for creating a data path, comprising a non-transitory memory storage comprising instructions; and one or more processors in communication with the memory, wherein the one or more processors execute the instructions to receive, by the device in an Ethernet network, a message that includes preferred path routing (PPR) description information, the PPR description information including a path identifier (PPR-ID) and a plurality of sequentially ordered topological path description elements (PDEs); determine, by the device in the Ethernet network, a next hop node using the next topological PDE in the plurality of sequentially ordered topological PDEs listed in the PPR description information; store, in the device, the PPR description information; and forward data in the Ethernet network using the stored PPR description information.
[0020] According to still one other aspect of the disclosure, there is provided a non-transitory computer-readable medium storing computer instructions for creating a data path that, when executed by one or more processors, cause the one or more processors to perform the steps of receiving, by a node in an Ethernet network, a message that includes preferred path routing (PPR) description information, the PPR description information including a path identifier (PPR-ID) and a plurality of sequentially ordered topological path description elements (PDEs); determining, by the node in the Ethernet network, a next hop node using the next topological PDE in the plurality of sequentially ordered topological PDEs listed in the PPR description information; storing, by the node in the Ethernet network, the PPR description information; and forwarding data in the Ethernet network using the stored PPR description information.
[0021] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the Background.
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying figures, in which like references indicate like elements.
[0023] FIG. 1A illustrates a network configured to implement conventional preferred path routing.
[0024] FIG. 1B illustrates PPR path description information with services.
[0025] FIG. 2 illustrates a network showing shortest path routing and preferred path routing.
[0026] FIGS. 3A and 3B illustrate example advertisements and fields within an IS-IS LSP.
[0027] FIG. 4A illustrates an example Ethernet network configuration using a link state protocol.
[0028] FIG. 4B illustrates an example of a PPR-ID for a link state protocol network.
[0029] FIG. 5A illustrates an example of a link state protocol controlled Ethernet network.
[0030] FIG. 5B illustrates an example of a PPR-ID for a link state protocol network.
[0031] FIG. 5C illustrates an example of a PPR-ID for a link state protocol network.
[0032] FIGS. 6A and 6B illustrate example flow diagrams for creating a data path in a network.
[0033] FIG. 7 illustrates an embodiment of a node.
[0034] FIG. 8 illustrates a schematic diagram of a general-purpose network component or computer system.
DETAILED DESCRIPTION
[0035] The present disclosure will now be described with reference to the figures, and generally relates to construction of an end-to-end path between a source node and a destination node based on preferred path routing (PPR) information in an Ethernet network.
[0036] PPR is applied to the Ethernet network to introduce traffic path management and traffic engineering, and to support network slicing. In one embodiment, this is accomplished by introducing non-shortest-path traffic steering into the Ethernet network in such a way that an operator can dynamically introduce new paths in response to customer and application needs. The paths themselves may be installed using a fully managed path approach via a network management system (NMS), or through the use of a link state routing protocol. Advertisements flooded in the network using the link state routing protocol may be extended to handle different data plane types. This may be accomplished by extending the PPR to add a Sub-type-length-value (Sub-TLV) that maps the PPR path description information to an Ethernet address in the different network types. In one embodiment, the traffic engineering and path steering functionality is used in an Ethernet network (e.g., a Layer 2 environment or a media access control (MAC) level network).
[0037] It is understood that the present embodiments of the disclosure may be implemented in many different forms and that claim scope should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the inventive embodiment concepts to those skilled in the art. Indeed, the disclosure is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the disclosure as defined by the appended claims. Furthermore, in the following detailed description of the present embodiments of the disclosure, numerous specific details are set forth in order to provide a thorough understanding. However, it will be clear to those of ordinary skill in the art that the present embodiments of the disclosure may be practiced without such specific details.
[0038] In a network comprising a single autonomous system (AS), each node needs to be aware of the topological relationships (i.e., adjacencies) of all other nodes, such that all nodes may build a topological map (topology) of the AS. Nodes may learn about one another's adjacencies by distributing (i.e., flooding) link-state information throughout the network according to one or more Interior Gateway Protocols (IGPs) including, but not limited to, open shortest path first (OSPF) or intermediate system (IS) to IS (IS-IS).
[0039] IS-IS is a link-state routing protocol, which means that the routers exchange topology information with their nearest neighbors. This topology information is flooded throughout the area such that every router within the AS has a complete understanding of the topology of the AS. Once the topology is understood, end-to-end paths may be calculated in the AS, for example, using Dijkstra's algorithm or a variation thereof. Accordingly, a next hop address to which data is forwarded is determined by choosing the "best" end-to-end path to the eventual destination.
[0040] Each IS-IS router distributes information about its local state (e.g., usable interfaces and reachable neighbors, and the cost of using each interface) to other routers using a Link State PDU (LSP) message. Each router uses the received messages to build up an identical database that describes the topology of the AS. From this database, each router calculates its own routing table using a Shortest Path First (SPF) or Dijkstra algorithm, as noted above. This routing table contains all the destinations the routing protocol learns, associated with a next hop node. The protocol recalculates routes when the network topology changes, using the SPF or Dijkstra algorithm, and minimizes the routing protocol traffic that it generates.
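For illustration only, the per-router SPF computation described above can be sketched as follows. This is a minimal Dijkstra next-hop calculation over a hypothetical four-router topology; the router names and link costs are illustrative assumptions, not part of the disclosure.

```python
import heapq

def spf_next_hops(topology, source):
    """Dijkstra's SPF: return {destination: next_hop} from a link state database."""
    dist = {source: 0}
    next_hop = {}
    # Heap entries carry the first hop taken when leaving the source.
    heap = [(0, source, None)]
    while heap:
        d, node, first = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale entry
        for neighbor, cost in topology[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                # The first hop is the neighbor itself when leaving the source.
                next_hop[neighbor] = neighbor if node == source else first
                heapq.heappush(heap, (nd, neighbor, next_hop[neighbor]))
    return next_hop

# Hypothetical four-router topology with symmetric link costs.
topo = {
    "A": {"B": 1, "C": 10},
    "B": {"A": 1, "C": 1},
    "C": {"A": 10, "B": 1, "D": 1},
    "D": {"C": 1},
}
routes = spf_next_hops(topo, "A")
print(routes)  # {'B': 'B', 'C': 'B', 'D': 'B'}
```

Note that even though A has a direct link to C, the computed next hop toward C (and D) is B, because the two-hop path through B has a lower total cost than the direct cost-10 link.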
[0041] A primary advantage of a link state routing protocol is that the complete knowledge of topology allows routers to calculate routes that satisfy particular criteria. This can be useful for traffic engineering purposes, where routes can be constrained to meet particular quality of service (QoS) requirements.
[0042] FIG. 1A illustrates a network configured to implement conventional preferred path routing. The network 100 includes a central entity 103 (also referred to herein as a "controller") and two network elements (NEs) 150 and 154 (also referred to herein as "nodes"), which are interconnected by links 160. The central entity 103 may be a Path Computation Element (PCE), which is described in Internet Engineering Task Force (IETF) Request for Comments (RFC) 8281, entitled "Path Computation Element Communication Protocol (PCEP) Extensions for PCE-Initiated LSP Setup in a Stateful PCE Model," by E. Crabbe, dated December 2017, which is incorporated by reference in its entirety.
[0043] Each of NEs 150 and 154 may be a physical device, such as a router, a bridge, a virtual machine, a network switch, or a logical device configured to perform switching and routing using the preferred path routing mechanisms disclosed herein. In an embodiment, NEs 150 and 154 may be headend nodes positioned at an edge of the network 100. For example, NE 150 may be an ingress node at which traffic (e.g., control packets and data packets) is received, and NE 154 may be an egress node from which traffic is transmitted.
[0044] The links 160 may be wired or wireless links or interfaces interconnecting each of the NEs 150 and 154 together and interconnecting each of the NEs 150 and 154 to the central entity 103. While NEs 150 and 154 are shown in FIG. 1A as headend nodes, it should be appreciated that either of NEs 150 and 154 may otherwise be an intermediate node or any other type of NE. Although only two NEs 150 and 154 are shown in FIG. 1A, it should be appreciated that the network 100 shown in FIG. 1A may include any number of NEs. In an embodiment, the central entity 103 and NEs 150 and 154 are configured to implement various packet forwarding protocols, such as, but not limited to, MPLS, IPv4, IPv6, and Big Packet Protocol.
[0045] In the network 100, the NEs 150 and 154 may communicate with the central entity 103 in both directions. That is, the central entity 103 may send southbound communications from the central entity 103 to the NEs 150 and 154 using various protocols, such as OPENFLOW, Path Computation Element Protocol (PCEP), or NetConf/Restconf in conjunction with a YANG data model. The YANG data model is described by the LSR Working Group draft document, entitled "YANG Data Model for Preferred Path Routing," by Y. Qu, dated June 27, 2018, which is incorporated by reference herein in its entirety. The NEs 150 and 154 may also send northbound communications from the NEs 150 and 154 to the central entity 103 using various protocols, such as Border Gateway Protocol (BGP) - Link State (LS) or IGP static adjacency. Additionally, it is appreciated that other well-known protocols may be used in a Layer 2 domain for similar functionality.
[0046] In an embodiment, the central entity 103 generates paths between a source and a destination of the network using a network topology of the network 100 stored at the central entity 103. For example, the central entity 103 may determine the network topology using advertisements sent by each of the NEs 150 and 154 in the network 100, where the advertisements may include prefixes, traffic engineering (TE) information, IDs of adjacent NEs, links, interfaces, ports, and routes.
[0047] In one embodiment, the central entity 103 determines a shortest path between a source and destination and one or more PPRs between the source and destination. A shortest path refers to a path between the source and the destination that is determined based on a metric, such as, for example, a cost or weight associated with each link on the path, a number of NEs on the path, a number of links on the path, etc. In an embodiment, a shortest path may be computed for a destination using Dijkstra's Shortest Path First (SPF) algorithm. In another embodiment, using PPR, a non-shortest path may be computed. For example, a custom path created based on one or more application or service requirements may determine a non-shortest path. It is also appreciated that calculation of the shortest path may also be applied in a Layer 2 ("L2") routing environment, such as 802.1aq, as further described below.
[0048] In an embodiment, the PPR may be a path that deviates from the shortest path computed for a particular source and destination. The PPRs may be determined based on an application or server request for a path between a source and destination that satisfies one or more network characteristics (such as TE characteristics obtained
by the central entity through BGP-LS or PCEP) or service requirements. The PPRs and the shortest paths may each comprise a sequential ordering of one or more NEs 150 and 154 on the PPR and/or one or more links 160 on the PPR, which may be identified by labels, addresses, or IDs. An example of the shortest path and the PPR path will be further described below with reference to FIG. 2.
[0049] In one embodiment, packets are transmitted along a PPR path instead of a shortest path. In this case, each network element in the network 100 may be configured to store PPR information 130 received from the central entity 103. In one embodiment, the PPR information includes a PPR-ID, which identifies the PPR, and one or more path description elements (PDEs), where the PDEs describe each of the network elements on the PPR in sequential order. The PPR information may be sent by each network element using a link state message (e.g., LSP) to the other network elements in the network 100, as described below. In an embodiment, the message may include PPR information 170 (i.e., the PPR path description), such as a PPR-ID 171 and one or more PDEs 173 (including flags 175, which include a flood bit, a down bit, an attach bit and an ultimate fragment bit, as described in the link state routing (LSR) Working Group draft document entitled "Preferred Path Routing (PPR) in IS-IS," dated January 9, 2020, by U. Chunduri, et al. (hereinafter, "Chunduri"), which is incorporated by reference herein in its entirety), each describing an element on the PPR path, an example of which is shown in FIG. 1B and described below. In the example of FIG. 1A, each of the NEs 150 and 154 in the network 100 that receive the message first determines whether the NE 150 or 154 is identified in the PDEs 173. If so, then the NE 150 or 154 updates a locally stored forwarding database to indicate that data packets including this particular PPR-ID should be routed along the path identified by the PPR information instead of the predetermined shortest path, calculated using SPF.
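For illustration, the forwarding-database update described in this paragraph may be sketched as follows. The data structures are hypothetical simplifications (PDEs are reduced to node identifiers), and the example values reuse the node names of FIG. 2 for concreteness.

```python
def install_ppr_entry(node_id, ppr_id, pdes, spf_next_hop, fib):
    """If this node appears in the ordered PDE list, program a FIB entry that
    steers packets carrying ppr_id toward the *next* PDE on the path, using
    the already-computed shortest-path next hop toward that PDE."""
    if node_id not in pdes:
        return False  # node is not on the preferred path; nothing to install
    i = pdes.index(node_id)
    if i == len(pdes) - 1:
        fib[ppr_id] = node_id  # egress node: the path terminates here
    else:
        fib[ppr_id] = spf_next_hop[pdes[i + 1]]
    return True

# Hypothetical example from R15's point of view, with a precomputed
# SPF next-hop table: R15 reaches the next PDE (R7) via R16.
fib = {}
installed = install_ppr_entry(
    node_id="R15",
    ppr_id=100,
    pdes=["Rs", "R15", "R7", "R19", "R20", "R18", "R13", "R14", "Rd"],
    spf_next_hop={"R7": "R16"},
    fib=fib,
)
print(installed, fib)  # True {100: 'R16'}
```

A node that does not find itself in the PDE list leaves its forwarding database unchanged, which matches the behavior described for off-path network elements.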
[0050] When the NE 150 or 154 receives a data packet, the NE 150 or 154 inspects the data packet to determine whether a PPR-ID is included in the data packet. In an embodiment, the PPR-ID may be included in a header of the data packet. If a PPR-ID is included in the data packet, the NE 150 or 154 performs a lookup on the locally stored forwarding database to determine the next PDE associated with the PPR-ID identified in the data packet. The PDE in the locally stored forwarding database indicates a next hop (another network element, link, or segment) by which to forward
the data packet. The NE 150 or 154 forwards the data packet to the next hop based on the PDE indicated in the locally stored forwarding database. In this way, the NEs 150 or 154 in the network are configured to transmit data packets via the PPR instead of the shortest path.
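The forwarding decision described above may be sketched as follows. The packet and forwarding-database representations are hypothetical; a real implementation would parse the PPR-ID from the packet header rather than from a dictionary.

```python
def forward(packet, fib, default_next_hop):
    """Forward based on the PPR-ID carried in the packet, if a matching
    forwarding-database entry exists; otherwise fall back to the
    precomputed shortest-path next hop."""
    ppr_id = packet.get("ppr_id")
    if ppr_id is not None and ppr_id in fib:
        return fib[ppr_id]       # steer along the preferred path
    return default_next_hop      # no PPR-ID match: use the shortest path

# Hypothetical FIB programmed with PPR-ID 100 -> next hop R16.
fib = {100: "R16"}
print(forward({"ppr_id": 100, "dst": "Rd"}, fib, "R1"))  # R16
print(forward({"dst": "Rd"}, fib, "R1"))                 # R1
```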
[0051] While FIG. 1A shows network 100 comprising a central entity 103 configured to determine and send PPR information 170 to the NE 150 or 154, the NE 150 or 154 may receive the PPR information 170 through other sources as well. In an embodiment, an operator can store the traffic engineered source routed path information on one of the NEs 150 and 154 in the network 100. In this case, the NE 150 or 154 can still be configured to send messages including the PPR information to the other NEs 150 and 154 in the network 100.
[0052] FIG. 1B illustrates PPR path description information with services. The data plane identifier, PPR-ID 171, describes a path through the network 100. The data plane type and corresponding PPR-ID 171 can be specified in a link state packet (or link state message) advertised in the network that includes PPR path description information 170. The PPR-ID type allows data plane extensibility for PPR, and is currently defined for IPv4, IPv6, SR-MPLS and SRv6 data planes. This disclosure further extends the data plane types to include Transparent Interconnection of Lots of Links (TRILL), Shortest Path Bridging (SPB)-MAC (SPB-M) and SPB-VID (SPB-V), as described in more detail below.
[0053] The path identified by the PPR-ID 171 is described as a set of Path Description Elements (PDEs) 173, each of which represents a segment of the path that is forwarded to the next segment/hop or label of the path description. Path attributes 1...n may be applied to ensure particular service levels, i.e., QoS, are delivered across a path. Elements in the path description (i.e., the PDEs) are similar to SR SIDs, described above, and can represent topological PDEs, such as links, nodes, backup nodes, etc. The PDEs 173 may also represent non-topological PDEs, such as a service (e.g., QoS), function (e.g., functional behavior of a node) or context (e.g., addresses, packets, fingerprints, etc.) on a particular node. For example, when a data packet with a PPR-ID 171 is delivered to node-1, the packet is delivered with context-1. Similarly, on node x, service-x is applied and function-x is executed. As appreciated, these services and functions are pre-provisioned on the particular nodes in the network 100, can be advertised in IGPs, and are known to the central entity 103 along with the link state database of the IGP that is used in the underlying network. The PPR path can be one of two types: a strict PPR or a loose PPR, described below.
[0054] FIG. 2 illustrates a network showing shortest path routing and preferred path routing. Traditional routing in a network 200 is based on shortest path computations, using for example IGPs, for all prefixes in the network 200. An example of a shortest path in the network 200 is shown from network element Rs to network element Rd using dashed lines, the path of which is calculated as discussed above (for example, using Dijkstra's algorithm).
[0055] In PPR, route computation is based on a specific path described along with the prefix as opposed to the shortest path towards the destination node (or egress node). Instead of using the next hop of the shortest path towards the destination, the next hop towards the next node in the path description (FIG. 1B) is used. This allows for explicit path and per-hop processing, and optionally includes QoS or resources to be reserved along the path (using the aforementioned PDEs). The PPR path may be advertised in IGP along with a data plane identifier (PPR-ID). Using this technique, any packet identified with the PPR-ID may use the PPR path instead of the IGP computed shortest path to the destination indicated by the PPR-ID. That is, packets destined to the PPR-ID use the PPR path instead of the IGP computed shortest path. In general, this is accomplished by IGP network elements (nodes) processing the PPR path. If an IGP node finds itself in the PPR path, it sets the next hop towards the PPR-ID according to the PPR path.
[0056] With reference to the depicted example, network 200 is similar to network 100 (FIG. 1A). Network 200 includes a central entity 103, network element Rs, network element Rd, and network elements (NEs) R1 - R20. Network element Rs is an ingress or head-end node, while network element Rd is an egress or another head-end node. Each of the bi-directional links between network elements is associated with a metric. For example, the bi-directional link metric (e.g., R15 - R16) may be a cost associated with a timing to transmit a packet across a link, a distance that the packet is transmitted across, a physical cost of transmitting a packet across a link, a bandwidth proportion used by transmitting a packet across a link, a number of intermediary nodes present on a link between two end nodes, etc. In the example, and for purposes of discussion, the bi-directional link metric for all the links connecting any two network elements is a value 1, except for those links identified with a value 10. These metrics may then be used to compute a path in the network.
[0057] In one embodiment, Rs may be configured to receive a traffic engineered route or explicit source routed path information from a central entity 103 (PCE or Controller). The received path information from the central entity 103 includes PPR information, which is identified using a PPR-ID 171. The PPR path description information 170 is encoded as an ordered list of PPR-PDEs PDE-1 to PDE-n from a source node (e.g., Rs) to a destination node (e.g., Rd) in the network 200. The PPR-PDE information represents both topological and non-topological segments and specifies the actual path towards Rd. For example, assume the path Rs-R15-R7-R19-R20-R18-R13-R14-Rd can be attached with a PPR-ID (e.g., PPR-ID 100). Once the path and PPR-ID 100 are signaled to the network elements Rs, Rd and R1 - R20 of the network 200 in an underlying IGP as a PPR, only the network elements that find themselves in the path description have to act on the path. For example, after completing its shortest path computation, R15 finds that its node information is encoded as a PPR-PDE in the path. As a result, it adds an entry to its Forwarding Information Base (FIB) with PPR-ID 100 as the incoming label and sets the next hop node as the shortest path next hop towards the next PPR-PDE (R7), which based on the path information is the link towards R16. This process continues on every node represented in the PPR path description until reaching the egress node Rd; the resulting path is highlighted as the darkened PPR path.
[0058] There are two types of paths in PPR: loose and strict. In the case of a strict path, every network element along the path is defined and aware of the PPR-ID. This means that the PPR-ID itself is sufficient for forwarding decisions and is the only label that needs to be carried in the packet. In the case of a loose path, some network elements along the path are specified and aware of the PPR-ID, and there are intermediate path segments on which network elements are not aware of the PPR-ID. Forwarding decisions are then based on the next network element on the path that can be reached. In this case, the Segment ID defining the next network element on the path is added to the packet (in addition to the PPR-ID), which is popped as the next network element on the path is reached. In the depicted example, a loose path is described since not every node in the path from Rs to Rd is specified. For example, for the path segment from R15-R7, there is another network element R16 (in addition
to R6) over which R7 could be reached. The path type (loose or strict) is explicitly indicated in the PPR-ID description. Network element R15 acts on the path type (set by a flag) and, in the case of a loose path, programs the local hardware with two labels/SIDs using PPR-ID 100 as a bottom label and network element SID of R7 as a top label. Intermediate nodes like R16 do not need to be aware of the PPR or that data packets are being transported along a PPR path. Rather, they simply forward the packet based on the top label, in this case to R7. However, if the path described were a strict path, the actual data packet would require only a single label, i.e. PPR-ID 100.
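The strict/loose label-stack behavior described above can be sketched as follows. The SID value is hypothetical, and the label encodings are simplified to Python values rather than MPLS label fields.

```python
def build_label_stack(ppr_id, strict, next_sid=None):
    """Return the labels to impose at the path head, outermost label first.
    A strict path needs only the PPR-ID; a loose path also pushes the SID
    of the next PPR-aware node as the top label, which is popped when that
    node is reached."""
    if strict:
        return [ppr_id]
    if next_sid is None:
        raise ValueError("a loose path requires the next node's SID")
    return [next_sid, ppr_id]  # top label first, PPR-ID as the bottom label

# Hypothetical SID: for the loose path at R15, push R7's SID over PPR-ID 100.
print(build_label_stack(100, strict=False, next_sid="SID-R7"))  # ['SID-R7', 100]
print(build_label_stack(100, strict=True))                      # [100]
```

An intermediate node such as R16 forwards on the top label only, consistent with the statement that it need not be aware of the PPR.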
[0059] FIGS. 3A and 3B illustrate example advertisements and fields within an IS-IS LSP. The advertisement 300 may be included in an existing advertisement message of IS-IS or may be a new message created for IS-IS that may be advertised using IS-IS LSPs. In one embodiment, the IS-IS LSP is advertised in Layer 2 using a MAC level network. It is appreciated that while the disclosed embodiment specifically refers to IS-IS and LSPs, any link state protocol may be applied to flood the network using link state packets.
[0060] In one embodiment, the advertisement 300 has four logical sections (or Sub-TLVs): encoding of the PPR-Prefix (IS-IS prefix), encoding of the PPR-ID, encoding of the path description with an ordered PDE Sub-TLV, and a set of optional PPR attributes that can be used to describe one or more parameters of the path. In one embodiment, multiple instances of the TLV may be advertised in IS-IS LSPs with different PPR-ID types (i.e., data plane types) and with corresponding PDE Sub-TLVs.
[0061] As shown in FIG. 3A, the format of the advertisement 300 (PPR-TLV) includes a type field 301, a length field 303, a PPR flags field 305, a fragment ID field 307, an MT-ID field 309, an algorithm field 311, and the four Sub-TLVs: a PPR-Prefix Sub-TLV 313, a PPR-ID Sub-TLV 315, a PPR-PDE Sub-TLV 317, and a PPR-Attribute Sub-TLV 319. The type field 301 carries a value assigned by the Internet Assigned Numbers Authority (IANA), and the length field 303 includes the total length of the value field in bytes. The PPR flags field 305 includes flag bits as defined in Chunduri. The fragment ID field 307 is an 8-bit identifier value (0-255) of the TLV fragment, the MT-ID field 309 is a multi-topology identifier that is defined in Network Working Group, RFC 5120, entitled "M-ISIS: Multi Topology (MT) Routing in Intermediate System to Intermediate Systems (IS-ISs)" (which is incorporated by reference herein in its entirety), and the algorithm field 311 is a 1-octet value representing the route computation algorithm (i.e., the computation towards the PPR-ID occurs per MT-ID/algorithm pair).
[0062] In the disclosed embodiment, the PPR-Prefix Sub-TLV 313 is a variable size Sub-TLV representing the destination of the path being described. The PPR-ID Sub-TLV 315 is a variable size Sub-TLV defining the PPR-ID 171 of the PPR path, further described below with reference to FIG. 3B. The PPR-PDE Sub-TLV 317 includes a variable number of ordered PDE Sub-TLVs 173 representing the PPR path, and the PPR-Attribute Sub-TLV 319 represents a variable number of PPR attribute Sub-TLVs that represent the path attributes to regulate traffic across network elements. The traffic accounting parameters are further described in the draft document entitled "Traffic Accounting for MPLS Segment Routing Paths," by S. Hegde, dated October 30, 2017, which is incorporated by reference herein in its entirety. While the PPR Sub-TLVs 313 - 319 are limited in the described embodiment to include these fields, it should be appreciated that the PPR Sub-TLVs 313 - 319 may include additional fields as necessary to include information regarding the PPR within the network.
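For illustration, the fixed leading fields of the PPR-TLV may be packed as follows. The field widths used here are illustrative assumptions made for this sketch; the authoritative wire encoding is specified in Chunduri.

```python
import struct

def pack_ppr_tlv_header(tlv_type, length, flags, fragment_id, mt_id, algorithm):
    """Pack the fixed leading fields of the PPR-TLV: type, length, flags,
    fragment ID, MT-ID and algorithm. Field widths are assumptions for
    illustration only (e.g., a 2-octet flags field)."""
    return struct.pack(
        "!BBHBHB",
        tlv_type,     # TLV type (IANA-assigned)
        length,       # total length of the value field in bytes
        flags,        # PPR flag bits
        fragment_id,  # 8-bit TLV fragment identifier (0-255)
        mt_id,        # multi-topology identifier (RFC 5120)
        algorithm,    # 1-octet route computation algorithm
    )

# Hypothetical values: TLV type 99, length 32, top flag bit set, MT-ID 2.
hdr = pack_ppr_tlv_header(tlv_type=99, length=32, flags=0x8000,
                          fragment_id=0, mt_id=2, algorithm=0)
print(hdr.hex())  # 6320800000000200
```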
[0063] A more detailed description of PPR in IS-IS may be found in Chunduri.
[0064] Turning to FIG. 3B, illustrated is an example embodiment of the PPR-ID Sub-TLV of FIG. 3A. The PPR-ID field 333 is a variable size field carrying the PPR-ID 171 of the PPR path. The PPR-ID field 333 is the data plane identifier in the packet header and may be any data plane defined in the PPR-ID type field 327. The PPR-ID Sub-TLV 315 includes a type field 321, a length field 323 and a PPR-ID flags field 325, each of which is described above. Further fields included in the PPR-ID Sub-TLV 315 are the PPR-ID type field 327, the PPR-ID length field 329, the PPR-ID Mask Length field 331, and the PPR-ID field 333.
[0065] The PPR-ID type field 327 includes a value indicating a data plane type of the PPR-ID 171 being advertised. For example, a type of value 1 may indicate a data plane type of SR-MPLS SID Label, a type of value 2 may indicate a data plane type of native IPv4 addresses or prefixes, a type of value 3 may indicate a data plane type of native IPv6 addresses or prefixes, and a type of value 4 may indicate a data plane type of IPv6 SID in SRv6 with SRH. In other embodiments, additional and new PPR-ID type values are introduced to extend PPR, in which Sub-TLVs are added to map the PPR path description information to an Ethernet address in both an SPB and TRILL network, as described below.
[0066] The PPR-ID length field 329 includes a length of the PPR-ID Sub-TLV field 333 in octets and may depend on the PPR-ID type defined in the PPR-ID type field 327. The PPR-ID mask length field 331 is applicable for certain PPR-ID types 327, namely types 2, 3, and 4, and may include the length of the PPR-ID Prefix 313 in bits. However, the PPR-ID mask length field 331 is not applicable to PPR-ID Sub-TLV 315.
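For illustration, a PPR-ID Sub-TLV carrying one of the types above may be encoded as follows. The field widths and the example Type 2 (native IPv4) PPR-ID are illustrative assumptions; the authoritative encoding is specified in Chunduri.

```python
import struct
import ipaddress

def pack_ppr_id_subtlv(subtlv_type, flags, ppr_id_type, ppr_id_bytes, mask_len=0):
    """Encode a PPR-ID Sub-TLV: outer type and length, then flags, PPR-ID
    type, PPR-ID length in octets, mask length, and the PPR-ID itself.
    Field widths here are assumptions for illustration only."""
    value = struct.pack(
        "!HBBB",
        flags,               # PPR-ID flag bits
        ppr_id_type,         # data plane type (e.g., 2 = native IPv4)
        len(ppr_id_bytes),   # PPR-ID length in octets
        mask_len,            # mask length (types 2, 3 and 4 only)
    ) + ppr_id_bytes
    return struct.pack("!BB", subtlv_type, len(value)) + value

# Hypothetical Type 2 PPR-ID: the IPv4 address 198.51.100.1 with a /32 mask.
subtlv = pack_ppr_id_subtlv(
    subtlv_type=2, flags=0, ppr_id_type=2,
    ppr_id_bytes=ipaddress.IPv4Address("198.51.100.1").packed,
    mask_len=32,
)
print(subtlv.hex())  # 02090000020420c6336401
```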
[0067] FIG. 4A illustrates an example Ethernet network configuration using a link state protocol. In one embodiment, the link state protocol is a Transparent Interconnection of Lots of Links (TRILL) protocol that allows Ethernet switches (e.g., fabric switches) to function more like routing devices. TRILL allows routing bridges (RBridges) to be coupled in an arbitrary topology, without the risk of looping, by implementing routing functions in switches and including a hop count in a TRILL header. The RBridges implement a link state protocol, such as IS-IS, which allows the RBridge to calculate distribution trees for delivery of frames either to destinations that are unknown or to multicast/broadcast groups.
[0068] In the disclosed embodiment, the network implements a TRILL protocol in which a packet is encapsulated using a TRILL header and forwarded towards its destination (indicated by the egress RBridge Nickname, discussed below) along the shortest path calculated by the link state protocol. In particular, TRILL applies network layer routing protocols to the link layer and, with knowledge of the entire network, uses that information to support Layer 2 multi-pathing. This enables multi-hop Fibre Channel over Ethernet (FCoE), which is a storage protocol that enables FC communications to run directly over Ethernet. FCoE also enables FC traffic across existing high-speed Ethernet infrastructure and converges storage and IP protocols onto a single cable transport and interface. FCoE also reduces latency and improves overall network bandwidth utilization.
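The TRILL encapsulation described above can be illustrated by constructing the basic 6-byte TRILL header defined in RFC 6325. This is a simplified sketch (version 0, no options), and the nickname values are hypothetical.

```python
import struct

def trill_header(egress_nickname, ingress_nickname, hop_count, multicast=False):
    """Build the 6-byte TRILL header (RFC 6325): a 2-bit version, 2 reserved
    bits, the M (multi-destination) bit, a 5-bit options length and a 6-bit
    hop count in the first 16 bits, followed by the 16-bit egress and
    ingress RBridge nicknames. Version 0 and zero options length assumed."""
    first16 = (int(multicast) << 11) | (hop_count & 0x3F)
    return struct.pack("!HHH", first16, egress_nickname, ingress_nickname)

# Hypothetical nicknames: forward toward egress nickname 0x01BD with 5 hops left.
hdr = trill_header(egress_nickname=0x01BD, ingress_nickname=0x0196, hop_count=5)
print(hdr.hex())  # 000501bd0196
```

Because the egress nickname is an explicit field of the header, steering a frame onto a different path reduces to writing a different nickname at the ingress, which is the property the PPR extension below relies on.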
[0069] As illustrated in FIG. 4A, a fabric switch 400 includes switches 412, 414 and 416. Fabric switch 400 also includes switches 406, 408, 422 and 424, each with a number of edge ports which can be coupled to external devices. For example, switches 406 and 408 are coupled with end devices 462 and 464 via Ethernet edge ports. In one embodiment, when the media access control (MAC) addresses of end devices 462 and 464 are determined, the switches 406 and 408 share the MAC addresses with the other switches in fabric switch 400. Similarly, switches 422 and 424 are coupled to devices 402 and 404, also via Ethernet edge ports. Devices 402 and 404
provide service 401a to fabric switch 400. Examples of devices 402 and 404 include, but are not limited to, a firewall, load balancer, intrusion detection/protection device, network analyzer, and network virtualizer. Each of the devices 402 and 404 can be a physical device or a virtual machine running on a physical device. Furthermore, devices 402 and 404 are typically deployed in the data path of upstream or downstream traffic (which can be referred to as "north-south traffic"). For example, a device can be deployed at an aggregation router.
[0070] In some embodiments, the switches in fabric switch 400 are TRILL RBridges and in communication with each other using a TRILL protocol (i.e., routing bridges that use the TRILL protocol). These RBridges have TRILL-based inter-switch ports for connection with other TRILL RBridges in fabric switch 400. When devices 402 and 404 are coupled to fabric switch 400, switches 422 and 424, in conjunction with each other, virtualize devices 402 and 404 as a virtual device 401 coupled to fabric switch 400 via a virtual member switch 440. In some embodiments, virtual switch 440 is a TRILL RBridge and assigned a virtual RBridge identifier 445. RBridges 422 and 424 send notification messages to RBridges 406, 408, 412, 414 and 416. The notification message specifies that virtual RBridge 440, which is associated with virtual RBridge identifier 445, is reachable via RBridges 422 and 424. RBridges 422 and 424 specify in the same message or in a different notification message that virtual device 401 is associated with service 401a and coupled to virtual switch 440.
[0071] One example of routing data packets is explained below. For purposes of discussion, end devices 462 and 464 belong to a subnet that requires service 401a. During operation, end device 462 sends a data frame (not shown) to end device 464. RBridge 406 receives the data frame and detects the requirement of service 401a for the data frame. Based on the notification messages received from RBridges 422 and 424, RBridge 406 becomes aware of virtual device 401 and virtual RBridge 440. RBridge 406 encapsulates the data frame in a TRILL packet with virtual RBridge identifier 445 as the egress RBridge identifier and forwards the TRILL packet toward virtual RBridge 440. RBridge 422 receives the TRILL packet via intermediate RBridge 412 and recognizes the TRILL packet to be destined to virtual RBridge 440. Since virtual RBridge 440 is associated with service 401a, and the TRILL packet includes virtual RBridge identifier 445, RBridge 422 detects that the encapsulated data frame requires service 401a. RBridge 422 extracts the data frame from the TRILL packet and forwards the data frame to locally coupled device 402. Upon receiving back the data frame from device 402, RBridge 422 identifies the destination MAC address of end device 464, encapsulates the data frame in a TRILL packet with an RBridge identifier of RBridge 408 as the egress RBridge identifier, and forwards the TRILL packet toward RBridge 408.
[0072] While the use and development of the TRILL protocol allows Ethernet switches to function more like routing devices, issues remain unsolved in the dynamic insertion of paths into the network, particularly in response to customer and application needs. This disclosure provides a mechanism in which PPR is applied to Ethernet networks to introduce traffic path management and traffic engineering, and to generally support network slicing.
[0073] With reference to FIG. 4B, PPR is extended by adding a Sub-TLV to map the PPR path description information (FIG. 1B) to an Egress RBridge Nickname of a TRILL network, such as the network in FIG. 4A. This enables introduction of non-shortest path traffic steering into an Ethernet network such that an operator can dynamically introduce new paths in response to customer and application needs. In one embodiment, this is accomplished by adding a new PPR-ID type to the PPR-ID Sub-TLV. As described above, and with reference to FIG. 3A, four types of PPR-IDs currently exist: Type 1: SR-MPLS SID/Label, Type 2: Native IPv4 Address/Prefix, Type 3: Native IPv6 Address/Prefix and Type 4: IPv6 SID in SRv6 with SRH. A new type of PPR-ID Sub-TLV is created in which the PPR-ID is an Egress RBridge Nickname of a TRILL network.
[0074] Similar to type 301 in advertisement 300 of FIG. 3A (and type 321 of PPR-ID Sub-TLV 315 in FIG. 3B), the PPR-ID Sub-TLV 450 includes a type field 452, a length field 454 and a PPR-ID flags field 456. Additional fields in the PPR-ID Sub-TLV 450 include the PPR-ID type field 470, PPR-ID length field 472, PPR-ID Mask Length field 474, and PPR-ID field 476. In one embodiment, the PPR-ID length field 472 includes a length of the PPR-ID field 476 in octets and depends on the PPR-ID type defined in the PPR-ID type field 470. The PPR-ID mask length field 474, while applicable for PPR-ID types 2, 3, and 4, is not applicable to PPR-ID Sub-TLV 450.
[0075] Unlike the PPR-ID types defined for advertisement 300, the PPR-ID type field 470 carries a new type (not one of Types 1-4 described above). For example, the PPR-ID type field 470 may include a new classification or value (e.g., a value of 5, or "Type 5") that indicates a data plane type of a TRILL network. In this case, while the PPR-ID field 476 remains the data plane identifier in the packet header, the value in this field is set to the Egress RBridge Nickname of the network. For example, and for purposes of discussion, end device 462 is an ingress node, end device 464 is an egress node, and a Layer 2 frame (PDU) at the ingress node 462 uses TRILL encapsulation and MAC routing using the egress node's 464 nickname (i.e., the PPR-ID 476, which identifies the shortest path or graph). Applying PPR for Layer 2, the control plane pre-programs the egress node's 464 PPR-ID in all the nodes of the network (e.g., the network illustrated in FIG. 4A) described in the PPR description information. When an L2 frame matching a traffic rule (e.g., set up by the operator) is received at the ingress node 462, the L2 frame is encapsulated with the corresponding PPR-ID 476 of the egress node 464, as opposed to the egress node's 464 shortest-path/default nickname. After this is accomplished, the network routes the L2 frames per the PPR description information instead of the shortest path. In one embodiment, extending PPR in this manner eliminates the need to pre-provision policies at the ingress node.
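The Sub-TLV layout described above can be sketched as a simple encoder. The field widths, the byte ordering and the type value 5 are illustrative assumptions for discussion, not a normative wire format:

```python
import struct

# Hypothetical PPR-ID type value for a TRILL data plane ("Type 5" above).
PPR_ID_TYPE_TRILL_NICKNAME = 5

def encode_ppr_id_subtlv_trill(subtlv_type: int, flags: int,
                               egress_nickname: int) -> bytes:
    """Encode a PPR-ID Sub-TLV whose PPR-ID is a 16-bit egress RBridge
    nickname. The PPR-ID mask length is not applicable and is set to 0."""
    ppr_id = struct.pack("!H", egress_nickname)      # TRILL nicknames: 16 bits
    body = struct.pack("!HBBB",
                       flags,                        # PPR-ID flags field
                       PPR_ID_TYPE_TRILL_NICKNAME,   # PPR-ID type field
                       len(ppr_id),                  # PPR-ID length (octets)
                       0)                            # PPR-ID mask length: n/a
    body += ppr_id
    return struct.pack("!BB", subtlv_type, len(body)) + body
```

A receiver parsing such a Sub-TLV would dispatch on the PPR-ID type field to interpret the PPR-ID bytes as a nickname rather than an IP prefix or SID.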
[0076] FIG. 5A illustrates an example of a link state protocol controlled Ethernet network. One such network technology is known as SPB, in which a link state protocol is used for advertising. A set of standards for implementing SPB is specified by the Institute of Electrical and Electronics Engineers (IEEE) and is identified as IEEE 802.1aq. 802.1aq allows for shortest path forwarding in an Ethernet mesh network context utilizing multiple equal-cost paths, which in turn allows SPB to support Layer 2 topologies. This supports two Ethernet encapsulating data paths: 802.1ad (Provider Bridges, or PBs) and 802.1ah (Provider Backbone Bridges, or PBBs).
[0077] SPB supports two modes of operation, SPB-VID (SPB-V) mode and SPB-MAC (SPB-M) mode, where MAC stands for "media access control" and VID stands for "VLAN identifier." In SPB-V, multiple VLANs can be used to distribute load on different shortest path trees; SPB-V is used in PB networks implementing VLAN, PB or PBB encapsulation. In SPB-M, service instances are delineated by I-SIDs (described above), but VLANs can be used to distribute load on different shortest path trees. SPB-M is conventionally used in a PBB network that implements PBB encapsulation.
[0078] SPB uses the Intermediate System to Intermediate System (IS-IS) routing protocol, extensions of which for SPB are documented in RFC 6329, which is incorporated by reference herein in its entirety. IS-IS can be used to synchronize a common repository of information so as to condense SPB control and configuration into a single control protocol, where the provider B-MAC, the Virtual LAN Identifier (VID) for SPB-V, the Backbone VID (B-VID) for SPB-M and the Service Identifier information in the form of an I-SID are all global to the network. Connectivity can be constructed using the IS-IS distributed routing system, where each node independently computes the forwarding paths and populates the local filtering database (FDB) based on the information in the routing system database.
[0079] One example SPB network is illustrated in FIG. 5A. The example is an SPB-M network 500 and includes network elements 501 to 507. Each of the network elements is associated with a B-MAC address. For example, the B-MAC address of each network element is "4455-6677-00xx," with the last nibble of the B-MAC address different for each element, as shown in Table I below. Links 511 are illustrated between the network elements 501 to 507, and interface indexes are shown as a number next to each of the links 511. For example, an interface index of network element 504 is shown as a "2" on the link 511 between network elements 504 and 505. Additionally, user-network interface (UNI) ports "i1," which are the customer-to-SPB attachment points, are shown with the I-SID (Ethernet Services Instance Identifier used for logical grouping of E-LAN/LINE/TREE UNIs).
[0080] Following the example, an ECT-Algorithm (e.g., default ECT algorithm 00-80-C2-01) is assigned to B-VID 100. The ECT-Algorithm picks the equal cost path with the lowest BridgeID. When all links have the same cost, the 1-hop shortest paths are all direct, and the 2-hop shortest paths, which are symmetric, are: {501-502-503, 501-502-505, 501-502-507, 506-502-505, 504-502-507, 504-501-506, 505-502-507, 506-502-503, 504-502-503}. An example of the shortest path 501-502-503 is shown by the dashed lines.
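The tie-break above (selecting among equal-cost paths by lowest BridgeID) can be sketched as follows. Comparing the sorted bridge-ID sequences of candidate paths is a simplification of the full 802.1aq ECT rules, used here only to illustrate the idea:

```python
def ect_tie_break(paths):
    """Among equal-cost candidate paths (tuples of bridge IDs), pick the
    one whose sorted bridge-ID sequence is lexicographically lowest -- a
    simplified form of the default ECT algorithm's low-BridgeID tie-break."""
    return min(paths, key=lambda p: sorted(p))

# Two hypothetical equal-cost 2-hop paths from 501 to 503: the path
# through the lower-numbered bridge 502 wins the tie-break.
assert ect_tie_break([(501, 504, 503), (501, 502, 503)]) == (501, 502, 503)
```

Because every node applies the same deterministic tie-break to the same link state database, all nodes converge on the same tree without signaling.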
[0081] Based on the calculated paths, the forwarding table of network element 501 unicasts toward B-MACs 4455-6677-0007, 4455-6677-0003, and 4455-6677-0005 via interface "2," while its single-hop paths are all direct, as shown in the FDB of Table I below. In the case where network element 501 is the head of the multicast distribution tree (MDT), the network element 501 also originates a multicast to nodes 4455-6677-0003, 4455-6677-0005, and 4455-6677-0007 and is a transmitter of I-SID 1, which network elements 503, 505, and 507 all wish to receive. The network element 501 produces a multicast forwarding entry having a destination address (DA) that contains its SPSourceID (the last 20 bits of the B-MAC, which identifies an SPB-M node for all B-VIDs) and the I-SID = 1. The network element 501 then sends packets matching the forwarding entry to interface "if/2" with B-VID=100.
[0082] The full unicast (U) and multicast (M) table for network element 501 is shown in Table I below, which shows the incoming interface ("IN/IF"), the destination address, the B-VID and the outgoing interfaces ("OUT/IF(s)"). The incoming interface "IN/IF" field is not specified for unicast (U) traffic; for multicast (M) traffic it has to point back to the root of the tree, unless the element is the head of the tree, in which case the convention "if/00" is used (as in this example). Since network element 501 is not transit for any multicast, it only has a single entry for the root of its tree for I-SID=1.
TABLE I
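The multicast DA construction described above, combining the sender's 20-bit SPSourceID with the 24-bit I-SID into a 48-bit address, can be sketched as follows. The leading 4-bit header value (here 0xA) is an assumption for illustration, not taken from this disclosure:

```python
def spbm_multicast_da(sp_source_id: int, i_sid: int) -> bytes:
    """Build a 48-bit SPB-M multicast destination address from a 20-bit
    SPSourceID (last 20 bits of the sender's B-MAC) and a 24-bit I-SID.
    The top 4-bit header nibble (0xA) is an illustrative assumption."""
    assert sp_source_id < (1 << 20) and i_sid < (1 << 24)
    value = (0xA << 44) | (sp_source_id << 24) | i_sid
    return value.to_bytes(6, "big")
```

For network element 501 transmitting on I-SID 1, the resulting DA embeds 501's SPSourceID in the middle 20 bits and the I-SID in the low 24 bits, so transit nodes can identify both the tree root and the service instance from the address alone.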
[0083] Network element 502, at the center of the network 500, has direct 1-hop paths to all other nodes 501 and 503-507. Thus, network element 502's unicast FDB sends packets with the given B-MAC/B-VID=100 to the interface directly facing the addressed network element. This can be seen by looking at the unicast (U) entries shown in Table II below.
TABLE II
[0084] For multicast, network element 502 is a transit node for the four members of I-SID = 1. Thus, four multicast (M) FDB entries are used, depending on which member it is forwarding/replicating on behalf of. For example, network element 502 is on the shortest path between network element 501 and each of network elements 503, 505, and 507. In this case, network element 502 replicates frames from network element 501 for I-SID 1 out on interfaces {if/2, if/3 and if/5} in order to reach nodes 503, 505 and 507. A multicast destination address is created with the SPSourceID of node 501 together with I-SID=1, which is received over "if/1" and will replicate out interfaces {if/2, if/3 and if/5}, as depicted in the first multicast (M) entry in Table II, above. While network element 502 is not on the shortest path between network elements 503 and 505, or between network elements 503 and 507, it still has to forward packets to network element 501 from network element 503 for the I-SID. This results in the second multicast (M) forwarding entry in Table II. Similarly, for packets originating at network element 505 or 507, network element 502 replicates twice, which results in the last two multicast (M) forwarding entries in Table II.
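The multicast FDB behavior described for network element 502 can be sketched as a lookup table keyed by incoming interface, multicast DA and B-VID, mapping to the set of outgoing interfaces to replicate on. Interface names and the DA representation below are illustrative stand-ins:

```python
# Sketch of two of network element 502's multicast FDB entries. The DA is
# represented as a (SPSourceID-owner, I-SID) pair for readability; a real
# FDB would key on the 48-bit multicast address.
fdb = {
    # Frames from 501 on I-SID 1: replicate toward 503, 505 and 507.
    ("if/1", ("SPSourceID:501", 1), 100): ["if/2", "if/3", "if/5"],
    # Frames from 503 on I-SID 1: forward toward 501 only.
    ("if/2", ("SPSourceID:503", 1), 100): ["if/1"],
}

def replicate(in_if, da, b_vid):
    """Return the interfaces a frame is replicated on; [] if no entry."""
    return fdb.get((in_if, da, b_vid), [])
```

Note that the incoming interface is part of the key: the same multicast tree yields different replication sets depending on which member's traffic is being forwarded, which is why 502 holds four separate entries for one I-SID.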
[0085] It should be appreciated that the PPR paths described above are a path structure that is an ordered linear list of PDEs, starting with a sender PDE, followed by zero or more transit PDEs, and finishing with a destination PDE. In this case, a separate PPR-ID is required for every PPR path. To allow scalability (reducing the number of PPR-IDs needed on the destination nodes and the number of forwarding entries needed on the nodes in the paths, and minimizing the amount of PPR information needed in the control plane), an IS-IS PPR-Tree TLV may be encoded. This TLV includes encoding of the PPR-MAC address, encoding of a Preferred Path Graph (PPG)-ID, encoding of a path description with ordered PDE Sub-TLVs (belonging to one or more Branch-IDs) and a set of optional PPR attribute Sub-TLVs, which can be used to describe PPR Graph common parameters. Multiple instances of this TLV may be advertised in IS-IS LSPs with different PPG-ID Types and with corresponding Branch-ID/PDE Sub-TLVs.
[0086] The PPG-ID places policies at ingress nodes in the network that define how to classify incoming traffic to the correct PPG. The PPR-ID remains the forwarding identifier in the PPR graph. However, the PDE Sub-TLV has a flags field that includes flag bits: an "S" (source) bit and a "D" (destination) bit, among other reserved bits. The "D" bit allows for more than one PPR-ID in a PPR Graph. Thus, if the PPR graph uses one "D" bit, the PPR graph is a multi-point to point graph that uses one PPR-ID per path. This allows multiple ingress nodes to encapsulate the incoming traffic with the PPR-ID to perform traffic engineering on a graph and eventually deliver the traffic to the egress node. When more than one "D" bit is set in the PDEs of the PPR description information, there is a corresponding PPR-ID for each PDE with a set flag ("S" or "D") bit. The graph in this case is a multi-point to multi-point graph, which allows multiple ingress nodes to deliver the traffic on the traffic-engineered path to the relevant destination as encoded with the PPR-ID.
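The "D"-bit rule above can be expressed compactly: counting the destination bits across a graph's PDEs determines whether the graph is multi-point to point or multi-point to multi-point. The set-of-letters flag representation below is purely illustrative:

```python
def classify_ppr_graph(pde_flags):
    """Classify a PPR graph from its PDE flag bits, per the rule above:
    at most one "D" (destination) bit => multi-point-to-point, with one
    PPR-ID per path; more than one "D" bit => multi-point-to-multi-point,
    with a PPR-ID for each flagged PDE."""
    d_count = sum(1 for flags in pde_flags if "D" in flags)
    if d_count <= 1:
        return "multi-point-to-point"
    return "multi-point-to-multi-point"
```

For example, a graph whose PDEs carry one source bit and one destination bit is classified as multi-point to point, while two destination bits yield a multi-point to multi-point graph.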
[0087] A detailed discussion of PPR Graphs may be found in Chunduri et al., "Preferred Path Route Graph Structure, draft-ce-lsr-ppr-graph-03," LSR Working Group, Internet Draft, March 8, 2020, the contents of which are incorporated herein in their entirety.
[0088] With reference to FIGS. 5B and 5C, PPR is extended by adding a Sub-TLV to map the PPR path description information (FIG. 1B) to an Ethernet address in an SPB (SPB-M or SPB-V) network. Similar to the embodiment of FIG. 4B, this enables introduction of non-shortest-path traffic steering into an Ethernet network, such that an operator can dynamically introduce new paths in response to customer and application needs. Additionally, by extending PPR in this manner, pre-provisioning policies at the ingress node may be eliminated. In one embodiment, this is accomplished by adding a new PPR-ID type to the PPR-ID Sub-TLV. In the embodiment of FIG. 5B, a new type of PPR-ID Sub-TLV is created in which the PPR-ID is a destination address of the PPR path in an SPB-M network. In the embodiment of FIG. 5C, a new type of PPR-ID Sub-TLV is created in which the PPR-ID is a destination address and the VID of the PPR path in an SPB-V network.
[0089] In the embodiments of both FIGS. 5B and 5C, the PPR-ID Sub-TLVs 510 and 520 each include a type field 512, a length field 514, a PPR-ID flags field 516, a PPR-ID length field 520 and a PPR-ID mask length field 522 (which may be set to zero or not used). These fields are similar to those discussed above with reference to FIG. 4B. Additional fields in the PPR-ID Sub-TLV 510 include the PPR-ID type field 518 and the PPR-ID field 524. The PPR-ID type field 518 carries a new type (not one of Types 1-4 described above). For example, the PPR-ID type field 518 may include a new classification or value (e.g., a value of 6, or "Type 6") that indicates a data plane type of SPB-M. In this case, while the PPR-ID field 524 remains the data plane identifier in the packet header, the value in this field is set to the destination address in the SPB-M network, such as illustrated in FIG. 5A.
[0090] Additional fields in the PPR-ID Sub-TLV 520 include the PPR-ID type field 526 and the PPR-ID field 528. The PPR-ID type field 526 carries a new type (not one of Types 1-4 described above). For example, the PPR-ID type field 526 may include a new classification or value (e.g., a value of 7, or "Type 7") that indicates a data plane type of SPB-V. In this case, while the PPR-ID field 528 remains the data plane identifier in the packet header, the value in this field is set to the destination address plus the VLAN ID (VID) in the SPB-V network, such as illustrated in FIG. 5A.
[0091] For example, in SPB-V, an incoming L2 frame is encapsulated with the PPR-ID 528 of the path, i.e., the corresponding MAC address and VLAN ID of the egress node 464, as opposed to the egress node's original MAC address and VLAN ID (which would take the frame to the egress via the shortest path). In SPB-M, the incoming L2 frame is encapsulated with the PPR-ID 524 of the path, i.e., the corresponding MAC address of the egress node 464, as opposed to the egress node's original MAC address (which would take the frame to the egress via the shortest path). Similar to the example of FIG. 4B, the control plane pre-programs the egress node's 464 PPR-ID in all the nodes of the network (e.g., the network illustrated in FIG. 5A) described in the PPR description information. When an L2 frame matching a traffic rule (e.g., set up by the operator) is received at the ingress node 462, the L2 frame is encapsulated with the corresponding PPR-ID 524 (SPB-M) or 528 (SPB-V) of the egress node 464, as opposed to the egress node's 464 shortest-path/default address. After this is accomplished, the network routes the L2 frames per the PPR description information instead of the shortest path. For example, the path 501-506-502-507-503 is advertised in the control plane (IS-IS) with the corresponding PPR-ID 524 (SPB-M) or 528 (SPB-V), and the PPR-ID is programmed with the next hop towards the egress node 464 via this path, as opposed to the shortest path.
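The ingress behavior described above, matching an operator-configured traffic rule and then encapsulating with the path's PPR-ID instead of the egress node's default address, can be sketched as follows. The rule representation (predicate, PPR-ID pairs) and frame fields are illustrative:

```python
def encapsulate_at_ingress(frame, match_rules, default_address):
    """If the L2 frame matches an operator-configured traffic rule,
    encapsulate it with that rule's PPR-ID so it follows the preferred
    path; otherwise use the egress node's default (shortest-path)
    address. `match_rules` is a list of (predicate, ppr_id) pairs."""
    for predicate, ppr_id in match_rules:
        if predicate(frame):
            return {"outer_id": ppr_id, "payload": frame}
    return {"outer_id": default_address, "payload": frame}

# Hypothetical rule: frames on VLAN 100 are steered onto the PPR path.
rules = [(lambda f: f.get("vlan") == 100, "PPR-ID-524")]
```

Because the steering decision is encoded entirely in the outer destination identifier, transit nodes need no per-flow state: they simply forward on the pre-programmed PPR-ID.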
[0092] FIGS. 6A and 6B illustrate an example flow diagram for creating a data path in a network. In embodiments, the flow diagram may represent computer-implemented methods performed, at least partly, by hardware and/or software components illustrated in the various figures and as described herein. In one embodiment, the disclosed process may be performed by the nodes disclosed in FIGS. 1A, 2, 4A and 5A. In one embodiment, software components executed by one or more processors, such as processor 720 or 804, perform at least a portion of the process.
[0093] The process begins at step 602, at a sending node, which receives the PPR description information from a central entity or operator input, along with the PPR-ID. The local Link State PDU (LSP) message is updated, and the PPR path description information is flooded to the network using a link state protocol, at step 604. At step 606, the PPR path is computed, and the PPR description information is processed to install a database entry at the sending node.
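Steps 602-606 at the sending node can be sketched as follows. The LSP and node representations, and the `flood` callback standing in for the link state flooding machinery, are illustrative assumptions:

```python
def originate_ppr(node, ppr_id, pdes, flood):
    """Sketch of steps 602-606: the PPR description (PPR-ID plus ordered
    PDEs) received from a central entity or operator is placed into the
    local LSP, flooded via the link state protocol, and installed in the
    sending node's local database."""
    lsp = {"originator": node["id"], "ppr_id": ppr_id, "pdes": list(pdes)}
    flood(lsp)                               # step 604: flood updated LSP
    node["database"][ppr_id] = list(pdes)    # step 606: install local entry
    return lsp
```

Since ordinary link state flooding carries the PPR information, every node on the described path learns it without any per-node provisioning by the operator.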
[0094] At a receiving node, a message including the PPR description information is received at step 610. In one embodiment, the PPR description information includes a path identifier (ID) and a plurality of sequentially ordered topological path description elements (PDEs). At step 612, it is determined whether the PPR description information includes the current node. If no, the process ends at step 622. If yes, the process proceeds to step 614 where a next hop node is determined using the next topological PDE in the sequentially ordered topological PDEs listed in the PPR description information. Once the next hop node is determined, a forwarding table entry, including the PPR-ID, is constructed, at step 616. In one embodiment, after the PPR path description information is flooded to the network, nodes in the network can forward frames to other nodes according to this description information. For example, the forwarding table entry may be optionally forwarded to the next hop node in the network per the PPR path description information, at step 618. At step 620, the next hop node may optionally download the forwarding table entry.
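The receiving-node logic of steps 610-620 can be sketched as follows: a node installs a forwarding entry only if it appears in the ordered PDE list, and its next hop is simply the PDE that follows it. Node and entry representations are illustrative:

```python
def process_ppr_description(node_id, ppr_id, pdes):
    """Sketch of steps 610-620: if this node appears in the sequentially
    ordered PDE list, determine the next hop from the next PDE and build
    a forwarding entry keyed by the PPR-ID; otherwise install nothing."""
    if node_id not in pdes:
        return None                                   # step 612/622: not on path
    idx = pdes.index(node_id)
    if idx + 1 >= len(pdes):
        return {"ppr_id": ppr_id, "next_hop": None}   # destination PDE
    return {"ppr_id": ppr_id, "next_hop": pdes[idx + 1]}  # steps 614-616
```

Applied independently at every node that received the flooded description, this yields a consistent hop-by-hop forwarding state for the preferred path.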
[0095] FIG. 7 illustrates an embodiment of a node. The node 700 may be configured to implement and/or support the routing mechanisms described herein. The node 700 may be implemented in a single node, or the functionality of node 700 may be implemented in a plurality of nodes. One skilled in the art will recognize that the term network element (NE) encompasses a broad range of devices, of which node 700 is merely an example. While node 700 is described as a physical device, such as a router or gateway, the node 700 may also be a virtual device implemented as a router or gateway running on a server or on generic routing hardware (whitebox).
[0096] The node 700 may comprise a plurality of input/output ports 710/730 and/or receivers (Rx) 712 and transmitters (Tx) 732 for receiving and transmitting data from other nodes, a processor 720 to process data and determine which node to send the data to, and a memory. The node 700 may also generate and distribute LSAs to describe and flood the various topologies and/or areas of a network. Although illustrated as a single processor, the processor 720 is not so limited and may comprise multiple processors. The processor 720 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs. Moreover, the processor 720 may be implemented using hardware, software, or both. The processor 720 includes a network configuration module 722, which may perform processing functions of the central entity 103 or the network elements. The network configuration module 722 may also be configured to perform the steps of the methods discussed herein. As such, the inclusion of the network configuration module 722 and associated methods and systems provides improvements to the functionality of the node 700. Further, the network configuration module 722 effects a transformation of a particular article (e.g., the network) to a different state. In an alternative embodiment, the network configuration module 722 may be implemented as instructions stored in the memory 760 and executed by the processor 720.
[0097] The memory 760 may comprise a cache for temporarily storing content, e.g., a random-access memory (RAM). Additionally, the memory 760 may comprise long-term storage for storing content relatively longer, e.g., a read-only memory (ROM). For instance, the cache and the long-term storage may include dynamic RAMs (DRAMs), solid-state drives (SSDs), hard disks, or combinations thereof. The memory 760 may be configured to store the PPR information 763, which includes PPR-IDs 765
and the PPR-PDEs 767. In addition, the memory 760 is configured to store the forwarding database 743. In an embodiment, the forwarding database 743 stores entries describing forwarding rules for how a particular network element should forward a data packet that includes a PPR-ID 765 and/or a destination address.
[0098] The schemes described above may be implemented on any general- purpose network component, such as a computer or network component with sufficient processing power, memory resources, and network throughput capability to handle the necessary workload placed upon it.
[0099] FIG. 8 illustrates a schematic diagram of a general-purpose network component or computer system. The general-purpose network component or computer system 800 includes a processor 802 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 804, and memory, such as ROM 806 and RAM 808, input/output (I/O) devices 810, and a network 812, such as the Internet or any other well-known type of network, that may include network connectivity devices, such as a network interface. Although illustrated as a single processor, the processor 802 is not so limited and may comprise multiple processors. The processor 802 may be implemented as one or more CPU chips, cores (e.g., a multi-core processor), FPGAs, ASICs, and/or DSPs, and/or may be part of one or more ASICs. The processor 802 may be configured to implement any of the schemes described herein. The processor 802 may be implemented using hardware, software, or both.
[0100] The secondary storage 804 is typically comprised of one or more disk drives or tape drives and is used for non-volatile storage of data and as an overflow data storage device if the RAM 808 is not large enough to hold all working data. The secondary storage 804 may be used to store programs that are loaded into the RAM 808 when such programs are selected for execution. The ROM 806 is used to store instructions and perhaps data that are read during program execution. The ROM 806 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of the secondary storage 804. The RAM 808 is used to store volatile data and perhaps to store instructions. Access to both the ROM 806 and the RAM 808 is typically faster than to the secondary storage 804. At least one of the secondary storage 804 or RAM 808 may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein.
[0101] It is understood that by programming and/or loading executable instructions onto the node 800, at least one of the processor 802, the ROM 806, and the RAM 808 is changed, transforming the node 800 in part into a particular machine or apparatus, e.g., a router, having the novel functionality taught by the present disclosure. It is fundamental to the electrical engineering and software engineering arts that functionality that can be implemented by loading executable software into a computer can be converted to a hardware implementation by well-known design rules. Decisions between implementing a concept in software versus hardware typically hinge on considerations of stability of the design and the number of units to be produced, rather than any issues involved in translating from the software domain to the hardware domain. Generally, a design that is still subject to frequent change may be preferred to be implemented in software, because re-spinning a hardware implementation is more expensive than re-spinning a software design. Generally, a stable design that will be produced in large volume may be preferred to be implemented in hardware, for example in an ASIC, because for large production runs the hardware implementation may be less expensive than the software implementation. Often a design may be developed and tested in a software form and later transformed, by well-known design rules, to an equivalent hardware implementation in an application specific integrated circuit that hardwires the instructions of the software.
In the same manner as a machine controlled by a new ASIC is a particular machine or apparatus, likewise a computer that has been programmed and/or loaded with executable instructions may be viewed as a particular machine or apparatus.
[0102] The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to
enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated.
[0103] For purposes of this document, each process associated with the disclosed technology may be performed continuously and by one or more computing devices. Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.
[0104] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims
1. A computer-implemented method of creating a data path, comprising:
receiving, by a node in an Ethernet network, a message that includes preferred path routing (PPR) description information, the PPR description information including a path identifier (PPR-ID) and a plurality of sequentially ordered topological path description elements (PDEs);
determining, by the node in the Ethernet network, a next hop node using the next topological PDE in the plurality of sequentially ordered topological PDEs listed in the PPR description information;
storing, by the node in the Ethernet network, the PPR description information; and
forwarding data in the Ethernet network using the stored PPR description information.
2. The computer-implemented method of claim 1, further comprising constructing, by the node in the Ethernet network, a forwarding table entry including the PPR-ID.
3. The computer-implemented method of any one of claims 1-2, further comprising flooding, by the node in the Ethernet network, at least the PPR description information and the PPR-ID.
4. The computer-implemented method of any one of claims 1-3, wherein the message is a link state message advertised using a link state protocol.
5. The computer-implemented method according to any one of claims 1-4, further comprising:
receiving, by the node in the Ethernet network, another message including second PPR description information;
determining, by the node in the Ethernet network, that the node is identified in one of the plurality of PDEs; and
updating, by the node in the Ethernet network, the stored PPR description information with the second PPR description information for a destination address corresponding to a destination node.
6. The computer-implemented method according to any one of claims 1-5, further comprising extracting, by the node in the Ethernet network, the PPR-ID from the message in the network.
7. The computer-implemented method according to any one of claims 1-6, wherein the path identified by the PPR-ID comprises a set of topological PDEs, each of which represents a segment of the data path from a source node to a destination node in the network.
8. The computer-implemented method according to any one of claims 1-7, wherein the PPR description information represents the data path from a source node to a destination in the network.
9. The computer-implemented method according to any one of claims 1-8, wherein each of the plurality of PDEs represents at least one topological element and at least one non-topological element on the PPR, wherein the topological element comprises at least one of a network element or a link, and wherein the non-topological element comprises at least one of a service, function, or context.
10. The computer-implemented method according to any one of claims 1-9, wherein the PPR-ID is a destination address in a Shortest Path Bridging (SPB)-MAC network.
11. The computer-implemented method according to any one of claims 1-9, wherein the PPR-ID is a destination address and a VLAN ID (VID) in a Shortest Path Bridging (SPB)-VID network.
12. The computer-implemented method according to any one of claims 1-9, wherein the PPR-ID is a Nickname in a Transparent Interconnection of Lots of Links (TRILL) network.
13. The computer-implemented method according to any one of claims 1-9, wherein the PPR-ID represents a graph having one or more source nodes to a single destination node.
14. The computer-implemented method according to any one of claims 1-9, wherein the PPR-ID represents a graph having one or more source nodes to a plurality of destination nodes.
15. A device in an Ethernet network for creating a data path, comprising:
a non-transitory memory storage comprising instructions; and
one or more processors in communication with the memory, wherein the one or more processors execute the instructions to:
receive a message that includes preferred path routing (PPR) description information, the PPR description information including a path identifier (PPR-ID) and a plurality of sequentially ordered topological path description elements (PDEs);
determine a next hop node using the next topological PDE in the plurality of sequentially ordered topological PDEs listed in the PPR description information;
store the PPR description information; and
forward data in the Ethernet network using the stored PPR description information.
16. The device of claim 15, wherein the one or more processors further execute the instructions to construct a forwarding table entry including the PPR-ID.
17. The device of any one of claims 15-16, wherein the one or more processors further execute the instructions to flood at least the PPR description information and the PPR-ID.
18. The device of any one of claims 15-17, wherein the message is a link state message advertised using a link state protocol.
19. The device according to any one of claims 15-18, wherein the one or more processors further execute the instructions to:
receive another message including second PPR description information;
determine that the node is identified in one of the plurality of PDEs; and
update the stored PPR description information with the second PPR description information for a destination address corresponding to a destination node.
20. The device according to any one of claims 15-19, wherein the one or more processors further execute the instructions to extract the PPR-ID from the link state message in the network.
21. The device according to any one of claims 15-20, wherein the path identified by the PPR-ID comprises a set of topological PDEs, each of which represents a segment of the data path from a source node to a destination node in the network.
22. The device according to any one of claims 15-21, wherein the PPR description information represents the data path from a source node to a destination in the network.
23. The device according to any one of claims 15-22, wherein each of the plurality of PDEs represents at least one topological element and at least one non-topological element on the PPR, wherein the topological element comprises at least one of a network element or a link, and wherein the non-topological element comprises at least one of a service, function, or context.
24. The device according to any one of claims 15-23, wherein the PPR-ID is a destination address in a Shortest Path Bridging (SPB)-MAC network.
25. The device according to any one of claims 15-24, wherein the PPR-ID is a destination address and a VLAN ID (VID) in a Shortest Path Bridging (SPB)-VID network.
26. The device according to any one of claims 15-24, wherein the PPR-ID is a Nickname in a Transparent Interconnection of Lots of Links (TRILL) network.
27. The device according to any one of claims 15-24 wherein the PPR-ID representing a graph will have one or more source nodes to a single destination node.
28. The device according to any one of claims 15-24, wherein the PPR-ID represents a graph having one or more source nodes to a plurality of destination nodes.
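Claims 23 and 37 describe the PPR description information as an ordered set of PDEs, each of which can carry topological elements (a network element or link) and non-topological elements (a service, function, or context). The structure can be sketched minimally as below; all type and field names are hypothetical illustrations, not taken from the specification:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PPRPDE:
    # Topological element: a network element (node) or a link on the path.
    pde_type: str            # "node" or "link" (illustrative values)
    element_id: str          # e.g. a MAC address in an SPB-MAC network
    # Non-topological elements: services, functions, or contexts at this hop.
    services: List[str] = field(default_factory=list)

@dataclass
class PPRDescription:
    ppr_id: str              # path identifier (PPR-ID), e.g. a destination MAC
    pdes: List[PPRPDE] = field(default_factory=list)  # sequentially ordered PDEs

# Toy path S -> A -> D, with a service attached at the destination hop.
path = PPRDescription(
    ppr_id="00:aa:bb:cc:dd:01",
    pdes=[
        PPRPDE("node", "S"),
        PPRPDE("node", "A"),
        PPRPDE("node", "D", services=["firewall"]),
    ],
)
print(len(path.pdes))  # 3
```

The sequential ordering of `pdes` is what lets each on-path node locate its own entry and derive the next hop, as the method claims describe.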
29. A non-transitory computer-readable medium storing computer instructions for creating a data path, that when executed by one or more processors, cause the one or more processors to perform the steps of:
receiving, by a node in an Ethernet network, a message that includes preferred path routing (PPR) description information, the PPR description information including a path identifier (PPR-ID) and a plurality of sequentially ordered topological path description elements (PPR-PDEs);
determining, by the node in the Ethernet network, a next hop node using the next topological PDE in the plurality of sequentially ordered topological PDEs listed in the PPR description information;
storing, by the node in the Ethernet network, the PPR description information; and
forwarding data in the Ethernet network using the stored PPR description information.
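The receive/determine/store/forward steps of claim 29 can be sketched as follows. This is one illustrative reading, under the assumption that each PDE is a node identifier and that a node finds its next hop as the PDE immediately after its own entry in the ordered list; function and variable names are hypothetical:

```python
def process_ppr_message(node_id, ppr_id, ordered_pdes, forwarding_table):
    """Sketch of claim-29 processing at one node.

    ordered_pdes: sequentially ordered node identifiers, source to destination.
    forwarding_table: the node's stored PPR state, keyed by PPR-ID.
    """
    if node_id not in ordered_pdes:
        return None                       # node is not on this preferred path
    idx = ordered_pdes.index(node_id)
    if idx + 1 >= len(ordered_pdes):
        return None                       # node is the path's destination
    next_hop = ordered_pdes[idx + 1]      # next sequentially ordered PDE
    forwarding_table[ppr_id] = next_hop   # store PPR info for later forwarding
    return next_hop

fib = {}
hop = process_ppr_message("A", "PPR-1", ["S", "A", "B", "D"], fib)
print(hop, fib)  # B {'PPR-1': 'B'}
```

Forwarding then reduces to looking up the PPR-ID in the stored table, per the final step of the claim.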
30. The non-transitory computer-readable medium of claim 29, further comprising constructing, by the node in the Ethernet network, a forwarding table entry including the PPR-ID.
31. The non-transitory computer-readable medium of any one of claims 29-30, further comprising flooding, by the node in the Ethernet network, at least the PPR description information and the PPR-ID.
32. The non-transitory computer-readable medium of any one of claims 29-31, wherein the message is a link state message advertised using a link state protocol.
33. The non-transitory computer-readable medium according to any one of claims 29-32, further causing the one or more processors to perform the step of:
receiving, by the node in the Ethernet network, another message including second PPR description information;
determining, by the node in the Ethernet network, that the node is identified in one of the plurality of PDEs; and
updating, by the node in the Ethernet network, the stored PPR description information with the second PPR description information for a destination address corresponding to a destination node.
34. The non-transitory computer-readable medium according to any one of claims 29-33, further causing the one or more processors to perform the step of extracting, by the node in the Ethernet network, the PPR-ID from the link state message in the network.
35. The non-transitory computer-readable medium according to any one of claims 29-34, wherein the path identified by the PPR-ID comprises a set of topological PDEs, each of which represents a segment of the data path from a source node to a destination node in the network.
36. The non-transitory computer-readable medium according to any one of claims 29-35, wherein the PPR description information represents the data path from a source node to a destination node in the network.
37. The non-transitory computer-readable medium according to any one of claims 29-36, wherein each of the plurality of PDEs represents at least one topological element and at least one non-topological element on the PPR, wherein the topological element comprises at least one of a network element or a link, and wherein the non-topological element comprises at least one of a service, function, or context.
38. The non-transitory computer-readable medium according to any one of claims 29-37, wherein the PPR-ID is a destination address in a Shortest Path Bridging (SPB)-MAC network.
39. The non-transitory computer-readable medium according to any one of claims 29-37, wherein the PPR-ID is a destination address and a VLAN ID (VID) in a Shortest Path Bridging (SPB)-VID network.
40. The non-transitory computer-readable medium according to any one of claims 29-37, wherein the PPR-ID is a Nickname in a Transparent Interconnection of Lots of Links (TRILL) network.
41. The non-transitory computer-readable medium according to any one of claims 29-37, wherein the PPR-ID represents a graph having one or more source nodes to a single destination node.
42. The non-transitory computer-readable medium according to any one of claims 29-37, wherein the PPR-ID represents a graph having one or more source nodes to a plurality of destination nodes.
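Claims 27-28 and 41-42 extend the PPR-ID from a single path to a graph. One way to picture this, assuming a graph-type PPR-ID maps to a set of source-rooted paths sharing destinations (the structure and names below are hypothetical, not from the specification):

```python
# Graph-type PPR-ID: several source nodes reaching a shared destination
# (claims 27/41); a plurality of destinations would simply list additional
# paths ending at different nodes (claims 28/42). Purely illustrative.
ppr_graphs = {
    "PPR-G1": {
        "paths": [
            ["S1", "A", "D"],   # source S1 to destination D
            ["S2", "A", "D"],   # source S2 to the same destination D
        ],
    }
}

sources = [p[0] for p in ppr_graphs["PPR-G1"]["paths"]]
destinations = sorted({p[-1] for p in ppr_graphs["PPR-G1"]["paths"]})
print(sources, destinations)  # ['S1', 'S2'] ['D']
```

Under this reading, a single advertised PPR-ID lets every node on any branch of the graph install the same forwarding state toward the shared destination(s).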
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201962820404P | 2019-03-19 | 2019-03-19 | |
| US62/820,404 | 2019-03-19 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2020160564A1 true WO2020160564A1 (en) | 2020-08-06 |
Family
ID=70289466
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2020/023443 Ceased WO2020160564A1 (en) | 2019-03-19 | 2020-03-18 | Preferred path routing in ethernet networks |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2020160564A1 (en) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2022042610A1 (en) * | 2020-08-25 | 2022-03-03 | 中兴通讯股份有限公司 | Information processing method, network controller, node and computer-readable storage medium |
| CN114157531A (en) * | 2020-08-18 | 2022-03-08 | 华为技术有限公司 | Method, device and network equipment for transmitting segment identification VPN SID of virtual private network |
| EP4020927A1 (en) * | 2020-12-28 | 2022-06-29 | Nokia Solutions and Networks Oy | Packet forwarding on non-coherent paths |
| CN115484204A (en) * | 2022-09-15 | 2022-12-16 | 中国电信股份有限公司 | Network fault recovery method, device, system, electronic equipment and storage medium |
| US11622029B2 (en) | 2021-07-28 | 2023-04-04 | International Business Machines Corporation | Optimizing information transmitted over a direct communications connection |
| EP4561010A4 (en) * | 2022-07-29 | 2025-11-05 | Huawei Tech Co Ltd | METHOD AND DEVICE FOR ESTABLISHING A RESOURCE RESERVATION PATH AND COMMUNICATION METHOD AND DEVICE |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120243539A1 (en) * | 2011-03-21 | 2012-09-27 | Avaya Inc. | Usage of masked ethernet addresses between transparent interconnect of lots of links (trill) routing bridges |
| WO2017147076A1 (en) * | 2016-02-22 | 2017-08-31 | Idac Holdings, Inc. | Methods, apparatuses and systems directed to common transport of backhaul and fronthaul traffic |
2020
- 2020-03-18 WO PCT/US2020/023443 patent/WO2020160564A1/en not_active Ceased
Non-Patent Citations (6)
| Title |
|---|
| CHUNDURI ET AL.: "Preferred Path Route Graph Structure draft-ce-lsr-ppr-graph-03", 8 March 2020, LSR WORKING GROUP |
| CHUNDURI, U. (Huawei USA), LI, R. (Huawei USA), WHITE, R. (Juniper Networks), TANTSURA, J. (Apstra Inc.), CONTRERAS, L. (Telefonica), QU, Y. (Huawei USA): "Preferred Path Routing (PPR) in IS-IS; draft-chunduri-lsr-isis-preferred-path-routing-02.txt", no. 2, 15 February 2019 (2019-02-15), pages 1-26, XP015131102, Retrieved from the Internet <URL:https://tools.ietf.org/html/draft-chunduri-lsr-isis-preferred-path-routing-02> [retrieved on 20190215] * |
| E. CRABBE, PATH COMPUTATION ELEMENT COMMUNICATION PROTOCOL (PCEP) EXTENSIONS FOR PCE-INITIATED LSP SETUP IN A STATEFUL PCE MODEL, December 2017 (2017-12-01) |
| S. HEGDE, TRAFFIC ACCOUNTING FOR MPLS SEGMENT ROUTING PATHS, 30 October 2017 (2017-10-30) |
| U. CHUNDURI, PREFERRED PATH ROUTING (PPR) IN IS-IS, 9 January 2020 (2020-01-09) |
| Y. QU, YANG DATA MODEL FOR PREFERRED PATH ROUTING, 27 June 2018 (2018-06-27) |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP3869751B1 (en) | Flexible algorithm aware border gateway protocol (bgp) prefix segment routing identifiers (sids) | |
| US9019865B2 (en) | Advertising traffic engineering information with the border gateway protocol | |
| US7945696B2 (en) | Differentiated routing using tunnels in a computer network | |
| US10305696B2 (en) | Group bundling priority dissemination through link-state routing protocol in a network environment | |
| US20210058260A1 (en) | Multicast Data Transmission Method, Related Apparatus, and System | |
| CN102037685B (en) | IP forwarding over Ethernet controlled by link state protocol | |
| US8077713B2 (en) | Dynamic update of a multicast tree | |
| CN101960785B (en) | Implementation of VPN on Link State Protocol Controlled Ethernet Network | |
| CN102150148B (en) | Differentiated services for unicast multicast frames in layer 2 topologies | |
| US9929946B2 (en) | Segment routing techniques | |
| WO2020160564A1 (en) | Preferred path routing in ethernet networks | |
| US11431630B2 (en) | Method and apparatus for preferred path route information distribution and maintenance | |
| US11770329B2 (en) | Advertising and programming preferred path routes using interior gateway protocols | |
| US8825898B2 (en) | Technique for optimized routing of data streams on an IP backbone in a computer network | |
| CN106063203A (en) | Software defined networking (SDN) specific topology information discovery | |
| KR20100113540A (en) | Mpls p node replacement using link state protocol controlled ethernet network | |
| US8902794B2 (en) | System and method for providing N-way link-state routing redundancy without peer links in a network environment | |
| CN111698162A (en) | Method, device and system for information synchronization | |
| WO2022042610A1 (en) | Information processing method, network controller, node and computer-readable storage medium | |
| Li et al. | Inter-domain Routing Based on Hybrid Metrics |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20719273 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 20719273 Country of ref document: EP Kind code of ref document: A1 |