US20140146664A1 - Apparatus, system and method for packet switching - Google Patents
- Publication number
- US20140146664A1 (application US 14/089,547)
- Authority
- US
- United States
- Prior art keywords
- forwarding
- network
- lsp
- switches
- next hop
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H04L45/50—Routing or path finding of packets in data switching networks using label swapping, e.g. multi-protocol label switch [MPLS]
- H04L45/507—Label distribution
- H04L41/12—Discovery or management of network topologies
- H04L45/03—Topology update or discovery by updating link state protocols
- H04L45/033—Topology update or discovery by updating distance vector protocols
- H04L45/036—Updating the topology between route computation elements, e.g. between OpenFlow controllers
- H04L45/26—Route discovery packet
- H04L45/28—Routing or path finding of packets in data switching networks using route fault recovery
- H04L45/42—Centralised routing
- H04L12/4604—LAN interconnection over a backbone network, e.g. Internet, Frame Relay
- Y02D30/00—Reducing energy consumption in communication networks
Definitions
- the disclosure generally relates to computer networks, and more particularly, to an apparatus, system, and method for packet switching.
- Networks such as the Internet, have numerous networking and computing machines that are involved in transmitting data between machines in the network.
- One such networking machine is the router.
- a router is a highly complex piece of networking equipment that directs data packets through a network from one machine to another.
- a router receives packets of data, determines the destination for those data packets, and then transmits the data packets to the correct port that is connected with the destination or the next stop on a path to the destination.
- a switch is a similar type of networking device that directs packets of data through a network, albeit some switches may make fewer and less sophisticated decisions as to the next hop for a data packet. Regardless, both routers and switches are highly sophisticated and complex pieces of networking equipment.
- routers and switches are typically sold as a vertically integrated device, with a full computer hardware solution integrated with a full software suite. While providing excellent functionality, such vertically integrated devices are very expensive. Moreover, such vertically integrated devices do not provide network providers with the capability to customize the router or switch, to deploy a lighter weight device (one with less software, for example), or to otherwise customize the device or provide unique services or rates within the network.
- An apparatus is provided for control of a plurality of forwarding switches using a network controller.
- the network controller executes a routing configuration application that analyzes interconnections between the forwarding switches to identify a topology of the network, determines label switched paths (LSPs) between the forwarding switches, and transmits next hop routes for each LSP to the forwarding switches.
- the forwarding switches use the next hop routes to route packets through the network according to a multiprotocol label switching (MPLS) protocol.
- Each LSP includes one or more next hop routes defining a forwarding address associated with one forwarding switch to an adjacent forwarding switch.
- a network controlling method includes analyzing, by a network controller, a plurality of interconnections between a plurality of forwarding switches of a communication network to identify a network topology of the communication network, determining at least one label switched path (LSP) between the forwarding switches, and transmitting next hop routes for the at least one LSP to the forwarding switches.
- the forwarding switches use the next hop routes to route packets through the network according to a multiprotocol label switching (MPLS) protocol.
- Each LSP includes one or more next hop routes defining a forwarding address associated with one forwarding switch to an adjacent forwarding switch.
- a communication network system includes multiple forwarding switches interconnected with one another, and controlled by a network controller.
- the network controller executes a routing configuration application that analyzes interconnections between the forwarding switches to identify a topology of the network, determines label switched paths (LSPs) between the forwarding switches, and transmits next hop routes for each LSP to the forwarding switches.
- the forwarding switches use the next hop routes to route packets through the network according to a multiprotocol label switching (MPLS) protocol.
- Each LSP includes one or more next hop routes defining a forwarding address associated with one forwarding switch to an adjacent forwarding switch.
- FIG. 1 illustrates an example communication network conforming to aspects of the present disclosure.
- FIG. 2A illustrates an example process that may be performed to manage and control routes through a communication network according to the teachings of the present disclosure.
- FIG. 2B illustrates an example process for routing packets through a communication network according to the teachings of the present disclosure.
- FIG. 3 is an example computing system that may implement various systems and methods discussed herein.
- aspects of the present disclosure involve a networking architecture and related apparatus and methods for packet switching using one or more software defined networking (SDN) controllers deployed in a network and in communication with any number of non-vertically integrated forwarding switches.
- the forwarding switches in the present architecture do not necessarily independently calculate routing tables.
- the forwarding switch may be a generic hardware device with forwarding plane hardware, such as one or more line cards that provide the ports for connecting to other forwarding switches, needed to forward packets.
- the forwarding switch may also include a light weight operating system and customized applications; an SDN controller (or controllers) runs routing protocols for the network and provides the forwarding paths to the forwarding switches.
- FIG. 1 illustrates an example communication network 100 conforming to aspects of the present disclosure.
- information flows through a backbone network 102 to and from a customer network 104 , and particularly through a customer edge (CE) router 106 of the customer network 104 .
- FIG. 1 depicts another customer network 108 with a device 110 that receives or transmits information over the backbone network 102 through a provider edge router 112 .
- the network architecture, devices, and methods discussed herein are applicable to other embodiments where a customer/provider arrangement does not necessarily exist.
- the illustrated network is a backbone network, the architecture and devices set out herein are applicable to other forms of networks.
- the (CE) router 106 is coupled with a provider edge (PE) device 114 that provides a communication point between the customer network 104 and the backbone network 102 .
- various devices within the customer network 104 are connected to the CE router 106 .
- the CE router 106 is in communication with the provider edge device 114 , which may be connected using any type of connection, such as a gigabit Ethernet (GigE) connection.
- the PE device 114 is a conventional vertically integrated device such as a router.
- the PE device 114 is in communication with a gateway 116 of the backbone network 102 .
- the PE device 114 is configured to interoperate with legacy customer devices, such as the CE router 106 , that the backbone network may not control or operate.
- the network by using a conventional PE device 114 , may maintain interoperability with conventional devices and protocols without involving any change at the CE router 106 or customer network 104 .
- each forwarding switch may include generic hardware, such as one or more line cards that provide forwarding plane hardware and ports for connecting to other forwarding switches needed to forward packets.
- the SDN controller 120 determines routing information to be used by each forwarding switch and transmits this routing information to the forwarding switches for routing packets through the communication network 102 .
- the SDN controller 120 may comprise two or more SDN controllers that function together to determine and control routes through the network 102 .
- the scale and configuration of the network 102 will play a role in determining how many SDN controllers 120 are used in the network 102 .
- multiple SDN controllers 120 may be deployed at each data center where forwarding switches and other networking components are located.
- a large, international network may include multiple SDN controllers 120 distributed at varying locations to distribute the processing load of each SDN controller 120 and to provide fault tolerance.
- the forwarding switches (P 1 -P 5 ) communicate with the SDN controller 120 to receive routing information to be used for routing packets through the backbone network 102 .
- the SDN controller 120 may compute routes and forward those routes to the forwarding switches (P 1 -P 5 ). That is, high speed memory within the line cards is prepopulated with routes computed by the SDN controller 120 prior to routing packets through the backbone network 102 .
- the SDN controller 120 may respond to queries from each forwarding switch concerning packet forwarding and provide routes to the forwarding switch after it has received the packets.
- the two embodiments described above may also be practiced in combination.
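The combination of the two embodiments above (prepopulating routes versus answering per-packet queries) can be sketched briefly. This is an illustrative sketch only, not the patent's implementation; the `Controller` and `Switch` classes, their methods, and the label values are all hypothetical.

```python
# Hypothetical push/pull combination: routes are prepopulated (push),
# and a switch falls back to querying the controller on a miss (pull).

class Controller:
    def __init__(self, routes):
        self.routes = routes          # label -> next hop, precomputed
    def push_all(self, switch):
        switch.table.update(self.routes)   # prepopulate the switch
    def query(self, label):
        return self.routes.get(label)      # answer an on-demand query

class Switch:
    def __init__(self, controller):
        self.table = {}
        self.controller = controller
    def next_hop(self, label):
        if label not in self.table:        # pull from controller on miss
            hop = self.controller.query(label)
            if hop is not None:
                self.table[label] = hop
        return self.table.get(label)

ctrl = Controller({100: "P3", 101: "P4"})
p1 = Switch(ctrl)
ctrl.push_all(p1)        # prepopulate line-card memory
hop = p1.next_hop(100)   # resolved locally, no query needed
```

A switch that was never prepopulated still resolves labels through the query path, which mirrors how the two modes can coexist.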
- Routing in the described architecture may be performed based on multiprotocol label switching (MPLS), or more specifically MPLS labels, as opposed to using layer 2 or layer 3 headers.
- the present architecture may make forwarding decisions at a higher layer of abstraction, where forwarding decisions are made without analyzing the specific IP address or other layer 2 or layer 3 header information, relying instead on an MPLS label that represents a plurality of IP addresses or other layer 3 or layer 2 header information.
- MPLS labels are generally shorter and easier to decipher than layer 2 or layer 3 information in each packet, thus allowing the use of high speed, hardwired routing mechanisms, such as application specific integrated circuits (ASICs) that are relatively inexpensive to implement and maintain.
- multiple forwarding switches may be configured as a multi-chassis link aggregation group (MC-LAG) for one or more edge devices (e.g., provider edge device 114 and/or provider edge router 112 ).
- Such a configuration may provide certain benefits, such as reducing the configuration of static LSP label mappings on the edge devices. Specifically, only one or a few static LSP mappings may be required for each edge device, and not one for each forwarding switch provisioned in the network.
- FIG. 2A illustrates an example process that may be performed by the SDN controller 120 to manage and control routes through the communication network 102 according to the teachings of the present disclosure.
- the SDN controller 120 analyzes the network, which in the simplified example includes forwarding switches (P 1 -P 5 ), to identify the interconnections between the forwarding switches.
- P 1 is connected to P 3 and P 4
- P 2 is connected to P 3 and P 4
- P 3 is connected to P 5 as well as directly to the external provider edge router 112
- P 4 is connected to P 5 , which also has a connection to the external provider edge router.
- These interconnections represent possible paths through the network.
- a packet may traverse the network from P 1 to P 4 to P 5
- a packet may also traverse the network from P 1 to P 3 to P 5 .
- the aggregate of these interconnections represents the topology of the network.
- the SDN controller 120 discovers the forwarding switches (P 1 -P 5 ), such as through the link layer discovery protocol (LLDP), and the connections between them. In other embodiments, any suitable type of protocol may be used to discover the topology of the communication network 102 . Additionally, the SDN controller 120 learns the topology of the backbone network 102 using multiple characteristics of each interconnection commonly referred to as an “IGP metric.” These characteristics may be used by the SDN controller 120 to determine one or more optimal paths for packets through the network 102 .
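The topology identification step can be illustrated with a short sketch: per-switch neighbor reports (standing in for LLDP data) are folded into an adjacency map annotated with an IGP metric per link. This is a hypothetical illustration, not the disclosed implementation; the data structures and metric values are assumptions.

```python
# Illustrative sketch: building a network topology from LLDP-style
# neighbor reports, as an SDN controller might.

def build_topology(neighbor_reports):
    """Fold per-switch neighbor reports into an undirected
    adjacency map: switch -> {neighbor: igp_metric}."""
    topology = {}
    for switch, neighbors in neighbor_reports.items():
        for neighbor, metric in neighbors:
            topology.setdefault(switch, {})[neighbor] = metric
            topology.setdefault(neighbor, {})[switch] = metric
    return topology

# The simplified network of FIG. 1: P1 and P2 each connect to P3 and P4,
# which in turn connect to P5 (metric values are made up).
reports = {
    "P1": [("P3", 10), ("P4", 10)],
    "P2": [("P3", 10), ("P4", 10)],
    "P3": [("P5", 5)],
    "P4": [("P5", 5)],
}
topo = build_topology(reports)
```

The resulting adjacency map is the input a path-computation algorithm would consume.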
- the SDN controller 120 may apply any number of possible routing algorithms, as well as customized routing algorithms, to the network topology to define MPLS paths through the network in operation 210 .
- a least cost routing algorithm, a Dijkstra routing algorithm, a geographic routing algorithm, a hierarchical routing algorithm, and/or a multipath routing algorithm may be used.
- the SDN controller may include a customized route for specific routing information.
- multiple routing algorithms may be used in combination.
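As one concrete example of the algorithms named above, a minimal Dijkstra least-cost path computation over the discovered topology might look like the following. The graph representation (switch mapped to neighbor/metric pairs) is an assumption for illustration, not a structure mandated by the disclosure.

```python
# Minimal Dijkstra shortest-path sketch over a topology of the form
# {switch: {neighbor: igp_metric}}; purely illustrative.
import heapq

def dijkstra_path(topology, source, dest):
    """Return the least-cost path from source to dest as a node list,
    or None if dest is unreachable."""
    queue = [(0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dest:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, metric in topology.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + metric, neighbor, path + [neighbor]))
    return None

topo = {
    "P1": {"P3": 10, "P4": 20},
    "P3": {"P1": 10, "P5": 5},
    "P4": {"P1": 20, "P5": 5},
    "P5": {"P3": 5, "P4": 5},
}
# with these (made-up) metrics, the least-cost route P1 -> P5 runs via P3
```

The controller could run such a computation per source/destination pair, or substitute any of the other algorithms listed above.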
- the SDN controller 120 implements a multiprotocol label switching (MPLS) mechanism for forwarding packets through the network 102 in which each route is referred to as a label switched path (LSP).
- the SDN controller 120 executes a label distribution protocol (LDP) that generates label mapping information for the communication network and transmits the label mapping information to each forwarding switch in the backbone network 102 . That is, the SDN controller 120 designates unique labels for each forwarding switch in the backbone network 102 that are used for routing packets through the backbone network 102 .
- the SDN controller 120 determines the routes according to an MPLS protocol.
- the MPLS protocol is a mechanism used in data networks in which packets are routed through nodes (e.g., edge devices and forwarding switches) of the network using labels appended to each packet, rather than by inspection of each layer 2 or layer 3 address of each packet. So for example, regardless of the least cost route, the SDN controller 120 may determine an LSP and identify that path with a label (e.g., XYZ) such that any packet with that routing label may be directed to traverse the network according to next hop routes determined by the SDN controller 120 and downloaded to each forwarding switch (P 1 -P 5 ).
- the forwarding switches do not require MPLS signaling or label distribution protocols (e.g., LDP, RSVP, and/or BGP) to exchange MPLS labels. That is, the forwarding switches (P 1 -P 5 ) may be devoid of any routing functionality, thus reducing their costs while enhancing reliability by reducing the complexity of hardware and software used in the forwarding switches.
- Each LSP extends from one edge device to another edge device (e.g., provider edge device 114 and provider edge router 112 ) and includes one or more next hop routes to be performed by any forwarding switch (P 1 -P 5 ) along that route.
- one particular LSP 122 may extend through provider edge device 114 , forwarding switch P 1 , forwarding switch P 3 , and end at provider edge router 112 .
- the SDN controller 120 determines a next hop route 124 a that instructs forwarding switch P 1 to forward packets along that LSP 122 to forwarding switch P 3 , and another next hop route 124 b that instructs forwarding switch P 3 to forward packets along that path to provider edge router 112 .
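The decomposition of an edge-to-edge LSP into per-switch next hop routes, in the spirit of routes 124a/124b above, can be sketched as follows. The node names `"PE114"` and `"PE112"` are shorthand for the numbered edge devices, and the tuple layout of a next hop route is an assumption.

```python
# Illustrative sketch: split an LSP (an ordered node list) into the
# per-switch next hop routes the controller downloads.

def next_hop_routes(lsp_path):
    """Emit one next hop route per node along the LSP: (node, next hop)."""
    return [(lsp_path[i], lsp_path[i + 1]) for i in range(len(lsp_path) - 1)]

# LSP 122: provider edge device 114 -> P1 -> P3 -> provider edge router 112
lsp_122 = ["PE114", "P1", "P3", "PE112"]
routes = next_hop_routes(lsp_122)
# the entry for P1 corresponds to next hop route 124a (forward to P3),
# and the entry for P3 to route 124b (forward to PE112)
```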
- the SDN controller 120 includes a route reflector (RR) function that interfaces with a border gateway protocol (BGP) instance executed on each of the provider edge device 114 and provider edge router 112 to learn destinations of all packet traffic through the backbone network 102 .
- the RR function uses the BGP instance to resolve next hop routes for each adjacent node (e.g., provider edge device 114 , provider edge router 112 , and forwarding switches (P 1 -P 5 )) in the backbone network 102 .
- the SDN controller 120 also stores loopback interface information for each edge device (i.e., provider edge device 114 and provider edge router 112 ), since the loopback interface is what BGP uses to resolve its next hop route to other nodes. Additionally, the SDN controller 120 uses the stored loopback interface information about each edge device to resolve the sources and destinations of the LSPs.
- upon receiving a packet from the (CE) router, a conventional ingress provider edge router will conduct a conventional border gateway protocol (BGP) routing look-up using an IP destination address of the packet, where the look-up occurs in a BGP routing table.
- The result of the lookup in the BGP routing table is the next-hop IP address of the loopback interface of an egress PE, and an MPLS tunnel label associated with that loopback interface of the egress PE, at the far end of the network where a customer (destination) network is attached to that egress PE router.
- the ingress PE router then adds that MPLS label to the packet and forwards the MPLS encapsulated packet to the Backbone label switch router (P 1 or P 2 ).
- packets arriving at the backbone label switch router will cause the MPLS label switch router to perform a lookup based on the incoming MPLS label to determine the appropriate LSP that is used to forward the MPLS packet to the next forwarding switch and, ultimately, to the destination PE at the remote end of the network.
- forwarding entries (LSP entries) in each label forwarding switch are provided solely by the SDN controller.
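The label-only lookup performed at each backbone label switch router can be sketched as a small table-driven function. This is a hypothetical model of the forwarding plane, not the disclosed hardware; the entry layout, label values, and port names are all made up.

```python
# Illustrative forwarding-plane behavior of a label switch router:
# the lookup is keyed only on the incoming MPLS label (no layer 2/3
# header inspection), and every entry was installed by the controller.

def forward(lfib, packet):
    """Look up the packet's outer label; swap it (or pop it at the
    last label hop) and return the outgoing port."""
    entry = lfib[packet["label"]]
    if entry["action"] == "swap":
        packet["label"] = entry["out_label"]
    elif entry["action"] == "pop":
        del packet["label"]
    return entry["out_port"]

# Controller-installed entries at forwarding switch P3 (labels made up).
lfib_p3 = {
    100: {"action": "swap", "out_label": 200, "out_port": "to_P5"},
    101: {"action": "pop", "out_port": "to_PE112"},
}
pkt = {"label": 100, "payload": b"..."}
port = forward(lfib_p3, pkt)   # label swapped to 200, sent toward P5
```

Because the table is populated solely by the SDN controller, the switch itself needs no routing protocol logic.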
- the SDN controller 120 uses stored loopback interface information of all PE devices to generate label forwarding information base (LFIB) entries that are subsequently transmitted to each edge device (e.g., provider edge router 112 and provider edge device 114 ) in the backbone network 102 .
- the LFIB is transmitted to each edge device using any suitable protocol, such as a netconf protocol, a command line interface (CLI) protocol, or an openflow protocol.
- the LFIB is processed by the edge device to identify next hop routes (i.e., routing actions) corresponding with each LSP across the Backbone to a remote PE.
- the forwarding switches receive routing information for each next hop route, including the mapping of egress PE loopback interface to LSP, from LFIB information generated by the SDN controller 120 .
- Certain embodiments including such functionality may reduce the complexity of the forwarding switches by placing route resolution functionality in the SDN controller 120 and edge devices rather than in the forwarding switches.
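The controller-side LFIB generation described above can be sketched as a mapping from each remote PE loopback to a tunnel label and its LSP. The sequential label allocation scheme, record shapes, and addresses here are assumptions for illustration only.

```python
# Hypothetical sketch of controller-side LFIB generation: for each
# egress PE loopback, bind a tunnel label to the LSP toward that PE.

def generate_lfib(pe_loopbacks, lsps, base_label=100):
    """Return {egress PE loopback: (tunnel_label, lsp_path)} entries
    an ingress edge device can use to label packets."""
    lfib = {}
    label = base_label
    for pe, loopback in sorted(pe_loopbacks.items()):
        if pe in lsps:
            lfib[loopback] = (label, lsps[pe])
            label += 1                      # naive sequential allocation
    return lfib

loopbacks = {"PE112": "10.0.0.2", "PE114": "10.0.0.1"}
lsps = {"PE112": ["P1", "P3", "PE112"], "PE114": ["P3", "P1", "PE114"]}
entries = generate_lfib(loopbacks, lsps)
# a BGP next-hop lookup that resolves to 10.0.0.2 would then yield the
# tunnel label bound to the LSP toward PE112
```

Entries like these would then be transmitted to the edge devices via netconf, CLI, or OpenFlow, as described above.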
- the edge devices may also generate additional LFIB actions to enable label-swapping and/or label pushing by the forwarding switches, when these devices interface with nodes of other networks, such as a broader inter-city backbone network.
- one or more bypass LSPs may be determined for each LSP thus providing for increased reliability in the event that the primary LSP fails or begins to operate below a specified level of performance.
- the next hop routes are then transmitted to several individual forwarding switches to construct an edge-to-edge LSP (i.e., tunnel) across the Backbone network 102 .
- MPLS LSPs may be loaded into line card memory of the forwarding switches.
- FIG. 2A describes one example of a process that may be performed by the SDN controller 120 to manage and control routes through the communication network 102
- the features of the disclosed process may be embodied in other specific forms without deviating from the spirit and scope of the present disclosure.
- the disclosed operations may be performed sequentially or simultaneously with one another.
- the disclosed operations may be performed in any suitable sequence and not just in the sequence described herein.
- FIG. 2B illustrates an example process for routing packets through the communication network 102 according to the teachings of the present disclosure. More specifically, FIG. 2B describes various actions that may be taken after the next hop routes are generated and stored in the forwarding switches as described above with reference to FIG. 2A .
- the forwarding switches (P 1 -P 5 ) route packets through the communication network 102 according to their programmed next hop routes.
- each forwarding switch reacts autonomously to a local link-failure and immediately switches traffic onto a Bypass LSP.
- a bypass LSP generally refers to another LSP that is redundant to the main path, but routed through differing forwarding switches such that, in the event that a forwarding switch through which the main LSP travels should fail, packets may be transferred over to the bypass LSP.
- the forwarding switch may react to any failure indication, such as a Loss-of-Signal (LOS) or Loss-of-Light (LOL) indication, to initiate switchover.
- each forwarding switch may also use a local, onboard implementation of Link Aggregation Control Protocol (LACP) and/or a Bidirectional Forwarding Detection (BFD) to detect failures, which may not be adequately noticed by the LOS or LOL indications.
- the forwarding switch would autonomously perform an action similar to that of the Fast Re-Route Point-of-Local-Repair (PLR) where it automatically appends a new MPLS label to packets to temporarily detour traffic around the failure.
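The Point-of-Local-Repair behavior described above, where a switch pushes a new MPLS label to detour traffic onto a pre-installed bypass LSP, can be sketched as follows. The label values, port names, and table layout are hypothetical illustrations, not the patent's implementation.

```python
# Illustrative fast re-route sketch: on a local link failure, push the
# bypass LSP's label onto the stack and redirect to the detour port.

def on_link_failure(label_stack, bypass, out_port):
    """If out_port has a pre-installed bypass entry, push the detour
    label (outermost first) and return the detour port; otherwise
    leave the packet unchanged."""
    if out_port in bypass:
        detour_label, detour_port = bypass[out_port]
        label_stack.insert(0, detour_label)   # new outermost label
        return detour_port
    return out_port

# Pre-installed bypass at P1: if the link toward P3 dies, detour via P4.
bypass_p1 = {"to_P3": (300, "to_P4")}
stack = [100]                       # primary LSP label
port = on_link_failure(stack, bypass_p1, "to_P3")
# the packet now carries [300, 100] and exits toward P4
```

Once the detour is active, the switch can notify the controller, which re-optimizes primary LSPs as described below.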
- the forwarding switch would notify the SDN Controller 120 of the failure such that the SDN controller 120 could re-calculate and re-optimize primary LSPs through the network, as appropriate. This may include programming new primary LSPs through the network and performing Make-Before-Break (MBB) actions, as required.
- each forwarding switch manages time-to-live (TTL) exceeded packets.
- each forwarding switch determines any TTL exceeded packets, generates an Internet control message protocol (ICMP) Destination Unreachable response, encapsulates that response with the original MPLS (outermost) label set, and forwards the encapsulated packet to the egress edge device associated with the LSP.
- the forwarding switch transmits information associated with the TTL exceeded packet to the SDN controller 120 such that the SDN controller 120 generates an MPLS label stack that may be used for transmitting the TTL exceeded packet back to the ingress edge device.
- the SDN controller 120 may wrap the ICMP TTL exceeded message in a user datagram protocol (UDP) (e.g., GRE or IP) tunnel that directs the TTL exceeded packet back to the ingress edge device associated with the LSP.
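The switch-local TTL handling described above can be sketched as: decrement the TTL, and on expiry build an ICMP reply (conventionally an ICMP Time Exceeded message) that carries the packet's original outermost label set so it exits at the egress edge device of the LSP. All record shapes here are illustrative assumptions, not the disclosed implementation.

```python
# Illustrative TTL-exceeded handling at a forwarding switch.

def handle_ttl(packet):
    """Decrement TTL; on expiry return an ICMP reply carrying the
    original label set, else return the packet for normal forwarding."""
    packet["ttl"] -= 1
    if packet["ttl"] > 0:
        return ("forward", packet)
    icmp = {
        "type": "time-exceeded",
        "orig_header": packet["payload"][:28],  # IP header + 8 bytes, per ICMP
        "labels": list(packet["labels"]),       # keep the original label set
    }
    return ("icmp_to_egress", icmp)

pkt = {"ttl": 1, "labels": [100], "payload": b"x" * 64}
action, msg = handle_ttl(pkt)
# the reply retains label 100, so it follows the LSP to the egress PE
```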
- FIG. 2B describes one example of a process that may be performed by the forwarding switches (P 1 -P 5 ) for routing packets through the communication network 102
- the features of the disclosed process may be embodied in other specific forms without deviating from the spirit and scope of the present disclosure.
- the disclosed operations may be performed after or simultaneously with the operations described above with respect to FIG. 2A .
- the forwarding switches (P 1 -P 5 ) may perform additional, fewer, or different operations than those operations as described in the present example.
- the system may provide customizable network services and allow for much more rapid introduction of new services.
- the system may be more robust as compared to vertically integrated systems (particularly in software), which have tended to have more bugs simply resulting from the sheer complexity of conventional vertically integrated systems that are required to include many functions for conforming to standards for interoperating autonomously with other devices.
- a substantial portion of the software complexity of the forwarding switches is provided in the SDN controller 120 , allowing for far less expensive and less complicated hardware switches relative to conventional routers and switches.
- the overall system (combination of SDN controller and hardware switches) can also be customizable to provide unique or customized routes not otherwise decided by conventional routing protocols.
- FIG. 3 is an example computing system 300 that may implement various systems and methods discussed herein.
- the computing system may embody the SDN controller 120 discussed herein.
- the computing system may also provide the functionality of the forwarding switches (P 1 -P 5 ) as discussed herein.
- the computing system 300 includes at least one processor 310 , at least one communication port 315 , a main memory 320 , a removable storage media 325 , a read-only memory 330 , a mass storage device 335 , and an I/O port 340 .
- Processor(s) 310 can be any known processor, such as, but not limited to, an Intel® Itanium® or Itanium 2® processor(s), AMD® Opteron® or Athlon MP® processor(s), or Motorola® lines of processors.
- the communication port 315 can be any type, such as an RS-232 port for use with a modem based dial-up connection, a 10/100 Ethernet port, a Gigabit port using copper or fiber, or a USB port.
- Communication port(s) 315 may be chosen depending on a network such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system 300 connects.
- the computing system 300 may be in communication with peripheral devices (e.g., display screen 350 and a user input device 516 ) via Input/Output (I/O) port 340 .
- peripheral devices e.g., display screen 350 and a user input device 516
- I/O Input/Output
- Main memory 320 can be Random Access Memory (RAM) or any other dynamic storage device(s) commonly known in the art.
- Read-only memory 330 can be any static storage device(s) such as Programmable Read-Only Memory (PROM) chips for storing static information such as instructions for processor 310 .
- Mass storage device 335 can be used to store information and instructions.
- hard disks such as the Adaptec® family of Small Computer Serial Interface (SCSI) drives, an optical disc, an array of disks such as Redundant Array of Independent Disks (RAID), such as the Adaptec® family of RAID drives, or any other mass storage devices, may be used.
- SCSI Small Computer Serial Interface
- RAID Redundant Array of Independent Disks
- the bus 305 communicatively couples processor(s) 310 with the other memory, storage and communications blocks.
- the bus 305 can be a PCI/PCI-X, SCSI, or Universal Serial Bus (USB) based system bus (or other) depending on the storage devices used.
- Removable storage media 325 can be any kind of external hard drive, floppy drive, Iomega® Zip Drive, Compact Disc—Read Only Memory (CD-ROM), Compact Disc—Re-Writable (CD-RW), Digital Video Disk—Read Only Memory (DVD-ROM), etc.
- the computer system 300 includes one or more processors 310 .
- the processor 310 may include one or more internal levels of cache (not shown) and a bus controller or bus interface unit to direct interaction with the processor bus 305 .
- the main memory 320 may include one or more memory cards and a control circuit (not shown), or other forms of removable memory, and may store a routing configuration application 365 including computer executable instructions that, when run on the processor, implement the methods and system set out herein, such as the method discussed with reference to FIGS. 2A and 2B.
- Other forms of memory, such as a mass storage device 335, a read-only memory 330, and a removable storage memory 325, may also be included and accessible by the processor (or processors) 310 via the bus 305.
- the computer system 300 may further include a communication port 315 connected to a transport and/or transit network 355 by way of which the computer system 300 may receive network data useful in executing the methods and system set out herein as well as transmitting information and network configuration changes and MPLS routes or other routes determined thereby.
- the computer system 300 may include an I/O device 340 , or other device, by which information is displayed, such as at display screen 350 , or information is input, such as input device 345 .
- the input device 345 may be an alphanumeric input device, including alphanumeric and other keys for communicating information and/or command selections to the processor.
- the input device 345 may be another type of user input device including cursor control, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to the processors 310 and for controlling cursor movement on the display device 350 .
- the input may be through a touch screen, voice commands, and/or Bluetooth connected keyboard, among other input mechanisms.
- FIG. 3 is but one possible example of a computer system that may employ or be configured in accordance with aspects of the present disclosure.
- the methods disclosed may be implemented as sets of instructions or software readable by a device. Further, it is understood that the specific order or hierarchy of steps in the methods disclosed are instances of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the method can be rearranged while remaining within the disclosed subject matter.
- the accompanying method claims present elements of the various steps in a sample order, and are not necessarily meant to be limited to the specific order or hierarchy presented.
- the described disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure.
- a machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer).
- the machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read-only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or other types of medium suitable for storing electronic instructions.
Abstract
An apparatus is provided for control of a plurality of forwarding switches using a network controller. The network controller executes a routing configuration application that analyzes interconnections between the forwarding switches to identify a topology of the network, determines label switched paths (LSPs) between the forwarding switches, and transmits next hop routes to the forwarding switches. The forwarding switches use the next hop routes to route packets through the network according to a multiprotocol label switching (MPLS) protocol. Each LSP includes one or more next hop routes, each defining a forwarding address from one forwarding switch to an adjacent forwarding switch.
Description
- This application claims priority under 35 U.S.C. §119 from U.S. provisional application No. 61/729,862 entitled “APPARATUS, SYSTEM, AND METHOD FOR PACKET SWITCHING” filed on Nov. 26, 2012, the entire contents of which are fully incorporated by reference herein for all purposes.
- The disclosure generally relates to computer networks, and more particularly, to an apparatus, system, and method for packet switching.
- Networks, such as the Internet, have numerous networking and computing machines that are involved in transmitting data between machines in the network. One such networking machine is the router. A router is a highly complex piece of networking equipment that directs data packets through a network from one machine to another. Generally speaking, a router receives packets of data, determines the destination for those data packets, and then transmits the data packets to the correct port that is connected with the destination or the next hop on a path to the destination. There are numerous decisions and computations involved with determining the next hop on the path to the destination, and the router makes those decisions for enormous amounts of data every second. A switch is a similar type of networking device that directs packets of data through a network, albeit some switches may make fewer and less sophisticated decisions as to the next hop for a data packet. Regardless, both routers and switches are highly sophisticated and complex pieces of networking equipment.
- Conventional routers and switches are typically sold as a vertically integrated device, with a full computer hardware solution integrated with a full software suite. While providing excellent functionality, such vertically integrated devices are very expensive. Moreover, such vertically integrated devices do not provide network providers with the capability to customize the router or switch, to deploy a lighter weight device (one with less software, for example), or to otherwise customize the device or provide unique services or rates within the network.
- It is with these inadequacies and concerns in mind, among others, that various aspects of the present disclosure were conceived and developed.
- An apparatus is provided for control of a plurality of forwarding switches using a network controller. The network controller executes a routing configuration application that analyzes interconnections between the forwarding switches to identify a topology of the network, determines label switched paths (LSPs) between the forwarding switches, and transmits next hop routes to the forwarding switches. The forwarding switches use the next hop routes to route packets through the network according to a multiprotocol label switching (MPLS) protocol. Each LSP includes one or more next hop routes, each defining a forwarding address from one forwarding switch to an adjacent forwarding switch.
- According to another aspect, a network controlling method includes analyzing, by a network controller, a plurality of interconnections between a plurality of forwarding switches of a communication network to identify a network topology of the communication network, determining at least one label switched path (LSP) between the forwarding switches, and transmitting next hop routes to the forwarding switches. The forwarding switches use the next hop routes to route packets through the network according to a multiprotocol label switching (MPLS) protocol. Each LSP includes one or more next hop routes, each defining a forwarding address from one forwarding switch to an adjacent forwarding switch.
- According to yet another aspect, a communication network system includes multiple forwarding switches interconnected with one another and controlled by a network controller. The network controller executes a routing configuration application that analyzes interconnections between the forwarding switches to identify a topology of the network, determines label switched paths (LSPs) between the forwarding switches, and transmits next hop routes to the forwarding switches. The forwarding switches use the next hop routes to route packets through the network according to a multiprotocol label switching (MPLS) protocol. Each LSP includes one or more next hop routes, each defining a forwarding address from one forwarding switch to an adjacent forwarding switch.
- The foregoing and other objects, features and advantages of the disclosure will be apparent from the following description of particular embodiments of the disclosure, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the disclosure.
-
FIG. 1 illustrates an example communication network conforming to aspects of the present disclosure. -
FIG. 2A illustrates an example process that may be performed to manage and control routes through a communication network according to the teachings of the present disclosure. -
FIG. 2B illustrates an example process for routing packets through a communication network according to the teachings of the present disclosure. -
FIG. 3 is an example computing system that may implement various systems and methods discussed herein. - Aspects of the present disclosure involve a networking architecture and related apparatus and methods for packet switching using one or more software defined networking (SDN) controllers deployed in a network and in communication with any number of non-vertically integrated forwarding switches. Unlike a conventional vertically integrated router or switch that operates distributed routing protocols, such as open shortest path first (OSPF), border gateway protocol (BGP), or intermediate system to intermediate system (IS-IS), and independently calculates routing tables, the forwarding switches in the present architecture do not necessarily independently calculate routing tables. Instead, the forwarding switch may be a generic hardware device with the forwarding plane hardware, such as one or more line cards that provide the ports for connecting to other forwarding switches, needed to forward packets. The forwarding switch may also include a lightweight operating system and customized applications, while an SDN controller (or controllers) runs routing protocols for the network and provides the forwarding paths to the forwarding switches.
-
FIG. 1 illustrates an example communication network 100 conforming to aspects of the present disclosure. In this example, information flows through a backbone network 102 to and from a customer network, and particularly at a customer edge (CE) router 106 of the customer network 104. For the sake of simplicity, only one customer edge router is illustrated; however, numerous customers of the backbone network 102 along with numerous edge devices may transmit and receive information over the backbone network 102. Also, for the sake of simplicity, the diagram depicts another customer network 108 with a device 110 that receives or transmits information over the backbone network 102 through a provider edge router 112. Additionally, while the term ‘customer network’ is used herein, the network architecture, devices, and methods discussed herein are applicable to other embodiments where a customer/provider arrangement does not necessarily exist. Similarly, while the illustrated network is a backbone network, the architecture and devices set out herein are applicable to other forms of networks. In any event, the (CE) router 106 is coupled with a provider edge (PE) device 114 that provides a communication point between the customer network 104 and the backbone network 102. - Generally speaking, various devices within the
customer network 104, such as local area network devices, are connected to the CE router 106. The CE router 106 is in communication with the provider edge device 114, which may be connected using any type of connection, such as a gigabit Ethernet (GigE) connection. In this example network implementation, the PE device 114 is a conventional vertically integrated device such as a router. The PE device 114 is in communication with a gateway 116 of the backbone network 102. The PE device 114 is configured to interoperate with legacy customer devices, such as the CE router 106, that the backbone network may not control or operate. Thus, the network, by using a conventional PE device 114, may maintain interoperability with conventional devices and protocols without involving any change at the CE router 106 or customer network 104. - Within the
backbone network 102, however, one or more conventional routers or switches may be replaced with forwarding switches (P1-P5) whose routes are determined and controlled by one or more SDN controllers 120. The forwarding switches (P1-P5) are relatively non-complex devices in that they are not required to implement routing functionality or conform to other networking standards associated with other networking devices. For example, each forwarding switch may include generic hardware, such as one or more line cards that provide forwarding plane hardware and ports for connecting to other forwarding switches needed to forward packets. Rather than each forwarding switch calculating its own routing information, such as routing table information, the SDN controller 120 determines routing information for each forwarding switch and transmits it to the forwarding switches for use in routing packets through the communication network 102. - Although the particular embodiment shown only includes one
SDN controller 120, other embodiments may include two or more SDN controllers 120 that function together to determine and control routes through the network 102. The scale and configuration of the network 102 will play a role in determining how many SDN controllers 120 are used in the network 102. For example, in a small, geographically localized network, it may be sufficient to have one SDN controller 120. In a global network, however, multiple SDN controllers 120 may be deployed at each data center where forwarding switches and other networking components are located. As another example, a large, international network may include multiple SDN controllers 120 distributed at varying locations for distributing the processing load of each SDN controller 120 and providing fault tolerance. - The forwarding switches (P1-P5) communicate with the
SDN controller 120 to receive routing information to be used for routing packets through the backbone network 102. In a first embodiment, the SDN controller 120 may compute routes and forward those routes to the forwarding switches (P1-P5). That is, high speed memory within the line cards is prepopulated with routes computed by the SDN controller 120 prior to routing packets through the backbone network 102. In another embodiment, the SDN controller 120 may respond to queries from each forwarding switch concerning packet forwarding and provide routes to the forwarding switch after it has received the packets. In other embodiments, the two embodiments described above may also be practiced in combination. - Routing in the described architecture may be performed based on multiprotocol label switching (MPLS), or more specifically MPLS labels, as opposed to using layer 2 or layer 3 headers. Thus, for example, as opposed to analyzing each IPv4 or IPv6 address in a data packet, the present architecture may make forwarding decisions at a higher layer of abstraction, without analyzing the specific IP address or other layer 2 or layer 3 header information, but rather using an MPLS label that represents a plurality of IP addresses or other layer 3 or layer 2 header information. Such an implementation is particularly useful in a backbone network setting where hardware resources, such as table lookup capacities, are limited. Additionally, MPLS labels are generally shorter and easier to decipher than layer 2 or layer 3 information in each packet, thus allowing the use of high speed, hardwired routing mechanisms, such as application specific integrated circuits (ASICs), that are relatively inexpensive to implement and maintain.
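The lookup contrast described above can be made concrete with a short sketch. The following is illustrative only and not part of the original disclosure; the prefixes, labels, and next hop names are hypothetical. It compares a conventional longest-prefix IP lookup with the single exact-match label lookup a forwarding switch performs:

```python
import ipaddress

def longest_prefix_match(dst_ip, rib):
    # Conventional layer 3 forwarding: scan every prefix and keep the
    # most specific one that contains the destination address.
    best_hop, best_len = None, -1
    addr = ipaddress.ip_address(dst_ip)
    for prefix, next_hop in rib.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best_hop, best_len = next_hop, net.prefixlen
    return best_hop

def label_lookup(label, lfib):
    # MPLS forwarding: one exact-match lookup on a short label, which is
    # straightforward to implement in hardware such as an ASIC.
    return lfib.get(label)

# Hypothetical tables: one label stands in for every prefix behind it.
rib = {"10.0.0.0/8": "P3", "10.1.0.0/16": "P4"}
lfib = {17: ("swap", 23, "P3")}  # in-label -> (action, out-label, next hop)

ip_hop = longest_prefix_match("10.1.2.3", rib)  # most specific prefix wins
label_entry = label_lookup(17, lfib)            # single table probe
```

The label table needs only a fixed-size exact match, which is why the passage above notes it suits limited hardware lookup capacity.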
- In one embodiment, multiple forwarding switches (P1-P2) may be configured as a multi-chassis link aggregation group (MC-LAG) for one or more edge devices (e.g.,
provider edge device 114 and/or provider edge router 112). Such a configuration may provide certain benefits, such as reducing the configuration of static LSP label mappings on the edge devices. Specifically, only one or a few static LSP mappings for each edge device may be required, and not one for each forwarding switch provisioned in the network. -
FIG. 2A illustrates an example process that may be performed by theSDN controller 120 to manage and control routes through thecommunication network 102 according to the teachings of the present disclosure. Inoperation 200, theSDN controller 120 analyzes the network, which in the simplified example includes forwarding switches (P1-P5), to identify the interconnections between the forwarding switches. Here, it can be seen that P1 is connected to P3 and P4, P2 is connected to P3 and P4, P3 is connected to P5 as well as directly to the externalprovider edge router 112, and P4 is connected to P5, which also has a connection to the external provider edge router. These interconnections represent possible paths through the network. Thus, for example, a packet may traverse the network from P1 to P4 to P5, and a packet may also traverse the network from P1 to P3 to P5. The aggregate of these interconnections represent the topology of the network. - In one embodiment, the
SDN controller 120 discovers the forwarding switches (P1-P5), such as through the link layer discovery protocol (LLDP), and the connections between them. In other embodiments, any suitable type of protocol may be used to discover the topology of the communication network 102. Additionally, the SDN controller 120 learns the topology of the backbone network 102 using multiple characteristics of each interconnection, commonly referred to as an “IGP metric.” These characteristics may be used by the SDN controller 120 to determine one or more optimal paths for packets through the network 102. - Once the network topology is understood, the
SDN controller 120 may apply any number of possible routing algorithms, as well as customized routing algorithms, to the network topology to define MPLS paths through the network in operation 210. For example, a least cost routing algorithm, a Dijkstra routing algorithm, a geographic routing algorithm, a hierarchical routing algorithm, and/or a multipath routing algorithm may be used. In another example, the SDN controller may include a customized route for specific routing information. In yet another example, multiple routing algorithms may be used in combination. - According to one aspect, the
SDN controller 120 implements a multiprotocol label switching (MPLS) mechanism for forwarding packets through the network 102 in which each route is referred to as a label switched path (LSP). To accomplish this, the SDN controller 120 executes a label distribution protocol (LDP) that generates label mapping information for the communication network and transmits the label mapping information to each forwarding switch in the backbone network 102. That is, the SDN controller 120 designates unique labels for each forwarding switch in the backbone network 102 that are used for routing packets through the backbone network 102. - The
SDN controller 120 determines the routes according to an MPLS protocol. The MPLS protocol is a mechanism used in data networks in which packets are routed through nodes (e.g., edge devices and forwarding switches) of the network using labels appended to each packet, rather than by inspection of each layer 2 or layer 3 address of each packet. So, for example, regardless of the least cost routing route, the SDN controller 120 may determine an LSP and identify that path with a label (e.g., XYZ) such that any packet with that routing label may be directed to traverse the network according to next hop routes determined by the SDN controller 120 and downloaded to each forwarding switch (P1-P5).
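The route determination just described can be sketched as a shortest-path computation over the discovered topology, with the resulting path broken into per-switch next hop routes under a single label. This sketch is illustrative only, not the patented implementation; the IGP link costs and the label value "XYZ" are invented:

```python
import heapq

def dijkstra_path(graph, src, dst):
    # graph: node -> {neighbor: cost}. Returns a least-cost path as a list.
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None

# Topology of FIG. 1 with invented IGP metrics on each interconnection.
topology = {
    "PE114": {"P1": 1, "P2": 1},
    "P1": {"P3": 1, "P4": 2},
    "P2": {"P3": 1, "P4": 1},
    "P3": {"P5": 1, "PE112": 1},
    "P4": {"P5": 1},
    "P5": {"PE112": 1},
}

lsp_path = dijkstra_path(topology, "PE114", "PE112")
# Each consecutive pair along the path becomes one next hop route, and the
# whole path is identified by a single label (e.g., XYZ).
next_hops = {"XYZ": {a: b for a, b in zip(lsp_path, lsp_path[1:])}}
```

Each entry in `next_hops["XYZ"]` corresponds to one per-switch instruction of the kind the SDN controller 120 downloads to the forwarding switches.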
- Each LSP extends from one edge device to another edge device (e.g.,
provider edge device 114 and provider edge router 112) and includes one or more next hop routes to be performed by any forwarding switch (P1-P5) along that route. For example, as shown in FIG. 1, one particular LSP 122 may extend through provider edge device 114, forwarding switch P1, forwarding switch P3, and end at provider edge router 112. In this case, the SDN controller 120 determines a next hop route 124a that instructs forwarding switch P1 to forward packets along that LSP 122 to forwarding switch P3, and another next hop route 124b that instructs forwarding switch P3 to forward packets along that path to provider edge router 112. Thus, when packets associated with that particular LSP 122 are subsequently received at the forwarding switch P1, it forwards the packets to forwarding switch P3 according to its received next hop route 124a, which is then forwarded to provider edge router 112 by forwarding switch P3 according to its received next hop route 124b. - To generate LSPs, the
SDN controller 120 includes a route reflector (RR) function that interfaces with a border gateway protocol (BGP) instance executed on each of the provider edge device 114 and provider edge router 112 to learn destinations of all packet traffic through the backbone network 102. The RR function uses the BGP instance to resolve next hop routes for each adjacent node (e.g., provider edge device 114, provider edge router 112, and forwarding switches (P1-P5)) in the backbone network 102. The SDN controller 120 also stores loopback interface information for each edge device (i.e., provider edge device 114 and provider edge router 112), since that is what is used by the BGP to resolve its next hop route to other nodes. Additionally, the SDN controller 120 uses the stored loopback interface information about each edge device to resolve the sources and destinations of the LSPs. - Now referring to an example packet from the first customer network destined for the second customer network, the conventional ingress provider edge router receiving the packet from the (CE) router will conduct a conventional border gateway protocol (BGP) routing look-up using an IP destination address of the packet, where the look-up occurs in a BGP routing table. The result of the lookup in the BGP routing table is the next-hop IP address of the loopback interface of an egress PE, and an MPLS tunnel label associated with that loopback interface of the egress PE, at the far end of the network where there is a customer (destination) network attached to that egress PE router. The ingress PE router then adds that MPLS label to the packet and forwards the MPLS encapsulated packet to the backbone label switch router (P1 or P2).
Thus, packets arriving at the backbone label switch router (forwarding switches P1 and P2) will cause the MPLS label switch router to perform a lookup based on the incoming MPLS label to determine the appropriate LSP that is used to forward the MPLS packet to the next forwarding switch and, ultimately, to the destination PE at the remote end of the network. The key point is that forwarding entries (LSP entries) in each label forwarding switch are provided solely by the SDN controller.
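The edge-to-edge walk above can be sketched as a toy model. This is illustrative rather than the disclosed implementation; the prefix, label numbers, and loopback address are hypothetical, and the per-switch entries stand for state installed solely by the SDN controller:

```python
# Ingress PE state: destination prefix -> (egress PE loopback, tunnel label).
bgp_table = {"198.51.100.0/24": ("192.0.2.2", 30)}

# Per-switch label tables, provided solely by the SDN controller.
lfib = {
    "P1": {30: ("swap", 31, "P3")},
    "P3": {31: ("pop", None, "PE112")},  # last backbone hop removes the label
}

def ingress_pe(prefix):
    # BGP lookup on the destination, then MPLS encapsulation toward P1.
    _egress_loopback, label = bgp_table[prefix]
    return {"label": label, "at": "P1"}

def forwarding_switch(packet):
    # One backbone hop: lookup keyed on the incoming MPLS label only.
    action, out_label, next_hop = lfib[packet["at"]][packet["label"]]
    return {"label": out_label, "at": next_hop}

packet = ingress_pe("198.51.100.0/24")  # label 30, handed to P1
packet = forwarding_switch(packet)      # P1 swaps 30 -> 31, sends to P3
packet = forwarding_switch(packet)      # P3 pops the label, sends to PE112
```

Note that neither `forwarding_switch` hop ever consults the IP destination; only the ingress PE does, which mirrors the division of labor described above.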
- In one embodiment, the
SDN controller 120 uses stored loopback interface information of all PE devices to generate label forwarding information base (LFIB) entries that are subsequently transmitted to each edge device (e.g., provider edge router 112 and provider edge device 114) in the backbone network 102. The LFIB is transmitted to each edge device using any suitable protocol, such as a NETCONF protocol, a command line interface (CLI) protocol, or an OpenFlow protocol. Once the LFIB is received by each edge device, the LFIB is processed by the edge device to identify next hop routes (i.e., routing actions) corresponding with each LSP across the Backbone to a remote PE. The forwarding switches (P1-P5) receive routing information for each next hop route (egress PE loopback interface) to LSP mapping from LFIB information generated by the SDN controller 120. Certain embodiments including such functionality may reduce the complexity of the forwarding switches by placing route resolution functionality in the SDN controller 120 and edge devices rather than in the forwarding switches. In some embodiments, the edge devices may also generate additional LFIB actions to enable label-swapping and/or label pushing by the forwarding switches when these devices interface with nodes of other networks, such as a broader inter-city backbone network.
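A minimal sketch of this LFIB-generation step, under invented data and names (not the patented implementation), might derive per-device entries from the stored loopback interfaces and the computed LSPs before pushing them out over a protocol such as NETCONF or OpenFlow:

```python
# Stored loopback interface addresses of the edge devices (hypothetical).
loopbacks = {"PE114": "192.0.2.1", "PE112": "192.0.2.2"}

# An LSP computed earlier: (ingress edge, egress edge) -> label and transit hops.
lsps = {("PE114", "PE112"): {"label": 30, "path": ["P1", "P3"]}}

def build_lfib_entries(lsps, loopbacks):
    # Returns device -> list of entries describing what to do with LSP traffic.
    entries = {}
    for (ingress, egress), lsp in lsps.items():
        # Edge entry: map the egress PE loopback (the BGP next hop) to the LSP.
        entries.setdefault(ingress, []).append(
            ("push", lsp["label"], loopbacks[egress]))
        # Transit entries: each forwarding switch forwards toward its next hop.
        downstream = lsp["path"][1:] + [egress]
        for switch, next_hop in zip(lsp["path"], downstream):
            entries.setdefault(switch, []).append(
                ("forward", lsp["label"], next_hop))
    return entries

lfib_entries = build_lfib_entries(lsps, loopbacks)
# lfib_entries would then be transmitted to each device, e.g. via NETCONF.
```

Keeping this derivation in the controller, as the passage notes, is what lets the forwarding switches avoid any route resolution of their own.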
- In
operation 220, the next hop routes (e.g., one hop of an LSP) are then transmitted to several individual forwarding switches to construct an edge-to-edge LSP (i.e., tunnel) across the backbone network 102. These MPLS LSPs may be loaded into line card memory of the forwarding switches. - Although
FIG. 2A describes one example of a process that may be performed by the SDN controller 120 to manage and control routes through the communication network 102, the features of the disclosed process may be embodied in other specific forms without deviating from the spirit and scope of the present disclosure. For example, the disclosed operations may be performed sequentially or simultaneously with one another. As another example, the disclosed operations may be performed in any suitable sequence and not just in the sequence described herein. -
FIG. 2B illustrates an example process for routing packets through thecommunication network 102 according to the teachings of the present disclosure. More specifically,FIG. 2B describes various actions that may be taken after the next hop routes are generated and stored in the forwarding switches as described above with reference toFIG. 2A . Inoperation 250, the forwarding switches (P1-P5) route packets through thecommunication network 102 according to their programmed next hop routes. - In
operation 260, each forwarding switch (P1-P5) reacts autonomously to a local link failure and immediately switches traffic onto a bypass LSP. A bypass LSP generally refers to another LSP that is redundant to the main path, but routed through differing forwarding switches such that, in the event that a forwarding switch through which the main LSP travels should fail, packets may be transferred over to the bypass LSP. The forwarding switch may react to any failure indication, such as a Loss-of-Signal (LOS) or Loss-of-Light (LOL), to initiate switchover. Additionally, each forwarding switch may also use a local, onboard implementation of Link Aggregation Control Protocol (LACP) and/or Bidirectional Forwarding Detection (BFD) to detect failures that may not be adequately noticed by the LOS or LOL indications. In effect, the forwarding switch would autonomously perform an action similar to that of the Fast Re-Route Point-of-Local-Repair (PLR), where it automatically appends a new MPLS label to packets to temporarily detour traffic around the failure. In addition, the forwarding switch would notify the SDN controller 120 of the failure such that the SDN controller 120 could re-calculate and re-optimize primary LSPs through the network, as appropriate. This may include programming new primary LSPs through the network and performing Make-Before-Break (MBB) actions, as required. - In
operation 270, each forwarding switch (P1-P5) manages time-to-live (TTL) exceeded packets. In one embodiment, each forwarding switch determines any TTL exceeded packets, generates an Internet control message protocol (ICMP) Destination Unreachable response, encapsulates that response with the original MPLS (outermost) label set, and forwards the encapsulated packet to the egress edge device associated with the LSP. In another embodiment, the forwarding switch transmits information associated with the TTL exceeded packet to the SDN controller 120 such that the SDN controller 120 generates an MPLS label stack that may be used for transmitting the TTL exceeded packet back to the ingress edge device. In yet another embodiment, the SDN controller 120 may wrap the ICMP TTL exceeded message in a user datagram protocol (UDP) (e.g., GRE or IP) tunnel that directs the TTL exceeded packet back to the ingress edge device associated with the LSP. - Although
FIG. 2B describes one example of a process that may be performed by the forwarding switches (P1-P5) for routing packets through the communication network 102, the features of the disclosed process may be embodied in other specific forms without deviating from the spirit and scope of the present disclosure. For example, the disclosed operations may be performed after or simultaneously with the operations described above with respect to FIG. 2A. As another example, the forwarding switches (P1-P5) may perform additional, fewer, or different operations than those described in the present example. - The described systems, methods, and apparatus provide several advantages over conventional systems. For example, the system may provide customizable network services and allow for much more rapid introduction of new services. The system may be more robust as compared to vertically integrated systems (particularly at software), which have tended to have more bugs simply resulting from the sheer complexity of conventional vertically integrated systems that are required to include many functions for conforming to standards for interoperating autonomously with other devices. A substantial portion of the software complexity of the forwarding switches is provided in the
SDN controller 120, allowing for far less expensive and complicated hardware switches relative to conventional routers and switches. Finally, the overall system (the combination of the SDN controller and hardware switches) can also be customized to provide unique or customized routes not otherwise decided by conventional routing protocols. -
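The centralized route computation described above (the SDN controller determining an LSP over the discovered topology and pushing per-switch next hop routes) can be sketched as follows. This is a minimal illustration of a least-cost (Dijkstra) computation, not the patented method; the node names (PE1, P1-P4, PE2), link costs, and function names are hypothetical.

```python
import heapq

def shortest_path(topology, src, dst):
    """Least-cost path over the discovered topology (Dijkstra).
    topology: {node: {neighbor: link_cost}}. Returns the node list or None."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    visited = set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            break
        for nbr, w in topology.get(node, {}).items():
            alt = cost + w
            if alt < dist.get(nbr, float("inf")):
                dist[nbr] = alt
                prev[nbr] = node
                heapq.heappush(heap, (alt, nbr))
    if dst not in dist:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

def next_hop_routes(path):
    """Per-switch next hop entries the controller would push along the LSP."""
    return {path[i]: path[i + 1] for i in range(len(path) - 1)}
```

Each forwarding switch receives only its own entry from the returned mapping; it never computes routes autonomously, which is the division of labor the description emphasizes.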
FIG. 3 is an example computing system 300 that may implement various systems and methods discussed herein. The computing system may embody the SDN controller 120 discussed herein. The computing system may also provide the functionality of the forwarding switches (P1-P5) as discussed herein. - The
computing system 300 includes at least one processor 310, at least one communication port 315, a main memory 320, a removable storage media 325, a read-only memory 330, a mass storage device 335, and an I/O port 340. Processor(s) 310 can be any known processor, such as, but not limited to, an Intel® Itanium® or Itanium 2® processor(s), AMD® Opteron® or Athlon MP® processor(s), or Motorola® lines of processors. The communication port 315 can be of any type, such as an RS-232 port for use with a modem-based dial-up connection, a 10/100 Ethernet port, a Gigabit port using copper or fiber, or a USB port. Communication port(s) 315 may be chosen depending on the network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system 300 connects. The computing system 300 may be in communication with peripheral devices (e.g., display screen 350 and user input device 345) via Input/Output (I/O) port 340. -
Main memory 320 can be Random Access Memory (RAM) or any other dynamic storage device(s) commonly known in the art. Read-only memory 330 can be any static storage device(s), such as Programmable Read-Only Memory (PROM) chips, for storing static information such as instructions for processor 310. Mass storage device 335 can be used to store information and instructions. For example, hard disks such as the Adaptec® family of Small Computer System Interface (SCSI) drives, an optical disc, an array of disks such as a Redundant Array of Independent Disks (RAID), such as the Adaptec® family of RAID drives, or any other mass storage device may be used. - The bus 305 communicatively couples processor(s) 310 with the other memory, storage, and communications blocks. The bus 305 can be a PCI/PCI-X, SCSI, or Universal Serial Bus (USB) based system bus (or other), depending on the storage devices used.
Removable storage media 325 can be any kind of external hard drive, floppy drive, Iomega® Zip drive, Compact Disc—Read Only Memory (CD-ROM), Compact Disc—Re-Writable (CD-RW), Digital Video Disk—Read Only Memory (DVD-ROM), etc. - The
computer system 300 includes one or more processors 310. The processor 310 may include one or more internal levels of cache (not shown) and a bus controller or bus interface unit to direct interaction with the processor bus 305. The main memory 320 may include one or more memory cards and a control circuit (not shown), or other forms of removable memory, and may store a routing configuration application 365 including computer executable instructions that, when run on the processor, implement the methods and systems set out herein, such as the method discussed with reference to FIGS. 2A and 2B. Other forms of memory, such as a mass storage device 335, a read only memory 330, and a removable storage memory 325, may also be included and accessible by the processor (or processors) 310 via the bus 305. - The
computer system 300 may further include a communication port 315 connected to a transport and/or transit network 355, by way of which the computer system 300 may receive network data useful in executing the methods and systems set out herein, as well as transmit information, network configuration changes, and MPLS routes or other routes determined thereby. The computer system 300 may include an I/O device 340, or other device, by which information is displayed, such as at display screen 350, or information is input, such as via input device 345. The input device 345 may be an alphanumeric input device, including alphanumeric and other keys for communicating information and/or command selections to the processor. The input device 345 may be another type of user input device including cursor control, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to the processors 310 and for controlling cursor movement on the display device 350. In the case of a tablet device, the input may be through a touch screen, voice commands, and/or a Bluetooth-connected keyboard, among other input mechanisms. The system set forth in FIG. 3 is but one possible example of a computer system that may employ or be configured in accordance with aspects of the present disclosure. - In the present disclosure, the methods disclosed may be implemented as sets of instructions or software readable by a device. Further, it is understood that the specific order or hierarchy of steps in the methods disclosed are instances of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the method can be rearranged while remaining within the disclosed subject matter. The accompanying method claims present elements of the various steps in a sample order, and are not necessarily meant to be limited to the specific order or hierarchy presented.
- The described disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The machine-readable medium may include, but is not limited to, magnetic storage media (e.g., floppy diskette); optical storage media (e.g., CD-ROM); magneto-optical storage media; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or other types of media suitable for storing electronic instructions.
- The description above includes example systems, methods, techniques, instruction sequences, and/or computer program products that embody techniques of the present disclosure. However, it is understood that the described disclosure may be practiced without these specific details.
- It is believed that the present disclosure and many of its attendant advantages will be understood by the foregoing description, and it will be apparent that various changes may be made in the form, construction and arrangement of the components without departing from the disclosed subject matter or without sacrificing all of its material advantages. The form described is merely explanatory, and it is the intention of the following claims to encompass and include such changes.
- While the present disclosure has been described with reference to various embodiments, it will be understood that these embodiments are illustrative and that the scope of the disclosure is not limited to them. Many variations, modifications, additions, and improvements are possible. More generally, embodiments in accordance with the present disclosure have been described in the context of particular implementations. Functionality may be separated or combined in blocks differently in various embodiments of the disclosure or described with different terminology. These and other variations, modifications, additions, and improvements may fall within the scope of the disclosure as defined in the claims that follow.
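As a rough illustration of the controller-to-switch programming model recited in the claims that follow, a label forwarding information base (LFIB) can be sketched as a simple per-switch mapping from incoming label to forwarding action. The entry shape, label allocations, and helper name here are hypothetical, not the claimed structure.

```python
def build_lfib(next_hops, label_alloc):
    """Derive a per-switch LFIB from controller-computed next hop routes.
    next_hops: {switch: next_hop}; label_alloc: {switch: incoming label}.
    Each entry maps incoming label -> (action, outgoing label, next hop)."""
    lfib = {}
    for switch, nh in next_hops.items():
        in_label = label_alloc[switch]
        if nh in label_alloc:
            # Transit hop: swap to the label the next switch expects.
            lfib[switch] = {in_label: ("swap", label_alloc[nh], nh)}
        else:
            # Next hop is the egress edge device: pop the label.
            lfib[switch] = {in_label: ("pop", None, nh)}
    return lfib
```

The controller would then transmit each switch's slice of this table (e.g., over NETCONF, a CLI, or OpenFlow, as the claims enumerate); the switches only execute the lookups.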
Claims (27)
1. An apparatus comprising:
a network controller comprising at least one processor and at least one memory to store a routing configuration application that is executed by the at least one processor to:
analyze a plurality of interconnections between a plurality of forwarding switches of a communication network to identify a network topology of the communication network;
determine at least one label switched path (LSP) between the forwarding switches, the LSP comprising one or more next hop routes each defining a forwarding address associated with one forwarding switch to an adjacent forwarding switch; and
transmit the next hop routes to the forwarding switches, the forwarding switches using the next hop routes to route packets through the network according to a multiprotocol label switching (MPLS) protocol.
2. The apparatus of claim 1 , wherein the forwarding switches do not autonomously determine routes through the communication network.
3. The apparatus of claim 1 , wherein the routing configuration application is executed by the processor to determine at least one bypass LSP that is redundant to one or more LSPs determined by the routing configuration application.
4. The apparatus of claim 3 , wherein each forwarding switch executes at least one of a Link Aggregation Control Protocol (LACP) and a Bidirectional Forwarding Detection (BFD) to detect a failure in the one LSP and perform switchover to the bypass LSP.
5. The apparatus of claim 1 , wherein the network controller comprises a route reflector (RR) function that interfaces with a border gateway protocol (BGP) instance executed on an edge device to determine the LSP by resolving next hop routes for each adjacent forwarding switch.
6. The apparatus of claim 5 , wherein the network controller transmits the next hop routes to the forwarding switches by generating a label forwarding information base (LFIB) including mapping information associated with the next hop routes and transmitting the LFIB to one or more edge devices configured in the communication network.
7. The apparatus of claim 6 , wherein the LFIB is transmitted to the edge devices using at least one of a NETCONF protocol, a CLI protocol, or an OpenFlow protocol.
8. The apparatus of claim 1 , wherein the network controller configures one or more of the forwarding switches in a multi-chassis link aggregation group (MC-LAG).
9. The apparatus of claim 1 , wherein the network controller identifies the network topology of the communication network using a link layer discovery protocol (LLDP).
10. The apparatus of claim 1 , wherein the network controller determines the LSP using at least one of a least cost routing algorithm, a Dijkstra routing algorithm, a geographic routing algorithm, a hierarchical routing algorithm, or a multipath routing algorithm.
11. The apparatus of claim 1 , wherein the network controller comprises a software defined network (SDN) controller.
12. A network controlling method comprising:
analyzing, by a network controller, a plurality of interconnections between a plurality of forwarding switches of a communication network to identify a network topology of the communication network;
determining, by the network controller, at least one label switched path (LSP) between the forwarding switches, the LSP comprising one or more next hop routes each defining a forwarding address associated with one forwarding switch to an adjacent forwarding switch; and
transmitting, by the network controller, the next hop routes to the forwarding switches, the forwarding switches using the next hop routes to route packets through the network according to a multiprotocol label switching (MPLS) protocol.
13. The network control method of claim 12 , wherein the forwarding switches do not autonomously determine routes through the communication network.
14. The network control method of claim 12 , further comprising determining at least one bypass LSP that is redundant to the one LSP.
15. The network control method of claim 14 , further comprising executing, by each forwarding switch, at least one of a Link Aggregation Control Protocol (LACP) and a Bidirectional Forwarding Detection (BFD) to detect a failure in the one LSP and perform switchover to the bypass LSP.
16. The network control method of claim 12 , further comprising interfacing, by a route reflector (RR) configured in the network controller, with a border gateway protocol (BGP) instance executed on an edge device to determine the LSP by resolving next hop routes for each adjacent forwarding switch.
17. The network control method of claim 16 , further comprising transmitting the next hop routes to the forwarding switches by generating a label forwarding information base (LFIB) including mapping information associated with the next hop routes and transmitting the LFIB to one or more edge devices configured in the network.
18. The network control method of claim 17 , further comprising transmitting the LFIB to the edge devices using at least one of a NETCONF protocol, a CLI protocol, or an OpenFlow protocol.
19. The network control method of claim 12 , further comprising configuring one or more of the forwarding switches in a multi-chassis link aggregation group (MC-LAG).
20. The network control method of claim 12 , further comprising identifying the network topology of the communication network using a link layer discovery protocol (LLDP).
21. The network control method of claim 12 , further comprising determining the LSP using at least one of a least cost routing algorithm, a Dijkstra routing algorithm, a geographic routing algorithm, a hierarchical routing algorithm, or a multipath routing algorithm.
22. A communication network system comprising:
a plurality of forwarding switches interconnected with one another; and
a network controller comprising at least one processor and at least one memory to store a routing configuration application that is executed by the at least one processor to:
analyze a plurality of interconnections between the plurality of forwarding switches of a communication network to identify a network topology of the communication network;
determine at least one label switched path (LSP) between the forwarding switches, the LSP comprising one or more next hop routes each defining a forwarding address associated with one forwarding switch to an adjacent forwarding switch; and
transmit the next hop routes to the forwarding switches, the forwarding switches using the next hop routes to route packets through the network according to a multiprotocol label switching (MPLS) protocol.
23. The system of claim 22 , wherein the routing configuration application is executed by the processor to determine at least one bypass LSP that is redundant to the one LSP determined by the routing configuration application.
24. The system of claim 23 , wherein each forwarding switch executes at least one of a Link Aggregation Control Protocol (LACP) and a Bidirectional Forwarding Detection (BFD) to detect a failure in the one LSP and perform switchover to the bypass LSP.
25. The system of claim 22 , wherein the network controller comprises a route reflector (RR) function that interfaces with a border gateway protocol (BGP) instance executed on an edge device to determine the LSP by resolving next hop routes for each adjacent forwarding switch.
26. The system of claim 25 , wherein the network controller transmits the next hop routes to the forwarding switches by generating a label forwarding information base (LFIB) including mapping information associated with the next hop routes and transmitting the LFIB to one or more edge devices configured in the communication network.
27. The system of claim 26 , wherein the LFIB is transmitted to the edge devices using at least one of a NETCONF protocol, a CLI protocol, or an OpenFlow protocol.
Priority Applications (8)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/089,547 US20140146664A1 (en) | 2012-11-26 | 2013-11-25 | Apparatus, system and method for packet switching |
| PCT/US2013/071848 WO2014082056A1 (en) | 2012-11-26 | 2013-11-26 | Apparatus, system, and method for packet switching |
| EP13856185.7A EP2923462B1 (en) | 2012-11-26 | 2013-11-26 | Apparatus, system, and method for packet switching |
| CA2892362A CA2892362C (en) | 2012-11-26 | 2013-11-26 | Apparatus, system, and method for packet switching |
| HK16102351.1A HK1215825B (en) | 2012-11-26 | 2013-11-26 | Apparatus, system, and method for packet switching |
| US15/786,818 US10142225B2 (en) | 2012-11-26 | 2017-10-18 | Apparatus, system, and method for packet switching |
| US16/193,697 US10715429B2 (en) | 2012-11-26 | 2018-11-16 | Apparatus, system, and method for packet switching |
| US16/923,737 US11140076B2 (en) | 2012-11-26 | 2020-07-08 | Apparatus, system, and method for packet switching |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201261729862P | 2012-11-26 | 2012-11-26 | |
| US14/089,547 US20140146664A1 (en) | 2012-11-26 | 2013-11-25 | Apparatus, system and method for packet switching |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/786,818 Continuation US10142225B2 (en) | 2012-11-26 | 2017-10-18 | Apparatus, system, and method for packet switching |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20140146664A1 true US20140146664A1 (en) | 2014-05-29 |
Family
ID=50773200
Family Applications (4)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/089,547 Abandoned US20140146664A1 (en) | 2012-11-26 | 2013-11-25 | Apparatus, system and method for packet switching |
| US15/786,818 Active US10142225B2 (en) | 2012-11-26 | 2017-10-18 | Apparatus, system, and method for packet switching |
| US16/193,697 Active 2033-12-06 US10715429B2 (en) | 2012-11-26 | 2018-11-16 | Apparatus, system, and method for packet switching |
| US16/923,737 Active US11140076B2 (en) | 2012-11-26 | 2020-07-08 | Apparatus, system, and method for packet switching |
Family Applications After (3)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/786,818 Active US10142225B2 (en) | 2012-11-26 | 2017-10-18 | Apparatus, system, and method for packet switching |
| US16/193,697 Active 2033-12-06 US10715429B2 (en) | 2012-11-26 | 2018-11-16 | Apparatus, system, and method for packet switching |
| US16/923,737 Active US11140076B2 (en) | 2012-11-26 | 2020-07-08 | Apparatus, system, and method for packet switching |
Country Status (4)
| Country | Link |
|---|---|
| US (4) | US20140146664A1 (en) |
| EP (1) | EP2923462B1 (en) |
| CA (1) | CA2892362C (en) |
| WO (1) | WO2014082056A1 (en) |
Cited By (39)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140160925A1 (en) * | 2012-12-10 | 2014-06-12 | Verizon Patent And Licensing Inc. | Virtual private network to label switched path mapping |
| US20140233569A1 (en) * | 2013-02-15 | 2014-08-21 | Futurewei Technologies, Inc. | Distributed Gateway in Virtual Overlay Networks |
| US20140280898A1 (en) * | 2013-03-15 | 2014-09-18 | Cisco Technology, Inc. | Allocating computing resources based upon geographic movement |
| US20150215202A1 (en) * | 2014-01-30 | 2015-07-30 | Coriant Operations, Inc. | Method and apparatus for facilitating compatibility between communication networks |
| WO2015187256A1 (en) * | 2014-06-04 | 2015-12-10 | Burgio Al | Method and apparatus for identifying different routing paths between networks |
| US20150381428A1 (en) * | 2014-06-25 | 2015-12-31 | Ciena Corporation | Systems and methods for combined software defined networking and distributed network control |
| CN105227458A (en) * | 2014-07-01 | 2016-01-06 | 中兴通讯股份有限公司 | The route computing method of TRILL ISIS and device |
| US20160020941A1 (en) * | 2014-07-21 | 2016-01-21 | Cisco Technology, Inc. | Reliable multipath forwarding for encapsulation protocols |
| EP2983333A1 (en) * | 2014-08-06 | 2016-02-10 | Alcatel Lucent | A system and method for providing routes to physical residential gateways |
| CN105376275A (en) * | 2014-08-25 | 2016-03-02 | 中兴通讯股份有限公司 | Software-defined network (SDN)-based data management method and system |
| WO2016048390A1 (en) * | 2014-09-26 | 2016-03-31 | Hewlett Packard Enterprise Development Lp | Link aggregation configuration for a node in a software-defined network |
| US20160105306A1 (en) * | 2013-06-28 | 2016-04-14 | Hangzhou H3C Technologies Co., Ltd. | Link aggregation |
| US20160105357A1 (en) * | 2013-06-20 | 2016-04-14 | Huawei Technologies Co., Ltd. | Method and network apparatus of establishing path |
| WO2016083889A1 (en) * | 2014-11-28 | 2016-06-02 | Alcatel Lucent | Method of providing nomadic service through virtual residential gateway |
| GB2533988A (en) * | 2014-08-29 | 2016-07-13 | Metaswitch Networks Ltd | Network routing |
| US20160241459A1 (en) * | 2013-10-26 | 2016-08-18 | Huawei Technologies Co.,Ltd. | Method for acquiring, by sdn switch, exact flow entry, and sdn switch, controller, and system |
| US20160248703A1 (en) * | 2015-02-25 | 2016-08-25 | At&T Intellectual Property I, L.P. | Provider Edge Router System and Provider Edge Router System Controller for Hybrid Virtualization of Provider Edge Router Functions |
| CN105991315A (en) * | 2015-02-03 | 2016-10-05 | 华为技术有限公司 | Link protection method applied to SDN (software defined network), switching device and network controller |
| US20160315845A1 (en) * | 2013-12-31 | 2016-10-27 | Huawei Technologies Co., Ltd. | SDN Controller, Data Center System, and Routing Connection Method |
| US20160330106A1 (en) * | 2015-05-04 | 2016-11-10 | Microsoft Technology Licensing, Llc | Routing Communication Sessions |
| US20160380823A1 (en) * | 2015-06-23 | 2016-12-29 | Cisco Technology, Inc. | Virtual private network forwarding and nexthop to transport mapping scheme |
| US9660905B2 (en) | 2013-04-12 | 2017-05-23 | Futurewei Technologies, Inc. | Service chain policy for distributed gateways in virtual overlay networks |
| US20170163524A1 (en) * | 2015-12-03 | 2017-06-08 | Dell Products L.P. | Multi-chassis lag access node determination system |
| WO2017142516A1 (en) * | 2016-02-16 | 2017-08-24 | Hewlett Packard Enterprise Development Lp | Software defined networking for hybrid networks |
| WO2017148425A1 (en) * | 2016-03-03 | 2017-09-08 | Huawei Technologies Co., Ltd. | Border gateway protocol for communication among software defined network controllers |
| US10142225B2 (en) | 2012-11-26 | 2018-11-27 | Level 3 Communications, Llc | Apparatus, system, and method for packet switching |
| EP3435602A1 (en) * | 2017-07-28 | 2019-01-30 | Juniper Networks, Inc. | Service level agreement based next-hop selection |
| CN109474523A (en) * | 2017-09-07 | 2019-03-15 | 中国电信股份有限公司 | Network-building method and system based on SDN |
| US10243781B1 (en) | 2017-07-05 | 2019-03-26 | Juniper Networks, Inc. | Detecting link faults in network paths that include link aggregation groups (LAGs) |
| US10355980B2 (en) * | 2016-09-30 | 2019-07-16 | Juniper Networks, Inc. | Deterministically selecting a bypass LSP for a defined group of protected LSPS |
| US10506037B2 (en) * | 2016-12-13 | 2019-12-10 | Alcatel Lucent | Discovery of ingress provider edge devices in egress peering networks |
| US10523560B2 (en) | 2017-07-28 | 2019-12-31 | Juniper Networks, Inc. | Service level agreement based next-hop selection |
| US10594514B2 (en) | 2017-03-29 | 2020-03-17 | At&T Intellectual Property I, L.P. | Method and apparatus for creating border gateway protocol reachability on demand in a multi-protocol label switching network |
| US10644950B2 (en) | 2014-09-25 | 2020-05-05 | At&T Intellectual Property I, L.P. | Dynamic policy based software defined network mechanism |
| US10833973B1 (en) * | 2019-02-15 | 2020-11-10 | Juniper Networks, Inc. | Enabling selection of a bypass path from available paths in an open shortest path first (OSPF) domain and an intermediate system to intermediate system (ISIS) domain |
| US20210152462A1 (en) * | 2017-09-21 | 2021-05-20 | Silver Peak Systems, Inc. | Selective routing |
| JP2022507436A (en) * | 2018-12-04 | 2022-01-18 | 中興通訊股▲ふん▼有限公司 | Data center traffic sharing methods, equipment, devices and storage media |
| US20220217084A1 (en) * | 2021-01-06 | 2022-07-07 | Arista Networks, Inc. | Systems and method for propagating route information |
| CN114915591A (en) * | 2021-01-28 | 2022-08-16 | 中国电信股份有限公司 | End-to-end service guarantee method and system |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106664261B (en) | 2014-06-30 | 2019-10-25 | 华为技术有限公司 | A method, device and system for configuring flow entry |
| CN105471726B (en) * | 2014-09-05 | 2019-08-27 | 华为技术有限公司 | Method and device for forwarding parameter transfer |
| US10659352B2 (en) * | 2017-05-31 | 2020-05-19 | Juniper Networks, Inc. | Signaling private context forwarding tables for a private forwarding layer |
| US10986017B2 (en) * | 2018-08-23 | 2021-04-20 | Agora Lab, Inc. | Large-scale real-time multimedia communications |
| CN111025974A (en) * | 2019-12-13 | 2020-04-17 | 厦门宏发汽车电子有限公司 | A vehicle-mounted gateway controller, its configuration method and vehicle system |
| CN116389345B (en) | 2020-03-23 | 2025-08-15 | 华为技术有限公司 | Method and device for transmitting segmented routing strategy and network transmission system |
| CN113194036B (en) * | 2021-03-31 | 2022-12-09 | 西安交通大学 | Routing method, system, device and readable storage medium for multi-label network |
Citations (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20060171322A1 (en) * | 2002-07-31 | 2006-08-03 | Ji-Woong Lee | Method and apparatus for receivability and reachability test of explicit multicast |
| US20070091796A1 (en) * | 2005-10-20 | 2007-04-26 | Clarence Filsfils | Method of implementing a backup path in an autonomous system |
| US20070165515A1 (en) * | 2006-01-18 | 2007-07-19 | Jean-Philippe Vasseur | Dynamic protection against failure of a head-end node of one or more TE-LSPs |
| US20080198859A1 (en) * | 2007-02-21 | 2008-08-21 | At&T Knowledge Ventures, Lp | System for advertising routing updates |
| US20080310430A1 (en) * | 2006-02-10 | 2008-12-18 | Huawei Technologies Co., Ltd. | Control System, Data Message Transmission Method And Network Device In The Ethernet |
| US20090232029A1 (en) * | 2008-03-17 | 2009-09-17 | Rateb Abu-Hamdeh | Method and apparatus for providing full logical connectivity in mpls networks |
| US20100043068A1 (en) * | 2008-08-14 | 2010-02-18 | Juniper Networks, Inc. | Routing device having integrated mpls-aware firewall |
| US20120236730A1 (en) * | 2009-12-04 | 2012-09-20 | Huawei Technologies Co., Ltd. | Method, device and system for processing service traffic based on pseudo wires |
| US20130188493A1 (en) * | 2010-11-01 | 2013-07-25 | Nec Corporation | Communication system, control apparatus, packet forwarding path control method, and program |
| US20140098673A1 (en) * | 2012-10-05 | 2014-04-10 | Futurewei Technologies, Inc. | Software Defined Network Virtualization Utilizing Service Specific Topology Abstraction and Interface |
| US8767735B2 (en) * | 2010-08-04 | 2014-07-01 | Alcatel Lucent | System and method for multi-chassis link aggregation |
| US20140369186A1 (en) * | 2013-06-17 | 2014-12-18 | Telefonaktiebolaget L M Ericsspm (publ) | Methods and systems with enhanced robustness for multi-chassis link aggregation group |
| US20150110107A1 (en) * | 2012-05-25 | 2015-04-23 | Nec Corporation | Packet forwarding system, control apparatus, packet forwarding method, and program |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2572473B1 (en) * | 2010-05-19 | 2014-02-26 | Telefonaktiebolaget L M Ericsson (PUBL) | Methods and apparatus for use in an openflow network |
| US8867411B2 (en) | 2011-02-03 | 2014-10-21 | T-Mobile Usa, Inc. | Emergency call mode preference in wireless communication networks |
| US20140146664A1 (en) | 2012-11-26 | 2014-05-29 | Level 3 Communications, Llc | Apparatus, system and method for packet switching |
| US9306800B2 (en) * | 2013-05-10 | 2016-04-05 | Telefonaktiebolaget L M Ericsson (Publ) | Inter-domain fast reroute methods and network devices |
2013
- 2013-11-25 US US14/089,547 patent/US20140146664A1/en not_active Abandoned
- 2013-11-26 WO PCT/US2013/071848 patent/WO2014082056A1/en not_active Ceased
- 2013-11-26 EP EP13856185.7A patent/EP2923462B1/en not_active Not-in-force
- 2013-11-26 CA CA2892362A patent/CA2892362C/en not_active Expired - Fee Related
2017
- 2017-10-18 US US15/786,818 patent/US10142225B2/en active Active
2018
- 2018-11-16 US US16/193,697 patent/US10715429B2/en active Active
2020
- 2020-07-08 US US16/923,737 patent/US11140076B2/en active Active
Cited By (73)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10715429B2 (en) | 2012-11-26 | 2020-07-14 | Level 3 Communications, Llc | Apparatus, system, and method for packet switching |
| US10142225B2 (en) | 2012-11-26 | 2018-11-27 | Level 3 Communications, Llc | Apparatus, system, and method for packet switching |
| US11140076B2 (en) | 2012-11-26 | 2021-10-05 | Level 3 Communications, Llc | Apparatus, system, and method for packet switching |
| US20140160925A1 (en) * | 2012-12-10 | 2014-06-12 | Verizon Patent And Licensing Inc. | Virtual private network to label switched path mapping |
| US9036477B2 (en) * | 2012-12-10 | 2015-05-19 | Verizon Patent And Licensing Inc. | Virtual private network to label switched path mapping |
| US20140233569A1 (en) * | 2013-02-15 | 2014-08-21 | Futurewei Technologies, Inc. | Distributed Gateway in Virtual Overlay Networks |
| US20140280898A1 (en) * | 2013-03-15 | 2014-09-18 | Cisco Technology, Inc. | Allocating computing resources based upon geographic movement |
| US9276827B2 (en) * | 2013-03-15 | 2016-03-01 | Cisco Technology, Inc. | Allocating computing resources based upon geographic movement |
| US9660905B2 (en) | 2013-04-12 | 2017-05-23 | Futurewei Technologies, Inc. | Service chain policy for distributed gateways in virtual overlay networks |
| US20160105357A1 (en) * | 2013-06-20 | 2016-04-14 | Huawei Technologies Co., Ltd. | Method and network apparatus of establishing path |
| US20160105306A1 (en) * | 2013-06-28 | 2016-04-14 | Hangzhou H3C Technologies Co., Ltd. | Link aggregation |
| US20160241459A1 (en) * | 2013-10-26 | 2016-08-18 | Huawei Technologies Co.,Ltd. | Method for acquiring, by sdn switch, exact flow entry, and sdn switch, controller, and system |
| US9742656B2 (en) * | 2013-10-26 | 2017-08-22 | Huawei Technologies Co., Ltd. | Method for acquiring, by SDN switch, exact flow entry, and SDN switch, controller, and system |
| US10367718B2 (en) * | 2013-10-26 | 2019-07-30 | Huawei Technologies Co., Ltd. | Method for acquiring, by SDN switch, exact flow entry, and SDN switch, controller, and system |
| US20160315845A1 (en) * | 2013-12-31 | 2016-10-27 | Huawei Technologies Co., Ltd. | SDN Controller, Data Center System, and Routing Connection Method |
| US10454806B2 (en) * | 2013-12-31 | 2019-10-22 | Huawei Technologies Co., Ltd. | SDN controller, data center system, and routing connection method |
| US10063466B2 (en) * | 2014-01-30 | 2018-08-28 | Coriant Operations, Inc. | Method and apparatus for facilitating compatibility between communication networks |
| US20150215202A1 (en) * | 2014-01-30 | 2015-07-30 | Coriant Operations, Inc. | Method and apparatus for facilitating compatibility between communication networks |
| US9832105B2 (en) | 2014-06-04 | 2017-11-28 | Console Connect Inc. | Method and apparatus for identifying different routing paths between networks |
| WO2015187256A1 (en) * | 2014-06-04 | 2015-12-10 | Burgio Al | Method and apparatus for identifying different routing paths between networks |
| US10153948B2 (en) * | 2014-06-25 | 2018-12-11 | Ciena Corporation | Systems and methods for combined software defined networking and distributed network control |
| US9774502B2 (en) * | 2014-06-25 | 2017-09-26 | Ciena Corporation | Systems and methods for combined software defined networking and distributed network control |
| US20150381428A1 (en) * | 2014-06-25 | 2015-12-31 | Ciena Corporation | Systems and methods for combined software defined networking and distributed network control |
| CN105227458A (en) * | 2014-07-01 | 2016-01-06 | ZTE Corporation | TRILL IS-IS route calculation method and device |
| US20170111260A1 (en) * | 2014-07-01 | 2017-04-20 | ZTE Corporation | TRILL IS-IS-based route calculation method and device |
| US20160020941A1 (en) * | 2014-07-21 | 2016-01-21 | Cisco Technology, Inc. | Reliable multipath forwarding for encapsulation protocols |
| US9608858B2 (en) * | 2014-07-21 | 2017-03-28 | Cisco Technology, Inc. | Reliable multipath forwarding for encapsulation protocols |
| EP2983333A1 (en) * | 2014-08-06 | 2016-02-10 | Alcatel Lucent | A system and method for providing routes to physical residential gateways |
| CN105376275A (en) * | 2014-08-25 | 2016-03-02 | ZTE Corporation | Software-defined network (SDN)-based data management method and system |
| GB2533988B (en) * | 2014-08-29 | 2021-11-24 | Metaswitch Networks Ltd | Network routing |
| GB2533988A (en) * | 2014-08-29 | 2016-07-13 | Metaswitch Networks Ltd | Network routing |
| US10165090B2 (en) | 2014-08-29 | 2018-12-25 | Metaswitch Networks Ltd. | Transferring routing protocol information between a software defined network and one or more external networks |
| US10644950B2 (en) | 2014-09-25 | 2020-05-05 | At&T Intellectual Property I, L.P. | Dynamic policy based software defined network mechanism |
| US11533232B2 (en) | 2014-09-25 | 2022-12-20 | At&T Intellectual Property I, L.P. | Dynamic policy based software defined network mechanism |
| US10411742B2 (en) | 2014-09-26 | 2019-09-10 | Hewlett Packard Enterprise Development Lp | Link aggregation configuration for a node in a software-defined network |
| WO2016048390A1 (en) * | 2014-09-26 | 2016-03-31 | Hewlett Packard Enterprise Development Lp | Link aggregation configuration for a node in a software-defined network |
| US10958651B2 (en) | 2014-11-28 | 2021-03-23 | Alcatel Lucent | Method of providing nomadic service through virtual residential gateway |
| WO2016083889A1 (en) * | 2014-11-28 | 2016-06-02 | Alcatel Lucent | Method of providing nomadic service through virtual residential gateway |
| CN105991315A (en) * | 2015-02-03 | 2016-10-05 | Huawei Technologies Co., Ltd. | Link protection method applied to SDN (software defined network), switching device and network controller |
| EP3255838A4 (en) * | 2015-02-03 | 2018-02-14 | Huawei Technologies Co., Ltd. | Method, switching device and network controller for protecting links in software-defined network (sdn) |
| US10873527B2 (en) * | 2015-02-03 | 2020-12-22 | Huawei Technologies Co., Ltd. | Link protection method in SDN, switching device, and network controller |
| US20170359253A1 (en) * | 2015-02-03 | 2017-12-14 | Huawei Technologies Co., Ltd. | Link Protection Method In SDN, Switching Device, and Network Controller |
| US20160248703A1 (en) * | 2015-02-25 | 2016-08-25 | At&T Intellectual Property I, L.P. | Provider Edge Router System and Provider Edge Router System Controller for Hybrid Virtualization of Provider Edge Router Functions |
| US10491546B2 (en) * | 2015-02-25 | 2019-11-26 | At&T Intellectual Property I, L.P. | Provider edge router system and provider edge router system controller for hybrid virtualization of provider edge router functions |
| US20160330106A1 (en) * | 2015-05-04 | 2016-11-10 | Microsoft Technology Licensing, Llc | Routing Communication Sessions |
| US10171345B2 (en) * | 2015-05-04 | 2019-01-01 | Microsoft Technology Licensing, Llc | Routing communication sessions |
| US20160380823A1 (en) * | 2015-06-23 | 2016-12-29 | Cisco Technology, Inc. | Virtual private network forwarding and nexthop to transport mapping scheme |
| US10361884B2 (en) * | 2015-06-23 | 2019-07-23 | Cisco Technology, Inc. | Virtual private network forwarding and nexthop to transport mapping scheme |
| US9825777B2 (en) * | 2015-06-23 | 2017-11-21 | Cisco Technology, Inc. | Virtual private network forwarding and nexthop to transport mapping scheme |
| US20170163524A1 (en) * | 2015-12-03 | 2017-06-08 | Dell Products L.P. | Multi-chassis lag access node determination system |
| US10148555B2 (en) * | 2015-12-03 | 2018-12-04 | Dell Products L.P. | Multi-chassis LAG access node determination system |
| WO2017142516A1 (en) * | 2016-02-16 | 2017-08-24 | Hewlett Packard Enterprise Development Lp | Software defined networking for hybrid networks |
| US10432427B2 (en) | 2016-03-03 | 2019-10-01 | Futurewei Technologies, Inc. | Border gateway protocol for communication among software defined network controllers |
| WO2017148425A1 (en) * | 2016-03-03 | 2017-09-08 | Huawei Technologies Co., Ltd. | Border gateway protocol for communication among software defined network controllers |
| US10355980B2 (en) * | 2016-09-30 | 2019-07-16 | Juniper Networks, Inc. | Deterministically selecting a bypass LSP for a defined group of protected LSPS |
| US10506037B2 (en) * | 2016-12-13 | 2019-12-10 | Alcatel Lucent | Discovery of ingress provider edge devices in egress peering networks |
| US10594514B2 (en) | 2017-03-29 | 2020-03-17 | At&T Intellectual Property I, L.P. | Method and apparatus for creating border gateway protocol reachability on demand in a multi-protocol label switching network |
| US10243781B1 (en) | 2017-07-05 | 2019-03-26 | Juniper Networks, Inc. | Detecting link faults in network paths that include link aggregation groups (LAGs) |
| US10742488B2 (en) | 2017-07-05 | 2020-08-11 | Juniper Networks, Inc. | Detecting link faults in network paths that include link aggregation groups (LAGs) |
| EP3435602A1 (en) * | 2017-07-28 | 2019-01-30 | Juniper Networks, Inc. | Service level agreement based next-hop selection |
| US10523560B2 (en) | 2017-07-28 | 2019-12-31 | Juniper Networks, Inc. | Service level agreement based next-hop selection |
| US10454812B2 (en) | 2017-07-28 | 2019-10-22 | Juniper Networks, Inc. | Service level agreement based next-hop selection |
| CN109474523A (en) * | 2017-09-07 | 2019-03-15 | China Telecom Corporation Ltd. | SDN-based networking method and system |
| US20210152462A1 (en) * | 2017-09-21 | 2021-05-20 | Silver Peak Systems, Inc. | Selective routing |
| US11805045B2 (en) * | 2017-09-21 | 2023-10-31 | Hewlett Packard Enterprise Development Lp | Selective routing |
| JP2022507436A (en) * | 2018-12-04 | 2022-01-18 | ZTE Corporation | Data center traffic sharing methods, equipment, devices and storage media |
| JP7190569B2 (en) | 2018-12-04 | 2022-12-15 | ZTE Corporation | Data center traffic sharing method, apparatus, device and storage medium |
| US10833973B1 (en) * | 2019-02-15 | 2020-11-10 | Juniper Networks, Inc. | Enabling selection of a bypass path from available paths in an open shortest path first (OSPF) domain and an intermediate system to intermediate system (ISIS) domain |
| US11711290B2 (en) | 2019-02-15 | 2023-07-25 | Juniper Networks, Inc. | Enabling selection of a bypass path from available paths in an open shortest path first (OSPF) domain and an intermediate system to intermediate system (ISIS) domain |
| US11671357B2 (en) * | 2021-01-06 | 2023-06-06 | Arista Networks, Inc. | Systems and method for propagating route information |
| US20220217084A1 (en) * | 2021-01-06 | 2022-07-07 | Arista Networks, Inc. | Systems and method for propagating route information |
| US11962497B2 (en) | 2021-01-06 | 2024-04-16 | Arista Networks, Inc. | Systems and method for propagating route information |
| CN114915591A (en) * | 2021-01-28 | 2022-08-16 | China Telecom Corporation Ltd. | End-to-end service guarantee method and system |
Also Published As
| Publication number | Publication date |
|---|---|
| US20190089628A1 (en) | 2019-03-21 |
| EP2923462A1 (en) | 2015-09-30 |
| HK1215825A1 (en) | 2016-09-15 |
| US20180041430A1 (en) | 2018-02-08 |
| CA2892362A1 (en) | 2014-05-30 |
| US10715429B2 (en) | 2020-07-14 |
| WO2014082056A1 (en) | 2014-05-30 |
| EP2923462B1 (en) | 2017-11-08 |
| US11140076B2 (en) | 2021-10-05 |
| US20200336417A1 (en) | 2020-10-22 |
| US10142225B2 (en) | 2018-11-27 |
| EP2923462A4 (en) | 2016-04-13 |
| CA2892362C (en) | 2019-06-11 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11140076B2 (en) | | Apparatus, system, and method for packet switching |
| US8576721B1 (en) | | Local forwarding bias in a multi-chassis router |
| AU2011300438B2 (en) | | Automated traffic engineering for multi-protocol label switching (MPLS) with link utilization as feedback into the tie-breaking mechanism |
| KR101895092B1 (en) | | MPLS fast re-route using LDP (LDP-FRR) |
| JP7389742B2 (en) | | Communication routing system |
| US10298499B2 (en) | | Technique of operating a network node for load balancing |
| EP2730069B1 (en) | | MPLS fast re-route using LDP (LDP-FRR) |
| CN109309623A (en) | | Maximum redundancy tree to redundant multicast source nodes for multicast protection |
| EP4211883A1 (en) | | Segment routing traffic engineering (SR-TE) with awareness of local protection |
| WO2014018541A1 (en) | | System, method and apparatus conforming path cost criteria across multiple ABRs |
| CN101523354A (en) | | Protection of multi-segment pseudowires |
| EP4002776A1 (en) | | End-to-end flow monitoring in a computer network |
| CN105471599A (en) | | Protection switching method and network device |
| EP4020927A1 (en) | | Packet forwarding on non-coherent paths |
| US20210075728A1 (en) | | Unequal cost load balancing for redundant virtualized fabric edge devices |
| US11451478B1 (en) | | Distributed tactical traffic engineering (TE) using loop free alternative (LFA), remote-LFA (R-LFA) and/or topology independent-LFA (TI-LFA) secondary paths |
| US20150036542A1 (en) | | Method for receiving information, method for sending information, and apparatus for the same |
| CN107770061A (en) | | Packet forwarding method and forwarding device |
| US20150036508A1 (en) | | Method and Apparatus For Gateway Selection In Multilevel SPB Network |
| HK1215825B (en) | | Apparatus, system, and method for packet switching |
| Castoldi et al. | | Segment routing in multi-layer networks |
| EP4513848A1 (en) | | Communication method and apparatus |
| WO2014149960A1 (en) | | System, method and apparatus for LSP setup using inter-domain ABR indication |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: LEVEL 3 COMMUNICATIONS, LLC, COLORADO; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: AMANTE, SHANE; REEL/FRAME: 031701/0252; Effective date: 20131125 |
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |