
EP3001609B1 - Method and device for processing a multicast packet on an NVO3 network, and NVO3 network

Info

Publication number
EP3001609B1
EP3001609B1
Authority
EP
European Patent Office
Prior art keywords
multicast packet
port
nve
multicast
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP13888499.4A
Other languages
German (de)
English (en)
Other versions
EP3001609A1 (fr)
EP3001609A4 (fr)
Inventor
Weiguo Hao
Yizhou Li
Zhenbin Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of EP3001609A1
Publication of EP3001609A4
Application granted
Publication of EP3001609B1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/02: Details
    • H04L 12/16: Arrangements for providing special services to substations
    • H04L 12/18: Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1886: Arrangements for broadcast or conference with traffic restrictions for efficiency improvement, e.g. involving subnets or subdomains
    • H04L 12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46: Interconnection of networks
    • H04L 12/4633: Interconnection of networks using encapsulation techniques, e.g. tunneling
    • H04L 12/4641: Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/16: Multipoint routing
    • H04L 45/74: Address processing for routing

Definitions

  • The present invention relates to the field of communications, and in particular, to a method and an apparatus for processing a multicast packet on a network virtualization over layer 3 (NVO3) network, and an NVO3 network.
  • NVO3 can implement a layer 2 virtual private network (VPN) by using MAC-in-IP encapsulation.
  • Virtual extensible local area network (VXLAN) and network virtualization using generic routing encapsulation (NVGRE) are two typical technologies for implementing NVO3 networking.
  • With VXLAN and NVGRE, by using MAC-in-UDP encapsulation or MAC-in-GRE encapsulation, layer 2 packets in different VPNs can be transmitted across a layer 3 IP network.
  • VXLAN tunnel encapsulation and NVGRE tunnel encapsulation both include a 24-bit virtual overlay network (VN) identifier (ID). By encapsulating a VN ID in a packet, traffic can be isolated between different virtual overlay networks. In a data center, one tenant may correspond to one or more virtual overlay networks.
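  • To make the 24-bit identifier concrete, the following is a minimal Python sketch that packs and unpacks a VXLAN-style 8-byte header (RFC 7348 layout); the VN ID values are hypothetical, and NVGRE carries its own 24-bit identifier in a different header layout.

```python
import struct

def pack_vxlan_header(vn_id: int) -> bytes:
    """Pack a VXLAN-style 8-byte header carrying a 24-bit VN ID.

    Word 1: flags byte (I bit set: VNI is valid) + 24 reserved bits.
    Word 2: 24-bit VNI + 8 reserved bits.
    """
    if not 0 <= vn_id < 1 << 24:
        raise ValueError("VN ID must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vn_id << 8)

def unpack_vn_id(header: bytes) -> int:
    """Extract the 24-bit VN ID from the second 32-bit word."""
    _, word2 = struct.unpack("!II", header[:8])
    return word2 >> 8

# Two tenants receive distinct VN IDs, so their overlay traffic stays isolated.
assert unpack_vn_id(pack_vxlan_header(5001)) == 5001
assert unpack_vn_id(pack_vxlan_header(5001)) != unpack_vn_id(pack_vxlan_header(5002))
```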
  • On an NVO3 network, a VN edge device is called a network virtualization edge (NVE), and a main function of the NVE is to join a tenant end system (TES) to a virtual overlay network.
  • The NVE can isolate traffic between different virtual overlay networks by using VN IDs.
  • Corresponding multicast and unicast forwarding tables of each virtual overlay network are stored in the NVE.
  • The NVE replicates and sends, according to a local multicast forwarding table, multicast traffic (including unknown unicast, broadcast, and multicast, which are uniformly referred to as multicast herein) sent by a local TES to another TES.
  • The NVE also replicates and forwards, according to a network-side multicast forwarding table corresponding to a virtual overlay network of the TES, the multicast traffic sent by the local TES to a remote NVE.
  • When a multicast packet is sent to a remote NVE, two manners may be used: head-end replication and multicast hop-by-hop replication.
  • Unicast NVO3 encapsulation is used in the head-end replication manner, where the destination IP address of a tunnel is an IP address of a destination NVE.
  • In the multicast hop-by-hop replication manner, the destination IP address of a tunnel is a multicast IP address.
  • A correspondence between VNs and multicast IP addresses is preset on each NVE by a network administrator.
  • An NVE also forwards, to a local TES or a remote NVE, a unicast packet sent by a TES.
  • When the unicast packet is forwarded to a remote NVE, unicast NVO3 encapsulation needs to be performed on the unicast packet.
  • FIG. 1 is a schematic structural diagram of an NVO3 network in the prior art. Each TES accesses a VN by using a respective NVE. To ensure TES reliability, a TES may access the network by using multiple NVEs. As shown in FIG. 1, a TES 1 accesses the NVO3 network separately by using a port 1 of an NVE 1 and a port 2 of an NVE 2. This access manner is called multihoming access. The NVE 1 and the NVE 2 connecting the TES 1 are called multihoming NVEs. The port 1 of the NVE 1 and the port 2 of the NVE 2 form a cross-device link aggregation group (LAG).
  • The TES 1 is called a multihomed TES, and an NVE other than a multihoming NVE is called a remote NVE of the multihoming NVE.
  • When all the multihoming NVEs can forward traffic at the same time, this access manner is called all-active or active-active access.
  • In an existing solution, a multihoming NVE configures, for each LAG, an IP address as a source IP address used when the multihoming NVE sends a packet by using a port in the LAG.
  • The multihoming NVE records a correspondence between the LAG and the IP address.
  • NVEs on the NVO3 network mutually notify each other of their correspondences between LAGs and IP addresses, and each NVE records the correspondences between LAGs and IP addresses of the other NVEs.
  • After receiving a multicast packet sent by a second NVE, a first NVE searches, according to a source IP address in the multicast packet, the correspondence between LAGs and IP addresses recorded by the first NVE. If no LAG corresponds to the source IP address, the first NVE replicates, according to a VN ID in the multicast packet, the multicast packet to all local ports corresponding to the VN ID. If a LAG corresponds to the source IP address, and the first NVE and the second NVE belong to a same LAG, the first NVE does not replicate the multicast packet to a local port corresponding to the LAG.
  • Herein, a local port refers to a port connected to a TES.
  • In this solution, the multihoming NVE needs to allocate an IP address to each LAG; if there is a large quantity of LAGs on a network, IP addresses are wasted. In addition, each multihoming NVE needs to determine whether an IP address of every other multihoming NVE and an IP address of the multihoming NVE itself belong to a same LAG; consequently, packet forwarding efficiency drops if there is a large quantity of multihoming NVEs.
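  • For contrast, a minimal Python sketch of this prior-art receive-side check follows (all names, addresses, and table contents are hypothetical); it makes visible why one IP address is consumed per LAG and why every NVE must hold every other NVE's LAG-to-IP mappings.

```python
# One IP address is burned per LAG, and the mapping is replicated on every NVE.
lag_by_source_ip = {
    "10.0.0.1": "lag-1",   # IP used by multihoming NVE 1 for this LAG
    "10.0.0.2": "lag-1",   # IP used by multihoming NVE 2 for the same LAG
}
local_lag_ports = {"lag-1": ["port1"]}   # local ports belonging to each LAG

def prior_art_replicate(source_ip: str, vn_ports: list) -> list:
    """Local ports a decapsulated multicast packet may be replicated to."""
    lag = lag_by_source_ip.get(source_ip)
    if lag is None:                       # no LAG matches: flood all VN ports
        return list(vn_ports)
    blocked = set(local_lag_ports.get(lag, ()))
    return [p for p in vn_ports if p not in blocked]   # skip shared-LAG ports

print(prior_art_replicate("10.0.0.2", ["port1", "port3"]))   # ['port3']
```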
  • The present invention provides a method and an apparatus for processing a multicast packet on an NVO3 network, and an NVO3 network, to solve the problems of IP address wasting and forwarding performance degradation in an all-active or active-active TES access scenario, according to independent claims 1, 7 and 13.
  • A first aspect of the present invention provides a method for processing a multicast packet on a network virtualization over layer 3 (NVO3) network, where the method includes:
  • The determining whether the ingress port is a designated forwarder (DF) of the VN ID includes: searching a DF table according to the ingress port and the VN ID of the first multicast packet, and determining whether the ingress port is the DF of the VN ID of the first multicast packet, where each entry of the DF table includes a VN ID, a port, and a DF flag.
  • The method further includes:
  • A second aspect of the present invention provides an apparatus for processing a multicast packet on a network virtualization over layer 3 (NVO3) network, where the apparatus includes:
  • The determining module is specifically configured to search a DF table according to the ingress port and the VN ID of the first multicast packet, and determine whether the ingress port is the DF of the VN ID of the first multicast packet, where each entry of the DF table includes a VN ID, a port, and a DF flag.
  • The apparatus further includes a second VN ID acquiring module, an egress port acquiring module, and a second judging module, where
  • A third aspect of the present invention provides a network virtualization over layer 3 (NVO3) network, where the NVO3 network includes a multihomed tenant end system (TES), a first multihoming network virtualization edge (NVE), and a second multihoming NVE; the multihomed TES is separately connected to the first multihoming NVE and the second multihoming NVE; the multihomed TES is configured to send a first multicast packet; and
  • The first multihoming NVE searches a DF table according to the ingress port and the VN ID of the first multicast packet, and determines whether the ingress port is the DF of the VN ID of the first multicast packet, where each entry of the DF table includes a VN ID, a port, and a DF flag.
  • The second multihoming NVE is configured to send a second multicast packet; and the first multihoming NVE is further configured to receive the second multicast packet, perform NVO3 decapsulation on the second multicast packet, and acquire a VN ID of the second multicast packet from an NVO3 header of the second multicast packet; look up a local multicast forwarding entry corresponding to the VN ID of the second multicast packet, and acquire an egress port in the local multicast forwarding entry; and determine whether the egress port is a DF of the VN ID of the second multicast packet, and process the decapsulated second multicast packet according to a result of the determining.
  • A fourth aspect of the present invention provides an apparatus for processing a multicast packet on a network virtualization over layer 3 (NVO3) network, where the apparatus includes a processor, a memory, a network interface, and a bus; the processor, the memory, and the network interface are all connected to the bus, and the memory is configured to store program instructions; and
  • In the embodiments of the present invention, when a first multihoming NVE receives a first multicast packet from a multihomed TES and it is determined that an ingress port of the first multicast packet is not a DF of a VN ID carried in the first multicast packet, the first multihoming NVE does not forward the first multicast packet to a local port, thereby preventing the first multicast packet sent by the multihomed TES from being looped back to the multihomed TES.
  • In addition, IP address wasting can be avoided, thereby improving forwarding efficiency.
  • As shown in FIG. 2, the NVO3 network 20 includes a multihomed TES 21 and multiple multihoming NVEs, such as a first multihoming NVE 22a and a second multihoming NVE 22b in the figure.
  • The multihomed TES 21 is separately connected to the first multihoming NVE 22a and the second multihoming NVE 22b.
  • The multihomed TES 21 is configured to send a first multicast packet, where the first multicast packet includes any packet, such as a broadcast packet, a multicast packet, or an unknown unicast packet, that needs to be sent in a multicast manner on the NVO3 network.
  • The first multihoming NVE 22a is configured to receive the first multicast packet, determine a sender of the first multicast packet, and, when it is determined that the first multicast packet is sent by the multihomed TES, acquire an ingress port of the first multicast packet and a VLAN ID of the first multicast packet; acquire a VN ID of the first multicast packet according to the ingress port of the first multicast packet and the VLAN ID; determine whether the ingress port is a DF of the VN ID of the first multicast packet; when the ingress port is not the DF of the VN ID of the first multicast packet, encapsulate the first multicast packet with an extended NVO3 header; and send the first multicast packet that is encapsulated with the extended NVO3 header to another NVE that includes the second multihoming NVE, where the extended NVO3 header carries the VN ID of the first multicast packet and a link aggregation group identifier (LAG ID) that corresponds to the ingress port.
  • The first multihoming NVE may send the first multicast packet that is encapsulated with the extended NVO3 header to the other NVEs in a manner of head-end replication or multicast hop-by-hop replication.
  • If the head-end replication manner is used, the first multihoming NVE replicates the packet for each destination NVE, and sends the replicated packet to each destination NVE through a unicast NVO3 tunnel, where an outer destination IP address of the NVO3 tunnel is a unicast IP address of each destination NVE. If the multicast hop-by-hop replication manner is used, the first multihoming NVE sends the packet to each destination NVE through a multicast NVO3 tunnel, where an outer destination IP address of the NVO3 tunnel is a multicast IP address. A correspondence between VN IDs and multicast IP addresses is preset on the first multihoming NVE.
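  • A short Python sketch of the two delivery manners is given below (the NVE addresses, group address, and table contents are hypothetical): head-end replication emits one unicast copy per destination NVE, while multicast hop-by-hop replication emits a single copy addressed to the preset multicast group.

```python
DEST_NVES = {4096: ["192.0.2.2", "192.0.2.3"]}   # VN ID -> remote NVE unicast IPs
VN_TO_GROUP = {4096: "239.1.1.1"}                # preset VN ID -> multicast IP

def outer_destinations(vn_id: int, manner: str) -> list:
    """Outer destination IPs for one NVO3-encapsulated multicast packet."""
    if manner == "head-end":
        return list(DEST_NVES[vn_id])        # replicate once per destination NVE
    return [VN_TO_GROUP[vn_id]]              # single copy to the multicast group

assert len(outer_destinations(4096, "head-end")) == 2
assert outer_destinations(4096, "hop-by-hop") == ["239.1.1.1"]
```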
  • The LAG ID only needs to be unique on each NVE; in this case, the LAG ID in the extended NVO3 header is the LAG ID allocated on the destination NVE.
  • For example, a TES is multihomed to port 1 of NVE1, port 2 of NVE2, and port 3 of NVE3; port 1, port 2, and port 3 then form a LAG.
  • NVE1 allocates the identifier 10 to the LAG, NVE2 allocates the identifier 20 to the LAG, and NVE3 allocates the identifier 30 to the LAG.
  • When NVE1 sends a packet to NVE2 and NVE3, 20 and 30 are respectively filled in the LAG ID fields of the extended NVO3 headers.
  • Alternatively, NVE1, NVE2, and NVE3 may all record the identifier 10 for the LAG.
  • In that case, when NVE1 sends a packet to NVE2 and NVE3, 10 is filled in the LAG ID fields of the extended NVO3 headers.
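  • The example above can be condensed into the following Python sketch (the identifiers are taken from the example; the table layout is hypothetical); the point is that with locally unique LAG IDs, the sender fills in the identifier allocated by the destination NVE.

```python
# (NVE, LAG) -> LAG ID allocated on that NVE; IDs are only locally unique.
LOCAL_LAG_IDS = {
    ("NVE1", "lag-A"): 10,
    ("NVE2", "lag-A"): 20,
    ("NVE3", "lag-A"): 30,
}

def header_lag_id(dest_nve: str, lag: str) -> int:
    """LAG ID to carry in the extended NVO3 header sent to dest_nve."""
    return LOCAL_LAG_IDS[(dest_nve, lag)]

assert header_lag_id("NVE2", "lag-A") == 20   # NVE1 -> NVE2 carries 20
assert header_lag_id("NVE3", "lag-A") == 30   # NVE1 -> NVE3 carries 30
# With network-wide unique IDs instead, all three entries would map to 10
# and every header would simply carry 10.
```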
  • The first multihoming NVE searches a DF table according to the ingress port and the VN ID of the first multicast packet, and determines whether the ingress port is a designated forwarder (DF) of the VN ID of the first multicast packet.
  • When, in the DF table, there is no entry corresponding to the VN ID of the first multicast packet and the ingress port, or when a DF flag in a found entry indicates that the ingress port is not the DF, it is determined that the ingress port is not the DF of the VN ID.
  • The first multihoming NVE is further configured to record link aggregation group information, where the link aggregation group information includes a link aggregation group identifier and the ports included in the link aggregation group; negotiate with the second multihoming NVE according to the link aggregation group information; select one port from all the ports in the link aggregation group as a DF of a VN ID of the multihomed TES 21; and record the negotiation result in a DF table.
  • The VN ID is used to identify a VN to which the multihomed TES 21 belongs.
  • The DF table includes VN IDs, ports, and DF flags, where a DF flag is used to mark whether a port is a DF of a VN ID.
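  • A minimal Python sketch of such a DF table follows (the entries are hypothetical); a missing entry and a false DF flag are both treated as "not the DF", matching the rule stated above.

```python
# (VN ID, port) -> DF flag; one entry per port of the LAG, per VN ID.
DF_TABLE = {
    (4096, "first_port"): True,    # won the negotiation: DF for VN 4096
    (4096, "second_port"): False,  # not the DF for VN 4096
}

def is_df(vn_id: int, port: str) -> bool:
    """A port is the DF only if an entry exists and its DF flag is set."""
    return DF_TABLE.get((vn_id, port), False)

assert is_df(4096, "first_port")
assert not is_df(4096, "second_port")
assert not is_df(4096, "unknown_port")   # no entry -> not the DF
```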
  • The link aggregation group information shown in FIG. 2 includes one link aggregation group, whose LAG ID is LAG 1 and which includes a first port and a second port.
  • The first multihoming NVE is further configured to receive a second multicast packet sent by the second multihoming NVE, perform NVO3 decapsulation on the second multicast packet, and acquire a VN ID of the second multicast packet from an NVO3 header of the second multicast packet; look up a local multicast forwarding entry corresponding to the VN ID of the second multicast packet, and acquire an egress port in the local multicast forwarding entry; and determine whether the egress port is a designated forwarder (DF) of the VN ID of the second multicast packet, and process the decapsulated second multicast packet according to a result of the determining.
  • When the egress port is not the DF of the VN ID of the second multicast packet, the first multihoming NVE is further configured to discard the decapsulated second multicast packet.
  • When it is determined that the egress port is the DF of the VN ID of the second multicast packet, the first multihoming NVE is further configured to determine whether the second multicast packet and the egress port have a same LAG ID, where a LAG ID of the second multicast packet is obtained from the NVO3 header of the second multicast packet.
  • When the second multicast packet and the egress port have a same LAG ID, the first multihoming NVE is further configured to discard the decapsulated second multicast packet; when the second multicast packet and the egress port do not have a same LAG ID, the first multihoming NVE is further configured to forward the decapsulated second multicast packet by using the egress port.
  • It should be noted that a local multicast forwarding table and a local DF table include only ports.
  • When a port is used for receiving a packet, the port is an ingress port; when a port is used for sending a packet, the port is an egress port. Therefore, the ingress port and the egress port are differentiated only in terms of multicast packet direction, and this does not affect information in the local multicast forwarding table or the local DF table.
  • In this embodiment, when the first multihoming NVE receives a first multicast packet from the multihomed TES and it is determined that an ingress port of the first multicast packet is not a DF of a VN ID carried in the multicast packet, the first multihoming NVE does not forward the first multicast packet to a local port, thereby preventing the first multicast packet sent by the multihomed TES from being looped back to the multihomed TES.
  • In addition, IP address wasting can be avoided, thereby improving forwarding efficiency.
  • An embodiment of the present invention further provides a method for processing a multicast packet on an NVO3 network.
  • The method applies to the NVO3 network shown in FIG. 2, and may be executed by any one of the multiple multihoming NVEs.
  • A first multihoming NVE is used as an example in the description of FIG. 3.
  • The method 30 includes the following steps. In step 301, the first multihoming NVE receives a multicast packet; in step 302, the first multihoming NVE determines a sender of the multicast packet.
  • Specifically, in step 302, when the multicast packet is an NVO3-encapsulated packet, it is determined that the sender of the multicast packet is another NVE; when the multicast packet is an Ethernet packet without NVO3 encapsulation and an ingress port of the multicast packet is a port in a LAG, it is determined that the sender of the multicast packet is a multihomed TES.
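  • The sender check in step 302 can be sketched in Python as follows (the function boundary and the names are hypothetical; the fallback branch for a singly homed local TES is an assumption, since the text only names the two cases):

```python
def classify_sender(is_nvo3_encapsulated: bool, ingress_port: str,
                    lag_member_ports: set) -> str:
    """Step 302: decide who sent the received multicast packet."""
    if is_nvo3_encapsulated:
        return "another-nve"        # NVO3-encapsulated: from another NVE
    if ingress_port in lag_member_ports:
        return "multihomed-tes"     # plain Ethernet on a LAG member port
    return "other-local-tes"        # assumption: remaining local traffic

assert classify_sender(True, "port9", {"port1"}) == "another-nve"
assert classify_sender(False, "port1", {"port1"}) == "multihomed-tes"
```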
  • Another NVE herein is any NVE other than the first multihoming NVE on the NVO3 network, and may be a multihoming NVE or an ordinary NVE.
  • In step 303, the first multihoming NVE performs NVO3 decapsulation on the multicast packet, and acquires a VN ID of the multicast packet from an NVO3 header of the multicast packet.
  • The NVO3 header herein includes both a normal NVO3 header and an extended NVO3 header.
  • In step 304, the first multihoming NVE looks up a local multicast forwarding entry corresponding to the VN ID, and acquires an egress port in the local multicast forwarding entry.
  • A local multicast forwarding table is pre-configured or pre-generated on an NVE, as shown in Table 1.
  • Each local multicast forwarding entry includes a VN ID, a port, and a virtual local area network identifier (VLAN ID).
  • When the first multihoming NVE receives a multicast packet from a port, the port is called an ingress port; when the first multihoming NVE sends a multicast packet from a port, the port is called an egress port.
  • In this case, the multicast packet is received from another NVE and needs to be sent to a TES connected to the first multihoming NVE; therefore, the port is called the egress port.
  • After finding the local multicast forwarding entry, the first multihoming NVE acquires the egress port from the local multicast forwarding entry. Before sending the NVO3-decapsulated multicast packet by using the egress port found in the local multicast forwarding entry, the first multihoming NVE encapsulates the multicast packet with the corresponding VLAN ID.
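  • The two lookups against the local multicast forwarding table can be sketched in Python as follows (the rows are hypothetical; the layout is as described: VN ID, port, VLAN ID). The same table answers ingress port + VLAN ID to VN ID for packets from a TES, and VN ID to egress entries for packets from another NVE.

```python
FORWARDING_TABLE = [
    {"vn_id": 4096, "port": "port1", "vlan_id": 100},
    {"vn_id": 4096, "port": "port3", "vlan_id": 200},
]

def vn_id_from_ingress(port: str, vlan_id: int):
    """Packet from a TES: map ingress port + VLAN ID to the VN ID."""
    for entry in FORWARDING_TABLE:
        if entry["port"] == port and entry["vlan_id"] == vlan_id:
            return entry["vn_id"]
    return None

def egress_entries(vn_id: int) -> list:
    """Packet from another NVE: all egress ports (with VLAN IDs) for the VN."""
    return [e for e in FORWARDING_TABLE if e["vn_id"] == vn_id]

assert vn_id_from_ingress("port1", 100) == 4096
assert [e["port"] for e in egress_entries(4096)] == ["port1", "port3"]
```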
  • In step 305, the first multihoming NVE determines whether the egress port is a DF of the VN ID, and processes the decapsulated second multicast packet according to a result of the determining.
  • The processing the decapsulated second multicast packet according to a result of the determining includes: performing step 307 when the egress port is the DF of the VN ID, or performing step 306 when the egress port is not the DF of the VN ID.
  • The DF is determined by negotiation between the first multihoming NVE and a second multihoming NVE (which refers to one or more other multihoming NVEs, where a port of the second multihoming NVE and a port of the first multihoming NVE belong to a same LAG), or is pre-configured by an administrator.
  • Specifically, the first multihoming NVE searches a DF table according to the VN ID and the egress port, and determines, according to a found DF entry, whether the egress port is the DF of the VN ID.
  • The DF table is pre-configured on the first multihoming NVE or pre-generated by the first multihoming NVE.
  • An entry of the DF table includes a VN ID, an egress port, and a DF flag. The DF flag is used to mark whether the egress port is the DF of the VN ID.
  • In step 306, the first multihoming NVE discards the decapsulated second multicast packet.
  • Specifically, the first multihoming NVE discards the multicast packet on the egress port; that is, the first multihoming NVE does not send the multicast packet to a multihomed TES connected to an egress port in the local multicast forwarding entry.
  • In step 307, the first multihoming NVE determines whether the multicast packet and the egress port have a same LAG ID; performs step 308 when the multicast packet and the egress port have a same LAG ID; and performs step 309 when the multicast packet and the egress port do not have a same LAG ID.
  • A LAG ID of the multicast packet is obtained from the NVO3 header of the multicast packet.
  • That the multicast packet and the egress port do not have a same LAG ID specifically includes the following cases: the NVO3 header is a normal NVO3 header that does not carry a LAG ID, a LAG ID carried in the NVO3 header is an invalid value, or a LAG ID carried in the NVO3 header is different from a LAG ID of the egress port.
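  • The three "not the same LAG ID" cases enumerate cleanly; the following is a Python sketch (the sentinel value for an invalid LAG ID is hypothetical):

```python
INVALID_LAG_ID = 0   # hypothetical sentinel for an invalid LAG ID value

def same_lag_id(header_lag_id, egress_port_lag_id: int) -> bool:
    """True only when a valid LAG ID is carried and matches the egress port."""
    if header_lag_id is None:             # normal NVO3 header: no LAG ID carried
        return False
    if header_lag_id == INVALID_LAG_ID:   # carried LAG ID is an invalid value
        return False
    return header_lag_id == egress_port_lag_id

assert not same_lag_id(None, 10)      # case 1: no LAG ID in the header
assert not same_lag_id(0, 10)         # case 2: invalid LAG ID
assert not same_lag_id(20, 10)        # case 3: differs from the egress port
assert same_lag_id(10, 10)            # same LAG ID: suppress forwarding
```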
  • The LAG ID of the egress port is pre-configured.
  • In step 308, the first multihoming NVE discards the decapsulated second multicast packet.
  • That the multicast packet and the egress port have a same link aggregation group identifier indicates that the multicast packet was sent by a multihomed TES connected to the egress port and was forwarded, by using a non-DF port of the VN ID, by another multihoming NVE that belongs to a same link aggregation group as the first multihoming NVE.
  • In this case, the first multihoming NVE discards the multicast packet on the egress port; that is, the multicast packet is not forwarded by using the egress port.
  • In step 309, the first multihoming NVE forwards the decapsulated second multicast packet by using the egress port.
  • That the multicast packet and the egress port have different link aggregation group identifiers indicates that the multicast packet did not come from the egress port, and that sending the multicast packet to the egress port does not cause a loop on a multihomed TES connected to the egress port.
  • In addition, because the egress port is a DF corresponding to the VN ID, forwarding the multicast packet by using the egress port does not cause the multihomed TES connected to the egress port to repeatedly receive the multicast packet.
  • In step 310, the first multihoming NVE acquires an ingress port of the multicast packet and a VLAN ID of the multicast packet, and acquires a VN ID of the first multicast packet according to the ingress port and the VLAN ID.
  • NVO3 encapsulation is not performed on the multicast packet sent by the multihomed TES; therefore, no VN ID is carried in the packet header.
  • Specifically, a local multicast forwarding table may be searched according to the ingress port of the multicast packet and the VLAN ID of the multicast packet, to acquire the VN ID of the multicast packet.
  • The local multicast forwarding table in this step is shown in Table 1.
  • The multicast packet is a packet received from the multihomed TES; therefore, the port from which the multicast packet is received is called an ingress port.
  • In step 311, the first multihoming NVE determines whether the ingress port is a DF of the VN ID; performs step 312 when the ingress port is the DF of the VN ID; and performs step 313 when the ingress port is not the DF of the VN ID.
  • Specifically, the first multihoming NVE searches a DF table according to the ingress port and the VN ID of the multicast packet, and determines whether the ingress port is the DF of the VN ID of the multicast packet, where each entry of the DF table includes a VN ID, a port, and a DF flag. When, in the DF table, there is no entry corresponding to the VN ID of the multicast packet and the ingress port, or a DF flag in a found entry indicates that the ingress port is not the DF, it is determined that the ingress port is not the DF of the VN ID.
  • In step 312, the first multihoming NVE performs normal NVO3 encapsulation on the multicast packet and sends the multicast packet to another NVE.
  • The ingress port is the DF of the VN ID and can forward a packet of the VN ID; therefore, the first multihoming NVE encapsulates the multicast packet by adding the VN ID to an NVO3 header of the multicast packet, and then sends the multicast packet to the another NVE.
  • In addition, the first multihoming NVE may further multicast, according to the local multicast forwarding table, the multicast packet by using local ports other than the ingress port to local TESs other than the multihomed TES that sent the multicast packet.
  • The normal NVO3 encapsulation described in this embodiment means that an NVO3 header resulting from the encapsulation carries only a VN ID.
  • In step 313, the first multihoming NVE encapsulates the multicast packet with an extended NVO3 header, and sends the multicast packet that is encapsulated with the extended NVO3 header to another NVE.
  • The extended NVO3 header described in this embodiment is an NVO3 header that carries both a VN ID and a LAG ID.
  • The LAG ID is the LAG ID of the LAG to which the ingress port belongs.
  • The extended NVO3 header may further carry a flag bit, where the flag bit is used to indicate that the extended NVO3 header carries a LAG ID.
  • The ingress port is not the DF of the VN ID; therefore, to avoid a loop on the TES that sent the multicast packet, the first multihoming NVE does not multicast the multicast packet to local ports, but sends the multicast packet only to another NVE. The first multihoming NVE encapsulates the multicast packet with the VN ID and a LAG ID of the ingress port, so that a second multihoming NVE that receives the multicast packet can determine whether it may multicast the multicast packet to its local ports.
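  • Steps 311 to 313 combine into the following Python sketch (the header is modeled as a dict; only the fields named in this embodiment appear, and the DF table is a stand-in for the one negotiated earlier):

```python
DF_TABLE = {(4096, "port1"): True}   # stand-in DF table: (VN ID, port) -> flag

def encapsulate_from_tes(vn_id: int, ingress_port: str, ingress_lag_id: int) -> dict:
    """Steps 311-313: choose normal or extended NVO3 encapsulation."""
    if DF_TABLE.get((vn_id, ingress_port), False):
        # Step 312: DF port -> normal header (VN ID only); the packet may
        # also be replicated to other local ports of the VN.
        return {"vn_id": vn_id, "replicate_locally": True}
    # Step 313: non-DF port -> extended header carrying the LAG ID (plus a
    # flag bit announcing it), and no replication to local ports.
    return {"vn_id": vn_id, "lag_id": ingress_lag_id, "lag_id_present": True,
            "replicate_locally": False}

assert encapsulate_from_tes(4096, "port1", 10)["replicate_locally"] is True
assert encapsulate_from_tes(4096, "port2", 10)["lag_id"] == 10
```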
  • FIG. 3 covers the various processing procedures that are possible after the first multihoming NVE receives the multicast packet.
  • FIG. 3 may be split into the following processing procedures, where each procedure can independently form a method for processing a multicast packet on an NVO3 network.
  • The foregoing first, second, third, fourth, and fifth are merely used for exemplary description, and do not limit the order of the multicast packets.
  • The foregoing procedures A to E can each independently complete the processing of one type of multicast packet. Therefore, the method for processing a multicast packet on an NVO3 network in this embodiment of the present invention only needs to include at least one of the foregoing procedures. When one of the procedures is chosen to be protected, the steps in the other procedures may be considered an optional implementation manner of the method described in this embodiment of the present invention.
  • For example, a first multihoming NVE receives a first multicast packet and, when it is determined that a sender of the first multicast packet is a local multihomed TES, performs steps 310, 311, and 313 for the first multicast packet, where the other steps are optional.
  • Optionally, the first multihoming NVE further receives a second multicast packet in addition to the first multicast packet, and then performs one or more of steps 303 to 309 according to the characteristics of the second multicast packet.
  • In this embodiment, when a first multihoming NVE receives a first multicast packet from a multihomed TES and it is determined that an ingress port of the first multicast packet is not a DF of a VN ID carried in the multicast packet, the first multihoming NVE does not forward the first multicast packet to a local port, thereby preventing the first multicast packet sent by the multihomed TES from being looped back to the multihomed TES.
  • In addition, IP address wasting can be avoided, thereby improving forwarding efficiency.
  • Correspondingly, an embodiment of the present invention provides an apparatus for processing a multicast packet on an NVO3 network, configured to execute the method shown in FIG. 3.
  • The apparatus 40 includes a receiving module 401, configured to receive a multicast packet.
  • A determining module 402 is configured to determine a sender of the multicast packet, trigger a second VN ID acquiring module 403 when it is determined that the sender of the multicast packet is an NVE, and trigger a first VN ID acquiring module 410 when it is determined that the sender of the first multicast packet is a local multihomed TES.
  • The determining module 402 is specifically configured to: when the multicast packet is an NVO3-encapsulated packet, determine that the sender of the multicast packet is another NVE; and when the multicast packet is an Ethernet packet without NVO3 encapsulation and an ingress port of the multicast packet is a port in a LAG, determine that the sender of the multicast packet is a multihomed TES.
  • Another NVE herein is any NVE on the NVO3 network other than the apparatus, and may be a multihoming NVE or an ordinary NVE.
  • The second VN ID acquiring module 403 is configured to perform NVO3 decapsulation on the multicast packet, and acquire a VN ID of the multicast packet from an NVO3 header of the multicast packet.
  • The NVO3 header herein includes both a normal NVO3 header and an extended NVO3 header.
  • An egress port acquiring module 404 is configured to look up a local multicast forwarding entry corresponding to the VN ID of the multicast packet acquired by the second VN ID acquiring module, and acquire an egress port in the local multicast forwarding entry.
  • The local multicast forwarding table is shown in Table 1 of the embodiment shown in FIG. 3.
  • A second judging module 405 is configured to determine whether the egress port is a DF of the VN ID; trigger a third judging module 407 when the egress port is the DF of the VN ID; and trigger a first discarding module 406 when the egress port is not the DF of the VN ID.
  • The DF is determined by negotiation among the multiple multihoming NVEs, or is pre-configured by an administrator.
  • The second judging module 405 is specifically configured to search a DF table according to the VN ID and the egress port, and determine, according to a found DF entry, whether the egress port is the DF of the VN ID.
  • An entry of the DF table includes a VN ID, an egress port, and a DF flag.
  • The DF flag is used to mark whether the egress port is the DF of the VN ID.
  • The first discarding module 406 is configured to discard, as triggered by the second judging module 405, the decapsulated second multicast packet.
  • Specifically, the first discarding module 406 discards the multicast packet on the egress port.
  • The third judging module 407 is configured to determine whether the multicast packet and the egress port have a same LAG ID; trigger a second discarding module 408 when the multicast packet and the egress port have a same LAG ID; and trigger a third sending module 409 when the multicast packet and the egress port do not have a same LAG ID.
  • A LAG ID of the multicast packet is obtained from the NVO3 header of the multicast packet.
  • That the multicast packet and the egress port do not have a same LAG ID specifically includes the following cases: the NVO3 header is a normal NVO3 header that does not carry a LAG ID, a LAG ID carried in the NVO3 header is an invalid value, or a LAG ID carried in the NVO3 header is different from a LAG ID of the egress port.
  • The LAG ID of the egress port is pre-configured.
  • The second discarding module 408 is configured to discard, as triggered by the third judging module, the decapsulated second multicast packet.
  • That the multicast packet and the egress port have a same link aggregation group identifier indicates that the multicast packet came from a multihomed TES connected to the egress port and was forwarded, by using a non-DF port of the VN ID, by another multihoming NVE that belongs to a same link aggregation group as the apparatus.
  • In this case, the second discarding module 408 discards the multicast packet on the egress port; that is, the multicast packet is not forwarded by using the egress port.
  • The third sending module 409 is configured to forward the decapsulated second multicast packet by using the egress port.
  • That the multicast packet and the egress port have different link aggregation group identifiers indicates that the multicast packet did not come from the egress port, and that sending the multicast packet to the egress port does not cause a loop on a multihomed TES connected to the egress port.
  • In addition, because the egress port is a DF corresponding to the VN ID, that the third sending module 409 forwards the multicast packet by using the egress port does not cause the multihomed TES connected to the egress port to repeatedly receive the multicast packet.
  • The first VN ID acquiring module 410 is configured to acquire an ingress port of the multicast packet and a VLAN ID of the multicast packet, and acquire a VN ID of the first multicast packet according to the ingress port and the VLAN ID.
  • Specifically, the first VN ID acquiring module 410 searches a local multicast forwarding table according to the ingress port of the multicast packet and the VLAN ID of the multicast packet, to acquire the VN ID of the multicast packet.
  • The local multicast forwarding table herein is shown in Table 1.
  • The multicast packet is a packet received from the multihomed TES; therefore, the port from which the multicast packet is received is called an ingress port.
  • The first judging module 411 is configured to determine whether the ingress port is a DF of the VN ID, trigger a second sending module 412 when the ingress port is the DF of the VN ID, and trigger a first sending module 413 when the ingress port is not the DF of the VN ID.
  • Specifically, the first judging module 411 searches the DF table according to the ingress port and the VN ID, and determines whether the ingress port is the DF of the VN ID of the multicast packet, where each entry of the DF table includes a VN ID, a port, and a DF flag. When there is no entry corresponding to the VN ID and the ingress port of the multicast packet in the DF table, or a DF flag in a found entry indicates that the ingress port is not the DF, it is determined that the ingress port is not the DF of the VN ID.
  • The second sending module 412 is configured to send a multicast packet with normal NVO3 encapsulation to another NVE.
  • The ingress port is the DF of the VN ID and can forward a packet of the VN ID; therefore, the second sending module 412 encapsulates the multicast packet by adding the VN ID to an NVO3 header of the multicast packet, and then sends the multicast packet to the another NVE.
  • In addition, the second sending module 412 may further multicast, according to the local multicast forwarding table, the multicast packet by using local ports other than the ingress port to local TESs other than the multihomed TES that sent the multicast packet.
  • The normal NVO3 encapsulation described in this embodiment means that an NVO3 header resulting from the encapsulation carries only a VN ID.
  • The first sending module 413 is configured to encapsulate the multicast packet with an extended NVO3 header, and send the multicast packet that is encapsulated with the extended NVO3 header to another NVE.
  • The extended NVO3 header described in this embodiment is an NVO3 header that carries both a VN ID and a LAG ID.
  • The LAG ID is the LAG ID of the LAG to which the ingress port belongs.
  • The extended NVO3 header may further carry a flag bit, where the flag bit is used to indicate that the extended NVO3 header carries a LAG ID.
  • The ingress port is not the DF of the VN ID; therefore, to avoid causing a loop on the TES that sent the multicast packet, the first sending module 413 does not multicast the multicast packet to local ports, but sends the multicast packet only to another NVE. The first sending module 413 encapsulates the multicast packet with the VN ID and a LAG ID of the ingress port, so that a multihoming NVE that receives the multicast packet can determine whether it may multicast the multicast packet to its local ports.
  • FIG. 4 includes the various modules that may need to participate in processing a multicast packet after the apparatus for processing a multicast packet on an NVO3 network receives the multicast packet.
  • The modules in FIG. 4 may be classified into the following groups, where each group can independently form an apparatus for processing a multicast packet on an NVO3 network.
  • The foregoing first, second, third, fourth, and fifth are merely used for exemplary description, and do not limit the order of the multicast packets.
  • The foregoing groups of modules may be called simultaneously to process different multicast packets, or may be called separately. Any manner of calling the foregoing groups of modules shall fall within the protection scope of the present invention.
  • The foregoing group A to group E can each independently complete the processing of one type of multicast packet. Therefore, the apparatus for processing a multicast packet on an NVO3 network in this embodiment of the present invention only needs to include at least one of the foregoing groups of modules. When one group of modules is chosen to be protected, the other groups of modules may all be considered optional implementation manners of the apparatus described in this embodiment of the present invention.
  • For example, the receiving module 401 receives a first multicast packet; when the determining module 402 determines that a sender of the first multicast packet is a local multihomed TES, the determining module 402 triggers the first VN ID acquiring module 410, the first judging module 411, and the first sending module 413 to perform the corresponding operations.
  • Optionally, the receiving module 401 further receives a second multicast packet in addition to the first multicast packet, and one or more of modules 403 to 409 are then called according to the characteristics of the second multicast packet.
  • FIG. 5 is a schematic structural diagram of another apparatus for processing a multicast packet on an NVO3 network according to an embodiment of the present invention.
  • The apparatus 50 for processing a multicast packet on an NVO3 network includes a processor 501, a memory 502, a network interface 503, and a bus 504.
  • The processor 501, the memory 502, and the network interface 503 are all connected to the bus 504.
  • The processor 501 is configured to receive a multicast packet by using the network interface 503, and execute, according to the characteristics of the multicast packet, one or more of procedures A to E in the method shown in FIG. 3.
  • The foregoing processing performed by the processor 501 is generally implemented under the control of one or more software programs, and the program instructions of the one or more software programs are stored in the memory 502.
  • The processor 501 reads the program instructions, and performs, according to the program instructions, some or all of the steps of the method shown in FIG. 3.
  • In this embodiment, when the apparatus receives a first multicast packet from a multihomed TES and it is determined that an ingress port of the first multicast packet is not a DF of a VN ID carried in the multicast packet, the first multicast packet is not forwarded to a local port, thereby preventing the first multicast packet sent by the multihomed TES from being looped back to the multihomed TES.
  • In addition, IP address wasting can be avoided, thereby improving forwarding efficiency.
  • The program may be stored in a computer-readable storage medium.
  • The foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Claims (13)

  1. A method for processing a multicast packet on a network virtualization over layer 3, NVO3, network, the method comprising:
    receiving (301) a first multicast packet;
    acquiring (310) an ingress port of the first multicast packet and a virtual local area network identifier, VLAN ID, of the first multicast packet when a sender of the first multicast packet is a local multihomed tenant end system, TES;
    acquiring (310) a virtual overlay network identifier, VN ID, of the first multicast packet according to the ingress port and the VLAN ID;
    determining (311) whether the ingress port is a designated forwarder, DF, of the VN ID;
    encapsulating (313) the first multicast packet with an extended NVO3 header when the ingress port is not the DF of the VN ID; and
    sending (313) the first multicast packet that is encapsulated with the extended NVO3 header to another network virtualization edge, NVE, wherein the extended NVO3 header carries the VN ID of the first multicast packet and a link aggregation group identifier, LAG ID, that corresponds to the ingress port;
    wherein the determining whether the ingress port is a designated forwarder, DF, of the VN ID comprises:
    searching a DF table according to the ingress port and the VN ID of the first multicast packet, and determining whether the ingress port is the DF of the VN ID of the first multicast packet, wherein each entry of the DF table comprises a VN ID, a port, and a DF flag.
  2. The method according to claim 1, further comprising:
    receiving a second multicast packet;
    performing NVO3 decapsulation on the second multicast packet when a sender of the second multicast packet is an NVE;
    acquiring a VN ID of the second multicast packet from an NVO3 header of the second multicast packet;
    looking up a local multicast forwarding entry corresponding to the VN ID of the second multicast packet;
    acquiring an egress port in the local multicast forwarding entry;
    determining whether the egress port is a DF of the VN ID of the second multicast packet; and
    processing the decapsulated second multicast packet according to a result of the determining.
  3. The method according to claim 2, wherein the processing the decapsulated second multicast packet according to a result of the determining comprises:
    discarding the decapsulated second multicast packet when it is determined that the egress port is not the DF of the VN ID of the second multicast packet.
  4. The method according to claim 2, wherein the processing the decapsulated second multicast packet according to a result of the determining comprises:
    determining whether the second multicast packet and the egress port have a same LAG ID when it is determined that the egress port is the DF of the VN ID of the second multicast packet, wherein the LAG ID of the second multicast packet is obtained from the NVO3 header of the second multicast packet.
  5. The method according to claim 4, wherein,
    when the second multicast packet and the egress port have a same LAG ID, the method further comprises discarding the decapsulated second multicast packet.
  6. The method according to claim 4, wherein,
    when the second multicast packet and the egress port do not have a same LAG ID, the method further comprises forwarding the decapsulated second multicast packet by using the egress port.
  7. An apparatus for processing a multicast packet on a network virtualization over layer 3, NVO3, network, comprising:
    a receiving module (401), configured to receive a first multicast packet;
    a determining module (402), configured to determine a sender of the first multicast packet, and trigger a first virtual overlay network identifier, VN ID, acquiring module when it is determined that the sender of the first multicast packet is a local multihomed tenant end system, TES;
    the first VN ID acquiring module (410), configured to acquire an ingress port of the first multicast packet and a virtual local area network identifier, VLAN ID, of the first multicast packet, and acquire a VN ID of the first multicast packet according to the ingress port and the VLAN ID;
    a first judging module (411), configured to determine whether the ingress port is a designated forwarder, DF, of the VN ID, and trigger a first sending module when the ingress port is not the DF of the VN ID; and
    the first sending module (413), configured to encapsulate the first multicast packet with an extended NVO3 header, and send the first multicast packet that is encapsulated with the extended NVO3 header to another network virtualization edge, NVE, wherein the extended NVO3 header carries the VN ID of the first multicast packet and a link aggregation group identifier, LAG ID, that corresponds to the ingress port;
    wherein the determining module is specifically configured to search a DF table according to the ingress port and the VN ID of the first multicast packet, and determine whether the ingress port is the DF of the VN ID of the first multicast packet, wherein each entry of the DF table comprises a VN ID, a port, and a DF flag.
  8. The apparatus according to claim 7, further comprising a second VN ID acquiring module (403), an egress port acquiring module (404), and a second judging module (405), wherein
    the receiving module (401) is further configured to receive a second multicast packet;
    the determining module (402) is further configured to determine a sender of the second multicast packet, and trigger the second VN ID acquiring module when it is determined that the sender of the second multicast packet is an NVE;
    the second VN ID acquiring module (403) is configured to perform NVO3 decapsulation on the second multicast packet, and acquire a VN ID of the second multicast packet from an NVO3 header of the second multicast packet;
    the egress port acquiring module (404) is configured to look up a local multicast forwarding entry corresponding to the VN ID of the second multicast packet, and acquire an egress port in the local multicast forwarding entry; and
    the second judging module (405) is configured to determine whether the egress port is a DF of the VN ID of the second multicast packet, and process the decapsulated second multicast packet according to a result of the determining.
  9. The apparatus according to claim 8, wherein the apparatus further comprises a first discarding module (406);
    the second judging module is further configured to trigger the first discarding module when it is determined that the egress port is not the DF of the VN ID of the second multicast packet; and
    the first discarding module is configured to discard, as triggered by the second judging module, the decapsulated second multicast packet.
  10. The apparatus according to claim 9, wherein the apparatus further comprises a third judging module (407);
    the second judging module (405) is further configured to trigger the third judging module when it is determined that the egress port is the DF of the VN ID of the second multicast packet; and
    the third judging module (407) is configured to determine whether the second multicast packet and the egress port have a same LAG ID, wherein a LAG ID of the second multicast packet is obtained from the NVO3 header of the second multicast packet.
  11. The apparatus according to claim 10, wherein the apparatus further comprises a second discarding module (408);
    the third judging module (407) is further configured to trigger the second discarding module when it is determined that the second multicast packet and the egress port have a same LAG ID; and
    the second discarding module (408) is configured to discard, as triggered by the third judging module, the decapsulated second multicast packet.
  12. The apparatus according to claim 10, wherein the apparatus further comprises a third sending module (409);
    the third judging module (407) is further configured to trigger the third sending module (409) when it is determined that the second multicast packet and the egress port do not have a same LAG ID; and
    the third sending module (409) is configured to forward the decapsulated second multicast packet by using the egress port.
  13. A network virtualization over layer 3, NVO3, network, wherein the NVO3 network comprises a multihomed tenant end system, TES, a first multihoming network virtualization edge, NVE, and a second multihoming NVE, and the multihomed TES is separately connected to the first multihoming NVE and the second multihoming NVE, wherein:
    the multihomed TES is configured to send a first multicast packet; and
    the first multihoming NVE is configured to receive the first multicast packet, determine a sender of the first multicast packet, and, when it is determined that the first multicast packet is sent by the multihomed TES, acquire an ingress port of the first multicast packet and a virtual local area network identifier, VLAN ID, of the first multicast packet; acquire a virtual overlay network identifier, VN ID, of the first multicast packet according to the ingress port of the first multicast packet and the VLAN ID; determine whether the ingress port is a designated forwarder, DF, of the VN ID of the first multicast packet; when the ingress port is not the DF of the VN ID of the first multicast packet, encapsulate the first multicast packet with an extended NVO3 header; and send the first multicast packet that is encapsulated with the extended NVO3 header to another NVE that comprises the second multihoming NVE, wherein the extended NVO3 header carries the VN ID of the first multicast packet and a link aggregation group identifier, LAG ID, that corresponds to the ingress port;
    wherein the first multihoming NVE searches a DF table according to the ingress port and the VN ID of the first multicast packet, and determines whether the ingress port is the DF of the VN ID of the first multicast packet, wherein each entry of the DF table comprises a VN ID, a port, and a DF flag.
EP13888499.4A 2013-06-28 2013-06-28 Method and device for processing a multicast packet on an NVO3 network, and NVO3 network Active EP3001609B1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2013/078386 WO2014205784A1 (fr) 2013-06-28 2013-06-28 Method and device for processing a multicast packet on an NVO3 network, and NVO3 network

Publications (3)

Publication Number Publication Date
EP3001609A1 (fr) 2016-03-30
EP3001609A4 (fr) 2016-06-01
EP3001609B1 (fr) 2017-08-09

Family

ID=52140865

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13888499.4A 2013-06-28 2013-06-28 Method and device for processing a multicast packet on an NVO3 network, and NVO3 network Active EP3001609B1 (fr)

Country Status (4)

Country Link
US (1) US9768968B2 (fr)
EP (1) EP3001609B1 (fr)
CN (1) CN105264834B (fr)
WO (1) WO2014205784A1 (fr)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016063267A1 * 2014-10-24 2016-04-28 Telefonaktiebolaget L M Ericsson (Publ) Multicast traffic management in an overlay network
US9716660B2 (en) * 2014-12-11 2017-07-25 Intel Corporation Hierarchical enforcement of service flow quotas
CN106209553B (zh) 2015-04-30 2019-07-23 华为技术有限公司 报文处理方法、设备及系统
CN106209689B (zh) * 2015-05-04 2019-06-14 新华三技术有限公司 从vxlan至vlan的组播数据报文转发方法和设备
CN106209648B (zh) 2015-05-04 2019-06-14 新华三技术有限公司 跨虚拟可扩展局域网的组播数据报文转发方法和设备
CN106209636B (zh) 2015-05-04 2019-08-02 新华三技术有限公司 从vlan至vxlan的组播数据报文转发方法和设备
US9749221B2 (en) * 2015-06-08 2017-08-29 International Business Machines Corporation Multi-destination packet handling at overlay virtual network tunneling endpoints
CN106936939B (zh) * 2015-12-31 2020-06-02 华为技术有限公司 一种报文处理方法、相关装置及nvo3网络系统
CN106941437B (zh) 2016-01-04 2020-11-17 中兴通讯股份有限公司 一种信息传输方法及装置
US10063407B1 (en) 2016-02-08 2018-08-28 Barefoot Networks, Inc. Identifying and marking failed egress links in data plane
US10313231B1 (en) 2016-02-08 2019-06-04 Barefoot Networks, Inc. Resilient hashing for forwarding packets
US10326694B2 (en) * 2016-04-08 2019-06-18 Cisco Technology, Inc. Asymmetric multi-destination traffic replication in overlay networks
CN107733765B (zh) * 2016-08-12 2020-09-08 中国电信股份有限公司 映射方法、系统和相关设备
US10084687B1 (en) 2016-11-17 2018-09-25 Barefoot Networks, Inc. Weighted-cost multi-pathing using range lookups
US10237206B1 (en) 2017-03-05 2019-03-19 Barefoot Networks, Inc. Equal cost multiple path group failover for multicast
US10404619B1 (en) 2017-03-05 2019-09-03 Barefoot Networks, Inc. Link aggregation group failover for multicast
CN107547341B (zh) * 2017-06-23 2020-07-07 新华三技术有限公司 虚拟扩展局域网vxlan的接入方法及装置
US11102108B2 (en) * 2017-08-31 2021-08-24 Oracle International Corporation System and method for a multicast send duplication instead of replication in a high performance computing environment
US11115330B2 (en) * 2018-03-14 2021-09-07 Juniper Networks, Inc. Assisted replication with multi-homing and local bias
CN113765815B (zh) * 2020-06-05 2024-03-26 华为技术有限公司 组播报文负载分担的方法、设备和系统
CN112019420B (zh) * 2020-09-04 2022-03-29 苏州盛科科技有限公司 一种vxlan边缘节点组播报文转发的实现方法及装置
CN112291116A (zh) * 2020-11-23 2021-01-29 迈普通信技术股份有限公司 链路故障检测方法、装置及网络设备
US11652748B2 (en) * 2021-07-01 2023-05-16 Vmware, Inc. Multicast routing through multi-tier edge gateways

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4836008B2 (ja) * 2006-01-06 2011-12-14 日本電気株式会社 通信システム、通信方法、ノード、およびノード用プログラム
US8724513B2 (en) * 2009-09-25 2014-05-13 Qualcomm Incorporated Methods and apparatus for distribution of IP layer routing information in peer-to-peer overlay networks
US8837493B2 (en) * 2010-07-06 2014-09-16 Nicira, Inc. Distributed network control apparatus and method
US8665883B2 (en) * 2011-02-28 2014-03-04 Alcatel Lucent Generalized multi-homing for virtual private LAN services
JP5797849B2 (ja) * 2011-11-03 2015-10-21 華為技術有限公司Huawei Technologies Co.,Ltd. ホストが仮想プライベートネットワークに参加/離脱するための境界ゲートウェイプロトコルの拡張
US20130142201A1 (en) * 2011-12-02 2013-06-06 Microsoft Corporation Connecting on-premise networks with public clouds
CN103580980B (zh) * 2012-07-24 2019-05-24 中兴通讯股份有限公司 虚拟网络自动发现和自动配置的方法及其装置
CN102868642B (zh) * 2012-10-09 2015-11-18 盛科网络(苏州)有限公司 在asic中实现nvgre报文转发的方法和装置
US9350558B2 (en) * 2013-01-09 2016-05-24 Dell Products L.P. Systems and methods for providing multicast routing in an overlay network
CN103095546B (zh) * 2013-01-28 2015-10-07 华为技术有限公司 一种处理报文的方法、装置及数据中心网络
US9660905B2 (en) * 2013-04-12 2017-05-23 Futurewei Technologies, Inc. Service chain policy for distributed gateways in virtual overlay networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
EP3001609A1 (fr) 2016-03-30
EP3001609A4 (fr) 2016-06-01
CN105264834A (zh) 2016-01-20
US9768968B2 (en) 2017-09-19
US20160142220A1 (en) 2016-05-19
CN105264834B (zh) 2018-12-07
WO2014205784A1 (fr) 2014-12-31

Similar Documents

Publication Publication Date Title
EP3001609B1 (fr) Procédé et dispositif de traitement de message de diffusion groupée dans un réseau nvo3, et réseau nvo3
US11240065B2 (en) NSH encapsulation for traffic steering
US10985945B2 (en) Method and system for virtual and physical network integration
US9374323B2 (en) Communication between endpoints in different VXLAN networks
US10237230B2 (en) Method and system for inspecting network traffic between end points of a zone
US10050877B2 (en) Packet forwarding method and apparatus
KR102054338B1 (ko) 개별 관리들을 이용하는 vlan 태깅된 패킷들의 가상 포워딩 인스턴스들의 원단 주소들로의 라우팅
US10200212B2 (en) Accessing IP network and edge devices
CN114124618B (zh) 一种报文传输方法及电子设备
CN102413061B (zh) 一种报文传输方法及设备
US10924299B2 (en) Packet forwarding
WO2016101646A1 (fr) Procédé et appareil d'accès destinés à un réseau virtuel ethernet
WO2020108531A1 (fr) Transfert de paquets
WO2015135499A1 (fr) Virtualisation de réseaux
US20180270084A1 (en) Technique for exchanging datagrams between application modules
EP2775663B1 (fr) Procédé et dispositif de rétablissement d'un service à la clientèle
US10757066B2 (en) Active-active access to transparent interconnection of lots of links (TRILL) edges
CN103685007B (zh) 一种边缘设备报文转发时的mac学习方法及边缘设备
EP2728795A1 (fr) Procédé de traitement, dispositif et système de contrôle de diffusion de paquets
US20170264461A1 (en) Communication apparatus and communication method

Legal Events

PUAI  Public reference made under article 153(3) EPC to a published international application that has entered the European phase (ORIGINAL CODE: 0009012)

17P   Request for examination filed (effective date: 20151222)

AK    Designated contracting states (kind code of ref document: A1). Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX    Request for extension of the European patent. Extension state: BA ME

A4    Supplementary search report drawn up and despatched (effective date: 20160429)

RIC1  Information provided on IPC code assigned before grant. IPC: H04L 12/46 20060101 AFI20160422BHEP; H04L 12/709 20130101 ALI20160422BHEP; H04L 12/18 20060101 ALI20160422BHEP

DAX   Request for extension of the European patent (deleted)

GRAP  Despatch of communication of intention to grant a patent (ORIGINAL CODE: EPIDOSNIGR1)

INTG  Intention to grant announced (effective date: 20170221)

GRAS  Grant fee paid (ORIGINAL CODE: EPIDOSNIGR3)

GRAA  (Expected) grant (ORIGINAL CODE: 0009210)

AK    Designated contracting states (kind code of ref document: B1). Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG   Reference to a national code: GB, legal event code FG4D

REG   Reference to a national code: CH, legal event code EP; AT, legal event code REF, ref document number 917927 (AT, kind code T), effective date 20170815

REG   Reference to a national code: IE, legal event code FG4D

REG   Reference to a national code: DE, legal event code R096, ref document number 602013024988 (DE)

REG   Reference to a national code: NL, legal event code MP, effective date 20170809

REG   Reference to a national code: LT, legal event code MG4D

REG   Reference to a national code: AT, legal event code MK05, ref document number 917927 (AT, kind code T), effective date 20170809

In the PG25 and PGFP entries below, announced via postgrant information from the national office to the EPO, "translation/fee" means lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit, and "fees" means lapse because of non-payment of due fees.

PG25  Lapsed in a contracting state (translation/fee): FI, LT, HR, SE, NL, AT, effective 20170809; NO, effective 20171109

PG25  Lapsed in a contracting state (translation/fee): RS, LV, ES, PL, effective 20170809; BG, effective 20171109; GR, effective 20171110; IS, effective 20171209

PG25  Lapsed in a contracting state (translation/fee): RO, DK, CZ, effective 20170809

REG   Reference to a national code: DE, legal event code R097, ref document number 602013024988 (DE)

PG25  Lapsed in a contracting state (translation/fee): EE, SK, SM, IT, effective 20170809

PLBE  No opposition filed within time limit (ORIGINAL CODE: 0009261)

STAA  Information on the status of an EP patent application or granted EP patent. Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N   No opposition filed (effective date: 20180511)

PG25  Lapsed in a contracting state (translation/fee): SI, effective 20170809

REG   Reference to a national code: CH, legal event code PL

REG   Reference to a national code: BE, legal event code MM, effective date 20180630

PG25  Lapsed in a contracting state: LU (fees), effective 20180628; MC (translation/fee), effective 20170809

REG   Reference to a national code: IE, legal event code MM4A

PG25  Lapsed in a contracting state (fees): LI, CH, FR, effective 20180630; IE, effective 20180628

PG25  Lapsed in a contracting state (fees): BE, effective 20180630

PG25  Lapsed in a contracting state (fees): MT, effective 20180628

PG25  Lapsed in a contracting state (translation/fee): TR, effective 20170809

PG25  Lapsed in a contracting state (translation/fee): PT, effective 20170809

PG25  Lapsed in a contracting state: CY (translation/fee), effective 20170809; HU (translation/fee; invalid ab initio), effective 20130628; MK (fees), effective 20170809

PG25  Lapsed in a contracting state (translation/fee): AL, effective 20170809

PGFP  Annual fee paid to national office: DE, payment date 20250507, year of fee payment 13

PGFP  Annual fee paid to national office: GB, payment date 20250508, year of fee payment 13