CN116800662A - Message processing methods, devices, equipment and storage media
- Publication number: CN116800662A (application No. CN202210743425.6A)
- Authority
- CN
- China
- Prior art keywords
- sid
- segment
- message
- length
- node
- Legal status
- Pending
Abstract
This application provides a message processing method, apparatus, device, and storage medium, and belongs to the field of network technology. Compared with fixed-length SID schemes, the length of a SID in this application is not a fixed value but is shortened, so the SID is compressed and, in turn, the segment list is compressed, thereby saving message overhead.
Description
The present application claims priority to Chinese patent application No. 202210270908.9, entitled "method and apparatus for message processing", filed on March 18, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of network technologies, and in particular, to a method, an apparatus, a device, and a storage medium for processing a message.
Background
Segment routing (SR) is a technique for forwarding data messages based on a source routing mechanism. Its basic principle is that an ingress node inserts a segment list into a data message, where the segment list includes a series of segment IDs (SIDs) that explicitly indicate the nodes or links the forwarding path traverses.
At present, the SID in a message is usually of fixed length, i.e., the length of the SID is a fixed value. However, in the fixed-length SID scheme, the SID occupies a large amount of space in the message, resulting in large message overhead.
Disclosure of Invention
The embodiment of the application provides a message processing method, a device, equipment and a storage medium, which can save message overhead. The technical scheme is as follows.
In a first aspect, a method for processing a message is provided, where the method includes:
the forwarding device receives a first message, where a destination address of the first message includes a first SID, the first message includes a first remaining segment, a first identifier, and a segment list, the first remaining segment is used to indicate an offset of the first SID in the segment list, the first SID includes a first parameter, and the first identifier is used to identify that the SIDs in the segment list differ in length;
the forwarding device obtains a second message based on the first message, where the second message includes a second remaining segment determined based on the first remaining segment and the first parameter, a destination address of the second message includes a second SID, and the second SID is a SID determined based on the second remaining segment in the segment list included in the second message;
The forwarding device sends the second message based on the second SID.
Compared with the fixed-length SID method, in the method of the first aspect the length of the SID is not a fixed value but is shortened, so the SID is compressed, the segment list is compressed in turn, and message overhead is saved.
In some embodiments, the first message further includes a segment length (segment length), and the first parameter is used to indicate a multiple of a next SID of the first SID relative to the segment length.
With this embodiment, the length of the SID can be defined according to the compression requirement, so a flexible and variable-length compression scheme is supported; in addition, the length of the SID can be obtained by the data plane by reading the message header, without requiring control plane extensions.
In some embodiments, the first message and the second message are unicast messages.
In some embodiments, the first message further includes a segment routing header SRH, the SRH including the first remaining segment, the first identifier, and the segment list.
In some embodiments, the first message further includes a segment length, the first SID further includes a second parameter, the first parameter is used to indicate a multiple of a next SID of the first SID relative to the segment length, the second parameter is used to indicate an offset of the next SID of the first SID relative to the first SID, and the second SID is a SID in the segment list determined based on the second remaining segment and the second parameter.
With this implementation, the group of downstream nodes to which replicated messages need to be forwarded can be located, thereby supporting path programming in multicast scenarios.
In some embodiments, the first message and the second message are multicast messages.
In some embodiments, the first message includes a multicast segment routing header, MRH, including the first remaining segment, the segment list, and the first identifier.
In some embodiments, the segment length is less than 128 bits.
In some embodiments, before the forwarding device receives the first packet, the method further includes:
the forwarding device receives a third message sent by the next-hop device, where the third message includes the second SID and a flavor of the second SID, and the flavor identifies a compression mode of the second SID.
In a second aspect, a method for processing a message is provided, where the method includes:
the forwarding device receives a first message, where a destination address of the first message includes a first SID, the first message includes a segment length, a first remaining segment, a first identifier, and a segment list, the first remaining segment is used to indicate an offset of the first SID in the segment list, and the first identifier is used to identify that the SIDs in the segment list have the same length;
The forwarding device obtains a second message based on the first message, wherein the second message comprises the segment length, a second remaining segment and the segment list, the second remaining segment is an offset determined based on the first remaining segment, a destination address of the second message comprises a second SID, and the second SID is a SID determined based on the second remaining segment and the segment length in the segment list included in the second message;
the forwarding device sends the second message based on the second SID.
In the method provided in the second aspect, the segment length is not fixed at 128 bits but is shortened, so a flexible, variable-length compression scheme is supported, which helps reduce message overhead.
In some embodiments, the segment length is less than 128 bits.
In some embodiments, the first message and the second message are unicast messages.
In some embodiments, the first message further includes a segment routing header SRH, the SRH including the segment length, the first remaining segment, a first identifier, and the segment list.
In some implementations, the first SID includes a first parameter for indicating an offset of a next SID of the first SID relative to the first SID, the second SID being a SID in the segment list determined based on the second remaining segment, the first parameter, and the segment length.
In some embodiments, the first message and the second message are multicast messages.
In some embodiments, the first message includes a multicast segment routing header, MRH, including the segment length, the first remaining segment, the segment list, and the first identification.
In a third aspect, a message processing apparatus is provided, where the message processing apparatus is provided on the forwarding device in the first aspect or any optional manner of the first aspect. The message processing apparatus includes at least one unit configured to implement the method provided in the first aspect or any of the alternatives of the first aspect. In some embodiments, the units in the message processing apparatus provided in the third aspect are implemented by software, and the units in the message processing apparatus are program modules. In other embodiments, the units in the message processing apparatus provided in the third aspect are implemented by hardware or firmware. For details of the message processing apparatus provided in the third aspect, refer to the first aspect or any optional manner of the first aspect, which are not described herein again.
In a fourth aspect, there is provided a message processing apparatus provided on a forwarding device in the second aspect or any of the alternatives of the second aspect. The message processing apparatus comprises at least one unit configured to implement the method provided in the second aspect or any of the optional manners of the second aspect. In some embodiments, the unit in the message processing apparatus provided in the fourth aspect is implemented by software, and the unit in the message processing apparatus is a program module. In other embodiments, the units in the message processing apparatus provided in the fourth aspect are implemented by hardware or firmware. The details of the message processing apparatus provided in the fourth aspect may be found in the second aspect or any optional manner of the second aspect, which is not described herein.
In a fifth aspect, there is provided a forwarding device comprising a processor coupled to a memory, the memory having stored therein at least one computer program instruction that is loaded and executed by the processor to cause the forwarding device to implement the method provided in the first aspect or any of the alternatives of the first aspect. The details of the forwarding device provided in the fifth aspect may be referred to the above first aspect or any optional manner of the first aspect, which is not described herein.
In a sixth aspect, there is provided a forwarding device comprising a processor coupled to a memory, the memory having stored therein at least one computer program instruction that is loaded and executed by the processor to cause the forwarding device to implement the method provided in the second aspect or any of the alternatives of the second aspect. The details of the forwarding device provided in the sixth aspect may be referred to in the second aspect or any optional manner of the second aspect, which are not described herein.
In a seventh aspect, there is provided a forwarding device including a main control board and an interface board, and further including a switch fabric board. The forwarding device is configured to perform the method of the first aspect or any possible implementation of the first aspect. In particular, the forwarding device includes units for performing the method of the first aspect or any possible implementation of the first aspect. In one possible implementation, an inter-process communication (IPC) channel is established between the main control board and the interface board, and the main control board and the interface board communicate through the IPC channel.
In an eighth aspect, there is provided a forwarding device including a main control board and an interface board, and further including a switch fabric board. The forwarding device is configured to perform the method of the second aspect or any possible implementation of the second aspect. In particular, the forwarding device includes units for performing the method of the second aspect or any possible implementation of the second aspect. In one possible implementation, an IPC channel is established between the main control board and the interface board, and the main control board and the interface board communicate through the IPC channel.
In a ninth aspect, there is provided a computer readable storage medium having stored therein at least one instruction that when executed on a computer causes the computer to perform the method provided in the first aspect or any of the alternatives of the first aspect.
In a tenth aspect, there is provided a computer readable storage medium having stored therein at least one instruction which when executed on a computer causes the computer to perform the method provided in the second aspect or any of the alternatives of the second aspect.
In an eleventh aspect, there is provided a computer program product comprising one or more computer program instructions which, when loaded and run by a computer, cause the computer to carry out the method provided in the first aspect or any of the alternatives of the first aspect.
In a twelfth aspect, there is provided a computer program product comprising one or more computer program instructions which, when loaded and run by a computer, cause the computer to carry out the method provided in the second aspect or any of the alternatives of the second aspect described above.
In a thirteenth aspect, a chip is provided, comprising a memory for storing computer instructions and a processor for calling and executing the computer instructions from the memory to perform the method of the first aspect and any possible implementation of the first aspect.
In a fourteenth aspect, there is provided a chip comprising a memory for storing computer instructions and a processor for calling and executing the computer instructions from the memory to perform the method provided in the second aspect or any of the alternatives of the second aspect described above.
Drawings
Fig. 1 is a schematic diagram of a unicast scenario provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a multicast scenario provided in an embodiment of the present application;
FIG. 3 is a flowchart of a message processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a SID structure according to an embodiment of the present application;
FIG. 5 is a diagram illustrating a message format according to an embodiment of the present application;
FIG. 6 is a flow chart of a SID notification method provided by an embodiment of the present application;
FIG. 7 is a diagram illustrating a message format according to an embodiment of the present application;
FIG. 8 is a diagram illustrating a message format according to an embodiment of the present application;
FIG. 9 is a diagram illustrating a message format according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a message processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a forwarding device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a forwarding device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
In this specification, unless otherwise indicated, the terms "node" and "device" denote the same concept, and a device in this specification may optionally be a forwarding device. A forwarding device is a device whose primary function is to forward messages to other devices, such as a router or a switch. Alternatively, the device in this specification is a host or a server.
The application scenario of the embodiment of the present application is illustrated below.
In the dimension of the network communication mode, the application scenarios of the embodiments of this application include a unicast scenario and a multicast scenario; in the dimension of the SID arrangement, the application scenarios include a mixed-length scenario and a uniform-length scenario. The unicast scenario and the multicast scenario are illustrated in (1) and (2) respectively, and the mixed-length scenario and the uniform-length scenario are illustrated in (A) and (B) respectively.
(1) Unicast scenario
Fig. 1 is a schematic diagram of a unicast scenario provided in an embodiment of this application. The scenario shown in fig. 1 includes an ingress node, an intermediate node, and an egress node. The ingress node corresponds to the entry of the SR domain and is operable to add a segment routing header (Segment Routing Header, SRH) to the data message; an ingress node is also referred to as a head node or a start node. The intermediate node is a node that the segment list in the SRH indicates needs to be traversed. The intermediate node is configured to locate the next SID from the segment list and forward the message to the next-hop node based on the next SID. The egress node corresponds to the exit of the SR domain and is configured to decapsulate the SRH and forward the data message according to the destination address of the data message.
(2) Multicast scenario
Multicast, also known as point-to-multipoint (point to multipoint, P2MP) communication, is used to forward data messages from one node to a group of nodes. Fig. 2 is a schematic diagram of a multicast scenario provided in an embodiment of this application. The scenario shown in fig. 2 includes an ingress node, an intermediate node 1, an intermediate node 2, an egress node 3, an egress node 4, an egress node 5, and an egress node 6. The ingress node corresponds to the entry of the multicast domain, or the root node of the multicast tree. The ingress node is configured to add a multicast segment routing header (Multicast Routing Header, MRH) to the data message, replicate the message including the MRH to obtain 2 messages, forward one of the replicated messages to the intermediate node 1, and forward the other replicated message to the intermediate node 2.
The intermediate node is a node that the segment list in the MRH indicates needs to be traversed. The intermediate node 1 is configured to receive the message sent by the ingress node and locate the SID of the egress node 3 and the SID of the egress node 4 from the segment list of the message; the intermediate node 1 replicates the message to obtain 2 messages, forwards one replicated message to the egress node 3 based on the SID of the egress node 3, and forwards the other replicated message to the egress node 4 based on the SID of the egress node 4. The intermediate node 2 is configured to receive the message sent by the ingress node and locate the SID of the egress node 5 and the SID of the egress node 6 from the segment list of the message; the intermediate node 2 replicates the message to obtain 2 messages, forwards one replicated message to the egress node 5 based on the SID of the egress node 5, and forwards the other replicated message to the egress node 6 based on the SID of the egress node 6. The egress node 3, the egress node 4, the egress node 5, and the egress node 6 correspond to exits of the multicast domain, or leaf nodes of the multicast tree. The egress node is used to decapsulate the MRH and forward the data message according to the destination address of the data message.
In a multicast scenario, when a message is replicated and forwarded from an upstream node to a plurality of downstream nodes, it can be understood that a parent-child relationship is established between the upstream node and the plurality of downstream nodes. To distinguish node roles in the description, a node that replicates a message is referred to below as a parent node, and a node that receives a replicated message is referred to below as a child node. For example, in the scenario shown in fig. 2, the ingress node has a parent-child relationship with the intermediate node 1 and the intermediate node 2; among these three nodes, the ingress node is the parent node of the intermediate node 1 and the intermediate node 2, and the intermediate node 1 and the intermediate node 2 are child nodes of the ingress node. The intermediate node 1 has a parent-child relationship with the egress node 3 and the egress node 4; among these three nodes, the intermediate node 1 is the parent node of the egress node 3 and the egress node 4, and the egress node 3 and the egress node 4 are child nodes of the intermediate node 1. A child node of a parent node may also be referred to as a next-hop node of that parent node.
(A) Mixed-length scenario
Mixed-length encoding means that one segment list contains SIDs of different lengths. A typical mixed-length scenario is one in which a segment list contains both compressed and uncompressed SIDs, for example a 32-bit SID and a 128-bit SID. For example, in the scenario shown in fig. 1, the mixed-length scenario may be a segment list in which the length of the SID of the intermediate node is different from the length of the SID of the egress node. For example, in the scenario shown in fig. 2, the mixed-length scenario may be a segment list in which the length of the SID of the intermediate node 1 is different from the length of the SID of the intermediate node 2, or, for example, a segment list in which the length of the SID of the intermediate node 1 is different from the lengths of the SID of the egress node 3 and the SID of the egress node 4.
(B) Uniform-length scenario
Uniform-length encoding means that every SID in a segment list has the same length. For example, in the scenario shown in fig. 1, the uniform-length scenario may be a segment list in which the length of the SID of the ingress node, the length of the SID of the intermediate node, and the length of the SID of the egress node are all the same.
The following describes the technical solutions of the embodiments of this application based on the above network scenarios; they include two major parts: a data plane scheme and a control plane scheme.
Fig. 3 is a flowchart of a message processing method according to an embodiment of the present application, where the method shown in fig. 3 is an illustration of a data plane scheme, and the method shown in fig. 3 includes the following steps S301 to S309. The interaction body of the method shown in fig. 3 includes an ingress node, an intermediate node, and an egress node, where the ingress node, the intermediate node, and the egress node may be forwarding devices.
For example, the method shown in fig. 3 is used for a unicast scenario; for example, the network scenario on which the method shown in fig. 3 is based is as shown in fig. 1 above. In that case, the ingress node in the method of fig. 3 is the ingress node in fig. 1, the intermediate node in the method of fig. 3 is the intermediate node in fig. 1, and the egress node in the method of fig. 3 is the egress node in fig. 1. Alternatively, the method shown in fig. 3 is used for a multicast scenario; for example, the network scenario on which the method shown in fig. 3 is based is as shown in fig. 2 above. In that case, the ingress node in the method shown in fig. 3 is the ingress node in fig. 2; when the intermediate node in the method shown in fig. 3 is the intermediate node 1 in fig. 2, the egress node in the method shown in fig. 3 is the egress node 3 or the egress node 4 in fig. 2, and when the intermediate node in the method shown in fig. 3 is the intermediate node 2 in fig. 2, the egress node in the method shown in fig. 3 is the egress node 5 or the egress node 6 in fig. 2.
Step S301, the ingress node receives the data packet.
The data message mentioned in the embodiments of this application is also called a service message. For example, in the scenario shown in fig. 1, the data message received by the ingress node is a unicast message. In the scenario shown in fig. 2, the data message received by the ingress node is a multicast message. For example, the data message in the embodiments of this application is an internet protocol version 4 (IPv4) message or an internet protocol version 6 (IPv6) message.
Step S302, the ingress node obtains a first message based on the data message.
The payload of the first message includes the data message, and the destination address (destination address, DA) of the first message includes the first SID. The first message includes information added by the ingress node; this information is described in (1) to (4) below.
(1) First identifier
For example, the first identifier is used to indicate whether the SIDs in the segment list have the same length, i.e., the first identifier identifies whether the segment list uses mixed-length encoding. For example, the value of the first identifier includes a first value and a second value. When the value of the first identifier is the first value, it indicates that the SIDs in the segment list differ in length, i.e., the segment list uses the mixed-length mode. When the value of the first identifier is the second value, it indicates that the SIDs in the segment list have the same length, i.e., the segment list uses the uniform-length mode. Alternatively, the first identifier may be embodied as a 1-bit identification bit. When the value of the identification bit is 1, i.e., the identification bit is set, it indicates that the SIDs in the segment list differ in length; when the value of the identification bit is 0, i.e., the identification bit is not set, it indicates that the SIDs in the segment list have the same length. Here, 1 is an example of the above first value and 0 is an example of the above second value.
For example, the carrying manner of the first identifier in the first message includes the following possible implementation manners. In some embodiments, the segment routing header of the first message includes a first identification. Optionally, a flag field of the segment routing header includes a first identification. Alternatively, the type length value (type length value, TLV) field of the segment routing header includes a first identifier or other spare field includes a first identifier, and the carrying position of the first identifier is not limited in this embodiment. For example, in a unicast scenario, the SRH of the first message includes a first identifier; in a multicast scenario, the MRH of the first message includes a first identifier.
(2) Segment length
For example, the segment length is used to indicate the unit length of compression. The value of the segment length covers several cases. In some embodiments, the segment length is less than 128 bits. Optionally, the segment length is an integer multiple of 8 bits; for example, the segment length is 8 bits, 16 bits, 32 bits, or 64 bits. Designing the segment length as an integer multiple of 8 bits makes byte alignment convenient and gives good extensibility. For example, a segment length of 32 bits is consistent with the length of an MPLS label or an IPv4 address and is therefore friendlier to hardware processing on devices; alternatively, the segment length is 20 bits or 10 bits. In other embodiments, the segment length is greater than 128 bits; in still other embodiments, the segment length is equal to 128 bits. The value of the segment length may be defined as required, and the value of the segment length is not limited in this embodiment.
The length of a SID in the segment list is related to the above segment length. For example, the length of a SID in the segment list is equal to the segment length, or the length of a SID in the segment list is n times the segment length, where n is an integer greater than or equal to 1. For example, if the segment length is 16 bits, the length of a SID in the segment list is any one of 16 bits, 32 bits, 48 bits, 64 bits, 80 bits, and so on. By designing the length of each SID in the segment list to be an integer multiple of the segment length, every offset is an integer multiple of a fixed amount (i.e., the segment length), which avoids the implementation difficulty of fully flexible compression, where fully flexible compression means that each offset may be any integer number of bits.
The multiples of the lengths of different SIDs in the segment list relative to the above segment length cover several cases. For example, in a uniform-length scenario, every SID in the segment list has the same multiple relative to the segment length. For example, the segment length is 16 bits and every SID in the segment list is 16 bits long; as another example, the segment length is 16 bits and every SID in the segment list is 32 bits long (i.e., twice the segment length). As another example, in a mixed-length scenario, the SIDs in the segment list do not all have the same multiple relative to the segment length. For example, the segment length is 16 bits, one SID in the segment list is 16 bits (i.e., one times the segment length), and another SID in the segment list is 32 bits (i.e., twice the segment length).
Defining the segment length in the message allows the length of the SID to be defined according to the compression requirement, so a flexible, variable-length compression scheme is supported; in addition, the length of the SID can be obtained by the data plane by reading the message header, without requiring management plane or control plane extensions. The effect of flexible variable length is manifested in two ways. In the first aspect, the lengths of the SIDs in different messages or different segment lists may be specified by different segment lengths, so different messages are supported to have SIDs of different lengths, or different segment lists are supported to use different compression modes. In an exemplary scenario, when the payload data volume of the forwarded message is large (e.g., when forwarding video traffic), the requirement on message header overhead is typically not high, and the segment length or SID length may take a relatively long value; when the payload data volume of the forwarded message is small (e.g., when forwarding an industrial control message), the requirement on message header overhead is generally high, and the segment length or SID length may take a relatively short value, where an industrial control message is a message transmitted in communication between devices such as instruments, controllers, and servo motors in an industrial field network. In summary, the segment length in the messages of different services can be flexibly defined according to the compression requirements of the different services. In the second aspect, SIDs of different lengths are supported in the same segment list. In one exemplary scenario, a segment list includes a series of compressed SIDs and one uncompressed SID; for example, the last SID in the segment list is a 128-bit virtual private network (Virtual Private Network, VPN) SID and each SID before the last SID in the segment list is a compressed SID. In another exemplary scenario, where part of the network traversed by one forwarding path supports SID compression and part does not, the segment list includes compressed SIDs and uncompressed SIDs. Based on the method of this embodiment, the multiples of different SIDs in the segment list relative to the segment length may differ, and the lengths of the SIDs in the segment list can be indicated through the segment length and the multiples of the SIDs relative to the segment length (optionally determined by parameters in the SIDs), thereby supporting SIDs of different lengths in the same segment list.
In some embodiments, before the message on the data plane carries the segment length, the segment length may be learned through control plane configuration or protocol interaction. For example, the control plane is extended so that when a SID is flooded between forwarding devices through an interior gateway protocol (IGP), the IGP packet carries not only the SID but also an added field indicating the segment length corresponding to the SID. As another example, when the controller issues an SR policy through border gateway protocol (Border Gateway Protocol, BGP) SR policy, the segment length is carried in the SR policy. As another alternative, the segment length is statically configured on the forwarding device. This embodiment does not limit how the device obtains the segment length.
The carrying location of the segment length in the first message includes a plurality of implementations. In some implementations, the segment routing header of the first message includes segment length. For example, in a unicast scenario, the SRH of the first packet includes segment length; in the multicast scenario, the MRH of the first message includes segment length.
(3) Remaining segments (segment left, SL)
For example, segment left corresponds to a pointer for determining the next SID in the segment list; that is, segment left is used to indicate the position of the next SID in the segment list, so that the next SID can be used as the destination address. If the IPv6 protocol is employed, the next SID can be used as the destination address in the IPv6 base header.
For example, a forwarding path of a message in a unicast scenario is ingress node → first intermediate node → second intermediate node → third intermediate node → egress node, where the first intermediate node is the next-hop node of the ingress node, the second intermediate node is the next-hop node of the first intermediate node, the third intermediate node is the next-hop node of the second intermediate node, and the egress node is the next-hop node of the third intermediate node. In this scenario, the operation flow of each node on segment left is, for example, as follows. The segment left in the message sent by the ingress node points to the SID of the first intermediate node on the forwarding path, and the destination address of the message includes the SID of the first intermediate node. After receiving the message, the first intermediate node updates segment left; the updated segment left points to the SID of the second intermediate node on the forwarding path, the first intermediate node locates the SID of the second intermediate node based on the updated segment left, and forwards the message after updating the destination address of the message based on the SID of the second intermediate node. After receiving the message, the second intermediate node updates segment left; the updated segment left points to the SID of the third intermediate node on the forwarding path, the second intermediate node locates the SID of the third intermediate node based on the updated segment left, and forwards the message after updating the destination address of the message based on the SID of the third intermediate node. Similarly, after receiving the message, the third intermediate node updates segment left; the updated segment left points to the SID of the egress node, the third intermediate node obtains the SID of the egress node based on the updated segment left, and forwards the message after updating the destination address of the message based on the SID of the egress node. After the egress node receives the message and updates segment left, the updated segment left is usually 0, which indicates that the message has passed through all the nodes and links indicated by the segment list and has reached the end point of the forwarding path.
In some embodiments, the counting unit (or the unit of the number of offset bits) of segment left is the above segment length. For example, the value of segment left represents a multiple of the total length of the remaining SIDs in the segment list relative to the segment length; for example, the total length of the remaining SIDs in the segment list is the product of the value of segment left and the segment length. For example, if the node subtracts one from segment left, it indicates that segment length bits are to be offset in the segment list to locate the next SID; if the node subtracts k from segment left, it indicates that k times segment length bits are to be offset in the segment list to locate the next SID, where k is a positive integer. For example, if the value of segment left in the received message is x and the length of the next SID after the current SID is k times the segment length, the node updates the value of segment left to x - k, offsets k times segment length bits from the current SID in the segment list, and updates the destination address field with the next SID found there.
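The arithmetic above can be sketched as follows. This is a minimal, hypothetical illustration rather than code from this application: it assumes the segment list is stored in reverse path order (as in an SRH), that the segment length is byte-aligned, and that the SID reached when segment left equals s starts at unit offset s from the beginning of the segment list; the function and parameter names are made up for the example.

```python
def locate_next_sid(segment_list: bytes, sl: int, k: int, seg_len_bits: int = 32):
    """Return (updated segment left, next SID).

    sl: current segment left, counted in units of the segment length.
    k:  length of the next SID, as a multiple of the segment length.
    """
    unit_bytes = seg_len_bits // 8            # segment length assumed byte-aligned
    new_sl = sl - k                           # segment left decreases by the multiple k
    start = new_sl * unit_bytes               # next SID assumed to start at unit offset new_sl
    next_sid = segment_list[start:start + k * unit_bytes]
    return new_sl, next_sid
```

For example, with a 32-bit segment length and segment left equal to x, calling locate_next_sid(segment_list, x, k) returns x - k together with the k × 32-bit SID that becomes the new destination address.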
Segment left used in this way locates the SID with a one-dimensional index, which reduces the complexity of locating the SID. Specifically, Generalized SRv6 (G-SRv6) technology locates a SID through a two-dimensional index (segment left and SID index) and requires the control plane to advertise the length of the compressed SID. During forwarding, a 128-bit block containing compressed SIDs is first located in the segment list according to segment left, and then one compressed SID within the 128 bits is located according to the SID index and the length of the compressed SID, so the complexity of locating a compressed SID is relatively high. By giving segment left the meaning described above, this embodiment omits the SID index and indicates the position of the SID through segment left alone, so the complexity is lower than that of G-SRv6.
(4) Segment list
The segment list is used to indicate forwarding paths. The segment list includes at least one SID. Each SID in the segment list identifies a node or a link on the forwarding path. The type of SID in the segment list includes a number of cases. Optionally, the SIDs in the segment list are compressed SIDs. Alternatively, the SIDs in the segment list are complete SIDs.
A complete SID is a SID whose length is 128 bits. As shown in (a) of fig. 4, the data structure of a complete SID generally includes a locator, a function, and at least one parameter (parameters). The locator is used to identify a node in the network (typically the node that issues the SID), so as to direct data messages to be forwarded to that node. The locator can be subdivided into B:N, where B identifies the SID block, which is typically allocated by the operator to a subnet and serves as the common prefix of the SID, and N is the node identifier (node ID), an identifier within the subnet used to distinguish nodes. The function is used to indicate the forwarding action to be performed by the node. The parameters are an optional part of the SID.
A compressed SID is a SID whose length is shortened, i.e., a SID whose length is less than 128 bits. A compressed SID is typically the portion that remains after the common part is removed from a complete SID. For example, considering that different SIDs in a segment list may contain a common part (i.e., content that is identical across different SIDs, such as the common prefix), the common part of a SID is repeated, redundant content; the common part can therefore be extracted, and the part of a SID that differs from the other SIDs can be retained as the compressed SID. In this way, redundant information in the segment list is removed, and compression of the segment list is achieved. Since the segment list is usually the part of the message header with the largest proportion of the length, compressing the segment list compresses the message header and reduces message overhead.
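As a rough numeric illustration of this idea (the addresses and the 64-bit prefix below are made-up example values, not taken from this application), a sketch of stripping a shared prefix from full 128-bit SIDs:

```python
import ipaddress

# Two full 128-bit SIDs that share a 64-bit common prefix (example values only).
full_sids = [ipaddress.IPv6Address("2001:db8:aaaa:0:0:1:0:100"),
             ipaddress.IPv6Address("2001:db8:aaaa:0:0:2:0:200")]

# Keep only the differing low 64 bits in the segment list; the shared prefix is
# carried once (e.g., implied by the locator block) rather than once per SID.
compressed = [int(sid) & ((1 << 64) - 1) for sid in full_sids]
print([hex(c) for c in compressed])   # ['0x100000100', '0x200000200']
```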
Alternatively, as shown in (b) of fig. 4, the data structure of a compressed SID includes three parts: a node ID, a function, and parameters. In this way, compared with compression schemes that extract the parameters as a common part, the parameters are retained, i.e., the parameters are not compressed, so the case where the parameters in each SID differ is accommodated. For example, in a multicast scenario, it is often necessary to carry the two parameters replication number and pointer in the SID to indicate the multicast path, so multicast programming is supported by keeping the parameters uncompressed.
In some embodiments, the parameters of the SID include a length parameter. The length parameter is used to indicate the length of the next SID. For example, the length parameter indicates a multiple of the length of the next SID relative to the segment length. For example, the segment list includes a first SID and a second SID, the second SID being the next SID after the first SID; the first SID includes a first parameter, the first parameter is the length parameter, and the first parameter indicates the multiple of the length of the second SID relative to the segment length. For example, if the value of the length parameter of the first SID is n, it indicates that the length of the second SID is n times the segment length, where n is a positive integer. Designing the length parameter in this way suits the mixed-length scenario and makes it convenient for the device to determine the exact number of bits of the next SID, so that the next SID can be located accurately, and SID lengths need not be advertised through control plane extensions, which reduces complexity.
Optionally, in a multicast scenario, one SID includes a set of length parameters indicating the length of each SID in the next set of SIDs. For example, the next set of SIDs of the first SID includes a second SID and a third SID; the length parameters of the first SID include a first length parameter indicating that the length of the second SID is a first multiple of the segment length and a second length parameter indicating that the length of the third SID is a second multiple of the segment length. The next-hop nodes of the node corresponding to the first SID include the node corresponding to the second SID and the node corresponding to the third SID.
The encoding of length parameters carried in the SIDs of the segment list includes multiple implementations. Optionally, each SID in the segment list carries a length parameter. For example, in a mixed-length scenario, each SID in the segment list optionally carries a length parameter, and the values of the length parameters in different SIDs may differ. For example, the length parameter of the 1st SID in the segment list has the value n1, the length parameter of the 2nd SID has the value n2, and the length parameter of the 3rd SID has the value n3, thereby indicating that the length of the 2nd SID is n1 times the segment length, the length of the 3rd SID is n2 times the segment length, the length of the 4th SID is n3 times the segment length, and so on. Alternatively, the 1st SID through the second-to-last SID in the segment list each carry a length parameter, and the last SID in the segment list need not carry a length parameter. Furthermore, in a uniform-length scenario, the SIDs optionally need not carry length parameters; alternatively, in a uniform-length scenario, each SID in the segment list carries a length parameter and the value of the length parameter in each SID is the same.
Carrying a length parameter in the SID to indicate the length of the next SID, as described above, is one optional way of determining the SID length. In other embodiments, the length of the next SID is indicated through a control plane extension. For example, when the controller issues an SR policy through BGP SR policy, the segment list and the length of each SID in the segment list are carried in the SR policy.
In some implementations, the parameters portion of the SID (e.g., the parameters field in fig. 4) includes a replication number and a pointer to support the forwarding of replicated messages in a multicast scenario. The replication number is used to indicate the number of copies of the message, i.e., how many copies of the message the parent node should make. For example, if the value of the replication number in the SID is r, which means that the number of child nodes is r, then when the node corresponding to the SID (i.e., acting as the parent node) receives a multicast message, it needs to replicate the multicast message r-1 times to obtain r messages, and forward the r messages to the r child nodes respectively. The pointer is used to indicate the position of the SID of the first child node in the segment list. For example, the first SID includes a second parameter, which is the pointer, and the second parameter is used to indicate an offset of the next SID of the first SID relative to the first SID. In a multicast tree, the SIDs of a group of child nodes of the same parent node are typically arranged consecutively in the segment list, so the SID of each child node in the group can be located according to the pointer and the replication number. For example, if the replication number in the SID in the destination address indicates that r-1 copies are to be made to obtain r messages and the pointer of the SID in the destination address has the value p, the node updates the destination address of the first replicated message based on the SID located at segment list[p] and forwards the first replicated message according to that SID; the node updates the destination address of the second replicated message based on the SID located at segment list[p-1] and forwards the second replicated message according to that SID; this operation is repeated until the r replicated messages have been forwarded. Here, p is a positive integer, and the maximum value of p is the ratio of the length of the segment list to the segment length.
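The following is a minimal, hypothetical sketch of this replication logic for the uniform-length case in which every child SID is exactly one segment-length unit; it reuses the offset convention of the earlier sketch, and the function name and layout assumptions are again illustrative rather than taken from this application:

```python
def locate_child_sids(segment_list: bytes, pointer: int, replication_number: int,
                      seg_len_bits: int = 32):
    """Return one destination SID per replicated message (uniform-length case).

    The first child's SID is assumed to sit at segment list[pointer], the second
    at segment list[pointer - 1], and so on, each one segment-length unit long.
    """
    unit_bytes = seg_len_bits // 8
    child_sids = []
    for i in range(replication_number):
        p = pointer - i
        child_sids.append(segment_list[p * unit_bytes:(p + 1) * unit_bytes])
    return child_sids
```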
The above pointer and segment length may be used in combination. In one possible implementation, the parent node determines the position offset between the SIDs of adjacent child nodes in a group of child nodes based on the segment length. For example, the position offset between the SIDs of adjacent child nodes in a group of child nodes of the same parent node is the segment length. For example, the parent node determines the position of the SID of the first child node according to the product of the pointer and the segment length, and then offsets segment length bits from the SID of the first child node to determine the position of the SID of the second child node; the parent node then offsets segment length bits from the SID of the second child node as the starting point to determine the position of the SID of the third child node, and so on. With this embodiment, in the multicast scenario, on the one hand the receivers of a group of replicated messages (i.e., the group of child nodes mentioned herein) can be located accurately, so the segment list accurately expresses the multicast tree; on the other hand, variable-length SIDs are supported, which improves the flexibility of the SID lengths used by nodes in the multicast tree.
The above pointer may also be used in combination with the length parameter, where the length parameter is carried, for example, in the parameters field of the SID. In one possible implementation, the parent node determines the position of the SID of a child node based on the segment length and the length parameter in the SID of the preceding child node. For example, in a group of child nodes of the same parent node, the position offset between the SID of the next child node and the SID of the previous child node is the product of the length parameter in the SID of the previous child node and the segment length. For example, a group of child nodes of a parent node includes a first child node, a second child node, and a third child node, where the length parameter in the SID of the first child node indicates that the length of the SID of the second child node is m times the segment length, and the length parameter in the SID of the second child node indicates that the length of the SID of the third child node is q times the segment length. After the parent node locates the SID of the first child node, the parent node offsets m times segment length bits from the SID of the first child node as the starting point, thereby determining the position of the SID of the second child node; the parent node then offsets q times segment length bits from the SID of the second child node as the starting point, thereby determining the position of the SID of the third child node, and so on. With this embodiment, the multicast mixed-length scenario is supported, and different nodes in the multicast tree are allowed to use SIDs of different lengths; for example, the SID used by a parent node differs in length from the SIDs used by its child nodes, or the SIDs used by different child nodes differ in length.
In some embodiments, a flavor is added to the SID to support the compression method. For example, the flavor of the SID includes a compression-mode flavor for identifying the compression mode of the SID. As another example, the flavors of the SID include a mixed-length flavor and a uniform-length flavor, where a SID with the mixed-length flavor is used to indicate that the SIDs in the segment list differ in length, i.e., the segment list uses the mixed-length mode, and a SID with the uniform-length flavor is used to indicate that the SIDs in the segment list have the same length, i.e., the segment list uses the uniform-length mode.
Illustratively, the ingress node encapsulates the segment routing header at the outer layer of the data message to obtain a first message including the data message and the segment routing header. In some embodiments, the ingress node further encapsulates the IPv6 base header at the outer layer of the segment routing header to obtain a first message including the data message, the segment routing header, and the IPv6 base header. The segment routing header includes the segment length, the first segment left, the first identifier, and the segment list. The destination address of the IPv6 base header includes the first SID. In a unicast scenario, the segment routing header is, for example, an SRH. In a multicast scenario, the segment routing header is, for example, an MRH.
Step S303, the ingress node sends the first message based on the first SID.
Step S304, the intermediate node receives the first message.
Step S305, the intermediate node obtains a second message based on the first message.
This embodiment relates to a process in which an intermediate node updates a segment left and a SID in a destination address. To distinguish the descriptions, the "first segment left" is used to describe the segment left in the message received by the intermediate node, and the "second segment left" is used to describe the segment left in the message sent by the intermediate node; the first SID is used to describe the SID included in the destination address in the message received by an intermediate node, and the second SID is used to describe the SID included in the destination address in the message sent by the intermediate node. The first SID is used to identify an intermediate node. The second SID is the next SID to the first SID. The second SID is used to identify a next-hop node for the intermediate node.
In some embodiments, the flow of forwarding the message by the intermediate node is basically: after receiving the message, if the first SID included in the destination address of the message is the SID of the node, the intermediate node locates the next SID from the segment list, uses the next SID to update the destination address of the message, and searches the IPv6 forwarding table according to the updated destination address to forward the message. In the multicast scenario, the intermediate node copies multiple copies of the message, and forwards each copy of the message in a similar manner.
How the intermediate node locates the next SID includes a variety of implementations. The following is an example of an implementation of locating the next SID in connection with four scenarios, see scenario one through scenario four below.
Scenario one: unicast, uniform-length
Scenario one includes the following scenario (1-1) and scenario (1-2).
Scenario (1-1) is unicast, and the length of each SID in the segment list is the segment length.
In scenario (1-1), the intermediate node determines the next SID according to segment left minus one. In pseudo code, subtracting one from segment left may be written as segment left--. Subtracting one from segment left is equivalent to moving the pointer once so that it points to the next SID. For example, the second segment left is one less than the first segment left. Here, subtracting one from segment left means the number of offset bits is the segment length. For example, starting from the position pointed to by the first segment left (the position of the first SID), segment length bits are offset to find the second SID.
Scenario (1-2) is unicast, the length of each SID in the segment list is n times the segment length, and every SID in the segment list has the same multiple relative to the segment length.
In scenario (1-2), the intermediate node determines the next SID according to segment left minus n. For example, the second segment left is n less than the first segment left. Here, subtracting n from segment left means the number of offset bits is the product of the segment length and n. For example, starting from the position pointed to by the first segment left (the position of the first SID), n times segment length bits are offset to find the second SID. The multiple of the SID length in the segment list relative to the segment length, i.e., the value of n, may be determined by the length parameter in the SID, by static configuration, or by an SR policy issued in advance, which is not limited in this embodiment.
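As a concrete illustration of scenarios (1-1) and (1-2) (all values below are made up), the locate_next_sid sketch given earlier can be reused:

```python
seg_list = bytes(range(24))       # 24 dummy bytes = six 32-bit segment-length units

# Scenario (1-1): every SID is one segment-length unit, so segment left drops by 1.
sl, sid = locate_next_sid(seg_list, sl=3, k=1, seg_len_bits=32)   # sl -> 2, sid is 4 bytes

# Scenario (1-2): every SID is n = 2 units (64 bits), so segment left drops by n.
sl, sid = locate_next_sid(seg_list, sl=4, k=2, seg_len_bits=32)   # sl -> 2, sid is 8 bytes
```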
Scenario two: multicast, uniform-length
Scenario two includes the following scenario (2-1) and scenario (2-2).
Scenario (2-1) is multicast, and the length of each SID in the segment list is the segment length.
In scenario (2-1), the intermediate node determines the position of the SID of its first child node based on the pointer value in the first SID, determines the position of the SID of its second child node based on the pointer value in the first SID minus one, determines the position of the SID of its third child node based on the pointer value in the first SID minus two, and so on. Here, subtracting one from the pointer value means the number of offset bits is the segment length.
Scenario (2-2) is multicast, the length of each SID in the segment list is n times the segment length, and every SID in the segment list has the same multiple relative to the segment length.
Taking as an example a case where one node replicates a message twice to obtain three messages and forwards them to three nodes, the intermediate node determines the position of the SID of its first child node according to the pointer in the first SID, determines the position of the SID of its second child node according to the pointer in the first SID minus n, and determines the position of the SID of its third child node according to the pointer in the first SID minus 2n. Here, subtracting n from the pointer means the number of offset bits is n times the segment length.
Scenario three: unicast, mixed-length
For example, the segment list includes a first SID, a second SID, and a third SID. The first SID identifies a first intermediate node, the second SID identifies a second intermediate node, and the third SID identifies a third intermediate node. The second intermediate node is the next-hop node of the first intermediate node, and the third intermediate node is the next-hop node of the second intermediate node. The value of the length parameter in the first SID is n1, and the value of the length parameter in the second SID is n2. Based on the first SID, the first intermediate node subtracts n1 from the first segment left to obtain the second segment left, and determines the second SID according to the second segment left, so as to forward the message to the second intermediate node based on the second SID. Based on the second SID, the second intermediate node subtracts n2 from the second segment left to obtain the third segment left, and determines the third SID according to the third segment left, so as to forward the message to the third intermediate node based on the third SID. Here, subtracting n1 from segment left means the number of offset bits is n1 times the segment length, and subtracting n2 from segment left means the number of offset bits is n2 times the segment length.
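A minimal, hypothetical sketch of this mixed-length unicast step, reusing locate_next_sid from above; reading the length parameter from the last byte of the current SID is purely an assumption made for the example, not a layout defined by this application:

```python
def next_sid_mixed_unicast(segment_list: bytes, sl: int, current_sid: bytes,
                           seg_len_bits: int = 32):
    # Length parameter of the current SID: the multiple (n1, n2, ...) of the next
    # SID relative to the segment length. Its position inside the SID is assumed.
    k = current_sid[-1]
    return locate_next_sid(segment_list, sl, k, seg_len_bits)
```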
Scenario four: multicast, mixed-length SIDs
Taking the case where one node copies a message twice to obtain three messages and forwards them to three nodes as an example, the segment list includes a first SID, a second SID, a third SID and a fourth SID. The first SID identifies a first intermediate node, the second SID identifies a second intermediate node, the third SID identifies a third intermediate node, and the fourth SID identifies a fourth intermediate node. The first intermediate node is the parent node; it copies the message twice to obtain three copies and forwards them to the second, third and fourth intermediate nodes respectively. The second, third and fourth intermediate nodes are all child nodes of the first intermediate node and receive the copies sent by the first intermediate node. For example, the first SID includes a pointer, and the length indication in the first SID includes the values n1, n2 and n3, which respectively indicate that the length of the second SID is n1 times the segment length, the length of the third SID is n2 times the segment length, and the length of the fourth SID is n3 times the segment length. The first intermediate node subtracts n1 from the pointer in the first SID to determine the second SID, so as to forward the first copy to the second intermediate node based on the second SID. The first intermediate node subtracts (n1 + n2) from the pointer in the first SID to determine the third SID, so as to forward the second copy to the third intermediate node based on the third SID. The first intermediate node subtracts (n1 + n2 + n3) from the pointer in the first SID to determine the fourth SID, so as to forward the third copy to the fourth intermediate node based on the fourth SID.
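The child-SID positions in this multicast mixed-length case can be sketched as follows; the names and the example values are illustrative assumptions, not taken from the embodiment.

```python
from itertools import accumulate

def child_sid_pointers_mixed(pointer, length_multiples):
    """Segment-left values for each child's SID (multicast, mixed-length).

    length_multiples = [n1, n2, n3, ...] as carried in the parent's SID; the
    children's SIDs sit at pointer - n1, pointer - (n1 + n2), and so on.
    """
    return [pointer - total for total in accumulate(length_multiples)]

# child_sid_pointers_mixed(pointer=9, length_multiples=[2, 1, 2]) -> [7, 6, 4]
```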
In the unicast scenario, when the next-hop node of the intermediate node is an egress node, the intermediate node reads the first identifier in the first message. If the first identifier indicates that the SIDs in the segment list have different lengths, the intermediate node subtracts the first parameter (the length value) in the first SID from the first segment left to obtain a second segment left, determines the SID of the egress node from the segment list of the first message according to the second segment left, and updates the destination address according to the SID of the egress node. If the first identifier indicates that the SIDs in the segment list have the same length, the intermediate node subtracts 1 from the first segment left to obtain a second segment left, determines the SID of the egress node from the segment list of the first message according to the second segment left, and updates the destination address according to the SID of the egress node.
In the multicast scenario, when the next-hop node of the intermediate node is an egress node, the intermediate node reads the first identifier in the first message. If the first identifier indicates that the SIDs in the segment list have different lengths, the intermediate node updates the first segment left to the difference between the second parameter (the pointer) in the first SID and the first parameter (the length value) in the first SID to obtain a second segment left, determines the SID of the egress node from the segment list of the first message according to the second segment left, and updates the destination address according to the SID of the egress node. If the first identifier indicates that the SIDs in the segment list have the same length, the intermediate node updates the first segment left to the second parameter (the pointer) in the first SID to obtain a second segment left, determines the SID of the egress node from the segment list of the first message according to the second segment left, and updates the destination address according to the SID of the egress node.
The intermediate node can distinguish the mixed-length scenario from the non-mixed-length scenario in a plurality of ways. In some embodiments, the intermediate node makes the distinction according to the value of the first identifier. For example, the intermediate node reads the first identifier in the first message; when the value of the first identifier is a first value, it determines the mixed-length scenario, that is, the SIDs in the segment list of the first message have different lengths; when the value of the first identifier is a second value, it determines the non-mixed-length scenario, that is, the SIDs in the segment list of the first message have the same length. In other embodiments, the intermediate node makes the distinction according to the flag of the SID. For example, the intermediate node determines the flag of the first SID; when the flag of the first SID is the mixed-length flag, the intermediate node determines the mixed-length scenario; when the flag of the first SID is the non-mixed-length flag, it determines the non-mixed-length scenario.
The intermediate node can distinguish the unicast scenario from the multicast scenario in a variety of ways. In some implementations, the intermediate node makes the distinction according to the type of the segment routing header. For example, the intermediate node determines whether the type of the routing header is SRH or MRH according to the Next Header field in the header preceding the routing header. If the Next Header field indicates that the type of the routing header is SRH, the unicast scenario is determined; if the Next Header field indicates that the type of the routing header is MRH, the multicast scenario is determined.
Step S306, the intermediate node sends a second message based on the second SID.
Step S307, the egress node receives the second message.
Step S308, the outlet node obtains a data message based on the second message.
The egress node reads the destination address of the second message to obtain the second SID and determines that the second SID is its own SID. In the multicast scenario, the egress node reads the pointer in the second SID; when the pointer is 0 or a default value, the egress node determines that it is the last-hop node and strips the MRH from the second message to obtain the data message. In the unicast scenario, the egress node reads the segment left in the second message and subtracts one from it; when the result is 0, the egress node determines that it is the last-hop node and strips the SRH from the second message to obtain the data message.
Step S309, the egress node sends the data message.
Compared with the fixed-length SID scheme, the method provided by this embodiment shortens the SID rather than keeping it at a fixed length, so the SID is compressed and the segment list is compressed accordingly, which saves message overhead.
The following illustrates the message format provided in the embodiments of the present application.
Fig. 5 is a schematic diagram of a message format provided in an embodiment of the present application, and as shown in fig. 5, a new type of Routing Header (RH) is defined, where the routing header includes a new segment length field and a segment left field with a new meaning.
Specifically, a segment length field is defined in the routing header and is used to indicate the length of a SID; in the mixed-length scenario, the length of a SID in the segment list is equal to the segment length or a multiple of the segment length. The segment left value originally means the number of remaining segments; in this embodiment, it means the length of the remaining segment list expressed as a multiple of the segment length, and each shift of segment left moves by one segment length. This embodiment also defines a new type of compressed SID, referred to herein as a specific compressed SID (S-SID). Each S-SID contains three parts, a node ID, a function, and parameters, and the common parts are not extracted. There are two kinds of S-SID: an S-SID with the non-mixed flag, used when all SIDs in the segment list have the same segment length; and an S-SID with the mixed flag, used when the SIDs in the segment list have different lengths, which carries a length used to specify the length of the next SID. A mixed flag is defined in the flag field: when its value is 0, the SIDs in the segment list of the message have the same length; when its value is 1, the SIDs in the segment list have different lengths. Based on the message format shown in fig. 5, the message processing flow of the intermediate node is as follows.
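For orientation, the structure just described can be sketched as follows; the Python field names, types and widths are assumptions of this sketch, not the on-wire encoding of fig. 5.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SSID:
    """Specific compressed SID as described above."""
    node_id: int
    function: int
    parameters: List[int] = field(default_factory=list)
    # Carried only by S-SIDs with the mixed flag: length of the next SID,
    # expressed as a multiple of the segment length.
    next_sid_length: Optional[int] = None

@dataclass
class CompressedRoutingHeader:
    segment_length: int   # length of one segment unit, in bits
    segment_left: int     # remaining segment-list length, in segment-length units
    mixed_flag: int       # 0: SIDs have the same length; 1: SIDs have different lengths
    segment_list: bytes   # packed S-SIDs
```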
The intermediate node reads the DA field in the IPv6 header, and if the DA hits a local SID, performs any one of the following modes (1) to (3).
Mode (1), unicast with the mixed flag equal to 0: when the message is unicast and the mixed flag in the RH is 0 (unset), the intermediate node performs the following operations: update segment left to segment left - 1, where one unit of segment-left offset corresponds to the segment length; replace the last segment-length bits of the DA with segment list[SL]; forward according to the replaced DA.
Mode (2), multicast with the mixed flag equal to 0: when the message is multicast and the mixed flag in the RH is 0 (unset), the intermediate node performs the following operations: update segment left to the pointer value in the current SID, where one unit of segment-left offset corresponds to the segment length; replace the last segment-length bits of the DA with segment list[segment left]; forward according to the replaced DA.
Mode (3), the mixed flag equal to 1: the intermediate node performs the following operations: update segment left to (segment left - length value); replace the trailing bits of the DA with the SID located at position (segment left - length value) segment lengths in the segment list; forward according to the replaced DA.
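A condensed sketch of modes (1) to (3) follows. The segment list and the DA are modeled as bit strings, and the mapping of a segment-left value to a bit offset as well as the amount of the DA that is replaced are assumptions of this sketch rather than the exact encoding of fig. 5.

```python
def sid_at(segment_list, segment_left, segment_length, sid_length):
    """Bits of the SID starting at offset segment_left (in segment-length units)."""
    start = segment_left * segment_length
    return segment_list[start:start + sid_length]

def process_da(da, mixed_flag, is_multicast, segment_left,
               segment_length, segment_list, pointer=0, length_value=1):
    if mixed_flag == 0 and not is_multicast:          # mode (1)
        segment_left -= 1
        new_sid = sid_at(segment_list, segment_left, segment_length, segment_length)
    elif mixed_flag == 0 and is_multicast:            # mode (2)
        segment_left = pointer
        new_sid = sid_at(segment_list, segment_left, segment_length, segment_length)
    else:                                             # mode (3), mixed flag == 1
        segment_left -= length_value
        new_sid = sid_at(segment_list, segment_left, segment_length,
                         length_value * segment_length)
    # Replace the trailing bits of the DA with the newly selected SID.
    da = da[:len(da) - len(new_sid)] + new_sid
    return da, segment_left
```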
Fig. 6 is a flowchart of a SID advertisement method according to an embodiment of the present application. The method shown in fig. 6 is an illustration of a control-plane scheme, and includes the following steps S401 to S404.
Step S401, the intermediate node generates an announcement message.
The advertisement message is used to publish the SID of the intermediate node. The advertisement message includes the SID and the SID's flag. In some embodiments, the SID's flag indicates whether the SID is a mixed-flag SID or a non-mixed-flag SID. In other embodiments, the SID's flag indicates the compression mode of the SID. Optionally, the advertisement message is an IGP message, for example an IS-IS message or an OSPF message. For example, the advertisement message includes an SRv6 SID sub-TLV; the SRv6 SID sub-TLV includes an endpoint behavior field and a SID field, the endpoint behavior field carries the flag of the SID, and the SID field carries the value of the SID.
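A rough sketch of packing the SID and its flag into an SRv6 SID sub-TLV-like structure is shown below; the type code, field widths and the two-byte endpoint-behavior encoding are placeholders assumed for illustration, not the actual IGP format.

```python
import struct

def build_srv6_sid_sub_tlv(endpoint_behavior, sid, tlv_type=0):
    """endpoint_behavior carries the flag of the SID; sid is the SID value (bytes)."""
    value = struct.pack("!H", endpoint_behavior) + sid
    return struct.pack("!BB", tlv_type, len(value)) + value
```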
Step S402, the intermediate node sends an announcement message to the entry node.
Step S403, the entry node receives the notification message.
Step S404, the entry node obtains SID and SID flag based on the notification message.
With the method provided by this embodiment, the SID and its flag are advertised to other devices together during SID advertisement, so that supporting devices can collect both the SID and its flag and learn whether the SID is a mixed-flag SID or a non-mixed-flag SID.
The following four examples illustrate how the method shown in fig. 3 can be implemented based on the message format shown in fig. 5. The mixed flag in the following examples is an illustration of the first identifier in the method shown in fig. 3. The S-SID in the following examples is an illustration of the SID in the method shown in fig. 6.
Example 1 (multicast, non-mixed-length)
Fig. 2 is a schematic diagram of the network scenario on which example 1 is based. Fig. 7 is a schematic diagram of the encapsulation format of the multicast message sent by the ingress node in example 1. In the multicast message sent by the ingress node, the segment length indicates that each SID is 32 bits long, the mixed flag = 0, and segment left = 7. Based on the message format shown in fig. 7, the message processing flow of each node in example 1 is as follows.
Ingress node (ingress): the ingress determines that the destination address of the message is its own SID (S-SID1), that the mixed flag is 0, and that the function of S-SID1 is End.RL. The first parameter value in S-SID1 is 2, which represents the replication number, i.e. the number of copies of the message; the ingress therefore copies the message to obtain two copies, copy message one and copy message two. The second parameter value in S-SID1 is 6, which represents the pointer, i.e. the SL value of the first copy; from the pointer value 6, the ingress determines that the SID of intermediate node 1 is segment list[6] and the SID of intermediate node 2 is segment list[5]. Therefore the SL value of copy message one is 6, and the last 32 bits of the DA of copy message one are replaced with S-SID2 at segment list[6]; the SL value of copy message two is (6-1) = 5, and the last 32 bits of the DA of copy message two are replaced with S-SID3 at segment list[5]. Here, each shift of segment left moves by one segment length, i.e. 32 bits.
Intermediate node 1: it determines that the destination address of the message is its own SID, namely S-SID2, and that the mixed flag is 0; the function of S-SID2 is End.RL. The first parameter value in S-SID2 is 2, which represents the replication number, i.e. the number of copies of the message; intermediate node 1 therefore copies the message to obtain two copies, copy message one and copy message two. The second parameter value in S-SID2 is 4, which represents the pointer, i.e. the SL value of the first copy; from the pointer value 4, intermediate node 1 determines that the SID of egress node 3 is segment list[4] and the SID of egress node 4 is segment list[3]. Therefore the SL value of copy message one is 4, and the last 32 bits of the DA of copy message one are replaced with S-SID4 at segment list[4]; the SL value of copy message two is (4-1) = 3, and the last 32 bits of the DA of copy message two are replaced with S-SID5 at segment list[3]. Here, each shift of segment left moves by one segment length, i.e. 32 bits.
Intermediate node 2: it determines that the destination address of the message is its own SID, namely S-SID3, that the mixed flag is 0, and that the function of S-SID3 is End.RL. The first parameter value in S-SID3 is 2, which represents the replication number, i.e. the number of copies of the message; intermediate node 2 therefore copies the message to obtain two copies. The second parameter value in S-SID3 is 2, which represents the pointer, i.e. the SL value of the first copy; from the pointer value 2, intermediate node 2 determines that the SID of egress node 5 is segment list[2] and the SID of egress node 6 is segment list[1]. Therefore the SL value of copy message one is 2, and the last 32 bits of the DA of copy message one are replaced with S-SID6 at segment list[2]; the SL value of copy message two is (2-1) = 1, and the last 32 bits of the DA of copy message two are replaced with S-SID7 at segment list[1].
Egress nodes (node 3 to node 6): each egress node determines that the destination address of the message is its own SID, that the function of the SID is End.RL, and that the mixed flag is 0; both parameters in the S-SID are 0; the node therefore determines that it is the last hop and does not replicate the message.
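The End.RL processing walked through above can be summarized as the following minimal sketch; the function name end_rl and the list model of the segment list are assumptions of this sketch rather than the embodiment's encoding.

```python
def end_rl(replication_number, pointer, segment_list):
    """Return (segment_left, sid) for each copy of the message."""
    copies = []
    for i in range(replication_number):
        sl = pointer - i                 # each copy points one slot lower
        copies.append((sl, segment_list[sl]))
    return copies

# Ingress in example 1: end_rl(2, 6, segment_list) would yield
# [(6, segment_list[6]), (5, segment_list[5])], i.e. S-SID2 and S-SID3.
```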
With the method of example 1, RH compression can be supported in the multicast scenario, reducing message overhead.
Example 2 (unicast, non-mixed-length)
Fig. 8 is a schematic diagram of the encapsulation format of the unicast message sent by the ingress node in example 2. As shown in fig. 8, in the unicast message sent by the ingress node, the segment length indicates that each SID is 32 bits long, the mixed flag = 0, and segment left = 4.
Based on the message format shown in fig. 8, the processing flow of the message of each node in example 2 is as follows.
Ingress node (ingress): the ingress determines that the destination address of the message is its own SID, namely S-SID1; the function of S-SID1 is End, and the mixed flag is determined to be 0. Segment left is decremented by one to 3, and the last 32 bits of the DA are replaced with segment list[3], namely S-SID2. The message is forwarded according to the updated DA. Here, each shift of segment left moves by one segment length, i.e. 32 bits.
Intermediate node 1: the destination address of the message is its own SID, namely S-SID2; the function of S-SID2 is End, and the mixed flag is determined to be 0. Segment left is decremented by one to 2, and the last 32 bits of the DA are replaced with segment list[2], namely S-SID3. The message is forwarded according to the updated DA. Here, each shift of segment left moves by one segment length, i.e. 32 bits.
Intermediate node 2: the destination address of the message is its own SID, namely S-SID3; the mixed flag is determined to be 0, and the function of S-SID3 is End. Segment left is decremented by one to 1, and the last 32 bits of the DA are replaced with segment list[1], namely S-SID4. The message is forwarded according to the updated DA. Intermediate node 2 is the next-hop node of intermediate node 1. Here, each shift of segment left moves by one segment length, i.e. 32 bits.
Egress node (egress): segment left = 1, and the message reaches the egress node.
By the method provided in example 2, the compression length can be flexibly defined independent of the control plane.
Example 3 (unicast, mixed-length)
Fig. 9 is a schematic diagram of the encapsulation format of the unicast message sent by the ingress node in example 3. As shown in fig. 9, in the unicast message sent by the ingress node, the segment length is 32 bits, the mixed flag = 1, and segment left = 7.
Based on the message format shown in fig. 9, the message processing flow of each node in example 3 is as follows.
Ingress node (ingress): the ingress determines that the destination address of the message is its own SID, namely S-SID1; the mixed flag is determined to be 1, and the function of S-SID1 is End. Segment left - 1 = 6, and the last 32 bits of the DA are replaced with segment list[6], namely S-SID2. The message is forwarded according to the updated DA. Here, each shift of segment left moves by one segment length, i.e. 32 bits.

Intermediate node 1: the destination address of the message is its own SID, namely S-SID2; the mixed flag is determined to be 1, and the function of S-SID2 is End. Segment left - 1 = 5, and the last 32 bits of the DA are replaced with segment list[5], namely S-SID3. The message is forwarded according to the updated DA. Here, each shift of segment left moves by one segment length, i.e. 32 bits.

Intermediate node 2: the destination address of the message is its own SID, namely S-SID3; the mixed flag is determined to be 1, and the function of S-SID3 is End. Segment left - 4 = 1, and the last 32 bits of the DA are replaced with segment list[4-1] as the VPN SID. The message is forwarded according to the updated DA. Here, each shift of segment left moves by one segment length, i.e. 32 bits.

Egress node (egress): segment left = 1, and the message reaches the egress node.
Example 4 (multicast, mixed-length)
The processing flow and encapsulation format of example 4 are similar to those of example 1 (multicast, non-mixed-length). The main difference is that in example 4 each SID carries an additional parameter indicating the length of the next SID as a multiple of the segment length, i.e. the length of the next SID is N times the segment length, thereby indicating mixed-length SIDs.
With the method provided in example 4, the parameters can be carried without being compressed, and multicast programming can be supported.
In view of the methods shown in the above examples, the methods provided in the examples can directly define the compression length in the message according to the compression requirement, without configuration on the control plane; in addition, flexible, variable-length message compression is supported; no control-plane extension is required; the message content can still be parsed; and the parameters in the SID can be compressed.
Fig. 10 is a schematic structural diagram of a message processing apparatus 700 according to an embodiment of the present application. The message processing apparatus 700 includes a receiving unit 701, an obtaining unit 702, and a transmitting unit 703. Optionally, as seen in connection with the network environment shown in fig. 1 or fig. 2, the message processing apparatus 700 shown in fig. 10 is provided at an ingress node, an intermediate node or a tail node in fig. 1 or fig. 2. Optionally, as seen in connection with the method flow shown in fig. 3, in some embodiments, the message processing apparatus 700 shown in fig. 10 is provided at an intermediate node in fig. 3, the receiving unit 701 is configured to execute S304, the obtaining unit 702 is configured to execute S305, and the sending unit 703 is configured to execute S306. In other embodiments, the message processing apparatus 700 shown in fig. 10 is provided at the ingress node in fig. 3, the receiving unit 701 is configured to execute S301, the obtaining unit 702 is configured to execute S302, and the sending unit 703 is configured to execute S303. In other embodiments, the message processing apparatus 700 shown in fig. 10 is provided at the egress node in fig. 3, the receiving unit 701 is configured to execute S307, the obtaining unit 702 is configured to execute S308, and the sending unit 703 is configured to execute S309.
The embodiment of the apparatus depicted in fig. 10 is merely illustrative, and for example, the division of the above units is merely a logical function division, and there may be other manners of division in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. The functional units in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The various elements in message processing apparatus 700 are implemented in whole or in part by software, hardware, firmware, or any combination thereof.
Some possible implementations using hardware or software to implement the various functional units in the message processing apparatus 700 are described below in connection with the forwarding device 800.
In the case of a software implementation, for example, the above-described obtaining unit 702 is implemented by a software functional unit generated after the program code stored in the memory 802 is read by at least one processor 801 in fig. 11.
In the case of a hardware implementation, for example, each of the units described above in fig. 10 is implemented by different hardware in the forwarding device, respectively, for example, the obtaining unit 702 is implemented by a part of processing resources in at least one processor 801 in fig. 11 (for example, one core or two cores in a multi-core processor), or by the rest of processing resources in at least one processor 801 in fig. 11 (for example, other cores in a multi-core processor), or is implemented by a programmable device such as a field-programmable gate array (field-programmable gate array, FPGA), or a coprocessor. The receiving unit 701 and the transmitting unit 703 are implemented by a network interface 803 in fig. 11.
Fig. 11 is a schematic structural diagram of a forwarding device 800 according to an embodiment of the present application. Forwarding device 800 includes at least one processor 801, memory 802, and at least one network interface 803.
Alternatively, the forwarding device 800 shown in fig. 11 is an ingress node, an intermediate node, or a tail node in fig. 1 or fig. 2, as viewed in connection with the network environment shown in fig. 1 or fig. 2.
Alternatively, as seen in connection with the method flow shown in fig. 3, in some embodiments, the forwarding device 800 shown in fig. 11 is an intermediate node in fig. 3, the network interface 803 is used to execute S304, the processor 801 is used to execute S305, and the network interface 803 is used to execute S306. In other embodiments, forwarding device 800 shown in fig. 11 is an ingress node in fig. 3, network interface 803 is used to perform S301, processor 801 is used to perform S302, and network interface 803 is used to perform S303. In other embodiments, forwarding device 800 shown in fig. 11 is an egress node in fig. 3, network interface 803 is used to perform S307, processor 801 is used to perform S308, and network interface 803 is used to perform S309. In connection with the method flow shown in fig. 4, the processor 801 is configured to execute S401 or S404, and the network interface 803 is configured to execute S402 or S403.
The processor 801 is, for example, a general-purpose central processing unit (central processing unit, CPU), a network processor (network processer, NP), a graphics processor (graphics processing unit, GPU), a neural-network processor (neural-network processing units, NPU), a data processing unit (data processing unit, DPU), a microprocessor, or one or more integrated circuits for implementing aspects of the present application. For example, the processor 801 includes application-specific integrated circuits (application-specific integrated circuit, ASICs), programmable logic devices (programmable logic device, PLDs), or combinations thereof. PLDs are, for example, complex programmable logic devices (complex programmable logic device, CPLD), field-programmable gate arrays (field-programmable gate array, FPGA), general-purpose array logic (generic array logic, GAL), or any combination thereof.
The Memory 802 is, for example, but not limited to, a read-only Memory (ROM) or other type of static storage device that can store static information and instructions, a random access Memory (random access Memory, RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only Memory (electrically erasable programmable read-only Memory, EEPROM), a compact disc read-only Memory (compact disc read-only Memory, CD-ROM) or other optical disc storage, optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Optionally, the memory 802 is independent and is connected to the processor 801 by an internal connection 804. Alternatively, the memory 802 and the processor 801 are integrated together.
The network interface 803 uses any transceiver-like device for communicating with other devices or communication networks. The network interface 803 includes, for example, at least one of a wired network interface or a wireless network interface. The wired network interface is, for example, an ethernet interface. The ethernet interface is, for example, an optical interface, an electrical interface, or a combination thereof. The wireless network interface is, for example, a wireless local area network (wireless local area networks, WLAN) interface, a cellular network interface, a combination thereof, or the like.
In some embodiments, processor 801 includes one or more CPUs, such as CPU0 and CPU1 shown in fig. 11.
In some embodiments, forwarding device 800 optionally includes multiple processors, such as processor 801 and processor 805 shown in fig. 11. Each of these processors is, for example, a single-core processor (single-CPU), and is, for example, a multi-core processor (multi-CPU). A processor herein may optionally refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In some embodiments, forwarding device 800 also includes internal connections 804. The processor 801, the memory 802 and the at least one network interface 803 are connected by an internal connection 804. The internal connections 804 include vias that communicate information between the components described above. Optionally, the internal connection 804 is a board or bus. Optionally, the internal connections 804 are divided into address buses, data buses, control buses, and the like.
In some embodiments, forwarding device 800 also includes an input-output interface 806. An input-output interface 806 is connected to the internal connection 804.
Alternatively, the processor 801 implements the method in the above embodiment by reading the program code 810 stored in the memory 802, or the processor 801 implements the method in the above embodiment by internally storing the program code. In the case where the processor 801 implements the method in the above embodiment by reading the program code 810 stored in the memory 802, the program code implementing the method provided by the embodiment of the present application is stored in the memory 802.
For more details on the implementation of the above-described functions by the processor 801, reference is made to the description of the previous method embodiments, which is not repeated here.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a forwarding device according to an embodiment of the present application. The forwarding apparatus 900 includes: a main control board 910 and an interface board 930.
Optionally, as seen in connection with the network environment shown in fig. 1 or fig. 2, the forwarding device 900 shown in fig. 12 is an ingress node, an intermediate node, or a tail node in fig. 1 or fig. 2.
Optionally, as seen in connection with the method flow shown in fig. 3, in some embodiments, the forwarding device 900 shown in fig. 12 is an intermediate node in fig. 3, and the interface board 930 is configured to perform S304, S305, and S306. In other embodiments, the forwarding device 900 shown in fig. 12 is an ingress node in fig. 3, and the interface board 930 is configured to perform S301, S302, and S303. In other embodiments, the forwarding device 900 shown in fig. 12 is the egress node in fig. 3, and the interface board 930 is configured to perform S307, S308, and S309. In connection with the method flow shown in fig. 4, the main control board 910 is configured to perform S401 or S404, and the interface board 930 is configured to perform S402 or S403.
The main control board is also called a main processing unit (main processing unit, MPU) or a routing processing card (route processor card), and the main control board 910 is used for controlling and managing various components in the forwarding device 900, including routing computation, device management, device maintenance, and protocol processing functions. The main control board 910 includes: a central processing unit 911 and a memory 912.
The interface board 930 is also referred to as a line processing unit (LPU), line card, or service board. The interface board 930 is used to provide various service interfaces and to implement forwarding of data packets. The service interfaces include, but are not limited to, Ethernet interfaces, such as flexible Ethernet service interfaces (flexible ethernet clients, FlexE clients), POS (packet over SONET/SDH) interfaces, etc. The interface board 930 includes: a central processor 931, a network processor 932, a forwarding table entry memory 934, and a physical interface card (PIC) 933.
The central processor 931 on the interface board 930 is used to control and manage the interface board 930 and communicate with the central processor 911 on the main control board 910.
The network processor 932 is configured to implement forwarding processing of the packet. The network processor 932 is, for example, in the form of a forwarding chip. Specifically, the network processor 932 is configured to forward the received packet based on the forwarding table stored in the forwarding table entry memory 934; if the destination address of the packet is the address of the forwarding device 900, the packet is sent up to the CPU (e.g. the central processing unit 911) for processing; if the destination address of the packet is not the address of the forwarding device 900, the next hop and the outbound interface corresponding to the destination address are found in the forwarding table according to the destination address, and the packet is forwarded to that outbound interface. The processing of an uplink message includes inbound interface processing and forwarding table lookup; the processing of a downlink message includes forwarding table lookup and the like.
The physical interface card 933 is used to implement the physical-layer interconnection function; original traffic enters the interface board 930 through the physical interface card 933, and processed messages are sent out from the physical interface card 933. The physical interface card 933, also referred to as a daughter card, may be mounted on the interface board 930 and is responsible for converting optical signals into messages, checking their validity, and forwarding them to the network processor 932 for processing. In some embodiments, the central processor may also perform the functions of the network processor 932, for example implementing software forwarding based on a general-purpose CPU, so that the network processor 932 is not needed in the physical interface card 933.
Optionally, the forwarding device 900 includes a plurality of interface boards, for example, the forwarding device 900 further includes an interface board 940, and the interface board 940 includes: a central processor 941, a network processor 942, a forwarding table entry memory 944, and a physical interface card 943.
Optionally, forwarding device 900 further includes a switch fabric 920. The switching fabric 920 is also referred to as, for example, a switching fabric unit (switch fabric unit, SFU). In the case of a network device having a plurality of interface boards 930, the switch fabric 920 is used to complete data exchange between the interface boards. For example, communication between interface board 930 and interface board 940 is via, for example, switch fabric 920.
The main control board 910 is coupled to the interface board 930. For example, the main control board 910, the interface board 930 and the interface board 940 are connected to the system backplane through a system bus to achieve interworking with the switch fabric 920. In one possible implementation, an inter-process communication (IPC) channel is established between the main control board 910 and the interface board 930, and the main control board 910 and the interface board 930 communicate through the IPC channel.
Logically, forwarding device 900 includes a control plane that includes a main control board 910 and a central processor 931, and a forwarding plane that includes various components that perform forwarding, such as a forwarding table entry memory 934, a physical interface card 933, and a network processor 932. The control plane performs the functions of router, generating forwarding table, processing signaling and protocol messages, configuring and maintaining the status of the device, etc., and the control plane issues the generated forwarding table to the forwarding plane, where the network processor 932 forwards the message received by the physical interface card 933 based on the forwarding table issued by the control plane. The forwarding table issued by the control plane is stored, for example, in forwarding table entry memory 934. In some embodiments, the control plane and forwarding plane are, for example, completely separate and not on the same device.
Operations on interface board 940 are consistent with those of interface board 930 and will not be described again for brevity. It should be understood that the forwarding device 900 of this embodiment may correspond to the ingress node, the intermediate node, or the tail node in the foregoing respective method embodiments, and the main control board 910, the interface boards 930, and/or 940 in the forwarding device 900 implement, for example, functions and/or various steps implemented by the ingress node, the intermediate node, or the tail node in the foregoing respective method embodiments, which are not described herein for brevity.
There may be one or more main control boards; when there are multiple, they include, for example, an active main control board and a standby main control board. There may be one or more interface boards; the stronger the data processing capability of the network device, the more interface boards it provides. There may also be one or more physical interface cards on an interface board. There may be no switch fabric board, or there may be one or more switch fabric boards; when there are multiple, they can jointly implement load sharing and redundancy backup. Under a centralized forwarding architecture, the network device may not need a switch fabric board, and the interface board bears the processing function for the service data of the whole system. Under a distributed forwarding architecture, the network device may have at least one switch fabric board, through which data exchange between multiple interface boards is implemented, providing high-capacity data exchange and processing capability. Therefore, the data access and processing capability of a network device with the distributed architecture is greater than that of a device with the centralized architecture. Optionally, the network device may also be in the form of a single board, that is, there is no switch fabric board, and the functions of the interface board and the main control board are integrated on this one board; in this case, the central processor on the interface board and the central processor on the main control board may be combined into one central processor on this board to perform the functions of both. The data exchange and processing capability of a device in this form is low (for example, network devices such as low-end switches or routers). Which architecture is used depends on the specific networking deployment scenario and is not limited here.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are referred to each other, and each embodiment is mainly described as a difference from other embodiments.
A refers to B means that A is the same as B or that A is a simple variation of B.
The terms first and second and the like in the description and in the claims of embodiments of the application, are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order of the objects, and should not be interpreted to indicate or imply relative importance. For example, a first SID and a second SID are used to distinguish between different SIDs, rather than to describe a particular order of SIDs, nor should they be interpreted as the first SID being more important than the second SID.
Information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals, which are all authorized by the user or sufficiently authorized by the parties, and the collection, use, and processing of the relevant data require compliance with relevant laws and regulations and standards of the relevant country and region.
In the embodiments of the present application, unless otherwise indicated, the meaning of "at least one" means one or more, and the meaning of "a plurality" means two or more. For example, a plurality of SIDs refers to two or more SIDs.
The above-described embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present application, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in or transmitted from one computer-readable storage medium to another, for example, by wired (e.g., coaxial cable, optical fiber, digital Subscriber Line (DSL)), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid State Disk (SSD)), etc.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.
Claims (34)
1. A method for processing a message, the method comprising:
the forwarding device receives a first message, wherein a destination address of the first message comprises a first SID, the first message comprises a first residual section, a first identifier and a section list, the first residual section is used for indicating the offset of the first SID in the section list, the first SID comprises a first parameter, and the first identifier is used for identifying that the SIDs in the section list are different in length;
the forwarding device obtains a second message based on the first message, wherein the second message comprises a second residual section determined based on the first residual section and the first parameter, a destination address of the second message comprises a second SID, and the second SID is the SID determined based on the second residual section in the section list included in the second message;
The forwarding device sends the second message based on the second SID.
2. The method of claim 1, wherein the first message further comprises a segment length, and wherein the first parameter is used to indicate a multiple of a next SID of the first SID relative to the segment length.
3. The method according to claim 1 or 2, wherein the first message and the second message are unicast messages.
4. A method according to any one of claims 1 to 3, wherein the first message further comprises a segment routing header, SRH, comprising the first remaining segment, the first identity and the segment list.
5. The method of claim 1, wherein the first message further comprises a segment length, wherein the first SID further comprises a second parameter indicating a multiple of a next SID of the first SID relative to the segment length, wherein the second parameter is used to indicate an offset of the next SID of the first SID relative to the first SID, and wherein the second SID is a SID in the segment list that is determined based on the second remaining segment and the second parameter.
6. The method of claim 5, wherein the first message and the second message are multicast messages.
7. The method according to claim 5 or 6, wherein the first message comprises a multicast segment routing header, MRH, the MRH comprising the first remaining segment, the segment list and the first identification.
8. The method of any one of claims 1 to 7, wherein the segment length is less than 128 bits.
9. The method according to any one of claims 1 to 8, wherein before the forwarding device receives the first message, the method further comprises:
the forwarding device receives a third message sent by the next-hop device, wherein the third message comprises the second SID and flavor flag of the second SID, and the flag identifies a compression mode of the second SID.
10. A method for processing a message, the method comprising:
the forwarding equipment receives a first message, wherein a destination address of the first message comprises a first SID, the first message comprises a segment length, a first residual segment, a first identifier and a segment list, the first residual segment is used for indicating the offset of the first SID in the segment list, and the first identifier is used for identifying that the SIDs in the segment list have the same length;
The forwarding device obtains a second message based on the first message, wherein the second message comprises the segment length, a second remaining segment and the segment list, the second remaining segment is an offset determined based on the first remaining segment, a destination address of the second message comprises a second SID, and the second SID is a SID determined based on the second remaining segment and the segment length in the segment list included in the second message;
the forwarding device sends the second message based on the second SID.
11. The method of claim 10, wherein the segment length is less than 128 bits.
12. The method according to claim 10 or 11, wherein the first message and the second message are unicast messages.
13. The method according to any of claims 10 to 12, wherein the first message further comprises a segment routing header, SRH, the SRH comprising the segment length, the first remaining segment, the first identifier and the segment list.
14. The method of claim 10, wherein the first SID comprises a first parameter indicating an offset of a next SID of the first SID relative to the first SID, the second SID being a SID in the segment list determined based on the second remaining segment, the first parameter, and the segment length.
15. The method of claim 14, wherein the first message and the second message are multicast messages.
16. The method according to claim 14 or 15, wherein the first message comprises a multicast segment routing header, MRH, the MRH comprising the segment length, the first remaining segment, the segment list and the first identification.
17. A message processing apparatus, the apparatus comprising:
a receiving unit, configured to receive a first packet, where a destination address of the first packet includes a first SID, where the first packet includes a first remaining segment, a first identifier, and a segment list, where the first remaining segment is used to indicate an offset of the first SID in the segment list, where the first SID includes a first parameter, and the first identifier is used to identify that lengths of SIDs in the segment list are different;
an obtaining unit, configured to obtain a second packet based on the first packet, where the second packet includes a second remaining segment determined based on the first remaining segment and the first parameter, and a destination address of the second packet includes a second SID, where the second SID is a SID determined based on the second remaining segment in the segment list included in the second packet;
And the sending unit is used for sending the second message based on the second SID.
18. The apparatus of claim 17, wherein the first message further comprises a segment length, and wherein the first parameter is used to indicate a multiple of a next SID of the first SID relative to the segment length.
19. The apparatus of claim 17 or 18, wherein the first message and the second message are unicast messages.
20. The apparatus according to any one of claims 17 to 19, wherein the first message further comprises a segment routing header, SRH, the SRH comprising the first remaining segment, the first identity and the segment list.
21. The apparatus of claim 17 or 18, wherein the first message further comprises a segment length, wherein the first SID further comprises a second parameter, wherein the first parameter is used to indicate a multiple of a next SID of the first SID relative to the segment length, wherein the second parameter is used to indicate an offset of the next SID of the first SID relative to the first SID, and wherein the second SID is a SID in the segment list that is determined based on the second remaining segment and the second parameter.
22. The apparatus of claim 21, wherein the first message and the second message are multicast messages.
23. The apparatus according to claim 21 or 22, wherein the first message comprises a multicast segment routing header, MRH, the MRH comprising the first remaining segment, the segment list, and the first identification.
24. The apparatus of any one of claims 17 to 23, wherein the segment length is less than 128 bits.
25. The apparatus according to any one of claims 17 to 24, further comprising:
the receiving unit is used for receiving a third message sent by the next-hop equipment, wherein the third message comprises the second SID and flavor flag of the second SID, and the flag identifies the compression mode of the second SID.
26. A message processing apparatus, the apparatus comprising:
a receiving unit, configured to receive a first packet, where a destination address of the first packet includes a first SID, where the first packet includes a segment length, a first remaining segment, a first identifier, and a segment list, where the first remaining segment is used to indicate an offset of the first SID in the segment list, and the first identifier is used to identify that the lengths of the SIDs in the segment list are the same;
an obtaining unit, configured to obtain a second packet based on the first packet, where the second packet includes the segment length, a second remaining segment, and the segment list, the second remaining segment is an offset determined based on the first remaining segment, and a destination address of the second packet includes a second SID, where the second SID is a SID determined based on the second remaining segment and the segment length in the segment list included in the second packet;
And the sending unit is used for sending the second message based on the second SID.
27. The apparatus of claim 26, wherein the segment length is less than 128 bits.
28. The apparatus of claim 26 or 27, wherein the first message and the second message are unicast messages.
29. The apparatus according to any one of claims 26 to 28, wherein the first message further comprises a segment routing header, SRH, the SRH comprising the segment length, the first remaining segment, the first identifier, and the segment list.
30. The apparatus of claim 26, wherein the first SID comprises a first parameter indicating an offset of a next SID of the first SID relative to the first SID, and wherein the second SID is a SID in the segment list determined based on the second remaining segment, the first parameter, and the segment length.
31. The apparatus of claim 30, wherein the first message and the second message are multicast messages.
32. The apparatus of claim 30 or 31, wherein the first message comprises a multicast segment routing header, MRH, the MRH comprising the segment length, the first remaining segment, the segment list, and the first identification.
33. A forwarding device, the forwarding device comprising: a processor coupled to a memory having stored therein at least one computer program instruction that is loaded and executed by the processor to cause the forwarding device to implement the method of any of claims 1-16.
34. A computer readable storage medium having stored therein at least one instruction which when executed on a computer causes the computer to perform the method of any one of claims 1 to 16.