Detailed Description
The present application has been described in terms of several embodiments, but the description is illustrative and not restrictive, and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible within the scope of the described embodiments. Although many possible combinations of features are shown in the drawings and discussed in the detailed description, many other combinations of the disclosed features are possible. Any feature or element of any embodiment may be used in combination with or in place of any other feature or element of any other embodiment unless specifically limited.
The present application includes and contemplates combinations of features and elements known to those of ordinary skill in the art. The disclosed embodiments, features and elements of the present application may also be combined with any conventional features or elements to form a unique inventive arrangement. Any feature or element of any embodiment may also be combined with features or elements from other inventive arrangements to form another unique inventive arrangement. It is therefore to be understood that any of the features shown and/or discussed in the present application may be implemented alone or in any suitable combination. Accordingly, the embodiments are not to be restricted except in light of the attached claims and their equivalents. Further, various modifications and changes may be made within the scope of the appended claims.
Furthermore, in describing representative embodiments, the specification may have presented the method and/or process as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. Other sequences of steps are possible as will be appreciated by those of ordinary skill in the art. Accordingly, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. Furthermore, the claims directed to the method and/or process should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the embodiments of the present application.
In the embodiments of the present application, the switch chip is provided with external communication ports and a CPU port, wherein at least some of the external communication ports are configured as VLAN interfaces, each VLAN interface is provided with a respective IP address, and the CPU port is connected with a CPU in the switch.
The VLAN interface may dynamically configure the layer-3 protocols it supports, where the layer-3 protocols may include at least one of Secure Shell (SSH), Telnet (Telecommunication Network, TELNET), Border Gateway Protocol (BGP), Open Shortest Path First (OSPF), Internet Control Message Protocol (ICMP), Internet Control Message Protocol version 6 (ICMPv6), Trivial File Transfer Protocol (TFTP), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP) and Hypertext Transfer Protocol Secure (HTTPS).
Fig. 1 is a schematic diagram of an application scenario of a switch chip. As shown in fig. 1, the external communication port of the switch chip is PORT1, where PORT1 is added to VLAN1; therefore, the VLAN interface of VLAN1 is configured on PORT1, and the IP address of the VLAN interface is 1.1.1.1.
In order to ensure that key protocol messages are processed preferentially when sent to the CPU, and thereby maintain the stability and performance of the network, the layer-3 protocol messages whose destination address is a local address need to be classified.
In the related art, ACL rules are configured to identify specific protocol packets and assign them to specified CPU queues. Specific implementations are as follows:
Implementation one: software scheduling, with no rate limiting on the switch chip.
The working mode is that the switch chip issues an ARP entry that directs messages with destination address 1.1.1.1 to the CPU port. Layer-3 protocol functions, such as SSH, TELNET, BGP, etc., are enabled on the VLAN interface. When a message enters from PORT1, it matches this ARP entry and is then sent to some queue of the CPU port, say queue 7. Queue 7 has no rate limit, but the CPU port does. The CPU recognizes the importance of the message according to its type and performs software scheduling.
The technical defect is that this mode places a heavy load on the CPU, requires high CPU performance, and involves relatively complex software processing. When the number of messages is large, although the queues are not rate limited, if the messages sent up exceed the limit of the CPU port, important messages can be crowded out by low-priority messages, causing network oscillation and affecting network stability.
Implementation two: the ACL matches the IP address and the protocol type and redirects the message to a CPU queue.
The working mode is that an ACL is configured to match a specific protocol and an IP address, and the message is redirected to different queues of the CPU port. For example, the ACL is configured to match OSPF + IP 1.1.1.1 and redirect matching messages to CPU port queue 7, to match BGP + IP 1.1.1.1 and redirect matching messages to CPU port queue 6, and so on. In this way, priority classification and scheduling of messages of different protocols can be realized.
The technical defect is that when many VLAN interfaces are configured on the switch and each VLAN interface is configured with multiple IP addresses, a large number of ACL entries need to be issued. For example, if a switch has thousands of interfaces, each interface is configured with multiple IP addresses, and each interface enables multiple protocols, a large amount of ACL resources is consumed, while the ACL resources of the chip are limited and valuable.
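As a rough illustration of this resource problem, the following sketch gives a simple back-of-the-envelope calculation of the number of ACL entries consumed when one entry is needed per (IP address, protocol) pair; the figures chosen are hypothetical and are not part of the claimed method:

# Rough estimate of ACL entries consumed by implementation two, where one ACL
# entry is needed per (IP address, protocol) combination on each interface.
# The figures below are hypothetical, for illustration only.

vlan_interfaces = 1000        # VLAN interfaces configured on the switch
ips_per_interface = 4         # IP addresses configured on each VLAN interface
protocols_per_interface = 6   # e.g. SSH, TELNET, BGP, OSPF, ICMP, HTTP

acl_entries_needed = vlan_interfaces * ips_per_interface * protocols_per_interface
print(acl_entries_needed)     # 24000 entries

# The ACL resources of a switch chip are limited and valuable, so this
# approach can quickly exhaust them.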
Implementation three: the ACL matches the egress port being the CPU port and the protocol type, and redirects the message to a CPU queue.
The working mode is that the ACL is configured to match the protocol type and an egress port equal to the CPU port, and the message is redirected to different CPU queues. For example, the ACL is configured to match OSPF + egress port = CPU port and redirect the message to CPU port queue 7, to match BGP + egress port = CPU port and redirect the message to CPU port queue 6, and so on. In this way the number of ACL entries used can be reduced.
The technical defect is that few chips support matching the egress port in an ingress-direction ACL, so this scheme has poor universality and is difficult to implement on most switch devices.
Implementation four: the ACL matches the host hit state and the protocol type, and the message is sent to the CPU queue through a copy action.
In the switch, the host table is a hardware table used to store the IP address information configured on the VLAN interfaces. When a layer-3 message arrives at the switch, the switch checks whether the destination IP address of the message matches an entry in the host table. When the destination IP address of the message matches an entry in the host table, the switch generates a host hit state. This means that the destination address of the message is an IP address of a VLAN interface on the switch itself, and further processing is required.
The working mode is that the IP addresses configured on the VLAN interfaces are issued to the host table of the chip, and when a layer-3 protocol message hits the host table, the chip generates a host hit state. An ACL is configured to match the host hit state and the protocol type, and the layer-3 protocol message is copied to the corresponding CPU queue. For example, the ACL is configured to match OSPF + host hit and copy the message to CPU port queue 7, to match BGP + host hit and copy the message to CPU port queue 6, and so on.
The technical defect is that this method suffers from double delivery of the message to the CPU: one copy is sent to the CPU port by hitting the ARP entry, and another copy is sent up by the copy action, which increases the CPU load, may cause protocol oscillation, and affects network stability. In addition, messages in the same network segment as the Address Resolution Protocol (ARP) entries of the VLAN interface may be sent to the CPU unnecessarily, resulting in wasted resources and an unnecessary processing burden.
In view of the technical drawbacks in the related art, the embodiments of the present application propose the following solutions, including:
Fig. 2 is a flow chart of a message processing method according to an embodiment of the present application. As shown in fig. 2, the method is applied to a switching chip, wherein the method includes:
Step 201, obtaining a received message to be processed by a hardware routing table, wherein the received message comprises a message with a destination address being an IP address of a VLAN interface;
In the prior art, a hardware routing table is generally used to guide the forwarding of messages between different network devices, and its main focus is forwarding messages whose destination address is not the local device. In the embodiment of the present application, the received messages to be processed by the hardware routing table include messages whose destination address is an IP address of a VLAN interface; that is, the hardware routing table is explicitly required to participate in the processing of messages whose destination address is a local address. By adding local routing entries in the hardware routing table to identify these messages, the switch chip can rapidly identify and process messages with a local destination address by utilizing the efficient processing capability of the hardware routing table. This improvement overcomes the conventional assumption that the hardware routing table only handles forwarding of non-local messages, fully utilizes the efficiency and accuracy of the hardware routing table, and avoids the problems of low efficiency and inaccurate target message identification caused by complex software processing flows in the related art.
By explicitly incorporating messages whose destination address is an IP address of a VLAN interface into the processing range of the hardware routing table, these messages can be processed in a timely and efficient manner. This is particularly important for switches supporting multiple VLANs, and improves the processing efficiency of the switch for local messages while enhancing the stability and performance of the network.
Step 202, processing the received message by using a local routing entry corresponding to the VLAN interface in the hardware routing table, to obtain a target message satisfying the local routing entry, where the local routing entry is used to identify a message whose destination address is the switch itself;
The local routing entry is added to the hardware routing table so that the switch chip can identify which messages have the switch itself as their destination address, and the messages matching the local routing entry are screened out as target messages.
The target messages that need to be sent to the CPU are rapidly and accurately identified through the efficient lookup mechanism of the hardware routing table, ensuring that key protocol messages are processed in time. The hardware routing table offers higher processing speed and efficiency than traditional methods (e.g., screening by software or by ACL rules alone). In addition, the use of the hardware routing table reduces the processing pressure of message identification, improves the speed and accuracy of message processing, and enhances the stability and performance of the network.
Step 203, each target message is allocated to a corresponding CPU queue.
By distributing the target message to the corresponding CPU queue, the priority scheduling of the messages of different protocols can be realized, and the priority processing of the important protocol messages is ensured.
The method provided by the embodiment of the present application realizes efficient identification and processing of messages whose destination address is local to the switch through the hardware routing table and the local routing entries, improves the stability and performance of the network, and reduces resource consumption and processing burden.
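As a conceptual illustration of steps 201 to 203, the following minimal sketch models the hardware routing table lookup and the CPU queue assignment in software; all class names, table contents and queue numbers are hypothetical, and an actual switch chip implements this logic in hardware:

from dataclasses import dataclass
from typing import Optional

CPU_PORT = "cpu"

@dataclass
class RouteEntry:
    ip: str            # destination IP address of the entry
    egress_port: str   # egress port; CPU_PORT for local routing entries
    is_local: bool     # True for a local routing entry (destination is the switch itself)

@dataclass
class Message:
    dst_ip: str
    protocol: str      # e.g. "OSPF", "BGP", "ICMP"

# Step 201: the hardware routing table also contains entries for local VLAN
# interface addresses, not only entries for forwarding to other devices.
hardware_routing_table = [
    RouteEntry(ip="1.1.1.1", egress_port=CPU_PORT, is_local=True),   # VLAN interface address
    RouteEntry(ip="1.1.1.2", egress_port="port3", is_local=False),   # ordinary forwarding entry
]

# Hypothetical mapping from protocol to CPU queue, used in step 203.
protocol_to_queue = {"OSPF": 7, "BGP": 6, "ICMP": 0}

def process(msg: Message) -> Optional[int]:
    """Return the CPU queue for a target message, or None if the message is
    simply forwarded (step 202 screens out messages matching a local routing entry)."""
    for entry in hardware_routing_table:
        if entry.ip == msg.dst_ip:
            if entry.is_local:
                # Target message: destination address is the switch itself.
                return protocol_to_queue.get(msg.protocol, 0)
            return None  # non-local: forwarded out of entry.egress_port as usual
    return None

print(process(Message(dst_ip="1.1.1.1", protocol="OSPF")))  # 7: assigned to CPU queue 7
print(process(Message(dst_ip="1.1.1.2", protocol="OSPF")))  # None: forwarded normally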
The reasons why the embodiment of the present application adopts the hardware routing table to complete message identification are as follows:
1. Processing power advantage of hardware routing table
High-speed processing: the hardware routing table is usually integrated in the switch chip, and fast lookup and matching of messages are realized through dedicated hardware circuits. Compared with software routing table lookup, hardware routing table lookup is faster and can complete the identification of the message destination address and the forwarding decision in a very short time. This enables the switch to efficiently process a large number of messages, meeting the performance requirements of a high-speed network environment.
Efficient resource utilization: the lookup process of the hardware routing table is typically based on a dedicated memory architecture (e.g., TCAM, Ternary Content-Addressable Memory) with fast parallel lookup capability. In contrast, software routing table lookup consumes CPU resources, and its processing speed is limited by CPU performance. Under high load conditions, software routing table lookup may cause CPU overload, thereby affecting network performance. The hardware routing table can effectively offload the CPU, allowing the CPU to concentrate on other control-plane tasks.
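The matching semantics of such a TCAM-based lookup can be illustrated with the following simplified software sketch; real TCAM hardware compares all entries in parallel within a single lookup cycle, whereas this sketch only models the value/mask ("don't care") matching behaviour with hypothetical 8-bit entries:

from typing import Optional

def tcam_match(key: int, entries: list) -> Optional[str]:
    """entries: list of (value, mask, result); a bit set in mask means 'care'.
    The first matching entry wins, mimicking TCAM priority order."""
    for value, mask, result in entries:
        if (key & mask) == (value & mask):
            return result
    return None

# Hypothetical 8-bit example: 0b1010xxxx matches any key starting with 1010.
entries = [
    (0b10100000, 0b11110000, "local routing entry -> CPU port"),
    (0b00000000, 0b00000000, "default entry -> normal forwarding"),
]
print(tcam_match(0b10101111, entries))  # local routing entry -> CPU port
print(tcam_match(0b01010101, entries))  # default entry -> normal forwarding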
2. Other advantages
Accurate identification: the hardware routing table can accurately identify the destination address of a message, and messages whose destination address is local to the switch can be specifically identified by configuring local routing entries. This accurate identification capability ensures that only messages that need to be sent to the CPU for processing are screened out, preventing other irrelevant messages from occupying CPU resources.
Real-time performance: in a network environment, message processing needs to be real-time, especially when processing key protocol messages. The hardware routing table can rapidly complete the identification and forwarding decision at the moment the message arrives, so that key protocol messages can be sent to the CPU for processing in time, improving the stability and response speed of the network.
Scalability: hardware routing table designs are typically capable of supporting a large number of routing entries, and as the network scale expands and traffic grows, new routing entries can be conveniently added to accommodate more users and traffic demands. In contrast, the capacity and performance of software routing tables may be limited by system resources, making it difficult to meet the needs of large-scale networks.
In summary, using the hardware routing table to complete message identification fully exploits its advantages such as high-speed processing and efficient resource utilization; combined with characteristics such as accurate identification, real-time performance and scalability, this provides a strong guarantee for stable and efficient operation of the switch in a complex network environment.
The following describes the method provided by the embodiment of the application:
In one exemplary embodiment, the specific composition and content of the hardware routing table are further defined, emphasizing compatibility with the original routing entries.
Specifically, the hardware routing table includes local routing entries in one-to-one correspondence with the IP addresses of the VLAN interfaces;
Each entry of the hardware routing table comprises an IP address field and an egress port field, wherein in each local routing entry, the content of the IP address field is an IP address of a VLAN interface, and the content of the egress port field is the CPU port.
For other entries in the hardware routing table that are not local routing entries, the format and content are not affected, and forwarding can still be performed according to the traditional routing entries. The local routing entries are newly added on the basis of the original routing table, coexist with the original routing entries and do not interfere with each other. This compatibility design ensures that when the switch processes messages whose destination address is local, other conventional message forwarding services are not affected, and the overall stability and connectivity of the network are guaranteed.
By defining the composition of the hardware routing table and its compatibility with the original entries, a clear basis and guarantee are provided for the subsequent operations of screening target messages by using the updated hardware routing table, distributing the target messages to corresponding CPU queues, and the like. The switch chip can accurately identify target messages according to the local routing entries and distribute them to the corresponding CPU queues, without affecting the normal operation of other services.
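The following minimal sketch illustrates this table composition under the assumption of a simplified software model: one local routing entry per VLAN interface IP address, with the egress port field set to the CPU port, coexisting in the same table with conventional forwarding entries (all names and values are hypothetical):

from dataclasses import dataclass

CPU_PORT = "cpu"

@dataclass
class HwRouteEntry:
    ip_address: str    # IP address field of the entry
    egress_port: str   # egress port field; CPU port for local routing entries

def build_local_entries(vlan_interface_ips):
    """One local routing entry per VLAN interface IP address, egress port = CPU port."""
    return [HwRouteEntry(ip_address=ip, egress_port=CPU_PORT) for ip in vlan_interface_ips]

# Local routing entries coexist with conventional forwarding entries in one table;
# the conventional entries keep their original format and are not modified.
hardware_routing_table = build_local_entries(["1.1.1.1", "2.2.2.2"]) + [
    HwRouteEntry(ip_address="1.1.1.2", egress_port="port3"),   # conventional entry
]
for entry in hardware_routing_table:
    print(entry)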
In one exemplary embodiment, the concept of classification identification (classId) is further introduced, and the entries of the hardware routing table are expanded and refined to more accurately manage and process messages.
Specifically, the entries in the hardware routing table further comprise classification identifiers;
In the local routing entries, the value of the classification identifier is a preset value, and the classification identifier indicates that the entry is specifically used to identify messages whose destination address is local to the switch. In other, non-local routing entries in the hardware routing table, the classification identifier is null or a value other than the preset value, so that local routing entries can be distinguished from other conventional routing entries.
The main function of the classification identifier is to help the switch chip recognize and screen target messages more effectively during message processing. When a message enters the switch chip and undergoes route matching, whether the message's destination address is local to the switch can be rapidly judged by checking the classification identifier in the routing entry matched by the message. If so, the message is determined to be a target message and enters the subsequent processing steps, such as distribution to the corresponding CPU queue.
In the above exemplary embodiment, the introduction of the classification identifier does not affect the format and functionality of the original routing entries. For non-local routing entries, the classification identifier may be null or another value, which enables the hardware routing table to support the coexistence of local and non-local routing entries. When the switch chip processes a message, it can flexibly determine the type of the message and the processing mode according to the presence of the classification identifier and its value, thereby realizing differentiated processing of different types of messages.
By introducing the classification identifier, a clearer basis is provided for the subsequent message processing steps. When the switch chip processes messages using the updated hardware routing table, it can screen out target messages more efficiently and distribute them to the corresponding CPU queues. This not only improves the efficiency of message processing, but also enhances the identification and priority processing capability for key protocol messages, further improving the stability and performance of the network.
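A minimal sketch of the classification identifier, assuming a hypothetical preset value of 100 and a simplified entry structure; the check below corresponds to deciding whether a matched entry marks the message as a target message:

from dataclasses import dataclass
from typing import Optional

CPU_PORT = "cpu"
LOCAL_CLASS_ID = 100   # hypothetical preset value marking "destination is the switch itself"

@dataclass
class HwRouteEntry:
    ip_address: str
    egress_port: str
    class_id: Optional[int] = None   # null (or another value) for non-local entries

def is_target(entry: HwRouteEntry) -> bool:
    """A message matching this entry is a target message only when the
    classification identifier equals the preset value."""
    return entry.class_id == LOCAL_CLASS_ID

local_entry = HwRouteEntry("1.1.1.1", CPU_PORT, class_id=LOCAL_CLASS_ID)
normal_entry = HwRouteEntry("1.1.1.2", "port3")   # classification identifier left empty

print(is_target(local_entry))   # True  -> distributed to a CPU queue
print(is_target(normal_entry))  # False -> normal forwarding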
In one exemplary embodiment, the specific implementation of the assignment of the target message to the corresponding CPU queue is further refined, particularly in terms of the communication protocol of the message. The following is a detailed description:
Acquiring the communication protocol: for each target message, the switch chip needs to acquire the communication protocol type. The communication protocol refers to the protocol standard that the message follows, such as SSH, TELNET, BGP, OSPF, ICMP, etc. These protocols have clear markers in the message header, and the switch chip can acquire the communication protocol type by parsing the message header information.
Distributing the target messages to the corresponding CPU queues according to the acquired communication protocols: for example, assume that the CPU of the switch has multiple queues, with different queues corresponding to different priorities or processing tasks. SSH protocol messages may be assigned to queue 7, BGP protocol messages to queue 6, ICMP protocol messages to queue 0, and so on. In this way, messages can be distributed to different queues according to the importance of the protocol or the processing requirements, realizing priority scheduling.
Illustrating:
SSH protocol: when the communication protocol of a target message is SSH, the switch chip distributes the target message to the corresponding high-priority CPU queue (such as queue 7) so that the CPU can rapidly process the SSH connection request, ensuring the timeliness and security of remote management.
BGP protocol: for a BGP protocol message, because BGP is used for routing between autonomous systems and has an important effect on the stability and topology of the network, it is allocated to a higher-priority CPU queue (e.g., queue 6) to ensure that BGP routing information can be updated and processed in time.
ICMP protocol: ICMP is commonly used for network diagnostics and control messaging, such as the ping command. These messages may be assigned to a lower-priority CPU queue (e.g., queue 0), because the processing of ICMP messages is relatively less urgent during normal network operation.
HTTP protocol: for an HTTP protocol message, if the switch needs to process HTTP-related management requests or services, it can be distributed to a medium-priority CPU queue (such as queue 3) to ensure the responsiveness and availability of the Web management interface.
By this method, refined management of messages of different communication protocols is realized, and priority processing of key protocol messages is ensured, thereby improving the stability and performance of the network. The queue allocation mechanism based on the communication protocol enables the switch to utilize CPU resources more efficiently and meet the processing requirements of different protocols.
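As a sketch of how the communication protocol could be recognized from header fields and mapped to a CPU queue, the following simplified model classifies by the IP protocol number and, for TCP, the destination port; the queue numbers follow the examples above, while the classification logic itself is an illustrative assumption rather than the chip's actual parser:

from typing import Optional

IP_PROTO_ICMP, IP_PROTO_TCP, IP_PROTO_OSPF = 1, 6, 89
TCP_PORT_SSH, TCP_PORT_BGP, TCP_PORT_HTTP = 22, 179, 80

def classify(ip_proto: int, dst_port: Optional[int]) -> str:
    """Derive the protocol name from the IP protocol number and, for TCP, the destination port."""
    if ip_proto == IP_PROTO_OSPF:
        return "OSPF"
    if ip_proto == IP_PROTO_ICMP:
        return "ICMP"
    if ip_proto == IP_PROTO_TCP:
        return {TCP_PORT_SSH: "SSH", TCP_PORT_BGP: "BGP", TCP_PORT_HTTP: "HTTP"}.get(dst_port, "OTHER")
    return "OTHER"

# Queue numbers as in the examples: SSH -> 7, BGP -> 6, ICMP -> 0, HTTP -> 3.
protocol_to_queue = {"SSH": 7, "BGP": 6, "ICMP": 0, "HTTP": 3}

print(protocol_to_queue[classify(IP_PROTO_TCP, TCP_PORT_SSH)])   # 7
print(protocol_to_queue[classify(IP_PROTO_ICMP, None)])          # 0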
In one exemplary embodiment, the specific implementation of the assignment of the target message to the corresponding CPU queue is further refined, in particular by configuring ACL rules that are one-to-one with the communication protocol. The following is a detailed description:
Configuring ACL rules in one-to-one correspondence with the communication protocols: ACL rules in one-to-one correspondence with the communication protocols are configured. The ACL rules are defined based on the communication protocol type of the message and are used to instruct the switch chip how to process messages of different protocols. For example, the ACL rules may be configured so that SSH protocol messages are assigned to queue 7, BGP protocol messages to queue 6, ICMP protocol messages to queue 0, and so on.
Distributing the target messages to the corresponding CPU queues according to the ACL rules: after the communication protocol of a target message is acquired, the switch chip distributes the message to the CPU queue corresponding to its communication protocol according to the pre-configured ACL rules. The ACL rules define the processing actions for messages of different protocols, including redirecting the message to a particular CPU queue.
Illustrating:
For the SSH protocol, an ACL rule is configured so that messages matching the SSH protocol are redirected to CPU queue 7. In this way, SSH protocol messages are distributed to a high-priority queue, ensuring the timeliness and security of remote management.
For the BGP protocol, an ACL rule is configured so that messages matching the BGP protocol are redirected to CPU queue 6. BGP protocol messages are distributed to a higher-priority queue, so that BGP routing information can be updated and processed in time.
For the ICMP protocol, an ACL rule is configured so that messages matching the ICMP protocol are redirected to CPU queue 0. ICMP protocol messages are assigned to a lower-priority queue, because the processing of ICMP messages is relatively less urgent in normal network operation.
For the HTTP protocol, an ACL rule is configured so that messages matching the HTTP protocol are redirected to CPU queue 3. HTTP protocol messages are assigned to a medium-priority queue, ensuring the responsiveness and availability of the Web management interface.
In the foregoing, it was mentioned that messages are allocated to the corresponding CPU queues according to their communication protocols; this exemplary embodiment provides a specific implementation by introducing ACL rules. The configuration of ACL rules enables the switch chip to distribute messages of different protocols to different CPU queues according to predefined rules, thereby realizing fine-grained management and priority scheduling of messages.
The technical advantage of the above exemplary embodiment is that, by configuring ACL rules in one-to-one correspondence with the communication protocols, the switch chip can process messages of various protocols more flexibly and ensure that key protocol messages are processed preferentially. This ACL-rule-based distribution mechanism improves the stability and performance of the network, while enhancing the adaptability and processing capability of the switch for different protocols.
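A minimal sketch of such protocol-specific ACL rules, assuming a simplified rule structure in which each rule carries a protocol match condition and a CPU queue as its action (the rule contents follow the examples above; the structure itself is hypothetical):

from dataclasses import dataclass
from typing import Optional

@dataclass
class AclRule:
    protocol: str    # match condition: communication protocol of the message
    cpu_queue: int   # action: redirect matching messages to this CPU queue

# One ACL rule per communication protocol; queue numbers follow the examples above.
acl_rules = [
    AclRule("SSH", 7),
    AclRule("BGP", 6),
    AclRule("ICMP", 0),
    AclRule("HTTP", 3),
]

def lookup_queue(protocol: str) -> Optional[int]:
    """Return the CPU queue of the first matching ACL rule, or None if no rule matches."""
    for rule in acl_rules:
        if rule.protocol == protocol:
            return rule.cpu_queue
    return None

print(lookup_queue("BGP"))    # 6
print(lookup_queue("TFTP"))   # None: no rule configured for this protocol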
In one exemplary embodiment, the action performed in the ACL rule is further defined as a redirect action.
In the ACL rule, the redirection action refers to forwarding the matched message directly to the designated CPU queue without performing other conventional forwarding processes. This means that the destination of the message is explicitly set to a particular queue of the CPU, to which the switch chip will send the message directly for subsequent processing by the CPU.
For example, for the SSH protocol, an ACL rule is configured so that messages matching the SSH protocol undergo a redirection action and are sent directly to CPU queue 7. Thus, the CPU can immediately process the SSH connection request, ensuring the timeliness and security of remote management.
In the fourth implementation of the related art, the copy action copies the message and sends the copy to the CPU queue, while the original message is still processed according to the normal forwarding flow. This may result in the same message being processed in both the CPU and the forwarding plane, increasing the burden on the CPU and, in some cases, causing protocol oscillation or data inconsistency.
In contrast, the redirection action ensures that the message is sent directly to the CPU queue, rather than replicating a copy, thereby avoiding repeated processing of the message.
In particular, the advantages of using a redirect action over a copy action are:
Reduced CPU burden: the redirection action sends the message to the designated CPU queue only once, whereas the copy action creates a copy of the message, so the CPU needs to process more messages and its burden is increased. The redirection action effectively reduces the processing load of the CPU and improves system efficiency.
Avoiding protocol oscillation: the copy action may cause the same message to be processed in both the CPU and the forwarding plane, which in some cases may cause protocol oscillation or data inconsistency. The redirection action ensures that the message is processed only by the CPU, avoiding this problem and improving the stability and reliability of the network.
Improved processing efficiency: the redirection action sends the message directly to the CPU queue, reducing the processing steps of the message in the switch chip, improving message processing efficiency and ensuring timely processing of key protocol messages.
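The difference between the two actions can be illustrated with the following toy model, in which a redirect delivers the message once, to the CPU queue, while a copy produces an additional delivery to the forwarding plane; the structures are illustrative only:

def apply_action(action: str, message: str, cpu_queue: int):
    """Return the list of deliveries produced by an ACL action on one message."""
    deliveries = []
    if action == "redirect":
        deliveries.append(("cpu", cpu_queue, message))          # single delivery to the CPU
    elif action == "copy":
        deliveries.append(("cpu", cpu_queue, message))          # a copy goes to the CPU...
        deliveries.append(("forwarding_plane", None, message))  # ...and normal forwarding continues
    return deliveries

print(len(apply_action("redirect", "OSPF hello", 7)))  # 1: processed only by the CPU
print(len(apply_action("copy", "OSPF hello", 7)))      # 2: double processing of the same message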
In one exemplary embodiment, the manner in which the ACL rules are determined is further defined, emphasizing that the ACL rules are configured in accordance with the communication protocols currently supported by the switch.
Specifically, the ACL rules are determined according to the communication protocol currently supported by the switch. This means that the switch will automatically generate or adjust ACL rules during operation, depending on the enabled and configured protocol. For example, if the switch has SSH, BGP, and ICMP protocols enabled, the ACL rules will automatically contain matches for these protocols and assign them corresponding CPU queues.
Close association with the switch configuration: this dynamic configuration approach keeps the ACL rules closely aligned with the actual configuration and operational state of the switch. When an administrator enables or disables certain protocols on the switch, the ACL rules are updated accordingly, ensuring that only the protocol messages currently in actual use are correctly allocated to CPU queues and avoiding unnecessary rule configuration and potential errors.
Illustrating:
Enabling a new protocol: assume that the switch is initially configured with ACL rules for the SSH and BGP protocols. If the administrator later enables the ICMP protocol, the switch automatically updates the ACL rules according to the currently supported communication protocols, adds a match for the ICMP protocol, and assigns it to the corresponding CPU queue (e.g., queue 0). In this way, ICMP protocol messages will be properly redirected to the CPU for processing.
Disabling a protocol: if the administrator disables the BGP protocol, the switch automatically removes the BGP protocol match from the ACL rules. BGP protocol messages will then no longer be distributed to a CPU queue, avoiding wasted CPU resources.
By dynamically configuring ACL rules according to the communication protocol currently supported by the switch, the following advantages can be achieved:
and the automatic management reduces the manual configuration work of an administrator and reduces the risk of configuration errors. The exchanger automatically generates corresponding ACL rules according to the enabled protocol, and ensures the accuracy and the integrity of the rules.
And (3) resource optimization, namely, the configuration of ACL rules for protocols which are not enabled is avoided, and ACL table entry resources are saved. This is particularly important where the switch supports a large number of protocol and VLAN interfaces, helping to increase the overall efficiency of the system.
Flexibility and adaptability, the exchange can quickly adapt to the change of network configuration, when the protocol is enabled or disabled, the ACL rule can be updated in time, and the stability and performance of the network are ensured.
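A minimal sketch of regenerating the ACL rules from the set of currently enabled protocols; the protocol-to-queue assignments follow the examples in this description, while the function and variable names are hypothetical:

PROTOCOL_QUEUES = {"SSH": 7, "BGP": 6, "OSPF": 7, "ICMP": 0, "HTTP": 3}

def rebuild_acl_rules(enabled_protocols):
    """Keep exactly one rule per enabled protocol; rules for protocols that have
    been disabled are dropped automatically, saving ACL entry resources."""
    return {p: PROTOCOL_QUEUES[p] for p in enabled_protocols if p in PROTOCOL_QUEUES}

print(rebuild_acl_rules({"SSH", "BGP"}))            # rules for SSH and BGP only
print(rebuild_acl_rules({"SSH", "BGP", "ICMP"}))    # administrator enables ICMP: match added
print(rebuild_acl_rules({"SSH", "ICMP"}))           # administrator disables BGP: match removed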
In one exemplary embodiment, a mechanism is added to dynamically update the hardware routing table to accommodate changes in the network configuration of the switch. The following is a detailed description:
Periodically detecting the IP addresses of the VLAN interfaces: the switch periodically checks the IP addresses configured on each VLAN interface to monitor for any changes. Such detection may be timed, for example once every fixed interval (e.g., every 30 seconds or every minute), or may be triggered by certain events, for example when a network traffic change is detected or a specific management command is received.
Updating the local routing entries in the hardware routing table when a change occurs: if a change in the IP addresses of a VLAN interface is detected (such as the addition, deletion or modification of an IP address), the switch correspondingly updates the local routing entries in the hardware routing table. The specific operations include the following:
Adding a local routing entry: if a VLAN interface IP address is newly added on an external communication port, a corresponding local routing entry is added to the hardware routing table, ensuring that the switch can identify messages destined for the new local address.
Deleting a local routing entry: if a VLAN interface IP address configured on an external communication port is deleted, the corresponding local routing entry is deleted from the hardware routing table, so that invalid or outdated routing information does not affect message processing.
Updating a local routing entry: if a VLAN interface IP address configured on an external communication port is modified, the corresponding local routing entry in the hardware routing table is updated to reflect the new IP address information.
By introducing periodic detection and dynamic update mechanisms, it is ensured that local routing entries in the hardware routing table remain consistent with the actual configuration of the switch at all times. Therefore, even if the network configuration changes, the switch can accurately identify and process the message with the destination address as the local address, and the accuracy and efficiency of message processing are ensured.
Illustrating:
Adding a new IP address: an external communication port of the switch is originally configured with the IP address IP_1. Later, the administrator adds the IP address IP_2 on this external communication port. The switch detects this change during periodic detection and adds a new local routing entry in the hardware routing table corresponding to IP address IP_2, with the egress port being the CPU port. In this way, messages with destination address IP_2 will be correctly identified and assigned to the corresponding CPU queue.
Deleting an IP address: if the administrator deletes the IP address IP_1 of a VLAN interface, the switch, after detecting the change, deletes the corresponding local routing entry from the hardware routing table. Messages with destination address IP_1 will then no longer be mistakenly identified as messages that need to be sent to the CPU for processing, avoiding unnecessary processing.
Modifying an IP address: if the IP address of a VLAN interface is modified from IP_1 to IP_3, the switch updates the corresponding entry in the hardware routing table, changing the IP address to IP_3. In this way, messages with destination address IP_3 will be correctly identified and processed.
By periodically detecting the IP address change of the VLAN interface and dynamically updating the hardware routing table, the switch can adapt to the change of network configuration in real time, thereby ensuring the accuracy and efficiency of message processing. The dynamic updating mechanism avoids the tedious and potential errors of manually configuring the routing table, and improves the automation degree and the maintenance convenience of the switch. Meanwhile, the method ensures that the switch can quickly respond when the IP address is changed, reduces network problems caused by inconsistent configuration, and enhances the stability and reliability of the network.
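A minimal sketch of such a synchronization step, assuming the local routing entries are modelled as a simple mapping from IP address to egress port; a modification of an address is handled as a removal plus an addition, and a real switch would perform these updates through its chip SDK:

CPU_PORT = "cpu"

def sync_local_entries(table, configured_ips):
    """table maps IP address -> egress port for local routing entries; configured_ips
    is the current set of IP addresses on all VLAN interfaces."""
    # Remove entries whose IP address is no longer configured on any VLAN interface.
    table = {ip: port for ip, port in table.items() if ip in configured_ips}
    # Add an entry (egress port = CPU port) for every newly configured IP address.
    for ip in configured_ips:
        table.setdefault(ip, CPU_PORT)
    return table

table = {"IP_1": CPU_PORT}
table = sync_local_entries(table, {"IP_1", "IP_2"})   # IP_2 added
table = sync_local_entries(table, {"IP_2", "IP_3"})   # IP_1 deleted, IP_3 added (modification)
print(sorted(table))                                  # ['IP_2', 'IP_3']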
The scheme provided by the embodiment of the application (hereinafter referred to as the scheme) realizes the accurate identification and high-efficiency processing of the message with the local destination address through the local routing entry and the ACL rule in the hardware routing table, and has obvious advantages compared with the related technology.
Compared with implementation one, the present solution recognizes the messages that need to be sent to the CPU at the hardware level in advance and allocates queues, without relying on CPU software processing, thereby avoiding message loss or delay caused by insufficient CPU performance, while also reducing the CPU load and improving processing efficiency and network stability.
Compared with implementation two, the present solution classifies messages through the local routing entries and classId, reducing dependence on ACL rules, saving ACL resources and simplifying configuration management; it is particularly suitable for scenarios with a large number of VLAN interfaces and IP addresses.
Compared with implementation three, the present solution does not depend on egress port matching, which improves universality and avoids the situation where the scheme cannot be implemented because the chip does not support matching the egress port.
Compared with implementation four, the present solution adopts the redirection action, avoiding double delivery of the message, reducing the CPU load, preventing protocol oscillation, and accurately identifying the messages that need to be delivered to the CPU, thereby ensuring network stability and message processing accuracy.
In summary, the present solution is superior to implementations one to four in terms of processing efficiency, resource utilization, versatility and network stability.
The following is described by taking a certain application scenario as an example:
Two VLAN interfaces, namely a VLAN 1 interface and a VLAN 2 interface, are configured on the switch, and the corresponding IP addresses are respectively 1.1.1.1 and 2.2.2.2. Meanwhile, the switch learns two ARP entries with IP addresses 1.1.1.2 and 2.2.2.3.
The following is a detailed operation flow, which clearly shows how to implement efficient processing of a message with a local destination address by configuring a classification identifier and an ACL rule:
1. IP address configuration and classId assignment:
After the VLAN 1 interface is configured with the IP address 1.1.1.1, the switch assigns a classId value (e.g., 100) to it and issues the IP address and its corresponding classId 100 to the hardware routing table.
Similarly, after the VLAN 2 interface is configured with the IP address 2.2.2.2, the existing classId value (100) is looked up, and the IP address 2.2.2.2 together with this classId is issued into a hardware routing table entry. Thus, the IP addresses of both VLAN interfaces are categorized under the same classId, indicating that they all belong to the class that requires CPU processing.
2. ARP table entry processing:
ARP entries learned by the switch (e.g., 1.1.1.2 and 2.2.2.3) are processed according to conventional flow. Because the IP addresses corresponding to these ARP entries are not VLAN interface configured IP addresses, they do not need to be assigned classId. These ARP entries are directly issued into hardware entries to support normal two-layer forwarding functions.
3. Protocol function opening and ACL configuration:
When a particular protocol (e.g., OSPF, BGP, ICMP, HTTP, etc.) is enabled on each VLAN interface, the switch configures the corresponding ACL rules according to the enabled protocol. For example:
When the OSPF function is enabled on the VLAN 1 interface, an ACL rule is configured that matches the OSPF protocol together with classId 100 and redirects matching messages to queue 7 of the CPU port.
Similarly, when the BGP function is enabled, ACL rules are configured to redirect BGP protocol messages to queue 6; when the ICMP function is enabled, ICMP messages are redirected to queue 0; HTTP protocol messages are likewise redirected to queue 0, and so on. In this way, messages of different protocols are distributed to different CPU queues, ensuring priority processing of key protocol messages.
4. The message processing flow is as follows:
When an OSPF message with a destination IP address of 1.1.1.1 enters the switch from port 1, the switch chip first queries the routing table and hits the routing entry corresponding to 1.1.1.1. Since the routing entry contains classId 100, the switch chip carries this classId value into the next processing step.
Next, the switch chip queries the ACL entries using the classId and the original message data. Since the message is an OSPF protocol message and carries classId 100, the corresponding ACL rule is hit and the action of redirecting to CPU port queue 7 is performed. This process ensures that the message is sent entirely to the CPU queue for processing, rather than being merely duplicated, thereby avoiding repeated processing of the message and an increased CPU burden.
5. Non-native message processing:
For an OSPF message with destination IP 1.1.1.2 (i.e., an address that is not an IP address of a local VLAN interface), when it enters the switch, the hardware routing table is likewise looked up first and the routing entry corresponding to 1.1.1.2 is hit. Since this routing entry has no classId assigned (the default classId value is 0), the switch chip continues with the subsequent processing flow.
Because no valid classId value is carried, the message cannot match the ACL rules configured for classId 100, so its forwarding behavior is not modified. The switch processes the message according to the normal route forwarding flow and forwards it out of port 3. This processing ensures that non-local messages are neither intercepted nor sent to the CPU by mistake, avoiding unnecessary resource consumption and interference with normal network communication.
Through the above flow, the switch can accurately distinguish messages whose destination address is the switch itself from transit messages, ensuring that key protocol messages are sent to the CPU for processing in time without affecting the normal forwarding of other messages. This not only improves the stability and performance of the network, but also effectively saves ACL resources, simplifying network configuration and management.
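The complete flow of this scenario can be condensed into the following end-to-end software sketch: the routing lookup attaches the classId, the ACL then matches protocol plus classId and redirects to a CPU queue, and messages without a valid classId keep their normal forwarding behaviour. The table contents mirror the values used above; the structures themselves are simplified, hypothetical models of the hardware tables:

ROUTES = {                       # destination IP -> (egress port, classId); 0 means "no classId"
    "1.1.1.1": ("cpu", 100),     # VLAN 1 interface address, local routing entry
    "2.2.2.2": ("cpu", 100),     # VLAN 2 interface address, local routing entry
    "1.1.1.2": ("port3", 0),     # ARP-learned host, conventional entry
}

ACL = {("OSPF", 100): 7, ("BGP", 100): 6, ("ICMP", 100): 0, ("HTTP", 100): 0}

def handle(dst_ip: str, protocol: str) -> str:
    egress, class_id = ROUTES[dst_ip]
    queue = ACL.get((protocol, class_id))
    if queue is not None:
        return f"redirect to CPU queue {queue}"     # target message, sent up exactly once
    return f"forward out of {egress}"               # normal route forwarding

print(handle("1.1.1.1", "OSPF"))   # redirect to CPU queue 7
print(handle("1.1.1.2", "OSPF"))   # forward out of port3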
Fig. 3 is a schematic structural diagram of a switch according to an embodiment of the present application. As shown in fig. 3, the switch includes the above-described switch chip and CPU, wherein:
the switch chip is used for processing messages by adopting the above method;
Specifically, after receiving a message, the switch chip searches the hardware routing table to determine whether the destination address of the message is an IP address of a VLAN interface. If so, the message is redirected to a specific queue of the CPU according to the configured rules (such as ACL matching); if not, the message is forwarded to the corresponding egress port according to the normal route forwarding flow. Meanwhile, the switch chip also needs to classify and prioritize the traffic sent to the CPU and limit the bandwidth toward the CPU, ensuring that the CPU load does not become too high.
The CPU is used for processing the message according to the priority of the CPU queue.
Specifically, after the switch chip sends the message to the CPU, the CPU determines the priority according to the type and importance of the message and carries out the corresponding processing.
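As a simple illustration of processing by CPU queue priority, the following sketch drains messages from the highest-numbered queue downwards, consistent with the example mapping used above; the scheduling policy shown is illustrative, and a real CPU may use weighted or more elaborate scheduling:

import heapq

def serve(messages):
    """messages: list of (cpu_queue, description); served from the highest queue number down."""
    heap = [(-queue, index, desc) for index, (queue, desc) in enumerate(messages)]
    heapq.heapify(heap)
    return [desc for _, _, desc in (heapq.heappop(heap) for _ in range(len(heap)))]

arrivals = [(0, "ICMP echo request"), (7, "SSH connection"), (6, "BGP update")]
print(serve(arrivals))   # ['SSH connection', 'BGP update', 'ICMP echo request']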
In the switch provided by the embodiment of the present application, the close cooperation of the CPU and the switch chip jointly ensures the effective processing of various protocol messages and the stable operation of the network.
The deployment scenarios of the switch in the present solution mainly include the following:
1. Enterprise network
Campus network: the switch connects devices of different departments and areas within an enterprise campus. By configuring VLAN interfaces and corresponding IP addresses, the switch can effectively isolate the network traffic of different departments, improving the security and management efficiency of the network. Meanwhile, the present solution preferentially processes key protocol messages (such as the management protocol SSH, the network time protocol, etc.), ensuring stable operation and efficient management of the enterprise network.
Branch office interconnection: a switch is deployed in a branch office of an enterprise and connected to the headquarters via a wide area network link. The switch can identify and preferentially process key protocol messages communicated with the headquarters (such as Virtual Private Network (VPN) related protocol messages), ensuring smooth data transmission and business coordination between the branch office and the headquarters.
2. Data center network
Server access layer: switches are deployed at the server access layer of the data center, connecting a large number of server devices. By reasonably configuring VLAN interfaces and corresponding IP addresses, the switch can classify and manage server traffic. Specific protocol messages (such as application-layer protocol messages of HTTP, HTTPS, etc.) are scheduled by priority to improve the service quality and response speed of the data center.
Network device interconnection layer: switches are deployed at the network device interconnection layer of the data center, connecting network devices such as routers and firewalls. The switch can ensure that key network control protocol messages (such as routing protocol messages of BGP, OSPF, etc.) are processed in time, ensuring the topological stability of the data center network and the accurate propagation of routes.
3. Operator network
Broadband access network: switches are deployed in the broadband access network, connecting user terminal equipment and upper-layer network equipment. By configuring VLAN interfaces and corresponding IP addresses for different users or user groups, isolation and management of user traffic are achieved. User service messages are scheduled reasonably, guaranteeing the users' network experience while ensuring the stability and reliability of network operation.
Mobile network backhaul: switches are deployed in the backhaul part of the mobile network, connecting base stations with core network equipment. The switch can identify and preferentially process protocol messages related to mobile network services (such as GTP (GPRS Tunneling Protocol) messages), ensuring the data transmission efficiency and service quality of the mobile network.
4. Cloud computing environment
Cloud service provider network: switches are deployed in the data center of a cloud service provider, connecting cloud resources such as virtual machines and storage devices. By flexibly configuring VLAN interfaces and corresponding IP addresses, the switch can support network isolation and traffic management in a multi-tenant environment. Management protocol messages of the cloud platform (such as messages related to virtual machine monitoring and resource scheduling) are processed preferentially, ensuring high availability and efficient operation of cloud services.
Hybrid cloud access: switches are deployed in an enterprise hybrid cloud environment, connecting the enterprise's private cloud and public cloud resources. The switch can identify and preferentially process protocol messages related to inter-cloud communication, ensuring safe and rapid data transmission between different cloud environments and supporting the enterprise's hybrid cloud applications and service deployment.
According to the scheme, the hardware routing table and the CPU queue allocation mechanism of the switch are reasonably configured, so that the priority processing of key protocol messages is ensured, and the requirements on network stability and performance under different deployment scenes are met.
In addition, the embodiment of the application also provides a message processing device, which is applied to a switching chip in a switch, wherein the switching chip comprises a storage unit and a processing unit, the storage unit stores a computer program, and the processing unit is used for running the computer program to execute the method.
The embodiment of the application also provides a storage medium in which a computer program is stored, wherein the computer program is arranged to perform the method described above when run.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components, for example, one physical component may have a plurality of functions, or one function or step may be cooperatively performed by several physical components. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term "computer storage media" includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.