CN119276819A - A cache clearing method and device supporting dynamic link switching scenarios - Google Patents
A cache clearing method and device supporting dynamic link switching scenarios
- Publication number
- CN119276819A (application CN202411381857.2A)
- Authority
- CN
- China
- Prior art keywords
- queue
- time slot
- slot information
- receiving end
- dynamic link
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/32—Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/39—Credit based
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The application provides a buffer emptying method and device supporting dynamic link switching scenarios. When the sending end of a dynamic link initiates a queue-based credit request, the receiving end feeds back credit information, derived from its buffer state, together with local time slot information that identifies the switching state of the queue. If the credit information and time slot information received by the sending end meet a first predefined condition, the sending end transmits message data and the associated time slot information to the queue of the receiving end. The receiving end then checks the state of the message data and discards it when the time slot information meets a second predefined condition. The technical scheme of the application realizes hardware-adaptive cache emptying.
Description
Technical Field
The application belongs to the field of network transmission, and particularly relates to a cache emptying method and device supporting dynamic link switching scenarios.
Background
The Credit system is a flow control mechanism that prevents the sender from transmitting so much data that the receiver's buffer overflows. In a typical application scenario, the data sending end must apply to the data receiving end for permission to send, that is, the receiving end must have available receive Credit; only after obtaining this permission can the sending end transmit actual data to the receiving end, as shown in Fig. 1. First, the sender initiates a Credit request based on a Queue. The receiving end returns valid Credit information for that queue, and finally the sending end transmits data to the corresponding queue at the receiving end.
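The credit handshake described above can be modeled compactly. The following Python sketch is illustrative only (the `Receiver`/`Sender` classes and method names are assumptions, not part of the patent); it shows a sender that may transmit only as much data as the receiver has granted:

```python
class Receiver:
    """Grants credits based on remaining buffer space for one queue."""

    def __init__(self, buffer_size):
        self.free = buffer_size  # free buffer slots for this queue

    def credit_request(self, amount):
        # Grant no more than the space that is actually free,
        # so the sender can never overflow the buffer.
        grant = min(amount, self.free)
        self.free -= grant
        return grant


class Sender:
    def __init__(self, receiver):
        self.receiver = receiver
        self.credits = 0

    def request_credit(self, amount):
        self.credits += self.receiver.credit_request(amount)

    def send(self, packets):
        # Only as many packets as granted credits may be sent.
        sendable = packets[:self.credits]
        self.credits -= len(sendable)
        return sendable
```

With a 4-slot buffer, a request for 10 credits yields only 4, so at most 4 packets leave the sender.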
Fig. 2 shows the first dynamic link switching scenario. When the receiving end closes a Queue, three kinds of data must be emptied: the local buffer of the corresponding queue, the Credit data already returned for it, and the data received after buffering.
Fig. 3 shows the second dynamic link switching scenario. When the receiving end performs open, close, open, close operations on a Queue in succession, all data from that operation period must be emptied; this again comprises the local buffer of the corresponding queue, the returned Credit data, and the data received after buffering.
If the Queue is re-opened before the data marked for discarding during the first Queue Disable has been completely dropped, the receiving end cannot easily determine whether incoming data is new data or old data that should still be discarded.
Disclosure of Invention
The application aims to provide a cache emptying method and device supporting dynamic link switching scenarios, so as to realize hardware-adaptive cache emptying.
According to a first aspect of the present application, there is provided a method for flushing a buffer memory supporting a dynamic link switch scenario, including:
When a sending end of a dynamic link initiates a credit request based on a queue, feeding back credit information to the sending end and feeding back local time slot information according to buffer zone state information of a receiving end of the dynamic link, wherein the time slot information is used for identifying a switching state of the queue;
If the credit information and the time slot information received by the sending end meet the first predefined condition, sending message data and the associated time slot information to a queue of the receiving end;
And carrying out state check on the message data at a receiving end, and discarding the message data when the time slot information meets a second predefined condition.
In an alternative embodiment, after sending the message data and the associated timeslot information to the queue of the receiving end, the method further includes:
if the time slot information indicates that the queue is marked as discard, the sending end discards the message data of the queue at the sending egress.
In an alternative embodiment, when the timeslot information meets a second predefined condition, discarding the packet data further includes:
discarding the message data when the time slot information indicates that the queue is in a closed state or the receiving end requests not to receive data;
and when the time slot information indicates that the queue is in a pending state, discarding the message data received while the queue was previously in the enabled state, thereby completing the emptying of the queue.
In an alternative embodiment, after the queue emptying is completed, the queue is re-enabled when the receiving end receives a message in the pending state.
In an alternative embodiment, after the queue emptying is completed, the method further comprises:
if the receiving end does not receive a pending-state message within a predefined time period, forcing the queue into the enabled state.
According to a second aspect of the present application, there is provided a buffer flushing device supporting a dynamic link switch scenario, including:
The feedback unit is used for feeding back credit information to the sending end and feeding back local time slot information according to buffer area state information of the receiving end of the dynamic link when the sending end of the dynamic link initiates a credit request based on the queue, wherein the time slot information is used for identifying the switching state of the queue;
a packet sending unit, configured to send message data and associated time slot information to a queue of a receiving end when the credit information and the time slot information received by the sending end meet a first predefined condition;
and the checking unit is used for performing state checking on the message data at the receiving end, and discarding the message data when the time slot information meets a second predefined condition.
Compared with the related art, the technical scheme of the application has at least the following advantages:
The hardware-adaptive cache emptying scheme lets software control the emptying of the hardware cache simply by configuring the Queue switch, reducing direct software involvement in hardware operation and lowering system complexity. The hardware performs cache management autonomously according to preset conditions, so software need not continuously monitor the hardware state; this improves the system's intelligence and autonomy while freeing software resources for higher-level tasks;
only 2 bits of message-state storage are added per message, so relative to a 256-Byte message the added storage overhead is only about 0.1%. The additional storage requirement is therefore small, storage efficiency is improved, and the overall cost of the system is reduced; for large-scale data processing, the savings in storage resources improve the system's sustainability and scalability;
the interaction between the sending end and the receiving end is simplified, reducing system complexity and improving communication efficiency; the system becomes easier for engineers to understand and maintain, shortening development time, lowering maintenance cost, and improving stability and reliability, providing an efficient and economical solution for data transmission and management.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application may be realized and attained by the structure and flow of the instrumentalities and methods pointed out in the specification and drawings.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings required in the description of the embodiments or the related art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present application, and that a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a flow chart of a credit flow control mechanism according to the related art.
Fig. 2 and 3 are schematic diagrams of two dynamic link switching scenarios according to the related art.
Fig. 4 is a flowchart of a cache flushing method supporting a dynamic link switch scenario according to an exemplary embodiment of the present application.
Fig. 5 is a diagram of a credit flow control mechanism with slot information according to an exemplary embodiment of the application.
Fig. 6 is a state machine diagram of the receiving end TS_LOC according to an example embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which are derived by a person skilled in the art from the embodiments according to the application without creative efforts, fall within the protection scope of the application.
Based on the above analysis, the application provides a buffer emptying method and device supporting dynamic link switching scenarios. The buffer emptying scheme is realized by setting a state machine at the receiving end and synchronizing message-level link state by carrying the state to the transmitting end packet by packet.
To solve the above problem, a Time Slot (TS) concept is introduced for defining the queue switch state. Illustratively, the following values define the different queue states:
0x0: queue enable, which means that the queue is in an enabled state, and normal data transmission can be performed;
The 0x1: queue is closed, which means that the queue is disabled and cannot transmit data;
0x2:Queue Pending, which indicates that the queue is in a state to be processed, and the received message needs to be checked and confirmed to ensure the integrity and correctness of the data;
An idle state, 0x3, indicates that the queue is in an idle state, i.e., no data transfer is in progress, and may be reset to an initial state.
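The four time-slot values above map naturally onto an enumeration. The sketch below is an assumed Python rendering for illustration; only the numeric encodings come from the text, while the symbolic names are chosen for readability:

```python
from enum import IntEnum


class TS(IntEnum):
    """Time Slot (TS) queue switch states, per the encodings in the text."""
    ENABLE = 0x0   # queue enabled, normal data transmission
    DISABLE = 0x1  # queue closed, no data may be transmitted
    PENDING = 0x2  # queue awaiting check of received messages
    IDLE = 0x3     # no transfer in progress; may reset to initial state
```

Using `IntEnum` keeps the members comparable with the raw 2-bit values carried in messages.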
A state machine for the local Time Slot (TS_LOC) is set at the receiving end to track and manage the state of the queues.
A TS identifier is added to each message at the transmitting end, indicating the state of the receiving end at the time the Credit was applied for. The transmitting end can decide whether and how to transmit data according to the current state of the receiving end. This mechanism helps ensure the reliability and efficiency of data transmission.
Referring to the flowchart of Fig. 4, the cache emptying method supporting dynamic link switching scenarios provided by the present application exemplarily includes:
Step 401, when a sending end of a dynamic link initiates a credit request based on a queue, feeding back credit information to the sending end and feeding back local time slot information according to buffer area state information of a receiving end of the dynamic link, wherein the time slot information is used for identifying a switching state of the queue.
The sending end first initiates a queue-based Credit request, inquiring whether the receiving end has enough buffer space to receive data. Each queue at the receiving end may have a different priority or characteristic; the sending end initiates Credit requests per queue, ensuring that no queue exceeds the processing capacity of the receiving end.
Referring to fig. 5, illustratively, after receiving a Credit (Credit) request, the receiving end returns valid Credit information according to the current use condition of the buffer, and notifies the transmitting end of the amount of data that can be safely transmitted without causing the buffer to overflow.
The receiving end also returns the state of its local time slot state machine (TS_LOC) to the sending end in the form of a TS value, so the sending end learns the current state of the receiving end: enabled, closed, pending, or idle.
Step 402, if the credit information and the time slot information received by the sending end meet the first predefined condition, the message data and the associated time slot information are sent to a queue of the receiving end.
The transmitting end decides whether to transmit data to the corresponding queue according to the received Credit information and the receiving end's TS state. If a queue is in the closed state, its messages are marked as discard, and the sending end drops that queue's message data at the sending egress to avoid wasting bandwidth and buffer resources.
This dropping strategy helps reduce data transmission promptly and keep the system running stably when the network is congested or the receiving end's processing capacity is insufficient.
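The sender-side egress drop in step 402 can be sketched as follows. This Python fragment is illustrative (the function and variable names are assumptions), and it assumes the TS value 0x1 marks a closed queue, as defined earlier:

```python
DISABLE = 0x1  # TS value for a closed (disabled) queue


def egress_filter(packets, ts_by_queue):
    """Drop packets destined for queues whose returned TS marks them
    closed, instead of spending link bandwidth on doomed data.

    packets: list of (queue_id, payload) tuples.
    ts_by_queue: mapping from queue_id to the TS value last returned
    by the receiving end for that queue.
    """
    out = []
    for queue_id, payload in packets:
        if ts_by_queue.get(queue_id) == DISABLE:
            continue  # marked as discard: drop at the sending egress
        out.append((queue_id, payload))
    return out
```

Packets for an open queue pass through unchanged; packets for a disabled queue never reach the link.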
Step 403, performing state check on the message data at the receiving end, and discarding the message data when the time slot information meets a second predefined condition.
Illustratively, the receiving end checks the TS state of each message against the TS_LOC state of the current queue to determine whether to discard it. A message is dropped if any of the following holds:
TS_LOC is 0x1 (Queue off);
TS is 0x1 (indicating that the receiving end requests not to receive data), or
TS_LOC is greater than TS (in the Pending state, messages received during the previous queue Enable are discarded).
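The three drop criteria above can be combined into a single predicate. The sketch below is an illustrative Python rendering under the TS encoding defined earlier (0x1 meaning closed); the function name is an assumption:

```python
def should_drop(ts_loc: int, ts: int) -> bool:
    """Return True if a message carrying slot state `ts` must be
    discarded when the queue's local slot state is `ts_loc`."""
    DISABLE = 0x1
    if ts_loc == DISABLE:
        return True   # queue is closed locally
    if ts == DISABLE:
        return True   # receiving end requested not to receive data
    if ts_loc > ts:
        return True   # Pending: message predates the queue re-enable
    return False
```

For example, with the queue Pending (TS_LOC = 0x2), a message tagged 0x0 from the previous Enable period is dropped, while a message tagged 0x2 is kept.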
This state check is key to ensuring data consistency and integrity: by discarding messages that do not meet the conditions, the receiving end avoids processing erroneous or invalid data, improving system reliability.
In general, a queue-based Credit system achieves efficient data transmission control through close collaboration between a sender and a receiver. By means of accurate state management and discarding strategies, the system can avoid buffer overflow and data loss while guaranteeing data transmission efficiency, and the performance and stability of the whole network are improved.
Illustratively, the state transitions of TS_LOC at the receiving end are as shown in Fig. 6:
in the initial IDLE (0x3) state, after reset, if the configuration indicates queue enable, the machine enters the Enable state; otherwise it enters the Disable state;
in the Queue Enable (0x0) state, if software closes the queue (Queue Disable) through configuration, the machine jumps to the Disable state;
in the Queue Disable (0x1) state, if software re-activates the queue through configuration, the machine jumps to the Pending state;
in the Queue Pending (0x2) state, receipt of a message whose state is 0x2 indicates that a new message has arrived after the queue was re-enabled; the previous messages have then been completely emptied and the machine can switch to the Enable state. To cover the case where no message arrives within a predefined period after re-enabling (so an empty queue could not otherwise be identified), a Timeout mechanism forces entry into the Enable state.
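The TS_LOC transitions described for Fig. 6 can be sketched as a small event-driven state machine. This Python model is illustrative (the class and event-handler names are assumptions), including the Timeout escape from the Pending state:

```python
class TsLoc:
    """Receiving-end TS_LOC state machine, modeled after Fig. 6."""
    ENABLE, DISABLE, PENDING, IDLE = 0x0, 0x1, 0x2, 0x3

    def __init__(self, enabled_at_reset: bool):
        # After reset, configuration decides Enable vs Disable.
        self.state = self.ENABLE if enabled_at_reset else self.DISABLE

    def on_config_disable(self):
        if self.state == self.ENABLE:
            self.state = self.DISABLE

    def on_config_enable(self):
        if self.state == self.DISABLE:
            self.state = self.PENDING  # old data must be flushed first

    def on_message(self, ts: int):
        # A message tagged 0x2 proves it was sent after the re-enable,
        # so all pre-switch data has been flushed.
        if self.state == self.PENDING and ts == self.PENDING:
            self.state = self.ENABLE

    def on_timeout(self):
        # No message within the predefined window: force Enable so an
        # empty queue does not stay stuck in Pending.
        if self.state == self.PENDING:
            self.state = self.ENABLE
```

A disable/enable cycle drives the machine Enable → Disable → Pending, and either a 0x2-tagged message or the timeout completes the return to Enable.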
Compared with the related art, the cache emptying method supporting dynamic link switching scenarios provided by the application offers the advantages already set out in the summary above: hardware-adaptive cache emptying driven by simple software configuration of the Queue switch, only 2 bits of per-message state overhead (about 0.1% of a 256-Byte message), and a simplified interaction between the sending end and the receiving end.
Accordingly, in its second aspect the present application exemplarily provides a buffer emptying apparatus supporting dynamic link switching scenarios, comprising the feedback unit, the packet sending unit, and the checking unit described in the second aspect of the summary above.
The above apparatus may implement the cache emptying method supporting dynamic link switching scenarios provided by the embodiments of the first aspect; for its specific implementation, reference may be made to the description of those embodiments, which is not repeated here.
It is understood that the structures, names and parameters described in the above embodiments are only examples. Those skilled in the art may also make and adjust the structural features of the above embodiments as desired without limiting the inventive concept to the specific details of the examples described above.
Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that modifications may be made to the technical solutions described in the foregoing embodiments or equivalents may be substituted for some of the technical features thereof, and these modifications or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application in essence.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411381857.2A CN119276819A (en) | 2024-09-29 | 2024-09-29 | A cache clearing method and device supporting dynamic link switching scenarios |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN119276819A (en) | 2025-01-07 |
Family
ID=94110580
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411381857.2A (CN119276819A, pending) | A cache clearing method and device supporting dynamic link switching scenarios | 2024-09-29 | 2024-09-29 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119276819A (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100061390A1 (en) * | 2008-09-11 | 2010-03-11 | Avanindra Godbole | Methods and apparatus for defining a flow control signal related to a transmit queue |
| US9007901B2 (en) * | 2012-02-09 | 2015-04-14 | Alcatel Lucent | Method and apparatus providing flow control using on-off signals in high delay networks |
| CN113141313A (en) * | 2020-01-19 | 2021-07-20 | 华为技术有限公司 | Congestion control method, device and system and storage medium |
| CN118282960A (en) * | 2022-12-30 | 2024-07-02 | 北京罗克维尔斯科技有限公司 | Credit-based traffic shaping method, device, electronic device and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |