Disclosure of Invention
In view of the above, the present invention provides a data processing method, apparatus, computer device and storage medium, so as to solve the problem of low transmission efficiency of interleaved data and avoid the problem of blocking of the whole system caused by data interleaving.
In a first aspect, the invention provides a data processing method, which comprises: receiving M pieces of interleaved data returned by N slave devices, wherein the M pieces of interleaved data comprise M pieces of read data and M corresponding data identifiers, each piece of read data and each data identifier being generated and fed back by a slave device based on a read command issued by a master device, with M ≥ N > 1; obtaining L buffers pre-divided in a storage resource pool, each buffer corresponding to one data identifier, with N ≤ L ≤ M; storing, according to the M data identifiers, the interleaved data belonging to the same data identifier in the buffer corresponding to that data identifier; and transmitting the interleaved data stored in the L buffers to the master device based on a preset priority transmission rule.
Based on the method of the first aspect, the M pieces of interleaved data are stored, according to the M data identifiers they contain, in the buffers corresponding to those identifiers; that is, all interleaved data belonging to the same data identifier is stored in one buffer, realizing grouped storage of the M pieces of interleaved data. After some or all of the L buffers have stored interleaved data, the interleaved data stored in the L buffers is transmitted to the master device based on a preset priority transmission rule, so that the master device receives the interleaved data stored in each buffer in the order given by that rule. Furthermore, after the master device has received the interleaved data with the same data identifier, it can pass that data to its upstream or downstream module, so that the module can recognize it. This solves the problem that interleaved data cannot be transmitted when an upstream or downstream module is incompatible with interleaving, ensures that the interleaved data conforms to the transmission protocol specification, preserves the transmission efficiency of the interleaved data to the greatest extent, and avoids blocking of the whole system caused by data interleaving.
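By way of a non-limiting illustration only, the following Python sketch shows the core idea of the first aspect, namely grouping interleaved read data by data identifier and forwarding it to the master device group by group. The function and variable names are assumptions of the sketch and do not correspond to any claimed implementation.

    from collections import defaultdict

    def process_interleaved_reads(interleaved_items, transmit):
        """Group interleaved read data by data identifier, then forward it to
        the master device one identifier at a time (earliest group first)."""
        buffers = defaultdict(list)   # one buffer per data identifier
        arrival_order = []            # order in which identifiers first appeared
        for data_id, read_data in interleaved_items:
            if data_id not in buffers:
                arrival_order.append(data_id)
            buffers[data_id].append(read_data)
        # Transmit every piece of one identifier before moving to the next, so
        # the master device never sees data of different identifiers interleaved.
        for data_id in arrival_order:
            for read_data in buffers[data_id]:
                transmit(data_id, read_data)

    # Pieces from two slaves arrive interleaved as identifiers 0,1,0,1 and are
    # forwarded as 0,0,1,1.
    process_interleaved_reads(
        [(0, "d0a"), (1, "d1a"), (0, "d0b"), (1, "d1b")],
        transmit=lambda data_id, data: print(data_id, data),
    )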
In an optional implementation, the M pieces of interleaved data comprise an ith piece of interleaved data, where i ≤ M, and the ith piece comprises an ith data identifier; L candidate buffers are further pre-divided in the storage resource pool, in one-to-one correspondence with the L buffers. Storing the interleaved data belonging to the same data identifier in the buffer corresponding to that data identifier according to the M data identifiers comprises: storing the ith piece of interleaved data in the buffer corresponding to the ith data identifier according to the ith data identifier included in the ith piece of interleaved data.
Storing the ith piece of interleaved data in the buffer corresponding to the ith data identifier according to the ith data identifier comprises: judging whether a target candidate buffer exists among the L candidate buffers, the target candidate buffer being a candidate buffer that stores interleaved data including the ith data identifier.
If the target candidate buffer exists, the ith piece of interleaved data is stored in the target candidate buffer, and, when the buffer corresponding to the target candidate buffer is not full, the interleaved data stored in the target candidate buffer is moved into that buffer.
If the target candidate buffer does not exist, it is judged whether a target buffer exists among the L buffers, the target buffer being a buffer that stores interleaved data including the ith data identifier.
If the target buffer does not exist, the ith piece of interleaved data is stored in any empty buffer among the L buffers.
If the target buffer exists, it is judged whether the target buffer is in a full state.
If so, the ith piece of interleaved data is stored in the candidate buffer corresponding to the target buffer, and, when the target buffer is no longer full, the interleaved data stored in that candidate buffer is moved into the target buffer.
If not, the ith piece of interleaved data is stored in the target buffer.
Based on this method, L candidate buffers can be pre-divided, so that reception of data is not stopped prematurely when a buffer is already full of interleaved data, which increases the amount of interleaved data the L buffers can absorb. After a buffer releases space and is no longer full, the data in its corresponding candidate buffer is written into it, improving data transmission efficiency.
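As a hedged illustration of the candidate-buffer mechanism described above, the following Python sketch models one buffer together with its candidate buffer: overflow data is parked in the candidate buffer and written back once the buffer is no longer full. The class name, fixed depth and deque-based model are assumptions of the sketch, not a definitive implementation.

    from collections import deque

    class BufferPair:
        """Illustrative model of one id_buffer with its candidate buffer."""

        def __init__(self, depth):
            self.depth = depth
            self.main = deque()      # the buffer (id_buffer)
            self.reserve = deque()   # the candidate buffer (reserved_buffer)

        def store(self, beat):
            # If the candidate buffer already holds data, or the buffer is full,
            # new data goes to the candidate buffer to preserve ordering.
            if self.reserve or len(self.main) >= self.depth:
                self.reserve.append(beat)
            else:
                self.main.append(beat)

        def pop_for_transmit(self):
            beat = self.main.popleft()
            # Once the buffer releases space and is no longer full, data from
            # the candidate buffer is written back into it.
            while self.reserve and len(self.main) < self.depth:
                self.main.append(self.reserve.popleft())
            return beat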
In an alternative embodiment, when any one of the L candidate buffers is in a full state, reception of the interleaved data returned by the N slave devices is stopped.
Based on this method, when one of the L candidate buffers is in a full state, interleaved data stored in the full buffers has not yet been transmitted to the master device at the current moment and no new interleaved data can be accepted; reception of the interleaved data returned by the N slave devices is therefore stopped, avoiding a deadlock.
In an alternative embodiment, each piece of interleaved data further comprises a read status bit and a read flag bit, the read status bit indicating whether the read data was read successfully, and the read flag bit indicating whether all interleaved data of the same data identifier has been stored in the L buffers.
Based on this method, whether the read data was read successfully can be judged from the read status bit, and whether all interleaved data of the same data identifier has been stored in the L buffers can be judged from the read flag bit, which facilitates the later determination of the order in which the interleaved data stored in the L buffers is transmitted.
In an alternative implementation, when interleaved data is stored in the L buffers, transmitting the interleaved data stored in the L buffers to the master device based on the preset priority transmission rule comprises: when only one of the L buffers stores interleaved data, transmitting the interleaved data stored in that buffer to the master device based on the preset priority transmission rule; or, when a plurality of the L buffers store interleaved data, transmitting the interleaved data stored in the buffers holding the first pieces of interleaved data of different data identifiers to the master device in the order in which those first pieces were stored; and, when none of the plurality of buffers holds a first piece of interleaved data, transmitting the interleaved data stored in the buffers holding interleaved data that includes the target read flag bit to the master device in the order in which that interleaved data was stored, the target read flag bit indicating that all interleaved data of the same data identifier has been stored in the L buffers.
Based on this method, when only one of the L buffers stores interleaved data, the interleaved data stored in that buffer is transmitted to the master device; or, when a plurality of the L buffers store interleaved data, the interleaved data stored in the buffers holding the first pieces of interleaved data of different data identifiers is transmitted to the master device first, and then the interleaved data stored in the buffers holding interleaved data that includes the target read flag bit is transmitted. In this way, after the interleaved data of one data identifier has been transmitted, the interleaved data of the next data identifier is transmitted, so that the M pieces of interleaved data are transmitted in sequence.
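A minimal sketch of the selection step implied by the above priority rule is given below, assuming the buffers and the storage time of each identifier's first piece are tracked in dictionaries; these structures are illustrative only.

    def next_buffer_to_serve(buffers, first_arrival_time):
        """Pick the buffer to transmit from next.

        buffers: dict mapping data identifier -> list of pending pieces.
        first_arrival_time: dict mapping data identifier -> time at which the
        first piece of that identifier was stored. Names are illustrative.
        """
        pending = [data_id for data_id, pieces in buffers.items() if pieces]
        if not pending:
            return None
        if len(pending) == 1:
            return pending[0]   # only one buffer holds data: serve it directly
        # Several buffers hold data: serve the identifier whose first piece was
        # stored earliest, matching the storage-time ordering described above.
        return min(pending, key=lambda data_id: first_arrival_time[data_id])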
In an alternative embodiment, the buffers storing interleaved data that includes the target read flag bit comprise a first buffer and a second buffer; the storage time of the interleaved data including the target read flag bit in the first buffer is a first storage time, and that in the second buffer is a second storage time. Transmitting the interleaved data including the target read flag bit stored in the buffers to the master device comprises: if the first storage time is earlier than the second storage time, transmitting the interleaved data stored in the first buffer to the master device, and, when the first buffer is detected to be empty and to have transmitted the interleaved data including the target read flag bit, transmitting the interleaved data stored in the second buffer to the master device.
Based on the above method, when the first buffer is empty and has transmitted the interleaved data including the target read flag bit, the interleaved data stored in the second buffer is transmitted to the master device, ensuring that the interleaved data stored in the next buffer is transmitted only after the interleaved data stored in one buffer has been fully transmitted.
In an alternative implementation, before the L pre-divided buffers are acquired in the storage resource pool, the method further comprises: obtaining the maximum quantity among the M pieces of interleaved data, determining the depth of each buffer according to that maximum quantity, and determining the number L of buffers to be pre-divided in the storage resource pool according to the total data amount of the M pieces of interleaved data and the depth.
Based on this method, the number of pre-divided buffers can be determined accurately from the actual requirements of the interleaved data to be transmitted, avoiding excessive occupation of resources in the storage resource pool.
In a second aspect, the invention provides a data processing apparatus, which comprises an acquisition module and a processing module. The acquisition module is configured to receive M pieces of interleaved data returned by N slave devices, each piece of interleaved data comprising read data returned by a slave device in response to a read command issued by a master device and a data identifier, with M ≥ N > 1. The acquisition module is further configured to obtain L buffers pre-divided in a storage resource pool, each buffer corresponding to one data identifier, with N ≤ L ≤ M. The processing module is configured to store, according to the M data identifiers included in the M pieces of interleaved data, each piece of interleaved data in the buffer corresponding to its data identifier. The processing module is further configured to transmit, when interleaved data is stored in the L buffers, the interleaved data stored in the L buffers to the master device based on a preset priority transmission rule.
In a third aspect, the present invention provides a computer device comprising a memory and a processor, the memory and the processor being communicatively coupled to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the data processing method of the first aspect or any of its corresponding embodiments.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the data processing method of the first aspect or any of its corresponding embodiments.
In a fifth aspect, the present invention provides a computer program product comprising computer instructions for causing a computer to perform the data processing method of the first aspect or any of its corresponding embodiments.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The technical solution provided by the embodiments of the present invention is applied to a scenario of data transmission based on the Advanced eXtensible Interface (AXI) protocol; in this scenario, interleaved data transmission can occur when data is transmitted over the AXI protocol.
Data interleaving refers to interleaving data between multiple data requests to improve the efficiency and throughput of bus communications. This approach allows multiple data requests to be processed in one clock cycle and returned in an interleaved fashion, thereby reducing response time and improving performance. Data interleaving transmission is an important mechanism in the AXI protocol, and improves the efficiency and performance of bus communication by processing multiple data requests, interleaving data, out-of-order execution, burst transmission, and the like at the same time. The transmission mode is beneficial to optimizing data interaction in the system, reducing waiting time and improving data throughput, thereby improving the overall performance of the system.
The AXI protocol may be the AXI3 protocol or the AXI4 protocol. Interleaved data transmission can occur under both the AXI3 and the AXI4 protocol, and the AXI3 protocol can additionally produce write data interleaving.
Read data interleaving is one form of out-of-order read data output. In burst mode, a plurality of slave ports transmit data simultaneously, so consecutive read data on the master port of the master device may not come from the same slave port; the slave port to which each piece of interleaved data belongs can be distinguished by its read identifier.
During write data interleaving, write data with different identifiers can be transmitted in an interleaved manner, but write data with the same identifier must remain in order.
In the current data interleaving transmission process, when interleaved transmission is carried out in single-slave mode, the transmission efficiency of the interleaved data is low, and when a command is abnormal, the data transmission fails.
In order to solve the above technical problems, the embodiments of the present invention provide a data processing method, which stores each piece of interleaved data in the buffer corresponding to the data identifier it includes and orders the resulting groups for transmission, so as to solve the problem of low interleaved-data transmission efficiency and avoid blocking of the whole system caused by data interleaving.
The method provided by embodiments of the present application will now be described with reference to data processing system 10 shown in FIG. 1. Fig. 1 is only a schematic diagram, and does not limit the applicable scenario of the technical solution provided by the present application.
With reference now to FIG. 1, FIG. 1 is a diagram of a topology of a data processing system in accordance with an embodiment of the present invention. In fig. 1, a data processing system 10 may include a data processing apparatus 101, a master device 102, a first slave device 103, a second slave device 104, and a storage resource pool 105.
The data processing apparatus 101 may be any device having a communication function and managing a storage resource pool. For example, the data processing device may be a memory controller.
The master device 102 may be any device having communication and computing capabilities. The master device 102 is configured to issue a read command to the first slave device 103 or the second slave device 104, and receive read data returned by the data processing apparatus 101.
The first slave device 103 or the second slave device 104 may be any device having a communication function and a calculation function. The first slave device 103 or the second slave device 104 is used to return read data to the storage resource pool 105.
The storage resource pool 105 may be any device having a storage function. For example, the storage resource pool may be a physical storage device, such as a hard disk or the like.
The data processing system 10 shown in FIG. 1 is for illustration only and is not intended to limit the scope of the present application. Those skilled in the art will appreciate that data processing system 10 may include other masters or slaves during a particular implementation, and that the number of masters and slaves may be determined based on particular needs, without limitation.
According to an embodiment of the present invention, there is provided a data processing method embodiment, it being noted that the steps shown in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that herein.
In this embodiment, a data processing method is provided, which may be used in the data processing apparatus 101 described above, and fig. 2 is a flowchart of a data processing method according to an embodiment of the present invention, as shown in fig. 2, where the flowchart includes the following steps:
S201, receiving M pieces of interleaved data returned by N slave devices.
The M pieces of interleaved data comprise M pieces of read data and M corresponding data identifiers, each piece of read data and each data identifier being generated and fed back by a slave device based on a read command issued by the master device, with M ≥ N > 1. Each piece of interleaved data includes one piece of read data and one data identifier. Further, each piece of interleaved data also includes a read status bit and a read flag bit.
In one example, before S201, the method further comprises the master device issuing a read command to at least one slave device. Each slave device receives the read command, generates interleaved data, and transmits the interleaved data to the data processing apparatus.
The read command is an AXI-protocol command for reading data from at least one slave device.
The read data may be the rdata signal in the R channel of the AXI protocol.
The data identifier is used to identify each piece of interleaved data. The data identifier may be the rid identifier in the R channel of the AXI protocol.
The read status bit is used to indicate whether the master device successfully read the data in the slave device. The read status bit may be the rresp signal in the R channel of the AXI protocol.
The read flag bit is used to indicate whether all interleaved data of the same data identifier has been stored in the L buffers. The read flag bit may be the rlast signal in the R channel of the AXI protocol.
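For illustration, the fields described above can be modelled as follows; the container class is an assumption of this sketch and not part of the AXI protocol.

    from dataclasses import dataclass

    @dataclass
    class InterleavedBeat:
        """Illustrative container for one piece of interleaved data; the field
        names follow the AXI R channel as described above."""
        rid: int      # data identifier of the transaction the piece belongs to
        rdata: int    # the read data payload
        rresp: int    # read status bit(s): whether the read was successful
        rlast: bool   # read flag bit: set on the final piece of its identifier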
S202, acquiring L buffers which are divided in advance in a storage resource pool.
Each buffer corresponds to one data identifier, and N ≤ L ≤ M. A buffer may also be referred to as an id_buffer.
In the embodiment of the invention, L candidate buffer areas are also divided in the storage resource pool in advance, and the L candidate buffer areas are in one-to-one correspondence with the L buffer areas. The candidate Buffer may also be referred to as a reserved_buffer.
In some alternative embodiments, before acquiring the L pre-divided buffers in the storage resource pool, the data processing apparatus obtains the maximum quantity among the M pieces of interleaved data, determines the depth of each buffer according to that maximum quantity, and determines the number L of buffers to be pre-divided in the storage resource pool according to the total data amount of the M pieces of interleaved data and the depth.
It will be appreciated that the amount of interleaved data a buffer can hold may be larger than any single piece of interleaved data.
Optionally, the number of buffers need not coincide with the maximum value of the data identifier, and the data processing apparatus may also adjust the number of buffers according to how frequently each data identifier is used.
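A minimal sketch of this sizing step, under the assumption that the buffer depth equals the largest number of pieces sharing one identifier and that L is derived from the total data amount divided by that depth:

    import math

    def plan_buffers(max_pieces_per_id, total_pieces):
        """Derive buffer depth and buffer count from the traffic to be handled
        (assumptions of this sketch, not a mandated formula)."""
        depth = max_pieces_per_id
        num_buffers = math.ceil(total_pieces / depth)
        return depth, num_buffers

    # e.g. at most 8 pieces per identifier and 32 pieces in total -> depth 8, L = 4
    print(plan_buffers(max_pieces_per_id=8, total_pieces=32))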
S203, according to the M data identifiers, storing the interleaved data belonging to the same data identifier in the buffer corresponding to that data identifier.
In some alternative embodiments, before storing each piece of interleaved data in the buffer corresponding to its data identifier, the data processing apparatus combines the rdata data, the rid identifier, the rlast signal bit and the rresp signal bit of the R channel into one multi-bit word, named port_m_data.
The bit width of port_m_data = rdata bit width + rid bit width + rlast bit width + rresp bit width. For example, as shown in FIG. 3, which is a schematic diagram of an id_buffer according to an embodiment of the present invention, the id_buffer contains a plurality of buffer groups, and each buffer group can store one port_m_data.
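The following sketch illustrates this packing in Python; the concrete bit widths and the field order are assumptions of the example, not values mandated by the embodiment.

    # Example bit widths only; the real widths come from the AXI configuration.
    RDATA_W, RID_W, RLAST_W, RRESP_W = 64, 4, 1, 2

    def pack_port_m_data(rdata, rid, rlast, rresp):
        """Concatenate the R-channel fields into one word, so that
        width(port_m_data) = width(rdata) + width(rid) + width(rlast) + width(rresp)."""
        word = rdata
        word = (word << RID_W) | rid
        word = (word << RLAST_W) | (1 if rlast else 0)
        word = (word << RRESP_W) | rresp
        return word

    def unpack_port_m_data(word):
        rresp = word & ((1 << RRESP_W) - 1); word >>= RRESP_W
        rlast = bool(word & ((1 << RLAST_W) - 1)); word >>= RLAST_W
        rid = word & ((1 << RID_W) - 1); word >>= RID_W
        return word, rid, rlast, rresp   # rdata, rid, rlast, rresp

    assert unpack_port_m_data(pack_port_m_data(0xABCD, 3, True, 0)) == (0xABCD, 3, True, 0)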
As shown in FIG. 4, FIG. 4 is a schematic flow chart of storing the ith piece of interleaved data among the M pieces into a buffer according to an embodiment of the invention, where i ≤ M. In FIG. 4, to store the ith piece of interleaved data in the buffer corresponding to the ith data identifier according to that identifier, the data processing apparatus performs the following steps (an illustrative code sketch of this flow is given after the step definitions below):
S401, judging whether a target candidate buffer exists among the L candidate buffers.
S402, if the target candidate buffer exists, storing the ith piece of interleaved data into the target candidate buffer, and, when the buffer corresponding to the target candidate buffer is not full, moving the interleaved data stored in the target candidate buffer into that buffer.
S403, if the target candidate buffer does not exist, judging whether a target buffer exists among the L buffers.
S404, if the target buffer does not exist, storing the ith piece of interleaved data into any empty buffer among the L buffers.
S405, if the target buffer exists, judging whether the target buffer is in a full state.
S406, if so, storing the ith piece of interleaved data into the candidate buffer corresponding to the target buffer, and, when the target buffer is no longer full, moving the interleaved data stored in that candidate buffer into the target buffer.
S407, if not, storing the ith piece of interleaved data into the target buffer.
The target candidate buffer is a candidate buffer that stores interleaved data including the ith data identifier.
In the embodiment of the invention, the target buffer is a buffer that stores interleaved data including the ith data identifier.
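A hedged Python sketch of the S401 to S407 decision flow is given below; the dictionaries, the drain helper and the list-based buffers are modelling assumptions and only approximate the hardware behaviour described above.

    def store_piece(data_id, piece, buffers, candidates, depth):
        """Sketch of S401-S407 for one piece of interleaved data.

        buffers / candidates: dicts mapping a buffer index to a list of
        (identifier, piece) entries; depth is the buffer capacity."""
        def holds(area, wanted_id):
            return any(entry_id == wanted_id for entry_id, _ in area)

        # S401/S402: a candidate buffer already holds this identifier.
        for idx, cand in candidates.items():
            if holds(cand, data_id):
                cand.append((data_id, piece))
                drain(idx, buffers, candidates, depth)
                return
        # S403: look for a buffer that already holds this identifier.
        for idx, buf in buffers.items():
            if holds(buf, data_id):
                if len(buf) >= depth:            # S405/S406: the buffer is full
                    candidates[idx].append((data_id, piece))
                else:                            # S407: the buffer is not full
                    buf.append((data_id, piece))
                return
        # S404: no buffer holds this identifier yet; use any empty buffer.
        for idx, buf in buffers.items():
            if not buf:
                buf.append((data_id, piece))
                return
        raise RuntimeError("no empty buffer available for a new identifier")

    def drain(idx, buffers, candidates, depth):
        """S402/S406: whenever buffer idx is not full, move data from its
        candidate buffer into it (also called after space is released)."""
        while candidates[idx] and len(buffers[idx]) < depth:
            buffers[idx].append(candidates[idx].pop(0))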
In the embodiment of the invention, the data processing apparatus can monitor the state of each buffer and each candidate buffer at the current moment in every clock cycle (clk). The state may be a full state, meaning the buffer is completely filled, or an empty state, meaning it holds no data.
Optionally, the data processing apparatus stops receiving the interleaved data returned by the N slave devices when any one of the L candidate buffers is in a full state.
To stop receiving the interleaved data returned by the N slave devices, the data processing apparatus can pull the RREADY signal low, which prevents further read data from being returned.
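As a simple illustration of this back-pressure, the following sketch derives an RREADY-like accept signal from the fill level of the candidate buffers; the function and parameter names are assumptions of the sketch.

    def compute_rready(candidate_buffers, candidate_depth):
        """RREADY stays high only while every candidate buffer still has room;
        if any candidate buffer is full, RREADY is pulled low and no further
        interleaved data is accepted."""
        return all(len(cand) < candidate_depth for cand in candidate_buffers.values())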
S204, based on a preset priority transmission rule, transmitting the interleaved data stored in the L buffer areas to the master device.
The preset priority transmission rule can be set according to actual needs.
In some alternative embodiments, when only one of the L buffers stores interleaved data, the data processing apparatus transmits the interleaved data stored in that buffer to the master device based on the preset priority transmission rule.
In some optional embodiments, when a plurality of the L buffers store interleaved data, the interleaved data stored in the buffers holding the first pieces of interleaved data of different data identifiers is transmitted to the master device, based on the preset priority transmission rule, in the order in which those first pieces were stored; and when none of the plurality of buffers holds a first piece of interleaved data, the interleaved data stored in the buffers holding interleaved data that includes the target read flag bit is transmitted to the master device in the order in which that interleaved data was stored, the target read flag bit indicating that all interleaved data of the same data identifier has been stored in the L buffers.
In an example, suppose the buffers storing interleaved data that includes the target read flag bit are a first buffer, whose interleaved data including the target read flag bit was stored at a first storage time, and a second buffer, whose interleaved data including the target read flag bit was stored at a second storage time. If the first storage time is earlier than the second storage time, the data processing apparatus may transmit the interleaved data stored in the first buffer to the master device, and, when it detects that the first buffer is empty and has transmitted the interleaved data including the target read flag bit, transmit the interleaved data stored in the second buffer to the master device.
Specifically, when storing interleaved data into the id_buffers, the data processing apparatus sets up an IDNUM state machine. When it detects that no interleaved data is stored in the L id_buffers, the IDNUM state machine is in an IDLE state; when it detects that interleaved data begins to be stored in the L id_buffers, the IDNUM state machine points a pointer at the id_buffer in which the interleaved data is stored.
It can be understood that the id_buffer pointed to by the pointer is the buffer from which interleaved data is transmitted at the current moment.
For example, when only one of the L buffers stores interleaved data, the data processing apparatus uses the IDNUM state machine to point the pointer at that id_buffer and transmits the interleaved data stored in it to the master device.
In one example, the data processing apparatus may further set up a dedicated register that records the order in which interleaved data including the target read flag bit arrives in the L buffers, writing the data identifiers of such interleaved data into the register in sequence.
The register allows the same data identifier to be written consecutively. For example, the register may hold the sequence 2, 1, 3, 3, where '2' is the identifier of the first piece of interleaved data including the target read flag bit among the L buffers, '1' is the identifier of the second such piece, the first '3' is the identifier of the third such piece, and the second '3' is the identifier of the fourth such piece.
It can be understood that the pointer switches only after data carrying the target read flag bit has been read out, and not at other times, which prevents the situation where a complete group has not finished transmitting but data with a different data identifier begins to be transmitted and interleaving reappears. When the data processing apparatus detects interleaved data entering a buffer, if the buffer last pointed to by the pointer is in the empty state and has sent the data carrying the target read flag bit, so that the pointer is waiting to be switched, and all other buffers are in the empty state, the pointer switches to the current buffer and data transmission starts.
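The pointer behaviour described above can be sketched as follows. The sketch simplifies by serving an identifier only after its piece carrying the target read flag bit has been stored, and the completed_order deque stands in for the dedicated register; all class and attribute names are illustrative assumptions.

    from collections import deque

    class TransmitScheduler:
        """Sketch of the pointer selection described above."""

        def __init__(self):
            self.buffers = {}               # data identifier -> deque of (piece, is_last)
            self.completed_order = deque()  # identifiers whose last piece has arrived
            self.current_id = None          # the pointer: identifier being served

        def on_store(self, data_id, piece, is_last):
            self.buffers.setdefault(data_id, deque()).append((piece, is_last))
            if is_last:
                self.completed_order.append(data_id)  # the same id may repeat

        def next_piece(self):
            # The pointer only moves when no group is currently being served.
            if self.current_id is None and self.completed_order:
                self.current_id = self.completed_order.popleft()
            if self.current_id is None:
                return None                           # IDLE: nothing ready yet
            piece, is_last = self.buffers[self.current_id].popleft()
            served_id = self.current_id
            if is_last:
                self.current_id = None                # wait for the next group
            return served_id, piece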
Based on the above method from S201 to S204, the data processing apparatus may receive M pieces of interleaved data returned from N slave devices, obtain L buffers divided in advance in the storage resource pool, store each piece of interleaved data in a buffer corresponding to a data identifier included in each piece of interleaved data according to M data identifiers included in the M pieces of interleaved data, and transmit the interleaved data stored in the L buffers to the master device based on a preset priority transmission rule in case that the interleaved data is stored in the L buffers.
In this way, the M pieces of interleaved data are stored, according to the M data identifiers, in the buffers corresponding to the identifiers they include; that is, interleaved data with the same data identifier is stored in one buffer, grouping the M pieces. At the same time, the interleaved data stored in the L buffers is transmitted to the master device based on the preset priority transmission rule, i.e. the M pieces are ordered before being transmitted, so that the interleaved data read by the master device can be recognized by modules that are incompatible with interleaved data. This ensures that the interleaved data conforms to the protocol specification, preserves the transmission efficiency of the interleaved data to the greatest extent, and avoids blocking of the whole system caused by data interleaving.
In this embodiment, a data processing device is further provided, and the device is used to implement the foregoing embodiments and preferred embodiments, and will not be described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The embodiment provides a data processing device, as shown in fig. 5, fig. 5 is a block diagram of a data processing device according to an embodiment of the present invention, where the device includes:
The acquisition module 501 is configured to receive M pieces of interleaved data returned by N slave devices, where the M pieces of interleaved data comprise M pieces of read data and M corresponding data identifiers, each piece of read data and each data identifier being generated and fed back by a slave device based on a read command issued by the master device, with M ≥ N > 1.
The acquisition module 501 is further configured to obtain L buffers pre-divided in the storage resource pool, where each buffer corresponds to one data identifier and N ≤ L ≤ M.
And the processing module 502 is configured to store, according to the M data identifiers, interleaved data belonging to the same data identifier in a buffer corresponding to the same data identifier.
The processing module 502 is further configured to transmit the interleaved data stored in the L buffers to the master device based on a preset priority transmission rule.
In some alternative embodiments, the M pieces of interleaved data include an ith piece of interleaved data, where i ≤ M and the ith piece includes an ith data identifier; L candidate buffers are further pre-divided in the storage resource pool, in one-to-one correspondence with the L buffers; and the processing module 502 is specifically configured to store the ith piece of interleaved data in the buffer corresponding to the ith data identifier according to the ith data identifier included in the ith piece.
To store the ith piece of interleaved data in the buffer corresponding to the ith data identifier according to that identifier, the processing module 502 is further specifically configured as follows:
The processing module 502 is further specifically configured to determine whether a target candidate buffer exists in the L candidate buffers, where the target candidate buffer stores interleaved data including the ith data identifier.
The processing module 502 is further specifically configured to store the ith interleaved data into the target candidate buffer if the target candidate buffer exists, and store the interleaved data stored in the target candidate buffer into the buffer corresponding to the target candidate buffer when the buffer corresponding to the target candidate buffer is in a state of not being full.
The processing module 502 is further specifically configured to determine whether a target buffer exists in the L buffers if the target candidate buffer does not exist, where the target buffer stores interleaved data including the ith data identifier;
The processing module 502 is further specifically configured to store the ith piece of interleaved data into any empty buffer among the L buffers if the target buffer does not exist.
The processing module 502 is further specifically configured to determine whether the state of the target buffer is a full state if the target buffer exists.
The processing module 502 is further specifically configured to, if the target buffer is in the full state, store the ith piece of interleaved data into the candidate buffer corresponding to the target buffer, and, when the target buffer is no longer full, store the interleaved data stored in that candidate buffer into the target buffer.
The processing module 502 is further specifically configured to store the ith interleaved data in the target buffer if not.
In some optional embodiments, the processing module 502 is further configured to stop receiving the interleaved data returned by the N slave devices when any one of the L candidate buffers is in a full state.
In some alternative embodiments, each piece of interleaved data further comprises a read status bit and a read flag bit, the read status bit indicating whether the read data was read successfully, and the read flag bit indicating whether all interleaved data of the same data identifier has been stored in the L buffers.
In some optional embodiments, the processing module 502 is further specifically configured to, when only one of the L buffers stores interleaved data, transmit the interleaved data stored in that buffer to the master device based on the preset priority transmission rule.
Or, the processing module 502 is further specifically configured to, when a plurality of the L buffers store interleaved data, transmit, based on the preset priority transmission rule, the interleaved data stored in the buffers holding the first pieces of interleaved data of different data identifiers to the master device in the order in which those first pieces were stored.
The processing module 502 is further specifically configured to, when none of the plurality of buffers holds a first piece of interleaved data, transmit the interleaved data stored in the buffers holding interleaved data that includes the target read flag bit to the master device in the order in which that interleaved data was stored, the target read flag bit indicating that all interleaved data of the same data identifier has been stored in the L buffers.
In some alternative embodiments, the buffers storing interleaved data that includes the target read flag bit comprise a first buffer and a second buffer, the storage time of that interleaved data in the first buffer being a first storage time and that in the second buffer being a second storage time. The processing module 502 is further specifically configured to transmit the interleaved data stored in the first buffer to the master device if the first storage time is earlier than the second storage time, and the processing module 502 is further specifically configured to transmit the interleaved data stored in the second buffer to the master device when it is detected that the first buffer is empty and has transmitted the interleaved data including the target read flag bit.
In some alternative embodiments, before the L pre-divided buffers are acquired in the storage resource pool, the acquisition module 501 is further configured to obtain the maximum quantity among the M pieces of interleaved data, the processing module 502 is further configured to determine the depth of each buffer according to that maximum quantity, and the processing module 502 is further configured to determine the number L of buffers to be pre-divided in the storage resource pool according to the total data amount of the M pieces of interleaved data and the depth.
Further functional descriptions of the above respective modules and units are the same as those of the above corresponding embodiments, and are not repeated here.
The data processing apparatus in this embodiment is presented in the form of functional units, where a unit refers to an ASIC (Application Specific Integrated Circuit), a processor and memory that execute one or more software or firmware programs, and/or other devices that can provide the above-described functions.
The embodiment of the invention also provides a computer device equipped with the data processing apparatus shown in FIG. 5.
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of a computer device according to an alternative embodiment of the present invention. As shown in FIG. 6, the computer device includes one or more processors 10, a memory 20, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the computer device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In some alternative embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple computer devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 10 is illustrated in FIG. 6.
The processor 10 may be a central processor, a network processor, or a combination thereof. The processor 10 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
The memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform the methods shown in the above embodiments.
The memory 20 may include a storage program area that may store an operating system, application programs required for at least one function, and a storage data area that may store data created according to the use of the computer device, etc. In addition, the memory 20 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, memory 20 may optionally include memory located remotely from processor 10, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The memory 20 may comprise volatile memory, such as random access memory, or nonvolatile memory, such as flash memory, hard disk or solid state disk, or the memory 20 may comprise a combination of the above types of memory.
The computer device also includes a communication interface 30 for the computer device to communicate with other devices or communication networks.
The embodiments of the present invention also provide a computer-readable storage medium. The method according to the above embodiments may be implemented in hardware or firmware, or as computer code that can be recorded on a storage medium, or as computer code originally stored on a remote storage medium or a non-transitory machine-readable storage medium, downloaded through a network, and stored on a local storage medium, so that the method described herein can be processed by software stored on a storage medium using a general-purpose computer, a special-purpose processor, or programmable or special-purpose hardware. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random-access memory, a flash memory, a hard disk, a solid-state disk, or the like; further, the storage medium may also include a combination of the above types of memory. It will be appreciated that a computer, processor, microprocessor controller or programmable hardware includes a storage element that can store or receive software or computer code which, when accessed and executed by the computer, processor or hardware, implements the methods illustrated in the above embodiments.
Portions of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or aspects in accordance with the present application by way of operation of the computer. Those skilled in the art will appreciate that the existence of computer program instructions in a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and accordingly, the manner in which computer program instructions are executed by a computer includes, but is not limited to, the computer directly executing the instructions, or the computer compiling the instructions and then executing the corresponding compiled programs, or the computer reading and executing the instructions, or the computer reading and installing the instructions and then executing the corresponding installed programs. Herein, a computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.