WO2002014998A2 - Procede et dispositif de transfert de donnees dans un systeme de transfert de donnees - Google Patents
- Publication number: WO2002014998A2 (PCT/US2001/024641)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
- H04L49/901—Buffering arrangements using storage descriptor, e.g. read or write pointers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/19—Flow control; Congestion control at layers above the network layer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
Definitions
- The present invention relates generally to an improved data processing system and, in particular, to a method and apparatus for transferring data. Still more particularly, the present invention relates to a method and apparatus for transferring data to and from a storage device using data packets.
- Data is often transferred from one application to another application or to a storage device. Data transfers also may involve transferring the data from one computer to another computer or remote device. This type of transfer is facilitated through the use of a protocol. For example, if the transfer of data is over a network, the protocol TCP/IP may be used. If the transfer of data is over a device channel, the protocol SCSI may be used. Also, one protocol may be executed embedded within another protocol, for example sending data via the SCSI protocol over networks using the TCP/IP protocol. Applications typically send large numbers of identical commands when data is being read or written. Applications usually send a small amount of data with each command, compared with the capability of the destination device or application. Data sizes of 32k bytes and 64k bytes are typical sizes for such transfers.
- When a data packet is received, a protocol engine is employed to process the packet. Currently, the protocol engine identifies the command in the data packet and allocates a buffer to process the data in the data packet. This process is performed each time a data packet is received.
- the present invention recognizes that with the large number of identical commands and the individual processing of each data packet, performance is degraded. The degradation is caused by having to process each of the data packets as potentially unrelated events and allocate resources for each data packet. Therefore, it would be advantageous to have an improved method and apparatus for transferring data in which performance degradation associated with data packet processing is avoided.
- the present invention provides a method and apparatus for transferring data.
- a plurality of packets is received, wherein each of the plurality of packets includes a command and data. Packets within the plurality of packets having identical commands are identified to form a set of selected packets.
- a buffer is allocated to process the set of selected packets. Packets not having identical commands to those in the set of selected packets are allocated to other buffers for processing.
- Figure 1 depicts a pictorial representation of a distributed data processing system in which the present invention may be implemented
- FIG. 2 is a block diagram illustrating a data processing system in which the present invention may be implemented
- Figure 3 is a block diagram illustrating components used to process packets in accordance with a preferred embodiment of the present invention
- Figures 4 and 5 are diagrams illustrating read and write command protocol phases in accordance with a preferred embodiment of the present invention.
- Figures 6 and 7 are diagrams illustrating data flow in read command processing and write command processing in accordance with a preferred embodiment of the present invention
- Figure 8 is a flowchart of a process for grouping commands in data transfers in accordance with a preferred embodiment of the present invention.
- Figure 9 is an illustration of a data transfer through protocol stacks in accordance with a preferred embodiment of the present invention.
- Figure 10 is a diagram illustrating data structures used in decoding and receiving packets in accordance with a preferred embodiment of the present invention
- Figure 11 is a flowchart of a process in an application layer for generating a packet set and sending data using the packet set in accordance with a preferred embodiment of the present invention
- Figure 12 is a flowchart of a process in a physical layer used to generate a packet set in accordance with a preferred embodiment of the present invention
- Figure 13 is a flowchart of a process in a physical layer for sending packets from a packet set across a data channel in accordance with a preferred embodiment of the present invention
- Figure 14 is a flowchart of a process in a physical layer used to receive a packet in accordance with a preferred embodiment of the present invention
- Figure 15 is a flowchart of a process for handling packets in an application layer in accordance with a preferred embodiment of the present invention.
- Figure 16 is a flowchart for identifying buffer space in accordance with a preferred embodiment of the present invention.
- FIG. 1 depicts a pictorial representation of a distributed data processing system in which the present invention may be implemented.
- Distributed data processing system 100 is a network of computers in which the present invention may be implemented.
- Distributed data processing system 100 contains a network 102, which is the medium used to provide communications links between various devices and computers connected together within distributed data processing system 100.
- Network 102 may include permanent connections, such as wire or fiber optic cables, or temporary connections made through telephone connections.
- a server 104 is connected to network 102 along with storage unit 106.
- clients 108, 110, and 112 also are connected to network 102.
- These clients 108, 110, and 112 may be, for example, personal computers or network computers.
- a network computer is any computer, coupled to a network, which receives a program or other application from another computer coupled to the network.
- Clients 108, 110, and 112 are clients to server 104.
- Distributed data processing system 100 may include additional servers, clients, and other devices not shown.
- distributed data processing system 100 is the Internet with network 102 representing a worldwide collection of networks and gateways that use the TCP/IP suite of protocols to communicate with one another.
- distributed data processing system 100 also may be implemented using a number of different types of networks.
- the different computers or devices may be connected using physical links.
- The networks may be, for example, an intranet, a local area network (LAN), or a wide area network (WAN).
- The links found in the network, or those making up physical links, may be, for example, fiber optic links, packet switched communication links, Enterprise Systems Connection (ESCON) fibers, SCSI cable links, wireless communication links, and the like.
- Figure 1 is intended as an example, and not as an architectural limitation for the present invention.
- the present invention provides a method, apparatus, and computer implemented instructions for transferring data from one device to another device.
- This transfer of data may take place between two computers, such as server 104 and client 110.
- the transfer may be between a computer and a storage device, such as client 112 and storage unit 106.
- These transfers take place through network 102, which may be a traditional network or a direct connection between the two devices. Further, these transfers may take place between a host and a device located in the same data processing system, such as client 112.
- The mechanism of the present invention involves identifying a new packet or set of packets containing commands identical to those received in previous packets or sets of packets, and processing those newly received packets without incurring the additional overhead or allocation of resources generally required for receiving such packets. It is possible that a command and related data may be received in a single packet. However, the general case is for these to be received in a series of packets (not necessarily contiguous). The text of this invention will refer to the single packet, or to the series of related packets containing the command and data, in the singular as 'the packet'. For example, when a read command and data are received in a packet by a target device, resources, such as buffer space and processing time to decode the command, are used to direct the data to the appropriate location in the target device.
- The system then remembers that a read command has been processed and also remembers the location of the buffer containing a series of data spaces for later data buffering. If another packet containing a new read command and data is received by the target device, the system first checks to see if this is a 'remembered' command type. This is done by exclusive-ORing or masking the command with its appropriate parameters. A value of zero indicates an exact match, meaning the command is one that has been remembered. If it is, the resources already allocated to processing such commands are used to process the data for this packet. The system presumes that the decode of the new command is predetermined and that the data will go to the preassigned buffer. In this manner, additional processing time and resources are not required to process the new command.
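The exclusive-OR/mask comparison described above can be sketched as follows. The byte layout, the mask convention (0xFF for bytes that must match, 0x00 for parameter bytes to ignore), and the function name are illustrative assumptions, not details taken from the patent:

```python
def is_remembered(new_cmd: bytes, remembered_cmd: bytes, mask: bytes) -> bool:
    """Check whether a newly received command matches a remembered one.

    Bytes covered by the mask (0xFF) must match exactly; masked-out
    bytes (0x00), such as block addresses that vary per command, are
    ignored.  An XOR result of zero over every unmasked byte indicates
    an exact match, so the command is one that has been remembered.
    """
    if len(new_cmd) != len(remembered_cmd):
        return False
    return all((a ^ b) & m == 0
               for a, b, m in zip(new_cmd, remembered_cmd, mask))
```

With a mask that ignores the final (address) byte, two read commands that differ only in that byte compare as identical, while a different operation code does not.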
- FIG. 2 is a block diagram illustrating a data processing system in which the present invention may be implemented.
- Server 104 and clients 108-112 in Figure 1 may be implemented using data processing system 200.
- Data processing system 200 employs a peripheral component interconnect (PCI) local bus architecture.
- Although the depicted example employs a peripheral component interconnect (PCI) local bus, other bus architectures, such as Accelerated Graphics Port (AGP) and Industry Standard Architecture (ISA), may be used.
- Processor 202 and main memory 204 are connected to PCI local bus 206 through PCI bridge 208.
- PCI bridge 208 also may include an integrated memory controller and cache memory for processor 202.
- Connections to PCI local bus 206 may be made through direct component interconnection or through add-in boards.
- communications adapter 210, SCSI host bus adapter 212, and expansion bus interface 214 are connected to PCI local bus 206 by direct component connection.
- audio adapter 216, graphics adapter 218, and audio/video adapter 219 are connected to PCI local bus 206 by add-in boards inserted into expansion slots.
- Expansion bus interface 214 provides a connection for a keyboard and mouse adapter 220, modem 222, and additional memory 224.
- Small computer system interface (SCSI) host bus adapter 212 provides a connection for hard disk drive 226, tape drive 228, and CD-ROM drive 230.
- An operating system runs on processor 202 and is used to coordinate and provide control of various components within data processing system 200 in Figure 2.
- the operating system may be a commercially available operating system, such as Windows NT, which is available from Microsoft Corporation. Instructions for the operating system, the object-oriented operating system, and applications or programs are located on storage devices, such as hard disk drive 226, and may be loaded into main memory 204 for execution by processor 202.
- the mechanism of the present invention may be implemented as instructions executed by processor 202 to identify commands in packets as they arrive.
- the mechanism of the present invention also may be implemented as part of the host adapters 210 or 212, in a form that may be either software or hardware. Further, the mechanism of the present invention may be implemented in a protocol stack in a protocol engine used to process packets.
- The mechanism of the present invention also may be implemented in a manner that reduces the amount of decoding and processing within the protocol stack. Once a command type has been decoded, the parameters and resources used for processing that packet may be used as an example or template for another packet containing the same command type. As a result, the resources used by the protocol stack to process the next packet containing the same command type are reduced.
- the hardware in Figure 2 may vary depending on the implementation. Other internal hardware or peripheral devices, such as flash ROM (or equivalent nonvolatile memory) or optical disk drives and the like, may be used in addition to or in place of the hardware depicted in Figure 2. Also, the processes of the present invention may be applied to a multiprocessor data processing system.
- data processing system 200 also may be a notebook computer or hand held computer in addition to taking the form of a PDA.
- Data processing system 200 also may be a kiosk or a Web appliance.
- accelerator 300 works with protocol engine 302 to process packets received from a communications channel at an adapter, such as communications adapter 210 or SCSI adapter 212 in Figure 2.
- Protocol engine 302 processes packets received from a communications channel at an adapter, such as communications adapter 210 or SCSI adapter 212 in Figure 2.
- Processing overhead is reduced by recognizing and grouping a series of identical commands. Such a process avoids the need for the processor to be utilized to process each subsequent command after it is received.
- processing would include command decode to determine the exact command operation code, interpreting each accompanying parameter, and doing a buffer allocation operation. Therefore, the latency time induced by the time the protocol engine 302 spends processing each command also is reduced.
- A buffer, such as buffer 306, is allocated to hold the command and data from packet 304. Initially, several sets of data space are allocated and concatenated together in a series to form buffer 306. The processing required for allocating a serial set is much more efficient than that required for allocating each amount of space individually.
- the information in the packet 304 is passed to protocol engine 302, which decodes the information to transfer the data to the appropriate destination.
- Accelerator 300 identifies the command in the packet. Normally, another buffer, such as buffer 310, would be allocated for processing packet 308. Instead, if the command is of the same type as one for a packet already being processed, the command and data in the packet are placed into the next data space of the same buffer, such as buffer 306. As the data for several commands are placed in buffer 306, only one copy of the original command is placed in this buffer, where it is used as a prototype for comparison with subsequent commands; buffer 306 also contains the actual number of commands stored in the buffer in this example.
- 'Command' here means the operation code passed to the device and the associated parameters required for such an operation code.
- Once buffer 306 has been allocated for read commands of a specific format, all packets containing such read commands for the target are discovered via simple compare or mask logic and have their data placed into buffer 306. Further, the data does not have to be decoded by protocol engine 302. Once the destination has been identified and data is being transferred to the destination, additional data may be placed in the buffer and transferred to the destination without requiring additional resources and processing time from protocol engine 302. If packet 308 contains a different command type, a new buffer, such as buffer 310, is allocated to process packet 308. A new allocation of data space also is required, and must be added to the buffer, if the command type is the same but the current buffer being used for that command type is full or unable to accept additional data for processing.
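The buffer-sharing behavior just described can be sketched as a small class: one buffer per command type, with a new set of data spaces allocated only when the current buffer fills. The class name, method names, and dict-based buffer representation are illustrative assumptions, not taken from the patent:

```python
class Accelerator:
    """Sketch of how an accelerator (such as accelerator 300) groups
    identical commands into one buffer and allocates a fresh buffer
    only when a different command type arrives or the buffer is full."""

    def __init__(self, spaces_per_buffer=4):
        self.spaces = spaces_per_buffer
        self.buffers = {}   # command type -> list of data spaces in use
        self.flushed = []   # (command type, data) handed to the protocol engine

    def receive(self, cmd_type, data):
        # A new command type gets its own buffer; a remembered type
        # reuses the existing buffer with no further decode.
        buf = self.buffers.setdefault(cmd_type, [])
        buf.append(data)
        if len(buf) == self.spaces:          # buffer full
            self.flushed.append((cmd_type, buf))
            self.buffers[cmd_type] = []      # allocate fresh data spaces
```

A run with two read commands and one write shows the reads sharing one buffer while the write gets its own.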
- a buffer is selected to be used by several identical commands.
- the identical commands are for a data transfer to or from a specific device.
- As new commands are received, they are compared to the current command being processed.
- a different command may result in the allocation of a new buffer, such as buffer 310.
- the processes of the present invention are implemented in accelerator 300. These processes, however, also may be implemented in other locations, such as in protocol engine 302.
- With reference now to Figures 4 and 5, diagrams illustrating read and write command protocol phases are depicted in accordance with a preferred embodiment of the present invention. As these phases can be interpreted as logical steps in the processing of a command, some of them may not be included in all the protocols that may be used in the implementation of the present invention. For instance, the SCSI protocol includes all of these phases, while the ESCON and TCP/IP protocols do not implement any device ready phase 506, which represents a flow control phase in the processing of the commands. Also, many applications may send the data with the command when they have to transfer data.
- phases for read commands between host 400 and device 402 are illustrated. Host 400 and device 402 may be in the same computer or located on different computers.
- read commands involve a command phase 404, a data phase 406, and a status phase 408.
- Read commands are sent to device 402 from host 400 during command phase 404.
- Data is returned from device 402 to host 400 during data phase 406.
- a status phase occurs in which the status of the command is returned during status phase 408.
- The optimization occurs between command phase 404 and data phase 406. The optimizations in read operations allow the data for several read commands to be acquired in one operation at the side of device 402, and allow device 402 to send the data related to each of several subsequent read commands to host 400 without additional command processing or buffer allocation overhead.
- write commands are sent from host 500 to device 502.
- Write commands involve a command phase 504, a device ready phase 506, a data phase 508, and a status phase 510.
- the phases involved in write operations are similar to those described above for read operations.
- Device ready phase 506 is an additional phase used to indicate that the device is ready or available for data transfer.
- the optimizations provided by the present invention occur between command phase 504 and device ready phase 506. Further, optimizations occur between data phase 508 and status phase 510.
- the first optimization comes from the fact that, after a first write command has been received from host 500 by device 502, a buffer able to store the data for several of these commands has been allocated by device 502, and no additional processing is required before device 502 accepts the command and notifies host 500 by the way of device ready phase 506.
- the second optimization comes from the fact that device 502 does not try to move the data received from previous write commands before a full buffer has been filled. Instead, device 502 returns a continuation status as soon as the last message of data has been received and this allows host 500 to issue a new command as soon as it can.
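The two write-path optimizations above can be sketched together: because the shared buffer already exists, the device accepts each write and returns a continuation status immediately, deferring the downstream move until the buffer is full. The function name, status strings, and callback are illustrative assumptions:

```python
def handle_write(buffer, capacity, data, flush):
    """Device-side sketch of the write optimizations: no per-command
    buffer setup is needed, and a continuation status is returned as
    soon as the data lands, so the host may issue its next command at
    once.  The full buffer is moved downstream in one operation."""
    buffer.append(data)            # data goes straight into the shared buffer
    if len(buffer) < capacity:
        return "CONTINUE"          # continuation status: host may proceed
    flush(list(buffer))            # move the whole buffer at once
    buffer.clear()
    return "FLUSHED"
```

The host sees "CONTINUE" for every write until the buffer fills, at which point one flush covers all the accumulated data.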
- read operations involve an adapter 600, an accelerator 602, and a protocol engine 604.
- Adapter 600 is the hardware used to receive and send data. Commands are received by accelerator 602 for a requestor through adapter 600. Memory allocation occurs to allocate a buffer for transferring data. The buffer allocated is large enough to hold data for several read commands. Read commands are sent to protocol engine 604 with data being read from the media. If additional read commands are received, the data for these read commands also are placed in the buffer. When the buffer is filled, the data is returned to accelerator 602 and a decision is made whether or not to add data space to the buffer.
- the data is transferred to adapter 600 for transfer to the requestor asynchronously to the transfer of data between the accelerator 602 and the protocol engine 604. These additional transfers of data occur without requiring additional overhead for setting up buffers and without spending the time to decode and process the parameters for each read command received.
- adapter 700, accelerator 702, and protocol engine 704 are components involved in write operations to a device.
- a write operation involves allocating memory, such as buffer space to receive data.
- data is received from a host or requestor of the write operation with the device being the target of the data.
- the buffer is allocated such that data for multiple write commands may be stored in the buffer.
- As additional write commands are received, the data for these commands are stored in the buffer.
- When the buffer is full, the data is then written to the device through protocol engine 704.
- Additional write operations may occur without requiring the overhead involved in setting up additional buffers and without spending the time to decode and process the parameters.
- the data is written to the device or sent to the requestor after the buffer has been filled.
- data may be transferred continuously from the buffer to the device.
- With reference now to Figure 8, a flowchart of a process for grouping commands in data transfers is depicted in accordance with a preferred embodiment of the present invention.
- the processes illustrated in Figure 8 are implemented in an accelerator in these examples.
- The process begins by receiving a command and data (step 800). Only information about the data, such as its length, may be received at this point, since the application can assist in delivering data to the protocol engine without an additional copy.
- If a command or list of commands is currently being processed (step 802), the received command is compared to the current command or, in the case of a list, to each command in the list (step 804).
- The order of the list can be varied (e.g., most recently used, most frequently used, etc.) as the processing continues, such that the most probable match is found early in the compare process.
- A determination is then made as to whether the commands are the same or identical (step 806). This determination involves identifying whether the commands are of the same type. For example, the determination may be whether both commands are read commands. Further, the determination also may involve identifying whether the source of the commands is the same. The grouping of commands, in these examples, may be performed by the source application sending the command.
- Next, a determination is made as to whether buffer space is available in the buffer allocated for these commands (step 808). If buffer space is available, the command and the data are placed in the buffer (step 810). A determination is then made as to whether the buffer is full (step 812). If the buffer is full, the data is then transferred to the protocol engine (step 814), with the process terminating thereafter. If continuous data transfer is used in the process rather than buffer-full transfer, the decision at step 808, when there is insufficient buffer space available, will branch to a function that makes a further decision about allocating more data space to the buffer. If more space is allocated, the logic returns and re-asks the question at step 808.
- If more space is not allocated, then the logic flows on to step 816 as shown in Figure 8. With reference again to step 812, if the buffer is not full, the process terminates. With reference back to step 808, if buffer space is unavailable, a new buffer is allocated for this command type (step 816). The process then proceeds to step 810. Returning to step 806, if the command is not the same command, the process also proceeds to step 816 as described above. The process also proceeds to step 816 from step 802 if a command is not currently being processed.
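The Figure 8 flow can be condensed into a short function. The step numbers follow the flowchart; the function name, dict-of-buffers representation, and return values are illustrative assumptions:

```python
def process_command(cmd, data, buffers, capacity=4):
    """Sketch of the Figure 8 grouping flow.  `buffers` maps a command
    type to its current buffer; a command type not yet in `buffers`
    corresponds to the 'not the same command' path."""
    # Steps 802-806: is an identical command already being processed?
    if cmd not in buffers:
        buffers[cmd] = []                  # step 816: allocate a new buffer
    buf = buffers[cmd]
    if len(buf) >= capacity:               # step 808: no buffer space left
        buffers[cmd] = buf = []            # step 816: allocate a new buffer
    buf.append(data)                       # step 810: place command and data
    if len(buf) == capacity:               # step 812: buffer full?
        return "transfer"                  # step 814: hand to protocol engine
    return "buffered"
```

Two identical read commands fill and transfer one shared buffer, while a write command starts a buffer of its own.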
- the commands also may be grouped at the protocol engine associated with the application when data transfers occur using packets transferred across a network or other communications channel .
- an application located in one computer may request data transfer from a storage device using a network communications protocol, such as TCP/IP.
- protocol stack 900 includes an application layer 902, a presentation layer 904, a transport layer 906, a network layer 908, a data link layer 910, and a physical layer 912.
- protocol stack 900 is located in a client with an application that performs data transfers.
- Protocol stack 914 includes an application layer 916, a presentation layer 918, a transport layer 920, a network layer 922, a data link layer 924, and a physical layer 926.
- Protocol stack 914 is located in a system containing the storage device or application that is involved in the data transfer.
- Protocol stack 900 and protocol stack 914 may be found in a protocol engine, such as protocol engine 302 in Figure 3.
- application layer 902 sends data directly to physical layer 912 for transfer to protocol stack 914 across a communications channel.
- Pseudo block 928 is a packet generated by the application in application layer 902, which will be transformed into the appropriate format for transfer over a communications channel. This transformation typically includes placing the data from pseudo block 928 into a number of packets, as well as generating the header information needed to send the packets to the target.
- Pseudo block 928 includes a flag 930 and data 932.
- Data 932 is dummy data, which is processed by the different layers in protocol stack 900. This processing is used to encode the data in the pseudo block into the appropriate format and packets for transfer over a communications channel.
- a packet set is generated by physical layer 912.
- Physical layer 912 is configured to return the packet set to application layer 902.
- the application in application layer 902 that is to receive the packet set may be identified by flag 930.
- packet set 934 is returned to application layer 902 in buffer space 936.
- Application layer 902 will replicate or make copies of packet set 934, such as packet sets 938 and 940.
- Packet 942 is an example of a packet found in packet sets 934, 938, and 940.
- Packet 942 includes a header 944, which was generated by physical layer 912 to transport packet 942 to the target. Additionally, packet 942 includes a flag 946 and data 948, which forms a payload section for packet 942.
- Flag 946 may be located in header 944. Flag 946 is used to indicate that packets are preprocessed and ready for transfer across the communications channel. Flag 946 also may be unique to a particular transfer by a particular application, such that all packets containing the flag can be associated with that application.
- Application layer 902 will place data into packets in the packet set.
- this data includes a command and the data that is to be processed in response to the command.
- The data that is to be processed or transferred to another application or device is referred to as "customer data".
- the command and the customer data are placed into the data or payload areas in the packets for a packet set, such as packet set 934.
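Filling a preprocessed packet set can be sketched as follows: the physical layer has already generated headers and flags, and the application only drops the command and customer data into the payload areas. The dict-based packet layout and the function name are illustrative assumptions, not the patent's format:

```python
def fill_packet_set(packet_set, command, customer_data, chunk_size):
    """Sketch of an application layer filling a preprocessed packet set
    (such as packet set 934): the first payload carries the command,
    the rest carry customer data split into chunks.  Headers and flags,
    already generated by the physical layer, are left untouched."""
    payloads = [command] + [customer_data[i:i + chunk_size]
                            for i in range(0, len(customer_data), chunk_size)]
    for packet, payload in zip(packet_set, payloads):
        packet["data"] = payload
    return packet_set
```

Because the headers and flags are reused verbatim, only the payload areas change between transfers.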
- Packet 942 is an illustration of a packet transferred by physical layer 912.
- physical layer 926 will decode a packet and send the packet up through protocol stack 914 to application layer 916 if a flag is absent from the packet.
- Application layer 916 may then process the data or place it in storage. If a flag is present, the packet may be sent directly to application layer 916 for processing.
- Application layer 916 will take packets with flags and recreate packet sets to extract data. When an entire packet set has been recreated, the block of data sent from the source may be extracted and processed according to the command associated with the packet set. Alternatively, the data may be extracted as packets in a packet set are received by application layer 916.
- the packet is decoded and the information to decode the packet is stored to build an inventory of preprocessed decode examples.
- decode examples may include, for example, the parameters, registers, variables, and buffers required to decode the data into a form used by an application in application layer 916. These examples may be built on a per packet or per packet set basis.
- the flag may be used to identify the appropriate decode example for use in processing the packet. In this manner, the overhead required to decode and process the packet is reduced.
- the decode process may be performed vertically on a subset of the layers in stacks 900 and 914.
- physical layer 912 may group commands and data for sending after application layer 902 has sent a local set-up message defining a packet set type.
- Physical layer 926 may group received commands after application layer 916 has sent a local set-up message describing a packet set type.
- Physical layers 912 or 926, application layers 902 or 916, or any intermediate layer may determine packet set boundaries based on statistical elements and without any external intervention. The processes described with respect to the physical layers in Figure 9 may be implemented as part of the application or by another program in the application layer.
- In FIG. 10, a diagram illustrating data structures used in decoding and receiving packets is depicted in accordance with a preferred embodiment of the present invention.
- A comparator stack 1000, an example decode matrix 1002, and an example data structure 1004 are used to process packets received by a physical layer, such as physical layer 926 in Figure 9.
- a flag within the packet or other identification information is used to identify the packet type. More specifically, the packet type, in these examples, is associated with a command or other instructions used to perform an operation on the data in the packet. This identification information is compared to identification information stored in comparator stack 1000.
- Each of these packet types is categorized by the type of command or operation that is to be carried out on the data in a packet. In these examples, the packet types are "A", "B", and "C".
- the packet identification information is placed into comparator stack 1000, and the packet is decoded.
- The data structures, the parameters, the variables, as well as any other information or settings required to decode and place the data into a format for use by an application in the application layer, are stored as a data structure, such as example data structure 1004.
- This data structure contains command information 1008, parameter information 1010, and data 1012. All of this information is used to place data from a packet into a format for use by the application.
- Example data structure 1004, in these examples, is replicated a number of times. Pointers to these data structures are placed in example decode matrix 1002. In this example, pointer 1014 points to example decode data structure 1004.
- A pointer from example decode matrix 1002 to an example decode data structure for that packet type is used to select a data structure to process the packet. In this manner, the resources and time used in decoding a packet may be reduced. This mechanism may be applied to entire packet sets in addition to individual packets. In this example, a packet set corresponds to a block of data handled by an application in the application layer.
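The comparator-stack and example-decode-matrix lookup described above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; all class and function names (`DecodeExample`, `ComparatorStack`, `process_packet`) are assumptions introduced for clarity.

```python
class DecodeExample:
    """Preprocessed state needed to decode one packet type (cf. structure 1004)."""
    def __init__(self, command, parameters):
        self.command = command        # command information (1008)
        self.parameters = parameters  # parameter information (1010)
        self.data = None              # payload is placed here on decode (1012)

class ComparatorStack:
    """Maps packet-type identification information to reusable decode examples."""
    def __init__(self):
        self._known = {}  # packet type -> preprocessed DecodeExample

    def lookup(self, packet_type):
        return self._known.get(packet_type)

    def register(self, packet_type, example):
        self._known[packet_type] = example

def process_packet(stack, packet_type, payload, full_decode):
    # Reuse a preprocessed decode example when this type has been seen
    # before; otherwise perform a full decode and store a template so the
    # next packet of this type avoids the set-up overhead.
    example = stack.lookup(packet_type)
    if example is None:
        example = full_decode(packet_type)
        stack.register(packet_type, example)
    example.data = payload
    return example
```

With this shape, the second and later packets of a given type skip the full decode entirely, which is the overhead reduction the text describes.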
- A larger data structure may be used for a number of packets or packet sets.
- This larger allocation of memory or buffer space may be selected to be large enough to handle predicted numbers of packets or packet sets. Additionally, the memory allocation or buffer space may be dynamically varied to take into account increasing or decreasing needs in processing data.
- In FIGS. 11-16, flowcharts illustrating processes used to group commands in packets are depicted in accordance with a preferred embodiment of the present invention.
- the processes illustrated in Figures 11-13 are those used to send packets, while the processes depicted in Figures 14-16 are those used to receive packets.
- the processes, in these examples, are implemented in a protocol stack, such as protocol stack 900 or protocol stack 914 in Figure 9. Of course, the processes may be located elsewhere depending on the implementation.
- In FIG. 11, a flowchart of a process in an application layer for generating a packet set and sending data using the packet set is depicted in accordance with a preferred embodiment of the present invention. This process may be implemented in an application layer, such as application layer 902 in Figure 9.
- The process begins by generating a pseudo block (step 1100).
- Pseudo block 928 in Figure 9 is an example of a pseudo block that is generated in step 1100.
- This pseudo block includes a flag used to identify the application that is transferring data. Further, this flag is used by other layers, such as the physical layer, as an indication to return a set of packets to the application layer.
- The pseudo block, in these examples, takes the form of a packet generated by an application, which is typically placed into smaller-sized packets for transport across a communications channel.
- The pseudo block is then passed to the next layer (step 1102).
- the next layer in an OSI model is a presentation layer.
- The process then waits to receive a packet set (step 1104).
- the packet set is a set of data structures, which are ready for transport over the communications channel by the physical layer.
- the data to actually be transported is placed within the appropriate places in these packet sets. These places are typically the payload portions of the packet.
- The packet set is placed repetitively in a buffer space (step 1106). This replication of the packet set allows multiple blocks of data to be filled by the application layer and passed to the physical layer for transfer.
- The data space of a packet in a packet set is filled, and the packet is sent to the physical layer (step 1108).
- This data space in the packet is also referred to as the "payload".
- The data space is filled with the customer data and the command for the operation to be performed on the customer data. Further, a flag is placed in the payload if a flag is not already present elsewhere in the packet.
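The Figure 11 application-layer flow above can be sketched as follows. This is an illustrative sketch only; the dictionary-based packet representation and the names `make_pseudo_block` and `fill_packet_set` are assumptions, not structures defined by the patent.

```python
def make_pseudo_block(app_flag, size):
    # Step 1100: a large application-level packet carrying a flag that
    # identifies the application transferring data.
    return {"flag": app_flag, "payload": b"\x00" * size}

def fill_packet_set(packet_set, data_chunks, command):
    # Step 1108: place customer data and the command for the operation
    # into the payload of each packet in the returned packet set.
    filled = []
    for packet, chunk in zip(packet_set, data_chunks):
        packet = dict(packet)      # do not mutate the reusable template
        packet["payload"] = chunk  # customer data
        packet["command"] = command
        filled.append(packet)
    return filled
```

Because the packet set is replicated in the buffer space, `fill_packet_set` can be called repeatedly against fresh copies of the template for successive blocks of data.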
- In FIG. 12, a flowchart of a process in a physical layer used to generate a packet set is depicted in accordance with a preferred embodiment of the present invention.
- The process illustrated in Figure 12 may be implemented in a physical layer, such as physical layer 912 in Figure 9.
- The process begins by receiving a packet from the previous layer (step 1200). With an OSI model, this layer would be a data link layer. A determination is made as to whether the packet includes a flag (step 1202). If the packet does not include a flag, the packet is sent using normal processing within the physical layer (step 1204), with the process terminating thereafter. With reference again to step 1202, if a flag is present, the packet is broken into a set of physical packets for transfer on a communications channel or link (step 1206). This set of packets is sent back to the application associated with the flag (step 1208), with the process terminating thereafter. Further, these packets include a flag to identify the packets as being part of the same set of packets or to identify the set of packets as part of a data transfer for the application.
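A minimal sketch of the Figure 12 flow appears below. The physical packet size (`PHYSICAL_MTU`) and the dictionary packet representation are assumptions for illustration; the patent does not specify them.

```python
PHYSICAL_MTU = 4  # assumed maximum payload per physical packet, in bytes

def build_packet_set(packet):
    payload = packet["payload"]
    flag = packet.get("flag")
    if flag is None:
        # Step 1204: no flag, so the packet follows normal processing
        # (represented here by returning None to the caller).
        return None
    # Step 1206: break the packet into physical-sized pieces, each tagged
    # with the flag so they are identified as one set, and return the set
    # to the application associated with the flag (step 1208).
    return [
        {"flag": flag, "payload": payload[i:i + PHYSICAL_MTU]}
        for i in range(0, len(payload), PHYSICAL_MTU)
    ]
```

The application layer then fills the payloads of the returned set and hands the packets back for transfer.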
- In FIG. 13, a flowchart of a process in a physical layer for sending packets from a packet set across a data channel is depicted in accordance with a preferred embodiment of the present invention.
- The process in Figure 13 is implemented in a physical layer in these examples.
- The process begins by receiving a packet from the application layer (step 1300).
- A determination is made as to whether a flag is present in the packet (step 1302). If a flag is present, the packet is sent onto the communications channel for transfer to the target (step 1304), with the process terminating thereafter.
- If a flag is absent from the packet, an error message is generated for return to the application (step 1306), with the process terminating thereafter.
- Each packet received directly from the application layer should include a flag in these examples. If a flag is absent, some error in processing in the application layer is assumed.
- In FIG. 14, a flowchart of a process in a physical layer used to receive a packet is depicted in accordance with a preferred embodiment of the present invention.
- The process in Figure 14 may be implemented in a physical layer, such as physical layer 926 in Figure 9.
- The process begins by receiving a packet from the physical media (step 1400).
- This physical media is a communications channel in this example.
- A determination is made as to whether a flag is present in the packet (step 1402). If a flag is present, the packet is sent directly to the application (step 1404), with the process terminating thereafter. If a flag is absent from the packet, the packet is sent to the next layer above (step 1406), with the process terminating thereafter.
- The next layer is a data link layer if an OSI model is used.
- The flag indicates that the mechanism of the present invention is being used to process these packets. If the physical layer does not recognize the flag or is not configured to use the mechanism of the present invention, the flag is ignored and the packet is sent to the next layer.
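The receive-side dispatch just described can be sketched in a few lines. The callable parameters and the `flag_recognized` switch are illustrative assumptions modeling whether the physical layer is configured to use the mechanism.

```python
def receive_packet(packet, to_application, to_next_layer, flag_recognized=True):
    # A flag marks packets that use this mechanism. A physical layer that
    # does not recognize the flag simply ignores it and forwards the
    # packet up the protocol stack as usual.
    if packet.get("flag") is not None and flag_recognized:
        return to_application(packet)  # step 1404: direct delivery
    return to_next_layer(packet)       # step 1406: normal stack traversal
```

This is the fast path: flagged packets bypass the intermediate layers entirely, while everything else climbs the stack normally.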
- In FIG. 15, a flowchart of a process for handling packets in an application layer is depicted in accordance with a preferred embodiment of the present invention.
- the process illustrated in Figure 15 may be implemented in an application layer, such as application layer 916 in Figure 9, when an OSI model is used.
- The process begins by receiving a packet from the physical layer (step 1500).
- A determination is made as to whether this packet is a first packet of a set of packets (step 1502). If the packet is a first packet in a set of packets, the packet is associated with a set (step 1504). This step is used to begin a new set in which the packet will be placed.
- A determination is made as to whether this is the first time a packet has been received from the entity originating the packet (step 1506). The entity is identified by a device address or file address in these examples. If this is the first packet received from the entity, the process begins building the set and extracting data (step 1508).
- A decode is then performed against the data (step 1510).
- This step involves performing the necessary actions to place the data into a form for processing for a command or for use by an application.
- Information is then placed in a comparator stack (step 1512). This information may be for the data or a packet set.
- An inventory of preprocessed examples of decode is created (step 1514), with the process terminating thereafter.
- These preprocessed examples of decode are also referred to as example decodes. The examples include the information necessary to decode or place the data in a form for use by an application, which is a target of the data transfer.
- A decode example, such as example decode data structure 1004 in Figure 10, includes information such as, for example, registers in which data is to be placed and pointers to allocated buffer space.
- a decode example is a template used to process data for a particular command. With a decode example, processing of data for the command does not require identifying where the data should be placed or what buffer space should be used. In this manner, the mechanism of the present invention reduces the resources and processing time needed to handle data transfers or other data operations.
- With reference again to step 1506, if the packet is not the first packet for a particular entity, the packet is added to a set and data extraction continues (step 1516). Thereafter, the data or the set is compared against information in the comparator stack (step 1518). A determination is made as to whether a match is present between the data or set and the information in the comparator stack (step 1520). If a match is not present, the process returns to step 1510 as described above. When a match is present, an example decode associated with the match is obtained (step 1522).
- The example decode may be obtained from a data structure, such as example decode matrix 1002 in Figure 10. This matrix is a matrix of pointers to different example decode structures, which may be used to process the data for a particular type of command. The data is then placed using the example decode (step 1524), with the process terminating thereafter.
- With reference again to step 1502, if the packet is not a first packet in a set of packets, the process proceeds to step 1508 as described above.
- buffer space is allocated for these examples.
- the amount of space that is needed may be determined in a number of ways.
- In FIG. 16, a flowchart for identifying buffer space is depicted in accordance with a preferred embodiment of the present invention.
- the process illustrated in Figure 16 is used to identify and allocate buffer space for write operations.
- The process begins by identifying a need for write buffer space (step 1600).
- A determination is made as to whether a size for the write buffer space is provided (step 1602). If a size for the write buffer space is provided, this size is used in building examples (step 1604), with the process terminating thereafter.
- If a size is not provided, a default size is selected (step 1606).
- The history of the appropriateness of the default size is then monitored (step 1608).
- This step includes determining whether the default size provides the correct amount of space, too much space, or too little space for the examples.
- The size of the write buffer space is then adjusted based on the history (step 1610), with the process terminating thereafter.
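The Figure 16 sizing policy above can be sketched as follows. The default size and the adjustment rule (averaging recently observed needs) are assumptions; the patent only states that the size is adjusted based on history.

```python
DEFAULT_SIZE = 4096  # assumed default write-buffer size, in bytes

def choose_buffer_size(requested=None, history=None):
    if requested is not None:
        return requested             # step 1604: a provided size is used
    if not history:
        return DEFAULT_SIZE          # step 1606: fall back to the default
    # Steps 1608-1610: monitor how well past sizes fit the examples and
    # adjust; here, take the average of recently observed space needs,
    # never shrinking below the default.
    return max(DEFAULT_SIZE, sum(history) // len(history))
```

The `history` list stands in for whatever record of past buffer needs an implementation keeps; it could equally drive shrinking as well as growth.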
- the examples illustrated above identify commands in packets on a per packet basis.
- the processes of the present invention also may be applied to recognizing patterns of commands being received in successive packets. Further, packet processing also may be based on a number of strategies, such as, for example, first-in-first-out (FIFO) , frequency of packet types, and ordered set list processing.
- Decode examples may also be set up for different sequences of command types in received packets. For example, a decode example may be set up for a command sequence of read, read, and write. Another decode example in this methodology may be set up for a command sequence of read, write, and verify. Different lengths may be selected for these sequences.
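Sequence-based decode examples can be sketched as templates keyed by a sliding window of recent command types rather than a single packet type. The window length, the command names, and the dictionary used as a stand-in example are all illustrative assumptions.

```python
from collections import deque

class SequenceDecoder:
    def __init__(self, window=3):
        self.recent = deque(maxlen=window)  # sliding window of command types
        self.examples = {}                  # sequence tuple -> decode example

    def observe(self, command):
        # Record the command and look up the current window of commands.
        self.recent.append(command)
        key = tuple(self.recent)
        if key in self.examples:
            return self.examples[key]       # reuse a preprocessed example
        # First occurrence of this sequence: build and store an example
        # (a placeholder dict here) so the next occurrence is reused.
        self.examples[key] = {"sequence": key}
        return None
```

The first pass through a sequence such as read, read, write builds the example; the next time the same sequence recurs, the stored example is returned and full set-up is skipped.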
- Thus, the present invention provides an improved method, apparatus, and computer-implemented instructions for reducing command processing in data transfers. This advantage is provided through the different mechanisms for grouping a series of identical commands or identical command patterns.
- the mechanism of the present invention reduces the number of buffer allocation operations by avoiding such an operation for every command.
- the present invention reduces the latency time in use of resources in fully processing each individual command. In this way, bottlenecks or congestion occurring at the protocol engine in high bandwidth data transfers are reduced or eliminated.
- the mechanism of the present invention may be applied to existing mechanisms, such as in an application layer and a physical layer in an OSI stack within a protocol engine.
- The processes of the present invention may be applied to data transfers involving many types of readable and/or writable media devices, such as, for example, a floppy disk drive, a hard disk drive, a CD-ROM drive, a digital versatile disk (DVD) drive, and a magnetic tape drive. Further, this mechanism may be applied to data transfers between two applications in addition to data transfers to and from a storage device. Additionally, although the depicted examples illustrate the processes implemented in an OSI model, the processes of the present invention may be applied to other types of protocol models and may be located in other layers depending on the implementation.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Communication Control (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2001284730A AU2001284730A1 (en) | 2000-08-11 | 2001-08-06 | Method and apparatus for transferring data in a data processing system |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US63817300A | 2000-08-11 | 2000-08-11 | |
| US09/638,173 | 2000-08-11 |
Publications (3)
| Publication Number | Publication Date |
|---|---|
| WO2002014998A2 true WO2002014998A2 (fr) | 2002-02-21 |
| WO2002014998A3 WO2002014998A3 (fr) | 2003-04-03 |
| WO2002014998A9 WO2002014998A9 (fr) | 2004-04-22 |
Family
ID=24558936
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2001/024641 WO2002014998A2 (fr) | 2000-08-11 | 2001-08-06 | Procede et dispositif de transfert de donnees dans un systeme de transfert de donnees |
Country Status (2)
| Country | Link |
|---|---|
| AU (1) | AU2001284730A1 (fr) |
| WO (1) | WO2002014998A2 (fr) |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH06132974A (ja) * | 1992-10-20 | 1994-05-13 | Toshiba Corp | パケット・ディスアセンブル用バッファ |
| JP2626585B2 (ja) * | 1994-11-02 | 1997-07-02 | 日本電気株式会社 | 通信資源管理型パケット交換装置 |
| US5802278A (en) * | 1995-05-10 | 1998-09-01 | 3Com Corporation | Bridge/router architecture for high performance scalable networking |
| US5870394A (en) * | 1996-07-23 | 1999-02-09 | Northern Telecom Limited | Method and apparatus for reassembly of data packets into messages in an asynchronous transfer mode communications system |
-
2001
- 2001-08-06 WO PCT/US2001/024641 patent/WO2002014998A2/fr active Application Filing
- 2001-08-06 AU AU2001284730A patent/AU2001284730A1/en not_active Abandoned
Also Published As
| Publication number | Publication date |
|---|---|
| WO2002014998A9 (fr) | 2004-04-22 |
| WO2002014998A3 (fr) | 2003-04-03 |
| AU2001284730A1 (en) | 2002-02-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US6272581B1 (en) | System and method for encapsulating legacy data transport protocols for IEEE 1394 serial bus | |
| US8009672B2 (en) | Apparatus and method of splitting a data stream over multiple transport control protocol/internet protocol (TCP/IP) connections | |
| US6728929B1 (en) | System and method to insert a TCP checksum in a protocol neutral manner | |
| US7660866B2 (en) | Use of virtual targets for preparing and servicing requests for server-free data transfer operations | |
| US7142540B2 (en) | Method and apparatus for zero-copy receive buffer management | |
| JP2544877B2 (ja) | デ―タ処理方法及びシステム | |
| EP0788267A2 (fr) | Langage interactif de paquets de réseau extensible par l'utilisateur | |
| US20060101111A1 (en) | Method and apparatus transferring arbitrary binary data over a fieldbus network | |
| US20080279208A1 (en) | System and method for buffering data received from a network | |
| JPH10276227A (ja) | データ・ストリームの処理方法及びデータ処理システム | |
| US20040047361A1 (en) | Method and system for TCP/IP using generic buffers for non-posting TCP applications | |
| US10673768B2 (en) | Managing data compression | |
| US7283527B2 (en) | Apparatus and method of maintaining two-byte IP identification fields in IP headers | |
| US20080263171A1 (en) | Peripheral device that DMAS the same data to different locations in a computer | |
| US10958588B2 (en) | Reliability processing of remote direct memory access | |
| WO2002014998A2 (fr) | Procede et dispositif de transfert de donnees dans un systeme de transfert de donnees | |
| CN116074553B (zh) | 视频流传输方法、装置、电子设备及存储介质 | |
| CN1266912C (zh) | 用于在网络中从数据帧移除不需要的报头信息的方法和系统 | |
| US20010018732A1 (en) | Parallel processor and parallel processing method | |
| US6922833B2 (en) | Adaptive fast write cache for storage devices | |
| US7469305B2 (en) | Handling multiple data transfer requests within a computer system | |
| JPH0458646A (ja) | バッファ管理方式 | |
| CN119484451A (zh) | 基于堆叠网络架构的传输装置和方法、芯片及电子设备 | |
| US20140068139A1 (en) | Data transfer system and method | |
| EP1347597A2 (fr) | Système integré avec un canal pour la réception de plusieurs données |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
| REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
| 122 | Ep: pct application non-entry in european phase | ||
| COP | Corrected version of pamphlet |
Free format text: PAGE 1, DESCRIPTION, REPLACED BY A NEW PAGE 1; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE |
|
| NENP | Non-entry into the national phase |
Ref country code: JP |