
WO2014209270A1 - Method and apparatus for cloud management system allocating dynamic WAN optimization function resources - Google Patents

Method and apparatus for cloud management system allocating dynamic WAN optimization function resources

Info

Publication number
WO2014209270A1
WO2014209270A1 (Application number: PCT/US2013/047414)
Authority
WO
WIPO (PCT)
Prior art keywords
information
computer
network device
network
network devices
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2013/047414
Other languages
English (en)
Inventor
Hideki Okita
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to PCT/US2013/047414 priority Critical patent/WO2014209270A1/fr
Publication of WO2014209270A1 publication Critical patent/WO2014209270A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/06Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/023Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing

Definitions

  • The present application is related generally to cloud computing systems, and more specifically, to management systems that allocate wide area network (WAN) optimization resources in hybrid cloud environments.
  • VM: virtual machine
  • L2 (layer-2) tunnels between a source host and a destination host are used to transfer traffic for the migration, as shown in FIG. 1.
  • WODs are network devices that utilize several methods such as TCP-window control, caching, de-duplication, and compression to improve TCP performance.
  • TCP: transmission control protocol
  • the TCP performance issue in hybrid cloud environments can be addressed.
  • Because the cost of a WOD is generally higher than that of network switches or routers, the resources allocated to WODs are less than those of the switches or routers. Thus, if all traffic across the WAN is processed by WODs, the actual throughput of the WAN is decreased compared to that of switches/routers, and important traffic can be delayed by low-priority traffic, degrading storage access performance.
  • aspects of the present application include a management computer, which may include a memory and a processor.
  • the processor may be configured to retrieve first information associated with a network device from a plurality of network devices, each having a function for optimizing network traffic and being managed by using the memory, based on second information indicating that an object is one of copied and migrated from a source computer to a destination computer, wherein the network device of the plurality of network devices is configured to modify network traffic of data flow from the source computer to the destination computer so that third information regarding data flow is registered to the network device of the plurality of network devices.
  • Aspects of the present application include a computer program containing instructions.
  • the instructions may include retrieving first information associated with a network device from a plurality of network devices, each having a function for optimizing network traffic and being managed by using the memory, based on second information indicating that an object is one of copied and migrated from a source computer to a destination computer, wherein the network device of the plurality of network devices is configured to modify network traffic of data flow from the source computer to the destination computer so that third information regarding data flow is registered to the network device of the plurality of network devices.
  • the instructions may be stored in a computer readable storage medium, or a computer readable signal medium.
  • aspects of the present application may include a system involving a plurality of network devices, and a management computer containing a memory and a processor.
  • the processor may be configured to retrieve first information associated with a network device from a plurality of network devices, each having a function for optimizing network traffic and being managed by using the memory, based on second information indicating that an object is one of copied and migrated from a source computer to a destination computer, wherein the network device of the plurality of network devices is configured to modify network traffic of data flow from the source computer to the destination computer so that third information regarding data flow is registered to the network device of the plurality of network devices.
  • FIG. 1 illustrates an example of a related art hybrid cloud system that can migrate VMs among server hosts via a L2 tunnel.
  • FIG. 2 illustrates an example of a related art network using WODs to improve the performance of network transfer across a WAN.
  • FIG. 3 illustrates an example of a hybrid cloud system adopting WODs to improve the storage access performance across a WAN, in accordance with an example implementation.
  • FIG. 4 illustrates an example of the architecture of the cloud network manager, in accordance with an example implementation.
  • FIG. 5 illustrates an example of the target flow information stored on the cloud network manager, in accordance with an example implementation.
  • FIG. 6 illustrates an example of the host location stored on the cloud network manager, in accordance with an example implementation.
  • FIG. 7 illustrates an example of the preset distance configuration stored on the cloud network manager, in accordance with an example implementation.
  • FIG. 8 illustrates an example of the RTT monitoring information stored on the cloud network manager, in accordance with an example implementation.
  • FIG. 9 illustrates an example of the preset RTT configuration stored on the cloud network manager, in accordance with an example implementation.
  • FIG. 10 shows an example of the architecture of a WOD, in accordance with an example implementation.
  • FIG. 11 shows an example of target flow information of a WOD, in accordance with an example implementation.
  • FIG. 12 shows an example of target flow information of a WOD, in accordance with an example implementation.
  • FIG. 13 illustrates an example of flow chart of a WOD set-up process of the cloud network manager, in accordance with an example implementation.
  • FIG. 14 illustrates an example of network wherein both VM migration and storage migration are utilized, in accordance with an example implementation.
  • FIG. 15 illustrates an example of the target information of the cloud network manager, in accordance with an example implementation.
  • FIG. 16 illustrates an example of the target flow information of the WOD (wodl), in accordance with an example implementation.
  • FIG. 17 illustrates an example of the target flow information of the WOD (wod2), in accordance with an example implementation.
  • FIG. 18 illustrates an example of the volume migration information of the cloud network manager, in accordance with an example implementation.
  • FIG. 19 illustrates an example of the storage controller information of the cloud network manager, in accordance with an example implementation.
  • FIG. 20 illustrates an example of the flow chart of the cloud network manager to allocate WOD resources for VM migration, in accordance with an example implementation.
  • FIG. 21 illustrates an example of a network that contains a cloud network manager which has a capability to allocate WOD resources dynamically, in accordance with an example implementation.
  • FIG. 22 illustrates an example of the unknown flow information composed of the flow information that is sent from the WODs and is received by the cloud network manager.
  • FIG. 23 illustrates an example of the flow information of the cloud network manager, in accordance with an example implementation.
  • FIG. 24 illustrates an example of the replication information which the storage manager stores to manage the configurations of volume replication between two hosts, in accordance with an example implementation.
  • FIG. 25 illustrates an example of the flow chart of the set-up process of the cloud network manager, in accordance with an example implementation.
  • FIG. 26 illustrates an example of the iSCSI WRITE check flow, in accordance with an example implementation.
  • In some cases, a process is described with a program as the subject.
  • Because a program performs its predetermined processing operations only when executed by a processor, the subject of such processing can also be understood to be the processor.
  • Processing that is disclosed with a program as the subject can also be a process that is executed by a processor that executes the program, or by an apparatus that is provided with the processor (for example, a control device, a controller, or a storage system).
  • A part or the whole of a process that is executed when the processor executes a program can also be executed by a hardware circuit, as a substitute for or in addition to the processor.
  • the instructions for the program may be stored in a computer readable storage medium, which includes tangible media such as flash memory, random access memory (RAM), Hard Disk Drive (HDD) and the like.
  • instructions may be stored in the form of a computer readable signal medium, which includes other media such as carrier waves.
  • FIG. 1 illustrates an example of a related art hybrid cloud system that can migrate VMs 102 and 109 among server hosts 101 and 108 via a L2 tunnel 114.
  • Host 101, the onsite system, contains VM 102 and virtual switch (VSW) 103, and connects to local area network (LAN) 105, router 106, and WAN 107.
  • Host 108, the cloud system, contains VM 109 and VSW 110, and connects to LAN 112, router 113, and WAN 107.
  • LANs 105 and 112 connect to datastores 104 and 111, respectively.
  • FIG. 2 illustrates an example of a related art network using WODs to improve the performance of network transfer across a WAN.
  • Host 201 connects to LAN 203, WOD 204, router 205 and WAN 206.
  • Host 207 connects to LAN 209, WOD 210, router 211 and WAN 206.
  • LANs 203 and 209 connect to datastores 202 and 208, respectively.
  • The example implementations of the present application are directed to the above-described problem and to improving the availability of systems with multiple links that are connected to a network fabric.
  • one or more WODs are allocated to modify the network traffic of data flow (e.g., optimize or otherwise improve the traffic flow), as described below.
  • Although the example implementations may involve optimizing the traffic flow, the present application is not limited as such, and the traffic flow may be improved or otherwise configured by the WODs depending on the desired implementation.
  • the first example implementation involves a hybrid cloud management system that allocates WOD resources for storage access traffic between a host and a network-attached storage (NAS) after VM migration across the WAN occurs.
  • NAS: network-attached storage
  • FIG. 3 illustrates an example of a hybrid cloud system adopting WODs to improve the storage access performance across a WAN 308, in accordance with an example implementation.
  • FIG. 3 involves a situation wherein a system administrator migrates a VM 302 from an onsite host 301 (e.g., a source computer) in a site to a cloud host 309 (e.g., a destination computer) in another site with a VM manager 315.
  • Host 301 which contains VM 302 and VSW 303, is connected to LAN 305 which connects to WOD 306, router 307 and WAN 308.
  • FIG. 4 illustrates an example of the architecture of the cloud network manager 316 connected to Management LAN 710, in accordance with an example implementation.
  • Cloud network manager 316 stores target flow information 409, host location information 410, preset distance configuration 411, round trip time (RTT) monitoring information 412, and preset RTT configuration 413 in memory 406. Cloud network manager 316 also stores and executes a hybrid cloud control program 408 and operating system (OS) 407.
  • OS: operating system
  • Memory 406 is connected to central processing unit (CPU) 402, input/output interface (I/O) 403, network interface controller (NIC) 404, and Storage 405.
  • Memory 406 and storage 405 may take the form of a computer readable storage medium or can be replaced by a computer readable signal medium.
  • Cloud network manager 316 may be implemented in the form of a management computer.
  • FIG. 5 illustrates an example of the target flow information 409 stored on the cloud network manager 316, in accordance with an example implementation.
  • Target flow information 409 can be implemented as a table wherein each entry is composed of WOD ID, destination IP address, source IP address, destination TCP/UDP port number, source TCP/UDP port number, and optimization status. Each entry of this table describes a TCP/UDP flow managed by the cloud network manager 316 of the example implementation.
  • The first entry indicates that a WOD associated with the onsite system (wod1) is configured to accelerate upward traffic from the onsite system to the cloud.
  • the second entry indicates that a WOD associated with the cloud system (wod2) accelerates downward traffic from the cloud to the onsite system.
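  • The following is a minimal sketch, in Python, of how such target flow information entries could be represented; the field names, addresses, and port values below are illustrative assumptions and are not taken from the application.

```python
# Hypothetical sketch of target flow information (409) as a list of records.
# Field names, addresses, and ports are illustrative only.
target_flow_information = [
    {
        "wod_id": "wod1",           # WOD associated with the onsite system
        "dst_ip": "203.0.113.20",   # example cloud-side address
        "src_ip": "192.0.2.10",     # example onsite address
        "dst_port": 2049,           # e.g. NFS
        "src_port": None,           # any source port
        "optimization": "enabled",  # accelerate upward traffic (onsite -> cloud)
    },
    {
        "wod_id": "wod2",           # WOD associated with the cloud system
        "dst_ip": "192.0.2.10",
        "src_ip": "203.0.113.20",
        "dst_port": None,
        "src_port": 2049,
        "optimization": "enabled",  # accelerate downward traffic (cloud -> onsite)
    },
]
```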
  • FIG. 6 illustrates an example of the host location information 410 stored on the cloud network manager 316, in accordance with an example implementation.
  • the host location 410 can be implemented as a table wherein each entry is composed of host ID, location ID, and coordinates of the location. Each entry of this table describes a location of a host. For example, the first entry and the second entry of this table are equivalent to the host 301 and the host 309, respectively. Also, the third entry is equivalent to the NAS 304.
  • FIG. 7 illustrates an example of the preset distance configuration 411 stored on the cloud network manager 316, in accordance with an example implementation.
  • The preset distance configuration 411 can be implemented as a variable that stores a distance value.
  • This value represents a maximum distance between two sites which communicate without WODs.
  • the preset distance configuration 411 can be adjusted according to a desired implementation. This distance configuration 411 and the above host location information 410 are utilized by the network manager 316 to determine whether to apply acceleration. For example, because the distance between VM 310 and NAS 304 is more than 1000km, the network manager 316 applies acceleration to the flow.
  • FIG. 8 illustrates an example of the RTT monitoring information 412 stored on the cloud network manager 316, in accordance with an example implementation.
  • RTT monitoring information 412 can be implemented as a table wherein each entry is composed of two location IDs and the RTT measured between the two location ID sites. The network manager 316 periodically measures the RTT between two sites and updates the RTT monitoring information 412.
  • FIG. 9 illustrates an example of the preset RTT configuration 413 stored on the cloud network manager 316, in accordance with an example implementation.
  • the preset RTT configuration 413 can be implemented as a variable that stores a time value. This value represents a maximum RTT between two sites communicating without WODs.
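  • A minimal sketch of this acceleration decision follows, assuming the host location information 410 stores latitude/longitude coordinates and that the thresholds take the illustrative values shown; the helper names, the great-circle distance calculation, and the combination of the distance and RTT checks are assumptions made for illustration, not part of the application.

```python
import math

PRESET_DISTANCE_KM = 1000.0  # preset distance configuration (411), example value
PRESET_RTT_MS = 50.0         # preset RTT configuration (413), assumed value

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two coordinates."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def needs_acceleration(host_coords, datastore_coords, measured_rtt_ms=None):
    """Return True when the flow should be handed to a WOD."""
    if haversine_km(*host_coords, *datastore_coords) > PRESET_DISTANCE_KM:
        return True
    return measured_rtt_ms is not None and measured_rtt_ms > PRESET_RTT_MS

# Example: a migrated VM several thousand kilometers away from its NAS datastore.
print(needs_acceleration((35.68, 139.69), (37.77, -122.42), measured_rtt_ms=120))
```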
  • FIG. 10 illustrates an example of the architecture of a WOD network device 1001, in accordance with an example implementation.
  • WOD 1001 may include a controller 1002, a backplane switch 1003, routing module 1004, and WAN optimization module 1005.
  • the controller 1002 executes a management program 1013 and stores target flow information 1012 and has OS 1011 on memory 1009 which connects to CPU 1006, I/O 1007, NIC 1008 and Storage 1010.
  • the routing module 1004 contains a routing engine 1014, flow configuration 1015, routing table 1016, and multiple Media Access Control Physical Layers (MAC/PHYs) 1017-1018.
  • the management program 1013 of the controller 1002 configures the flow configuration 1015 on the routing module 1004 according to the target flow information 1012 when it is updated by the cloud network manager 316.
  • the WAN optimization module 1005 contains a WAN optimization engine 1019 which processes packets transferred from the routing module 1004.
  • FIG. 11 illustrates an example of target flow information 1012 of a WOD 306, in accordance with an example implementation.
  • Each entry is composed of a destination IP address, a source IP address, a destination TCP/UDP port, a source TCP/UDP port, and optimization status of the flow.
  • the first entry represents that NFS requests from a host 309 to the NAS 304 are optimized with a WOD 313.
  • the cloud network manager 316 creates entries of this target flow information 1012 based on the entries of its target flow information 409. In this example, the first entry of the target flow information 409 is configured to the target flow information 1012 of the WOD 306.
  • FIG. 12 illustrates an example of target flow information 1012 of a WOD 313, in accordance with an example implementation.
  • The target flow information 1012 has the same structure as the above-described target flow information 1012 of the WOD 306.
  • the first entry represents that NFS responses from the NAS 304 to host 309 are optimized with a WOD 306.
  • the cloud network manager 316 creates entries of this target flow information 1012 of the WOD 313 based on the entries of its target flow information 409.
  • The second entry of the target flow information 409 is configured to the target flow information 1012 of the WOD 313, as sketched below.
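  • As a loose illustration (the grouping helper below is hypothetical), the manager-side table can be partitioned by WOD ID to obtain each device's target flow information 1012.

```python
from collections import defaultdict

def split_per_wod(target_flow_information_409):
    """Group manager-side flow entries by the WOD that must enforce them."""
    per_wod = defaultdict(list)
    for entry in target_flow_information_409:
        # The device-side table needs only the flow fields, not the WOD ID.
        device_entry = {k: v for k, v in entry.items() if k != "wod_id"}
        per_wod[entry["wod_id"]].append(device_entry)
    return dict(per_wod)

# Toy example; per_wod["wod1"] would be pushed to WOD 306, per_wod["wod2"] to WOD 313.
example_409 = [
    {"wod_id": "wod1", "dst_ip": "203.0.113.20", "dst_port": 2049},
    {"wod_id": "wod2", "src_ip": "203.0.113.20", "src_port": 2049},
]
print(split_per_wod(example_409))
```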
  • FIG. 13 illustrates an example of flow chart of a WOD set-up process of the cloud network manager 316, in accordance with an example implementation.
  • the WOD set-up process begins when the cloud network manager 316 receives a notification of VM migration from a VM manager 315.
  • the cloud network manager retrieves the source and destination hosts of the VM migration at 1301 and retrieves the location of the migrated VM at 1302.
  • the cloud network manager retrieves the datastore of the migrated VM, and retrieves the location of the datastore (e.g. the onsite NFS datastore) at 1304.
  • By accessing the VM manager 315 and retrieving the host location information 410, the cloud network manager calculates, at 1305, the distance between the destination host and the datastore that the migrated VM is using. The distance can also be determined based on the RTT between the host and the datastore. If the calculated distance is longer than the preset distance at 1306 (Y), the cloud network manager retrieves from the VM manager 315, at 1307, the protocol that the migrated VM is using to access the datastore. Further, the cloud network manager retrieves, at 1308, the closest WOD located on the route between the destination host and the datastore and then registers a flow entry on the target flow information 409 at 1309. If the calculated distance is not longer than the preset distance (N), no operation is taken at 1310.
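  • The following is a hedged Python sketch of this set-up process (steps 1301-1310). The lookup callables stand in for queries to the VM manager 315 and the host location information 410; their names and the example values are assumptions made for illustration only.

```python
def wod_setup_on_vm_migration(migration, lookups, preset_distance_km=1000.0):
    """Return a flow entry to register, or None when no acceleration is needed."""
    src_host, dst_host = migration["src_host"], migration["dst_host"]  # 1301
    vm_location = lookups["location"](dst_host)                        # 1302
    datastore = lookups["datastore"](migration["vm"])                  # 1303
    ds_location = lookups["location"](datastore)                       # 1304
    distance = lookups["distance"](vm_location, ds_location)           # 1305
    if distance > preset_distance_km:                                  # 1306 (Y)
        protocol = lookups["protocol"](migration["vm"])                # 1307
        wod = lookups["closest_wod"](dst_host, datastore)              # 1308
        return {"wod": wod, "dst": dst_host,
                "src": datastore, "protocol": protocol}                # 1309
    return None                                                        # 1310 (N)

# Example wiring with toy lookups (all values invented for the sketch):
lookups = {
    "location": {"host309": (35.0, 139.0), "nas304": (37.0, -122.0)}.get,
    "datastore": lambda vm: "nas304",
    "distance": lambda a, b: 8800.0,   # pretend the sites are ~8800 km apart
    "protocol": lambda vm: "nfs",
    "closest_wod": lambda host, ds: "wod2",
}
entry = wod_setup_on_vm_migration(
    {"vm": "vm310", "src_host": "host301", "dst_host": "host309"}, lookups)
print(entry)  # -> flow entry to add to the target flow information 409
```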
  • The example implementation can allow users to efficiently and selectively allocate the limited resources of WAN optimization devices for specific uses, such as remote NAS access from the migrated VM.
  • Second example implementation: storage volume migration-based WOD resource allocation
  • the second example implementation involves a hybrid cloud management system that allocates WOD resources for storage access traffic between two storage volumes after the storage systems are configured to migrate a storage volume between them.
  • FIG. 14 illustrates an example of a network wherein both VM migration and storage migration are utilized, in accordance with an example implementation.
  • the storage is migrated after or simultaneously with the VM migration.
  • The two storage systems 1403 and 1413 use the Internet Small Computer System Interface (iSCSI) to transfer data.
  • iSCSI: Internet Small Computer System Interface
  • the host 1410 accesses the storage system 1413 instead of the storage system 1403.
  • the host reads data from a volume 1415 via a controller 1414 if the requested data is synched between two storage systems.
  • The storage system 1413 accesses the storage system 1403 to obtain the data that is not synched between the two storage systems.
  • Storage system 1403 reads a volume 1405 via a controller 1404.
  • Host 1410, which contains VM 1411 and VSW 1412, is connected to LAN 1416, WOD 1417, router 1416, and WAN 1409.
  • Host 1401, which contains VSW 1402, is connected to LAN 1406, WOD 1407, router 1408 and WAN 1409.
  • Storage Manager 1417 includes volume migration information 1421 and storage controller information 1422 and is connected to LAN 1419.
  • Cloud Network Manager 1418 is also connected to LAN 1419 which connects to router 1420 and WAN 1409.
  • FIG. 15 illustrates an example of the target flow information 409 of the cloud network manager 1418, in accordance with an example implementation.
  • target flow information 409 stores two entries that represent iSCSI response flow and iSCSI request flow. The iSCSI response flow is optimized at the WOD 1407 and the iSCSI request flow is optimized at the WOD 1417.
  • FIG. 16 illustrates an example of the target flow information 1012 of the WOD 1407 (wod1), in accordance with an example implementation.
  • The target flow information 1012 of wod1 is a subset of the first entry of the above-described target flow information 409 of the cloud network manager 1418.
  • the first entry of this table represents a flow of iSCSI responses from the storage system 1403 to the storage system 1413.
  • FIG. 17 illustrates an example of the target flow information 1012 of the WOD 1417 (wod2), in accordance with an example implementation.
  • The target flow information 1012 of wod2 is a subset of the second entry of the above-described target flow information 409 of the cloud network manager 1418.
  • the first entry of this table represents a flow of iSCSI requests from the storage system 1413 to the storage system 1403.
  • FIG. 18 illustrates an example of the volume migration information 1421 of the cloud network manager 1418, in accordance with an example implementation.
  • Each entry of the volume migration information 1421 represents a migration of a volume between two storage systems.
  • Each entry includes a storage system ID, a volume ID, and a controller ID for each of a destination storage system and a source storage system.
  • The controller ID represents the storage system's controller that is used to handle the data stream of the volume migration.
  • the first entry of the table represents storage volume migration from the storage system 1403 to the storage system 1413. More specifically, a volume 1405 on the storage system 1403 is migrated to a volume 1415 on the storage system 1413.
  • FIG. 19 illustrates an example of the storage controller information 1422 of the cloud network manager 1418, in accordance with an example implementation.
  • Each entry represents a storage controller of the system.
  • Each entry includes a storage system ID, a controller ID, a protocol type, an IP address of a controller, and a World Wide Name (WWN) of a controller.
  • iSCSI controller information is listed in the table.
  • FIG. 20 illustrates an example of the flow chart of the hybrid cloud control program 408 of the cloud network manager 1418 to allocate WOD resources for VM migration, in accordance with an example implementation.
  • the cloud network manager 1418 starts the allocation process when it receives a storage volume migration notification from the storage manager 1417 at 2001. Contents of the notification are equivalent to the changed entries in the volume migration information 1421. Then it retrieves storage controllers at 2002 and IP addresses at 2003 that are used for the storage migration. It retrieves the storage controllers from the volume migration information 1421. Also, it retrieves the IP addresses from the storage controller information 1422.
  • the cloud network manager 1418 retrieves two WODs on a route between the controllers using network topology information 414 at 2004.
  • the cloud network manager 1418 then creates and registers the iSCSI request flow at 2005 and the iSCSI response flow at 2006.
  • the first entry of the target information 409 of the cloud network manager 1418 and the first entry of the target flow information 1012 of WOD 1407 (wodl) are both equivalent to the iSCSI response flow.
  • the second entry of the target information 409 of the cloud network manager 1418 and the first entry of the target flow information 1012 of WOD 1417 (wod2) are both equivalent to the iSCSI request flow.
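  • A rough Python sketch of these allocation steps (2001-2006) follows. The table layouts loosely mirror FIGS. 18 and 19, but all IDs, IP addresses, and helper names are illustrative assumptions.

```python
volume_migration_entry = {        # changed entry received in the notification (2001)
    "src_storage": "storage1403", "src_volume": "vol1405", "src_ctrl": "ctrl1404",
    "dst_storage": "storage1413", "dst_volume": "vol1415", "dst_ctrl": "ctrl1414",
}

storage_controller_info = {       # subset of the storage controller information 1422
    "ctrl1404": {"protocol": "iscsi", "ip": "192.0.2.4"},
    "ctrl1414": {"protocol": "iscsi", "ip": "203.0.113.14"},
}

def allocate_wods_for_volume_migration(migration, controllers, wods_on_route):
    """Build the iSCSI request/response flow entries for the two WODs."""
    src_ip = controllers[migration["src_ctrl"]]["ip"]                # 2002 / 2003
    dst_ip = controllers[migration["dst_ctrl"]]["ip"]
    src_wod, dst_wod = wods_on_route(src_ip, dst_ip)                 # 2004
    request_flow = {"wod": dst_wod, "src_ip": dst_ip, "dst_ip": src_ip,
                    "dst_port": 3260, "optimization": "enabled"}     # 2005
    response_flow = {"wod": src_wod, "src_ip": src_ip, "dst_ip": dst_ip,
                     "src_port": 3260, "optimization": "enabled"}    # 2006
    return [request_flow, response_flow]

flows = allocate_wods_for_volume_migration(
    volume_migration_entry, storage_controller_info,
    wods_on_route=lambda a, b: ("wod1", "wod2"))  # topology lookup is assumed
print(flows)
```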
  • The example implementation can allow users to efficiently and selectively allocate the limited resources of WAN optimization devices for specific uses, such as storage volume access between multiple storage systems that are configured to migrate a volume from one side to the other.
  • the third example implementation involves a hybrid cloud management system that dynamically allocates WOD resources for storage access traffic toward a storage system according to the protocol type of the traffic.
  • FIG. 21 illustrates an example of a network that contains a cloud network manager 2120 which has a capability to allocate WOD resources dynamically, in accordance with an example implementation.
  • Storage manager 2122 and cloud network manager 2120 are connected by LAN 2121 and router 2123 to the WAN 2140.
  • the cloud network manager 2120 also has a capability to detect the traffic types and selects appropriate WAN optimization methods.
  • the WODs 2109 and 2118 detect storage access packets and notify the cloud network manager 2120 via routers 2139 and 2119 which are connected to WAN 2140 which connects to router 2123 and LAN 2121.
  • The cloud network manager 2120 then configures the WODs 2109 and 2118.
  • The storage access has three types: an iSCSI READ from the host 2110 for data that is stored on the storage system 2103, an iSCSI READ from the storage controller 2114 of the storage system 2113, and an iSCSI WRITE from the controller 2104 of the storage system 2103 to the volume 2116 on the storage system 2113.
  • Storage system 2113 has an additional volume 2115 and storage system 2103 has volumes 2105-2107.
  • Host 2101 contains VSW 2102 and connects to LAN 2108.
  • Host 2110 contains VM 2111 and VSW 2112 and connects to LAN 2117.
  • FIG. 22 illustrates an example of the unknown flow information composed of the flow information 2201 that is sent from the WODs 2109 and 2118 and is received by the cloud network manager 2120, in accordance with an example implementation.
  • the three entries represent the three types of iSCSI traffic which are described above.
  • FIG. 23 illustrates an example of the flow information 2301 of the cloud network manager 2120, in accordance with an example implementation.
  • Flow information 2301 has an extended structure compared with the target flow information 409 described in FIG. 15.
  • Flow information 2301 has an "Optimized Method" column in its table.
  • The column can take "Cache" or "TCP" as its value.
  • Cache indicates that the WOD corresponding to the entry uses a caching method for the specified flow.
  • TCP indicates that the WOD corresponding to the entry uses the TCP-window control method for the specified flow.
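  • A small sketch of the extended entry structure follows, with the added column modeled as an enumeration; the remaining field names and values are assumptions for illustration.

```python
from enum import Enum

class OptimizedMethod(Enum):
    CACHE = "Cache"  # WOD caches data blocks for the specified flow
    TCP = "TCP"      # WOD applies TCP-window control to the specified flow

flow_entry_2301 = {
    "wod_id": "wod2",            # illustrative identifiers only
    "dst_ip": "192.0.2.3",
    "src_ip": "203.0.113.13",
    "dst_port": 3260,            # iSCSI
    "src_port": None,
    "optimized_method": OptimizedMethod.TCP,
}
print(flow_entry_2301["optimized_method"].value)  # -> "TCP"
```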
  • FIG. 24 illustrates an example of the replication information 2401 which the storage manager 2122 stores to manage the configurations of volume replication between two hosts, in accordance with an example implementation.
  • Replication information 2401 is implemented as a table wherein each entry is composed of a destination storage system ID, a destination volume ID, an ID of the destination-side controller used for the replication, a source storage system ID, a source volume ID, and an ID of the source-side controller used for the replication.
  • FIG. 25 illustrates an example of the flow chart of the set-up process of the cloud network manager 2120, in accordance with an example implementation.
  • The cloud network manager 2120 receives an unknown flow packet at 2601 and checks at 2602 whether the unknown (unregistered) flow is an iSCSI READ flow. If it is, the cloud network manager 2120 retrieves the iSCSI initiator/target IP addresses and logical unit (LUN) information at 2603, retrieves the volume migration information 1421 at 2604, and checks at 2605 whether the iSCSI READ flow is designated to a migrated volume.
  • LUN: logical unit
  • If the iSCSI READ flow is designated to the migrated volume (Y), the cloud network manager 2120 utilizes the TCP-window control method to modify the flow across the WAN. Otherwise (N), the cloud network manager checks the iSCSI WRITE check flow at 2608, as described in FIG. 26.
  • The TCP-window control method is utilized so that, once a data block stored on the storage system 2103 is read by the storage system 2113, the data block is stored on the storage system 2113 and is not read across the WAN. The caching method is therefore not needed in this instance. Finally, if the volume is migrated at 2605 (Y), the cloud network manager 2120 registers the opposite flow to the target flow information 2301 at 2606. The opposite flow means the iSCSI READ response replied from the storage system 2103 to the storage system 2113.
  • The cloud network manager 2120 utilizes a caching method and stores the read data block on the WOD 2118 at 2607.
  • Further, if the flow is an iSCSI WRITE from the storage system 2103 to the storage system 2113, the cloud network manager 2120 processes the iSCSI WRITE check flow at 2608.
  • FIG. 26 illustrates an example of the iSCSI WRITE check flow, in accordance with an example implementation.
  • The cloud network manager 2120 checks whether the unknown flow is an iSCSI WRITE flow at 2609. If it is (Y), the cloud network manager 2120 retrieves the corresponding volume for the flow from the replication information 2401 at 2610. If a corresponding entry is found at 2611 (Y), the cloud network manager 2120 registers the flow as a flow that is optimized with the TCP-window control method at 2612. If not, the flow is registered as a flow that is optimized with the caching method at 2613.
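  • A hedged Python sketch combining the decision logic described for FIGS. 25 and 26 is given below. The predicate callables stand in for lookups against the volume migration information 1421 and the replication information 2401, and the placement of the branch into step 2608 is an interpretation of the surrounding text.

```python
def classify_unknown_flow(flow, is_read, is_write,
                          targets_migrated_volume, has_replication_entry):
    """Return (optimized method, whether to also register the opposite flow)."""
    if is_read(flow):                                  # 2602
        if targets_migrated_volume(flow):              # 2603-2605 (Y)
            # A block read once is kept on the destination storage system,
            # so TCP-window control suffices; the opposite (READ response)
            # flow is also registered.                 # 2606
            return "TCP", True
        return "Cache", False                          # 2607
    if is_write(flow):                                 # 2609
        if has_replication_entry(flow):                # 2610-2611 (Y)
            return "TCP", False                        # 2612
        return "Cache", False                          # 2613
    return None, False                                 # not an iSCSI flow

# Example: an iSCSI READ toward a migrated volume.
method, register_opposite = classify_unknown_flow(
    {"dst_port": 3260},
    is_read=lambda f: True, is_write=lambda f: False,
    targets_migrated_volume=lambda f: True, has_replication_entry=lambda f: False)
print(method, register_opposite)  # -> TCP True
```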
  • The example implementation can allow users to utilize the limited resources of WAN optimization devices efficiently and to maintain the performance of iSCSI traffic flows for storage volume migration.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

In example implementations described herein, the present invention relates to a hybrid cloud management system that manages flow entries applied to the optimization process of wide area network (WAN) optimization devices (WODs) and allocates WOD resources based on a notification of new traffic across the WAN. Example implementations may determine the allocation of WOD resources based on copy/migration operations from a source computer, such as an onsite storage system, to a destination computer on the cloud.
PCT/US2013/047414 2013-06-24 2013-06-24 Method and apparatus for cloud management system allocating dynamic WAN optimization function resources Ceased WO2014209270A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2013/047414 WO2014209270A1 (fr) 2013-06-24 2013-06-24 Method and apparatus for cloud management system allocating dynamic WAN optimization function resources

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2013/047414 WO2014209270A1 (fr) 2013-06-24 2013-06-24 Method and apparatus for cloud management system allocating dynamic WAN optimization function resources

Publications (1)

Publication Number Publication Date
WO2014209270A1 true WO2014209270A1 (fr) 2014-12-31

Family

ID=52142419

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/047414 Ceased WO2014209270A1 (fr) 2013-06-24 2013-06-24 Method and apparatus for cloud management system allocating dynamic WAN optimization function resources

Country Status (1)

Country Link
WO (1) WO2014209270A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107645421A (zh) * 2017-11-09 2018-01-30 Zhengzhou Yunhai Information Technology Co., Ltd. iSCSI protocol implementation method for distributed storage

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070233857A1 (en) * 2006-03-30 2007-10-04 Nebuad, Inc. Network device for monitoring and modifying network traffic between an end user and a content provider
US20110022812A1 (en) * 2009-05-01 2011-01-27 Van Der Linden Rob Systems and methods for establishing a cloud bridge between virtual storage resources
US20110185064A1 (en) * 2010-01-26 2011-07-28 International Business Machines Corporation System and method for fair and economical resource partitioning using virtual hypervisor
US20110238458A1 (en) * 2010-03-24 2011-09-29 International Business Machines Corporation Dynamically optimized distributed cloud computing-based business process management (bpm) system
US20120191858A1 (en) * 2010-06-29 2012-07-26 International Business Machines Corporation Allocating Computer Resources in a Cloud Environment
US20120215920A1 (en) * 2010-06-30 2012-08-23 International Business Machines Corporation Optimized resource management for map/reduce computing
US20120260247A1 (en) * 2011-04-05 2012-10-11 International Business Machines Corporation Fine-Grained Cloud Management Control Using Nested Virtualization


Similar Documents

Publication Publication Date Title
US10284430B2 (en) Storage provisioning and configuration of network protocol parameters
US9948566B2 (en) Selective network traffic throttling
US9336041B2 (en) Fabric distributed resource scheduling
US10067779B2 (en) Method and apparatus for providing virtual machine information to a network interface
US10985999B2 (en) Methods, devices and systems for coordinating network-based communication in distributed server systems with SDN switching
US9766833B2 (en) Method and apparatus of storage volume migration in cooperation with takeover of storage area network configuration
US9049204B2 (en) Collaborative management of shared resources
US9609086B2 (en) Virtual machine mobility using OpenFlow
US8990374B2 (en) Method and apparatus of cloud computing subsystem
US8055736B2 (en) Maintaining storage area network (‘SAN’) access rights during migration of operating systems
CN107079060A (zh) System and method for carrier-grade NAT optimization
US20160080255A1 (en) Method and system for setting up routing in a clustered storage system
US11500678B2 (en) Virtual fibre channel port migration
US20140289198A1 (en) Tracking and maintaining affinity of machines migrating across hosts or clouds
WO2014209270A1 (fr) Method and apparatus for cloud management system allocating dynamic WAN optimization function resources
CN108351795A (zh) Method and system for mapping virtual machine communication paths
US12393533B2 (en) Host multi-path layer with congestion mitigation through interaction with centralized discovery controller
Aravindan Performance analysis of an iSCSI block device in virtualized environment
AU2015202178A1 (en) Fabric distributed resource scheduling

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13887679

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13887679

Country of ref document: EP

Kind code of ref document: A1