
WO2018177042A1 - Method and apparatus for implementing resource scheduling - Google Patents

Method and apparatus for implementing resource scheduling

Info

Publication number
WO2018177042A1
WO2018177042A1 · PCT/CN2018/076386
Authority
WO
WIPO (PCT)
Prior art keywords
load
physical machine
physical
resource scheduling
machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2018/076386
Other languages
English (en)
French (fr)
Inventor
童遥
申光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to ES18774667T priority Critical patent/ES2939689T3/es
Priority to EP18774667.2A priority patent/EP3606008B1/en
Publication of WO2018177042A1 publication Critical patent/WO2018177042A1/zh
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • H04L67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L67/1001: Protocols for accessing one among a plurality of replicated servers
    • G06F9/45558: Hypervisor-specific management and integration aspects
    • G06F9/5088: Techniques for rebalancing the load in a distributed system involving task migration
    • H04L67/1044: Group management mechanisms in peer-to-peer [P2P] networks
    • H04L67/563: Data redirection of data network streams
    • G06F2009/4557: Distribution of virtual machine instances; Migration and load balancing
    • G06F2009/45579: I/O management, e.g. providing access to device drivers or storage
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This document relates to, but is not limited to, virtualization technology, and more particularly to a method and apparatus for implementing resource scheduling.
  • Server clusters based on virtualization technology are growing ever larger; the number of virtual machines can reach tens of thousands, and how they are deployed within the cluster is crucial to the cluster as a whole.
  • The resource load of a large-scale cluster is highly variable: system administrators are often unable to judge the current cluster load accurately within a short period, and cannot perform the scheduling control needed to complete resource scheduling for large numbers of virtual machines. As a result, more and more engineers are turning their attention to dynamic resource scheduling in virtualized clusters.
  • Typical resource scheduling scenarios are as follows: 1) physical machine loads are uneven and resource utilization density is low; 2) the load of some physical machines is too low, so their resources are underutilized; 3) the load of some physical machines is too high, degrading virtual machine performance; 4) a request to start a virtual machine needs a suitable placement point. Meeting these scenarios effectively is a crucial issue in current virtualized cluster management, so implementing dynamic resource scheduling in a large-scale virtualized cluster environment is highly significant.
  • An embodiment of the present invention provides a method for implementing resource scheduling, including:
  • the algorithm pool includes two or more resource scheduling algorithms.
  • the obtaining of performance parameter information of the physical machines and virtual machines in the cluster includes:
  • synchronously receiving the performance parameter information that each monitoring client collects and reports in multicast mode.
  • the performance parameter information includes:
  • For each physical machine: Internet Protocol (IP) address, name, unique identifier, total number of CPUs, CPU usage, total memory size, memory usage, disk I/O read/write speed, and the virtual machines deployed on the physical machine; here, the information about the virtual machines deployed on a physical machine may include parameters such as their number and names.
  • For each virtual machine: unique name, allocated memory size, memory usage, number of allocated CPUs, virtual machine disk I/O read/write speed, affinity description, mutual exclusion description, and status flag.
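For illustration, the performance parameter information listed above can be modeled as plain record types; the field names below are hypothetical and simply mirror the two lists in this claim:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VMInfo:
    # Per-virtual-machine parameters from the list above (names are illustrative).
    name: str                  # unique name
    mem_allocated_mb: int      # allocated memory size
    mem_usage: float           # memory usage as a fraction in [0, 1]
    cpu_count: int             # number of allocated CPUs
    disk_io_rw_kbps: float     # virtual machine disk I/O read/write speed
    affinity: str = ""         # affinity description
    mutex: str = ""            # mutual-exclusion description
    status: str = "running"    # status flag

@dataclass
class PMInfo:
    # Per-physical-machine parameters from the list above.
    ip: str
    name: str
    uid: str                   # unique identifier
    cpu_total: int             # total number of CPUs
    cpu_usage: float           # fraction in [0, 1]
    mem_total_mb: int
    mem_usage: float
    disk_io_rw_kbps: float
    vms: List[VMInfo] = field(default_factory=list)  # deployed virtual machines
```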
  • the algorithm pool registers each resource scheduling algorithm separately through preset algorithm extension interfaces, in one-to-one correspondence.
  • the resource scheduling algorithm includes at least one of a hotspot cancellation algorithm, an energy saving integration algorithm, and a load balancing algorithm.
  • the energy-saving integration algorithm includes:
  • when the load of the cluster is less than or equal to a preset load threshold, the physical machines are sorted by load according to the cluster load information; a preset number of the lowest-load physical machines are added to a to-be-powered-off list, and the unselected physical machines are added to a candidate physical machine list; here the load includes CPU usage and memory usage;
  • the virtual machines in the to-be-migrated list are placed, one by one, onto the most heavily loaded physical machine in the candidate physical machine list;
  • if placement succeeds, a migration decision sequence is output according to the placement relationships, and a power-off decision sequence is generated from the to-be-migrated list;
  • if a virtual machine fails to be placed, the most heavily loaded physical machine is removed from the to-be-powered-off list and appended to the tail of the candidate physical machine list, the virtual machines of that physical machine are deleted from the to-be-migrated list, the load-sorted candidate physical machine list and the to-be-migrated list are updated, and placement of the remaining virtual machines in the to-be-migrated list continues.
  • the load balancing algorithm includes:
  • the physical machines in the cluster are sorted by load;
  • the hotspot cancellation algorithm includes:
  • an embodiment of the present invention further provides an apparatus for implementing resource scheduling, including: an obtaining and parsing unit, a determining unit, a selecting unit, and an output scheduling unit;
  • the obtaining and parsing unit is configured to: obtain performance parameter information of the physical machines and virtual machines in the cluster, and parse it into performance data information in a preset format;
  • the determining unit is configured to: determine cluster load information according to the obtained performance data information;
  • the selecting unit is configured to: select, according to the determined cluster load information and the received request configuration, a resource scheduling algorithm for performing resource scheduling from a preset algorithm pool;
  • the output scheduling unit is configured to: output a resource scheduling decision according to the selected resource scheduling algorithm, to perform resource scheduling on the cluster according to the output resource scheduling decision;
  • the obtaining and parsing unit is configured to:
  • synchronously receive the performance parameter information that each monitoring client collects and reports in multicast mode.
  • the performance parameter information includes:
  • For each physical machine: Internet Protocol (IP) address, name, unique identifier, total number of CPUs, CPU usage, total memory size, memory usage, disk I/O read/write speed, and the virtual machines deployed on the physical machine; here, the information about the virtual machines deployed on a physical machine may include parameters such as their number and names.
  • For each virtual machine: unique name, allocated memory size, memory usage, number of allocated CPUs, virtual machine disk I/O read/write speed, affinity description, mutual exclusion description, and status flag.
  • the device further includes:
  • the energy-saving integration algorithm includes:
  • when the load of the cluster is less than or equal to a preset load threshold, the physical machines are sorted by load according to the cluster load information; a preset number of the lowest-load physical machines are added to the to-be-powered-off list, and the unselected physical machines are added to the candidate physical machine list; here the load includes CPU usage and memory usage;
  • the virtual machines in the to-be-migrated list are placed, one by one, onto the most heavily loaded physical machine in the candidate physical machine list;
  • if placement succeeds, a migration decision sequence is output according to the placement relationships, and a power-off decision sequence is generated from the to-be-migrated list;
  • if a virtual machine fails to be placed, the most heavily loaded physical machine is removed from the to-be-powered-off list and appended to the tail of the candidate physical machine list, the virtual machines of that physical machine are deleted from the to-be-migrated list, the load-sorted candidate physical machine list and the to-be-migrated list are updated, and placement of the remaining virtual machines in the to-be-migrated list continues.
  • the load balancing algorithm includes:
  • the physical machines in the cluster are sorted by load;
  • the hotspot cancellation algorithm includes:
  • the virtual machines on the hotspot physical machine are sorted by load according to the performance data information, and the most heavily loaded virtual machine is migrated to the least loaded physical machine in the cluster.
  • To sum up, the application includes: obtaining performance parameter information of the physical machines and virtual machines in a cluster, and parsing it into performance data information in a preset format; determining cluster load information according to the obtained performance data information; selecting, according to the determined cluster load information and a received request configuration, a resource scheduling algorithm from a preset algorithm pool; and outputting a resource scheduling decision according to the selected resource scheduling algorithm, so that resource scheduling is performed on the cluster according to the output decision; where the algorithm pool includes two or more resource scheduling algorithms.
  • the embodiment of the invention implements dynamic resource scheduling for different virtualization platforms.
  • FIG. 1 is a flowchart of a method for implementing resource scheduling according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of an energy-saving integration algorithm according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of a load balancing algorithm according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of a hotspot cancellation algorithm according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of a hotspot cancellation algorithm according to another embodiment of the present invention.
  • FIG. 6 is a structural block diagram of an apparatus for implementing resource scheduling according to an embodiment of the present invention.
  • VMware Distributed Resource Scheduler (DRS) is a commercial dynamic resource scheduling solution. According to the resource load of VMware ESX hosts, DRS dynamically migrates virtual machines onto lightly loaded hosts and then shuts down the vacated ESX hosts; live migration of virtual machines between ESX hosts is implemented by VMware VMotion, and the migration process is completely transparent to end users. VMware thus provides a complete dynamic resource scheduling solution, but because this dynamic resource adjustment is limited to the architecture set by VMware itself, it cannot be applied to other virtualization platforms such as Xen (an open-source virtual machine monitor) or the Kernel-based Virtual Machine (KVM).
  • OpenNebula (an open-source toolkit for cloud computing) supports the creation and management of private clouds on virtualization platforms such as Xen and KVM, and, through Deltacloud (a set of open-source cloud Application Programming Interfaces (APIs) launched by Red Hat in September 2009), can interoperate with services such as Amazon's Elastic Compute Cloud (EC2). OpenNebula integrates virtual machine network interconnection, Internet Protocol (IP) configuration, image files, memory, and central processing unit (CPU) management, as well as virtual machine resource usage statistics, providing a unified operation portal for cluster administrators. However, OpenNebula focuses on virtualized cluster management and its own scheduling capability is relatively weak: it only provides initial placement of virtual machines and cannot implement dynamic resource scheduling.
  • FIG. 1 is a flowchart of a method for implementing resource scheduling according to an embodiment of the present invention. As shown in FIG. 1 , the method includes:
  • Step 100 Obtain performance parameter information of the physical machines and virtual machines in the cluster, and parse it into performance data information in a preset format.
  • the preset format is a format compatible with each resource scheduling system; the specific format may be determined according to the formats the resource scheduling system supports.
  • in the embodiment of the present invention, obtaining the performance parameter information of the physical machines and virtual machines in the cluster includes:
  • synchronously receiving the performance parameter information that each monitoring client collects and reports in multicast mode.
  • the performance parameter information of the embodiment of the present invention includes:
  • For each physical machine: Internet Protocol (IP) address, name, unique identifier, total number of CPUs, CPU usage, total memory size, memory usage, disk I/O read/write speed, and the virtual machines deployed on the physical machine; here, the information about the virtual machines deployed on a physical machine may include parameters such as their number and names.
  • For each virtual machine: unique name, allocated memory size, memory usage, number of allocated CPUs, virtual machine disk I/O read/write speed, affinity description, mutual exclusion description, and status flag.
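The multicast collection described above could be sketched as follows; the multicast group, port, timeout, and JSON payload layout are all assumptions made for illustration and are not specified by this document:

```python
import json
import socket
import struct

MCAST_GRP, MCAST_PORT = "239.1.1.1", 5007   # hypothetical multicast group/port

def parse_report(datagram: bytes) -> dict:
    """Convert one monitoring-client datagram into a preset-format record."""
    raw = json.loads(datagram.decode("utf-8"))
    return {
        "ip": raw["ip"],
        "cpu_usage": float(raw["cpu"]),
        "mem_usage": float(raw["mem"]),
        "vms": raw.get("vms", []),
    }

def collect(timeout_s: float = 1.0) -> list:
    """Synchronously receive one round of multicast reports from the clients."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    # Join the multicast group so the monitoring clients' datagrams reach us.
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(timeout_s)
    reports = []
    try:
        while True:
            data, _ = sock.recvfrom(65536)
            reports.append(parse_report(data))
    except socket.timeout:
        pass                      # one collection round is over
    finally:
        sock.close()
    return reports
```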
  • Step 101 Determine cluster load information according to the obtained performance data information.
  • Step 102 Select, according to the determined cluster load information and the received request configuration, a resource scheduling algorithm for performing resource scheduling from a preset algorithm pool;
  • the algorithm pool includes two or more resource scheduling algorithms.
  • the algorithm pool registers each resource scheduling algorithm separately through preset algorithm extension interfaces, in one-to-one correspondence.
  • more algorithm extension interfaces may be added according to the cluster's resource scheduling requirements, so that more resource scheduling algorithms can be registered.
  • the resource scheduling algorithm in the embodiment of the present invention includes at least one of a hotspot cancellation algorithm, an energy saving integration algorithm, and a load balancing algorithm.
  • the hotspot cancellation algorithm, the energy-saving integration algorithm, and the load balancing algorithm are optional algorithms of the embodiments of the present invention.
  • the resource scheduling algorithm may be added or deleted according to the requirements of the cluster for resource scheduling.
  • FIG. 2 is a schematic flowchart of an energy-saving integration algorithm according to an embodiment of the present invention.
  • the energy-saving integration algorithm includes:
  • Step 200 When the load of the cluster is less than or equal to a preset load threshold, sort the physical machines by load according to the cluster load information, add a preset number of the lowest-load physical machines to the to-be-powered-off list, and add the unselected physical machines to the candidate physical machine list; here the load includes CPU usage and memory usage;
  • the number of lowest-load physical machines selected may be determined according to the cluster size; for example, the three physical machines with the lowest load may be selected;
  • Step 201 Add all virtual machines in the physical machine list to be powered off to the to-be-migrated list.
  • Step 202 For each virtual machine in the to-be-migrated list, try to place the virtual machines one by one onto the most heavily loaded physical machine in the candidate physical machine list;
  • Step 203 Determine whether the virtual machine is placed successfully; if so, perform step 204; if placement fails, perform step 205;
  • Step 204 Output a migration decision sequence according to the placement relationships, and generate a power-off decision sequence from the to-be-migrated list; the process ends.
  • Step 205 If the virtual machine fails to be placed, delete the most heavily loaded physical machine from the to-be-powered-off list, append it to the tail of the candidate physical machine list, delete that physical machine's virtual machines from the to-be-migrated list, update the load-sorted candidate physical machine list and the to-be-migrated list, and continue the placement attempts for the virtual machines remaining in the to-be-migrated list.
  • The load may include one or more of CPU load, memory usage, and the like. The sorting may be based on a single metric, for example only CPU load or only memory usage; alternatively, CPU load and memory usage may be combined by a calculation formula into a single load parameter used for sorting, for example by weighting CPU load and memory usage one half each.
  • The load threshold can be determined by a person skilled in the art according to the cluster size, the cluster working status, and the like; for example, a lower limit of 10% and an upper limit of 90% may be set for CPU usage and memory usage.
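Taken together, steps 200 to 205 form a consolidation loop: pick the lowest-load machines as power-off candidates, try to pack their virtual machines onto the busiest remaining machines, and shrink the power-off list whenever packing fails. A minimal sketch of that loop, assuming each machine is described by a single combined load figure in [0, 1] and a hypothetical 90% capacity cap:

```python
def consolidate(pms, n_poweroff, cap=0.9):
    """pms: list of dicts with 'name', 'load' (combined CPU/memory figure in
    [0, 1]) and 'vms' (list of (vm_name, vm_load) pairs).  Returns a
    (migrations, poweroff) pair of decision sequences."""
    pms = sorted(pms, key=lambda p: p["load"])          # Step 200: sort by load
    to_off = pms[:n_poweroff]                           # lowest-load machines
    candidates = pms[n_poweroff:]
    while to_off:
        # Step 201: every VM on a to-be-powered-off machine must migrate.
        to_migrate = [vm for pm in to_off for vm in pm["vms"]]
        loads = {p["name"]: p["load"] for p in candidates}
        migrations, ok = [], True
        for vm_name, vm_load in to_migrate:             # Step 202: place one by one
            # Try the most heavily loaded candidate that still has room.
            for pm in sorted(candidates, key=lambda p: -loads[p["name"]]):
                if loads[pm["name"]] + vm_load <= cap:
                    loads[pm["name"]] += vm_load
                    migrations.append((vm_name, pm["name"]))
                    break
            else:
                ok = False                              # Step 205: placement failed
                break
        if ok:                                          # Step 204: emit decisions
            return migrations, [p["name"] for p in to_off]
        # Move the highest-load to-be-powered-off machine back to the candidates
        # and retry the placement with the smaller power-off list.
        moved = max(to_off, key=lambda p: p["load"])
        to_off.remove(moved)
        candidates.append(moved)
    return [], []
```

The "busiest candidate first" order mirrors the rule in step 202 of placing onto the physical machine with the highest load.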
  • FIG. 3 is a schematic flowchart of a load balancing algorithm according to an embodiment of the present invention.
  • the load balancing algorithm includes:
  • Step 300 When the load balance degree of the physical machines in the cluster is less than a preset load balance threshold, sort the physical machines in the cluster by load;
  • Step 301 Select the most heavily loaded physical machine; using the performance data information, compute the cluster load balance degree that would result from moving out each virtual machine on that physical machine, mark the virtual machine that yields the best load balance degree for migration, and remove it from the physical machine;
  • Step 302 Compute the load balance degree that would result from migrating the marked virtual machine to each of the other physical machines, and move the virtual machine to the physical machine that yields the best load balance degree.
  • The load balance degree can include two levels: the physical machine level and the virtual machine level. Both levels can be measured by CPU usage and memory usage; for example, the physical machine level may be weighted 60% and the virtual machine level 40%.
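This section does not give a formula for the load balance degree itself, so the sketch below assumes a simple metric (one minus the population standard deviation of per-machine loads, higher meaning better balanced) purely for illustration; the equal CPU/memory weighting follows the example given earlier for the load sorting step.

```python
from statistics import pstdev

def balance_degree(pm_loads):
    # Assumed metric: a lower spread of per-machine loads means a
    # better-balanced cluster, so report 1 - standard deviation.
    return 1.0 - pstdev(pm_loads)

def combined_load(cpu_usage, mem_usage):
    # One way to fold CPU and memory into a single load figure
    # (equal halves, as suggested for the sorting step).
    return 0.5 * cpu_usage + 0.5 * mem_usage

def pick_vm_to_move(pms):
    """Sketch of steps 300-302: pms maps pm name -> list of (vm, load).
    Returns (vm, src, dst) for the single move that best improves balance."""
    totals = {pm: sum(l for _, l in vms) for pm, vms in pms.items()}
    src = max(totals, key=totals.get)                  # most loaded machine
    best = None
    for vm, vl in pms[src]:                            # Step 301: try each VM
        for dst in pms:                                # Step 302: try each target
            if dst == src:
                continue
            trial = dict(totals)
            trial[src] -= vl
            trial[dst] += vl
            d = balance_degree(trial.values())
            if best is None or d > best[0]:
                best = (d, vm, src, dst)
    return best[1:] if best else None
```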
  • FIG. 4 is a schematic flowchart of a hotspot cancellation algorithm according to an embodiment of the present invention.
  • the hotspot cancellation algorithm includes:
  • Step 400 Determine, according to performance data information and a preset load hotspot threshold, whether the physical machine in the cluster is a hot spot physical machine;
  • Step 401 Select one or more virtual machines on the hotspot physical machine and migrate them to other physical machines in the cluster, so that the load of the hotspot physical machine falls back below the hotspot threshold.
  • the load hotspot threshold in the embodiment of the present invention may include: a memory load hotspot threshold, a CPU load hotspot threshold, and the like, which may be set by a person skilled in the art according to an empirical value.
  • FIG. 5 is a schematic flowchart of a hotspot cancellation algorithm according to another embodiment of the present invention.
  • the hotspot cancellation algorithm includes:
  • Step 500 Sort the physical machines in the cluster by load according to the performance data information;
  • Step 501 According to the performance data information, determine that the most heavily loaded physical machine is a hotspot physical machine when its CPU load is greater than a preset CPU load hotspot threshold, or its memory usage is greater than a preset memory load hotspot threshold;
  • Step 502 Sort the virtual machines on the hotspot physical machine by load according to the performance data information, and migrate the most heavily loaded virtual machine to the least loaded physical machine in the cluster.
  • The embodiment of the present invention performs the above determination process continuously.
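Steps 500 to 502 might be sketched as follows; the 85%/90% hotspot thresholds are hypothetical empirical values of the kind the text says a person skilled in the art would choose:

```python
CPU_HOT, MEM_HOT = 0.85, 0.90   # assumed hotspot thresholds (empirical values)

def cancel_hotspot(pms):
    """Sketch of steps 500-502: pms is a list of dicts with 'name', 'cpu',
    'mem' and 'vms' (list of (vm_name, vm_load) pairs).  Returns one
    (vm, src, dst) migration decision, or None when there is no hotspot."""
    # Step 500: sort physical machines by load, busiest first.
    ranked = sorted(pms, key=lambda p: p["cpu"] + p["mem"], reverse=True)
    hot = ranked[0]
    # Step 501: hotspot test on the busiest machine.
    if hot["cpu"] <= CPU_HOT and hot["mem"] <= MEM_HOT:
        return None
    # Step 502: move the busiest VM to the least loaded machine.
    vm = max(hot["vms"], key=lambda v: v[1])
    coldest = ranked[-1]
    return vm[0], hot["name"], coldest["name"]
```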
  • Step 103 Output a resource scheduling decision according to the selected resource scheduling algorithm, to perform resource scheduling on the cluster according to the output resource scheduling decision.
  • the resource scheduling decision includes the resource decision planning process and the virtual machine control function.
  • the control functions for virtual machines are mainly encapsulations of the xm command.
  • the format of the xm migration command is: # xm migrate --live --ssl vm_id dest_pm_ip, where vm_id is the ID of the virtual machine to be migrated and dest_pm_ip is the IP address of the target physical machine.
  • the parameters --live and --ssl indicate that a live migration is performed over an SSL connection, to ensure the security of the migration.
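A thin wrapper around the xm command described above might look like this; the dry_run flag is an illustrative convenience, since xm itself is only available on a Xen control domain:

```python
import subprocess

def migrate_vm(vm_id: str, dest_pm_ip: str, dry_run: bool = True):
    """Encapsulate the xm live-migration command described above.
    With dry_run=True the command list is returned instead of executed."""
    cmd = ["xm", "migrate", "--live", "--ssl", vm_id, dest_pm_ip]
    if dry_run:
        return cmd
    # Raises CalledProcessError if the migration command fails.
    return subprocess.run(cmd, check=True)
```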
  • The embodiment of the invention further provides a computer storage medium storing computer-executable instructions, where the computer-executable instructions are used to perform the foregoing method for implementing resource scheduling.
  • An embodiment of the present invention further provides an apparatus for implementing resource scheduling, including: a memory and a processor; wherein
  • the processor is configured to execute the program instructions in the memory.
  • the algorithm pool includes two or more resource scheduling algorithms.
  • the obtaining and parsing unit 601 is configured to: obtain performance parameter information of the physical machines and virtual machines in the cluster, and parse it into performance data information in a preset format;
  • the preset format is a format compatible with each resource scheduling system; the specific format may be determined according to the formats the resource scheduling system supports.
  • the obtaining and parsing unit 601 of the embodiment of the present invention is configured to:
  • synchronously receive the performance parameter information that each monitoring client collects and reports in multicast mode.
  • the performance parameter information of the embodiment of the present invention includes:
  • For each physical machine: Internet Protocol (IP) address, name, unique identifier, total number of CPUs, CPU usage, total memory size, memory usage, disk I/O read/write speed, and the virtual machines deployed on the physical machine; here, the information about the virtual machines deployed on a physical machine may include parameters such as their number and names.
  • For each virtual machine: unique name, allocated memory size, memory usage, number of allocated CPUs, virtual machine disk I/O read/write speed, affinity description, mutual exclusion description, and status flag.
  • the determining unit 602 is configured to: determine cluster load information according to the obtained performance data information;
  • the selecting unit 603 is configured to: select, according to the determined cluster load information and the received request configuration, a resource scheduling algorithm for performing resource scheduling from a preset algorithm pool;
  • the resource scheduling algorithm in the embodiment of the present invention includes at least one of a hotspot cancellation algorithm, an energy saving integration algorithm, and a load balancing algorithm.
  • the hotspot cancellation algorithm, the energy-saving integration algorithm, and the load balancing algorithm are optional algorithms of the embodiments of the present invention.
  • the resource scheduling algorithm may be added or deleted according to the requirements of the cluster for resource scheduling.
  • the energy-saving integration algorithm includes:
  • when the load of the cluster is less than or equal to a preset load threshold, the physical machines are sorted by load according to the cluster load information; a preset number of the lowest-load physical machines are added to the to-be-powered-off list, and the unselected physical machines are added to the candidate physical machine list; here the load includes CPU usage and memory usage;
  • the number of lowest-load physical machines selected may be determined according to the cluster size; for example, the three physical machines with the lowest load may be selected;
  • all virtual machines on the physical machines in the to-be-powered-off list are added to a to-be-migrated list; the virtual machines in the to-be-migrated list are then attempted to be placed, one by one, onto the physical machine with the highest load in the candidate physical machine list;
  • if a virtual machine is placed successfully, a migration decision sequence is output according to the migrated location relationship, and a power-off decision sequence is generated according to the to-be-migrated list;
  • if a virtual machine fails to be placed, the physical machine with the highest load is removed from the to-be-powered-off physical machine list, the removed physical machine is appended to the end of the candidate physical machine list, the virtual machines on that physical machine are removed from the to-be-migrated list, the load ordering of the candidate physical machine list and the to-be-migrated list are updated, and the placement attempts for the virtual machines remaining in the to-be-migrated list continue.
  • the load may include one or more of CPU load, memory usage, and the like. When multiple load metrics are included, the sorting may be performed according to one of them, for example only according to CPU load, or only according to memory usage; alternatively, CPU load and memory usage may be combined by a calculation formula into a single load parameter and the sorting performed according to that parameter, for example with CPU load and memory usage each weighted as half of the load parameter.
  • the load threshold can be determined by a person skilled in the art according to the cluster size, the cluster working status, and the like; for example, a lower limit of 10% and an upper limit of 90% may be set for CPU usage and memory usage.
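The consolidation steps above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the `Pm`/`Vm` dictionary layout, the 50/50 load weighting, the capacity check, and the simplified failure handling are all assumptions of the example.

```python
# Sketch of the energy-saving integration algorithm: power off the
# lowest-loaded machines and try to repack their VMs onto the
# highest-loaded candidates. Data layout and capacity model are assumed.

def load(pm):
    # Combined load parameter: CPU and memory each weighted one half.
    return 0.5 * pm["cpu"] + 0.5 * pm["mem"]

def consolidate(pms, n_power_off, threshold=0.3):
    if sum(load(p) for p in pms) / len(pms) > threshold:
        return [], []                       # cluster load too high, do nothing
    ranked = sorted(pms, key=load)
    power_off = ranked[:n_power_off]        # to-be-powered-off list
    candidates = ranked[n_power_off:]       # candidate list
    to_migrate = [vm for p in power_off for vm in p["vms"]]
    migrations = []
    for vm in list(to_migrate):
        # Try the highest-loaded candidate first.
        for target in sorted(candidates, key=load, reverse=True):
            if load(target) + vm["size"] <= 1.0:    # simple capacity check
                migrations.append((vm["name"], target["name"]))
                target["cpu"] += vm["size"] / 2     # assumed load model
                target["mem"] += vm["size"] / 2
                break
        else:
            # A fuller version would, as the text describes, move the
            # failed machine back to the candidate list and retry.
            return [], []
    return migrations, [p["name"] for p in power_off]
```

A successful run yields both the migration decision sequence and the power-off decision sequence described above.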
  • when the resource scheduling algorithm includes the load balancing algorithm, the load balancing algorithm includes:
  • when the load balancing degree of the physical machines in the cluster is less than a preset load balancing threshold, arranging the physical machines in the cluster according to load;
  • selecting the physical machine with the highest load, traversing, according to the performance data information, the load balancing degree of the physical machines in the cluster when each virtual machine on that physical machine is migrated out, marking for migration the virtual machine whose removal yields the best load balancing degree, and removing the marked virtual machine from the physical machine; then calculating the load balancing degree when the marked virtual machine is migrated to each of the other physical machines, and migrating the virtual machine into the physical machine with the best calculated load balancing degree;
  • the load balancing degree can include two levels, the physical machine level and the virtual machine level; both levels can be measured by CPU usage and memory usage, with the physical machine level weighted at 60% and the virtual machine level weighted at 40%.
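The 60/40 weighting above can be sketched as a single score. This is an illustrative example only: using one minus the standard deviation of per-node loads as the per-level balance measure, and the equal CPU/memory weighting inside `node_load`, are assumptions not specified by the text.

```python
# Sketch of a load balancing degree combining the physical machine level
# (weight 60%) and the virtual machine level (weight 40%), each measured
# from CPU usage and memory usage.
from statistics import pstdev

def node_load(n):
    return 0.5 * n["cpu"] + 0.5 * n["mem"]   # CPU and memory, equal weight

def balance(nodes):
    # 1.0 means perfectly even loads; lower means more imbalance.
    if len(nodes) < 2:
        return 1.0
    return 1.0 - pstdev(node_load(n) for n in nodes)

def cluster_balance_degree(pms, vms):
    # Physical machine level weighted 60%, virtual machine level 40%.
    return 0.6 * balance(pms) + 0.4 * balance(vms)
```

A scheduler would compare this score before and after a candidate migration and keep the move that maximizes it.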
  • when the resource scheduling algorithm includes the hotspot cancellation algorithm, the hotspot cancellation algorithm includes: determining, according to the performance data information and a preset load hotspot threshold, whether a physical machine in the cluster is a hotspot physical machine; and selecting one or more virtual machines on the hotspot physical machine and migrating them to other physical machines in the cluster, so as to bring the load of the hotspot physical machine back below the threshold;
  • the load hotspot threshold in the embodiment of the present invention may include a memory load hotspot threshold, a CPU load hotspot threshold, and the like, which may be set by a person skilled in the art according to empirical values;
  • in another embodiment, the hotspot cancellation algorithm includes: sorting the physical machines in the cluster by load according to the performance data information; when the CPU load of the physical machine with the highest load is greater than a preset CPU load hotspot threshold, or its memory usage is greater than a preset memory load hotspot threshold, determining that the physical machine is a hotspot physical machine; and sorting the virtual machines on the hotspot physical machine by load according to the performance data information and migrating the virtual machine with the highest load to the physical machine with the lowest load in the cluster;
  • as long as a hotspot physical machine is determined to exist, the embodiment of the present invention repeats the above determination process.
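The second hotspot-cancellation variant can be sketched as follows. The threshold values, the data layout, and returning the migration as a tuple are assumptions of this example, not details from the patent.

```python
# Sketch of hotspot cancellation: flag the busiest machine as a hotspot
# when its CPU or memory exceeds a preset threshold, then move that
# machine's busiest VM to the least-loaded machine in the cluster.

CPU_HOTSPOT, MEM_HOTSPOT = 0.9, 0.9       # empirically chosen thresholds

def cancel_hotspot(pms):
    ranked = sorted(pms, key=lambda p: max(p["cpu"], p["mem"]), reverse=True)
    hottest = ranked[0]
    if hottest["cpu"] <= CPU_HOTSPOT and hottest["mem"] <= MEM_HOTSPOT:
        return None                       # no hotspot in the cluster
    busiest_vm = max(hottest["vms"], key=lambda v: v["load"])
    coolest = ranked[-1]
    # (vm to migrate, source machine, destination machine)
    return (busiest_vm["name"], hottest["name"], coolest["name"])
```

In line with the text, a scheduler would call this in a loop until it returns `None`, i.e. until no hotspot remains.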
  • the output scheduling unit 604 is configured to: output a resource scheduling decision according to the selected resource scheduling algorithm, to perform resource scheduling on the cluster according to the output resource scheduling decision;
  • the algorithm pool includes two or more resource scheduling algorithms.
  • the resource scheduling decision includes the resource decision planning process and the virtual machine control function.
  • the control function of the virtual machine is mainly an encapsulation of the xm command.
  • For example, the format of the xm migration command is: #xm migrate --live --ssl vm_id dest_pm_ip; where vm_id is the ID of the virtual machine to be migrated and dest_pm_ip is the IP address of the target physical machine.
  • the parameters --live and --ssl indicate that live migration is performed over an SSL connection, ensuring the security of the migration.
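An encapsulation of that command might look like the following sketch; the function name, the `dry_run` switch, and the use of `subprocess` are illustrative assumptions, while the command string follows the format given above.

```python
# Illustrative wrapper around the xm live-migration command described
# above. Real deployments would add logging, retries and error handling.
import subprocess

def migrate_vm(vm_id, dest_pm_ip, dry_run=False):
    # --live --ssl: live migration over an SSL connection.
    cmd = ["xm", "migrate", "--live", "--ssl", str(vm_id), dest_pm_ip]
    if dry_run:                      # let callers inspect the command only
        return " ".join(cmd)
    return subprocess.run(cmd, check=True)
```

The output-scheduling unit would invoke such a wrapper once per entry in the migration decision sequence.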
  • the apparatus of the embodiment of the present invention further includes: a registration unit, configured to register each resource scheduling algorithm separately, in a one-to-one correspondence, through a preset algorithm extension interface.
  • more algorithm extension interfaces may be added according to the requirements of the cluster for resource scheduling, and more resource scheduling algorithms are added.
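The extension interface described above can be sketched as a registry: each algorithm is registered under its own name, one to one, and new algorithms can be added without modifying the selector. The decorator-based interface and the trivial selection rule are assumptions of this example.

```python
# Sketch of an algorithm pool with a registration extension interface.
ALGORITHM_POOL = {}

def register(name):
    def wrap(fn):
        if name in ALGORITHM_POOL:
            raise ValueError(f"{name} already registered")   # one-to-one
        ALGORITHM_POOL[name] = fn
        return fn
    return wrap

@register("hotspot_cancellation")
def hotspot_cancellation(cluster_load, request):
    return "migrate busiest VM off the hotspot"

@register("energy_saving")
def energy_saving(cluster_load, request):
    return "consolidate and power off idle machines"

def select_algorithm(cluster_load, request):
    # Stand-in for the real selection based on load and request config.
    name = "hotspot_cancellation" if cluster_load > 0.9 else "energy_saving"
    return ALGORITHM_POOL[name](cluster_load, request)
```

Adding a load-balancing algorithm would then be one more `@register("load_balancing")` function, with no change to `select_algorithm`'s callers.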
  • The term computer storage medium includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVDs) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • Furthermore, communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
  • the embodiment of the invention implements dynamic resource scheduling for different virtualization platforms.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Debugging And Monitoring (AREA)
  • Multi-Process Working Machines And Systems (AREA)
  • General Factory Administration (AREA)

Abstract

A method and device for implementing resource scheduling, including: acquiring performance parameter information of physical machines and virtual machines in a cluster, and parsing and converting it into performance data information in a preset format (100); determining cluster load information according to the obtained performance data information (101); selecting, according to the determined cluster load information and a received request configuration, a resource scheduling algorithm for performing resource scheduling from a preset algorithm pool (102); and outputting a resource scheduling decision according to the selected resource scheduling algorithm, so as to perform resource scheduling on the cluster according to the output resource scheduling decision (103); where the algorithm pool includes two or more resource scheduling algorithms.

Description

Method and Device for Implementing Resource Scheduling
Technical Field
This document relates to, but is not limited to, virtualization technology, and in particular to a method and device for implementing resource scheduling.
Background
As cloud services are more and more widely used, server clusters based on virtualization technology keep growing in scale. In a large-scale virtualization cluster environment the number of virtual machines can reach tens of thousands, and how they are deployed across the cluster is critical to the cluster as a whole. The resource load of a large-scale cluster changes constantly; a system administrator often can neither make an accurate judgment of the current resource load of the cluster in a short time, nor schedule and control the numerous virtual machines to meet resource scheduling needs. Therefore, more and more engineers have begun to pay attention to the problem of dynamic resource scheduling in virtualization clusters.
In a virtualization cluster environment, typical resource scheduling scenarios include the following: 1) the load of the physical machines is unbalanced and the density of resource utilization is low; 2) some physical machines are under-loaded, so their resource utilization is low; 3) some physical machines are over-loaded, so the performance of the virtual machines on them suffers; 4) there are requests for virtual machines to be started, for which suitable placement points need to be selected. How to effectively meet the needs of the above application scenarios is a crucial problem in current virtualization cluster management, so implementing dynamic resource scheduling in a large-scale virtualization cluster environment is of great significance.
Summary
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of protection of the claims.
Embodiments of the present invention provide a method and device for implementing resource scheduling, capable of simultaneously implementing resource scheduling for different virtualization platforms.
An embodiment of the present invention provides a method for implementing resource scheduling, including:
acquiring performance parameter information of physical machines and virtual machines in a cluster, and parsing and converting it into performance data information in a preset format;
determining cluster load information according to the obtained performance data information;
selecting, according to the determined cluster load information and a received request configuration, a resource scheduling algorithm for performing resource scheduling from a preset algorithm pool;
outputting a resource scheduling decision according to the selected resource scheduling algorithm, so as to perform resource scheduling on the cluster according to the output resource scheduling decision;
where the algorithm pool includes two or more resource scheduling algorithms.
可选的,所述获取集群中物理机和虚拟机的性能参数信息包括:
通过预先设置在每一个物理机上的监控客户端采集所述物理机和所述虚拟机的性能参数信息;
采用轮询方式从所述监控客户端上获取监控客户端采集的所述性能参数信息;
汇总通过轮询方式获得的所有所述物理机和所述虚拟机的所述性能参数信息;
其中,每一个所述监控客户端之间采用组播方式同步采集到的所述性能参数信息。
可选的,所述性能参数信息包括:
每一个所述物理机的:互联网协议IP地址、名称、唯一标识、中央处理器CPU总数、CPU使用率、内存总大小、内存使用率、磁盘输入输出端口I/O的读写速度、物理机上设置的虚拟机;这里,物理机上设置的虚拟机包括设置的虚拟机的个数、名称等虚拟机参数信息。
每一个虚拟机的:名称、唯一标识、分配的内存大小、内存使用率、分配的CPU个数、虚拟机磁盘I/O的读写速度、亲和性描述、互斥性描述、状态标识位。
可选的,所述算法池通过预先设置的算法扩展接口,按一一对应的关系分别注册所述资源调度算法。
可选的,所述资源调度算法包括:热点解除算法、节能整合算法和负载均衡算法中至少一种。
可选的,所述资源调度算法包括节能整合算法时,所述节能整合算法包括:
所述集群的负载小于或等于预设的负载阈值时,根据集群负载信息将物理机按照负载进行排序,将负载排序最低的预设台物理机添加到待下电物理机列表,将未选取的物理机添加到候选物理机列表;其中,所述负载包括:CPU使用率、内存使用率;
将待下电物理机列表中的所有虚拟机添加到待迁移列表;
对待迁移列表中的每一个虚拟机,将待迁移列表中的虚拟机逐一尝试放置到从所述候选物理机列表中负载最高的物理机上;
若虚拟机放置成功,根据迁移的位置关系输出迁移决策序列,同时根据待迁移列表生成下电决策序列;
若虚拟机放置失败,从待下电物理机列表中删除负载最高的物理机,将删除的物理机添加到候选物理机列表的尾部,从待迁移列表中删除添加到候选物理机列表的物理机上的虚拟机,更新候选物理机列表的负载排序、待迁移列表,继续待迁移列表中虚拟机的放置尝试。
可选的,所述资源调度算法包括负载均衡算法时,所述负载均衡算法包括:
当集群中物理机的负载均衡度小于预设的负载均衡阈值时,将集群中物理机的按照负载进行排列;
选择负载最高的物理机,根据性能数据信息遍历该物理机上的每一个虚拟机迁出时的集群中物理机的负载均衡度,确定负载均衡度最优的那台虚拟机进行迁移标记,并将确定的虚拟机从物理机上删除;
计算完成迁移标记的虚拟机迁移到其他每一个物理机时的负载均衡度,将虚拟机迁入计算获得的负载均衡度最优的物理机。
可选的,所述资源调度算法包括热点解除算法时,所述热点解除算法包括:
根据性能数据信息和预先设置的负载热点阈值确定集群中的物理机是否为热点物理机;
从热点物理机上选择一个或一个以上虚拟机,将其迁移到集群中的其他物理机上,以降低热点物理机的负载阈值。
可选的,所述资源调度算法包括热点解除算法时,所述热点解除算法包括:
根据所述性能数据信息将集群中的物理机按照负载进行排序;
根据所述性能数据信息,当负载最高的物理机CPU负载大于预先设置的CPU负载热点阈值,或内存使用率大于预先设置的内存负载热点阈值时,确定该物理机为热点物理机;
对热点物理机中的虚拟机根据性能数据信息按照负载进行排序,将排序后负载最高的虚拟机迁移到集群中负载最低的物理机上。
另一方面,本发明实施例还提供一种实现资源调度的装置,包括:获取及解析单元、确定单元、选择单元和输出调度单元;其中,
获取及解析单元设置为:获取集群中物理机和虚拟机的性能参数信息,并解析转换为预设格式的性能数据信息;
确定单元设置为:根据获得的性能数据信息确定集群负载信息;
选择单元设置为:根据确定的集群负载信息及接收到的请求配置,从预设的算法池中选择进行资源调度的资源调度算法;
输出调度单元设置为:根据选择的资源调度算法输出资源调度决策,以根据输出的资源调度决策对集群进行资源调度;
其中,所述算法池中包括两种或两种以上资源调度算法。
可选的,所述获取及解析单元是设置为:
通过预先设置在每一个物理机上的监控客户端采集所述物理机和所述虚拟机的性能参数信息;
采用轮询方式从所述监控客户端上获取监控客户端采集的所述性能参数信息;汇总通过轮询方式获得的所有所述物理机和所述虚拟机的所述性能参数信息;
将汇总的性能参数信息解析转换为预设格式的性能数据信息;
其中,每一个所述监控客户端之间采用组播方式同步采集到的所述性能参数信息。
可选的,所述性能参数信息包括:
每一个所述物理机的:互联网协议IP地址、名称、唯一标识、中央处理器CPU总数、CPU使用率、内存总大小、内存使用率、磁盘输入输出端口I/O的读写速度、物理机上设置的虚拟机;这里,物理机上设置的虚拟机包括设置的虚拟机的个数、名称等虚拟机参数信息。
每一个虚拟机的:名称、唯一标识、分配的内存大小、内存使用率、分配的CPU个数、虚拟机磁盘I/O的读写速度、亲和性描述、互斥性描述、状态标识位。
可选的,所述装置还包括:
注册单元,设置为通过预先设置的算法扩展接口,按一一对应的关系分别注册所述资源调度算法。
可选的,所述资源调度算法包括:热点解除算法、节能整合算法和负载均衡算法中至少一种。
可选的,所述资源调度算法包括节能整合算法时,所述节能整合算法包括:
所述集群的负载小于或等于预设的负载阈值时,根据集群负载信息将物理机按照负载进行排序,将负载排序最低的预设台物理机添加到待下电物理机列表,将未选取的物理机添加到候选物理机列表;其中,所述负载包括:CPU使用率、内存使用率;
将待下电物理机列表中的所有虚拟机添加到待迁移列表;
对待迁移列表中的每一个虚拟机,将待迁移列表中的虚拟机逐一尝试放置到从所述候选物理机列表中负载最高的物理机上;
若虚拟机放置成功,根据迁移的位置关系输出迁移决策序列,同时根据待迁移列表生成下电决策序列;
若虚拟机放置失败,从待下电物理机列表中删除负载最高的物理机,将删除的物理机添加到候选物理机列表的尾部,从待迁移列表中删除添加到候 选物理机列表的物理机上的虚拟机,更新候选物理机列表的负载排序、待迁移列表,继续待迁移列表中虚拟机的放置尝试。
可选的,所述资源调度算法包括负载均衡算法时,所述负载均衡算法包括:
当集群中物理机的负载均衡度小于预设的负载均衡阈值时,将集群中物理机的按照负载进行排列;
选择负载最高的物理机,根据性能数据信息遍历该物理机上的每一个虚拟机迁出时的集群中物理机的负载均衡度,确定负载均衡度最优的那台虚拟机进行迁移标记,并将确定的虚拟机从物理机上删除;
计算完成迁移标记的虚拟机迁移到其他每一个物理机时的负载均衡度,将虚拟机迁入计算获得的负载均衡度最优的物理机。
可选的,所述资源调度算法包括热点解除算法时,所述热点解除算法包括:
根据性能数据信息和预先设置的负载热点阈值确定集群中的物理机是否为热点物理机;
从热点物理机上选择一个或一个以上虚拟机,将其迁移到集群中的其他物理机上,以降低热点物理机的负载阈值。
可选的,所述资源调度算法包括热点解除算法时,所述热点解除算法包括:
根据所述性能数据信息将集群中的物理机按照负载进行排序;
根据所述性能数据信息,当负载最高的物理机CPU负载大于预先设置的CPU负载热点阈值,或内存使用率大于预先设置的内存负载热点阈值时,确定该物理机为热点物理机;
对热点物理机中的虚拟机根据性能数据信息按照负载进行排序,将排序后负载最高的虚拟机迁移到集群中负载最低的物理机上。
本申请包括:获取集群中物理机和虚拟机的性能参数信息,并解析转换为预设格式的性能数据信息;根据获得的性能数据信息确定集群负载信息;根据确定的集群负载信息及接收到的请求配置,从预设的算法池中选择进行 资源调度的资源调度算法;根据选择的资源调度算法输出资源调度决策,以根据输出的资源调度决策对集群进行资源调度;其中,算法池中包括两种或两种以上资源调度算法。本发明实施例实现了对不同虚拟化平台的动态资源调度。
在阅读并理解了附图和详细描述后,可以明白其他方面。
附图概述
图1为本发明实施例实现资源调度的方法的流程图;
图2为本发明实施例节能整合算法的流程示意图;
图3为本发明实施例负载均衡算法的流程示意图;
图4为本发明实施例热点解除算法的流程示意图;
图5为本发明另一实施例热点解除算法的流程示意图;
图6为本发明实施例实现资源调度的装置的结构框图。
本发明的实施方式
下文中将结合附图对本发明的实施例进行详细说明。
Distributed Resource Scheduler (DRS) is a commercial dynamic resource scheduling solution from VMware. Based on the resource load of VMware ESX hosts, DRS dynamically migrates virtual machines to hosts with lower load and then shuts down the source ESX host; the dynamic migration of virtual machines between ESX hosts is implemented by VMware VMotion, and the migration process is completely transparent to end users. VMware provides a complete dynamic resource scheduling solution, but because this dynamic resource adjustment is restricted to the architecture defined by VMware itself, it cannot be applied to other virtualization platforms, including Xen (an open-source virtual machine monitor) and the Kernel-based Virtual Machine (KVM).
OpenNebula (an open-source toolkit built for cloud computing) supports establishing and managing private clouds together with virtualization platforms such as Xen and KVM, and also provides a Deltacloud adapter (a set of open-source application programming interfaces (APIs) released by Red Hat in September 2009) that cooperates with Amazon Elastic Compute Cloud (EC2) to manage hybrid clouds. As a virtualization management platform, OpenNebula integrates the management of resources such as virtual machine Internet Protocol (IP) addresses, image files, memory and central processing units (CPUs), together with statistics on virtual machine resource usage, providing cluster administrators with a unified operation entry. However, OpenNebula focuses on virtualization cluster management; its own scheduling capability is relatively weak, as it only provides initial placement of virtual machine deployments and cannot implement dynamic resource scheduling.
In summary, there is currently no dynamic resource scheduling system applicable to all virtualization platforms.
图1为本发明实施例实现资源调度的方法的流程图,如图1所示,包括:
步骤100、获取集群中物理机和虚拟机的性能参数信息,并解析转换为预设格式的性能数据信息;
这里,预设格式的性能数据信息包括:可以兼容每一种资源调度系统的格式,具体格式可以根据资源调度系统支持的格式确定。
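The conversion into a preset format described above can be sketched as normalizing heterogeneous monitor output into one fixed schema that every scheduling algorithm can consume. The field names, the percent-to-ratio conversion, and the input key variants are assumptions of this example.

```python
# Sketch: parse raw per-host monitor records into one preset schema so
# all resource scheduling algorithms see the same fields.
PRESET_FIELDS = ("ip", "name", "cpu_usage", "mem_usage")

def to_preset_format(raw):
    return {
        "ip": raw.get("addr") or raw.get("ip"),    # tolerate key variants
        "name": raw["hostname"],
        "cpu_usage": float(raw["cpu"]) / 100.0,    # percent -> ratio
        "mem_usage": float(raw["mem"]) / 100.0,
    }
```

The concrete preset format would in practice be chosen to match whatever the target resource scheduling system supports.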
可选的,本发明实施例获取集群中物理机和虚拟机的性能参数信息包括:
通过预先设置在每一个物理机上的监控客户端采集物理机和虚拟机的性能参数信息;
采用轮询方式从监控客户端上获取监控客户端采集的性能参数信息;
汇总通过轮询方式获得的所有物理机和虚拟机的性能参数信息;
其中,每一个监控客户端之间采用组播方式同步采集到的性能参数信息。
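The collection steps above can be sketched as polling the monitoring client on each physical machine in turn and aggregating the results. The `query` callable stands in for the monitoring client, and the report layout is an assumption of this example; a real client would additionally multicast its data to its peers so that any one of them can answer for the whole cluster.

```python
# Sketch of the polling step: query each host's monitoring client and
# aggregate physical machine and virtual machine data into one summary.
def poll_clients(hosts, query):
    pms, vms = {}, {}
    for host in hosts:                 # round-robin polling
        report = query(host)           # assumed: {'pm': {...}, 'vms': [...]}
        pms[host] = report["pm"]
        for vm in report["vms"]:
            vms[vm["name"]] = vm
    return {"pms": pms, "vms": vms}
```

The aggregated summary is then parsed and converted into the preset-format performance data used by the scheduler.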
可选的,本发明实施例性能参数信息包括:
每一个物理机的:互联网协议IP地址、名称、唯一标识、中央处理器CPU总数、CPU使用率、内存总大小、内存使用率、磁盘输入输出端口I/O的读写速度、物理机上设置的虚拟机;这里,物理机上设置的虚拟机可以包括设置的虚拟机的个数、名称等虚拟机参数信息。
每一个虚拟机的:名称、唯一标识、分配的内存大小、内存使用率、分配的CPU个数、虚拟机磁盘I/O的读写速度、亲和性描述、互斥性描述、状态标识位。
步骤101、根据获得的性能数据信息确定集群负载信息;
步骤102、根据确定的集群负载信息及接收到的请求配置,从预设的算法池中选择进行资源调度的资源调度算法;
其中,算法池中包括两种或两种以上资源调度算法。
可选的,本发明实施例算法池通过预先设置的算法扩展接口,按一一对应的关系分别注册资源调度算法。
这里,本发明实施例可以根据集群进行资源调度的需求添加更多的算法扩展接口,实现更多的资源调度算法的添加。
可选的,本发明实施例资源调度算法包括:热点解除算法、节能整合算法和负载均衡算法中至少一种。
需要说明的是,热点解除算法、节能整合算法和负载均衡算法为本发明实施例的可选算法,本发明实施例可以根据集群进行资源调度的需求添加或删除资源调度算法。
可选的,图2为本发明实施例节能整合算法的流程示意图,如图2所示,本发明实施例资源调度算法包括节能整合算法时,节能整合算法包括:
步骤200、集群的负载小于或等于预设的负载阈值时,根据集群负载信息将物理机按照负载进行排序,将负载排序最低的预设台物理机添加到待下电物理机列表,将未选取的物理机添加到候选物理机列表;其中,负载包括:CPU使用率、内存使用率;
需要说明的是,选取的负载排序最低的物理机的台数可以根据集群规模进行确定,可以包括:选取负载排序最低的三台物理机;
步骤201、将待下电物理机列表中的所有虚拟机添加到待迁移列表;
步骤202、对待迁移列表中的每一个虚拟机,将待迁移列表中的虚拟机逐一尝试放置到从候选物理机列表中负载最高的物理机上;
步骤203、判断虚拟机是否放置成功;若虚拟机放置成功,则执行步骤204;若虚拟机放置失败,则执行步骤205;
步骤204、根据迁移的位置关系输出迁移决策序列,同时根据待迁移列表生成下电决策序列,流程结束;
步骤205、若虚拟机放置失败,从待下电物理机列表中删除负载最高的物理机,将删除的物理机添加到候选物理机列表的尾部,从待迁移列表中删除添加到候选物理机列表的物理机上的虚拟机,更新候选物理机列表的负载排序、待迁移列表,继续待迁移列表中虚拟机的放置尝试,流程结束。
需要说明的是,本发明实施例负载可以包括CPU负载、内存使用率等中的一种或多种,包含多种时,按照负载排序可以根据其中一种进行排序,例如、只根据CPU负载进行排序;或者,只根据内存使用率进行排序,或者CPU负载和内存使用率按照一定的计算公式进行计算后,获得一个表示负载的参数,根据获得的参数进行排序,例如、CPU负载和内存使用率分别作为负载参数的一半比例。负载阈值可以由本领域技术人员根据集群规模、集群工作状态等进行确定,可以设定CPU占用率和内存占用率下限10%、上限90%。
可选的,图3为本发明实施例负载均衡算法的流程示意图,如图3所示,本发明实施例资源调度算法包括负载均衡算法时,负载均衡算法包括:
步骤300、当集群中物理机的负载均衡度小于预设的负载均衡阈值时,将集群中物理机的按照负载进行排列;
步骤301、选择负载最高的物理机,根据性能数据信息遍历该物理机上的每一个虚拟机迁出时的集群中物理机的负载均衡度,确定负载均衡度最优的那台虚拟机进行迁移标记,并将确定的虚拟机从物理机上删除;
步骤302、计算完成迁移标记的虚拟机迁移到其他每一个物理机时的负载均衡度,将虚拟机迁入计算获得的负载均衡度最优的物理机。
需要说明的是,负载均衡度可以包括2个层面,一是物理机层面,二是虚拟机层面,两个层面都可以从CPU使用率、内存使用率衡量;其中,物理机权重占60%,虚拟机权重占40%。
可选的,图4为本发明实施例热点解除算法的流程示意图,如图4所示,本发明实施例资源调度算法包括热点解除算法时,热点解除算法包括:
步骤400、根据性能数据信息和预先设置的负载热点阈值确定集群中的物理机是否为热点物理机;
步骤401、从热点物理机上选择一个或一个以上虚拟机,将其迁移到集群中的其他物理机上,以降低热点物理机的负载阈值。
需要说明的是,本发明实施例负载热点阈值可以包括:内存负载热点阈值、CPU负载热点阈值等,可以由本领域技术人员根据经验值进行设定。
可选的,图5为本发明另一实施例热点解除算法的流程示意图,如图5所示,本发明实施例资源调度算法包括热点解除算法时,热点解除算法包括:
步骤500、根据性能数据信息将集群中的物理机按照负载进行排序;
步骤501、根据性能数据信息,当负载最高的物理机CPU负载大于预先设置的CPU负载热点阈值,或内存使用率大于预先设置的内存负载热点阈值时,确定该物理机为热点物理机;
步骤502、对热点物理机中的虚拟机根据性能数据信息按照负载进行排序,将排序后负载最高的虚拟机迁移到集群中负载最低的物理机上。
需要说明的是,只要判断出存在热点物理机,则本发明实施例按照上述步骤不断的执行判断处理。
步骤103、根据选择的资源调度算法输出资源调度决策,以根据输出的资源调度决策对集群进行资源调度;
需要说明的是，本发明实施例资源调度决策包括资源的决策规划处理以及虚拟机控制功能；其中，虚拟机的控制功能主要是对xm命令的封装，例如xm迁移命令的格式为：#xm migrate --live --ssl vm_id dest_pm_ip；其中，vm_id表示待迁移的虚拟机ID，dest_pm_ip表明目标物理机的ip地址，本发明实施例使用参数--live和--ssl表明使用ssl连接进行动态迁移，保证迁移的安全性。
本申请包括:获取集群中物理机和虚拟机的性能参数信息,并解析转换为预设格式的性能数据信息;根据获得的性能数据信息确定集群负载信息; 根据确定的集群负载信息及接收到的请求配置,从预设的算法池中选择进行资源调度的资源调度算法;根据选择的资源调度算法输出资源调度决策,以根据输出的资源调度决策对集群进行资源调度;其中,算法池中包括两种或两种以上资源调度算法。本发明实施例实现了对不同虚拟化平台的动态资源调度。
本发明实施例还提供一种计算机存储介质,计算机存储介质中存储有计算机可执行指令,计算机可执行指令用于上述实现资源调度的方法。
本发明实施例还提供一种实现资源调度的装置,包括:存储器和处理器;其中,
处理器被配置为执行所述存储器中的程序指令;
程序指令在处理器读取执行以下操作:
获取集群中物理机和虚拟机的性能参数信息,并解析转换为预设格式的性能数据信息;
根据获得的性能数据信息确定集群负载信息;
根据确定的集群负载信息及接收到的请求配置,从预设的算法池中选择进行资源调度的资源调度算法;
根据选择的资源调度算法输出资源调度决策,以根据输出的资源调度决策对集群进行资源调度;
其中,算法池中包括两种或两种以上资源调度算法。
图6为本发明实施例实现资源调度的装置的结构框图,如图6所示,包括:获取及解析单元601、确定单元602、选择单元603和输出调度单元604;其中,
获取及解析单元601设置为:获取集群中物理机和虚拟机的性能参数信息,并解析转换为预设格式的性能数据信息;
这里,预设格式的性能数据信息包括:可以兼容每一种资源调度系统的格式,具体格式可以根据资源调度系统支持的格式确定。
可选的,本发明实施例获取及解析单元601是设置为:
通过预先设置在每一个物理机上的监控客户端采集所述物理机和所述虚拟机的性能参数信息;
采用轮询方式从所述监控客户端上获取监控客户端采集的所述性能参数信息;汇总通过轮询方式获得的所有所述物理机和所述虚拟机的所述性能参数信息;
将汇总的性能参数信息解析转换为预设格式的性能数据信息;
其中,每一个所述监控客户端之间采用组播方式同步采集到的所述性能参数信息。
可选的,本发明实施例性能参数信息包括:
每一个所述物理机的:互联网协议IP地址、名称、唯一标识、中央处理器CPU总数、CPU使用率、内存总大小、内存使用率、磁盘输入输出端口I/O的读写速度、物理机上设置的虚拟机;这里,物理机上设置的虚拟机可以包括设置的虚拟机的个数、名称等虚拟机参数信息。
每一个虚拟机的:名称、唯一标识、分配的内存大小、内存使用率、分配的CPU个数、虚拟机磁盘I/O的读写速度、亲和性描述、互斥性描述、状态标识位。
确定单元602设置为:根据获得的性能数据信息确定集群负载信息;
选择单元603设置为:根据确定的集群负载信息及接收到的请求配置,从预设的算法池中选择进行资源调度的资源调度算法;
可选的,本发明实施例资源调度算法包括:热点解除算法、节能整合算法和负载均衡算法中的至少一种。
需要说明的是,热点解除算法、节能整合算法和负载均衡算法为本发明实施例的可选算法,本发明实施例可以根据集群进行资源调度的需求添加或删除资源调度算法。
可选的,本发明实施例资源调度算法包括节能整合算法时,所述节能整合算法包括:
所述集群的负载小于或等于预设的负载阈值时,根据集群负载信息将物理机按照负载进行排序,将负载排序最低的预设台物理机添加到待下电物理机列表,将未选取的物理机添加到候选物理机列表;其中,所述负载包括:CPU使用率、内存使用率;
需要说明的是,选取的负载排序最低的物理机的台数可以根据集群规模进行确定,可以包括:选取负载排序最低的三台物理机;
将待下电物理机列表中的所有虚拟机添加到待迁移列表;
对待迁移列表中的每一个虚拟机,将待迁移列表中的虚拟机逐一尝试放置到从所述候选物理机列表中负载最高的物理机上;
若虚拟机放置成功,根据迁移的位置关系输出迁移决策序列,同时根据待迁移列表生成下电决策序列;
若虚拟机放置失败,从待下电物理机列表中删除负载最高的物理机,将删除的物理机添加到候选物理机列表的尾部,从待迁移列表中删除添加到候选物理机列表的物理机上的虚拟机,更新候选物理机列表的负载排序、待迁移列表,继续待迁移列表中虚拟机的放置尝试。
需要说明的是,本发明实施例负载可以包括CPU负载、内存使用率等中的一种或多种,包含多种时,按照负载排序可以根据其中一种进行排序,例如、只根据CPU负载进行排序;或者,只根据内存使用率进行排序,或者CPU负载和内存使用率按照一定的计算公式进行计算后,获得一个表示负载的参数,根据获得的参数进行排序,例如、CPU负载和内存使用率分别作为负载参数的一半比例。负载阈值可以由本领域技术人员根据集群规模、集群工作状态等进行确定,可以设定CPU占用率和内存占用率下限10%、上限90%。
可选的,本发明实施例资源调度算法包括负载均衡算法时,所述负载均衡算法包括:
当集群中物理机的负载均衡度小于预设的负载均衡阈值时,将集群中的 物理机按照负载进行排列;
选择负载最高的物理机,根据性能数据信息遍历该物理机上的每一个虚拟机迁出时的集群中物理机的负载均衡度,确定负载均衡度最优的那台虚拟机进行迁移标记,并将确定的虚拟机从物理机上删除;
计算完成迁移标记的虚拟机迁移到其他每一个物理机时的负载均衡度,将虚拟机迁入计算获得的负载均衡度最优的物理机。
需要说明的是,负载均衡度可以包括2个层面,一是物理机层面,二是虚拟机层面,两个层面都可以从CPU使用率、内存使用率衡量;其中,物理机权重占60%,虚拟机权重占40%。
可选的,本发明实施例资源调度算法包括热点解除算法时,所述热点解除算法包括:
根据性能数据信息和预先设置的负载热点阈值确定集群中的物理机是否为热点物理机;
从热点物理机上选择一个或一个以上虚拟机,将其迁移到集群中的其他物理机上,以降低热点物理机的负载阈值。
需要说明的是,本发明实施例负载热点阈值可以包括:内存负载热点阈值、CPU负载热点阈值等,可以由本领域技术人员根据经验值进行设定。
可选的,本发明实施例资源调度算法包括热点解除算法时,所述热点解除算法包括:
根据所述性能数据信息将集群中的物理机按照负载进行排序;
根据所述性能数据信息,当负载最高的物理机CPU负载大于预先设置的CPU负载热点阈值,或内存使用率大于预先设置的内存负载热点阈值时,确定该物理机为热点物理机;
对热点物理机中的虚拟机根据性能数据信息按照负载进行排序,将排序后负载最高的虚拟机迁移到集群中负载最低的物理机上。
需要说明的是,只要判断出存在热点物理机,则本发明实施例按照上述步骤不断的执行判断处理。
输出调度单元604设置为:根据选择的资源调度算法输出资源调度决策,以根据输出的资源调度决策对集群进行资源调度;
其中,所述算法池中包括两种或两种以上资源调度算法。
需要说明的是，本发明实施例资源调度决策包括资源的决策规划处理以及虚拟机控制功能；其中，虚拟机的控制功能主要是对xm命令的封装，例如xm迁移命令的格式为：#xm migrate --live --ssl vm_id dest_pm_ip；其中，vm_id表示待迁移的虚拟机ID，dest_pm_ip表明目标物理机的ip地址，本发明实施例使用参数--live和--ssl表明使用ssl连接进行动态迁移，保证迁移的安全性。
可选的,本发明实施例装置还包括:注册单元,设置为通过预先设置的算法扩展接口,按一一对应的关系分别注册所述资源调度算法。
这里,本发明实施例可以根据集群进行资源调度的需求添加更多的算法扩展接口,实现更多的资源调度算法的添加。
本领域普通技术人员可以理解,上文中所公开方法中的全部或某些步骤、系统、装置中的功能模块/单元可以被实施为软件、固件、硬件及其适当的组合。在硬件实施方式中,在以上描述中提及的功能模块/单元之间的划分不一定对应于物理单元的划分;例如,一个物理组件可以具有多个功能,或者一个功能或步骤可以由若干物理组件合作执行。某些组件或所有组件可以被实施为由处理器,如数字信号处理器或微处理器执行的软件,或者被实施为硬件,或者被实施为集成电路,如专用集成电路。这样的软件可以分布在计算机可读介质上,计算机可读介质可以包括计算机存储介质(或非暂时性介质)和通信介质(或暂时性介质)。如本领域普通技术人员公知的,术语计算机存储介质包括用于存储信息(诸如计算机可读指令、数据结构、程序模块或其他数据)的任何方法或技术中实施的易失性和非易失性、可移除和不可移除介质。计算机存储介质包括但不限于RAM、ROM、EEPROM、闪存或其他存储器技术、CD-ROM、数字多功能盘(DVD)或其他光盘存储、 磁盒、磁带、磁盘存储或其他磁存储装置、或者可以用于存储期望的信息并且可以被计算机访问的任何其他的介质。此外,本领域技术人员公知的是,通信介质通常包含计算机可读指令、数据结构、程序模块或者诸如载波或其他传输机制之类的调制数据信号中的其他数据,并且可包括任何信息递送介质。虽然本发明所揭露的实施方式如上,但所述的内容仅为便于理解本发明而采用的实施方式,并非用以限定本发明。任何本发明所属领域内的技术人员,在不脱离本发明所揭露的精神和范围的前提下,可以在实施的形式及细节上进行任何的修改与变化,但本发明的专利保护范围,仍须以所附的权利要求书所界定的范围为准。
Industrial Applicability
The embodiments of the present invention implement dynamic resource scheduling for different virtualization platforms.

Claims (18)

  1. 一种实现资源调度的方法,包括:
    获取集群中物理机和虚拟机的性能参数信息,并解析转换为预设格式的性能数据信息(100);
    根据获得的性能数据信息确定集群负载信息(101);
    根据确定的集群负载信息及接收到的请求配置,从预设的算法池中选择进行资源调度的资源调度算法(102);
    根据选择的资源调度算法输出资源调度决策,以根据输出的资源调度决策对集群进行资源调度(103);
    其中,所述算法池中包括两种或两种以上资源调度算法。
  2. 根据权利要求1所述的方法,其中,所述获取集群中物理机和虚拟机的性能参数信息(100)包括:
    通过预先设置在每一个物理机上的监控客户端采集所述物理机和所述虚拟机的性能参数信息;
    采用轮询方式从所述监控客户端上获取监控客户端采集的所述性能参数信息;
    汇总通过轮询方式获得的所有所述物理机和所述虚拟机的所述性能参数信息;
    其中,每一个所述监控客户端之间采用组播方式同步采集到的所述性能参数信息。
  3. 根据权利要求2所述的方法,其中,所述性能参数信息包括:
    每一个所述物理机的:互联网协议IP地址、名称、唯一标识、中央处理器CPU总数、CPU使用率、内存总大小、内存使用率、磁盘输入输出端口I/O的读写速度、物理机上设置的虚拟机;这里,物理机上设置的虚拟机包括设置的虚拟机的个数、名称;
    每一个虚拟机的:名称、唯一标识、分配的内存大小、内存使用率、分配的CPU个数、虚拟机磁盘I/O的读写速度、亲和性描述、互斥性描述、 状态标识位。
  4. 根据权利要求1所述的方法,其中,所述算法池通过预先设置的算法扩展接口,按一一对应的关系分别注册所述资源调度算法。
  5. 根据权利要求1~4任一项所述的方法,其中,所述资源调度算法包括:热点解除算法、节能整合算法和负载均衡算法中至少一种。
  6. 根据权利要求1~4任一项所述的方法,其中,所述资源调度算法包括节能整合算法时,所述节能整合算法包括:
    所述集群的负载小于或等于预设的负载阈值时,根据集群负载信息将物理机按照负载进行排序,将负载排序最低的预设台物理机添加到待下电物理机列表,将未选取的物理机添加到候选物理机列表(200);其中,所述负载包括:CPU使用率、内存使用率;
    将待下电物理机列表中的所有虚拟机添加到待迁移列表(201);
    对待迁移列表中的每一个虚拟机,将待迁移列表中的虚拟机逐一尝试放置到从所述候选物理机列表中负载最高的物理机上(202);
    若虚拟机放置成功,根据迁移的位置关系输出迁移决策序列,同时根据待迁移列表生成下电决策序列(204);
    若虚拟机放置失败,从待下电物理机列表中删除负载最高的物理机,将删除的物理机添加到候选物理机列表的尾部,从待迁移列表中删除添加到候选物理机列表的物理机上的虚拟机,更新候选物理机列表的负载排序、待迁移列表,继续待迁移列表中虚拟机的放置尝试(205)。
  7. 根据权利要求1~4任一项所述的方法,其中,所述资源调度算法包括负载均衡算法时,所述负载均衡算法包括:
    当集群中物理机的负载均衡度小于预设的负载均衡阈值时,将集群中物理机的按照负载进行排列(300);
    选择负载最高的物理机,根据性能数据信息遍历该物理机上的每一个虚拟机迁出时的集群中物理机的负载均衡度,确定负载均衡度最优的那台虚拟机进行迁移标记,并将确定的虚拟机从物理机上删除(301);
    计算完成迁移标记的虚拟机迁移到其他每一个物理机时的负载均衡度, 将虚拟机迁入计算获得的负载均衡度最优的物理机(302)。
  8. 根据权利要求1~4任一项所述的方法,其中,所述资源调度算法包括热点解除算法时,所述热点解除算法包括:
    根据性能数据信息和预先设置的负载热点阈值确定集群中的物理机是否为热点物理机(400);
    从热点物理机上选择一个或一个以上虚拟机,将其迁移到集群中的其他物理机上,以降低热点物理机的负载阈值(401)。
  9. 根据权利要求1~4任一项所述的方法,其中,所述资源调度算法包括热点解除算法时,所述热点解除算法包括:
    根据所述性能数据信息将集群中的物理机按照负载进行排序(500);
    根据所述性能数据信息,当负载最高的物理机CPU负载大于预先设置的CPU负载热点阈值,或内存使用率大于预先设置的内存负载热点阈值时,确定该物理机为热点物理机(501);
    对热点物理机中的虚拟机根据性能数据信息按照负载进行排序,将排序后负载最高的虚拟机迁移到集群中负载最低的物理机上(502)。
  10. 一种实现资源调度的装置,包括:获取及解析单元(601)、确定单元(602)、选择单元(603)和输出调度单元(604);其中,
    获取及解析单元(601)设置为:获取集群中物理机和虚拟机的性能参数信息,并解析转换为预设格式的性能数据信息;
    确定单元(602)设置为:根据获得的性能数据信息确定集群负载信息;
    选择单元(603)设置为:根据确定的集群负载信息及接收到的请求配置,从预设的算法池中选择进行资源调度的资源调度算法;
    输出调度单元(604)设置为:根据选择的资源调度算法输出资源调度决策,以根据输出的资源调度决策对集群进行资源调度;
    其中,所述算法池中包括两种或两种以上资源调度算法。
  11. 根据权利要求10所述的装置,其中,所述获取及解析单元(601)是设置为:
    通过预先设置在每一个物理机上的监控客户端采集所述物理机和所述虚拟机的性能参数信息;
    采用轮询方式从所述监控客户端上获取监控客户端采集的所述性能参数信息;汇总通过轮询方式获得的所有所述物理机和所述虚拟机的所述性能参数信息;
    将汇总的性能参数信息解析转换为预设格式的性能数据信息;
    其中,每一个所述监控客户端之间采用组播方式同步采集到的所述性能参数信息。
  12. 根据权利要求11所述的装置,其中,所述性能参数信息包括:
    每一个所述物理机的:互联网协议IP地址、名称、唯一标识、中央处理器CPU总数、CPU使用率、内存总大小、内存使用率、磁盘输入输出端口I/O的读写速度、物理机上设置的虚拟机;这里,物理机上设置的虚拟机包括设置的虚拟机的个数、名称;
    每一个虚拟机的:名称、唯一标识、分配的内存大小、内存使用率、分配的CPU个数、虚拟机磁盘I/O的读写速度、亲和性描述、互斥性描述、状态标识位。
  13. 根据权利要求10所述的装置,所述装置还包括:
    注册单元,设置为通过预先设置的算法扩展接口,按一一对应的关系分别注册所述资源调度算法。
  14. 根据权利要求10~13任一项所述的装置,其中,所述资源调度算法包括:热点解除算法、节能整合算法和负载均衡算法中的至少一种。
  15. 根据权利要求10~13任一项所述的装置,其中,所述资源调度算法包括节能整合算法时,所述节能整合算法包括:
    所述集群的负载小于或等于预设的负载阈值时,根据集群负载信息将物理机按照负载进行排序,将负载排序最低的预设台物理机添加到待下电物理机列表,将未选取的物理机添加到候选物理机列表;其中,所述负载包括:CPU使用率、内存使用率;
    将待下电物理机列表中的所有虚拟机添加到待迁移列表;
    对待迁移列表中的每一个虚拟机,将待迁移列表中的虚拟机逐一尝试放置到从所述候选物理机列表中负载最高的物理机上;
    若虚拟机放置成功,根据迁移的位置关系输出迁移决策序列,同时根据待迁移列表生成下电决策序列;
    若虚拟机放置失败,从待下电物理机列表中删除负载最高的物理机,将删除的物理机添加到候选物理机列表的尾部,从待迁移列表中删除添加到候选物理机列表的物理机上的虚拟机,更新候选物理机列表的负载排序、待迁移列表,继续待迁移列表中虚拟机的放置尝试。
  16. 根据权利要求10~13任一项所述的装置,其中,所述资源调度算法包括负载均衡算法时,所述负载均衡算法包括:
    当集群中物理机的负载均衡度小于预设的负载均衡阈值时,将集群中物理机的按照负载进行排列;
    选择负载最高的物理机,根据性能数据信息遍历该物理机上的每一个虚拟机迁出时的集群中物理机的负载均衡度,确定负载均衡度最优的那台虚拟机进行迁移标记,并将确定的虚拟机从物理机上删除;
    计算完成迁移标记的虚拟机迁移到其他每一个物理机时的负载均衡度,将虚拟机迁入计算获得的负载均衡度最优的物理机。
  17. 根据权利要求10~13任一项所述的装置,其中,所述资源调度算法包括热点解除算法时,所述热点解除算法包括:
    根据性能数据信息和预先设置的负载热点阈值确定集群中的物理机是否为热点物理机;
    从热点物理机上选择一个或一个以上虚拟机,将其迁移到集群中的其他物理机上,以降低热点物理机的负载阈值。
  18. 根据权利要求10~13任一项所述的装置,其中,所述资源调度算法包括热点解除算法时,所述热点解除算法包括:
    根据所述性能数据信息将集群中的物理机按照负载进行排序;
    根据所述性能数据信息,当负载最高的物理机CPU负载大于预先设置的CPU负载热点阈值,或内存使用率大于预先设置的内存负载热点阈值时, 确定该物理机为热点物理机;
    对热点物理机中的虚拟机根据性能数据信息按照负载进行排序,将排序后负载最高的虚拟机迁移到集群中负载最低的物理机上。
PCT/CN2018/076386 2017-03-27 2018-02-12 一种实现资源调度的方法及装置 Ceased WO2018177042A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
ES18774667T ES2939689T3 (es) 2017-03-27 2018-02-12 Método y dispositivo para realizar planificación de recursos
EP18774667.2A EP3606008B1 (en) 2017-03-27 2018-02-12 Method and device for realizing resource scheduling

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710188524.1A CN108667859A (zh) 2017-03-27 2017-03-27 一种实现资源调度的方法及装置
CN201710188524.1 2017-03-27

Publications (1)

Publication Number Publication Date
WO2018177042A1 true WO2018177042A1 (zh) 2018-10-04

Family

ID=63674155

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/076386 Ceased WO2018177042A1 (zh) 2017-03-27 2018-02-12 一种实现资源调度的方法及装置

Country Status (4)

Country Link
EP (1) EP3606008B1 (zh)
CN (1) CN108667859A (zh)
ES (1) ES2939689T3 (zh)
WO (1) WO2018177042A1 (zh)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110083435A (zh) * 2019-05-08 2019-08-02 扬州大学 基于联盟形成的虚拟机迁移设计方法
CN110417686A (zh) * 2019-06-12 2019-11-05 北京因特睿软件有限公司 云资源动态调度系统
CN110855762A (zh) * 2019-10-31 2020-02-28 云南电网有限责任公司信息中心 一种电网系统中异构集群节点的数据块分配方法
CN111078369A (zh) * 2019-12-27 2020-04-28 中国建设银行股份有限公司 一种云计算机下虚拟机分配方法、装置以及服务器
CN112015326A (zh) * 2019-05-28 2020-12-01 浙江宇视科技有限公司 集群数据处理方法、装置、设备及存储介质
CN112416520A (zh) * 2020-11-21 2021-02-26 广州西麦科技股份有限公司 一种基于vSphere的智能资源调度方法
CN112540844A (zh) * 2019-09-20 2021-03-23 北京京东尚科信息技术有限公司 集群内容器调度方法、装置、存储介质和电子设备
CN112732408A (zh) * 2021-01-18 2021-04-30 浪潮云信息技术股份公司 一种用于计算节点资源优化的方法
CN113010269A (zh) * 2021-03-29 2021-06-22 深信服科技股份有限公司 一种虚拟机调度方法、装置、电子设备及可读存储介质
CN113138849A (zh) * 2020-01-20 2021-07-20 阿里巴巴集团控股有限公司 一种计算资源调度和迁移方法、相关装置及系统
CN113760523A (zh) * 2020-11-16 2021-12-07 北京沃东天骏信息技术有限公司 Redis高热点数据迁移方法
CN113783912A (zh) * 2020-08-25 2021-12-10 北京沃东天骏信息技术有限公司 请求分发方法、装置及存储介质
CN114035889A (zh) * 2021-10-22 2022-02-11 广东工业大学 一种二维时间尺度的容器调度方法及系统
CN114125059A (zh) * 2021-10-11 2022-03-01 国电南瑞科技股份有限公司 一种基于容器的监控实时数据缓存系统及方法
CN115373862A (zh) * 2022-10-26 2022-11-22 安超云软件有限公司 基于数据中心的动态资源调度方法、系统及存储介质
CN117472516A (zh) * 2023-12-27 2024-01-30 苏州元脑智能科技有限公司 虚拟资源调度方法、装置、集群系统、电子设备和介质

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109739614A (zh) * 2018-11-22 2019-05-10 杭州数梦工场科技有限公司 虚拟机新建方法、装置及设备
CN109617954B (zh) * 2018-11-29 2021-07-30 郑州云海信息技术有限公司 一种创建云主机的方法和装置
CN109857518B (zh) * 2019-01-08 2022-10-14 平安科技(深圳)有限公司 一种网络资源的分配方法及设备
CN109918196B (zh) * 2019-01-23 2022-11-29 深圳壹账通智能科技有限公司 系统资源分配方法、装置、计算机设备和存储介质
CN110472875A (zh) * 2019-08-20 2019-11-19 北京百度网讯科技有限公司 用于生成信息的方法和装置
CN110515730A (zh) * 2019-08-22 2019-11-29 北京宝兰德软件股份有限公司 基于kubernetes容器编排系统的资源二次调度方法及装置
CN111104203B (zh) * 2019-12-13 2023-04-28 广东省华南技术转移中心有限公司 虚拟机分散调度方法、装置以及电子设备、存储介质
CN111858031B (zh) * 2020-06-19 2022-06-07 浪潮电子信息产业股份有限公司 一种集群分布式资源调度方法、装置、设备及存储介质
CN114064191B (zh) * 2020-07-30 2025-03-18 中移(苏州)软件技术有限公司 资源调度方法及装置、设备、存储介质
CN114327849B (zh) * 2020-10-09 2024-11-22 上海盛霄云计算技术有限公司 一种基于智能监控的资源调度方法
CN112416530B (zh) * 2020-12-08 2023-12-22 西藏宁算科技集团有限公司 弹性管理集群物理机节点的方法、装置及电子设备
CN113285833B (zh) * 2021-05-26 2023-03-31 北京百度网讯科技有限公司 用于获取信息的方法和装置
CN113448738B (zh) * 2021-08-31 2021-11-12 成都派沃特科技股份有限公司 服务器可用度调整方法、装置、设备及存储介质
CN114816747A (zh) * 2022-04-21 2022-07-29 国汽智控(北京)科技有限公司 处理器的多核负载调控方法、装置及电子设备
CN117812081B (zh) * 2023-12-11 2025-11-04 天翼云科技有限公司 一种基于注意力机制的自动预测方法及云资源调整系统
CN120196450B (zh) * 2025-05-23 2025-08-12 上海华诚金锐信息技术有限公司 一种基于动态负载调节的节能服务器运行方法及系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014101010A1 (zh) * 2012-12-26 2014-07-03 华为技术有限公司 一种虚拟机系统的资源管理方法、虚拟机系统和装置
CN104881325A (zh) * 2015-05-05 2015-09-02 中国联合网络通信集团有限公司 一种资源调度方法和资源调度系统
CN105141541A (zh) * 2015-09-23 2015-12-09 浪潮(北京)电子信息产业有限公司 一种基于任务的动态负载均衡调度方法及装置
CN106534318A (zh) * 2016-11-15 2017-03-22 浙江大学 一种基于流量亲和性的OpenStack云平台资源动态调度系统和方法

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201324357A (zh) * 2011-12-01 2013-06-16 Univ Tunghai 虛擬機叢集之綠能管理方法
CN102662754A (zh) * 2012-04-20 2012-09-12 浙江大学 一种支持多场景的虚拟机调度装置和方法
US9363190B2 (en) * 2013-07-31 2016-06-07 Manjrasoft Pty. Ltd. System, method and computer program product for energy-efficient and service level agreement (SLA)-based management of data centers for cloud computing
CN104184813B (zh) * 2014-08-20 2018-03-09 杭州华为数字技术有限公司 虚拟机的负载均衡方法和相关设备及集群系统
EP3046028B1 (en) * 2015-01-15 2020-02-19 Alcatel Lucent Load-balancing and scaling of cloud resources by migrating a data session

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014101010A1 (zh) * 2012-12-26 2014-07-03 华为技术有限公司 一种虚拟机系统的资源管理方法、虚拟机系统和装置
CN104881325A (zh) * 2015-05-05 2015-09-02 中国联合网络通信集团有限公司 一种资源调度方法和资源调度系统
CN105141541A (zh) * 2015-09-23 2015-12-09 浪潮(北京)电子信息产业有限公司 一种基于任务的动态负载均衡调度方法及装置
CN106534318A (zh) * 2016-11-15 2017-03-22 浙江大学 一种基于流量亲和性的OpenStack云平台资源动态调度系统和方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3606008A4 *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110083435A (zh) * 2019-05-08 2019-08-02 扬州大学 基于联盟形成的虚拟机迁移设计方法
CN112015326B (zh) * 2019-05-28 2023-02-17 浙江宇视科技有限公司 集群数据处理方法、装置、设备及存储介质
CN112015326A (zh) * 2019-05-28 2020-12-01 浙江宇视科技有限公司 集群数据处理方法、装置、设备及存储介质
CN110417686A (zh) * 2019-06-12 2019-11-05 北京因特睿软件有限公司 云资源动态调度系统
CN110417686B (zh) * 2019-06-12 2021-12-14 因特睿科技有限公司 云资源动态调度系统
CN112540844A (zh) * 2019-09-20 2021-03-23 北京京东尚科信息技术有限公司 集群内容器调度方法、装置、存储介质和电子设备
CN110855762A (zh) * 2019-10-31 2020-02-28 云南电网有限责任公司信息中心 一种电网系统中异构集群节点的数据块分配方法
CN111078369A (zh) * 2019-12-27 2020-04-28 中国建设银行股份有限公司 一种云计算机下虚拟机分配方法、装置以及服务器
CN111078369B (zh) * 2019-12-27 2023-03-28 中国建设银行股份有限公司 一种云计算机下虚拟机分配方法、装置以及服务器
CN113138849A (zh) * 2020-01-20 2021-07-20 阿里巴巴集团控股有限公司 一种计算资源调度和迁移方法、相关装置及系统
CN113138849B (zh) * 2020-01-20 2024-04-26 阿里巴巴集团控股有限公司 一种计算资源调度和迁移方法、相关装置及系统
CN113783912A (zh) * 2020-08-25 2021-12-10 北京沃东天骏信息技术有限公司 请求分发方法、装置及存储介质
CN113760523A (zh) * 2020-11-16 2021-12-07 北京沃东天骏信息技术有限公司 Redis高热点数据迁移方法
CN112416520B (zh) * 2020-11-21 2023-10-13 广州西麦科技股份有限公司 一种基于vSphere的智能资源调度方法
CN112416520A (zh) * 2020-11-21 2021-02-26 广州西麦科技股份有限公司 一种基于vSphere的智能资源调度方法
CN112732408A (zh) * 2021-01-18 2021-04-30 浪潮云信息技术股份公司 一种用于计算节点资源优化的方法
CN113010269A (zh) * 2021-03-29 2021-06-22 深信服科技股份有限公司 一种虚拟机调度方法、装置、电子设备及可读存储介质
CN113010269B (zh) * 2021-03-29 2024-02-23 深信服科技股份有限公司 一种虚拟机调度方法、装置、电子设备及可读存储介质
CN114125059A (zh) * 2021-10-11 2022-03-01 国电南瑞科技股份有限公司 一种基于容器的监控实时数据缓存系统及方法
CN114125059B (zh) * 2021-10-11 2023-08-25 国电南瑞科技股份有限公司 一种基于容器的监控实时数据缓存系统及方法
CN114035889A (zh) * 2021-10-22 2022-02-11 广东工业大学 一种二维时间尺度的容器调度方法及系统
CN115373862A (zh) * 2022-10-26 2022-11-22 安超云软件有限公司 基于数据中心的动态资源调度方法、系统及存储介质
CN117472516A (zh) * 2023-12-27 2024-01-30 苏州元脑智能科技有限公司 虚拟资源调度方法、装置、集群系统、电子设备和介质
CN117472516B (zh) * 2023-12-27 2024-03-29 苏州元脑智能科技有限公司 虚拟资源调度方法、装置、集群系统、电子设备和介质

Also Published As

Publication number Publication date
EP3606008B1 (en) 2022-12-14
EP3606008A4 (en) 2020-11-25
ES2939689T3 (es) 2023-04-26
EP3606008A1 (en) 2020-02-05
CN108667859A (zh) 2018-10-16

Similar Documents

Publication Publication Date Title
WO2018177042A1 (zh) 一种实现资源调度的方法及装置
US10261840B2 (en) Controlling virtual machine density and placement distribution in a converged infrastructure resource pool
US11573831B2 (en) Optimizing resource usage in distributed computing environments by dynamically adjusting resource unit size
US9569245B2 (en) System and method for controlling virtual-machine migrations based on processor usage rates and traffic amounts
US10324754B2 (en) Managing virtual machine patterns
US10067803B2 (en) Policy based virtual machine selection during an optimization cycle
US9411658B2 (en) Token-based adaptive task management for virtual machines
US10003568B2 (en) Dynamically assigning network addresses
US11474880B2 (en) Network state synchronization for workload migrations in edge devices
US20150295790A1 (en) Management of virtual machine resources in computing environments
CN110865867A (zh) 应用拓扑关系发现的方法、装置和系统
WO2018010654A1 (zh) 一种虚拟机热迁移的方法、装置及系统
US10152343B2 (en) Method and apparatus for managing IT infrastructure in cloud environments by migrating pairs of virtual machines
CN109218383B (zh) 虚拟网络功能负载平衡器
CN103795804A (zh) 存储资源调度方法及存储计算系统
JP2016103113A5 (zh)
US20200272526A1 (en) Methods and systems for automated scaling of computing clusters
US9537780B2 (en) Quality of service agreement and service level agreement enforcement in a cloud computing environment
CN109960579B (zh) 一种调整业务容器的方法及装置
US10623526B2 (en) Dynamically configuring multi-mode hardware components based on workload requirements
US9965308B2 (en) Automatic creation of affinity-type rules for resources in distributed computer systems
CN110278104A (zh) 用于优化的服务质量加速的技术
WO2023183704A9 (en) Customized cross-premise resource selection for containerized applications
US20140059008A1 (en) Resource allocation analyses on hypothetical distributed computer systems
CN107608765B (zh) 一种虚拟机迁移方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18774667

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018774667

Country of ref document: EP

Effective date: 20191028