WO2012082349A2 - Workload scheduling based on a platform energy policy - Google Patents
Workload scheduling based on a platform energy policy
- Publication number
- WO2012082349A2 (application PCT/US2011/062305)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- workload
- platform
- memory
- server platform
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/329—Power saving characterised by the action undertaken by task scheduling
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5094—Allocation of resources, e.g. of the central processing unit [CPU] where the allocation takes into account power or heat criteria
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Power Sources (AREA)
Abstract
In some embodiments, a data center system may include one or more server platforms, a workload scheduler, and a set of stored data shared between the server platforms and the workload scheduler. The server platforms may include processing cores and memory in electrical communication with the processing cores. The memory may store code which when executed causes the server platform to store a platform power correlation factor, receive workload requirements for a workload from a workload scheduler, determine a current and expected energy consumption based on the workload requirements and the platform power correlation factor, communicate the current and expected energy consumption for the workload to the workload scheduler, and if the workload is dispatched to the server platform from the workload scheduler, store the workload requirements in the memory and modify characteristics of the server platform to execute the workload. The workload scheduler may determine if the workload can be sent to the server platform based on the current and expected energy consumption for the workload and pre-configured power and temperature thresholds for the server platform and also rack location, row location, and / or other data center specific information. The set of stored data may include a platform compute policy and / or a platform energy policy. Other embodiments are disclosed and claimed.
Description
WORKLOAD SCHEDULING BASED ON A PLATFORM ENERGY POLICY
The invention relates to power management and more particularly to workload scheduling of an electronic system based on a platform energy policy.
BACKGROUND AND RELATED ART
The article "Above the Clouds: A Berkeley View of Cloud Computing," written by Michael Armbrust et al., dated February 10, 2009, discusses the need for energy proportionality in data centers. Various companies provide hardware and / or software for power management. For example, Intel Corporation's Dynamic Power Node Manager and Data Center Manager are hardware and / or software power management tools for a server or a group of servers.
BRIEF DESCRIPTION OF THE DRAWINGS
Various features of the invention will be apparent from the following description of preferred embodiments as illustrated in the accompanying
drawings, in which like reference numerals generally refer to the same parts throughout the drawings. The drawings are not necessarily to scale, the emphasis instead being placed upon illustrating the principles of the invention.
Fig. 1 is a block diagram of a server platform in accordance with some embodiments of the invention.
Fig. 2 is a block diagram of a data center system in accordance with some embodiments of the invention.
Fig. 3 is a flow diagram in accordance with some embodiments of the invention.
Fig. 4 is a block diagram of another data center system in accordance with some embodiments of the invention.
Fig. 5 is another flow diagram in accordance with some embodiments of the invention.
DESCRIPTION
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular structures, architectures, interfaces, techniques, etc. in order to provide a thorough understanding of the various aspects of the invention. However, it will be apparent to those skilled in the art having the benefit of the present disclosure that the various aspects of the invention may be practiced in other examples that depart from these specific
details. In certain instances, descriptions of well known devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
With reference to Fig. 1, a server platform 10 may include one or more processing cores 12 and memory 14 in electrical communication with the one or more processing cores 12. For example, the memory 14 may store code which when executed causes the server platform 10 to store a platform power correlation factor, receive workload requirements for a workload from a workload scheduler, determine a current and expected energy consumption based on the workload requirements and the platform power correlation factor, communicate the current and expected energy consumption for the workload to the workload scheduler, and if the workload is dispatched to the server platform from the workload scheduler, store the workload requirements in the memory and modify characteristics of the server platform to execute the workload.
For example, in some embodiments of the server platform 10, the platform power correlation factor may correspond to an expected power draw at various levels of resource utilization. The workload requirements may correspond to one or more of a number of processing cores, an amount of memory needed, and an expected run time. For example, the workload scheduler may be configured to determine if the workload can be sent to the server platform 10 based on the current and expected energy consumption for the workload communicated to the workload scheduler from the server platform 10 and pre-configured power and temperature thresholds for the server platform 10 and also one or more of rack location, row location, and other data center specific information.
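By way of a purely illustrative, non-limiting sketch (not part of the original disclosure), the platform power correlation factor could be represented as a small table of calibrated resource-utilization levels mapped to expected power draw, with intermediate levels interpolated. The function names, calibration points, and numeric values below are assumptions introduced only for illustration.

```python
# Illustrative sketch only: one possible representation of a platform power
# correlation factor as (utilization -> expected watts) calibration points.
# The values and names are assumptions, not taken from the disclosure.

from bisect import bisect_left

# Assumed calibration points: fraction of resource utilization -> expected watts.
POWER_CORRELATION_FACTOR = [
    (0.0, 120.0),   # idle platform power
    (0.25, 180.0),
    (0.50, 240.0),
    (0.75, 300.0),
    (1.00, 360.0),  # fully utilized platform power
]

def expected_power_draw(utilization: float) -> float:
    """Linearly interpolate the expected power draw (watts) at a utilization level."""
    points = POWER_CORRELATION_FACTOR
    utilization = max(points[0][0], min(points[-1][0], utilization))
    idx = bisect_left([u for u, _ in points], utilization)
    if points[idx][0] == utilization:
        return points[idx][1]
    (u0, w0), (u1, w1) = points[idx - 1], points[idx]
    return w0 + (w1 - w0) * (utilization - u0) / (u1 - u0)

def expected_energy(utilization: float, run_time_hours: float) -> float:
    """Expected energy (watt-hours) for a workload at a given utilization and run time."""
    return expected_power_draw(utilization) * run_time_hours

if __name__ == "__main__":
    print(expected_power_draw(0.6))   # ~264 W (interpolated)
    print(expected_energy(0.6, 8.0))  # ~2112 Wh for an assumed 8-hour run
```

In such a sketch, the calibration points would typically be characterized per platform model and could be stored in the non-volatile memory described below.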
In some embodiments of the server platform 10, the modified characteristics of the server platform may include one or more of a processing core to switch off, a portion of memory to switch off, a power profile, and a performance profile. For example, the memory 14 to store the platform power correlation factor and the workload requirements may include a non-volatile memory, such as flash memory.
With reference to Fig. 2, a data center system 20 may include one or more server platforms 22, a workload scheduler 24, and a set of stored data 26 shared between the one or more server platforms 22 and the workload scheduler 24. For example, at least one of the server platforms 22 may include one or more processing cores and memory in electrical communication with the one or more processing cores. The memory may store code which when executed causes the server platform 22 to store a platform power correlation factor, receive workload requirements for a workload from the workload scheduler, determine a current and expected energy consumption based on the workload requirements and the platform power correlation factor, communicate the current and expected energy consumption for the workload to the workload scheduler, and if the workload is dispatched to the server platform from the workload scheduler, store the workload requirements in the memory and modify characteristics of the server platform to execute the workload.
In the data center system 20 in accordance with some embodiments of the invention, the workload scheduler 24 may determine if the workload can be sent to the server platform 22 based on the current and expected energy consumption for the workload communicated to the workload scheduler 24 from the server platform 22 and pre-configured power and temperature thresholds for the server platform 22 and also one or more of rack location, row location, and other data center specific information. For example, the set of stored data may include at least one of a platform compute policy and a platform energy policy.
In some embodiments of the data center system 20, the platform power correlation factor may correspond to an expected power draw at various levels of resource utilization. The workload requirements may correspond to one or more of a number of processing cores, an amount of memory needed, and an expected run time. The modified characteristics of the server platform may include one or more of a processing core to switch off, a portion of memory to switch off, a power profile, and a performance profile. For example, the memory to store the platform power correlation factor and the workload requirements may include a non-volatile memory, such as a flash memory.
With reference to Fig. 3, a method of operating a server platform in accordance with some embodiments of the invention may include storing a platform power correlation factor in a memory (e.g. at block 30), receiving workload requirements for a workload from a workload scheduler (e.g. at block 31), determining a current and expected energy consumption based on the workload requirements and the platform power correlation factor (e.g. at block 32), communicating the current and expected energy consumption for the workload to the workload scheduler (e.g. at block 33), and if the workload is dispatched to the server platform from the workload scheduler, storing the workload requirements in the memory and modifying characteristics of the server platform to execute the workload (e.g. at block 34).
For example, the platform power correlation factor may correspond to an expected power draw at various levels of resource utilization (e.g. at block 35). For example, the workload requirements may correspond to one or more of a number of processing cores, an amount of memory needed, and an expected run time (e.g. at block 36).
In some embodiments of the invention, the workload scheduler may determine if the workload can be sent to the server platform based on the current and expected energy consumption for the workload communicated to the workload scheduler from the server platform and pre-configured power and temperature thresholds for the server platform and also one or more of rack location, row location, and other data center specific information (e.g. at block 37).
For example, the modified characteristics of the server platform may include one or more of a processing core to switch off, a portion of memory to switch off, a power profile, and a performance profile (e.g. at block 38). For example, the memory for storing the platform power correlation factor and the workload requirements may include a non-volatile memory (e.g. at block 39), such as a flash memory.
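As a hedged, non-limiting illustration of the method of Fig. 3 (blocks 30 through 34), the following Python sketch shows one way a platform-side agent might store the correlation factor, answer a scheduler query with current and expected energy consumption, and react to a dispatch. The class, method, and field names are assumptions introduced here for illustration and are not taken from the disclosure.

```python
# Illustrative sketch of the Fig. 3 platform-side flow (blocks 30-34).
# All names and values are assumed; the real platform interface is not specified here.

from dataclasses import dataclass, field

@dataclass
class WorkloadRequirements:
    cores: int                   # number of processing cores requested
    memory_gb: int               # amount of memory needed
    expected_run_time_h: float   # expected run time in hours

@dataclass
class ServerPlatformAgent:
    total_cores: int
    correlation_factor: dict               # utilization level -> expected watts (block 30)
    current_utilization: float = 0.0
    stored_requirements: list = field(default_factory=list)

    def _power_at(self, utilization: float) -> float:
        # Nearest calibrated level; a real implementation might interpolate instead.
        level = min(self.correlation_factor, key=lambda u: abs(u - utilization))
        return self.correlation_factor[level]

    def estimate(self, req: WorkloadRequirements) -> dict:
        """Blocks 31-33: receive requirements, determine and report energy consumption."""
        current_w = self._power_at(self.current_utilization)
        expected_util = min(1.0, self.current_utilization + req.cores / self.total_cores)
        expected_w = self._power_at(expected_util)
        return {
            "current_power_w": current_w,
            "expected_power_w": expected_w,
            "expected_energy_wh": expected_w * req.expected_run_time_h,
        }

    def on_dispatch(self, req: WorkloadRequirements) -> None:
        """Block 34: store the requirements and modify platform characteristics."""
        self.stored_requirements.append(req)  # could be persisted to flash in practice
        self.current_utilization = min(
            1.0, self.current_utilization + req.cores / self.total_cores)
        # The platform could also switch off unused cores or memory, or apply a
        # power / performance profile, in line with the workload requirements.

if __name__ == "__main__":
    agent = ServerPlatformAgent(total_cores=16,
                                correlation_factor={0.0: 120, 0.5: 240, 1.0: 360})
    req = WorkloadRequirements(cores=8, memory_gb=32, expected_run_time_h=4.0)
    print(agent.estimate(req))
    agent.on_dispatch(req)
```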
Advantageously, some embodiments of the invention may provide a technique for data center energy efficiency with power and thermal aware workload scheduling. For example, some embodiments of the invention may involve balancing IT load, energy efficiency, location awareness, and / or a platform power correlation. For example, some embodiments of the invention may be useful in a data center utilizing server platforms that have service processors with temperature and power sensors (e.g. IPMI 2.0 and above including, for example, Intel's Node Manager).
By way of background and without limitation, the cost of energy for a large scale data center (e.g. HPC and / or internet/cloud provider) may be the single
largest operational expense for the data center. Such data center environments may see relatively high server resource utilization (e.g. CPU, memory, I/O) and as a result higher energy consumption for running the servers as well as cooling them. Advantageously, some embodiments of the invention may provide a platform capability that helps lower energy cost with little or no throughput impact.
For example, some embodiments of the invention may provide platform level hardware and / or software capabilities that workload schedulers can use to intelligently schedule and dispatch jobs to achieve improved or optimal compute utilization as well as energy consumption. For example, some embodiments of the invention may provide an energy policy engine for HPC / cloud type of computing needs.
In accordance with some aspects of the invention, some combination of the following data may be utilized to perform effective power and thermal aware scheduling (a purely illustrative data structure sketch follows the list):
1. The available compute capacity expressed, for example, in normalized units such as SPECint or SPECfp or an application specific performance indicator specific to a particular data center (e.g. an HPC shop);
2. The requirements for the workload. For example, the workload requirements may include information about a preferred execution environment for the workload such as architecture, number of cores, memory, and / or disk space, among other workload requirement information such as priority or criticality;
3. Current (and expected) power draw at different granularities. For example, Watts/Hr for a particular server / rack / row / data center configuration; and / or
4. Location information related to the server platforms. For example, row / rack / data center location information.
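The following sketch merely illustrates how the four kinds of scheduling inputs listed above might be grouped together; the field names, units, and example values are assumptions and no particular representation is prescribed by the disclosure.

```python
# Illustrative grouping of the four scheduling inputs listed above.
# Field names, units, and example values are assumptions for illustration only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PlatformSchedulingInputs:
    # 1. Available compute capacity, in normalized units (e.g. a SPECint-like score).
    available_capacity: float
    # 2. Workload requirements: preferred execution environment and priority.
    architecture: str
    cores_required: int
    memory_gb_required: int
    disk_gb_required: int
    priority: str                 # e.g. "critical" or "normal"
    # 3. Current and expected power draw at the chosen granularity (watts).
    current_power_w: float
    expected_power_w: float
    # 4. Location of the server platform in the data center.
    data_center: str
    row: str
    rack: str
    slot: Optional[int] = None

example = PlatformSchedulingInputs(
    available_capacity=420.0,
    architecture="x86_64", cores_required=8, memory_gb_required=64,
    disk_gb_required=200, priority="critical",
    current_power_w=180.0, expected_power_w=310.0,
    data_center="DC-1", row="R3", rack="K12",
)
```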
Advantageously, some embodiments of the invention may include a platform power correlation factor stored in memory. For example, the platform power correlation factor may be embedded in the firmware. For example, the platform power correlation factor may allow the data center system to determine an expected power draw at various levels of resource utilization as well as to determine an expected power draw if some of the resources were switched off. The data center system may also have the ability to record the location
information for the server platforms and / or components of the server platforms in the data center.
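As a purely illustrative, non-limiting sketch of the second use mentioned above, estimating the expected power draw if some resources were switched off could be done with assumed per-resource contributions derived from a platform power correlation factor. Every constant below is made up for illustration.

```python
# Illustrative sketch only: expected power draw with some resources switched off.
# The per-resource contributions are assumed values, not platform data.

BASE_PLATFORM_W = 90.0        # assumed chassis / fan / board power
WATTS_PER_ACTIVE_CORE = 12.0  # assumed incremental power per powered-on core
WATTS_PER_ACTIVE_DIMM = 4.0   # assumed incremental power per powered-on DIMM

def expected_power(cores_on: int, dimms_on: int) -> float:
    """Expected platform power draw (watts) for a given set of powered-on resources."""
    return (BASE_PLATFORM_W
            + cores_on * WATTS_PER_ACTIVE_CORE
            + dimms_on * WATTS_PER_ACTIVE_DIMM)

# Expected saving from switching off half of 16 cores and 4 of 8 DIMMs:
full = expected_power(cores_on=16, dimms_on=8)
reduced = expected_power(cores_on=8, dimms_on=4)
print(f"full: {full} W, reduced: {reduced} W, saving: {full - reduced} W")
```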
Some server platforms may provide some capability (e.g. Intel's Node Manager™) to manage server power consumption (e.g. read the server power usage and set basic policies for controlling power usage). Advantageously, some embodiments of the present invention may provide a method for workload schedulers to interact directly with the platform and leverage existing platform abilities such as the node manager, etc., to efficiently schedule workloads while optimizing energy consumption.
With reference to Fig. 4, in accordance with some embodiments of the invention a data center system 40 includes one or more server platforms 42 in communication with a workload scheduler 44 and a data share 46. For example, the server platform 42 may include a combination of software (e.g. drivers, operating systems, and applications) and hardware / firmware (e.g. a
manageability engine interface with extensions, a flash memory store, a platform interface, a node manager, a service processor, and LAN hardware). For example, the data share 46 may include information related to a configuration management database (CMDB), compute policies, and energy policies.
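As a non-authoritative illustration, the data share 46 might hold entries along the lines of the sketch below; the keys, threshold values, and policy names are assumptions, and the disclosure does not prescribe any particular format.

```python
# Purely illustrative contents of a shared data store (data share 46).
# Keys, values, and policy names are assumptions; no format is prescribed
# by the disclosure.

data_share = {
    # CMDB-style record for one server platform, including its location.
    "cmdb": {
        "server-0042": {
            "model": "example-2U-server",
            "location": {"data_center": "DC-1", "row": "R3", "rack": "K12", "slot": 7},
            "service_processor": True,
        },
    },
    # Compute policy: how much headroom to keep and how to rank candidates.
    "compute_policy": {
        "max_cpu_utilization": 0.85,
        "prefer_lowest_expected_energy": True,
    },
    # Energy policy: pre-configured power and temperature thresholds
    # at rack / row / data center granularity.
    "energy_policy": {
        "rack_power_cap_w": 8000,
        "row_power_cap_w": 60000,
        "inlet_temp_limit_c": 27,
    },
}
```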
Advantageously, some embodiments of the invention may provide a direct interface for workload schedulers to interact with platform capabilities, for example, the ability to map workload efficiency to the power consumption of each platform and / or the ability to record location in the data center and assist in self-manageability. For example, the platform interface to the workload schedulers may provide a mechanism to store the relevant bits of data in platform level flash storage.
With reference to Fig. 5, in some embodiments of the invention the workload scheduler may send the workload requirements to a server platform (e.g. number of cores, amount of memory needed, and an expected run time; e.g. at block 50). The server platform may respond with a current and expected energy consumption based on the power to performance correlation factor (e.g. based on the workload requirements and the platform power correlation factor; e.g. at blocks 51 and 52). The server platform may also provide additional information from the node manager and / or service processor (e.g. location
information; e.g. at block 53). The workload scheduler may then determine if the workload can be sent to that server platform based on how the response matches against pre-configured power and temperature thresholds for that rack / row / data center (e.g. at blocks 54 and 55). If the workload can be run on the server platform, then the workload scheduler may dispatch the job and store the workload requirements in the data store (e.g. at block 56). Based on the workload requirements, the server platform may modify some operating characteristics to execute the workload (e.g. switching off some cores, etc.; e.g. at block 57) and perform those actions (e.g. utilizing the internal interface to the node manager / service processor; e.g. at block 58).
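The sketch below is a hedged, non-limiting illustration of the scheduler-side decision of Fig. 5 (blocks 54 through 56): the platform's response is compared against pre-configured power and temperature thresholds for its rack / row / data center before a dispatch. The function and field names are assumptions introduced for illustration only.

```python
# Illustrative sketch of the Fig. 5 scheduler-side decision (blocks 54-56).
# Function and field names are assumptions, not part of the disclosure.

def can_dispatch(platform_response: dict, thresholds: dict) -> bool:
    """Return True if the reported expected power and temperature fit the thresholds."""
    fits_power = (platform_response["rack_power_w"] + platform_response["expected_power_w"]
                  <= thresholds["rack_power_cap_w"])
    fits_temp = platform_response["inlet_temp_c"] <= thresholds["inlet_temp_limit_c"]
    return fits_power and fits_temp

def schedule(workload_requirements: dict, platforms: list, thresholds: dict, data_store: list):
    """Blocks 50-56: query platforms, pick one within policy, dispatch, and record."""
    for platform in platforms:
        response = platform["respond"](workload_requirements)  # blocks 50-53
        if can_dispatch(response, thresholds):                  # blocks 54-55
            data_store.append(workload_requirements)            # block 56
            return platform["name"]
    return None  # no platform currently satisfies the energy policy

if __name__ == "__main__":
    thresholds = {"rack_power_cap_w": 8000, "inlet_temp_limit_c": 27}
    platforms = [{
        "name": "server-0042",
        "respond": lambda req: {"expected_power_w": 310.0,
                                "rack_power_w": 7200.0,
                                "inlet_temp_c": 24.0},
    }]
    store = []
    print(schedule({"cores": 8, "memory_gb": 64, "run_time_h": 4.0},
                   platforms, thresholds, store))
```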
In one non-limiting example, for a highly critical workload that needs two cores and all the memory, the server platform can automatically switch the remaining cores to a low power state and ensure no power capping is done to achieve the highest throughput. In another non-limiting example, for a multi-threaded workload that does not need all the system memory, all cores can be switched on to a high power state but some of the memory DIMMs can be turned off. In another non-limiting example, if the service processor / node manager reports higher ambient temperatures, the server platform may shut down half the cores and update the performance capability data so that the workload scheduler is aware of the degraded capability.
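The following is only an illustrative sketch of how a platform agent might map the three non-limiting examples above to concrete platform actions; the action names, the ambient-temperature threshold, and the decision order are assumptions, not part of the disclosure.

```python
# Illustrative sketch of choosing platform modifications for the three
# non-limiting examples above; names, thresholds, and actions are assumed.

def choose_platform_actions(req: dict, total_cores: int, ambient_temp_c: float) -> list:
    actions = []
    if ambient_temp_c > 35.0:
        # Higher ambient temperature reported: shut down half the cores and
        # report the degraded capability back to the workload scheduler.
        actions += [f"power_off_cores:{total_cores // 2}", "update_performance_capability"]
    elif req.get("priority") == "critical":
        # Critical workload needing few cores and all memory: park the unused
        # cores in a low power state and disable power capping for throughput.
        actions += [f"low_power_cores:{total_cores - req['cores']}", "disable_power_capping"]
    elif not req.get("needs_all_memory", True):
        # Multi-threaded workload that does not need all memory: keep every core
        # in a high power state but switch off some memory DIMMs.
        actions += ["all_cores_high_power", "power_off_unused_dimms"]
    return actions

print(choose_platform_actions({"priority": "critical", "cores": 2},
                              total_cores=16, ambient_temp_c=25.0))
```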
Advantageously, some embodiments of the invention may also help in system management. For example, being able to query the server platform itself for performance capability information and location information may enable highly accurate and reliable manageability.
In accordance with some embodiments of the invention, components of an energy efficient data center with power and thermal aware workload scheduling may include a flash memory based data store, extensions to a manageability engine interface to allow a host OS and applications such as workload schedulers to transact with the server platforms, and an interface to the server platform firmware / BIOS, service processor, and other related platform capabilities such as a node manager.
The foregoing and other aspects of the invention are achieved individually and in combination. The invention should not be construed as requiring two or
more of such aspects unless expressly required by a particular claim. Moreover, while the invention has been described in connection with what is presently considered to be the preferred examples, it is to be understood that the invention is not limited to the disclosed examples, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and the scope of the invention.
Claims
1. A server platform, comprising:
one or more processing cores; and
memory in electrical communication with the one or more processing cores, the memory storing code which when executed causes the server platform to:
store a platform power correlation factor;
receive workload requirements for a workload from a workload scheduler;
determine a current and expected energy consumption based on the workload requirements and the platform power correlation factor;
communicate the current and expected energy consumption for the workload to the workload scheduler; and
if the workload is dispatched to the server platform from the workload scheduler, store the workload requirements in the memory and modify characteristics of the server platform to execute the workload.
2. The server platform of claim 1, wherein the platform power correlation factor corresponds to an expected power draw at various levels of resource utilization.
3. The server platform of claim 1, wherein the workload requirements correspond to one or more of a number of processing cores, an amount of memory needed, and an expected run time.
4. The server platform of claim 1, wherein the workload scheduler is to determine if the workload can be sent to the server platform based on the current and expected energy consumption for the workload communicated to the workload scheduler from the server platform and pre-configured power and temperature thresholds for the server platform and also one or more of rack location, row location, and other data center specific information.
5. The server platform of claim 1, wherein the modified characteristics of the server platform include one or more of a processing core to switch off, a portion of memory to switch off, a power profile, and a performance profile.
6. The server platform of claim 1, wherein the memory to store the platform power correlation factor and the workload requirements includes a non-volatile memory.
7. A method of operating a server platform, comprising:
storing a platform power correlation factor in a memory;
receiving workload requirements for a workload from a workload scheduler;
determining a current and expected energy consumption based on the workload requirements and the platform power correlation factor;
communicating the current and expected energy consumption for the workload to the workload scheduler; and
if the workload is dispatched to the server platform from the workload scheduler, storing the workload requirements in the memory and modifying characteristics of the server platform to execute the workload.
8. The method of claim 7, wherein the platform power correlation factor corresponds to an expected power draw at various levels of resource utilization.
9. The method of claim 7, wherein the workload requirements correspond to one or more of a number of processing cores, an amount of memory needed, and an expected run time.
10. The method of claim 7, wherein the workload scheduler is to determine if the workload can be sent to the server platform based on the current and expected energy consumption for the workload communicated to the workload scheduler from the server platform and pre-configured power and temperature thresholds for the server platform and also one or more of rack location, row location, and other data center specific information.
11. The method of claim 7, wherein the modified characteristics of the server platform include one or more of a processing core to switch off, a portion of memory to switch off, a power profile, and a performance profile.
12. The method of claim 7, wherein the memory for storing the platform power correlation factor and the workload requirements includes a non-volatile memory.
13. A data center system, comprising:
one or more server platforms;
a workload scheduler; and
a set of stored data shared between the one or more server platforms and the workload scheduler,
wherein at least one of the server platforms includes:
one or more processing cores; and
memory in electrical communication with the one or more processing cores, the memory storing code which when executed causes the server platform to:
store a platform power correlation factor;
receive workload requirements for a workload from a workload scheduler;
determine a current and expected energy consumption based on the workload requirements and the platform power correlation factor;
communicate the current and expected energy consumption for the workload to the workload scheduler; and
if the workload is dispatched to the server platform from the workload scheduler, store the workload requirements in the memory and modify characteristics of the server platform to execute the workload,
wherein the workload scheduler is to determine if the workload can be sent to the server platform based on the current and expected energy consumption for the workload communicated to the workload scheduler from the server platform and pre-configured power and temperature thresholds for the server platform and also one or more of rack location, row location, and other data center specific information,
and wherein the set of stored data includes at least one of a platform compute policy and a platform energy policy.
14. The data center system of claim 13, wherein the platform power correlation factor corresponds to an expected power draw at various levels of resource utilization.
15. The data center system of claim 13, wherein the workload requirements correspond to one or more of a number of processing cores, an amount of memory needed, and an expected run time.
16. The data center system of claim 13, wherein the modified characteristics of the server platform include one or more of a processing core to switch off, a portion of memory to switch off, a power profile, and a performance profile.
17. The data center system of claim 13, wherein the memory to store the platform power correlation factor and the workload requirements includes a non-volatile memory.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IN3003/DEL/2010 | 2010-12-16 | | |
| IN3003DE2010 | 2010-12-16 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2012082349A2 (en) | 2012-06-21 |
| WO2012082349A3 (en) | 2012-08-16 |
Family
ID=46245273
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2011/062305 (WO, ceased) | Workload scheduling based on a platform energy policy | 2010-12-16 | 2011-11-29 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2012082349A2 (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104049716A (en) * | 2014-06-03 | 2014-09-17 | 中国科学院计算技术研究所 | Computer energy-saving method and system combined with temperature sensing |
| US9558088B2 (en) | 2012-12-13 | 2017-01-31 | International Business Machines Corporation | Using environmental signatures for test scheduling |
| EP3267312A1 (en) * | 2016-07-07 | 2018-01-10 | Honeywell International Inc. | Multivariable controller for coordinated control of computing devices and building infrastructure in data centers or other locations |
| US20230305906A1 (en) * | 2022-03-24 | 2023-09-28 | Honeywell International S.R.O. | Systems and method for thermal management of computing systems |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6167524A (en) * | 1998-04-06 | 2000-12-26 | International Business Machines Corporation | Apparatus and method for efficient battery utilization in portable personal computers |
| US7062389B2 (en) * | 2001-06-18 | 2006-06-13 | Verisae, Inc. | Enterprise energy management system |
| US20090007128A1 (en) * | 2007-06-28 | 2009-01-01 | International Business Machines Corporation | method and system for orchestrating system resources with energy consumption monitoring |
| US8447993B2 (en) * | 2008-01-23 | 2013-05-21 | Palo Alto Research Center Incorporated | Integrated energy savings and business operations in data centers |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9558088B2 (en) | 2012-12-13 | 2017-01-31 | International Business Machines Corporation | Using environmental signatures for test scheduling |
| CN104049716A (en) * | 2014-06-03 | 2014-09-17 | 中国科学院计算技术研究所 | Computer energy-saving method and system combined with temperature sensing |
| EP3267312A1 (en) * | 2016-07-07 | 2018-01-10 | Honeywell International Inc. | Multivariable controller for coordinated control of computing devices and building infrastructure in data centers or other locations |
| US20230305906A1 (en) * | 2022-03-24 | 2023-09-28 | Honeywell International S.R.O. | Systems and method for thermal management of computing systems |
| US12204952B2 (en) * | 2022-03-24 | 2025-01-21 | Honeywell International S.R.O. | Systems and method for thermal management of computing systems |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2012082349A3 (en) | 2012-08-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US8996890B2 (en) | Method for power conservation in virtualized environments | |
| US8776066B2 (en) | Managing task execution on accelerators | |
| US9864627B2 (en) | Power saving operating system for virtual environment | |
| CN103069389B (en) | High-throughput computing method and system in a hybrid computing environment | |
| CN103069390B (en) | Method and system for re-scheduling workload in a hybrid computing environment | |
| US8489904B2 (en) | Allocating computing system power levels responsive to service level agreements | |
| US9015726B2 (en) | Scheduling jobs of a multi-node computer system based on environmental impact | |
| US8448006B2 (en) | Performing virtual and/or physical resource management for power management | |
| US11093297B2 (en) | Workload optimization system | |
| US8810584B2 (en) | Smart power management in graphics processing unit (GPU) based cluster computing during predictably occurring idle time | |
| US8842562B2 (en) | Method of handling network traffic through optimization of receive side scaling | |
| US20130167152A1 (en) | Multi-core-based computing apparatus having hierarchical scheduler and hierarchical scheduling method | |
| JP2012508940A (en) | Extending the processor for secure embedded container execution | |
| US20200341789A1 (en) | Containerized workload scheduling | |
| US11334436B2 (en) | GPU-based advanced memory diagnostics over dynamic memory regions for faster and efficient diagnostics | |
| US20140344595A1 (en) | Dynamic System Management Communication Path Selection | |
| JP2018503184A (en) | System and method for dynamic temporal power steering | |
| US20200267071A1 (en) | Traffic footprint characterization | |
| GB2427724A (en) | High speed and low power mode multiprocessor system using multithreading processors | |
| Choi et al. | Task Classification Based Energy‐Aware Consolidation in Clouds | |
| WO2012082349A2 (en) | Workload scheduling based on a platform energy policy | |
| US8457805B2 (en) | Power distribution considering cooling nodes | |
| US12001329B2 (en) | System and method for storage class memory tiering | |
| CN115220642B (en) | Predicting storage array capacity | |
| JP4852585B2 (en) | Computer-implemented method, computer-usable program product, and data processing system for saving energy in multipath data communication |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11848343; Country of ref document: EP; Kind code of ref document: A2 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 11848343; Country of ref document: EP; Kind code of ref document: A2 |