US20160120070A1 - Data center pressure anomaly detection and remediation - Google Patents
- Publication number
- US20160120070A1 (application US14/524,096)
- Authority
- US
- United States
- Prior art keywords
- fans
- servers
- fan
- data
- data center
- Prior art date
- Legal status
- Granted
Classifications
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K7/00—Constructional details common to different types of electric apparatus
- H05K7/20—Modifications to facilitate cooling, ventilating, or heating
- H05K7/20709—Modifications to facilitate cooling, ventilating, or heating for server racks or cabinets; for data centers, e.g. 19-inch computer racks
- H05K7/20718—Forced ventilation of a gaseous coolant
- H05K7/20745—Forced ventilation of a gaseous coolant within rooms for removing heat from cabinets, e.g. by air conditioning device
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F04—POSITIVE - DISPLACEMENT MACHINES FOR LIQUIDS; PUMPS FOR LIQUIDS OR ELASTIC FLUIDS
- F04D—NON-POSITIVE-DISPLACEMENT PUMPS
- F04D15/00—Control, e.g. regulation, of pumps, pumping installations or systems
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B15/00—Systems controlled by a computer
- G05B15/02—Systems controlled by a computer electric
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K7/00—Constructional details common to different types of electric apparatus
- H05K7/20—Modifications to facilitate cooling, ventilating, or heating
- H05K7/20709—Modifications to facilitate cooling, ventilating, or heating for server racks or cabinets; for data centers, e.g. 19-inch computer racks
- H05K7/20836—Thermal management, e.g. server temperature control
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B30/00—Energy efficient heating, ventilation or air conditioning [HVAC]
Definitions
- a system is described herein that is operable to automatically detect pressure anomalies within a data center, to generate an alert when such anomalies are detected, and to initiate actions to remediate the anomalies.
- the system monitors each of a plurality of fans used to dissipate heat generated by one or more servers in the data center.
- the fans may comprise, for example, server fans or blade chassis fans that blow air into a hot aisle containment unit.
- the system obtains data that indicates how an actual speed of each of the fans relates to a target speed of each of the fans.
- the system compares the obtained data to reference data that indicates, for each of the plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment.
- the system determines whether or not a pressure anomaly exists in the data center. If the system determines that a pressure anomaly exists in the data center, then the system may generate an alert and/or take steps to remediate the anomaly. Such steps may include, for example, modifying a manner of operation of one or more of the fans and/or modifying a manner of operation of one or more of the servers.
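For illustration only, the following minimal Python sketch shows one way the actual-to-target speed relationship described above might be represented and compared against a pressure-neutral reference value. The class and function names (FanSample, deviation_from_reference) and the reference ratio are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass


@dataclass
class FanSample:
    """One monitored observation of a fan (hypothetical structure)."""
    fan_id: str
    target_rpm: float  # speed the fan is being driven toward
    actual_rpm: float  # speed reported by a fan speed sensor

    def actual_to_target_ratio(self) -> float:
        """Relationship between the actual and target speed for this sample."""
        return self.actual_rpm / self.target_rpm if self.target_rpm else 0.0


def deviation_from_reference(sample: FanSample, reference_ratio: float) -> float:
    """Fractional deviation of the observed ratio from the pressure-neutral
    reference ratio for the same fan (e.g., 0.05 means a 5% deviation)."""
    return abs(sample.actual_to_target_ratio() - reference_ratio) / reference_ratio


# Example: a fan driven to 2,100 RPM but measured at 2,072 RPM, compared to an
# assumed pressure-neutral reference ratio of 0.995 for that fan model.
sample = FanSample("server-17/fan-0", target_rpm=2100.0, actual_rpm=2072.0)
print(round(deviation_from_reference(sample, reference_ratio=0.995), 4))
```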
- FIG. 1 is a perspective view of an example hot aisle containment system that may be implemented in a data center and that may benefit from the pressure anomaly detection and remediation embodiments described herein.
- FIG. 2 is a side view of another example hot aisle containment system that may be implemented in a data center and that may benefit from the pressure anomaly detection and remediation embodiments described herein.
- FIG. 3 is a block diagram of an example data center management system that is capable of automatically detecting pressure anomalies by monitoring server fans in a data center and taking certain actions in response to such detection.
- FIG. 4 is a block diagram of an example data center management system that is capable of automatically detecting pressure anomalies by monitoring blade server chassis fans in a data center and taking certain actions in response to such detection.
- FIG. 5 depicts a flowchart of a method for generating fan reference data that indicates, for each of a plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment.
- FIG. 6 depicts a flowchart of a method for automatically detecting a pressure anomaly within a data center in accordance with an embodiment.
- FIG. 7 depicts a flowchart of a method for automatically taking actions in response to the detection of a pressure anomaly within a data center in accordance with an embodiment.
- FIG. 8 is a block diagram of an example processor-based computer system that may be used to implement various embodiments.
- references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” or the like, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of persons skilled in the relevant art(s) to implement such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
- Section II describes example hot aisle containment systems that may be implemented in a data center and technical problems that may arise when using such systems.
- Section III describes example data center management systems that can help solve such technical problems by automatically detecting pressure anomalies in a data center, raising alerts about such anomalies, and taking actions to remediate such anomalies.
- Section IV describes an example processor-based computer system that may be used to implement various embodiments described herein.
- Section V describes some additional exemplary embodiments. Section VI provides some concluding remarks.
- FIG. 1 is a perspective view of an example hot aisle containment system 100 that may be implemented in a data center and that may benefit from the pressure anomaly detection and remediation embodiments described herein.
- Hot aisle containment system 100 may be installed in a data center for a variety of reasons including, but not limited to, protecting data center computing equipment, conserving energy, and reducing cooling costs by managing air flow.
- Hot aisle containment system 100 is merely representative of one type of hot aisle containment system. Persons skilled in the relevant art(s) will appreciate that a wide variety of other approaches to implementing a hot aisle containment system may be taken, and that hot aisle containment systems implemented in accordance with such other approaches may also benefit from the pressure anomaly detection and remediation embodiments described herein.
- example hot aisle containment system 100 includes a plurality of server cabinets 106 1 - 106 14 disposed on a floor 102 of a data center.
- Each server cabinet 106 1 - 106 14 is configured to house a plurality of servers.
- Each server has at least one cold air intake and at least one hot air outlet.
- Each server is situated in a server cabinet such that the cold air intake(s) thereof are facing or otherwise exposed to one of two cold aisles 112 , 114 while the hot air outlet(s) thereof are facing or otherwise exposed to a hot aisle 116 .
- The physical structure of server cabinets 106 1 - 106 14 , the servers housed therein, and doors 108 , 110 serve to isolate the air in cold aisles 112 , 114 from the air in hot aisle 116 . Still other structures or methods may be used to provide isolation between cold aisles 112 , 114 and hot aisle 116 . For example, foam or some other material may be inserted between the servers and the interior walls of server cabinets 106 1 - 106 14 to provide further isolation between the air in cold aisles 112 , 114 and the air in hot aisle 116 .
- In scenarios in which gaps exist between any of server cabinets 106 1 - 106 14 or between any of server cabinets 106 1 - 106 14 and the floor/ceiling, panels or other physical barriers may be installed to prevent air from flowing between hot aisle 116 and cold aisles 112 , 114 through such gaps.
- the enclosure created around the hot aisle by these various structures may be referred to as a “hot aisle containment unit.”
- a cooling system (not shown in FIG. 1 ) produces cooled air 118 which is circulated into each of cold aisles 112 , 114 via vents 104 in floor 102 .
- a variety of other methods for circulating cooled air 118 into cold aisles 112 , 114 may be used.
- cooled air 118 may be circulated into each of cold aisles 112 , 114 via vents in the walls at the end of a server row or vents in the ceiling.
- Fans integrated within the servers installed in server cabinets 106 1 - 106 14 operate to draw cooled air 118 into the servers via the cold air intakes thereof. Cooled air 118 absorbs heat from the servers' internal components, thereby becoming heated air 120 . Such heated air 120 is expelled by the server fans into hot aisle 116 via the server hot air outlets.
- the fans that draw cooled air 118 toward the server components and expel heated air 120 away from the server components need not be integrated within the servers themselves, but may also be externally located with respect to the servers.
- for example, where the servers comprise blade servers installed within a blade server chassis, the blade server chassis may itself include one or more fans that operate to draw cooled air 118 toward the blade servers and their components via one or more chassis cold air intakes and to expel heated air 120 away from the blade servers and their components via one or more chassis hot air outlets.
- Heated air 120 within hot aisle 116 may be drawn therefrom by one or more exhaust fans or other airflow control mechanisms (not shown in FIG. 1 ).
- heated air 120 may be drawn out of hot aisle 116 via vents in a ceiling disposed over hot aisle 116 and routed elsewhere using a system of ducts.
- heated air 120 or a portion thereof may be routed back to the cooling system to be cooled thereby and recirculated into cold aisles 112 , 114 .
- Heated air 120 or a portion thereof may also be vented from the data center to the outside world, or in colder climates, redirected back into the data center or an adjacent building or space to provide heating.
- hot aisle containment system 100 may also improve the energy efficiency of the data center and reduce cooling costs.
- FIG. 2 is a side view of another example hot aisle containment system 200 that may be implemented in a data center and that may benefit from the pressure anomaly detection and remediation embodiments described herein. Like hot aisle containment system 100 , hot aisle containment system 200 is also representative of merely one type of hot aisle containment system.
- example hot aisle containment system 200 includes a plurality of server cabinets 206 disposed on a floor 202 of a data center. Each server cabinet 206 is configured to house a plurality of servers. Each server has at least one cold air intake and at least one hot air outlet. Each server is situated in a server cabinet 206 such that the cold air intake(s) thereof are facing or otherwise exposed to one of two cold aisles 212 , 214 while the hot air outlet(s) thereof are facing or otherwise exposed to a hot aisle 216 .
- the physical structure of server cabinets 206 and the servers housed therein serve to isolate the air in cold aisles 212 , 214 from the air in hot aisle 216 .
- panels 208 may be installed between the tops of server cabinets 206 and ceiling 204 to further isolate the air in cold aisles 212 , 214 from the air in hot aisle 216 .
- a computer room air conditioner (CRAC) 210 produces cooled air 218 that is blown into one or more channels that run under floor 202 . Such cooled air 218 passes from these channel(s) into cold aisles 212 , 214 via vents 222 in floor 202 , although other means for venting cooled air 218 into cold aisles 212 , 214 may be used.
- CRAC 210 may represent, for example, an air-cooled CRAC, a glycol-cooled CRAC or a water-cooled CRAC.
- Still other types of cooling systems may be used to produce cooled air 218 , including but not limited to a computer room air handler (CRAH) and chiller, a pumped refrigerant heat exchanger and chiller, or a direct or indirect evaporative cooling system.
- Fans integrated within the servers installed in server cabinets 206 operate to draw cooled air 218 into the servers via the cold air intakes thereof. Cooled air 218 absorbs heat from the servers' internal components, thereby becoming heated air 220 . Such heated air 220 is expelled by the server fans into hot aisle 216 via the server hot air outlets.
- the fans that draw cooled air 218 toward the server components and expel heated air 220 away from the server components need not be integrated within the servers themselves, but may also be externally located with respect to the servers (e.g., such fans may be part of a blade server chassis).
- Heated air 220 within hot aisle 216 may be drawn out of hot aisle 216 by one or more exhaust fans or other airflow control mechanism (not shown in FIG. 2 ).
- heated air 220 may be drawn out of hot aisle 216 via vents 224 in a ceiling 204 disposed over hot aisle 216 and routed via one or more channels back to CRAC 210 to be cooled thereby and recirculated into cold aisles 212 , 214 .
- a portion of heated air 220 may also be vented from the data center to the outside world, or in colder climates, redirected back into the data center or an adjacent building or space to provide heating. In any case, direct recirculation of heated air 220 into cold aisles 212 , 214 is substantially prevented.
- hot aisle containment system 200 may also improve the energy efficiency of the data center and reduce cooling costs.
- hot aisle containment systems 100 , 200 may each be configured to maintain a slight negative pressure in the hot aisle. This may be achieved, for example, by utilizing one or more exhaust fans to draw a slightly greater volume of air per unit of time out of the hot aisle than is normally supplied to it during the same unit of time. By maintaining a slightly negative pressure in the hot aisle, air will tend to flow naturally from the cold aisles to the hot aisle. Furthermore, when a slightly negative pressure is maintained in the hot aisle, the server or blade chassis fans used to cool the servers' internal components will not be required to work as hard to blow or draw air over those components as they would if the pressure in the hot aisle exceeded that in the cold aisles.
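As a rough illustration of this strategy, the short sketch below computes an exhaust-fan airflow setpoint that removes slightly more air per unit of time than the fans supply to the hot aisle. The airflow figure and the 3% margin are assumed example values, not numbers from the patent.

```python
# Hypothetical illustration of the "slightly negative pressure" strategy:
# exhaust fans are set to remove a little more air per unit of time than the
# server or blade chassis fans supply to the hot aisle containment unit.
supply_cfm = 12000.0             # total airflow blown into the hot aisle (assumed)
negative_pressure_margin = 0.03  # draw ~3% more air out than is supplied (assumed)

exhaust_setpoint_cfm = supply_cfm * (1.0 + negative_pressure_margin)
print(f"Exhaust setpoint: {exhaust_setpoint_cfm:.0f} CFM")  # 12360 CFM
```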
- if excessive positive pressure builds up in the hot aisle containment unit relative to the adjacent cold aisle(s), the heated air in the hot aisle containment unit can push out of containment panels and other openings into the cold aisle(s) and be drawn into nearby servers. These servers can exceed their specified thermal values and trip thermal alarms or possibly become damaged due to excessive heat. Excessive pressure in the hot aisle containment unit can also cause other environmental control components to be damaged. For example, exhaust fans that are used to draw hot air out of the hot aisle containment unit could potentially be damaged by the excessive pressure.
- various data center management systems will be described that can help address such problems by automatically detecting pressure anomalies in a data center that includes a hot aisle containment system and taking actions to remediate such anomalies before equipment damage occurs.
- FIG. 3 is a block diagram of an example data center management system 300 that is capable of automatically detecting pressure anomalies in a data center and taking certain actions in response thereto.
- System 300 may be implemented, for example and without limitation, to detect and remediate pressure anomalies in a data center that implements a hot aisle containment system such as hot aisle containment system 100 discussed above in reference to FIG. 1 , hot aisle containment system 200 discussed above in reference to FIG. 2 , or some other type of hot aisle containment system.
- data center management system 300 includes a computing device 302 and a plurality of servers 306 1 - 306 N , each of which is connected to computing device 302 via a network 304 .
- Computing device 302 is intended to represent a processor-based electronic device that is configured to execute software for performing certain data center management operations, some of which will be described herein.
- Computing device 302 may represent, for example, a desktop computer or a server.
- computing device 302 is not so limited, and may also represent other types of computing devices, such as a laptop computer, a tablet computer, a netbook, a wearable computer (e.g., a head-mounted computer), or the like.
- Servers 306 1 - 306 N represent server computers located within a data center. Generally speaking, each of servers 306 1 - 306 N is configured to perform operations involving the provision of data to other computers. For example, one or more of servers 306 1 - 306 N may be configured to provide data to client computers over a wide area network, such as the Internet. Furthermore, one or more of servers 306 1 - 306 N may be configured to provide data to other servers, such as any other ones of servers 306 1 - 306 N or any other servers inside or outside of the data center within which servers 306 1 - 306 N reside. Each of servers 306 1 - 306 N may comprise, for example, a special-purpose type of server, such as a Web server, a mail server, a file server, or the like.
- Network 304 may comprise a data center local area network (LAN) that facilitates communication between each of servers 306 1 - 306 N and computing device 302 .
- network 304 may comprise any type of network or combination of networks suitable for facilitating communication between computing devices.
- Network(s) 304 may include, for example and without limitation, a wide area network (e.g., the Internet), a personal area network, a private network, a public network, a packet network, a circuit-switched network, a wired network, and/or a wireless network.
- server 306 1 includes a number of components. These components include one or more server fans 330 , a fan control component 332 , one or more fan speed sensors 334 , and a data center management agent 336 . It is to be understood that each server 306 2 - 306 N includes instances of the same or similar components, but that these have not been shown in FIG. 3 due to space constraints and for ease of illustration.
- Server fan(s) 330 comprise one or more mechanical devices that operate to produce a current of air.
- each server fan 330 may comprise a mechanical device that includes a plurality of blades that are radially attached to a central hub-like component and that can revolve therewith to produce a current of air.
- Each server fan 330 may comprise, for example, a fixed-speed or variable-speed fan.
- Server fan(s) 330 are operable to generate airflow for the purpose of dissipating heat generated by one or more components of server 306 1 .
- Server components that may generate heat include but are not limited to central processing units (CPUs), chipsets, memory devices, network adapters, hard drives, power supplies, or the like.
- server 306 1 includes one or more cold air intakes and one or more hot air outlets.
- each server fan 330 may be operable to draw air into server 306 1 via the cold air intake(s) and to expel air therefrom via the hot air outlet(s).
- the cold air intake(s) may be facing or otherwise exposed to a data center cold aisle and the hot air outlet(s) may be facing or otherwise exposed to a data center hot aisle.
- each server fan 330 is operable to draw cooled air into server 306 1 from the cold aisle and expel heated air therefrom into the hot aisle.
- Fan control component 332 comprises a component that operates to control a speed at which each server fan 330 rotates.
- the fan speed may be represented, for example, in revolutions per minute (RPMs), and may range from 0 RPM (i.e., server fan is off) to some upper limit.
- the different fan speeds that can be achieved by a particular server fan will vary depending upon the fan type.
- Fan control component 332 may be implemented in hardware (e.g., using one or more digital and/or analog circuits), as software (e.g., software executing on one or more processors of server 306 1 ), or as a combination of hardware and software.
- Fan control component 332 may implement an algorithm for controlling the speed of each server fan 330 .
- fan control component 332 may implement an algorithm for selecting a target fan speed for each server fan 330 based on any number of ascertainable factors.
- the target fan speed may be selected based on a temperature sensed by a temperature sensor internal to, adjacent to, or otherwise associated with server 306 1 , or based on a determined degree of usage of one or more server components, although these are only a few examples. It is also possible that fan control component 332 may select a target fan speed for each server fan 330 based on external input received from a data center management tool or other entity, as will be discussed elsewhere herein.
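The sketch below illustrates one possible target-speed selection algorithm of the kind described above, using a simple temperature-based ramp. The ramp endpoints and RPM limits are assumptions; an actual fan control component could use very different inputs and curves.

```python
def select_target_rpm(inlet_temp_c: float,
                      min_rpm: float = 1200.0,
                      max_rpm: float = 6000.0) -> float:
    """Pick a target fan speed from a sensed temperature.

    A hypothetical linear ramp between assumed temperature set points; real
    fan control firmware may also weigh component utilization or external
    commands received from a data center management tool.
    """
    low_temp_c, high_temp_c = 25.0, 45.0  # assumed ramp endpoints
    if inlet_temp_c <= low_temp_c:
        return min_rpm
    if inlet_temp_c >= high_temp_c:
        return max_rpm
    fraction = (inlet_temp_c - low_temp_c) / (high_temp_c - low_temp_c)
    return min_rpm + fraction * (max_rpm - min_rpm)


print(select_target_rpm(35.0))  # midpoint of the ramp -> 3600.0 RPM
```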
- server 306 1 may include multiple fan control components.
- server 306 1 may include different fan control components that operate to control the speed of different server fan(s), respectively.
- Fan speed sensor(s) 334 comprise one or more sensors that operate to determine an actual speed at which each server fan 330 is operating.
- the actual speed at which a particular server fan 330 operates may differ from a target speed at which the server fan is being driven to operate as determined by fan control component 332 .
- For example, while fan control component 332 may determine that a particular server fan 330 should be driven to operate at a speed of 2,100 RPM, in reality the particular server fan 330 may be operating at a speed of 2,072 RPM.
- the difference between the target speed and the actual speed may be due to a number of factors, including the design of the server fan itself, the design of the components used to drive the server fan, as well as the ambient conditions in which the server fan is operating. For example, if a higher pressure exists at the hot air outlet(s) of server 306 1 than exists at the cold air inlets thereof, this may cause server fan(s) 330 to operate at a reduced actual speed relative to a desired target speed.
- Any type of sensor that can be used to determine the speed of a fan may be used to implement fan speed sensor(s) 334 .
- fan speed sensor(s) 334 comprise one or more tachometers, although this example is not intended to be limiting.
- Data center management agent 336 comprises a software component executing on one or more processors of server 306 1 (not shown in FIG. 3 ). Generally speaking, data center management agent 336 performs operations that enable a remotely-executing data management tool to collect information about various operational aspects of server 306 1 and that enable the remotely-executing data management tool to modify a manner of operation of server 306 1 .
- Data center management agent 336 includes a reporting component 340 .
- Reporting component 340 is operable to collect data concerning the operation of server fan(s) 330 and to send such data to the remotely-executing data center management tool.
- data may include, for example, a target speed of a server fan 330 as determined by fan control component 332 (or other component configured to select a target speed to which server fan 330 is to be driven) at a particular point in time or within a given timeframe as well as an actual speed of the server fan 330 as detected by a fan speed sensor 334 at the same point in time or within the same timeframe.
- reporting component 340 operates to intermittently collect a target speed and an actual speed of each server fan 330 and to send such target speed and actual speed data to the remotely-executing data center management tool.
- reporting component 340 may operate to obtain such data on a periodic basis and provide it to the remotely-executing data center management tool.
- the exact times and/or rate at which such data collection and reporting is carried out by reporting component 340 may be fixed or configurable depending upon the implementation.
- the remotely-executing data center management tool can specify when and/or how often such data collection and reporting should occur.
- the data collection and reporting may be carried out automatically by reporting component 340 and the data may then be pushed to the remotely-executing data center management tool.
- the data collection and reporting may be carried out by reporting component 340 only when the remotely-executing data center management tool requests (i.e., polls) reporting component 340 for the data.
- the target and actual speed data for each server fan 330 that is conveyed by reporting component 340 to the remotely-executing data center management tool can be used by the remotely-executing data center management tool to determine if a pressure anomaly exists in the data center.
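A minimal sketch of such a reporting loop is shown below, assuming a periodic push model. The function names, the reporting interval, and the stubbed sensor reads are hypothetical; the transport to the remotely-executing management tool is abstracted as a callable.

```python
import time
from typing import Callable, Dict, List


def collect_fan_samples(read_target_rpm: Callable[[str], float],
                        read_actual_rpm: Callable[[str], float],
                        fan_ids: List[str]) -> List[Dict]:
    """Gather the target and actual speed of each monitored fan."""
    now = time.time()
    return [{"fan_id": fan_id,
             "target_rpm": read_target_rpm(fan_id),
             "actual_rpm": read_actual_rpm(fan_id),
             "timestamp": now}
            for fan_id in fan_ids]


def report_loop(send: Callable[[List[Dict]], None],
                read_target_rpm: Callable[[str], float],
                read_actual_rpm: Callable[[str], float],
                fan_ids: List[str],
                interval_s: float = 30.0,
                iterations: int = 3) -> None:
    """Periodically push fan speed data to the remote management tool."""
    for _ in range(iterations):
        send(collect_fan_samples(read_target_rpm, read_actual_rpm, fan_ids))
        time.sleep(interval_s)


# Example wiring with stubbed sensor reads and a print-based "send".
report_loop(send=print,
            read_target_rpm=lambda fan_id: 2100.0,
            read_actual_rpm=lambda fan_id: 2072.0,
            fan_ids=["fan-0", "fan-1"],
            interval_s=0.01,
            iterations=2)
```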
- Data center management agent 336 also includes a server operation management component 342 .
- Server operation management component 342 is operable to receive instructions from the remotely-executing data center management tool, and in response to those instructions, change a manner of operation of server 306 1 .
- the change in manner of operation of server 306 1 may be intended to remediate or otherwise mitigate a pressure anomaly that has been detected within the data center in which server 306 1 resides.
- Ways in which server operation management component 342 may change the manner of operation of server 306 1 include but are not limited to: changing (e.g., reducing) a speed of one or more server fans 330 , causing data center management agent 336 to begin monitoring and reporting the temperature of internal server components to the remotely-executing data center management tool (or increasing a rate at which such monitoring/reporting occurs), terminating at least one process executing on server 306 1 and/or discontinuing the use of at least one resource of server 306 1 (e.g., pursuant to the migration of a customer workflow to another server), reducing an amount of power supplied to one or more internal components of server 306 1 , or shutting down server 306 1 entirely.
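The sketch below shows one way an agent-side operation management component might dispatch such instructions. The command vocabulary and handler names are hypothetical; the patent only enumerates the kinds of changes an agent might make.

```python
from typing import Callable, Dict


class ServerOperationManager:
    """Sketch of an agent-side handler for remediation instructions."""

    def __init__(self,
                 set_fan_speed: Callable[[str, float], None],
                 start_temp_reporting: Callable[[float], None],
                 shut_down: Callable[[], None]) -> None:
        # Hypothetical command vocabulary mapped onto platform hooks.
        self._handlers = {
            "set_fan_speed": lambda cmd: set_fan_speed(cmd["fan_id"], cmd["rpm"]),
            "report_temperatures": lambda cmd: start_temp_reporting(cmd["interval_s"]),
            "shut_down": lambda cmd: shut_down(),
        }

    def handle(self, command: Dict) -> None:
        """Dispatch one instruction received from the management tool."""
        self._handlers[command["action"]](command)


# Example with stubbed platform hooks.
manager = ServerOperationManager(
    set_fan_speed=lambda fan_id, rpm: print(f"{fan_id} -> {rpm} RPM"),
    start_temp_reporting=lambda interval: print(f"reporting temps every {interval}s"),
    shut_down=lambda: print("shutting down server"),
)
manager.handle({"action": "set_fan_speed", "fan_id": "fan-0", "rpm": 1800.0})
```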
- computing device 302 includes a data center management tool 310 .
- Data center management tool 310 comprises a software component that is executed by one or more processors of computing device 302 (not shown in FIG. 3 ).
- data center management tool 310 is operable to collect operational data from each of servers 306 1 - 306 N relating to the target and actual fan speeds of those servers and to use such operational data to determine if a pressure anomaly exists within the data center in which those servers are located.
- data center management tool 310 is operable to take certain actions in response to determining that such a pressure anomaly exists such as generating an alert and/or changing a manner of operation of one or more of servers 306 1 - 306 N in a manner intended to remediate the anomaly.
- Data center management tool 310 includes a fan monitoring component 312 , a pressure anomaly detection component 318 , and a pressure anomaly response component 320 .
- data center management tool 310 is operable to access real-time fan data 314 and fan reference data 316 .
- Real-time fan data 314 and fan reference data 316 may each be stored in volatile and/or non-volatile memory within computing device 302 or may be stored in one or more volatile and/or non-volatile memory devices that are external to computing device 302 and communicatively connected thereto for access thereby.
- Real-time fan data 314 and fan reference data 316 may each be data that is stored separately from data center management tool 310 and accessed thereby or may be data that is internally stored with respect to data center management tool 310 (e.g., within one or more data structures of data center management tool 310 ).
- Fan monitoring component 312 is operable to collect information from a reporting component installed on each of servers 306 1 - 306 N (e.g., reporting component 340 installed on server 306 1 ), wherein such information includes operational information about one or more server fans on each of servers 306 1 - 306 N . As was previously described, such information may include target speed and actual speed data for each monitored server fan on servers 306 1 - 306 N . Fan monitoring component 312 stores such operational information as part of real-time fan data 314 . Such real-time fan data 314 may comprise the raw target and actual fan speed data received from servers 306 1 - 306 N , or it may comprise a processed version thereof. For example, fan monitoring component 312 may perform certain operations (e.g., filtering, time-averaging, smoothing, error correcting, or the like) on the raw target and actual fan speed data before storing it as real-time fan data 314 .
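For illustration, the sketch below time-averages raw fan reports before they are exposed as real-time fan data. The window size and data structure are assumptions; the patent only notes that filtering, time-averaging, smoothing, or error correction may be applied.

```python
from collections import defaultdict, deque
from statistics import mean
from typing import Deque, Dict


class RealTimeFanData:
    """Sketch of time-averaging raw fan reports before they are stored."""

    def __init__(self, window: int = 5) -> None:
        # Keep the last `window` actual-to-target ratios per fan (assumed size).
        self._ratios: Dict[str, Deque[float]] = defaultdict(
            lambda: deque(maxlen=window))

    def add_report(self, fan_id: str, target_rpm: float, actual_rpm: float) -> None:
        if target_rpm > 0:  # ignore reports for fans that are switched off
            self._ratios[fan_id].append(actual_rpm / target_rpm)

    def smoothed_ratio(self, fan_id: str) -> float:
        """Time-averaged actual-to-target ratio for one fan."""
        return mean(self._ratios[fan_id])


store = RealTimeFanData()
for actual in (2072.0, 2068.0, 2075.0):
    store.add_report("fan-0", target_rpm=2100.0, actual_rpm=actual)
print(round(store.smoothed_ratio("fan-0"), 4))
```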
- Pressure anomaly detection component 318 is operable to compare the obtained real-time fan data 314 to fan reference data 316 to determine whether a pressure anomaly exists in the data center in which servers 306 1 - 306 N reside.
- Fan reference data 316 is data that indicates, for each server fan that is monitored by fan monitoring component 312 , how an actual speed of the server fan relates to a target speed of the server fan in a substantially pressure-neutral environment (i.e., in an environment in which the pressure at the cold air intake(s) of the server is at least roughly equivalent to the pressure at the hot air outlet(s) thereof).
- By comparing the target-vs-actual speed data that is obtained during operation of the server fans to the reference target-vs-actual speed data for the same server fans in a substantially pressure-neutral environment, pressure anomaly detection component 318 is able to determine whether or not a pressure anomaly exists in the data center. Specific details concerning how fan reference data 316 may be obtained and how pressure anomaly detection component 318 is able to detect pressure anomalies by comparing real-time fan data 314 to fan reference data 316 will be provided below with respect to FIGS. 5 and 6 .
- Pressure anomaly response component 320 is operable to perform certain actions automatically in response to the detection of a pressure anomaly by pressure anomaly detection component 318 .
- pressure anomaly response component 320 may generate an alert or send instructions to one or more of servers 306 1 - 306 N to cause those servers to change their manner of operation. Such changes may be intended to remediate the pressure anomaly. Specific details concerning the automatic responses that may be performed by pressure anomaly response component 320 in response to the detection of a pressure anomaly will be provided below with respect to FIG. 7 .
- computing device 302 may be located in the same data center as servers 306 1 - 306 N or may be located remotely with respect to the data center. Furthermore, it is possible that various subsets of servers 306 1 - 306 N may be located in different data centers. In such a scenario, data center management tool 310 may be capable of detecting pressure anomalies in different data centers and responding to or remediating the same.
- although data center management tool 310 is shown as part of computing device 302 , in alternate implementations, data center management tool 310 may be installed and executed on any one or more of servers 306 1 - 306 N .
- data center management tool 310 may be installed and executed on one of servers 306 1 - 306 N and operate to perform pressure anomaly detection and remediation for servers 306 1 - 306 N .
- an instance of data center management tool 310 may be installed and executed on one server in each of a plurality of subsets of servers 306 1 - 306 N and operate to perform pressure anomaly detection and remediation for the servers in that subset.
- FIG. 3 depicts a data center management system 300 in which server fans are monitored and information obtained thereby is used to detect pressure anomalies.
- however, other types of fans may also be monitored in accordance with embodiments, and information obtained thereby can likewise be used to detect pressure anomalies.
- FIG. 4 depicts an alternate data center management system 400 in which blade server chassis fans are monitored and information obtained thereby is used to detect pressure anomalies.
- data center management system 400 includes a computing device 402 that executes a data center management tool 410 and a plurality of blade server chassis 406 1 - 406 N , each of which is connected to computing device 402 via a network 404 .
- Computing device 402 , data center management tool 410 , and network 404 may be substantially similar to previously-described computing device 302 , data center management tool 310 and network 304 , respectively, except that data center management tool 410 is configured to collect blade server chassis fan operational information as opposed to server fan operational information and to detect pressure anomalies via an analysis thereof.
- data center management tool 410 includes a fan monitoring component 412 , a pressure anomaly detection component 418 and a pressure anomaly response component 420 that may operate in a substantially similar manner to fan monitoring component 312 , pressure anomaly detection component 318 , and pressure anomaly response component 320 , respectively, as described above in reference to FIG. 3 . Furthermore, data center management tool 410 is operable to access real-time fan data 414 and fan reference data 416 which may be substantially similar to real-time fan data 314 and fan reference data 316 except that such data may refer to blade server chassis fans as opposed to server fans.
- Blade server chassis 406 1 - 406 N represent blade server chassis located within a data center. Generally speaking, each of blade server chassis 406 1 - 406 N is configured to house one or more blade servers. As further shown in FIG. 4 , blade server chassis 406 1 includes a number of components. These components include one or more blade server chassis fans 430 , a fan control component 432 , one or more fan speed sensors 434 , and a data center management agent 436 . It is to be understood that each blade server chassis 406 2 - 406 N includes instances of the same or similar components, but that these have not been shown in FIG. 4 due to space constraints and for ease of illustration.
- Blade server chassis fan(s) 430 comprise one or more mechanical devices that operate to produce a current of air.
- each blade server chassis fan 430 may comprise a mechanical device that includes a plurality of blades that are radially attached to a central hub-like component and that can revolve therewith to produce a current of air.
- Each blade server chassis fan 430 may comprise, for example, a fixed-speed or variable-speed fan.
- Blade server chassis fan(s) 430 are operable to generate airflow for the purpose of dissipating heat generated by one or more blade servers installed within blade server chassis 406 1 .
- blade server chassis 406 1 includes one or more cold air intakes and one or more hot air outlets.
- each blade server chassis fan 430 may be operable to draw air into blade server chassis 406 1 via the cold air intake(s) and to expel air therefrom via the hot air outlet(s).
- the cold air intake(s) may be facing or otherwise exposed to a data center cold aisle and the hot air outlet(s) may be facing or otherwise exposed to a data center hot aisle.
- each blade server chassis fan 430 is operable to draw cooled air into blade server chassis 406 1 from the cold aisle and expel heated air therefrom into the hot aisle.
- Fan control component 432 comprises a component that operates to control a speed at which each blade server chassis fan 430 rotates.
- the fan speed may range from 0 RPM (i.e., the fan is off) to some upper limit.
- the different fan speeds that can be achieved by a particular blade server chassis fan will vary depending upon the fan type.
- Fan control component 432 may be implemented in hardware (e.g., using one or more digital and/or analog circuits), as software (e.g., software executing on one or more processors of blade server chassis 406 1 ), or as a combination of hardware and software.
- Fan control component 432 may implement an algorithm for controlling the speed of each blade server chassis fan 430 .
- fan control component 432 may implement an algorithm for selecting a target fan speed for each blade server chassis fan 430 based on any number of ascertainable factors.
- the target fan speed may be selected based on a temperature sensed by a temperature sensor internal to, adjacent to, or otherwise associated with blade server chassis 406 1 , or based on a determined degree of usage of one or more blade server components, although these are only a few examples. It is also possible that fan control component 432 may select a target fan speed for each blade server chassis fan 430 based on external input received from a data center management tool or other entity, as will be discussed elsewhere herein.
- blade server chassis 406 1 may include multiple fan control components.
- blade server chassis 406 1 may include different fan control components that operate to control the speed of different blade server chassis fan(s), respectively.
- Fan speed sensor(s) 434 comprise one or more sensors that operate to determine an actual speed at which each blade server chassis fan 430 is operating. Any type of sensor that can be used to determine the speed of a fan may be used to implement fan speed sensor(s) 434 . In one embodiment, fan speed sensor(s) 434 comprise one or more tachometers, although this example is not intended to be limiting.
- Data center management agent 436 comprises a software component executing on one or more processors of blade server chassis 406 1 (not shown in FIG. 4 ). Generally speaking, data center management agent 436 performs operations that enable remotely-executing data center management tool 410 to collect information about various operational aspects of blade server chassis 406 1 and that enable remotely-executing data center management tool 410 to modify a manner of operation of blade server chassis 406 1 .
- Data center management agent 436 includes a reporting component 440 .
- Reporting component 440 is operable to collect data concerning the operation of blade server chassis fan(s) 430 and to send such data to remotely-executing data center management tool 410 .
- data may include, for example, a target speed of a blade server chassis fan 430 as determined by fan control component 432 (or other component configured to select a target speed to which blade server chassis fan 430 is to be driven) at a particular point in time or within a given timeframe as well as an actual speed of the blade server chassis fan 430 as detected by a fan speed sensor 434 at the same point in time or within the same timeframe.
- reporting component 440 operates to intermittently collect a target speed and an actual speed of each blade server chassis fan 430 and to send such target speed and actual speed data to remotely-executing data center management tool 410 .
- the target and actual speed data for each blade server chassis fan 430 that is conveyed by reporting component 440 to remotely-executing data center management tool 410 can be used by remotely-executing data center management tool 410 to determine if a pressure anomaly exists in the data center.
- Data center management agent 436 also includes a blade server chassis (BSC) operation management component 442 .
- BSC operation management component 442 is operable to receive instructions from remotely-executing data center management tool 410 , and in response to those instructions, change a manner of operation of blade server chassis 406 1 .
- the change in manner of operation of blade server chassis 406 1 may be intended to remediate or otherwise mitigate a pressure anomaly that has been detected within the data center in which blade server chassis 406 1 resides.
- Ways in which BSC operation management component 442 may change the manner of operation of blade server chassis 406 1 include but are not limited to: changing (e.g., reducing) a speed of one or more blade server chassis fans 430 , causing data center management agent 436 to begin monitoring and reporting the temperature of blade servers and/or blade server components to remotely-executing data center management tool 410 (or increasing a rate at which such monitoring/reporting occurs), or shutting down blade server chassis 406 1 entirely.
- a data center management agent may also be installed on each blade server installed within blade server chassis 406 1 . These agents may be used by data center management tool 410 to carry out blade-server-specific remediation actions such as but not limited to: terminating at least one process executing on a blade server and/or discontinuing the use of at least one resource of a blade server (e.g., pursuant to the migration of a customer workflow to another server), reducing an amount of power supplied to one or more components of a blade server, or shutting down a blade server entirely.
- in further embodiments, both server fans included in one or more servers and blade server chassis fans included in one or more blade server chassis are monitored, and information obtained thereby is used to detect pressure anomalies.
- in such embodiments, remediation actions can be taken by changing the manner of operation of one or more servers, server fans, blade server chassis, blade server chassis fans, or blade servers.
- FIG. 5 depicts a flowchart 500 of one example method for generating fan reference data 316 , 416 as described above in reference to FIGS. 3 and 4 , respectively.
- the method of flowchart 500 is described herein by way of example only and is not intended to be limiting. Persons skilled in the relevant art(s) will appreciate that other techniques may be used to generate fan reference data 316 , 416 .
- the method of flowchart 500 begins at step 502 in which data is obtained that indicates, for each of a plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment.
- Each fan may comprise, for example, a server fan or a blade server chassis fan.
- a substantially pressure-neutral environment may comprise an environment in which the pressure at the fan inlet is roughly or substantially equivalent to the pressure at the fan outlet.
- Such data may be obtained, for example, by testing a fan using tachometer or other suitable sensor while the fan is operating in a substantially pressure-neutral environment to determine how the actual speed of the fan compares to the target speed to which the fan is being driven. Such data may also be obtained by calibrating the design of a fan so that it operates at a particular actual speed when being driven to a particular target speed. Such data may further be obtained from product specifications associated with a particular fan.
- the data obtained during step 502 may refer to an individual fan only or to a particular type of fan (e.g., a particular brand or model of fan).
- the data that is obtained during step 502 may indicate, for a particular fan or fan type, how the actual speed of the fan relates to the desired target speed of the fan for multiple different target speeds. For example, for a variable-speed fan, a range of actual fan speeds may be determined that relate to a corresponding range of target speeds. In further accordance with this example, a range of actual fan speeds may be determined that relate to a target speed range of 0 RPM to some maximum RPM.
- the data obtained during step 502 is stored in a data store or data structure that is accessible to a data center management tool, such as either of data center management tool 310 of FIG. 3 or data center management tool 410 of FIG. 4 .
- the data obtained during step 502 may be stored in a data store that is separate from data center management tool 310 , 410 and accessed thereby, or may be data that is internally stored with respect to data center management tool 310 , 410 (e.g., within one or more data structures of data center management tool 310 , 410 ).
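The following sketch illustrates one way fan reference data of the kind described in flowchart 500 might be generated by bench-testing a fan in a substantially pressure-neutral environment across a range of target speeds. The harness functions and the stub fan's 1.5% shortfall are hypothetical; the patent also allows such data to come from calibration or product specifications.

```python
from typing import Callable, Dict, Iterable


def build_fan_reference(drive_to_target: Callable[[float], None],
                        read_actual_rpm: Callable[[], float],
                        target_rpms: Iterable[float]) -> Dict[float, float]:
    """Map each target speed to the actual speed observed in a substantially
    pressure-neutral bench environment (hypothetical test harness)."""
    reference = {}
    for target in target_rpms:
        drive_to_target(target)              # let the fan settle at this setpoint
        reference[target] = read_actual_rpm()  # record the settled actual speed
    return reference


# Stand-in fan for illustration: settles about 1.5% below its target speed.
class _StubFan:
    def __init__(self) -> None:
        self.target = 0.0

    def drive(self, rpm: float) -> None:
        self.target = rpm

    def read(self) -> float:
        return self.target * 0.985


fan = _StubFan()
print(build_fan_reference(fan.drive, fan.read, target_rpms=[1200, 2400, 3600]))
# {1200: 1182.0, 2400: 2364.0, 3600: 3546.0}
```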
- FIG. 6 depicts a flowchart 600 of a method for automatically detecting a pressure anomaly within a data center in accordance with an embodiment.
- the method of flowchart 600 may be performed, for example, by data center management tool 310 of FIG. 3 or data center management tool 410 of FIG. 4 and therefore will be described herein with continued reference to those embodiments. However, the method is not limited to those embodiments.
- the method of flowchart 600 begins at step 602 , during which each of a plurality of fans used to dissipate heat generated by one or more servers in a data center is monitored to obtain data that indicates how an actual speed of each of the fans relates to a target speed of each of the fans.
- the fans may be, for example, server fans and/or blade server chassis fans.
- This step may be performed, for example, by fan monitoring component 312 of data center management tool 310 (as described above in reference to FIG. 3 ) or fan monitoring component 412 of data center management tool 410 (as described above in reference to FIG. 4 ).
- these components can collect such data from data center management agents executing on the servers or blade server chassis that house such servers.
- data may be stored as real-time fan data 314 , 414 .
- the data obtained during step 602 is compared to reference data that indicates, for each of the plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment.
- This step may be performed, for example, by pressure anomaly detection component 318 of data center management tool 310 (as described above in reference to FIG. 3 ) or pressure anomaly detection component 418 of data center management tool 410 (as described above in reference to FIG. 4 ).
- This step may comprise, for example, comparing real-time fan data 314 to fan reference data 316 or comparing real-time fan data 414 to fan reference data 416 .
- At step 606 , based at least on the comparison conducted during step 604 , it is determined whether a pressure anomaly exists in the data center. Like step 604 , this step may also be performed, for example, by pressure anomaly detection component 318 or pressure anomaly detection component 418 .
- the comparing carried out in step 604 between the data obtained during step 602 and the reference data may comprise, for example, determining a measure of difference or deviation between an actual-to-target speed relationship specified by the data obtained during step 602 and an actual-to-target speed relationship specified by the reference data. For example, if a particular degree of deviation between the obtained data actual-to-target speed relationship and the reference data actual-to-target speed relationship is observed or if a particular pattern of deviation is observed over time, then pressure anomaly detection component 318 , 418 may determine that a pressure anomaly exists.
- pressure anomaly detection component 318 , 418 may determine that a pressure anomaly exists if the measure of difference for a particular number of the fans exceeds a particular threshold. This approach recognizes that a pressure anomaly such as that described above (i.e., a positive pressure is building up in a hot aisle containment unit relative to one or more adjacent cold aisles) may be likely to significantly impact the behavior of a large number of fans. For example, if an N % or greater deviation from a reference actual-to-target speed relationship is observed for M % or greater of the monitored fan population, then pressure anomaly detection component 318 , 418 may determine that a pressure anomaly exists. In addition to the foregoing, pressure anomaly detection component 318 , 418 may consider the proximity or location of the fans for which deviations from a reference actual-to-target speed relationship are being reported.
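A minimal sketch of the threshold test described above follows. The 5% deviation and 30% population thresholds are assumed illustrative values standing in for the N% and M% figures; the proximity or location of the deviating fans, which the patent also mentions, is not modeled here.

```python
from typing import Dict


def pressure_anomaly_exists(observed_ratios: Dict[str, float],
                            reference_ratios: Dict[str, float],
                            deviation_threshold: float = 0.05,
                            population_threshold: float = 0.30) -> bool:
    """Flag an anomaly when at least M% of fans deviate by at least N% from
    their pressure-neutral actual-to-target speed relationship."""
    deviating = 0
    for fan_id, reference in reference_ratios.items():
        observed = observed_ratios.get(fan_id)
        if observed is None or reference == 0:
            continue  # no current report, or fan not expected to be spinning
        if abs(observed - reference) / reference >= deviation_threshold:
            deviating += 1
    return deviating >= population_threshold * len(reference_ratios)


observed = {"fan-0": 0.93, "fan-1": 0.94, "fan-2": 0.99}
reference = {"fan-0": 0.99, "fan-1": 0.99, "fan-2": 0.99}
print(pressure_anomaly_exists(observed, reference))  # True: 2 of 3 fans slowed >5%
```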
- FIG. 7 depicts a flowchart 700 of a method for automatically taking actions in response to the detection of a pressure anomaly within a data center in accordance with an embodiment.
- the method of flowchart 700 may be performed, for example, by data center management tool 310 of FIG. 3 or data center management tool 410 of FIG. 4 and therefore will be described herein with continued reference to those embodiments. However, the method is not limited to those embodiments.
- the method of flowchart 700 begins at step 702 , in which it is determined that a pressure anomaly exists in the data center.
- This step is analogous to step 606 of flowchart 600 and thus may be performed in a manner described above with reference to that flowchart.
- Step 702 may be performed, for example, by pressure anomaly detection component 318 or pressure anomaly detection component 418 .
- At step 704 , in response to the determination in step 702 that a pressure anomaly exists in the data center, one or more actions are selectively performed. This step may be performed, for example, by pressure anomaly response component 320 of data center management tool 310 (as described above in reference to FIG. 3 ) or pressure anomaly response component 420 of data center management tool 410 (as described above in reference to FIG. 4 ).
- Steps 706 , 708 and 710 show various types of actions that may be selectively performed in response to the determination that a pressure anomaly exists. Each of these steps may be carried out in isolation or in conjunction with one or more other steps.
- At step 706 , an alert is generated.
- This alert may be audible, visible and/or haptic in nature.
- the alert may be generated, for example, via a user interface of computing device 302 , computing device 402 , or via a user interface of a computing device that is communicatively connected thereto.
- the alert may be recorded in a log.
- the alert may also be transmitted to another device or to a user in the form of a message, e-mail or the like.
- At step 708 , a manner of operation of one or more of the fans is modified. For example, pressure anomaly response component 320 may send commands to server operation management components 342 executing on servers 306 1 - 306 N to cause the speed of certain server fans that are determined to be associated with the pressure anomaly to be reduced.
- pressure anomaly response component 420 may send commands to BSC operation management components 442 executing on blade server chassis 406 1 - 406 N to cause the speed of certain blade server chassis fans that are determined to be associated with the pressure anomaly to be reduced. This may have the effect of reducing the pressure within a hot aisle containment unit toward which the fans are blowing heated air, thereby helping to remediate the pressure anomaly.
- pressure anomaly response component 320 , 420 may also begin to monitor the temperature of internal server components via data center management agents 336 , 436 (or increase the rate at which such information is reported) so that pressure anomaly response component 320 , 420 can determine whether the reduction of the fan speed is going to cause those components to exceed specified thermal limits and potentially be damaged. If pressure anomaly response component 320 , 420 determines that the reduction of the fan speed is going to cause those components to exceed specified thermal limits and potentially be damaged, then pressure anomaly response component 320 , 420 may take additional steps, such as increasing fan speeds or shutting down one or more servers.
- At step 710 , the manner of operation of at least one of the servers in the data center is modified.
- pressure anomaly response component 320 , 420 may interact with data center management agents to shut down one or more of the servers that are determined to be impacted by the pressure anomaly.
- pressure anomaly response component 320 , 420 may operate to migrate one or more customer workflows from servers that are determined to be impacted by the pressure anomaly to servers that are not.
- pressure anomaly response component 320 , 420 may interact with data center management agents to reduce an amount of power supplied to one or more internal components of a server that is determined to be impacted by the pressure anomaly, which can have the effect of reducing the temperature of such internal components.
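The sketch below ties the responses of flowchart 700 together: raising an alert (step 706), reducing fan speeds (step 708), and modifying server operation (step 710) if the reduced airflow pushes components past a thermal limit. The callback hooks and the 85 °C limit are hypothetical stand-ins for the data center management agents the response component would actually instruct.

```python
from typing import Callable, List


def respond_to_pressure_anomaly(affected_servers: List[str],
                                alert: Callable[[str], None],
                                reduce_fan_speed: Callable[[str], None],
                                read_component_temps: Callable[[str], List[float]],
                                shut_down: Callable[[str], None],
                                thermal_limit_c: float = 85.0) -> None:
    """Sketch of the selective responses of flowchart 700 (hypothetical hooks)."""
    # Step 706: raise an alert about the detected pressure anomaly.
    alert("Pressure anomaly detected in hot aisle containment unit")
    for server in affected_servers:
        # Step 708: lower fan speeds associated with the anomaly to relieve
        # pressure in the hot aisle containment unit.
        reduce_fan_speed(server)
        # Step 710: modify server operation if the reduced airflow pushes
        # internal components past their (assumed) thermal limit.
        if max(read_component_temps(server)) > thermal_limit_c:
            shut_down(server)


respond_to_pressure_anomaly(
    affected_servers=["server-17"],
    alert=print,
    reduce_fan_speed=lambda s: print(f"reducing fan speeds on {s}"),
    read_component_temps=lambda s: [62.0, 71.5],
    shut_down=lambda s: print(f"shutting down {s}"),
)
```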
- FIG. 8 depicts an example processor-based computer system 800 that may be used to implement various embodiments described herein.
- computer system 800 may be used to implement computing device 302 , any of servers 306 1 - 306 N , computing device 402 , blade server chassis 406 1 - 406 N , or any of the blade servers installed therein.
- Computer system 800 may also be used to implement any or all of the steps of any or all of the flowcharts depicted in FIGS. 5-7 .
- the description of computer system 800 is provided herein for purposes of illustration, and is not intended to be limiting. Embodiments may be implemented in further types of computer systems, as would be known to persons skilled in the relevant art(s).
- computer system 800 includes a processing unit 802 , a system memory 804 , and a bus 806 that couples various system components including system memory 804 to processing unit 802 .
- Processing unit 802 may comprise one or more microprocessors or microprocessor cores.
- Bus 806 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
- System memory 804 includes read only memory (ROM) 808 and random access memory (RAM) 810 .
- a basic input/output system 812 (BIOS) is stored in ROM 808 .
- Computer system 800 also has one or more of the following drives: a hard disk drive 814 for reading from and writing to a hard disk, a magnetic disk drive 816 for reading from or writing to a removable magnetic disk 818 , and an optical disk drive 820 for reading from or writing to a removable optical disk 822 such as a CD ROM, DVD ROM, BLU-RAYTM disk or other optical media.
- Hard disk drive 814 , magnetic disk drive 816 , and optical disk drive 820 are connected to bus 806 by a hard disk drive interface 824 , a magnetic disk drive interface 826 , and an optical drive interface 828 , respectively.
- the drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computer.
- Although a hard disk, a removable magnetic disk and a removable optical disk are described, other types of computer-readable memory devices and storage structures can be used to store data, such as flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROM), and the like.
- a number of program modules may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. These program modules include an operating system 830 , one or more application programs 832 , other program modules 834 , and program data 836 .
- the program modules may include computer program logic that is executable by processing unit 802 to perform any or all of the functions and features of computing device 302 , any of servers 306 1 - 306 N , computing device 402 , blade server chassis 406 1 - 406 N , or any of the blade servers installed therein, as described above.
- the program modules may also include computer program logic that, when executed by processing unit 802 , performs any of the steps or operations shown or described in reference to the flowcharts of FIGS. 5-7 .
- a user may enter commands and information into computer system 800 through input devices such as a keyboard 838 and a pointing device 840 .
- Other input devices may include a microphone, joystick, game controller, scanner, or the like.
- a touch screen is provided in conjunction with a display 844 to allow a user to provide user input via the application of a touch (as by a finger or stylus for example) to one or more points on the touch screen.
- These and other input devices are often connected to processing unit 802 through a serial port interface 842 that is coupled to bus 806 , but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB).
- Such interfaces may be wired or wireless interfaces.
- a display 844 is also connected to bus 806 via an interface, such as a video adapter 846 .
- computer system 800 may include other peripheral output devices (not shown) such as speakers and printers.
- Computer system 800 is connected to a network 848 (e.g., a local area network or wide area network such as the Internet) through a network interface or adapter 850 , a modem 852 , or other suitable means for establishing communications over the network.
- Modem 852, which may be internal or external, is connected to bus 806 via serial port interface 842.
- As used herein, the terms "computer program medium," "computer-readable medium," and "computer-readable storage medium" are used to generally refer to memory devices or storage structures such as the hard disk associated with hard disk drive 814 , removable magnetic disk 818 , removable optical disk 822 , as well as other memory devices or storage structures such as flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROM), and the like. Such computer-readable storage media are distinguished from and non-overlapping with communication media (do not include communication media). Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave.
- modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media includes wireless media such as acoustic, RF, infrared and other wireless media. Embodiments are also directed to such communication media.
- computer programs and modules may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. Such computer programs may also be received via network interface 850 , serial port interface 842 , or any other interface type. Such computer programs, when executed or loaded by an application, enable computer system 800 to implement features of embodiments of the present invention discussed herein. Accordingly, such computer programs represent controllers of computer system 800 .
- Embodiments are also directed to computer program products comprising software stored on any computer useable medium. Such software, when executed in one or more data processing devices, causes a data processing device(s) to operate as described herein.
- Embodiments of the present invention employ any computer-useable or computer-readable medium, known now or in the future. Examples of computer-readable mediums include, but are not limited to memory devices and storage structures such as RAM, hard drives, floppy disks, CD ROMs, DVD ROMs, zip disks, tapes, magnetic storage devices, optical storage devices, MEMs, nanotechnology-based storage devices, and the like.
- computer system 800 may be implemented as hardware logic/electrical circuitry or firmware.
- one or more of these components may be implemented in a system-on-chip (SoC).
- SoC may include an integrated circuit chip that includes one or more of a processor (e.g., a microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or further circuits and/or embedded firmware to perform its functions.
- a method that is performed by data center management software executing on at least one computer is described herein.
- each of a plurality of fans used to dissipate heat generated by one or more servers in a data center is monitored to obtain data that indicates how an actual speed of each of the fans relates to a target speed of each of the fans.
- the obtained data is then compared to reference data that indicates, for each of the plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment. Based on the comparison, it is determined that a pressure anomaly exists in the data center.
- one or more of the following is performed: generating an alert and modifying a manner of operation of one or more of at least one of the fans and at least one of the servers.
- the plurality of fans comprise one or more of a server fan and a blade chassis fan.
- each of the plurality of fans is configured to blow air into a hot aisle containment unit.
- modifying the manner of operation of at least one of the fans comprises reducing a speed of at least one of the fans.
- the method may further include monitoring a temperature of one or more internal components of one or more of the servers responsive to reducing the speed of the at least one of the fans.
- modifying the manner of operation of at least one of the servers comprises migrating a customer workflow from at least one of the servers.
- modifying the manner of operation of at least one of the servers comprises shutting down at least one of the servers.
- modifying the manner of operation of at least one of the servers comprises reducing an amount of power supplied to one or more internal components of one or more of the servers.
- comparing the obtained data to the reference data comprises determining, for each of the fans, a measure of difference between an actual-to-target speed relationship specified by the obtained data and an actual-to-target speed relationship specified by the reference data.
- determining that the pressure anomaly exists in the data center based on the comparison may comprise determining that the measure of difference for a particular number of the fans exceeds a particular threshold.
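- As a non-limiting illustration of the comparison described above, the following Python sketch computes a per-fan measure of difference between the observed and reference actual-to-target speed relationships and flags an anomaly when enough fans deviate. The 5% per-fan threshold and the fan-count threshold are hypothetical values chosen only for the example.

```python
# Illustrative sketch: compare live actual-to-target fan speed ratios against
# reference ratios captured in a substantially pressure-neutral environment.
def ratio(actual_rpm, target_rpm):
    """Actual-to-target speed relationship for one fan."""
    return actual_rpm / target_rpm if target_rpm > 0 else 0.0

def pressure_anomaly_exists(live, reference, per_fan_threshold=0.05, fan_count_threshold=10):
    """live and reference map a fan id to an (actual_rpm, target_rpm) pair."""
    deviating = 0
    for fan_id, (actual, target) in live.items():
        ref_actual, ref_target = reference.get(fan_id, (0.0, 0.0))
        if abs(ratio(actual, target) - ratio(ref_actual, ref_target)) > per_fan_threshold:
            deviating += 1
    return deviating >= fan_count_threshold

# Twelve fans lagging ~8% below target, while the reference shows ~1% lag.
live = {f"fan-{i}": (5520.0, 6000.0) for i in range(12)}
reference = {f"fan-{i}": (5940.0, 6000.0) for i in range(12)}
print(pressure_anomaly_exists(live, reference))   # True
```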
- a system is also described herein.
- the system includes at least one processor and a memory.
- the memory stores computer program logic for execution by the at least one processor.
- the computer program logic includes one or more components configured to perform operations when executed by the at least one processor.
- the one or more components include a fan monitoring component, a pressure anomaly detection component and a pressure anomaly response component.
- the fan monitoring component is operable to monitor each of a plurality of fans used to dissipate heat generated by one or more servers in a data center to obtain data that indicates how an actual speed of each of the fans relates to a target speed of each of the fans.
- the pressure anomaly detection component is operable to compare the obtained data to reference data that indicates, for each of the plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment and, based on the comparison, determine that a pressure anomaly exists in the data center.
- the pressure anomaly response component is operable to perform one or more of the following in response to a determination that the pressure anomaly exists: (i) generate an alert and (ii) modify a manner of operation of one or more of at least one of the fans and at least one of the servers.
- the pressure anomaly response component is operable to modify the manner of operation of at least one of the fans by reducing a speed of at least one of the fans.
- the pressure anomaly response component may be further operable to monitor a temperature of one or more internal components of one or more of the servers responsive to reducing the speed of the at least one of the fans.
- the pressure anomaly response component is operable to modify the manner of operation of at least one of the servers by migrating at least one service or resource from at least one of the servers.
- the pressure anomaly response component is operable to modify the manner of operation of at least one of the servers by shutting down at least one of the servers.
- the pressure anomaly response component is operable to modify the manner of operation of at least one of the servers by reducing an amount of power supplied to one or more internal components of one or more of the servers.
- the pressure anomaly detection component is operable to compare the obtained data to the reference data by determining, for each of the fans, a measure of difference between an actual-to-target speed relationship specified by the obtained data and an actual-to-target speed relationship specified by the reference data.
- the pressure anomaly detection component may be operable to determine that the pressure anomaly exists in the data center based on the comparison by determining that the measure of difference for a particular number of the fans exceeds a particular threshold.
- the computer program product comprises a computer-readable memory having computer program logic recorded thereon that when executed by at least one processor causes the at least one processor to perform a method that includes: monitoring each of a plurality of fans used to dissipate heat generated by one or more servers in a data center to obtain data that indicates how an actual speed of each of the fans relates to a target speed of each of the fans; determining that a pressure anomaly exists in the data center based on at least the obtained data; and based on the determination that the pressure anomaly exists in the data center, performing one or more of: generating an alert; and modifying a manner of operation of one or more of: at least one of the fans; and at least one of the servers.
- determining that the pressure anomaly exists in the data center based on at least the obtained data comprises comparing the obtained data to reference data that indicates, for each of the plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Thermal Sciences (AREA)
- Microelectronics & Electronic Packaging (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Mechanical Engineering (AREA)
- Cooling Or The Like Of Electrical Apparatus (AREA)
- Control Of Positive-Displacement Air Blowers (AREA)
- Air Conditioning Control Device (AREA)
Abstract
Description
- In data centers that utilize hot aisle containment units for environmental control, macroscopic pressure issues occasionally arise. For example, if the fans that operate to cool a group of servers cause air to be blown into a hot aisle containment unit at a rate that exceeds the rate at which air can be removed therefrom, then the hot aisle containment unit will become pressured relative to adjacent cold aisle(s). As the pressure increases, the hot air in the hot aisle containment unit can push out of containment panels and other openings into the cold aisle(s) and be drawn into nearby servers. These servers can exceed their specified thermal values and trip thermal alarms or possibly become damaged due to excessive heat. Excessive pressure in the hot aisle containment unit can also cause other environmental control components to be damaged. For example, exhaust fans that are used to draw hot air out of the hot aisle containment unit could potentially be damaged by the excessive pressure.
- A system is described herein that is operable to automatically detect pressure anomalies within a data center, to generate an alert when such anomalies are detected, and to initiate actions to remediate the anomalies. In accordance with embodiments, the system monitors each of a plurality of fans used to dissipate heat generated by one or more servers in the data center. The fans may comprise, for example, server fans or blade chassis fans that blow air into a hot aisle containment unit. Through such monitoring, the system obtains data that indicates how an actual speed of each of the fans relates to a target speed of each of the fans. The system then compares the obtained data to reference data that indicates, for each of the plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment. Based on the comparison, the system determines whether or not a pressure anomaly exists in the data center. If the system determines that a pressure anomaly exists in the data center, then the system may generate an alert and/or take steps to remediate the anomaly. Such steps may include, for example, modifying a manner of operation of one or more of the fans and/or modifying a manner of operation of one or more of the servers.
- This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Moreover, it is noted that the claimed subject matter is not limited to the specific embodiments described in the Detailed Description and/or other sections of this document. Such embodiments are presented herein for illustrative purposes only. Additional embodiments will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein.
- The accompanying drawings, which are incorporated herein and form part of the specification, illustrate embodiments of the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the relevant art(s) to make and use the invention.
-
FIG. 1 is a perspective view of an example hot aisle containment system that may be implemented in a data center and that may benefit from the pressure anomaly detection and remediation embodiments described herein. -
FIG. 2 is a side view of another example hot aisle containment system that may be implemented in a data center and that may benefit from the pressure anomaly detection and remediation embodiments described herein. -
FIG. 3 is a block diagram of an example data center management system that is capable of automatically detecting pressure anomalies by monitoring server fans in a data center and taking certain actions in response to such detection. -
FIG. 4 is a block diagram of an example data center management system that is capable of automatically detecting pressure anomalies by monitoring blade server chassis fans in a data center and taking certain actions in response to such detection. -
FIG. 5 depicts a flowchart of a method for generating fan reference data that indicates, for each of a plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment. -
FIG. 6 depicts a flowchart of a method for automatically detecting a pressure anomaly within a data center in accordance with an embodiment. -
FIG. 7 depicts a flowchart of a method for automatically taking actions in response to the detection of a pressure anomaly within a data center in accordance with an embodiment. -
FIG. 8 is a block diagram of an example processor-based computer system that may be used to implement various embodiments. - The features and advantages of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.
- The following detailed description refers to the accompanying drawings that illustrate exemplary embodiments of the present invention. However, the scope of the present invention is not limited to these embodiments, but is instead defined by the appended claims. Thus, embodiments beyond those shown in the accompanying drawings, such as modified versions of the illustrated embodiments, may nevertheless be encompassed by the present invention.
- References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” or the like, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of persons skilled in the relevant art(s) to implement such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
- A system is described herein that is operable to automatically detect pressure anomalies within a data center, to generate an alert when such anomalies are detected, and to initiate actions to remediate the anomalies. In accordance with embodiments, the system monitors each of a plurality of fans used to dissipate heat generated by one or more servers in the data center. The fans may comprise, for example, server fans or blade chassis fans that blow air into a hot aisle containment unit. Through such monitoring, the system obtains data that indicates how an actual speed of each of the fans relates to a target speed of each of the fans. The system then compares the obtained data to reference data that indicates, for each of the plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment. Based on the comparison, the system determines whether or not a pressure anomaly exists in the data center. If the system determines that a pressure anomaly exists in the data center, then the system may generate an alert and/or take steps to remediate the anomaly. Such steps may include, for example, modifying a manner of operation of one or more of the fans, and/or modifying a manner of operation of one or more of the servers.
- Section II describes example hot aisle containment systems that may be implemented in a data center and technical problems that may arise when using such systems. Section III describes example data center management systems that can help solve such technical problems by automatically detecting pressure anomalies in a data center, raising alerts about such anomalies, and taking actions to remediate such anomalies. Section IV describes an example processor-based computer system that may be used to implement various embodiments described herein. Section V describes some additional exemplary embodiments. Section VI provides some concluding remarks.
-
FIG. 1 is a perspective view of an example hot aisle containment system 100 that may be implemented in a data center and that may benefit from the pressure anomaly detection and remediation embodiments described herein. Hot aisle containment system 100 may be installed in a data center for a variety of reasons including, but not limited to, protecting data center computing equipment, conserving energy, and reducing cooling costs by managing air flow. Hot aisle containment system 100 is merely representative of one type of hot aisle containment system. Persons skilled in the relevant art(s) will appreciate that a wide variety of other approaches to implementing a hot aisle containment system may be taken, and that hot aisle containment systems implemented in accordance with such other approaches may also benefit from the pressure anomaly detection and remediation embodiments described herein.
- As shown in FIG. 1, example hot aisle containment system 100 includes a plurality of server cabinets 106 1-106 14 disposed on a floor 102 of a data center. Each server cabinet 106 1-106 14 is configured to house a plurality of servers. Each server has at least one cold air intake and at least one hot air outlet. Each server is situated in a server cabinet such that the cold air intake(s) thereof are facing or otherwise exposed to one of two cold aisles 112, 114 while the hot air outlet(s) thereof are facing or otherwise exposed to a hot aisle 116. The physical structure of server cabinets 106 1-106 14, the servers housed therein, and doors 108, 110 serve to isolate the air in cold aisles 112, 114 from the air in hot aisle 116. Still other structures or methods may be used to provide isolation between cold aisles 112, 114 and hot aisle 116. For example, foam or some other material may be inserted between the servers and the interior walls of server cabinets 106 1-106 14 to provide further isolation between the air in cold aisles 112, 114 and the air in hot aisle 116. Additionally, in scenarios in which gaps exist between any of server cabinets 106 1-106 14 or between any of server cabinets 106 1-106 14 and the floor/ceiling, panels or other physical barriers may be installed to prevent air from flowing between hot aisle 116 and cold aisles 112, 114 through such gaps. The enclosure created around the hot aisle by these various structures may be referred to as a "hot aisle containment unit."
- A cooling system (not shown in FIG. 1) produces cooled air 118 which is circulated into each of cold aisles 112, 114 via vents 104 in floor 102. A variety of other methods for circulating cooled air 118 into cold aisles 112, 114 may be used. For example, cooled air 118 may be circulated into each of cold aisles 112, 114 via vents in the walls at the end of a server row or vents in the ceiling. Fans integrated within the servers installed in server cabinets 106 1-106 14 operate to draw cooled air 118 into the servers via the cold air intakes thereof. Cooled air 118 absorbs heat from the servers' internal components, thereby becoming heated air 120. Such heated air 120 is expelled by the server fans into hot aisle 116 via the server hot air outlets.
- It is noted that the fans that draw cooled air 118 toward the server components and expel heated air 120 away from the server components need not be integrated within the servers themselves, but may also be externally located with respect to the servers. For example, in a scenario in which the servers comprise blade servers installed within a blade server chassis, the blade server chassis may itself include one or more fans that operate to draw cooled air 118 toward the blade servers and their components via one or more chassis cold air intakes and to expel heated air 120 away from the blade servers and their components via one or more chassis hot air outlets.
- Heated air 120 within hot aisle 116 may be drawn therefrom by one or more exhaust fans or other airflow control mechanisms (not shown in FIG. 1). For example, heated air 120 may be drawn out of hot aisle 116 via vents in a ceiling disposed over hot aisle 116 and routed elsewhere using a system of ducts. Depending upon the implementation, heated air 120 or a portion thereof may be routed back to the cooling system to be cooled thereby and recirculated into cold aisles 112, 114. Heated air 120 or a portion thereof may also be vented from the data center to the outside world, or in colder climates, redirected back into the data center or an adjacent building or space to provide heating. In any case, direct recirculation of heated air 120 into cold aisles 112, 114 is substantially prevented. This helps to ensure that the temperature of the air that is drawn into the servers is kept at a level that does not exceed the operational specifications thereof, thereby avoiding damage to the servers' internal components. The foregoing features of hot aisle containment system 100 may also improve the energy efficiency of the data center and reduce cooling costs.
FIG. 2 is a side view of another example hotaisle containment system 200 that may be implemented in a data center and that may benefit from the pressure anomaly detection and remediation embodiments described herein. Like hotaisle containment system 100, hotaisle containment system 200 is also representative of merely one type of hot aisle containment system. - As shown in
FIG. 2 , example hotaisle containment system 200 includes a plurality ofserver cabinets 206 disposed on afloor 202 of a data center. Eachserver cabinet 206 is configured to house a plurality of servers. Each server has at least one cold air intake and at least one hot air outlet. Each server is situated in aserver cabinet 206 such that the cold air intake(s) thereof are facing or otherwise exposed to one of two 212, 214 while the hot air outlet(s) thereof are facing or otherwise exposed to acold aisles hot aisle 216. The physical structure ofserver cabinets 206 and the servers housed therein serve to isolate the air in 212, 214 from the air incold aisles hot aisle 216. Still other structures or methods may be used to provide isolation between 212, 214 andcold aisles hot aisle 216. For example, as further shown inFIG. 2 , panels 208 may be installed between the tops ofserver cabinets 206 andceiling 204 to further isolate the air in 212, 214 from the air incold aisles hot aisle 216. - A computer room air conditioner (CRAC) 210 produces cooled
air 218 that is blown into one or more channels that run underfloor 202. Such cooledair 218 passes from these channel(s) into 212, 214 viacold aisles vents 222 infloor 202, although other means for venting cooledair 218 into 212, 214 may be used.cold aisles CRAC 210 may represent, for example, an air-cooled CRAC, a glycol-cooled CRAC or a water-cooled CRAC. Still other types of cooling systems may be used to produce cooledair 218, including but not limited to a computer room air handler (CRAH) and chiller, a pumped refrigerant heat exchanger and chiller, or a direct or indirect evaporative cooling system. - Fans integrated within the servers installed in
server cabinets 206 operate to draw cooledair 218 into the servers via the cold air intakes thereof. Cooledair 218 absorbs heat from the servers' internal components, thereby becomingheated air 220. Suchheated air 220 is expelled by the server fans intohot aisle 216 via the server hot air outlets. The fans that draw cooledair 218 toward the server components and expelheated air 220 away from the server components need not be integrated within the servers themselves, but may also be externally located with respect to the servers (e.g., such fans may be part of a blade server chassis). -
Heated air 220 withinhot aisle 216 may be drawn out ofhot aisle 216 by one or more exhaust fans or other airflow control mechanism (not shown inFIG. 2 ). For example,heated air 220 may be drawn out ofhot aisle 216 viavents 224 in aceiling 204 disposed overhot aisle 216 and routed via one or more channels back toCRAC 210 to be cooled thereby and recirculated into 212, 214. A portion ofcold aisles heated air 220 may also be vented from the data center to the outside world, or in colder climates, redirected back into the data center or an adjacent building or space to provide heating. In any case, direct recirculation ofheated air 220 into 212, 214 is substantially prevented. This helps to ensure that the temperature of the air that is drawn into the servers is kept at a level that does not exceed the operational specifications thereof, thereby avoiding damage to the servers' internal components. The foregoing features of hotcold aisles aisle containment system 200 may also improve the energy efficiency of the data center and reduce cooling costs. - To obtain desired airflow, hot
100, 200 may each be configured to maintain a slight negative pressure in the hot aisle. This may be achieved, for example, by utilizing one or more exhaust fans to draw a slightly greater volume of air per unit of time out of the hot aisle than is normally supplied to it during the same unit of time. By maintaining a slightly negative pressure in the hot aisle, air will tend to flow naturally from the cold aisles to the hot aisle. Furthermore, when a slightly negative pressure is maintained in the hot aisle, the server or blade chassis fans used to cool the servers' internal components will not be required to work as hard to blow or draw air over those components as they would if the pressure in the hot aisle exceeded that in the cold aisles.aisle containment systems - Problems can occur, however, when a large group of server or blade chassis fans begin to operate in unison, thereby causing an extraordinary amount of air to flow into the hot aisle. This might occur for a variety of reasons. For example, such behavior could be caused by an increase in the ambient temperature in the cold aisles (e.g., if the normal ambient temperature in the cold aisles is 73° F., and the data center raises the ambient temperature to 85° F. due to a heat wave in the area). As another example, such behavior could be caused by a problem with the algorithms used to control server or blade chassis fan speeds. There may be still other causes. Regardless of the cause, such airflow into the hot aisle may surpass the exhaust capabilities of the hot aisle containment system, thereby generating a higher pressure in the hot aisle than in the cold aisles. A similar situation may arise if one or more exhaust fans that operate to draw air from the hot aisle stop working or become otherwise incapable of removing air from the hot aisle at the rate air is being blown into the hot aisle.
- In these types of situations, as the pressure increases in the hot aisle, the heated air in the hot aisle containment unit can push out of containment panels and other openings into the cold aisle(s) and be drawn into nearby servers. These servers can exceed their specified thermal values and trip thermal alarms or possibly become damaged due to excessive heat. Excessive pressure in the hot aisle containment unit can also cause other environmental control components to be damaged. For example, exhaust fans that are used to draw hot air out of the hot aisle containment unit could potentially be damaged by the excessive pressure. In the following section, various data center management systems will be described that can help address such problems by automatically detecting pressure anomalies in a data center that includes a hot aisle containment system and taking actions to remediate such anomalies before equipment damage occurs.
-
FIG. 3 is a block diagram of an example datacenter management system 300 that is capable of automatically detecting pressure anomalies in a data center and taking certain actions in response thereto.System 300 may be implemented, for example and without limitation, to detect and remediate pressure anomalies in a data center that implements a hot aisle containment system such as hotaisle containment system 100 discussed above in reference toFIG. 1 , hotaisle containment system 200 discussed above in reference toFIG. 2 , or some other type of hot aisle containment system. - As shown in
FIG. 3 , data center management system includes acomputing device 302 and a plurality of servers 306 1-306 N, each of which is connected tocomputing device 302 via anetwork 304.Computing device 302 is intended to represent a processor-based electronic device that is configured to execute software for performing certain data center management operations, some of which will be described herein.Computing device 302 may represent, for example, a desktop computer or a server. However,computing device 302 is not so limited, and may also represent other types of computing devices, such as a laptop computer, a tablet computer, a netbook, a wearable computer (e.g., a head-mounted computer), or the like. - Servers 306 1-306 N represent server computers located within a data center. Generally speaking, each of servers 306 1-306 N is configured to perform operations involving the provision of data to other computers. For example, one or more of servers 306 1-306 N may be configured to provide data to client computers over a wide area network, such as the Internet. Furthermore, one or more of servers 306 1-306 N may be configured to provide data to other servers, such as any other ones of servers 306 1-306 N or any other servers inside or outside of the data center within which servers 306 1-306 N reside. Each of servers 306 1-306 N may comprise, for example, a special-purpose type of server, such as a Web server, a mail server, a file server, or the like.
-
Network 304 may comprise a data center local area network (LAN) that facilitates communication between each of servers 306 1-306 N andcomputing device 302. However, this example is not intended to be limiting, andnetwork 304 may comprise any type of network or combination of networks suitable for facilitating communication between computing devices. Network(s) 304 may include, for example and without limitation, a wide area network (e.g., the Internet), a personal area network, a private network, a public network, a packet network, a circuit-switched network, a wired network, and/or a wireless network. - As further shown in
FIG. 3 , server 306 1 includes a number of components. These components include one ormore server fans 330, afan control component 332, one or morefan speed sensors 334, and a datacenter management agent 336. It is to be understood that each server 306 2-306 N includes instances of the same or similar components, but that these have not been shown inFIG. 3 due to space constraints and for ease of illustration. - Server fan(s) 330 comprise one or more mechanical devices that operate to produce a current of air. For example, each
server fan 330 may comprise a mechanical device that includes a plurality of blades that are radially attached to a central hub-like component and that can revolve therewith to produce a current of air. Eachserver fan 330 may comprise, for example, a fixed-speed or variable-speed fan. Server fan(s) 330 are operable to generate airflow for the purpose of dissipating heat generated by one or more components of server 306 1. Server components that may generate heat include but are not limited to central processing units (CPUs), chipsets, memory devices, network adapters, hard drives, power supplies, or the like. - In one embodiment, server 306 1 includes one or more cold air intakes and one or more hot air outlets. In further accordance with such an embodiment, each
server fan 330 may be operable to draw air into server 306 1 via the cold air intake(s) and to expel air therefrom via the hot air outlet(s). In still further accordance with such an embodiment, the cold air intake(s) may be facing or otherwise exposed to a data center cold aisle and the hot air outlet(s) may be facing or otherwise exposed to a data center hot aisle. In this embodiment, eachserver fan 330 is operable to draw cooled air into server 306 1 from the cold aisle and expel heated air therefrom into the hot aisle. -
Fan control component 332 comprises a component that operates to control a speed at which eachserver fan 330 rotates. The fan speed may be represented, for example, in revolutions per minute (RPMs), and may range from 0 RPM (i.e., server fan is off) to some upper limit. The different fan speeds that can be achieved by a particular server fan will vary depending upon the fan type.Fan control component 332 may be implemented in hardware (e.g., using one or more digital and/or analog circuits), as software (e.g., software executing on one or more processors of server 306 1), or as a combination of hardware and software. -
Fan control component 332 may implement an algorithm for controlling the speed of eachserver fan 330. For example,fan control component 332 may implement an algorithm for selecting a target fan speed for eachserver fan 330 based on any number of ascertainable factors. For example, the target fan speed may be selected based on a temperature sensed by a temperature sensor internal to, adjacent to, or otherwise associated with server 306 1, or based on a determined degree of usage of one or more server components, although these are only a few examples. It is also possible thatfan control component 332 may select a target fan speed for eachserver fan 330 based on external input received from a data center management tool or other entity, as will be discussed elsewhere herein. - Although only a single
fan control component 332 is shown inFIG. 3 , it is possible that server 306 1 may include multiple fan control components. For example, server 306 1 may include different fan control components that operate to control the speed of different server fan(s), respectively. - Fan speed sensor(s) 334 comprise one or more sensors that operate to determine an actual speed at which each
server fan 330 is operating. The actual speed at which aparticular server fan 330 operates may differ from a target speed at which the server fan is being driven to operate as determined byfan control component 332. For example, althoughfan control component 332 may determine that aparticular server fan 330 should be driven to operate at a speed of 2,100 RPM, in reality theparticular server fan 330 may be operating at a speed of 2,072 RPM. The difference between the target speed and the actual speed may be due to a number of factors, including the design of the server fan itself, the design of the components used to drive the server fan, as well as the ambient conditions in which the server fan is operating. For example, if a higher pressure exists at the hot air outlet(s) of server 306 1 than exists at the cold air inlets thereof, this may cause server fan(s) 330 to operate at a reduced actual speed relative to a desired target speed. - Any type of sensor that can be used to determine the speed of a fan may be used to implement fan speed sensor(s) 334. In one embodiment, fan speed sensor(s) 334 comprise one or more tachometers, although this example is not intended to be limiting.
- Data
center management agent 336 comprises a software component executing on one or more processors of server 306 1 (not shown inFIG. 3 ). Generally speaking, datacenter management agent 336 performs operations that enable a remotely-executing data management tool to collect information about various operational aspects of server 306 1 and that enable the remotely-executing data management tool to modify a manner of operation of server 306 1. - Data
center management agent 336 includes areporting component 340.Reporting component 340 is operable to collect data concerning the operation of server fan(s) 330 and to send such data to the remotely-executing data center management tool. Such data may include, for example, a target speed of aserver fan 330 as determined by fan control component 332 (or other component configured to select a target speed to whichserver fan 330 is to be driven) at a particular point in time or within a given timeframe as well as an actual speed of theserver fan 330 as detected by afan speed sensor 334 at the same point in time or within the same timeframe. - In an embodiment, reporting
component 340 operates to intermittently collect a target speed and an actual speed of eachserver fan 330 and to send such target speed and actual speed data to the remotely-executing data center management tool. For example, reportingcomponent 340 may operate to obtain such data on a periodic basis and provide it to the remotely-executing data center management tool. The exact times and/or rate at which such data collection and reporting is carried out by reportingcomponent 340 may be fixed or configurable depending upon the implementation. In an embodiment, the remotely-executing data center management tool can specify when and/or how often such data collection and reporting should occur. The data collection and reporting may be carried out automatically by reportingcomponent 340 and the data may then be pushed to the remotely-executing data center management tool. Alternatively, the data collection and reporting may be carried out by reportingcomponent 340 only when the remotely-executing data center management tool requests (i.e., polls) reportingcomponent 340 for the data. - As will be discussed elsewhere herein, the target and actual speed data for each
server fan 330 that is conveyed by reportingcomponent 340 to the remotely-executing data center management tool can be used by the remotely-executing data center management tool to determine if a pressure anomaly exists in the data center. - Data
center management agent 336 also includes a serveroperation management component 342. Serveroperation management component 342 is operable to receive instructions from the remotely-executing data center management tool, and in response to those instructions, change a manner of operation of server 306 1. As will be discussed elsewhere herein, the change in manner of operation of server 306 1 may be intended to remediate or otherwise mitigate a pressure anomaly that has been detected within the data center in which server 306 1 resides. The ways in which serveroperation management component 342 may change the manner of operation of server 306 1 may include but are not limited to: changing (e.g., reducing) a speed of one ormore server fans 330, causing datacenter management agent 336 to begin monitoring and reporting the temperature of internal server components to the remotely-executing data center management tool (or increasing a rate at which such monitoring/reporting occurs), terminating at least one process executing on server 306 1 and/or discontinuing the use of at least one resource of server 306 1 (e.g., pursuant to the migration of a customer workflow to another server), reducing an amount of power supplied to one or more internal components of server 306 1, or shutting down server 306 1 entirely. - As further shown in
FIG. 3 ,computing device 302 includes a datacenter management tool 310. Datacenter management tool 310 comprises a software component that is executed by one or more processors of computing device 302 (not shown inFIG. 3 ). Generally speaking, datacenter management tool 310 is operable to collect operational data from each of servers 306 1-306 N relating to the target and actual fan speeds of those servers and to use such operational data to determine if a pressure anomaly exists within the data center in which those servers are located. Furthermore, datacenter management tool 310 is operable to take certain actions in response to determining that such a pressure anomaly exists such as generating an alert and/or changing a manner of operation of one or more of servers 306 1-306 N in a manner intended to remediate the anomaly. - Data
center management tool 310 includes afan monitoring component 312, a pressureanomaly detection component 318, and a pressureanomaly response component 320. In order to perform its operations, datacenter management tool 310 is operable to access real-time fan data 314 andfan reference data 316. Real-time fan data 314 andfan reference data 316 may each be stored in volatile and/or non-volatile memory withincomputing device 302 or may be stored in one or more volatile and/or non-volatile memory devices that are external tocomputing device 302 and communicatively connected thereto for access thereby. Real-time fan data 314 andfan reference data 316 may each be data that is stored separately from datacenter management tool 310 and accessed thereby or may be data that is internally stored with respect to data center management tool 310 (e.g., within one or more data structures of data center management tool 310). -
Fan monitoring component 312 is operable to collect information from a reporting component installed on each of servers 306 1-306 N (e.g., reportingcomponent 340 installed on server 306 1), wherein such information includes operational information about one or more server fans on each of servers 306 1-306 N. As was previously described, such information may include target speed and actual speed data for each monitored server fan on servers 306 1-306 N.Fan monitoring component 312 stores such operational information as part of real-time fan data 314. Such real-time fan data 314 may comprise the raw target and actual fan speed data received from servers 306 1-306 N, or it may comprise a processed version thereof. For example,fan monitoring component 312 may perform certain operations (e.g., filtering, time-averaging, smoothing, error correcting, or the like) on the raw target and actual fan speed data before storing it as real-time fan data 314. - Pressure
anomaly detection component 318 is operable to compare the obtained real-time fan data 314 to fanreference data 314 to determine whether a pressure anomaly exists in the data center in which servers 306 1-306 N reside.Fan reference data 314 is data that indicates, for each server fan that is monitored byfan monitoring component 312, how an actual speed of the server fan relates to a target speed of the server fan in a substantially pressure-neutral environment (i.e., in an environment in which the pressure at the cold air intake(s) of the server is at least roughly equivalent to the pressure at the hot air outlet(s) thereof). By comparing the target-vs-actual speed data that is obtained during operation of the server fans to the reference target-vs-actual speed data for the same server fans in a substantially-pressure neutral environment, pressureanomaly detection component 318 is able to determine whether or not a pressure anomaly exists in the data center. Specific details concerning howfan reference data 314 may be obtained and how pressureanomaly detection component 318 is able to detect pressure anomalies by comparing real-time fan data 314 to fanreference data 316 will be provided below with respect toFIGS. 5 and 6 . - Pressure
anomaly response component 320 is operable to perform certain actions automatically in response to the detection of a pressure anomaly by pressureanomaly detection component 318. For example, pressureanomaly response component 320 may generate an alert or send instructions to one or more of servers 306 1-306 N to cause those servers to change their manner of operation. Such changes may be intended to remediate the pressure anomaly. Specific details concerning the automatic responses that may be performed by pressureanomaly response component 320 in response to the detection of a pressure anomaly will be provided below with respect toFIG. 7 . - Depending upon the implementation,
computing device 302 may be located in the same data center as servers 306 1-306 N or may be located remotely with respect to the data center. Furthermore, it is possible that various subsets of servers 306 1-306 N may be located in different data centers. In such a scenario,data management tool 310 may be capable of detecting pressure anomalies in different data centers and responding to or remediating the same. - Furthermore, although data
center management tool 310 is shown as part ofcomputing device 302, in alternate implementations, data center management tool may be installed and executed on any one or more of servers 306 1-306 N. For example, an instance of datacenter management tool 310 may be installed and executed on one of servers 306 1-306 N and operate to perform pressure anomaly detection and remediation for servers 306 1-306 N. Alternatively, an instance of datacenter management tool 310 may be installed and executed on one server in each of a plurality of subset of servers 306 1-306 N and operate to perform pressure anomaly detection and remediation for the servers in that subset. -
FIG. 3 depicts a datacenter management system 300 in which server fans are monitored and information obtained thereby is used to detect pressure anomalies. However, other types of fans used to dissipate heat generated by servers in a data center may be monitored in accordance with embodiments and information obtained thereby can also be used to detect pressure anomalies. By way of example only,FIG. 4 depicts an alternate datacenter management system 400 in which blade server chassis fans are monitored and information obtained thereby is used to detect pressure anomalies. - As shown in
FIG. 4 , data center management system includes acomputing device 402 that executes a datacenter management tool 410 and a plurality of blade server chassis 406 1-406 N, each of which is connected tocomputing device 402 via anetwork 404.Computing device 402, datacenter management tool 410, andnetwork 404 may be substantially similar to previously-describedcomputing device 302, datacenter management tool 310 andnetwork 304, respectively, except that datacenter management tool 410 is configured to collect blade server chassis fan operational information as opposed to server fan operational information and to detect pressure anomalies via an analysis thereof. - To this end, data
center management tool 410 includes afan monitoring component 412, a pressureanomaly detection component 418 and a pressureanomaly response component 420 that may operate in a substantially similar manner to fanmonitoring component 312, pressureanomaly detection component 318, and pressureanomaly response component 320, respectively, as described above in reference toFIG. 3 . Furthermore, datacenter management tool 410 is operable to access real-time fan data 414 andfan reference data 416 which may be substantially similar to real-time fan data 314 andfan reference data 316 except that such data may refer to blade server chassis fans as opposed to server fans. - Blade server chassis 406 1-406 N represent blade server chassis located within a data center. Generally speaking, each of blade server chassis 406 1-406 N is configured to house one or more blade servers. As further shown in
FIG. 4 , blade server chassis 406 1 includes a number of components. These components include one or more bladeserver chassis fans 430, afan control component 432, one or morefan speed sensors 434, and a datacenter management agent 436. It is to be understood that each blade server chassis 406 2-406 N includes instances of the same or similar components, but that these have not been shown inFIG. 4 due to space constraints and for ease of illustration. - Blade chassis server fan(s) 430 comprise one or more mechanical devices that operate to produce a current of air. For example, each blade
server chassis fan 430 may comprise a mechanical device that includes a plurality of blades that are radially attached to a central hub-like component and that can revolve therewith to produce a current of air. Each bladeserver chassis fan 430 may comprise, for example, a fixed-speed or variable-speed fan. Blade server chassis fan(s) 430 are operable to generate airflow for the purpose of dissipating heat generated by one or more blade servers installed within blade server chassis 406 1. - In one embodiment, blade server chassis 406 1 includes one or more cold air intakes and one or more hot air outlets. In further accordance with such an embodiment, each blade
server chassis fan 430 may be operable to draw air into blade server chassis 406 1 via the cold air intake(s) and to expel air therefrom via the hot air outlet(s). In still further accordance with such an embodiment, the cold air intake(s) may be facing or otherwise exposed to a data center cold aisle and the hot air outlet(s) may be facing or otherwise exposed to a data center hot aisle. In this embodiment, each bladeserver chassis fan 430 is operable to draw cooled air into blade server chassis 406 1 from the cold aisle and expel heated air therefrom into the hot aisle. -
Fan control component 432 comprises a component that operates to control a speed at which each bladeserver chassis fan 430 rotates. The fan speed may range from 0 RPM (i.e., server fan is off) to some upper limit. The different fan speeds that can be achieved by a particular blade server chassis fan will vary depending upon the fan type.Fan control component 432 may be implemented in hardware (e.g., using one or more digital and/or analog circuits), as software (e.g., software executing on one or more processors of blade server chassis 406 1), or as a combination of hardware and software. -
Fan control component 432 may implement an algorithm for controlling the speed of each bladeserver chassis fan 430. For example,fan control component 432 may implement an algorithm for selecting a target fan speed for each bladeserver chassis fan 430 based on any number of ascertainable factors. For example, the target fan speed may be selected based on a temperature sensed by a temperature sensor internal to, adjacent to, or otherwise associated with blade server chassis 406 1, or based on a determined degree of usage of one or more blade server components, although these are only a few examples. It is also possible thatfan control component 432 may select a target fan speed for each bladeserver chassis fan 430 based on external input received from a data center management tool or other entity, as will be discussed elsewhere herein. - Although only a single
fan control component 432 is shown inFIG. 4 , it is possible that blade server chassis 406 1 may include multiple fan control components. For example, blade server chassis 406 1 may include different fan control components that operate to control the speed of different blade server chassis fan(s), respectively. - Fan speed sensor(s) 434 comprise one or more sensors that operate to determine an actual speed at which each blade
server chassis fan 430 is operating. Any type of sensor that can be used to determine the speed of a fan may be used to implement fan speed sensor(s) 434. In one embodiment, fan speed sensor(s) 434 comprise one or more tachometers, although this example is not intended to be limiting. - Data
center management agent 436 comprises a software component executing on one or more processors of blade server chassis 406 1 (not shown in FIG. 4). Generally speaking, data center management agent 436 performs operations that enable remotely-executing data center management tool 410 to collect information about various operational aspects of blade server chassis 406 1 and that enable remotely-executing data center management tool 410 to modify a manner of operation of blade server chassis 406 1. - Data
center management agent 436 includes a reporting component 440. Reporting component 440 is operable to collect data concerning the operation of blade server chassis fan(s) 430 and to send such data to remotely-executing data center management tool 410. Such data may include, for example, a target speed of a blade server chassis fan 430 as determined by fan control component 432 (or other component configured to select a target speed to which blade server chassis fan 430 is to be driven) at a particular point in time or within a given timeframe, as well as an actual speed of the blade server chassis fan 430 as detected by a fan speed sensor 434 at the same point in time or within the same timeframe. In an embodiment, reporting component 440 operates to intermittently collect a target speed and an actual speed of each blade server chassis fan 430 and to send such target speed and actual speed data to remotely-executing data center management tool 410. The target and actual speed data for each blade server chassis fan 430 that is conveyed by reporting component 440 to remotely-executing data center management tool 410 can be used by remotely-executing data center management tool 410 to determine if a pressure anomaly exists in the data center.
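As a non-limiting illustration outside the patent text, the sketch below shows one plausible shape for the per-fan records that a reporting component might intermittently send to a remotely-executing management tool; the field names, the 30-second interval, and the send_report() stub are assumptions for illustration.

```python
# Illustrative sketch only: one plausible shape for intermittent fan-speed
# reports. Field names, the 30-second interval, and the send_report() stub
# are assumptions for illustration, not details from the patent.
import time
from dataclasses import dataclass, asdict

@dataclass
class FanSpeedSample:
    chassis_id: str      # e.g., "406-1"
    fan_id: str          # identifies the blade server chassis fan
    target_rpm: int      # speed the fan control component is driving toward
    actual_rpm: int      # speed measured by the fan speed sensor
    timestamp: float     # when both values were sampled

def send_report(samples):
    """Stand-in for the transport to the remote management tool."""
    print([asdict(s) for s in samples])

def reporting_loop(read_target, read_actual, fan_ids, chassis_id, interval_s=30):
    """Intermittently collect target and actual speeds for each fan and report them."""
    while True:
        now = time.time()
        samples = [FanSpeedSample(chassis_id, f, read_target(f), read_actual(f), now)
                   for f in fan_ids]
        send_report(samples)
        time.sleep(interval_s)
```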
- Data center management agent 436 also includes a blade server chassis (BSC) operation management component 442. BSC operation management component 442 is operable to receive instructions from remotely-executing data center management tool 410 and, in response to those instructions, change a manner of operation of blade server chassis 406 1. As will be discussed elsewhere herein, the change in manner of operation of blade server chassis 406 1 may be intended to remediate or otherwise mitigate a pressure anomaly that has been detected within the data center in which blade server chassis 406 1 resides. The ways in which BSC operation management component 442 may change the manner of operation of blade server chassis 406 1 may include but are not limited to changing (e.g., reducing) a speed of one or more blade server chassis fans 430, causing data center management agent 436 to begin monitoring and reporting the temperature of blade servers and/or blade server components to remotely-executing data center management tool 410 (or increasing a rate at which such monitoring/reporting occurs), or shutting down blade server chassis 406 1 entirely. - In an embodiment, a data center management agent may also be installed on each blade server installed within blade server chassis 406 1. These agents may be used by data
center management tool 410 to carry out blade-server-specific remediation actions such as but not limited to: terminating at least one process executing on a blade server and/or discontinuing the use of at least one resource of a blade server (e.g., pursuant to the migration of a customer workflow to another server), reducing an amount of power supplied to one or more components of a blade server, or shutting down a blade server entirely. - In a further embodiment of a data center management system, server fans included in one or more servers and blade server chassis fans included in one or more blade chassis are monitored and information obtained thereby is used to detect pressure anomalies. In further accordance with such an embodiment, remediation actions can be taken by changing the manner of operation of one or more servers, server fans, blade server chassis, blade server chassis fans, or blade servers.
-
FIG. 5 depicts a flowchart 500 of one example method for generating fan reference data 316, 416 as described above in reference to FIGS. 3 and 4, respectively. The method of flowchart 500 is described herein by way of example only and is not intended to be limiting. Persons skilled in the relevant art(s) will appreciate that other techniques may be used to generate fan reference data 316, 416. - As shown in
FIG. 5, the method of flowchart 500 begins at step 502, in which data is obtained that indicates, for each of a plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment. Each fan may comprise, for example, a server fan or a blade server chassis fan. A substantially pressure-neutral environment may comprise an environment in which the pressure at the fan inlet is roughly or substantially equivalent to the pressure at the fan outlet. - Such data may be obtained, for example, by testing a fan using a tachometer or other suitable sensor while the fan is operating in a substantially pressure-neutral environment to determine how the actual speed of the fan compares to the target speed to which the fan is being driven. Such data may also be obtained by calibrating the design of a fan so that it operates at a particular actual speed when being driven to a particular target speed. Such data may further be obtained from product specifications associated with a particular fan. The data obtained during
step 502 may refer to an individual fan only or to a particular type of fan (e.g., a particular brand or model of fan). - The data that is obtained during
step 502 may indicate, for a particular fan or fan type, how the actual speed of the fan relates to the desired target speed of the fan for multiple different target speeds. For example, for a variable-speed fan, a range of actual fan speeds may be determined that relates to a corresponding range of target speeds. In further accordance with this example, a range of actual fan speeds may be determined that relates to a target speed range of 0 RPM to some maximum RPM. - At
step 504, the data obtained during step 502 is stored in a data store or data structure that is accessible to a data center management tool, such as either of data center management tool 310 of FIG. 3 or data center management tool 410 of FIG. 4. By way of example, the data obtained during step 502 may be stored in a data store that is separate from data center management tool 310, 410 and accessed thereby, or may be data that is stored internally with respect to data center management tool 310, 410 (e.g., within one or more data structures of data center management tool 310, 410).
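To make steps 502 and 504 concrete, the following non-limiting sketch (not part of the original disclosure) represents fan reference data as a per-fan-type mapping from target speed to the actual speed observed in a substantially pressure-neutral environment, and persists it where a management tool could read it; the JSON layout and file path are illustrative assumptions.

```python
# Illustrative sketch only: build and store fan reference data as a mapping
# from target RPM to the actual RPM observed in a pressure-neutral test.
# The JSON layout and file path are assumptions, not details from the patent.
import json

def build_reference_curve(measure_actual_rpm, target_speeds):
    """Step 502 (sketch): record the actual speed observed at each target speed."""
    return {int(target): int(measure_actual_rpm(target)) for target in target_speeds}

def store_reference_data(fan_type, curve, path="fan_reference_data.json"):
    """Step 504 (sketch): persist the curve where the management tool can read it."""
    try:
        with open(path) as f:
            data = json.load(f)
    except FileNotFoundError:
        data = {}
    data[fan_type] = curve
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

# Example usage with a stand-in measurement function:
# curve = build_reference_curve(lambda rpm: rpm * 0.97, range(0, 12001, 1000))
# store_reference_data("example-fan-model", curve)
```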
- FIG. 6 depicts a flowchart 600 of a method for automatically detecting a pressure anomaly within a data center in accordance with an embodiment. The method of flowchart 600 may be performed, for example, by data center management tool 310 of FIG. 3 or data center management tool 410 of FIG. 4 and therefore will be described herein with continued reference to those embodiments. However, the method is not limited to those embodiments. - As shown in
FIG. 6, the method of flowchart 600 begins at step 602, during which each of a plurality of fans used to dissipate heat generated by one or more servers in a data center is monitored to obtain data that indicates how an actual speed of each of the fans relates to a target speed of each of the fans. The fans may be, for example, server fans and/or blade server chassis fans. This step may be performed, for example, by fan monitoring component 312 of data center management tool 310 (as described above in reference to FIG. 3) or fan monitoring component 412 of data center management tool 410 (as described above in reference to FIG. 4). As was previously described, these components can collect such data from data center management agents executing on the servers or blade server chassis that house such servers. As was also previously described, such data may be stored as real-time fan data 314, 414. - At
step 604, the data obtained during step 602 is compared to reference data that indicates, for each of the plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment. This step may be performed, for example, by pressure anomaly detection component 318 of data center management tool 310 (as described above in reference to FIG. 3) or pressure anomaly detection component 418 of data center management tool 410 (as described above in reference to FIG. 4). This step may comprise, for example, comparing real-time fan data 314 to fan reference data 316 or comparing real-time fan data 414 to fan reference data 416. - At
step 606, based at least on the comparison conducted during step 604, it is determined whether a pressure anomaly exists in the data center. Like step 604, this step may also be performed, for example, by pressure anomaly detection component 318 or pressure anomaly detection component 418. - The comparing carried out in
step 604 between the data obtained during step 602 and the reference data may comprise, for example, determining a measure of difference or deviation between an actual-to-target speed relationship specified by the data obtained during step 602 and an actual-to-target speed relationship specified by the reference data. For example, if a particular degree of deviation between the obtained data actual-to-target speed relationship and the reference data actual-to-target speed relationship is observed, or if a particular pattern of deviation is observed over time, then pressure anomaly detection component 318, 418 may determine that a pressure anomaly exists. By way of example, in a scenario in which a positive pressure is building up in a hot aisle containment unit relative to one or more adjacent cold aisles, one might expect to see that the actual speed achieved by fans blowing heated air into the hot aisle containment unit for a given target speed will be lower than that obtained for the same target speed in a substantially pressure-neutral environment.
- In one embodiment, pressure anomaly detection component 318, 418 may determine that a pressure anomaly exists if the measure of difference for a particular number of the fans exceeds a particular threshold. This approach recognizes that a pressure anomaly such as that described above (i.e., a positive pressure building up in a hot aisle containment unit relative to one or more adjacent cold aisles) may be likely to significantly impact the behavior of a large number of fans. For example, if an N% or greater deviation from a reference actual-to-target speed relationship is observed for M% or greater of the monitored fan population, then pressure anomaly detection component 318, 418 may determine that a pressure anomaly exists. In addition to the foregoing, pressure anomaly detection component 318, 418 may consider the proximity or location of the fans for which deviations from a reference actual-to-target speed relationship are being reported.
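As a non-limiting illustration of the comparison in steps 604 and 606, the sketch below computes each fan's deviation from its reference actual-to-target relationship and declares an anomaly when at least M% of the monitored fans deviate by N% or more; the 10% and 30% threshold values and the function names are assumptions, not values from the disclosure.

```python
# Illustrative sketch only: compare live fan readings against reference data
# and flag a pressure anomaly when enough fans deviate by enough. The 10% and
# 30% thresholds are assumed example values, not taken from the patent.

def expected_actual_rpm(reference_curve, target_rpm):
    """Look up the pressure-neutral actual speed for the nearest recorded target speed."""
    nearest = min(reference_curve, key=lambda t: abs(t - target_rpm))
    return reference_curve[nearest]

def deviation_fraction(reference_curve, target_rpm, actual_rpm):
    """Relative difference between the observed and reference actual speed."""
    expected = expected_actual_rpm(reference_curve, target_rpm)
    if expected == 0:
        return 0.0
    return abs(actual_rpm - expected) / expected

def pressure_anomaly_exists(samples, reference_curves,
                            deviation_threshold=0.10,    # "N%" = 10%, assumed
                            population_threshold=0.30):  # "M%" = 30%, assumed
    """samples: iterable of (fan_type, target_rpm, actual_rpm) tuples."""
    samples = list(samples)
    if not samples:
        return False
    deviating = sum(
        1 for fan_type, target, actual in samples
        if deviation_fraction(reference_curves[fan_type], target, actual) >= deviation_threshold
    )
    return deviating / len(samples) >= population_threshold
```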
- FIG. 7 depicts a flowchart 700 of a method for automatically taking actions in response to the detection of a pressure anomaly within a data center in accordance with an embodiment. The method of flowchart 700 may be performed, for example, by data center management tool 310 of FIG. 3 or data center management tool 410 of FIG. 4 and therefore will be described herein with continued reference to those embodiments. However, the method is not limited to those embodiments. - As shown in
FIG. 7, the method of flowchart 700 begins at step 702, in which it is determined that a pressure anomaly exists in the data center. This step is analogous to step 606 of flowchart 600 and thus may be performed in a manner described above with reference to that flowchart. Step 702 may be performed, for example, by pressure anomaly detection component 318 or pressure anomaly detection component 418. - At
step 704, in response to the determination in step 702 that a pressure anomaly exists in the data center, one or more actions are selectively performed. This step may be performed, for example, by pressure anomaly response component 320 of data center management tool 310 (as described above in reference to FIG. 3) or pressure anomaly response component 420 of data center management tool 410 (as described above in reference to FIG. 4). -
Steps 706, 708 and 710 show various types of actions that may be selectively performed in response to the determination that a pressure anomaly exists. Each of these steps may be carried out in isolation or in conjunction with one or more other steps. - In
step 706, an alert is generated. This alert may be audible, visible and/or haptic in nature. The alert may be generated, for example, via a user interface of computing device 302, computing device 402, or via a user interface of a computing device that is communicatively connected thereto. The alert may be recorded in a log. The alert may also be transmitted to another device or to a user in the form of a message, e-mail or the like. By generating an alert in this manner, data center personnel can be notified of the pressure anomaly as soon as it is detected, thereby enabling them to take steps to help remediate the issue.
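A brief, non-limiting sketch of what step 706 could look like follows; the logger configuration, message format, and the notify() stub standing in for e-mail or messaging delivery are assumptions for illustration.

```python
# Illustrative sketch only: record a pressure-anomaly alert in a log and
# fan it out to operators. The notify() stub and message format are assumed.
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("pressure_anomaly")

def notify(recipient, message):
    """Stand-in for e-mail/IM delivery to data center personnel."""
    print(f"to {recipient}: {message}")

def raise_pressure_alert(location, affected_fans, recipients):
    message = (f"Pressure anomaly detected near {location}; "
               f"{len(affected_fans)} fans deviating from reference behavior.")
    log.warning(message)                    # recorded in a log
    for recipient in recipients:
        notify(recipient, message)          # transmitted as a message/e-mail
```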
- In step 708, the manner of operation of at least one of the fans used to cool the servers is modified. For example, in an embodiment, pressure anomaly response component 320 may send commands to server operation management components 342 executing on servers 306 1-306 N to cause the speed of certain server fans that are determined to be associated with the pressure anomaly to be reduced. Likewise, pressure anomaly response component 420 may send commands to BSC operation management components 442 executing on blade server chassis 406 1-406 N to cause the speed of certain blade server chassis fans that are determined to be associated with the pressure anomaly to be reduced. This may have the effect of reducing the pressure within a hot aisle containment unit toward which the fans are blowing heated air, thereby helping to remediate the pressure anomaly.
- In an embodiment, after pressure anomaly response component 320, 420 reduces the speed of one or more fans, pressure anomaly response component 320, 420 may also begin to monitor the temperature of internal server components via data center management agents 336, 436 (or increase the rate at which such information is reported) so that pressure anomaly response component 320, 420 can determine whether the reduction in fan speed will cause those components to exceed specified thermal limits and potentially be damaged. If pressure anomaly response component 320, 420 determines that this is the case, then it may take additional steps, such as increasing fan speeds or shutting down one or more servers.
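The guard loop described in the preceding paragraph might be sketched as follows; this is a non-limiting illustration in which the thermal limit, polling interval, and callback names are all assumed.

```python
# Illustrative sketch only: after lowering fan speeds, watch component
# temperatures and back off (or shut down) if thermal limits are approached.
# The 85 C limit, 10 s poll interval, and callback names are assumptions.
import time

def reduce_fans_with_thermal_guard(set_fan_speed, read_temps_c, restore_fan_speed,
                                   shutdown_server, reduced_rpm=4000,
                                   thermal_limit_c=85.0, poll_s=10, max_polls=30):
    set_fan_speed(reduced_rpm)                 # remediation: slow the fans
    for _ in range(max_polls):
        temps = read_temps_c()                 # per-component temperatures
        if max(temps) >= thermal_limit_c:
            restore_fan_speed()                # first fallback: raise fan speed again
            if max(read_temps_c()) >= thermal_limit_c:
                shutdown_server()              # last resort if still too hot
            return False                       # fan-speed remediation abandoned
        time.sleep(poll_s)
    return True                                # speeds stayed reduced within limits
```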
step 710, the manner of operation of at least one of the servers in the data center is modified. For example, in an embodiment, pressure 310, 410 may interact with data center management agents to shut down one or more of the servers that are determined to be impacted by the pressure anomaly. As another example, to ensure that customer service level agreements (SLAs) are satisfied, pressureanomaly response component 310, 410 may operate to migrate one or more customer workflows from servers that are determined to be impacted by the pressure anomaly to servers that are not. As yet another example, pressureanomaly response component 310, 410 may interact with data center management agents to reduce an amount of power supplied to one or more internal components of a server that is determined to be impacted by the pressure anomaly, which can have the effect of reducing the temperature of such internal components.anomaly response component - The foregoing are only some examples of the steps that may be taken by pressure
- The foregoing are only some examples of the steps that may be taken by pressure anomaly response component 320, 420 to try to remediate a detected pressure anomaly. Since such steps can be carried out automatically, they can help remediate a pressure anomaly before equipment damage occurs and without requiring intervention by data center personnel. -
FIG. 8 depicts an example processor-based computer system 800 that may be used to implement various embodiments described herein. For example, computer system 800 may be used to implement computing device 302, any of servers 306 1-306 N, computing device 402, blade server chassis 406 1-406 N, or any of the blade servers installed therein. Computer system 800 may also be used to implement any or all of the steps of any or all of the flowcharts depicted in FIGS. 5-7. The description of computer system 800 is provided herein for purposes of illustration, and is not intended to be limiting. Embodiments may be implemented in further types of computer systems, as would be known to persons skilled in the relevant art(s). - As shown in
FIG. 8, computer system 800 includes a processing unit 802, a system memory 804, and a bus 806 that couples various system components including system memory 804 to processing unit 802. Processing unit 802 may comprise one or more microprocessors or microprocessor cores. Bus 806 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. System memory 804 includes read only memory (ROM) 808 and random access memory (RAM) 810. A basic input/output system 812 (BIOS) is stored in ROM 808. -
Computer system 800 also has one or more of the following drives: a hard disk drive 814 for reading from and writing to a hard disk, a magnetic disk drive 816 for reading from or writing to a removable magnetic disk 818, and an optical disk drive 820 for reading from or writing to a removable optical disk 822 such as a CD ROM, DVD ROM, BLU-RAY™ disk or other optical media. Hard disk drive 814, magnetic disk drive 816, and optical disk drive 820 are connected to bus 806 by a hard disk drive interface 824, a magnetic disk drive interface 826, and an optical drive interface 828, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computer. Although a hard disk, a removable magnetic disk and a removable optical disk are described, other types of computer-readable memory devices and storage structures can be used to store data, such as flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROM), and the like. - A number of program modules may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. These program modules include an
operating system 830, one or more application programs 832, other program modules 834, and program data 836. In accordance with various embodiments, the program modules may include computer program logic that is executable by processing unit 802 to perform any or all of the functions and features of computing device 302, any of servers 306 1-306 N, computing device 402, blade server chassis 406 1-406 N, or any of the blade servers installed therein, as described above. The program modules may also include computer program logic that, when executed by processing unit 802, performs any of the steps or operations shown or described in reference to the flowcharts of FIGS. 5-7. - A user may enter commands and information into
computer system 800 through input devices such as a keyboard 838 and a pointing device 840. Other input devices (not shown) may include a microphone, joystick, game controller, scanner, or the like. In one embodiment, a touch screen is provided in conjunction with a display 844 to allow a user to provide user input via the application of a touch (as by a finger or stylus, for example) to one or more points on the touch screen. These and other input devices are often connected to processing unit 802 through a serial port interface 842 that is coupled to bus 806, but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB). Such interfaces may be wired or wireless interfaces. - A
display 844 is also connected to bus 806 via an interface, such as a video adapter 846. In addition to display 844, computer system 800 may include other peripheral output devices (not shown) such as speakers and printers. -
Computer system 800 is connected to a network 848 (e.g., a local area network or wide area network such as the Internet) through a network interface or adapter 850, a modem 852, or other suitable means for establishing communications over the network. Modem 852, which may be internal or external, is connected to bus 806 via serial port interface 842. - As used herein, the terms "computer program medium," "computer-readable medium," and "computer-readable storage medium" are used to generally refer to memory devices or storage structures such as the hard disk associated with
hard disk drive 814, removable magnetic disk 818, removable optical disk 822, as well as other memory devices or storage structures such as flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROM), and the like. Such computer-readable storage media are distinguished from and non-overlapping with communication media (do not include communication media). Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wireless media such as acoustic, RF, infrared and other wireless media. Embodiments are also directed to such communication media. - As noted above, computer programs and modules (including
application programs 832 and other program modules 834) may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. Such computer programs may also be received via network interface 850, serial port interface 842, or any other interface type. Such computer programs, when executed or loaded by an application, enable computer system 800 to implement features of embodiments of the present invention discussed herein. Accordingly, such computer programs represent controllers of computer system 800. - Embodiments are also directed to computer program products comprising software stored on any computer useable medium. Such software, when executed in one or more data processing devices, causes a data processing device(s) to operate as described herein. Embodiments of the present invention employ any computer-useable or computer-readable medium, known now or in the future. Examples of computer-readable mediums include, but are not limited to, memory devices and storage structures such as RAM, hard drives, floppy disks, CD ROMs, DVD ROMs, zip disks, tapes, magnetic storage devices, optical storage devices, MEMs, nanotechnology-based storage devices, and the like.
- In alternative implementations,
computer system 800 may be implemented as hardware logic/electrical circuitry or firmware. In accordance with further embodiments, one or more of these components may be implemented in a system-on-chip (SoC). The SoC may include an integrated circuit chip that includes one or more of a processor (e.g., a microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or further circuits and/or embedded firmware to perform its functions. - A method that is performed by data center management software executing on at least one computer is described herein. In accordance with the method, each of a plurality of fans used to dissipate heat generated by one or more servers in a data center is monitored to obtain data that indicates how an actual speed of each of the fans relates to a target speed of each of the fans. The obtained data is then compared to reference data that indicates, for each of the plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment. Based on the comparison, it is determined that a pressure anomaly exists in the data center. Based on the determination that the pressure anomaly exists in the data center, one or more of the following is performed: generating an alert and modifying a manner of operation of one or more of at least one of the fans and at least one of the servers.
- In an embodiment of the foregoing method, the plurality of fans comprise one or more of a server fan and a blade chassis fan.
- In another embodiment of the foregoing method, each of the plurality of fans is configured to blow air into a hot aisle containment unit.
- In yet another embodiment of the foregoing method, modifying the manner of operation of at least one of the fans comprises reducing a speed of at least one of the fans. In further accordance with such an embodiment, the method may further include monitoring a temperature of one or more internal components of one or more of the servers responsive to reducing the speed of the at least one of the fans.
- In still another embodiment of the foregoing method, modifying the manner of operation of at least one of the servers comprises migrating a customer workflow from at least one of the servers.
- In a further embodiment of the foregoing method, modifying the manner of operation of at least one of the servers comprises shutting down at least one of the servers.
- In a still further embodiment of the foregoing method, modifying the manner of operation of at least one of the servers comprises reducing an amount of power supplied to one or more internal components of one or more of the servers.
- In an additional embodiment of the foregoing method, comparing the obtained data to the reference data comprises determining, for each of the fans, a measure of difference between an actual-to-target speed relationship specified by the obtained data and an actual-to-target speed relationship specified by the reference data. In further accordance with such an embodiment, determining that the pressure anomaly exists in the data center based on the comparison may comprise determining that the measure of difference for a particular number of the fans exceeds a particular threshold.
- A system is also described herein. The system includes at least one processor and a memory. The memory stores computer program logic for execution by the at least one processor. The computer program logic includes one or more components configured to perform operations when executed by the at least one processor. The one or more components include a fan monitoring component, a pressure anomaly detection component and a pressure anomaly response component. The fan monitoring component is operable to monitor each of a plurality of fans used to dissipate heat generated by one or more servers in a data center to obtain data that indicates how an actual speed of each of the fans relates to a target speed of each of the fans. The pressure anomaly detection component is operable to compare the obtained data to reference data that indicates, for each of the plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment and, based on the comparison, determine that a pressure anomaly exists in the data center. The pressure anomaly response component is operable to perform one or more of the following in response to a determination that the pressure anomaly exists: (i) generate an alert and (ii) modify a manner of operation of one or more of at least one of the fans and at least one of the servers.
- In an embodiment of the foregoing system, the pressure anomaly response component is operable to modify the manner of operation of at least one of the fans by reducing a speed of at least one of the fans. In further accordance with such an embodiment, the pressure anomaly response component may be further operable to monitor a temperature of one or more internal components of one or more of the servers responsive to reducing the speed of the at least one of the fans.
- In another embodiment of the foregoing system, the pressure anomaly response component is operable to modify the manner of operation of at least one of the servers by migrating at least one service or resource from at least one of the servers.
- In yet another embodiment of the foregoing system, the pressure anomaly response component is operable to modify the manner of operation of at least one of the servers by shutting down at least one of the servers.
- In still another embodiment of the foregoing system, the pressure anomaly response component is operable to modify the manner of operation of at least one of the servers by reducing an amount of power supplied to one or more internal components of one or more of the servers.
- In a further embodiment of the foregoing system, the pressure anomaly detection component is operable to compare the obtained data to the reference data by determining, for each of the fans, a measure of difference between an actual-to-target speed relationship specified by the obtained data and an actual-to-target speed relationship specified by the reference data. In further accordance with such an embodiment, the pressure anomaly detection component may be operable to determine that the pressure anomaly exists in the data center based on the comparison by determining that the measure of difference for a particular number of the fans exceeds a particular threshold.
- A computer program product is also described herein. The computer program product comprises a computer-readable memory having computer program logic recorded thereon that when executed by at least one processor causes the at least one processor to perform a method that includes: monitoring each of a plurality of fans used to dissipate heat generated by one or more servers in a data center to obtain data that indicates how an actual speed of each of the fans relates to a target speed of each of the fans; determining that a pressure anomaly exists in the data center based on at least the obtained data; and based on the determination that the pressure anomaly exists in the data center, performing one or more of: generating an alert; and modifying a manner of operation of one or more of: at least one of the fans; and at least one of the servers.
- In one embodiment of the foregoing computer program product, determining that the pressure anomaly exists in the data center based on at least the obtained data comprises comparing the obtained data to reference data that indicates, for each of the plurality of fans, how an actual speed of the fan relates to a target speed of the fan in a substantially pressure-neutral environment.
- While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and details can be made therein without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
Cited By (75)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180103562A1 (en) * | 2013-03-15 | 2018-04-12 | Switch, Ltd. | Data center facility design configuration |
| US10353357B2 (en) * | 2015-06-23 | 2019-07-16 | Dell Products L.P. | Systems and methods for combined active and passive cooling of an information handling resource |
| US10298479B2 (en) * | 2016-05-09 | 2019-05-21 | Mitac Computing Technology Corporation | Method of monitoring a server rack system, and the server rack system |
| US20170359922A1 (en) * | 2016-06-14 | 2017-12-14 | Dell Products L.P. | Modular data center with passively-cooled utility module |
| US10736231B2 (en) * | 2016-06-14 | 2020-08-04 | Dell Products L.P. | Modular data center with passively-cooled utility module |
| US20190150313A1 (en) * | 2016-07-28 | 2019-05-16 | Suzhou A-Rack Enclosure Systems Co., Ltd. | Modular Computer Room for Servers |
| US20180232174A1 (en) * | 2017-02-15 | 2018-08-16 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Data Migration Between Cloud Storage Systems |
| US10613788B2 (en) * | 2017-02-15 | 2020-04-07 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Data migration between cloud storage systems |
| US12058160B1 (en) | 2017-11-22 | 2024-08-06 | Lacework, Inc. | Generating computer code for remediating detected events |
| US12335348B1 (en) | 2017-11-27 | 2025-06-17 | Fortinet, Inc. | Optimizing data warehouse utilization by a data ingestion pipeline |
| US12363148B1 (en) | 2017-11-27 | 2025-07-15 | Fortinet, Inc. | Operational adjustment for an agent collecting data from a cloud compute environment monitored by a data platform |
| US11973784B1 (en) | 2017-11-27 | 2024-04-30 | Lacework, Inc. | Natural language interface for an anomaly detection framework |
| US11991198B1 (en) | 2017-11-27 | 2024-05-21 | Lacework, Inc. | User-specific data-driven network security |
| US12021888B1 (en) | 2017-11-27 | 2024-06-25 | Lacework, Inc. | Cloud infrastructure entitlement management by a data platform |
| US12483576B1 (en) | 2017-11-27 | 2025-11-25 | Fortinet, Inc. | Compute resource risk mitigation by a data platform |
| US12034754B2 (en) | 2017-11-27 | 2024-07-09 | Lacework, Inc. | Using static analysis for vulnerability detection |
| US12034750B1 (en) | 2017-11-27 | 2024-07-09 | Lacework Inc. | Tracking of user login sessions |
| US12470577B1 (en) | 2017-11-27 | 2025-11-11 | Fortinet, Inc. | Kernel-based monitoring of container activity in a compute environment |
| US12095879B1 (en) | 2017-11-27 | 2024-09-17 | Lacework, Inc. | Identifying encountered and unencountered conditions in software applications |
| US12095796B1 (en) | 2017-11-27 | 2024-09-17 | Lacework, Inc. | Instruction-level threat assessment |
| US12095794B1 (en) | 2017-11-27 | 2024-09-17 | Lacework, Inc. | Universal cloud data ingestion for stream processing |
| US12120140B2 (en) | 2017-11-27 | 2024-10-15 | Fortinet, Inc. | Detecting threats against computing resources based on user behavior changes |
| US12126695B1 (en) | 2017-11-27 | 2024-10-22 | Fortinet, Inc. | Enhancing security of a cloud deployment based on learnings from other cloud deployments |
| US12126643B1 (en) | 2017-11-27 | 2024-10-22 | Fortinet, Inc. | Leveraging generative artificial intelligence (‘AI’) for securing a monitored deployment |
| US12130878B1 (en) | 2017-11-27 | 2024-10-29 | Fortinet, Inc. | Deduplication of monitored communications data in a cloud environment |
| US12206696B1 (en) | 2017-11-27 | 2025-01-21 | Fortinet, Inc. | Detecting anomalies in a network environment |
| US12244621B1 (en) | 2017-11-27 | 2025-03-04 | Fortinet, Inc. | Using activity monitored by multiple data sources to identify shadow systems |
| US12261866B1 (en) | 2017-11-27 | 2025-03-25 | Fortinet, Inc. | Time series anomaly detection |
| US12267345B1 (en) | 2017-11-27 | 2025-04-01 | Fortinet, Inc. | Using user feedback for attack path analysis in an anomaly detection framework |
| US12284197B1 (en) | 2017-11-27 | 2025-04-22 | Fortinet, Inc. | Reducing amounts of data ingested into a data warehouse |
| US12309185B1 (en) | 2017-11-27 | 2025-05-20 | Fortinet, Inc. | Architecture for a generative artificial intelligence (AI)-enabled assistant |
| US12309181B1 (en) | 2017-11-27 | 2025-05-20 | Fortinet, Inc. | Establishing a location profile for a user device |
| US12309182B1 (en) | 2017-11-27 | 2025-05-20 | Fortinet, Inc. | Customer onboarding and integration with anomaly detection systems |
| US12309236B1 (en) | 2017-11-27 | 2025-05-20 | Fortinet, Inc. | Analyzing log data from multiple sources across computing environments |
| US12323449B1 (en) | 2017-11-27 | 2025-06-03 | Fortinet, Inc. | Code analysis feedback loop for code created using generative artificial intelligence (‘AI’) |
| US12335286B1 (en) | 2017-11-27 | 2025-06-17 | Fortinet, Inc. | Compute environment security monitoring using data collected from a sub-kernel space |
| US12470578B1 (en) | 2017-11-27 | 2025-11-11 | Fortinet, Inc. | Containerized agent for monitoring container activity in a compute environment |
| US12341797B1 (en) | 2017-11-27 | 2025-06-24 | Fortinet, Inc. | Composite events indicative of multifaceted security threats within a compute environment |
| US12348545B1 (en) | 2017-11-27 | 2025-07-01 | Fortinet, Inc. | Customizable generative artificial intelligence (‘AI’) assistant |
| US12355787B1 (en) | 2017-11-27 | 2025-07-08 | Fortinet, Inc. | Interdependence of agentless and agent-based operations by way of a data platform |
| US12355626B1 (en) | 2017-11-27 | 2025-07-08 | Fortinet, Inc. | Tracking infrastructure as code (IaC) asset lifecycles |
| US12355793B1 (en) | 2017-11-27 | 2025-07-08 | Fortinet, Inc. | Guided interactions with a natural language interface |
| US12463995B1 (en) | 2017-11-27 | 2025-11-04 | Fortinet, Inc. | Tiered risk engine with user cohorts |
| US12368746B1 (en) | 2017-11-27 | 2025-07-22 | Fortinet, Inc. | Modular agentless scanning of cloud workloads |
| US12463997B1 (en) | 2017-11-27 | 2025-11-04 | Fortinet, Inc. | Attack path risk mitigation by a data platform using static and runtime data |
| US12368745B1 (en) | 2017-11-27 | 2025-07-22 | Fortinet, Inc. | Using natural language queries to conduct an investigation of a monitored system |
| US12375573B1 (en) | 2017-11-27 | 2025-07-29 | Fortinet, Inc. | Container event monitoring using kernel space communication |
| US12381901B1 (en) | 2017-11-27 | 2025-08-05 | Fortinet, Inc. | Unified storage for event streams in an anomaly detection framework |
| US12463996B1 (en) | 2017-11-27 | 2025-11-04 | Fortinet, Inc. | Risk engine that utilizes key performance indicators |
| US12401669B1 (en) | 2017-11-27 | 2025-08-26 | Fortinet, Inc. | Container vulnerability management by a data platform |
| US12407702B1 (en) | 2017-11-27 | 2025-09-02 | Fortinet, Inc. | Gathering and presenting information related to common vulnerabilities and exposures |
| US12407701B1 (en) | 2017-11-27 | 2025-09-02 | Fortinet, Inc. | Community-based generation of policies for a data platform |
| US12405849B1 (en) | 2017-11-27 | 2025-09-02 | Fortinet, Inc. | Transitive identity usage tracking by a data platform |
| US12418552B1 (en) | 2017-11-27 | 2025-09-16 | Fortinet, Inc. | Virtual data streams in a data streaming platform |
| US12418555B1 (en) | 2017-11-27 | 2025-09-16 | Fortinet Inc. | Guiding query creation for a generative artificial intelligence (AI)-enabled assistant |
| US12425428B1 (en) | 2017-11-27 | 2025-09-23 | Fortinet, Inc. | Activity monitoring of a cloud compute environment based on container orchestration data |
| US12425430B1 (en) | 2017-11-27 | 2025-09-23 | Fortinet, Inc. | Runtime workload data-based modification of permissions for an entity |
| US12445474B1 (en) | 2017-11-27 | 2025-10-14 | Fortinet, Inc. | Attack path risk mitigation by a data platform |
| US12452279B1 (en) | 2017-11-27 | 2025-10-21 | Fortinet, Inc. | Role-based permission by a data platform |
| US12452272B1 (en) | 2017-11-27 | 2025-10-21 | Fortinet, Inc. | Reducing resource consumption spikes in an anomaly detection framework |
| US12457231B1 (en) | 2017-11-27 | 2025-10-28 | Fortinet, Inc. | Initiating and utilizing pedigree for content |
| US12464003B1 (en) | 2017-11-27 | 2025-11-04 | Fortinet, Inc. | Capturing and using application-level data to monitor a compute environment |
| US12463994B1 (en) | 2017-11-27 | 2025-11-04 | Fortinet, Inc. | Handling of certificates by intermediate actors |
| US12395573B1 (en) | 2019-12-23 | 2025-08-19 | Fortinet, Inc. | Monitoring communications in a containerized environment |
| US12368747B1 (en) | 2019-12-23 | 2025-07-22 | Fortinet, Inc. | Using a logical graph to monitor an environment |
| US12032634B1 (en) | 2019-12-23 | 2024-07-09 | Lacework Inc. | Graph reclustering based on different clustering criteria |
| US10942195B1 (en) * | 2020-07-24 | 2021-03-09 | Core Scientific, Inc. | Measuring airflow for computing devices |
| US11092614B1 (en) * | 2020-07-24 | 2021-08-17 | Core Scientific, Inc. | Measuring airflow for computing devices |
| US12489770B1 (en) | 2022-08-31 | 2025-12-02 | Fortinet, Inc. | Agent-based monitoring of a registry space of a compute asset within a compute environment |
| US12495052B1 (en) | 2022-10-07 | 2025-12-09 | Fortinet, Inc. | Detecting package execution for threat assessments |
| CN115823716A (en) * | 2022-11-25 | 2023-03-21 | Gree Electric Appliances, Inc. of Zhuhai | Indoor static pressure adjusting method and device, electronic equipment and storage medium |
| US12489771B1 (en) | 2023-01-31 | 2025-12-02 | Fortinet, Inc. | Detecting anomalous behavior of nodes in a hierarchical cloud deployment |
| US12500910B1 (en) | 2023-03-31 | 2025-12-16 | Fortinet, Inc. | Interactive analysis of multifaceted security threats within a compute environment |
| US12500911B1 (en) | 2023-06-09 | 2025-12-16 | Fortinet, Inc. | Expanding data collection from a monitored cloud environment |
| US12500912B1 (en) | 2023-07-31 | 2025-12-16 | Fortinet, Inc. | Semantic layer for data platform |
Also Published As
| Publication number | Publication date |
|---|---|
| EP3213611B1 (en) | 2020-08-12 |
| WO2016069419A1 (en) | 2016-05-06 |
| CN107148811B (en) | 2019-12-03 |
| BR112017005622A2 (en) | 2017-12-19 |
| US10342162B2 (en) | 2019-07-02 |
| EP3213611A1 (en) | 2017-09-06 |
| CN107148811A (en) | 2017-09-08 |
Similar Documents
| Publication | Title |
|---|---|
| US10342162B2 (en) | Data center pressure anomaly detection and remediation |
| US10499540B2 (en) | Systems and methods for detecting impeded cooling air flow for information handling system chassis enclosures |
| US10519960B2 (en) | Fan failure detection and reporting |
| US10372575B1 (en) | Systems and methods for detecting and removing accumulated debris from a cooling air path within an information handling system chassis enclosure |
| JP6179196B2 (en) | Data center |
| JP6051829B2 (en) | Fan control device |
| US9110642B2 (en) | Optimization of system acoustic signature and cooling capacity with intelligent user controls |
| JP5691933B2 (en) | Air conditioning control method, air conditioning control system, and air conditioning control device |
| WO2010050080A1 (en) | Physical computer, method for controlling cooling device, and server system |
| US20090017816A1 (en) | Identification of equipment location in data center |
| US20150095270A1 (en) | Systems and methods for automated and real-time determination of optimum information handling system location |
| US10254807B2 (en) | Systems and methods for policy-based per-zone air mover management for offline management controller |
| CN111918518A (en) | Temperature control method and device and machine frame type equipment |
| CN105487567A (en) | Fan control method and network equipment |
| US10405461B2 (en) | Systems and methods for fan performance-based scaling of thermal control parameters |
| JP6589299B2 (en) | COOLING CONTROL DEVICE, CIRCUIT BOARD, COOLING METHOD, AND PROGRAM |
| CN113377188A (en) | Storage server temperature control method, device and equipment |
| CN115666097A (en) | Computer room temperature control method and device, storage medium and electronic equipment |
| JP5568535B2 (en) | Data center load allocation method and information processing system |
| US8903565B2 (en) | Operating efficiency of a rear door heat exchanger |
| US11765870B2 (en) | Software-defined infrastructure for identifying and remediating an airflow deficiency scenario on a rack device |
| CN100440095C (en) | Temperature control method |
| US9690339B2 (en) | Systems and methods for providing user-visible thermal performance degradation monitoring in an information handling system |
| CN114828579A (en) | Energy-saving control method of container data center and related equipment |
| US20240292565A1 (en) | Monitoring closed loop liquid air assisted cooling module performance in real time |
Legal Events
| Code | Title | Description |
|---|---|---|
| AS | Assignment | Owner name: MICROSOFT CORPORATION, WASHINGTON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MYRAH, MICHAEL G.;EASON, MATTHEW J.;SIGNING DATES FROM 20141020 TO 20141021;REEL/FRAME:034038/0284 |
| AS | Assignment | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034819/0001. Effective date: 20150123 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |