Dell High Temperature Equipment Highlights
- Raises the high temperature excursion limit to 45C for new servers, storage and switches
- Will help cut the time chillers run in data centres and associated Op Ex costs
- Supports the aims of the European Union Code of Conduct for Data Centres
- Conforms to the ASHRAE A3 and A4 classifications
- Is the first to take advantage of Intel's revised warranty terms for short-term operation at higher temperatures
Customers Spend More On Cooling In Hot Countries
Ambient air temperature is very important when choosing data centre locations, and there are significant advantages to building in cooler climates – in Scandinavia, Iceland and Canada, for instance. Standard servers, storage and switches are designed to run in temperatures up to 27C, with high temperature excursions up to 35C, which means using expensive chillers whenever summer heat rules out fresh air cooling. Countries closer to the equator can be very hot – in Dubai, for instance, the government stops TV companies from reporting temperatures over 45C. There are therefore significant cost advantages to running data centres in colder regions. The general rule of thumb that it takes roughly as much electricity to cool IT equipment as to run it still holds; however, the amount of time you need to run chillers is governed by the ambient air temperature.
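To make that rule of thumb concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it – the 500kW IT load, the £0.10/kWh tariff and the synthetic temperature profile – is an illustrative assumption rather than vendor data; the point is simply how the annual chiller hours, and hence the cooling electricity bill, fall as the equipment temperature limit rises from 27C through 35C to 45C.

```python
import math

def chiller_hours_and_cost(ambient_temps_c, limit_c, it_load_kw, price_per_kwh):
    """Count the hours when ambient air exceeds the equipment limit (forcing chiller use)
    and price the chiller electricity, assuming cooling draws roughly as much power
    as the IT load while the chillers run (the rule of thumb above)."""
    hours = sum(1 for t in ambient_temps_c if t > limit_c)
    return hours, hours * it_load_kw * price_per_kwh

# Made-up year of hourly ambient temperatures peaking at about 38C in summer;
# a real study would use climate data for the candidate site.
ambient = [18 + 20 * math.sin(math.pi * h / 8760) for h in range(8760)]

for limit_c in (27, 35, 45):  # standard limit, common excursion limit, Dell's new excursion limit
    hours, cost = chiller_hours_and_cost(ambient, limit_c, it_load_kw=500, price_per_kwh=0.10)
    print(f"{limit_c}C limit: {hours:5d} chiller hours/year, ~£{cost:,.0f} in cooling electricity")
```

On this made-up profile the 45C limit never forces the chillers on at all, which is exactly the scenario the higher-temperature kit is aiming at.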
Data Centre Equipment Is Designed To Deliver Maximum Performance Per Watt
From a poor start, the last 10 years have seen ICT suppliers working tirelessly to improve the efficiency of their products. Intel, for instance, reports that over a six-year period the efficiency of its chips has improved 60-fold, while performance has increased 35-fold.
Chips reached the highest practical clock speeds they could run at (around 3.5GHz) some time ago, forcing chip designers to rethink their strategies. They decided to put more cores and processors on single chips and to incorporate previously discrete components such as graphics and north bridge circuitry, as in AMD’s APUs and Intel’s SoC designs.
Currently, equipment designers require narrow power envelopes for mobile devices, necessitating fanless processors for smartphones and tablets – an area where ARM excels. Data centre computing equipment, however, is currently designed to deliver the maximum performance per watt, which means running the most powerful processors at the highest speeds.
There are arguments for lower powered equipment. For instance:
- Musicians used Atari ST micros, which had no internal fans, as sequencers in the 1980s
- IBM deliberately chose RISC processors with low clock speeds for its massively parallel Blue Gene machine
- A number of vendors have introduced micro servers over the last year – machines built around multiple low-powered processors
However, those early sequencers were very basic, and the power IBM saved on processors was spent on its system’s power-hungry interconnects. Micro servers are likely to face the same challenges.
The drive for data centres to use the fastest and most powerful machines is natural, as higher density means more ‘bang per buck’ – and in consequence data centres get hotter.
Dell’s Idea – Raise The Operational Temperature
Enabled in part by Intel’s decision to warranty short-term operation at temperatures outside its own standard specifications, Dell is the first vendor to introduce servers – part of its PowerEdge product line – that can operate at up to 45C during high temperature excursions. Similar design changes allow it to add certain PowerConnect switches and EqualLogic storage devices to its high temperature portfolio. This is a very different approach from the energy efficiency work all vendors have been engaged in: the advantage lies not in reducing the electricity needed to run the new equipment, but in reducing the cost of cooling the data centres that house it.
HP’s data centre at Wynyard in northern Britain uses fresh air cooling, facilitated by its location in a massive disused retail warehouse. To safeguard operations in unusual temperatures, or in case the fresh air cooling had to be bypassed because of a local fire, HP installed conventional chillers at a cost of around £4m – however, it expects to run them for only about 10 hours a year, just to confirm its business continuity strategy. In other words it is making significant Op Ex savings, but could not avoid the Cap Ex. If and when it follows Dell in deploying more temperature-tolerant kit, it will be able to make similar savings while building in locations closer to the equator than Wynyard.
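As a rough illustration of what that Op Ex saving can look like – none of these figures are HP’s; the 2MW chiller draw, £0.10/kWh tariff and 2,000 hours of conventional chiller running are assumptions, and only the roughly 10 hours a year comes from the Wynyard example above – consider this small sketch:

```python
# Hypothetical comparison: a large facility whose chiller plant draws ~2MW when running.
chiller_kw, price_per_kwh = 2000, 0.10
for label, hours_per_year in (("conventional chiller-cooled site", 2000),
                              ("fresh-air site (Wynyard-style)", 10)):
    cost = hours_per_year * chiller_kw * price_per_kwh
    print(f"{label}: ~£{cost:,.0f} a year in chiller electricity")
```

The gap between the two lines is the recurring saving; the £4m of chiller plant, by contrast, has to be bought either way.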
Regional Perspectives On Higher Operational Temperatures
Dell has an advantage in meeting a number of requirements in Europe, America and Japan for higher temperature data centres. In particular:
- The European Union Code of Conduct for Data Centres encourages equipment manufacturers to raise the operational temperatures of their machines in order to increase the number of chiller-less data centres across the EU; this applies to both telecom and IT facilities
- In the USA Dell’s new products meet the ASHRAE A3 and A4 classifications, which require equipment to tolerate wider temperature ranges than the earlier classes; Dell believes this will extend the area of the country in which chiller-less facilities can be built in future
- Following the great earthquake in Japan, power utilities have imposed temporary limits on power usage, giving an advantage to data centres that spend less Op Ex on cooling
Of course Dell’s initial move will not allow many customers to take much advantage yet, since much of its own equipment – and almost all of its competitors’ – is still restricted to lower temperature limits. The new high temperature models represent only 3 of the 20 or so servers in its line, although those models account for about 70% of current sales.
Some Conclusions – Savings In Chiller Op Ex – Not Cap Ex – For Now
I really like Dell’s data centre strategy, and this latest move is both innovative and important. It is unlikely to lead to a mass building of chiller-less facilities, even when most other vendors follow suit. Even in northern climates, assuming that the ambient temperature will never exceed 45C would be dangerous, even if it has never happened before – just remember the assumptions behind the tsunami barriers at Japanese nuclear power stations. Data centre designers are a cautious, conservative breed.
Dell’s move will not lead to lower electricity usage by IT equipment, or even a reduction in the time internal fans need to run. However, it will lead to significant electricity savings by reducing the time data centres need to run their chillers – so this is an Op Ex saving for now. Eventually higher temperature equipment should reduce the number of chillers needed, although current facilities often require double the investment because of the need for redundancy.
Electricity and energy efficiency are becoming vital issues as electricity prices soar, power utilities reach maximum capacity and governments introduce carbon taxes. Advanced organisations have merged IT and facilities budgets in order to address them, although in general most data centres pay little attention to electricity costs. Even when products like these reduce cooling costs, the comparative advantage of distance from the equator will of course be preserved – if chiller-less facilities come at all, they will be built in the coldest climates.
Eventually we are likely to see more permanent requirements like those introduced temporarily in Japan. In talking to Lotus Renault recently I discovered the restrictions the F1 governing body has placed on CFD processing – a move which has led to the introduction of a more efficient supercomputer with fewer MFLOPs than its previous system. This suggests that in future data centre efficiency will also encompass the need to reduce the overall power envelope. For now most design assumptions are based on ‘maximum performance per watt’, and so Dell’s announcement is highly relevant.
We’re always on the lookout for ways of reducing data centre costs. Please let us know your approach by commenting on this post.