Around 10 years ago, the typical data centre had a power usage effectiveness (PUE) rating between 1.8 and 1.9, according to Schneider Electric data centres general manager Andrew Kirker.
PUE is the total energy consumption of a data centre divided by the energy consumed by the IT equipment it contains. So the closer a data centre's PUE gets to 1.0, the less energy is being "wasted" on cooling, lighting, power delivery and so on.
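As a worked illustration, here is a minimal sketch in Python of how PUE falls out of two meter readings. The figures are hypothetical, chosen simply to match the decade-old 1.9 figure Kirker cites.

```python
# Minimal sketch of the PUE calculation, using hypothetical meter readings.
total_facility_kwh = 1_900_000  # IT load plus cooling, lighting, power delivery
it_equipment_kwh = 1_000_000    # servers, storage and network gear only

pue = total_facility_kwh / it_equipment_kwh
print(f"PUE = {pue:.2f}")  # 1.90, i.e. a typical facility of a decade ago

# At PUE 1.9 this site spends 900,000 kWh on overhead; at PUE 1.2 the
# same IT load would need only 200,000 kWh of overhead.
overhead_kwh = total_facility_kwh - it_equipment_kwh
print(f"Overhead = {overhead_kwh:,} kWh")
```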
Today, a PUE of 1.2 to 1.3 is achievable, mainly due to three things, Kirker told iTWire.
Ashrae (formerly the American Society of Heating, Refrigerating and Air-Conditioning Engineers; the organisation now goes by the acronym alone, partly to reflect its worldwide membership) changed its data centre guidelines about a decade ago. Where recommended server inlet air temperatures were once around 20-23C, temperatures up to 27C are now acceptable.
Combined with looser humidity requirements, this makes it easier to use free cooling (essentially a method of taking advantage of cooler outside air rather than relying solely on evaporative or refrigerated cooling).
Secondly, more effective hot/cold air containment within data centres improves air conditioning efficiency and, now that it is almost universally adopted, supports much more widespread use of free cooling.
Thirdly, the efficiency of uninterruptible power supplies has improved. Kirker pointed particularly to new technology introduced by Schneider Electric over the last year or two that can provide about 15% better efficiency.
This technology is coming to equipment aimed at smaller sites this year, he added.
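As a hypothetical back-of-envelope sketch of what a gain like that is worth, here "15% better" is read as a relative improvement, lifting an assumed 85%-efficient unit to roughly 98%; the load figure is also assumed.

```python
# Back-of-envelope sketch of what a UPS efficiency gain is worth.
# All figures are hypothetical, and UPS losses also add to the cooling
# load, so the real saving feeds back into PUE as well.

it_load_kw = 500        # assumed steady IT load drawn through the UPS
hours_per_year = 8760

def annual_loss_kwh(efficiency: float) -> float:
    # Input power = load / efficiency; the difference is lost as heat.
    return (it_load_kw / efficiency - it_load_kw) * hours_per_year

old_loss = annual_loss_kwh(0.85)  # assumed older unit
new_loss = annual_loss_kwh(0.98)  # assumed newer unit

print(f"Old UPS losses: {old_loss:,.0f} kWh/year")
print(f"New UPS losses: {new_loss:,.0f} kWh/year")
print(f"Saved:          {old_loss - new_loss:,.0f} kWh/year")
```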
Not all sites are taking advantage of these improvements. While PUEs between 1.2 and 1.3 can be achieved, 1.5 to 1.6 is still quite common, Kirker observed.
Payback times for UPS upgrades are reasonably short, but "modernising cooling systems is a bit trickier". Free cooling is a significant feature of many new data centres as it is viable in several Australian cities.
It's not necessary to go all-in with free cooling, as it can be used in conjunction with traditional cooling. Data centres in Canberra can operate on free cooling alone during the cooler months — especially overnight — and, in any case, free cooling can play a part in ambient temperatures up to 20C or so, he said.
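Here is a toy sketch in Python of that changeover logic. The 20C free-cooling ceiling and the 27C Ashrae inlet limit come from the article; the mode boundaries and the mixed-mode behaviour are assumptions, and real controllers also weigh humidity and supply-air setpoints.

```python
# Toy economiser changeover sketch. The 20C free-cooling ceiling and
# 27C inlet limit come from the article; everything else is assumed.

FREE_COOLING_MAX_C = 20.0  # ambient at or below this: outside air alone
INLET_MAX_C = 27.0         # upper end of the Ashrae recommended range

def cooling_mode(ambient_c: float) -> str:
    if ambient_c <= FREE_COOLING_MAX_C:
        return "free cooling"   # outside air alone holds the setpoint
    if ambient_c < INLET_MAX_C:
        return "mixed"          # outside air assists mechanical cooling
    return "mechanical"         # refrigerated/evaporative cooling only

for temp_c in (4.0, 18.5, 24.0, 33.0):
    print(f"{temp_c:5.1f}C -> {cooling_mode(temp_c)}")
```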
Another side to the problem of data centre power consumption is that "an absolute tranche" of new workloads is increasing demand for data centre capacity. These workloads include cryptocurrency and blockchain applications, AI (including autonomous vehicles) and IoT.
Cryptocurrencies use a "freakish" amount of energy, he said, with bitcoin mining alone estimated to account for around 0.2% of the world's electricity consumption.
(Australian efforts seem to be focused on reducing the cost of electricity to cryptocurrency miners, for example by selling electricity directly from a Hunter Valley power station to miners, rather than on finding ways to reduce the amount of electricity required.)
On the reliability side, Kirker expects to see growing use of Li-ion batteries in data centres, rather than lead-acid batteries and generator sets, due to their dramatically falling cost, longer life, greater reliability and higher density. Generator sets will still be used in larger data centres, but Li-ion batteries will provide backup power for small and medium centres, he predicted.
He also expects to see more interaction between data centres and the electricity grid. In addition to pressure on organisations to use more electricity generated from renewables (Greenpeace has observed "a significant increase in the prioritisation of renewables among some of the largest Internet companies"), Kirker expects to see data centres become part of microgrids, either supplying electricity from their generator sets or relying on their batteries during times of peak demand.
Furthermore, the increasing adoption of software-defined technologies makes it easier to move workloads around, and this helps run data centres more efficiently, for example by adopting a "follow the moon" model to take advantage of cheaper cooling at night.
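A toy sketch of the "follow the moon" idea: at each scheduling interval, run movable workloads at a site where it is currently night and cooling is cheaper. The site list, fixed UTC offsets and night-hours window are all illustrative; a real scheduler would use proper timezone data, live energy prices and capacity constraints.

```python
# Toy "follow the moon" placement sketch: prefer a site where it is
# currently night (cooler ambient air, cheaper cooling). Sites and
# their fixed UTC offsets are hypothetical.

from datetime import datetime, timezone

SITES = {"sydney": 10, "london": 0, "virginia": -5}  # name -> UTC offset

def is_night(utc_now: datetime, utc_offset_hours: int) -> bool:
    local_hour = (utc_now.hour + utc_offset_hours) % 24
    return local_hour >= 22 or local_hour < 6  # assumed night window

def pick_site(utc_now: datetime) -> str:
    for name, offset in SITES.items():
        if is_night(utc_now, offset):
            return name            # first site currently in night hours
    return next(iter(SITES))       # fallback: no site is in night hours

print(f"Run movable workloads in: {pick_site(datetime.now(timezone.utc))}")
```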