Since its introduction in 2007, Power Usage Effectiveness (PUE) – the ratio of a facility's total power draw to the power delivered to its IT equipment – has rapidly become a measurement standard for the datacentre industry. This is with good reason: a datacentre's power usage should be monitored and checked. It is a huge factor in the cost of running your own datacentre or hiring a colocation facility, and it has a huge effect on your environmental impact. We fully support measuring these things, but we do have to question whether PUE, in its current state, is the way to do it.
Our doubts about PUE are largely about its accuracy. The Uptime Institute's 2013 Data Industry Survey found that average PUE is now 1.65, down from 2.5 when it first ran the survey in 2007. The industry as a whole has made great strides in that fairly short space of time: we introduced fixes such as hot- and cold-aisle arrangements, and UPS and power distribution systems have also improved. These have been great, but the industry has also begun to use PUE as a marketing tool, creating an incentive for us to publicly display our best possible PUE numbers.
For instance, the PUE of Telehouse West is 1.12, significantly below the average. If Telehouse West were full, however, it could be subject to a rise in PUE. Most people, when procuring a colocation partner, know the PUE offered probably isn't constant, so they treat it with a great deal of scepticism. It gets worse when you factor in that there's no agreed standard on how PUE is measured. The Uptime Institute found that only 15% of datacentres measure and control air temperatures at the server inlet. Nearly a third measure at room level, which is the least accurate method.
The upshot of all this is that one datacentre may quote you a PUE of 1.3, having recorded its rating at near-capacity and from the server inlet, whilst another will tell you its PUE is 0.6 because it took its measurement at room level, and near empty – a rating that is, in fact, impossible, since IT power is part of a facility's total power, meaning PUE can never fall below 1.0. Google, long proud of its PUE ratings, claims a PUE of 1.12, much like Telehouse, but is quick to highlight that, if it wanted to, it could record a rating of 1.06. It's no wonder SearchDataCenter called PUE an extreme sport.
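To make the arithmetic behind that impossibility concrete, here is a minimal sketch of the PUE calculation. The figures are purely illustrative, not any facility's real measurements:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    value = total_facility_kw / it_equipment_kw
    # IT power is a component of total facility power, so a genuine
    # reading can never fall below 1.0; anything lower means the two
    # figures were measured inconsistently.
    assert value >= 1.0, "a PUE below 1.0 indicates a measurement error"
    return value

# A facility drawing 1,120 kW in total to power a 1,000 kW IT load:
print(pue(1120.0, 1000.0))  # 1.12 – overhead is 12% of the IT load
```

A claimed PUE of 0.6 would require the IT equipment to consume more power than the whole building, which is why such figures signal a broken measurement methodology rather than remarkable efficiency.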
So, how do we make it more reliable? Context is step one. We shouldn't just take a number; at the very least, we should also ask about the capacity of the datacentre and where the rating was measured.
We’re still not sure this will be good enough, however. Technology and reliability have moved on swiftly since PUE was envisioned. This year, for instance, Telehouse will break ground on its new London datacentre, affectionately known in-house as Telehouse Far East. This new colocation facility will use ambient airflow, a cooling model designed to utilise natural resources to manage the temperature throughout the datacentre. Ambient airflow will ensure that, no matter how full Telehouse Far East is, or where we measure it, the datacentre's PUE will be 1.12.
With methods such as this arriving in datacentres, we’re not sure a ‘snapshot’ measurement is what is needed. We need a way of presenting the continuous efficiency of the entire datacentre, not the current limited snapshot.
We admit that we don’t have the solution to this quandary yet, but we’re thinking about it; if you have any ideas, let us know.