During the cultural battles between mainframes and servers (the servers won), one of the taunts tossed at mainframe supporters was that the machine was so huge it could heat the building. This was no idle statement: in many cases, the heat generated by the mainframe really was used to provide some or all of the building's heat.
In rough terms, for every watt of electricity used to power equipment in a data center, another watt is required to cool it. Companies whose data centers reside in old buildings feel the floor-space pinch, and proliferating standalone servers are often replaced with blade servers. Blades do allow data centers to pack more servers into the same floor space, but they frequently require running additional power feeds into the data center to keep everything running (and cooled) at once. It is not unusual to see a blade-center rack whose power supplies exceed ten kilowatts.
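The one-watt-of-cooling-per-watt-of-equipment rule of thumb makes facility power easy to estimate. The sketch below applies it to the ten-kilowatt blade rack mentioned above; the function name and the parameterized cooling ratio are illustrative assumptions, not anything from the article.

```python
# Rough power-budget sketch using the article's rule of thumb:
# every watt of IT load needs roughly another watt of cooling.
# (Function name and cooling_ratio parameter are illustrative assumptions.)

def total_facility_watts(it_load_watts: float, cooling_ratio: float = 1.0) -> float:
    """Return total facility draw: IT equipment plus cooling overhead."""
    return it_load_watts * (1.0 + cooling_ratio)

# A blade rack drawing 10 kW of IT load implies roughly 20 kW at the facility.
rack_it_load = 10_000  # watts
print(total_facility_watts(rack_it_load))  # 20000.0
```

Varying `cooling_ratio` lets you model a more or less efficient facility; the 1.0 default mirrors the rough one-to-one rule above.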
Offices suffer the same issues in a more dispersed fashion. Have you ever walked into a room full of desktop computers (usually a training center) where the air conditioning had stopped working? It gets very hot. The same holds true in offices: computers left running when no one is around add significantly to a building's air-conditioning load. The difference is that the equipment is spread over a larger area, so the heat created by individual units is not as obvious.
Finally, electronic devices depend on solid-state components, and the primary enemy of solid-state components is wide temperature variation: very hot or freezing cold. (Now you know why the data center's temperature is maintained at something colder than a meat locker!) Equipment that runs hot all of the time is more likely to suffer a hardware failure than similar equipment that runs cooler. Reducing the number of hours per year that idle equipment runs will extend its useful life.