As a Prius-driving vegan, I'm doing everything I can to reduce my carbon impact on the planet. That effort now includes building "green" data centers. My next few posts will be about the power consumed by the technology we use in healthcare. It's estimated that between 1.5% and 3.5% of all power generated in the US is now used by computers.
I recently began a project to consolidate two data centers. We had enough rack space, enough network drops, and enough power connections, so the consolidation looked like a great way to reduce operating costs. All looked good until we examined the power and cooling requirements of our computing clusters and new racks of blade servers. For a mere $400,000 we could run new power wiring from the electrical crypts to the data center. However, the backup generators would not be able to sustain the consolidated data center in the event of a total power loss. So, we could install a new $1 million backup generator. Problem solved? No: the heat generated by all this power consumption would rapidly exhaust the cooling system, driving temperatures up 10 degrees. We investigated floor-tile-mounted cooling, portable cooling units, and even rack-mounted cooling systems. All of these take space, consume power, and add weight. At the end of the planning exercise, we found that the cost per square foot of the consolidated data center would exceed the cost of operating two less densely packed data centers. We looked at commercial data hosting options and ran into the same issue: power limits per rack meant half-full racks and twice as much square footage to lease, increasing our operating costs.
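To see why cooling, not floor space, becomes the limiting factor, it helps to do the arithmetic. The sketch below uses the standard conversions of 3.412 BTU/hr per watt and 12,000 BTU/hr per ton of cooling; the rack count and per-rack draw are hypothetical illustrations, not our actual figures.

```python
# Back-of-envelope cooling math for a densely packed data center floor.
# The rack count and per-rack power draw are assumptions for illustration.

WATTS_TO_BTU_PER_HR = 3.412   # 1 watt of IT load becomes ~3.412 BTU/hr of heat
BTU_PER_TON = 12_000          # 1 ton of cooling removes 12,000 BTU/hr

racks = 40                    # hypothetical consolidated floor
kw_per_rack = 12.0            # assumed draw for fully populated blade racks

it_load_watts = racks * kw_per_rack * 1000
heat_btu_hr = it_load_watts * WATTS_TO_BTU_PER_HR
cooling_tons = heat_btu_hr / BTU_PER_TON

print(f"IT load: {it_load_watts / 1000:.0f} kW")
print(f"Heat load: {heat_btu_hr:,.0f} BTU/hr ({cooling_tons:.0f} tons of cooling)")
```

With those assumed numbers, a single floor of blade racks needs well over a hundred tons of cooling, which is why retrofit options like portable and rack-mounted units could not close the gap.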
At my CareGroup data center, we recently completed a long-term planning exercise for our unused square footage. Over the past few years, we've met increasing customer demand by adding new servers, and power has not been a rate-limiting step. However, as we retire mainframe, mini, and RISC computing technologies and replace them with Intel/AMD-based blades, the heat they generate will exceed our cooling capacity long before real estate and power are exhausted.
The recent rise in the cost of energy has also highlighted that unchecked growth in the number of servers is not economically sustainable. IT organizations tend to add more capacity rather than take on the more difficult task of controlling demand, and that tendency drives continued growth in power consumption.
Power consumption and heat are increasing to the point that data centers cannot sustain the number of servers their real estate can accommodate. The solution is to deploy servers far more strategically. We've started a new "Kill-a-watt" program and are now balancing our efforts between supply and demand: we are more conservative about adding dedicated servers for every new application, we challenge vendor requirements when dedicated servers are requested, we examine the efficiency of power supplies, and we perform energy efficiency checks on the mechanical/electrical systems supporting the data center.
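As one illustration of why power supply efficiency is worth auditing: every watt lost in AC-to-DC conversion is drawn from the wall and then must be removed again by the cooling plant. A minimal sketch of the fleet-level effect, with assumed efficiency figures, server count, and load:

```python
# Why power supply efficiency matters at fleet scale.
# Efficiency figures, server count, and load are assumptions for illustration.

servers = 500
dc_load_watts = 300           # assumed DC load per server

def wall_power(load_watts: float, efficiency: float) -> float:
    """Watts drawn from the wall for a given DC load and supply efficiency."""
    return load_watts / efficiency

low_eff = wall_power(dc_load_watts, 0.70)    # typical commodity supply
high_eff = wall_power(dc_load_watts, 0.85)   # high-efficiency supply

saved_kw = servers * (low_eff - high_eff) / 1000
print(f"Fleet savings from better supplies: {saved_kw:.0f} kW")
# Every watt saved at the server is also a watt the cooling plant
# no longer has to remove, so the effective savings are roughly double.
```

Under those assumptions, swapping supplies saves tens of kilowatts across the fleet before counting the matching reduction in cooling load.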
We have also begun extensive use of VMware, Xen, and other virtualization technologies. This means we can host farms of Intel/AMD blades running Windows or Linux, deploying CPU capacity on demand without adding new hardware. We're connecting two geographically distant data centers with low-cost dark fiber and building "clouds" of server capacity, so we can create, move, and load-balance virtual servers without interrupting applications.
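The power case for virtualization is straightforward: a dedicated application server idling at 10% utilization still draws most of its rated power. A hedged sketch of the consolidation math, where the utilization, power draw, and consolidation ratio are all assumed values rather than our measured figures:

```python
# Rough model of the power savings from virtualizing dedicated servers.
# Utilization, power draw, and consolidation ratio are illustrative assumptions.

physical_servers = 200
watts_per_server = 400        # assumed draw of a lightly loaded dedicated server

vms_per_blade = 10            # assumed consolidation ratio
watts_per_blade = 550         # blades run hotter, but there are far fewer of them

blades_needed = -(-physical_servers // vms_per_blade)   # ceiling division
before_kw = physical_servers * watts_per_server / 1000
after_kw = blades_needed * watts_per_blade / 1000

print(f"Before: {before_kw:.0f} kW   After: {after_kw:.0f} kW")
print(f"Power reduction: {100 * (1 - after_kw / before_kw):.0f}%")
```

Even allowing generous headroom in the assumptions, retiring racks of underused dedicated servers in favor of a virtualized blade farm cuts both the power bill and the heat the cooling plant must handle.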
Managing a data center is no longer simply a facilities or real estate task. We've hired a full-time power engineer to manage the life cycle of our data center, network closets, and disaster recovery facilities. New blade technologies, Linux clusters, and virtualization are great for on-demand computing, but power and cooling are the new infrastructure challenge for the CIO.