The New Cool Data Centers

The information technology industry is working on all fronts to better manage its intense consumption of energy.
From GreenSource
Nancy Solomon

Although alternative, more energy-efficient configurations do exist, electricity entering a data-center facility in the United States typically passes through an uninterruptible power supply (UPS), which conditions the electricity to smooth out power spikes and serves as a battery backup in case the utility grid experiences a disruption. From the UPS it flows to power-distribution units (PDUs) and then on to the IT equipment, which is often generically referred to as "servers." A single server, which measures about 19 inches wide and 20 inches deep and stands no less than 1.75 inches high, typically performs one function, for which it is named. Examples include database servers, file servers, mail servers, and Web servers.

Multiple servers are mounted one above the other on open racks about 6 feet tall. While small offices may have just a server closet with one rack and larger offices a server room with several racks, stand-alone enterprise data centers like those for Facebook and Yahoo! have halls with hundreds of racks lined up, row after row.

FACEBOOK’s data center in Prineville, Oregon, is a LEED Gold facility designed as one large evaporative cooler, in which fans blow air through wet filters, cooling and humidifying the air before it reaches the servers.

Photo © Jonnu Singleton Photography

Because reliability is such a critical aspect in the computing world, data centers typically build in various redundancies—including extra UPS equipment, multiple air-conditioning systems, and additional servers—in the hope that there will always be sufficient backup power, cooling, and computing capacity in case user demand spikes or a problem occurs with one system or component. Although data centers do require some staff to operate and maintain the facility and processes, the number of occupants in a typical facility is relatively small compared with the square footage dedicated to IT equipment. The bulk of the energy consumed by data centers goes to powering and cooling their servers.

Establishing Metrics

One of the industry's early challenges in developing greener data centers was to come to a consensus on what it would measure to track success. In 2010, under the auspices of the Green Grid, an industry association formed in 2007 to improve resource efficiency within the IT field, an agreement was made to use a ratio called power-usage effectiveness (PUE), which is determined by dividing the total power used in both computing equipment and building infrastructure by the power used in the computing equipment alone. According to the Green Grid, the ideal PUE is 1.0, which would mean all energy used by a data center goes into powering the IT equipment rather than the building.
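
The ratio itself is simple arithmetic. The sketch below, which uses hypothetical meter readings rather than figures from the article, shows how the two measurements combine into a single PUE value.

```python
def power_usage_effectiveness(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE: total facility power divided by the power used by IT equipment alone."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical meter readings: 2,000 kW entering the facility,
# of which 1,000 kW reaches the IT equipment.
print(power_usage_effectiveness(2000.0, 1000.0))  # 2.0 -- roughly the conventional average
print(power_usage_effectiveness(1100.0, 1000.0))  # 1.1 -- the level of the best projects
```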

The average PUE for conventional facilities is currently about 2. In other words, the building that houses the IT equipment consumes the same amount of energy used by the data-processing equipment itself. “Some are much worse, like 3 or 4,” says Dale Sartor, a member of LBNL's building-technology and urban-systems department. “The best projects are those with a PUE less than 1.1,” he continues. According to Sartor, organizations like eBay, Google, Yahoo!, and Facebook have created the “icons” of high-efficiency projects: “They push the envelope in both the IT equipment and infrastructure side—and the gray area in between.”
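
Read another way, a facility's PUE implies how much of its total energy never reaches the servers. The short calculation below simply restates the values quoted above (PUE figures of 4, 2, and 1.1) as infrastructure overhead; it is an illustration, not part of the original article.

```python
def infrastructure_overhead(pue: float) -> float:
    # Fraction of total facility energy consumed by the building
    # (cooling, power conversion, lighting) rather than by the IT equipment.
    return 1.0 - 1.0 / pue

for pue in (4.0, 2.0, 1.1):
    print(f"PUE {pue}: {infrastructure_overhead(pue):.0%} overhead")
# PUE 4.0: 75% overhead
# PUE 2.0: 50% overhead
# PUE 1.1: 9% overhead
```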

YAHOO! has a state-of-the-art data center in Lockport, New York, featuring several long, heavily ventilated sheds to encourage natural airflow. Less than 1 percent of the building's energy consumption is needed to cool the facility, which has been aptly nicknamed the "Yahoo! Chicken Coop."

Photo courtesy Yahoo!

Originally published in GreenSource, September 2012
