The New Cool: Data Centers

The information technology industry is working on all fronts to better manage its intense consumption of energy.
From GreenSource
Nancy Solomon

Air management can be particularly problematic in older computer rooms that rely on multiple computer-room air-conditioning (CRAC) units fitted with internal humidifiers. These stand-alone machines sit within the computer room and cool with dedicated compressors and refrigerant cooling coils rather than chilled-water coils; if they are not calibrated together, they can start working against each other, with one unit humidifying the room while an adjacent one dehumidifies it. To avoid this common problem, current ASHRAE guidelines recommend that a single unit handle humidity and that both the humidification and dehumidification functions be disabled on all the others.

Monitoring and control systems also play an important role in conserving energy. Traditionally, control systems were driven by the temperature of the return air, but current guidelines suggest placing sensors near the servers' air intakes, where maintaining the desired temperature matters most.
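The logic behind both recommendations is straightforward. The following Python sketch is purely illustrative (the setpoints, unit IDs, and function names are hypothetical assumptions, not from any vendor's control system); it shows cooling keyed to server-intake temperature and humidity management assigned to a single designated unit:

```python
# Illustrative sketch of simplified CRAC control logic. All setpoints,
# names, and thresholds are hypothetical assumptions for this example.

INTAKE_SETPOINT_F = 80.0   # target temperature at the server air intake (assumed)
HUMIDITY_UNIT_ID = 0       # the one unit allowed to manage humidity

def control_step(unit_id, intake_temps_f, relative_humidity_pct):
    """Return the actions one CRAC unit should take this control cycle."""
    actions = []

    # Cool based on the hottest server intake rather than the return-air
    # average: the intake is where temperature actually matters.
    if max(intake_temps_f) > INTAKE_SETPOINT_F:
        actions.append("cool")

    # Only the designated unit manages humidity. If every unit did this
    # independently, one machine could humidify while its neighbor
    # dehumidifies, each fighting the other and wasting energy.
    if unit_id == HUMIDITY_UNIT_ID:
        if relative_humidity_pct < 40:
            actions.append("humidify")
        elif relative_humidity_pct > 60:
            actions.append("dehumidify")

    return actions

print(control_step(0, [72.0, 81.5, 78.0], 35))  # ['cool', 'humidify']
print(control_step(1, [72.0, 81.5, 78.0], 35))  # ['cool'] - no humidity duty
```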

Undoubtedly, the most significant change in data-center air management in recent years has been the industry's re-evaluation of acceptable environmental conditions. For years, IT professionals insisted that all computer rooms remain between “68 and 70 degrees Fahrenheit with a relative humidity between 40 and 60 percent,” says Paul Bonaro, who manages the Yahoo! data center in Lockport, New York. These ranges, however, were never backed by any industrywide, research-based consensus. ASHRAE Technical Committee 9.9 was formed in 2003 as an outgrowth of an industry effort to establish such specifications. In 2004 it published the first edition of Thermal Guidelines for Data Processing Environments, which provided recommended and allowable temperature and humidity ranges for two classes of data centers, categorized according to the needed level of reliability, based on input from major IT-equipment manufacturers. A second edition, published in 2008, broadened the temperature and humidity ranges to reflect growing concern over energy costs. The third edition, published in 2011, widened the ranges again and expanded the number of data-center classes to four, reflecting a more nuanced understanding of the reliability requirements of different centers.

According to the most recent guidelines, a data center requiring the lowest level of reliability could, on occasion, run with a server-intake temperature as high as 113°F (assuming the relative humidity is in the 10 to 30 percent range) or with relative humidity as high as 90 percent (assuming the temperature does not exceed 75°F). Meanwhile, testing at LBNL and elsewhere has demonstrated that servers can function perfectly well on outside air that has been properly filtered. The more relaxed requirements give designers a great deal more leeway in conditioning computer spaces with less energy. At the very least, raising the acceptable room temperature in server rooms means that the traditional mechanical systems supplying conditioned air to data centers do not have to generate as much cooling. Furthermore, in areas where outdoor temperatures regularly dip below 75°F or so, these systems can be designed with air-side economizers. In this scenario, the compressor temporarily shuts off while cool outside air is pulled into the mechanical system and directed to the computer room for “free,” or “compressor-less,” cooling. Several industry leaders, including Facebook and Yahoo!, have gone further, eliminating conventional mechanical systems completely and relying instead on a combination of outdoor air and evaporative cooling techniques to maintain proper interior temperatures and humidity levels year-round.
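The economizer decision itself is easy to express in code. Below is a minimal sketch, assuming a single dry-bulb threshold taken from the 75°F figure above (real economizer controls also weigh humidity, enthalpy, and air quality, all omitted here):

```python
# Minimal sketch of air-side economizer mode selection. The threshold is an
# assumption based on the 75 degree F figure in the text; production systems
# also consider humidity, enthalpy, and filtration before opening dampers.

FREE_COOLING_MAX_F = 75.0  # outside air at or below this can cool the room directly

def select_cooling_mode(outdoor_temp_f):
    """Choose between 'free' outside-air cooling and compressor cooling."""
    if outdoor_temp_f <= FREE_COOLING_MAX_F:
        # Compressor shuts off; dampers open, and filtered outside air is
        # ducted straight to the computer room.
        return "economizer"
    # Outside air is too warm, so fall back to mechanical refrigeration.
    return "compressor"

for temp in (58.0, 74.5, 88.0):
    print(f"{temp}F -> {select_cooling_mode(temp)}")
```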

Servers at NREL’s highly efficient data center will be cooled year-round with water maintained at 75°F by cooling towers. Waste heat from servers will condition both offices and lab spaces. Mechanical system designed by Integral Group.

Image courtesy Integral Group

Data centers are also experimenting with water for cooling. Water is a much more efficient medium than air for heat transfer because it absorbs and releases heat more easily and can transport a greater amount of thermal energy within a smaller volume. To reduce reliance on the mechanical cooling of water, some innovative data centers are turning to water-side economizers, in which the outside environment cools the water to the desired temperature: Cooling towers can remove excess heat from water through evaporation, or cool water can be pulled from nearby rivers, lakes, seas, and even sewers, then cleaned up, if necessary, before being returned to its source.
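Water's edge over air is easy to quantify. The back-of-the-envelope calculation below compares the volumetric heat capacity of the two media using standard textbook property values at roughly room temperature (the constants are not drawn from the article itself):

```python
# Back-of-the-envelope comparison of how much heat a cubic meter of water
# versus a cubic meter of air can carry per degree of temperature rise,
# using Q = rho * c_p * dT. Property values are standard textbook figures
# at roughly room temperature, not data from the article.

RHO_WATER, CP_WATER = 1000.0, 4186.0  # density kg/m^3, specific heat J/(kg*K)
RHO_AIR,   CP_AIR   = 1.2,    1005.0  # density kg/m^3, specific heat J/(kg*K)

water_j_per_m3_k = RHO_WATER * CP_WATER  # ~4.19 MJ per m^3 per kelvin
air_j_per_m3_k   = RHO_AIR * CP_AIR      # ~1.2 kJ per m^3 per kelvin

ratio = water_j_per_m3_k / air_j_per_m3_k
print(f"Water carries roughly {ratio:,.0f}x more heat per unit volume than air")
```

That roughly 3,500-to-1 ratio is why piping water close to the heat load can displace so much of the fan energy an all-air system would need.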

eBay is planning to use Bloom Energy fuel cells to power its new data center in South Jordan, Utah.

Photo courtesy Bloom Energy

The cool water from the outside can be routed through more conventional chilled-water mechanical systems that rely on air handlers to deliver cooling to the actual data halls, or it can be brought closer to the servers themselves. Although there is some concern within the industry about bringing water into an area densely packed with electronic equipment, strategies have been, and continue to be, developed to do this as safely as possible, because the closer the cool water is brought to the server, the more efficient the system.

Clearly, there is no single magic bullet for lowering the energy use of data centers, but rather a whole host of interlinking strategies to consider. Fortunately, the industry has at least begun the transition. According to a recent comment from the EPA, “We have seen data center operators making great strides in their efforts to reduce energy use in their facilities.” And the agency believes the industry can continue to achieve significant energy and cost savings: “Based on the latest available data, improving the energy efficiency of America's data centers by just 10 percent would save more than 6 billion kilowatt-hours each year, enough to power more than 350,000 homes and save more than $450 million annually.”

Originally published in GreenSource, September 2012
