Passive Cooling in Data Centers
Collaboratively designing economical air flow management
Continuing Education
Use the following learning objectives to focus your study while reading this month’s Continuing Education article.
Learning Objectives - After reading this article, you will be able to:
- Discuss the three basic methods for achieving energy-saving air separation: hot aisle containment, cold aisle containment, and vertical exhaust duct strategies.
- State the relative environmental and energy-saving advantages and disadvantages of the different separation strategies, along with where each is most appropriate.
- Quantify the energy cost-benefits for a well-designed air flow management system.
- Explain the overall value of good air flow management and how design choices can either enable or compromise the best results for indoor environmental conditions and energy use.
Spaces built for housing centralized computers and network servers have evolved and changed rapidly in recent years. More data can be saved and stored on physically smaller pieces of interconnected equipment, but the demand for that data continues to grow, meaning many pieces of this equipment need to be housed in densely arranged configurations in buildings. Modern data centers of this type may be room-sized to serve a particular business or institution, or they might be entire buildings unto themselves serving a much larger population. In any scenario, they consume energy—typically lots of energy—both for the operation of the equipment and, very significantly, for cooling that equipment and the room(s) it occupies. Achieving energy efficiency in this unique building use requires a collaborative design approach between architects, engineers, and building owners, with a solid understanding of the issues and of the available approaches that reduce energy demand effectively.
Data Center Design Basics
A fundamental design approach often taken by architects and engineers is to design a space from the outside inward. This often works well for space and building mass planning, but in the case of data centers it does not work well for equipment and energy planning. Instead, fundamental initial design decisions for these spaces need to be made that address data equipment layout and the associated cooling design from the “inside out.”
The first design step obviously begins with identifying and understanding the owner’s needs for data management. These are expressed in terms of quantity of data storage, backup capability, reserve for growth, redundancy, etc. Input from the owner is important in this regard, but it needs to be translated by computer equipment manufacturers and designers into the appropriate array of computer hardware to accommodate those requirements.
Open racks provide free access and open circulation of air. Images courtesy of Chatsworth Products, Inc.
Once the basic quantity of equipment is identified, the next step is to focus on how it can best be arranged. Typically, computer servers or other hardware are stacked vertically either in open racks or inside enclosed cabinets. Racks provide unrestricted ventilation and air flow and easy, open access to cables and equipment. Due to this open condition, the heat generated by the equipment is released directly to the room, where it is dissipated and treated as part of the room HVAC system. For relatively small systems in a larger building, this may be fairly straightforward to address. Where large systems are needed, it may be more appropriate to consider an enclosed approach that uses cabinets to intentionally restrict and control air flow and ventilation around equipment. While cabinets will obviously cost more than open racks, they allow greater opportunities for security control, as well as the potential to work within a system that can better control cooling needs and the associated costs of energy use for that cooling. Each rack and cabinet will have a rated capacity for the equipment that can be installed in it, which will need to be determined with equipment suppliers. Limiting criteria include not only the size of the actual pieces of computer hardware, but also the required clearances and spacing around them. In the case of cabinets, the ratings for allowable heat buildup inside the cabinet will also come into play. In some cases, other user restrictions may apply. Nonetheless, through the appropriate collaborative review, the total number of needed racks or cabinets can be determined, along with their physical dimensions as individual units and as a group connected together.
Enclosed cabinetry can be an overall cost and energy efficient approach in data center design. Images courtesy of Chatsworth Products, Inc.
Next in the design sequence is determining the best configuration, or layout, of those racks and cabinets. From a user standpoint, both the front and rear of the equipment need to be accessible. The front is typically accessed for user interface and monitoring of computer activity, while rear access is required for running electrical power and connective computer cables. The most common approach for a layout, then, is to arrange equipment in rows that create an aisle such that the fronts of the equipment face each other, allowing a person to access and view equipment on the left and right sides of the aisle. Similarly, the back sides of the equipment then face each other, allowing a common cabling aisle to serve the two facing rows of racks or cabinets. Thus, aisles alternate between front access and rear access to the vertically stacked equipment.
Other items such as wall-mounted cabinets and cable runway need to be factored into the final layout and design of a computer room. Images courtesy of Chatsworth Products, Inc.
Supplementing this approach, it is also possible to use some smaller wall-mounted cabinets around the perimeter of the room. These need to be assessed carefully for appropriate use, but in some cases they can provide compact, space-saving storage for a wide variety of applications. Additionally, some thought needs to be given to the additional work or maneuvering space required by the owner/users, the main distribution path of the cables, the monitoring and control equipment that may be required, and the normal entrance and egress requirements. Once all of the options for a layout are reviewed, a final decision can be made as to the optimum size and shape of the space or spaces needed to house it all.
Attention then needs to shift to the control of the cooling system for the room(s). As mentioned, a small single row rack system may be readily incorporated into the overall HVAC system of a larger building without much difficulty. However, a larger system with multiple rows of dense equipment requires closer analysis and collaboration.
The first thing to recognize is that arranging the equipment with fronts facing each other creates aisles along the backs where heat from that hardware is directed, usually by internal equipment fans. The fronts of the equipment are comparatively cooler since the heat is not blown in that direction. Hence, the reality of a typical data center room layout is that it is not only a series of alternating interface access and cabling access aisles, it is also a series of alternating hot aisles and cold aisles. Therein lies the essence of the cooling design problem—how to appropriately cool the equipment and the surrounding space in a fairly uniform manner that is efficient in both energy use and cost.
Alternating hot and cold aisles in data centers prevents equipment exhaust air from blowing directly into the fronts of cabinets. Images courtesy of Chatsworth Products, Inc.
Traditional Computer Room Cooling Approach
In the days of large mainframe computers, there was a need for careful environmental controls within limited temperature and humidity ranges. While computer equipment has changed notably and is a bit less environmentally sensitive, the fundamental concern of environmental control remains. Recognizing the need for change, in 2008 the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) relaxed the recommended environmental conditions for “Class 1 Electronics,” which includes the type of equipment found in data centers and computer rooms. Room temperatures are now targeted to remain between 18 °C – 27 °C (64.4 °F – 80.6 °F), compared to the previous 20 °C – 25 °C (68 °F – 77 °F). This change in standards by itself permits more flexibility in the design of cooling systems and greater energy efficiency by allowing a larger temperature swing and a higher operating temperature within the room (i.e., less cooling demand). A similar relaxation was made for the low-end and high-end moisture content of the air, which now allows up to 60% relative humidity in computer rooms under the right conditions. This is also a significant change that helps with overall energy efficiency while still providing an appropriate interior environment for most computer equipment.
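To make the relaxed envelope concrete, the short Python sketch below flags readings that fall outside the recommended ranges described above. It is purely illustrative: the sensor values are assumed, and the low-end moisture limit (which ASHRAE expresses as a dew point) is omitted for simplicity.

```python
# Minimal sketch: flag readings outside the relaxed recommended envelope
# described above (18-27 degC, relative humidity up to 60%). The low-end
# moisture limit is omitted for simplicity.

TEMP_MIN_C, TEMP_MAX_C = 18.0, 27.0
RH_MAX_PERCENT = 60.0

def within_recommended_envelope(temp_c: float, rh_percent: float) -> bool:
    """Return True if a reading falls inside the recommended ranges."""
    return TEMP_MIN_C <= temp_c <= TEMP_MAX_C and rh_percent <= RH_MAX_PERCENT

# Illustrative readings: 25.5 degC at 45% RH passes; 28 degC at 65% RH does not.
print(within_recommended_envelope(25.5, 45.0))  # True
print(within_recommended_envelope(28.0, 65.0))  # False
```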
The typical approach to meeting the ASHRAE targets in computer rooms has been to provide free-standing cooling units located inside the room. Referred to as Computer Room Air Conditioning (CRAC) units, these are commonly very sensitive to temperature and humidity changes, include correspondingly sensitive controls to maintain design conditions, and commonly carry a high price tag to go along with their high level of sophistication. Their common means of operation is to cool the air in the room by setting the controls to call for air conditioning once the return air temperature reaches a specified level, typically 72 °F. At that point, the unit kicks on and provides cool air, typically set for 55 °F, as it enters the room. This cooled air is intended to flow freely around open racks of equipment and theoretically provide a fully conditioned, uniformly cooled, and optimized environment both for the computer equipment to operate in and for people to work in when needed. In practice, however, a number of common problems have emerged from this traditional approach.
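A simplified sketch of that traditional return-air-driven control sequence is shown below. The on/off logic and set points are taken from the description above; real CRAC units modulate continuously and also manage humidity, so this illustrates the concept rather than any manufacturer's control algorithm.

```python
# Simplified, hypothetical model of traditional return-air control: the
# CRAC runs whenever return air reaches the set point (72 degF) and
# delivers supply air at a fixed 55 degF.

RETURN_AIR_SETPOINT_F = 72.0
SUPPLY_AIR_TEMP_F = 55.0

def crac_supply_temp(return_air_f: float):
    """Return the supply air temperature if the unit runs, else None (idle)."""
    if return_air_f >= RETURN_AIR_SETPOINT_F:
        return SUPPLY_AIR_TEMP_F  # unit kicks on and delivers 55 degF air
    return None                   # unit idles; no mechanical cooling

print(crac_supply_temp(74.0))  # 55.0 -> cooling
print(crac_supply_temp(70.0))  # None -> idle
```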
Uneven cooling / hot spots
The first and most common issue is that the cooling is in fact not uniform throughout the room. The supply and return air mix in a more or less random pattern influenced by thermal dynamics, equipment use, and occupants. As a result, unwanted hot spots can be created that don’t receive the proper cooling, causing adverse effects on both the room and the equipment in that location. This condition is exacerbated by cabinet or rack conditions that interrupt or short-circuit the intended air flow, creating a condition referred to as cooling bypass. This commonly occurs when cooling air intended to reach the warm aisles is inadvertently channeled or directed elsewhere due to openings or blockages in cabinetry or air passages that prevent the equipment from being properly cooled, hence the resulting “hot spot.”
Mixing supply air and return air in typical data centers using traditional cooling methods with Computer Room Air Conditioning (CRAC) systems can produce unwanted hot spots and require equipment to operate at lower set points for cooling. Images courtesy of Chatsworth Products, Inc.
High energy costs
The second issue with this traditional approach is that energy usage and the associated costs are high. The equipment in these rooms typically runs 24/7, 365 days a year. Keeping the entire room continuously cooled within the design levels is a huge, ongoing demand in all seasons and under all utility pricing structures, and it is often a more critical demand than that of other occupied spaces in the building.
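A rough, back-of-the-envelope calculation illustrates the scale of that continuous demand. All of the input values below are assumptions chosen for the example, not figures from the article:

```python
# Rough, illustrative annual cooling energy cost for a small computer room.
# Every input value here is an assumption for the sake of the example.

it_load_kw = 50.0           # assumed IT equipment load
cooling_kw_per_it_kw = 0.8  # assumed cooling power per kW of IT load
hours_per_year = 24 * 365   # the room is cooled continuously
rate_per_kwh = 0.10         # assumed utility rate, $/kWh

annual_cooling_kwh = it_load_kw * cooling_kw_per_it_kw * hours_per_year
annual_cost = annual_cooling_kwh * rate_per_kwh
print(f"{annual_cooling_kwh:,.0f} kWh/yr -> about ${annual_cost:,.0f}/yr for cooling alone")
# -> 350,400 kWh/yr, or roughly $35,000 per year in this example.
```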
Change constraints
Traditionally cooled rooms are subject to the capacity of the cooling equipment that is connected to that room. Over time, owners or users of the room commonly seek to upgrade the computer equipment or add to it either because of changes in their needs or due to normally scheduled technology refreshing. The typical result in any of these conditions is both higher density of computer equipment and higher total cooling load. The concern of course is that the existing cooling system may be too inefficient to accommodate those changes, thus constraining the amount of change or growth that is feasible after the initial design and construction of the room or data center.
Now, to be fair, all of these problems have potential solutions in the traditional way of thinking. The issue is that the solutions can be rather complex and costly to implement, making them less attractive than some increasingly more popular alternatives.
Alternative Air Isolation Approach
Recognizing the shortcomings of using traditional cooling approaches on today’s computer and data center equipment, an alternative has emerged. Rather than focus only on the supply of cool air driven by the return air temperature in the room, a more effective approach is to concentrate on isolating the heated return air from the cooler supply air to begin with. By adopting this fundamental approach, the design now focuses on options to create enclosures around the computer equipment that effectively capture the heat from the computer equipment and return it to the CRAC without mixing it with the cooler supply air. In the process, two other room components become important contributing factors. The first is the plenum space above a suspended ceiling. As an open area for air flow, this can be an effective way to capture and deliver return air back to the CRAC unit, just as is commonly done in other building design applications. Alternatively, the plenum space could be used for ductwork to perform the same function where that is warranted by engineering design. The second room component to consider is a raised floor, which has been commonly used in computer rooms and other energy-efficient buildings in general. The space between the raised floor and the actual structural floor creates a second plenum space below the equipment for the supply of cool air to flow through.
Hot Air Isolation is an approach that separates the hot air from the supply air, thus eliminating hot spots and any concern about return air temperature. Images courtesy of Chatsworth Products, Inc.
Adopting this fundamental approach to maintain the environmental control in computer rooms has a number of distinct advantages:
- Elimination of hot spots: In the traditional, air-mixed approach, hot spots are common and often drive the decision of where to set the temperature control to suit this “worst case” in the room. Thus, the system operates much less efficiently since it is overcompensating for the non-hot-spot areas. Isolating the heated return air from the cooler supply air means that distribution can be designed to meet the demand of different equipment, and the hot spots can be effectively eliminated. Hence, cooling temperature set points can be raised, meaning that less cooling is needed and controlled accordingly.
- Allows higher heat and power densities: In designs that use cabinets to stack the computer servers or related equipment, the heat and power limits inside of the cabinet become an important consideration. Determining the density of equipment that can be put into the cabinet based on power required and heat produced becomes an issue of testing and rating. By using methods that connect directly to the cabinets to isolate and channel heat from the equipment back to the CRAC, those ratings can be improved dramatically, meaning more equipment can be installed and cooled in fewer square feet. Independent testing has shown that heat and power densities up to 4 times higher are possible. In a traditional system, 6 kW of power and associated heat would be typical, while using air isolation techniques this can be increased to over 30 kW, potentially a higher rating than the equipment that can physically fit inside the cabinet would ever produce.
- Full utilization of supply air: Part of the cooling equipment assembly includes a Computer Room Air Handler (CRAH) that employs the fans to move the air within the room. In a standard, open-room, traditional approach with hot and cold aisles, the CRAHs are required to produce significantly more chilled air than is directly required by the computer equipment. The design requirement for this surplus is to be sure the cold aisles are adequately filled with a volume of cold air to minimize the effect of the hot aisle re-circulating over the top of the server cabinets or wrapping around the ends of cabinet rows. This practice typically results in some amount of bypass air flow that returns to the air handlers without having conducted any effective heat transfer in any of the computer equipment. In addition, there is frequently inadvertent bypass where cold air is short-circuited back to an adjacent air handler without picking up any heat load. Surveys and audits have found this over-production to exceed 2.5 times the actual airflow demand of the data center cooling load. With containment, all the cool air produced can only return to the cooling units after having passed through the computer equipment and conducted heat transfer, eliminating the need for any over-production. Because fan energy varies with roughly the cube of fan flow rather than linearly, the elimination of this wasted production can be significant, up to a calculated 64% in some cases (see the fan-law sketch following this list).
- More effective heat transfer: Another effect of isolating the return air is a much higher return air temperature, which is not necessarily a bad thing. In actuality, higher return air temperatures bump up the performance ratings of chilled water cooling units. Note that direct expansion (DX) units have a more or less fixed temperature difference that they work within, so higher return temperatures will drive up the supply temperatures accordingly. However, with chilled water CRACs, cooling efficiency actually increases with higher return air temperatures. In addition, as the temperature rise increases, the amount of air required for the same amount of cooling decreases. Anecdotally, data centers that were experiencing significant over-heating problems have been able to place half of their cooling unit capacity in reserve merely by managing these variables effectively.
- Chiller efficiency improvement: As previously illustrated, traditional, open return air data centers will typically operate cooling units with a return air set point around 72 °F, resulting in an under-floor supply temperature around 54 °F – 55 °F. These low supply temperatures were shown to be necessary so that, after re-circulation and mixing, computer equipment would not see temperatures exceeding the ASHRAE recommended maximum threshold of 80.6 °F. Since isolated air systems eliminate the effect of re-circulated return air on the supply air, the supply air can be set in the mid-to-upper 70s while still assuring that computer equipment sees appropriate-temperature air. Most data center chiller plants send water to the CRAH cooling coils at around 42 °F – 45 °F in order to produce that 54 °F – 55 °F supply air; however, with the supply set at 75 °F or higher, the water temperature coming from the chiller plant can now be set around 65 °F, or 20 degrees higher. Depending on the age and style of the chiller, operating costs are reduced by 1.5 – 4% per degree of increase in the water temperature. Since the chiller constitutes anywhere from 65% to 90% of the total data center cooling cost, these potential savings of 30% to 80% exceed all other potential efficiency improvement opportunities other than turning off the chiller completely for economization (a simple calculation of this range follows the list below).
- Economizer improvements: Economizer functions on cooling systems use outdoor air for cooling on days when the outdoor temperature and humidity conditions are favorable. That is true of CRAC units as well, so finding ways to optimize this “free” cooling makes considerable sense for this application. As noted, isolated air systems operate quite well at higher thermostat settings in comparison to traditional systems. This means that the number of days and hours when the outdoor air is suitable for using the economizer cycle can be increased dramatically. Depending on the climate and equipment used at a facility, the difference can be as dramatic as a jump from using an economizer cycle for 5% of the operating hours of a traditional system to over 90% of the operating hours of an isolated air system (a sketch for estimating these hours also follows the list below).
- Allows for design options: Open return air data centers have inherent design restrictions due to the need to deliver an adequate amount of cool supply air to each point of use (i.e., rack or cabinet) and by the need to locate CRACs and CRAHs to minimize dragging warm return air over or through cold aisles. By contrast, isolated air systems remove both of those design constraints because the complete removal of waste warm air eliminates the criticality of the CRAC/CRAH physical location. They also allow supply air to be delivered merely to maintain room pressure rather than to push an adequate volume to a particular point of use. Therefore, a raised floor, with its associated perforated floor tiles located directly at points of need, becomes an option but is no longer a necessity. In addition, if rooftop air-side economizers are being used, the cool air can be ducted directly down into the data center without having to add fan energy to overcome the duct losses of routing that supply air around the data center and under the floor. Similarly, in some cases, the cool air can be delivered through a wall, again without having to apply the additional fan energy required to maintain adequate static pressure under the floor. In short, isolated air systems allow the data center design to be more flexible and to respond to business and operational priorities.
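The fan savings cited in the list above follow from the fan affinity laws, under which fan power varies roughly with the cube of airflow. The sketch below illustrates that relationship; the specific flow reduction used is an assumption chosen so the result lines up with the 64% figure quoted, not a measured value.

```python
# Fan affinity law sketch: fan power scales roughly with the cube of flow,
# so eliminating over-production lets the CRAH fans slow down, and even a
# modest flow reduction yields a large energy saving.

def fan_power_fraction(flow_fraction: float) -> float:
    """Fraction of original fan power remaining when flow drops to flow_fraction."""
    return flow_fraction ** 3

# Illustrative assumption: containment allows delivered airflow to drop to
# about 71% of its previous value.
remaining = fan_power_fraction(0.71)
print(f"Fan energy remaining: {remaining:.0%}")      # ~36%
print(f"Fan energy saved:     {1 - remaining:.0%}")  # ~64%, consistent with the figure above
```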
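The chiller savings range quoted above can be checked with equally simple arithmetic, using the per-degree figures and the 20 °F chilled water temperature increase from the example in the list:

```python
# Chiller savings sketch, using the ranges quoted above: raising the chilled
# water temperature cuts chiller operating cost by roughly 1.5% to 4% per
# degree F, depending on the age and style of the chiller.

water_temp_increase_f = 65 - 45               # e.g., 45 degF raised to 65 degF
savings_low = 0.015 * water_temp_increase_f   # less responsive chiller
savings_high = 0.040 * water_temp_increase_f  # more responsive chiller

print(f"Chiller operating cost reduction: {savings_low:.0%} to {savings_high:.0%}")
# -> 30% to 80%, matching the range cited. Since the chiller is typically
# 65-90% of total data center cooling cost, the whole-system saving is
# correspondingly large.
```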
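Available economizer hours can be estimated in a similar spirit: count the hours in a typical year when outdoor air is cool enough to serve as supply air at the chosen set point. The sketch below assumes hourly dry-bulb temperatures are available (for example, from a typical meteorological year weather file) and ignores humidity limits for simplicity; the toy data is invented purely to demonstrate the comparison.

```python
# Economizer-hours sketch: given hourly outdoor dry-bulb temperatures, count
# how many hours "free cooling" is available at a given supply air set point.
# Humidity limits are ignored here for simplicity.

def economizer_hours(hourly_temps_f, supply_setpoint_f):
    """Hours when outdoor air is at or below the required supply temperature."""
    return sum(1 for t in hourly_temps_f if t <= supply_setpoint_f)

# Toy data standing in for 8,760 hourly values from a weather file.
toy_temps_f = [48, 52, 55, 60, 68, 74, 78, 76, 70, 62, 56, 50]

print(economizer_hours(toy_temps_f, 55))  # 4 of 12 samples at a 55 degF set point
print(economizer_hours(toy_temps_f, 75))  # 10 of 12 samples at a 75 degF set point
```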
Clearly, then, the energy saving impact can be quite significant by changing from a traditional open air approach to a design strategy that isolates the warm return air from the cooler supply air. How much of an impact will depend on the particular conditions of a given facility of course, but it will also depend on the selection of the most appropriate specific strategy employed by the design team to achieve that separation.
Hot Aisle and Cold Aisle Containment Strategies
Based on the typical layout of alternating aisles of equipment faces, a popular air isolation strategy is to focus on enclosing or containing the aisles. However, there has been considerable debate and discussion on whether it is the hot aisle or the cold aisle that should be contained.
In hot aisle containment (HAC), cool supply air is directed to the front of the computer equipment, or the cold aisle, while the back of the equipment faces the “hot” aisle. In this strategy, it is this hot aisle that is enclosed or contained by doors at either end. In the ideal design scenario, the cool supply air is drawn up through the floor plenum in front of the equipment and pulled through the cabinet or rack over the computer equipment to pick up the unwanted heat. It then becomes return air that enters the enclosed hot aisle, where it is captured. Since this hot aisle is also enclosed from the top of the equipment up to an open ceiling plenum, the air then returns back to the CRAC to be treated and the cycle repeats as needed. Under this strategy, the room itself is the area that receives the cool supply air, and the hot air is isolated and returned. To the degree that cabling and other maintenance work will be done at the rear of cabinets, HAC will create a relatively inhospitable working environment of 95 °F – 130 °F.
Hot Aisle Containment (HAC) isolates the hot aisle, captures the hot exhaust air from IT equipment and directs it back to the CRAC/CRAH, keeping it segregated from the cold air. Images courtesy of Chatsworth Products, Inc.
In the cold aisle containment (CAC) strategy, the approach is reversed. Here, the cold aisle is enclosed or contained with solid doors and a containment ceiling to separate it from the rest of the room. In this case, the cool supply air again enters, ideally through the floor plenum, and is drawn through the racks or cabinets of computer equipment. As it picks up the heat and exits out the rear, it enters the room and is drawn back, typically below the room ceiling, to the CRAC. The enclosure at either end of the aisle is essentially the same in the hot aisle and cold aisle containment strategies. The difference is that a containment ceiling is needed in the cold aisle containment strategy because the space between the top of the computer equipment and the room ceiling remains open; hence the warmed return air is not captured, but released into the room. The raised floor becomes the important design feature to deliver the cool supply air and create the air isolation. Cold aisle containment is typically superior for retrofitting existing spaces because it is easier to avoid overhead obstructions such as cable pathway or power distribution busway and, as noted below, has been demonstrated to provide longer thermal ride-through in higher density applications. Unfortunately, the remaining non-contained area of the data center becomes the “hot aisle” and will subject workers to temperatures ranging from 95 °F – 130 °F.
Cold Aisle Containment (CAC) isolates the cold air, preventing it from mixing with the hot air in an entirely closed off area targeted at cooling equipment instead of the room. Images courtesy of Chatsworth Products, Inc.
A paper published in 2011 by Dr. Rainer Weidmann of T-Systems International and Dr. Markus Leberecht of Intel focused on the differences in efficiency and energy performance between hot aisle and cold aisle containment strategies. Under the banner of a joint project between their companies known as Data Center 2020, they had previously looked at the specifics of air isolation, including detailed investigations of hot air and cold air containment. The results of their research affirm that the strict separation of cold and hot air in raised floors and contained aisles optimizes air flow and minimizes leakage and mixing, which also reduces the fan speed of the CRAH units. They have also shown that raising the supply air temperature along with the chilled water temperature minimizes the hours of cooling required from the chillers and extends the hours available for economizer (free) cooling. They have further investigated multiple ways to vary conditions and achieve the best energy performance and efficiencies. All else being equal, they have concluded that there are no significant differences between the efficiencies of hot aisle and cold aisle containment systems. Contrary to conventional wisdom, they did find, however, that HAC provided more thermal ride-through time in the event of catastrophic cooling failure than CAC for lower density cabinets, while CAC provided more time for higher density cabinets (17 kW). Nonetheless, properly designed, either one can be expected to provide reasonably good and similar performance, particularly when compared to traditional open air systems.
Hot Air Isolation Strategy using Vertical Exhaust Ducts
In addition to the HAC and CAC systems, a third strategy is also available to designers interested in achieving superior performance and efficiency levels. Referred to as the Vertical Exhaust Duct (VED) strategy, it improves overall air isolation and energy performance. This approach requires a gasketed solid rear door to seal off the rear of the cabinet from the rest of the data center, plus an unimpeded flow path for cool air to the server intakes, either through a high-percentage-open perforated metal front door or through an opening in a floor tile inside the front boundaries of the cabinet. Computer equipment fans draw that cool air in and across the heat-producing parts of the equipment and then pass it into the enclosed rear chamber of the cabinet. It is then drawn up as return air into the vertical duct (sometimes referred to as a chimney) at the top of the cabinet and into the ceiling area plenum or ductwork, where it returns back to the CRAC unit. This strategy effectively isolates the equipment and its heat from the room and changes the focus from aisles to individual cabinets that can potentially be arranged more freely.
By isolating cold supply from hot exhaust air, the inefficient mixing of hot and cold air is eliminated, allowing only cold supply air to be directed through equipment. Images courtesy of Chatsworth Products, Inc.
The performance of this VED strategy is notable. The working air temperature around the computer equipment drops from the 95 °F to 130 °F range down to the 77 °F to 80 °F range. In terms of thermal ride-through, the VED approach will vary depending on boundary conditions. If the VEDs are not directly coupled to the enclosed return air path, then this will provide the best thermal ride-through due to the conductive heat absorption of all the extra sheet metal in the data center space, though the lack of coupling obviously allows a path for compromising the complete isolation and thereby diminishing the overall efficiency. When coupled into a fully closed system, ride-through improves to the degree that the return air space exceeds the volumetric capacity of the supply space, and it diminishes as that ratio is reversed.
As VED air isolation strategies have become integrated into computer rooms and data centers, some misconceptions have arisen that need to be corrected:
- Capacity myth: This misconception is born from a mistaken belief that there is an upper limit to the air cooling capacities of VED systems that is well below today’s potential heat load densities inside a cabinet. This perceived ceiling on air cooling capability is commonly based on how much air can be delivered out of a perforated access floor tile in a raised floor system, for which a standard value is typically placed at around 700 CFM. Hence, assuming that 700 CFM of chilled air is available through a perforated access floor tile located in front of a rack containing common computer equipment, one could expect to cool roughly 4.5-8 kW of heat (the sensible-heat arithmetic behind this figure is sketched after this discussion of the myths). Actual experiments confirm that the chilled air is consumed by the bottom half of the cabinet, thereby cooling approximately half of a potential heat load of over 9 kW. This would lead designers to think that the cabinet would never fully cool and that hot spots would emerge. Therefore, any kind of air cooling solution for high-density heat loads needs to eliminate the dependency on chilled air from a perforated access floor tile in front of the computer equipment. Instead, a fully integrated cabinet with the capability to receive larger amounts of chilled air and properly contain and channel that air is readily available to more than meet the heat output of the equipment. In tested conditions, capacities exceeding 30 kW of heat are possible within a single cabinet, allowing a higher density of equipment inside that cabinet with capacity to spare.
- Return air temperature myth: This misunderstanding stems from a belief that high-density VED air cooling systems create unmanageably high return air temperatures. While in some circumstances that may be true, if an IT manager were to tell his facilities manager that he was experiencing high return air temperatures, the facilities manager would likely respond by telling him to keep up the good work, especially in a chilled water environment. The reason for this unexpected response is quite clear. Chilled water computer room air conditioners (CRACs) improve efficiency; that is, they increase cooling capacity with higher return air temperatures. Of course, there are some caveats to this statement. First, there is not a flexible performance curve for direct expansion (DX) cooling units—this capacity bonus is only available with chilled water units. Second, there is a limit to how high the return air can be before the performance curve starts on a path of diminishing returns, primarily by raising the supply air temperature. Hence, the best design decision is to specify chilled water cooling unit solutions that accommodate wider differences in temperature (ΔT) between the supply air and the return air and, in fact, deliver superior performance at those higher ΔT; in a DX environment, by contrast, some amount of bypass airflow will be required to control return air temperatures, particularly in blade server applications. By making the return air plenum large enough (i.e., double the supply plenum), this system will be self-regulating for most ranges of airflow fluctuation. In summary, the VED air cooling solution does create high return air temperatures, which is good up to a point, and there are simple site management strategies to allow the data center manager to continue reaping the benefits of high ΔT without driving the supply air temperature too high.
- Operating cost myth: The final misperception is a belief that the lower acquisition costs of VED air cooling solutions are overshadowed by significantly higher operating costs, particularly when compared with liquid cooling solutions. The primary basis for this belief is the inherent inefficiency associated with standard hot and cold aisle facilities at higher densities, plus the common over-capacity designs that are typically driven by the need to supply cooling continuously at a worst-case level. With the improved efficiency of complete isolation between supply air and return air, and the resultant operating economies associated with that separation, close-coupled liquid cooling solutions lose their operating cost advantage. In terms of understanding the difference, particularly in first costs, the cost of construction can be a significant determinant. If a lower density array of cabinets or racks is selected to distribute the heat and avoid hot areas, then more square footage needs to be built to house this equipment, usually at a rate in excess of $200 per sq. ft. Since the VED system has been shown to quite comfortably handle higher densities, the required square footage can be proportionately less as a result. Further, typical high-density solutions require some redundancy in cooling equipment in case one unit fails or is inoperable for a time. As for operating costs, with the removal of normal inefficiencies and necessary over-supply through a containment strategy, apples-to-apples comparisons between precision perimeter cooling with containment and close-coupled cooling with containment reveal very little difference in energy efficiency ratings (EER, i.e., BTUs of heat removed per kW of energy expended to remove the heat). In fact, some types of precision perimeter cooling equipment perform consistently better than popular close-coupled systems. However, the benefits don’t stop there. A study conducted by the McKinstry Company of eight different geographic regions around the U.S. reveals a more complete picture of the economic benefits of a complete separation between supply air and return air, including:
- 74% reduction in critical refrigeration tonnage
- 35% reduction in peak water use
- 44% reduction in the tank size required for 24 hours of water storage
- 74% reduction in thermal tank volume to allow generators to restart
- 65% reduction in HVAC load on generator and utility distribution
- 78% reduction in three phase component connection requirements
- 89% reduction in floor space requirements for indoor HVAC equipment
- 63% reduction in cost for HVAC water
- 49% reduction in maintenance costs
The VED hot air containment model also essentially eliminates the regular activity of monitoring hot spots and making adjustments to the room to balance cool air distribution. Hence, the above demonstrates that acquisition costs, construction costs, and operating costs combine for a lower total cost of ownership for high-density passive air cooling solutions than for alternative high-density cooling approaches.
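The capacity figures discussed under the capacity myth above follow directly from the sensible-heat relationship for air, roughly q [BTU/hr] ≈ 1.08 × CFM × ΔT [°F] at standard air density. The sketch below works through that arithmetic using the 700 CFM floor-tile figure quoted earlier; the temperature rises and the 3,000 CFM ducted-cabinet airflow are illustrative assumptions, not published ratings.

```python
# Sensible-heat sketch: q [BTU/hr] ~= 1.08 * CFM * dT [degF] for air at
# roughly standard density; 1 kW = 3412 BTU/hr.

BTU_PER_HR_PER_KW = 3412

def kw_removed(cfm: float, delta_t_f: float) -> float:
    """Approximate heat removed by an airflow at a given temperature rise."""
    return 1.08 * cfm * delta_t_f / BTU_PER_HR_PER_KW

# A single perforated floor tile delivering about 700 CFM:
print(f"{kw_removed(700, 20):.1f} kW")   # ~4.4 kW at an assumed 20 degF rise
print(f"{kw_removed(700, 35):.1f} kW")   # ~7.8 kW at an assumed 35 degF rise
# -> roughly the 4.5-8 kW ceiling described in the capacity myth above.

# A ducted, fully contained cabinet no longer depends on a single tile; with
# an assumed 3,000 CFM reaching the cabinet and a 35 degF rise:
print(f"{kw_removed(3000, 35):.1f} kW")  # ~33 kW, in line with the >30 kW figure
```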
Overall, then, these three common myths simply don’t hold up. The VED passive cooling solution effectively cools heat loads well in excess of 30 kW per cabinet while reducing energy costs and the resultant carbon footprint of a data center. The basic principle behind this passive cooling solution is to use the equipment cabinet not as a box for housing computer servers, but rather as an architectural feature of the data center that secures the isolation between the chilled supply air and the heated return air. In addition, the basic misconceptions described come from traditional thinking about liquid cooling systems, while the analysis above points to superior uptime reliability for this cooling approach over other means of cooling high-density data center heat loads.
Conclusion
Effectively and efficiently cooling computer rooms and data centers requires a collaborative design process to find the right balance between density of computer equipment, square footage, cooling system equipment, and cooling strategies. Of the strategies presented, the greatest cooling and space efficiency is achieved when the computer server cabinet becomes a central feature of the data center. By using the vertical exhaust duct (VED) strategy, the isolation between the supply air and the return air is secured, and the cabinet is freed from its dependence on specific perforated or grated floor tiles to cool its heat load. Passive cooling solutions have performed adequately in active data centers up to 32 kW per cabinet and have been tested successfully up to 45 kW. This performance level is achieved through the application of sound, but relatively elementary, physics principles. Not only does this solution set provide a viable option for cooling higher densities, but it also removes uncertainties associated with potential points of failure and provides the basis for taking advantage of significant energy savings associated with higher HVAC efficiencies and access to greater economizer hours.
A comparison of the three common hot air isolation strategies reveals that the Vertical Exhaust Duct (VED) method is often the superior performing system, particularly for high density applications. Chart courtesy of Chatsworth Products, Inc.
Chatsworth Products, Inc. (CPI) is a global manufacturer providing voice, data and security products and service solutions that optimize, store and secure technology equipment. CPI Products offer innovation, configurability, quality and value with a breadth of system components, covering virtually all physical layer needs. www.chatsworth.com