This course is part of the Mastering Movement™ Academy

All photos courtesy of Robert Benson Photography
Sci-fi meets the real world in data center design. Architects need to keep up with the latest innovations and design demands for power, cooling, and security to compete in this expanding field.
This article explores the design, infrastructure, and evolving technologies of modern data centers. As the demand for AI and cloud services expands, data centers have become essential components of the built environment. This course provides a historical overview of data center development and examines how evolving computational loads have transformed cooling systems, building resilience, and visibility within communities. Special attention will be given to the transition from air-cooled to liquid-cooled systems, heat mitigation and reuse strategies, and architectural solutions to reduce noise and visual impact. Through this lens, participants will better understand the intersection of performance, sustainability, and livability in today’s digital infrastructure.
The Rise of Data Centers
Data centers have become one of the buzzwords of the moment, capturing attention across the industry and the public alike. But what exactly are data centers? To understand the design expectations of these unique and somewhat mysterious building types, it is important to have a general grasp of their history and how they evolved from simple office utility rooms into multi-million-square-foot complexes.
A Brief History
The modern world runs on data, and behind every internet search, digital purchase, cloud file sync, or AI-generated result lies a data center quietly doing the heavy lifting. These facilities have evolved
dramatically over the past several decades, mirroring the growth of computing itself. What started as simple centralized mainframes housed in secure government buildings has exploded in scale to globally distributed cloud infrastructures and high-density AI-driven server farms.
The concept of a data center dates back to the earliest days of electronic computing. In the mid-20th century, mainframe computers such as the UNIVAC and IBM 701 required dedicated rooms with controlled environmental conditions to operate reliably. These early computing centers were largely the domain of military, academic, or corporate institutions. The physical spaces housing these machines were often highly secured, air-conditioned environments that foreshadowed the purpose-built facilities we now recognize as data centers.
By the 1960s and 70s, organizations increasingly used centralized computing infrastructure to process payroll, store customer records, and automate logistics. These computing rooms were supported by raised flooring systems to accommodate cabling and airflow, uninterruptible power supplies, and basic climate control systems.
The term “data center” itself gained broader usage during the 1980s and 90s as businesses embraced local area networks (LANs), client-server computing, and early enterprise resource planning systems. During this time, colocation centers also began to emerge, allowing multiple companies to rent space in shared facilities to run their servers and IT hardware. These early commercial data centers typically prioritized uptime, redundancy, and physical security, especially for financial institutions and telecommunication providers.
Welcome to the Internet
The rise of the internet in the 1990s ushered in the first true wave of explosive growth in data center infrastructure. As websites became central to business operations and e-commerce platforms like Amazon and eBay gained prominence, the need for always-on, scalable, and resilient server infrastructure surged. Companies built out large-scale server farms to deliver websites, host user data, and process online transactions.
By the early 2000s, the model shifted again with the development of web applications, online gaming, and social media. Companies such as Google, Facebook, and Yahoo required massive server capacity to deliver increasingly dynamic, personalized content to users around the globe. These demands fueled the growth of massive single-user hyperscale data centers dedicated to supporting the operations of a single cloud or internet company, often numbering in the hundreds of thousands of servers per site.
During this period, data centers became a recognized building type, with design firms, equipment manufacturers, and construction teams specializing in their development. Key design priorities included redundant power systems of N+1 or 2N configurations. N+1 and 2N describe redundancy levels for power, cooling, or other essential systems. “N” means the exact capacity needed to handle the load. N+1 means there’s one extra unit beyond that – as redundancy – to ensure operations continue without interruption. 2N means full duplication or two completely independent systems, each capable of handling 100% of the load. Beyond power supply stability, fire suppression systems, biometric security, and increasingly sophisticated HVAC infrastructure became critical design components.
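To make the arithmetic concrete, here is a minimal sketch of how N, N+1, and 2N translate into equipment counts, assuming a hypothetical plant built from identical, discrete units; real designs also account for derating, diversity, and maintenance strategy.

```python
import math

def units_required(load_kw, unit_capacity_kw, scheme="N"):
    """Estimate how many equipment units a redundancy scheme calls for (illustrative only)."""
    n = math.ceil(load_kw / unit_capacity_kw)  # "N" = just enough capacity for the load
    if scheme == "N":
        return n
    if scheme == "N+1":
        return n + 1        # one spare unit beyond N
    if scheme == "2N":
        return 2 * n        # two fully independent systems, each able to carry the full load
    raise ValueError(f"unknown scheme: {scheme}")

# Example: a 1,000 kW cooling load served by hypothetical 250 kW units
for scheme in ("N", "N+1", "2N"):
    print(scheme, units_required(1000, 250, scheme))  # N -> 4, N+1 -> 5, 2N -> 8
```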

Data centers can be stand-alone structures with over a million square feet of space, or incorporated into existing designs, such as university campus buildings.
The Cloud
The next major leap in the evolution of data centers came with the proliferation of cloud computing in the late 2000s. In simple terms, the cloud is a network of remote servers accessed via the internet that store, process, and manage data or applications.
It enables on-demand access to computing resources, minimizing the end user's need for physical hardware such as high-speed processors and local memory and storage. With computing power transferred from the individual computer to a centralized location, companies like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform transformed the data center from a fixed, capital-intensive asset into a flexible, scalable utility.
AI Workloads and the Next Generation of Data Centers
While cloud computing continues to dominate enterprise IT strategy, a new type of data center has emerged over the past five years to support the demands of artificial intelligence. AI workloads are expanding as the market identifies new opportunities to incorporate the technology and automate tasks previously performed by people. In particular, the demand for training large language models (LLMs), computer vision, and generative tasks is growing, requiring orders of magnitude more processing power and energy than typical cloud applications. This has given rise to a new generation of high-density, AI-specific data centers.
These facilities are optimized for GPU clusters and AI accelerators such as NVIDIA’s H100 or Google’s Tensor Processing Units (TPUs). Compared to standard cloud servers, AI training environments generate significantly more heat per rack and require specialized cooling solutions, including direct-to-chip liquid cooling and rear-door heat exchangers.
AI data centers also require significant architectural planning to accommodate their unique operational needs. Electrical substations must be integrated directly on site or nearby, and mechanical spaces are often expanded to house specialized pumping stations and heat recovery systems. Due to the immense capital investment required to build and operate these centers, developers increasingly consider long-term siting issues such as access to renewable energy, grid redundancy, water availability, and proximity to universities or research hubs.
Growing in Number, Size, and Expectation
The development and growth of data centers have closely followed the broader trend of incorporating automation and computation into society. As the demand for instant access to more data has accelerated, so have the number, physical size, and expectations of what data centers can provide.
For context as to why the demand for data centers has increased so quickly, it is helpful to understand the amount of new data constantly being created. Globally, data creation is accelerating rapidly. In 2020, global data creation was estimated to be around 64 zettabytes annually. A zettabyte is one billion terabytes, or one trillion gigabytes. Though hard to conceptualize, one zettabyte could stream 36 million years of high-definition video. According to forecasts by the International Data Corporation (IDC), 2025 will close with roughly 180 zettabytes of new data, and by 2028, that number is expected to rise to 394 zettabytes.
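As a quick sketch of those units in code, using the decimal definitions cited above (the annual figures are the forecasts quoted in this course):

```python
# 1 zettabyte (ZB) = 1 billion terabytes = 1 trillion gigabytes (decimal units)
GB_PER_ZB = 1_000_000_000_000

forecasts_zb = {2020: 64, 2025: 180, 2028: 394}  # annual data creation figures cited above
for year, zb in forecasts_zb.items():
    print(f"{year}: {zb} ZB is about {zb * GB_PER_ZB:.2e} GB of new data")
```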
As new applications are developed, existing data is used to create even more data. For instance, applications like ChatGPT take existing information and leverage it to create even more data. Likewise, as companies harvest individual user information, buying preferences, and search histories, and convert existing content into digital formats, even more storage and processing capability is required.
As of the end of 2024, the United States had an estimated 1,240 data centers either in operation or approved for construction. Northern Virginia leads the nation in data center locations, with 329 facilities concentrated in an area noted for its abundant power supply, ample water resources, and dense high-speed fiber network.

Data centers are a unique and challenging project for architects that require thoughtful design and specification to ensure a functional facility and a building that blends with the local community.
Maricopa County, Arizona, ranks second and is emerging as one of the fastest-growing markets, fueled by available land, expanding infrastructure, and a business-friendly climate. While these two regions are experiencing the most rapid growth, other markets such as Dallas–Fort Worth, Atlanta, and Chicago are also expanding quickly as organizations seek to diversify geographic locations, reduce operational risks, and take advantage of lower costs.
The physical size of the data center has also increased significantly. From a business perspective, owning a data center and leasing space is akin to leasing physical storage spaces. Once the land is purchased and the basic infrastructure installed, increasing the size and capacity is the most efficient way of increasing revenue. More space in the data center means more servers, and therefore more clients.
While early data centers may have been built in a storage room or simply take up an entire floor of an office building, modern centers now measure in millions of square feet. One example is the Citadel Campus in Reno, Nevada. Operated by Switch Inc., the Citadel Campus includes the Tahoe Reno 1 building at around 1.4 million square feet, which has plans to expand to over 6 million square feet.
Companies like Google, Microsoft, and Amazon often choose to own and operate their own data centers and have all made significant investments in size and scale. Meta, formerly Facebook, owns the largest data center in the United States in Prineville, Oregon. The hyperscale campus currently is about 4.6 million square feet and includes multiple buildings to house intensive cloud and AI workloads.
Uptime Almost All the Time
In the world of data centers, clear distinctions among providers are measured not in square feet or number of servers, but in minutes and seconds. Uninterrupted service, or “uptime,” has become one of the key metrics for evaluating the quality of a data center.
One benchmark established by the telecom industry in the 1970s was the “five nines,” which refers to 99.999 percent uptime. Achieving five nines means that a facility experiences no more than about 5 minutes and 15 seconds of unplanned downtime per year, or about 6 seconds a week. “Five nines” is an operational target that companies often use in Service Level Agreements (SLAs), and it’s usually measured internally through uptime monitoring rather than certified by a third party.
To quantify uptime objectively, the Uptime Institute’s Tier Certification is the most widely recognized independent measure of data center infrastructure resilience. The Uptime Institute is a globally recognized organization specializing in the standardization, evaluation, and certification of data center performance and operational sustainability. Founded in 1993, it is best known for creating the Tier Classification System, which ranks data centers from Tier I through Tier IV based on their infrastructure redundancy, fault tolerance, and ability to maintain operations during maintenance or failures.
- Tier I represents basic capacity, with dedicated space for IT systems, uninterruptible power supply (UPS), and cooling, but no redundancy, which means a single equipment failure can cause downtime. This level typically achieves about 99.671 percent uptime, or roughly 28.8 hours of downtime annually.
- Tier II builds on these basics by adding redundant power and cooling components (N+1), allowing some failures to occur without interrupting operations, and improves uptime to about 99.741 percent, or around 22 hours of downtime per year.
- Tier III facilities are designed for concurrent maintainability, with all critical systems fully redundant and multiple independent distribution paths, though only one is active at a time. Maintenance can be performed without taking the facility offline, resulting in about 99.982 percent uptime, or approximately 1.6 hours of downtime annually.
- Tier IV, the highest rating, is fully fault-tolerant, with two active, independent distribution paths for both power and cooling, allowing continuous operation even during component or system failures. This tier delivers about 99.995 percent uptime, or just 26.3 minutes of downtime each year, and is reserved for mission-critical operations where uninterrupted service is essential.
In the global market, most enterprise and colocation data centers operate at Tier III, which balances high reliability (about 99.982 percent uptime) with lower cost and complexity compared to Tier IV. Tier IV is less common due to the significantly higher construction and operational expense, and it’s generally reserved for sectors like banking, military, or critical government operations where uninterrupted service is non-negotiable.
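The downtime figures quoted above follow directly from the availability percentages. A minimal sketch of the conversion, assuming an 8,760-hour year:

```python
def annual_downtime_hours(availability_pct, hours_per_year=8760):
    """Convert an availability percentage into allowable downtime per year."""
    return (1 - availability_pct / 100) * hours_per_year

targets = {
    "Five nines": 99.999,
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}
for label, pct in targets.items():
    hours = annual_downtime_hours(pct)
    print(f"{label} ({pct}%): {hours:.2f} hours (~{hours * 60:.1f} minutes) per year")
# Five nines works out to roughly 5.3 minutes; Tier I to roughly 28.8 hours.
```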

Private data centers are often incorporated into existing structures but pose a challenge for engineers to ensure that uptime and cooling requirements are met.
In addition to the Tier Standard, the Uptime Institute provides certifications such as Tier Certification of Design Documents (TCDD), Tier Certification of Constructed Facility (TCCF), and Tier Certification of Operational Sustainability (TCOS). These certifications are earned through rigorous, independent assessments that verify a facility’s compliance with the standards.
Classifying Data Centers
Now that we have explored the basic evolution, size, type, and expectation of data centers, a final element is classifying them based on potential power consumption. Data centers are typically classified in terms of how many megawatts (MW) can be provided to the facility. A megawatt (MW) is one million watts, or a thousand kilowatts. For context, one megawatt is the amount of electricity needed to power approximately 750 to 1,000 average American homes.
For data centers, the more servers, networking gear, and cooling infrastructure inside the facility, the more electricity is required. The MW rating identifies how much computing equipment the building can run simultaneously, along with the cooling and electrical systems that support it. A MW classification does not translate directly to how many gigabytes or how many transactions a facility can handle, as two facilities with the same MW rating might perform vastly different tasks. Cloud storage generally requires less energy than AI-specific or high-density compute centers. Smaller edge facilities often operate with 1-5 MW power capacity, while the Citadel Campus mentioned earlier is rated at roughly 130 MW.
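As a rough illustration of how an MW rating maps to IT capacity, the hedged sketch below divides a facility's rating across assumed per-rack densities (the 2–5 kW air-cooled and 20–50 kW high-density figures discussed later in this course). It ignores the share of power consumed by cooling and distribution losses, so the results are upper bounds, not design numbers.

```python
def max_racks(facility_mw, rack_kw):
    """Upper-bound rack count if the entire MW rating fed IT equipment (illustrative)."""
    return int(facility_mw * 1000 // rack_kw)

facility_mw = 130  # roughly the Citadel Campus rating cited above
for rack_kw in (5, 30, 50):  # assumed per-rack densities in kW
    print(f"{rack_kw} kW racks: at most {max_racks(facility_mw, rack_kw):,} racks")
```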

Designing for the Technology Within
Modern data centers are complex environments where digital infrastructure intersects with sophisticated building systems. Behind the digital services they enable lies a physical world of structural engineering, mechanical precision, and advanced system integration, all designed around the single objective of ensuring continuous, reliable operation. To support the enormous computing power housed within, data centers must withstand environmental extremes, cyber and physical threats, and the relentless heat generated by high-density processors.
Design Standards
As with any building design, architects and engineers should identify and understand the performance expectations of the building before starting the design. Because resilience and uptime are generally the key expectations for data centers, clearly documenting the expectations of the client early in the process is critical. This means communicating with the client to understand the type, style, and, importantly, the level of uptime the project is intended to achieve. Architects and engineers must then coordinate across multiple domains, ensuring seamless interaction between structural elements, environmental systems, and IT infrastructure.
Assisting in achieving the Uptime Institute Tier levels are two key standards that can guide the design and operation of data centers. These are the ANSI/BICSI 002 and TIA-942-C standards. ANSI (American National Standards Institute) and BICSI (a professional association supporting the advancement of the information and communications technology (ICT) profession) released the ANSI/BICSI 002 standard, which provides a comprehensive set of best practices for the planning, design, construction, and commissioning of data centers. It addresses critical aspects such as site selection, architectural layout, electrical and mechanical systems, cabling, and operational sustainability.
ANSI/BICSI 002 explicitly addresses resilience to seismic activity, tsunami or flood zones, and high-wind environments as part of its guidance on safe and reliable data center site selection. Seismically active regions demand reinforced racks, braced raised floors, and vibration-dampening systems. The standard directs designers to research seismic charts and assess the level of seismic activity at the proposed site, then specify systems for equipment like racks and cabinets to withstand anticipated seismic forces.
For flooding or potential risks from tsunamis, river overflow, tidal basins, or dam failure, the standard emphasizes choosing locations outside high-risk areas and consulting flood hazard maps during site evaluation. In flood-prone areas, critical spaces are elevated above grade, and dry-proofing methods such as perimeter berms and sealed penetrations are integrated into the building envelope.
Likewise, ANSI/BICSI 002 advises that facilities ideally be located in zones where wind speeds do not exceed 120 mph or, at a minimum, that the structure be designed to withstand higher wind loads.
TIA-942-C, developed by the Telecommunications Industry Association, focuses specifically on telecommunications infrastructure. It defines requirements for structured cabling, network topology, and redundancy, while also incorporating specifications for physical facilities, power, cooling, and security.

Location of data centers relies heavily on access to consistent power, water, and telecommunication lines.
When used together, the standards offer a successful path for the design and operation of a data center. ANSI/BICSI 002 provides broad, holistic design guidance for the building, and TIA-942-C ensures compliant telecom and cabling systems. Both standards support design principles that align with Uptime Institute Tier levels. For example, a Tier II facility, which typically incorporates redundant capacity components but a single distribution path, can draw from ANSI/BICSI 002 for efficient space planning and from TIA-942-C for meeting cabling and power redundancy guidelines. A Tier IV facility, designed for fault tolerance with multiple active distribution paths, requires strict adherence to both standards to ensure maximum reliability.
Increased Security and Fire Safety
After the initial design, layout, and technical specification of the building is complete, there remain other considerations to help ensure the data center operates reliably and safely.
Fire is a key concern, especially in vulnerable areas and where extreme amounts of power are required. Water is no friend to electronic systems, at least when it comes to fire suppression, and clean-agent suppression systems such as FM-200 and Novec 1230 are favored over water-based sprinklers. These systems extinguish flames without harming sensitive electronic equipment and are often paired with advanced smoke detection that can sense particles at the earliest stage of combustion.
Data centers also must be hardened against man-made hazards. While the software firewall protection and cybersecurity requirements are not within the purview of the architect, physical security design should be a priority.
Most data centers include multiple layers of access control and incorporate mantraps for additional security. Mantraps—physical security systems designed to control and verify access to sensitive areas such as the data hall—allow only one person to pass through at a time after identity verification. A mantrap consists of a small vestibule or enclosed space with two interlocking doors where the first must close and lock before the second will open. This prevents “tailgating,” or an unauthorized person following someone else, and provides an additional checkpoint for identity and security clearance.
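To make the interlock sequence concrete, here is a hedged sketch of the control logic described above: one door may open only while the other is closed, and each stage requires verification. The class and checks are illustrative, not any vendor's access-control API.

```python
class Mantrap:
    """Simplified two-door interlock: only one door may be open at a time,
    and the inner door releases only after credentials are verified."""

    def __init__(self):
        self.outer_open = False
        self.inner_open = False

    def open_outer(self, badge_ok: bool) -> bool:
        # Outer door releases only if the inner door is closed and the badge checks out.
        if badge_ok and not self.inner_open:
            self.outer_open = True
        return self.outer_open

    def close_outer(self):
        self.outer_open = False

    def open_inner(self, badge_ok: bool, biometric_ok: bool) -> bool:
        # Inner door releases only after the outer door is closed and locked
        # and the occupant passes a second verification step.
        if badge_ok and biometric_ok and not self.outer_open:
            self.inner_open = True
        return self.inner_open

# Typical pass-through: badge in, outer door closes, second check, inner door opens.
trap = Mantrap()
trap.open_outer(badge_ok=True)
trap.close_outer()
print(trap.open_inner(badge_ok=True, biometric_ok=True))  # True
```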
Other security features often employed in data centers include perimeter fencing, anti-ram barriers, biometric entry, 24/7 surveillance, and security personnel and monitoring stations. In high-security applications, wall systems and enclosures may be rated for blast resistance or ballistic impact, further reducing vulnerability to sabotage or intrusion.
Protecting the electrical infrastructure of a data center is as critical as the servers it supports. Redundant utility feeds, uninterruptible power supplies (UPS), and on-site backup generators ensure continued operation during grid failures or maintenance. Modern facilities are increasingly incorporating Battery Energy Storage Systems (BESS) that use lithium-ion or flow battery technologies to reduce reliance on diesel generators and also improve response time during transitions. These systems help support sustainability goals by reducing greenhouse gas emissions and offering integration with renewable energy sources.
Cooling: A Mission-Critical System
While processor types and chips are constantly being designed to increase speed and computing capacity, one overarching factor has remained constant in data center design: the need to keep cool.
Managing the heat generated within the data center is one of the greatest challenges. To better understand the challenges of heat, we will explore what generates heat, the consequences of poor heat management, and the current cooling systems used in data centers.
What Makes the Heat
Data centers are organized into racks of servers or processors, each with its own chips installed. A rack is a standardized metal framework used to mount servers, storage devices, networking hardware, and other IT equipment in a secure and organized way. The most common type is the 19-inch rack, referring to the width between the vertical mounting rails. The rack rail systems are used to hold nodes. Nodes are basically the boxes where processing and computing units like CPUs (Central Processing Units), GPUs (Graphics Processing Units), and AI accelerators such as TPUs (Tensor Processing Units) are installed. These boxes are measured in rack units, “U” for height, where U = 1.75 inches. Common sizes for nodes are 4U, or 7 inches tall. Racks are also classified this way, with typical heights of 42U, or a rack about six feet tall. Depending on the configuration, usually dictated by cooling capacity and power supply, a single rack may fit 8-10 nodes, and each node may contain 8 GPUs or similar chips.
Although real-world examples will vary, for popular GPUs like the NVIDIA H100, this configuration pencils out to roughly 64 individual GPUs per rack.
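The arithmetic behind these figures is straightforward. The sketch below uses the 1U = 1.75-inch convention and the illustrative node and GPU counts cited above; actual configurations vary with cooling and power limits.

```python
U_INCHES = 1.75  # one rack unit ("U") equals 1.75 inches of vertical space

rack_height_u = 42
node_height_u = 4
nodes_per_rack = 8   # illustrative; cooling and power often cap this at 8-10
gpus_per_node = 8

rack_height_in = rack_height_u * U_INCHES
print(f"42U rack: {rack_height_in:.1f} in (~{rack_height_in / 12:.1f} ft) tall")
print(f"4U node: {node_height_u * U_INCHES:.1f} in tall")
print(f"GPUs per rack: {nodes_per_rack * gpus_per_node}")  # ~64, as noted above
```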
Heat is generated in the GPUs by billions of microscopic transistors (the NVIDIA H100 contains around 80 billion transistors) that toggle on and off billions of times per second. These switching operations generate heat due to the resistance of semiconductor materials and the leakage currents that persist even when transistors are idle. The result is substantial thermal output. Power load and heat load are effectively synonymous in a data center context, where 1 watt of electrical power consumed equals roughly 1 watt of heat output.
The NVIDIA H100 can draw 700 watts or more under load. A rack of 64 chips can equate to a heat load between 30 and 50 kW.
For perspective, a typical home space heater is rated at about 1.5 kW, meaning it produces roughly 5,100 BTUs of heat per hour. A single high-density server rack at 30 kW generates about 20 times that heat output, or around 102,000 BTUs per hour. A space heater can warm a small room, but a high-density rack’s heat output is more comparable to the combined load of an entire commercial kitchen or a small residential furnace running at full power.
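Those comparisons rest on the standard conversion of roughly 3,412 BTU per hour for each kilowatt of continuous load; a minimal sketch:

```python
BTU_HR_PER_KW = 3412  # 1 kW of continuous electrical load = ~3,412 BTU/hr of heat

space_heater_kw = 1.5
rack_kw = 30.0

heater_btu_hr = space_heater_kw * BTU_HR_PER_KW  # ~5,100 BTU/hr
rack_btu_hr = rack_kw * BTU_HR_PER_KW            # ~102,000 BTU/hr

print(f"Space heater: {heater_btu_hr:,.0f} BTU/hr")
print(f"30 kW rack:   {rack_btu_hr:,.0f} BTU/hr "
      f"({rack_btu_hr / heater_btu_hr:.0f}x the space heater)")
```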
While enterprise and colocation facilities may house hundreds of racks in their data halls, very few data centers are capable of rejecting that much heat energy.

Cooling is a critical part of data center design, and louvers and screens can assist in both supporting chillers as well as improving the aesthetics of the project.
When processors in a data center overheat, the effects can be immediate and severe, impacting both performance and operational stability. Modern CPUs and GPUs are designed with thermal protection mechanisms called thermal throttling that automatically reduce their processing speed to lower heat output and prevent permanent damage. While this protects the hardware, it significantly reduces computational performance, slowing down workloads, increasing latency, and in some cases causing service interruptions.
Note that most laptops and home computers have this safeguard in place as well. Many laptops are designed to expel heat through the keyboard, which is why it is important to keep them open even when they are connected to external monitors. Likewise, providing ample airflow around towers and keeping ventilation screens clean will both improve performance and increase longevity.
If temperatures continue to rise beyond safe limits, processors may shut down entirely to avoid catastrophic failure. This can trigger system crashes, loss of in-progress data, and, in clustered environments, overloads on neighboring nodes as workloads are redistributed. In extreme cases, persistent overheating can shorten component lifespan, degrade reliability, and lead to costly hardware replacements. In the context of large-scale data centers, processor overheating can also breach Service Level Agreements (SLAs) if uptime or performance guarantees are missed. This is why facilities invest heavily in precision cooling systems, thermal monitoring, and airflow management.
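The protection sequence described above amounts to a simple control policy: throttle when a warning threshold is crossed, shut down when a critical limit is reached. The thresholds below are placeholders for illustration, not any particular chip's specification.

```python
def thermal_response(temp_c, warn_c=85, critical_c=100):
    """Illustrative thermal-protection policy; real silicon uses vendor-defined
    limits and much finer-grained frequency and voltage scaling."""
    if temp_c >= critical_c:
        return "shutdown"    # emergency stop to avoid permanent damage
    if temp_c >= warn_c:
        return "throttle"    # reduce clock speed and accept lower performance
    return "full speed"

for reading_c in (70, 88, 97, 103):
    print(f"{reading_c} C -> {thermal_response(reading_c)}")
```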
Chilling Out
There are two main systems used to capture and expel the heat energy from data halls: air-cooled systems and liquid-cooled systems.
Air-Cooled Systems
Air-cooled systems have been the norm since the first data center design. Originally, smaller server rooms used the building’s basic HVAC system to maintain operational temperatures, but as size, scale, and processing power increased, air-cooled systems also grew.
Raised-floor designs, introduced in the 1960s, allowed chilled air from computer room air conditioners (CRACs) to flow upward through perforated tiles, delivering cool air to server intakes while hot exhaust air was returned to the CRAC units for re-cooling. Over time, strategies like hot/cold aisle arrangements and containment systems emerged.
Hot/cold aisle containment is a data center design strategy that separates the paths of hot and cold air. In a typical server layout, racks are arranged in alternating rows so that the fronts of two rows face each other, forming a cold aisle, while the backs of two rows face each other, forming a hot aisle.
Cold aisles are supplied with chilled air from the facility’s cooling system, directed toward the intake side of the servers. After passing through the equipment and absorbing heat, the exhaust air is expelled into the hot aisle. Containment systems are designed to enclose the cold aisle and keep the supply air separated until it reaches the server intakes. Conversely, containment systems may also partition the hot aisle to capture and return heated exhaust directly to the CRACs.
This approach worked well when rack power densities averaged 2–5 kilowatts, but became overwhelmed as processing demands grew with cloud computing and AI workload densities climbing into the 20–50 kW range. The limited heat-carrying capacity of air then became a bottleneck, requiring massive airflow volumes and higher fan energy use, which increased operational costs and sometimes still failed to maintain optimal component temperatures.
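The bottleneck shows up in a standard sensible-heat estimate for air: near sea level, heat removed (BTU/hr) ≈ 1.08 × CFM × ΔT(°F). The hedged sketch below shows how the required airflow scales with rack density, assuming a 20°F rise between supply and return air.

```python
BTU_HR_PER_KW = 3412     # 1 kW = ~3,412 BTU/hr
SENSIBLE_FACTOR = 1.08   # standard-air approximation: BTU/hr = 1.08 * CFM * dT(F)

def required_cfm(rack_kw, delta_t_f=20):
    """Approximate airflow (CFM) needed to carry away a rack's heat."""
    return rack_kw * BTU_HR_PER_KW / (SENSIBLE_FACTOR * delta_t_f)

for rack_kw in (5, 20, 50):  # legacy versus high-density rack loads
    print(f"{rack_kw} kW rack: ~{required_cfm(rack_kw):,.0f} CFM at a 20 F rise")
# Airflow scales linearly with load, which is why 20-50 kW racks overwhelm air systems.
```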
The demand for more processing speed, and with it the generation of more heat, has led to the development of liquid-cooling solutions. However, air-cooled data centers are still being constructed today, with heat energy expelled through traditional ventilation systems, which can provide an economic benefit for smaller, niche data center designs.
Liquid-Cooled Technology
Liquid cooling has deep roots in computing history, dating back to the 1960s and 70s, when high-performance mainframes such as IBM’s System/360 used water circulation to manage heat.
As computing shifted toward smaller, air-cooled servers in the 1980s and 90s, liquid cooling largely disappeared from mainstream data centers and was only utilized in specialized supercomputers. The rise of high-density computing in the 2000s for scientific research and financial modeling revived interest in liquid solutions due to air cooling’s limitations. The tipping point came in the 2010s, as more sophisticated and faster GPUs and AI accelerators began pushing rack densities beyond what air could efficiently manage.
Today, liquid cooling is no longer a niche but a rapidly growing, if not dominant, segment of the data center industry, especially in AI, HPC, and hyperscale environments. Water and other coolants have much higher thermal conductivity than air, allowing them to absorb and transport heat away from processors with far greater efficiency. Liquid systems can also specifically target areas within the hot aisle, increasing efficiency and reducing ambient temperatures more effectively. Liquid-cooled systems, when designed, installed, and maintained properly, can enable higher densities of nodes within racks, reduce energy use, and improve performance per watt in individual chips.
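A rough comparison of heat-transport capacity illustrates the advantage: carrying the same 30 kW rack load takes a few thousand times less volume flow of water than of air, because water's volumetric heat capacity is far higher. The property values below are standard textbook approximations.

```python
# Approximate fluid properties at typical operating conditions
AIR_DENSITY, AIR_CP = 1.2, 1005        # kg/m^3, J/(kg*K)
WATER_DENSITY, WATER_CP = 997, 4186    # kg/m^3, J/(kg*K)

def volume_flow(heat_w, density, cp, delta_t_c):
    """Volume flow (m^3/s) needed to carry heat_w watts at a given temperature rise."""
    return heat_w / (density * cp * delta_t_c)

heat_w = 30_000  # a 30 kW rack
air_flow = volume_flow(heat_w, AIR_DENSITY, AIR_CP, delta_t_c=11)       # ~20 F rise
water_flow = volume_flow(heat_w, WATER_DENSITY, WATER_CP, delta_t_c=10)

print(f"Air:   ~{air_flow:.2f} m^3/s")
print(f"Water: ~{water_flow * 1000:.2f} L/s")
print(f"Air needs roughly {air_flow / water_flow:,.0f}x the volume flow of water")
```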
Direct-to-Chip Cooling: This conduction method of heat transfer delivers coolant directly to cold plates mounted on CPUs, GPUs, or other high-heat components. Coolant inside the plates then absorbs that heat through convection as it flows to a heat exchanger via a closed-loop system. It’s efficient, compact, and allows targeted cooling without immersing the entire server.
Rear-Door Heat Exchangers: A rear-door heat exchanger is mounted on the back of a rack and uses liquid-cooled coils to capture and remove heat from exhaust air via convection. Fans pull heated air over the coils, resulting in a much lower ambient temperature and light load for the CRAC. The heated fluid is then pumped to a heat exchanger or chiller elsewhere in the facility, where it is expelled. Often, these systems can be incorporated into existing air-cooled hardware, making it economical and expedient to upgrade the cooling system without a significant redesign of the infrastructure or extended downtime for the servers.
Immersion Cooling: Immersion cooling submerges entire servers in a dielectric (non-conductive) fluid that directly absorbs heat from all components. The warmed fluid is circulated to an external heat exchanger, allowing extremely high rack densities and eliminating most internal fans, which reduces power consumption and noise. While highly effective at cooling, immersion cooling systems are not compatible with all rack, node, and chip types. Also, these systems are heavy and require additional engineering and loading considerations.
Liquid-cooled systems are becoming more attractive for new data center designs because they can enable much higher rack densities, reduce the size of mechanical systems, and offer greater efficiency in energy use. However, these systems also come with some risk and potential maintenance issues. Liquid-cooled systems require pumps, heat exchangers, and fluid distribution infrastructure that must be integrated into the architectural and mechanical layout of the building. Operators must plan for redundancy in fluid systems just as they would for electrical infrastructure. Maintenance practices also need to evolve, as servicing a liquid-cooled rack involves fluid containment and heat exchanger management in addition to traditional IT protocols. Liquid systems also must accommodate potential leak detection and maintenance.
Another growing concern is the amount of water used by these systems. Many systems that use water-to-liquid heat exchangers or evaporative cooling depend on significant volumes of water to reject heat from the facility. In regions facing water scarcity, this can strain local resources and create environmental or regulatory challenges. While direct-to-chip or immersion cooling can reduce the total volume of water needed inside the data hall, the facility still requires a heat rejection method, and cooling towers are a common choice. These towers consume water through evaporation and blowdown, sometimes amounting to millions of gallons annually in large-scale operations.
To address the issue, operators are increasingly turning to closed-loop systems, recycled or non-potable water sources, and even dry coolers in suitable climates.
One potential benefit of liquid cooling is that it enables the recovery and reuse of waste heat. In traditional air-based systems, this heat is often exhausted into the atmosphere. Liquid systems, by contrast, can direct captured heat into hydronic systems that serve adjacent buildings or industrial facilities. In colder regions, district heating networks can integrate data center heat as a renewable energy source. Cities like Stockholm and Odense have implemented such systems, capturing excess heat from racks and distributing it to homes and commercial buildings through insulated pipe networks.
Liquid cooling has additional design implications as well. Because liquid systems enable greater heat control and server density, data centers can reduce their physical footprint. This is particularly advantageous in urban settings or in adaptive reuse projects where space is limited.
Hybrid Cooling Solutions and the Role of Louvers in Data Center Design
As computing demands continue to escalate, data center cooling strategies are evolving to balance performance, efficiency, and adaptability. While liquid cooling is rapidly gaining traction, air cooling remains a viable and cost-effective solution in many facilities. Forward-looking designs are embracing hybrid cooling models, where both air and liquid systems operate in parallel. This approach allows operators to match cooling methods to workload profiles, power density, and equipment type.
Even when data centers are designed specifically around liquid cooling and hot/cold aisle configurations, controlled airflow through the building envelope remains a critical part of an effective envelope strategy.
Louvers, in this case, provide an opportunity to support data center building design in many strategic ways. Specifically, proper louver specification and procurement can help with effective ventilation, water mitigation, and noise control, and improve building aesthetics.
Performance and Efficiency Through Envelope Integration
Due to the imperative need for data centers to be designed, specified, and built to a standard that supports uptime goals and resilience, selecting building materials and products that satisfy related building standards is critical. For louvers, there are several recognized standards that evaluate the performance and durability of these necessary design elements.
The AMCA 500-L standard is the industry benchmark for testing the air and water performance of louvers. It establishes uniform methods for measuring air leakage, pressure drop, and wind-driven rain penetration across louver assemblies.
In mechanical systems, pressure drop refers to the reduction in air pressure as air moves through components such as ducts, filters, coils, and louvers. This drop occurs because airflow encounters resistance such as friction against surfaces, turbulence from changes in direction, and obstructions within the airstream. In the design of a data center, excessive pressure drop in intake or exhaust paths forces fans to work harder to maintain required airflow volumes. This increases energy consumption, raises operational costs, and can reduce overall cooling efficiency. If airflow is insufficient due to high pressure drop, server inlet temperatures may rise, triggering fan speed increases inside IT equipment and potentially leading to thermal throttling or even equipment shutdowns.
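The energy cost of pressure drop can be approximated from basic fan physics: fan power ≈ airflow × total pressure rise ÷ fan efficiency. The sketch below compares two hypothetical intake paths at the same airflow, differing only in louver pressure drop; every value is illustrative, not manufacturer data.

```python
def fan_power_w(flow_m3_s, pressure_pa, fan_efficiency=0.6):
    """Approximate fan power needed to move flow_m3_s against pressure_pa."""
    return flow_m3_s * pressure_pa / fan_efficiency

flow = 40.0           # m^3/s of intake air (assumed facility-scale airflow)
base_system_pa = 300  # assumed duct, filter, and coil losses
louver_a_pa = 30      # assumed low-pressure-drop louver
louver_b_pa = 90      # assumed restrictive or undersized louver

p_a = fan_power_w(flow, base_system_pa + louver_a_pa)
p_b = fan_power_w(flow, base_system_pa + louver_b_pa)
print(f"With louver A: {p_a / 1000:.1f} kW of fan power")
print(f"With louver B: {p_b / 1000:.1f} kW of fan power "
      f"(+{(p_b - p_a) / 1000:.1f} kW running continuously)")
```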
To reduce the potential for pressure drop, louvers should be specified to maintain adequate, reliable airflow. Everything from size, blade count, and orientation to installation details can impact the performance of the ventilation system.
Equally important is the need to protect the building from moisture, debris, and even pest intrusion. Products satisfying AMCA 500-L ensure that intake and exhaust systems deliver the required airflow and block moisture and debris while maintaining consistent ventilation.
A related standard, ASTM E330, evaluates the ability of these systems to resist uniform and cyclic static pressure, that is, a building material's ability to withstand external environmental forces such as high winds without deformation. Failure to perform reliably in an increasingly unpredictable world of high-wind events, storms, and heavy precipitation can lead to catastrophic failure for data centers. Louvers, because of their unique position as a transition element between an internal mechanical system and the building envelope, must be able to withstand external forces. This is especially critical for large rooftop or perimeter installations in exposed locations.
ASHRAE guidelines published in “Thermal Guidelines for Data Processing Environments” (ASHRAE TC 9.9, 5th Edition, 2021) emphasize maintaining a narrow range of temperature and humidity for optimal equipment reliability. Louvers contribute to this stability by enabling consistent airflow while preventing environmental contaminants from disrupting internal climate control. In combination with strategic plenum design, ceiling clearances, and hot aisle/cold aisle containment, the envelope becomes an active participant in maintaining thermal efficiency.
Fire safety is another essential consideration. Materials used in or around louvers must comply with ASTM E84 for flame spread and smoke development, and in many cases NFPA 285 to prevent vertical flame propagation in wall assemblies. These requirements protect not only the equipment inside but also adjacent structures and occupants.
Noise Control and Community Impact
Modern data centers are often sited near commercial or residential areas, making noise mitigation a priority. The continuous operation of fans, generators, and cooling systems can create a persistent low-frequency hum or broadband noise. Without proper mitigation, this can affect neighboring buildings, particularly in quieter residential zones.
To address this, data center designers now frequently specify acoustic louvers, noise-dampening wall assemblies, and mechanical enclosures engineered to absorb or redirect sound. ASTM E90 is the standard test method for measuring airborne sound transmission loss through building elements such as walls, floors, doors, windows, and louvers. The results are expressed as a Sound Transmission Class (STC) rating, indicating how well a product reduces noise passing through it. In data centers, acoustical louvers designed and tested per ASTM E90 can significantly reduce mechanical equipment noise. Satisfying the requirements of ASTM E90 is particularly important in jurisdictions with strict environmental noise ordinances, where compliance helps preserve community relations and avoid operational restrictions.

Louver fin style, size, and position can impact both ventilation flow and noise transmission. These are key concerns when designing for the aggressive cooling requirements of data centers.
The Demand for Power and Need for Aesthetics
Modern data centers are typically windowless, industrial-looking buildings with high perimeter security and aesthetics as an afterthought. For neighboring communities, these structures can appear stark or intrusive, particularly when built near residential or mixed-use areas. Often, data center designs relegate chillers, HVAC, and even power generation equipment to the rooftop. With unsightly mechanical equipment in full view, both the added noise and the appearance can negatively impact neighboring attitudes and property values.
Adding to the challenge of data center design is the increasing demand for power, or, more specifically, the lack of access to it. Securing electric service in key markets can currently take up to two years longer than it does for traditional commercial facilities. While the economy increasingly hinges on the expansion of data centers in both size and number, the aging power grid is failing to keep up.
Responding to this, many data centers are now shifting toward on-site power generation, which has significant implications for both the developer and the local community. By 2030, projections suggest that 38% of data centers will incorporate on-site generation for primary power, and notably, almost 30% will be fully powered by on-site sources. This is more than double the percentage expected for on-site power generation just a year ago.
While traditional backup power generators have been large diesel units, natural-gas generators and fuel cell assemblies are currently the most popular choices for primary energy generation. Emerging technologies such as modular microreactors or repurposed nuclear plant infrastructure have also captured the attention of both the media and environmentalists as the industry grapples with the demand for more power.
This trend of primary on-site power generation has given rise to sprawling equipment yards adjacent to main data center buildings. The scale and industrial appearance of bulky generator containers, stacks, turbines, and support systems can clash harshly with residential neighborhoods.
The challenge of addressing the demand for power with the need for aesthetics presents a unique opportunity for architects to create a cohesive and functional design for all elements of the data center compound.
Louvers, Screens, and the Aesthetics of Data Center Design
Whether on the roof or adjoining equipment yard, architects and developers are placing renewed emphasis on visual integration and façade treatment to reduce the impersonal and industrial perception of data centers.
The visual screening function of louvers and mechanical enclosures, as well as grilles, screen walls, and vegetative berms, can reduce the perceived industrial scale of data center infrastructure. Integrated with architectural cladding or branding elements, these elements help facilities present a refined, purpose-built appearance that suggests a more integrated visual structure. Louvers can be used as design elements, even if they are not functional. For instance, installing a series of louvers horizontally where only one or a few are incorporated into the mechanical system can create an architectural element of consistency and style.
For rooftops and especially equipment yards, perforated metal panels or architectural screens can be used to shield unsightly mechanical equipment from public view. When specified in tandem with louver design, architects can enhance the overall appearance of the building and create a more welcoming and unobtrusive visual for the neighborhood. Screens can also address the issue of acoustics by offering noise suppression. Careful specification of screens is important, especially in high wind areas, as these elements can provide physical shielding for sensitive equipment while allowing convenient access for maintenance.
Designing in the World of Data Centers
Modern data centers represent the convergence of rapid technological evolution, resilient building practices, and increasingly complex environmental and operational demands. From their origins as simple computing rooms to today’s sprawling hyperscale and AI-driven campuses, these facilities now play a pivotal role in global connectivity, commerce, and innovation. The rise of high-density computing has driven advancements in cooling technologies, energy efficiency, and redundancy strategies, requiring architects and engineers to think beyond traditional building systems. Today’s designs must balance uptime and performance with sustainability goals, resource efficiency, and community acceptance, ensuring that these critical infrastructures remain adaptable to emerging technologies.
Looking forward, the data center industry faces the challenge of accommodating ever-increasing computing power while minimizing environmental impact and maintaining operational resilience. Hybrid cooling systems, intelligent louver integration, acoustic control, and innovative facade treatments are all key considerations in how the next generation of data centers will not only process the world's information but also integrate into communities nationwide.
Andrew A. Hunt is Vice President of Confluence Communications and specializes in writing, design, and production of articles and presentations related to sustainable design in the built environment. In addition to instructional design, writing, and project management, Andrew is an accomplished musician and voice-over actor, providing score and narration for both the entertainment and education arenas. www.confluencec.com https://www.linkedin.com/in/andrew-a-hunt-91b747/