Artificial Intelligence
Learning Objectives:
- Outline the history of AI.
- Describe the possible design-process advantages and efficiencies of AI.
- Describe ways AI is being deployed to create architectural components, buildings, and cities that are responsive to environmental conditions and users’ needs.
- Discuss the privacy and security concerns associated with the collection of vast amounts of data necessary to use AI as a design tool.
This course is part of the Business of Architecture Academy
Although not a design firm, Sidewalk Labs, which is owned by Google’s parent company, Alphabet, has developed a tool called Delve that integrates financial parameters, energy models, and site constraints to help architects and planners design complex urban projects. It relies on generative design and AI to analyze factors such as density, daylight, and walkability, propose a range of options, and show the trade-offs inherent in each one. The software is used mostly during the planning, feasibility, and entitlement stages of a project to demonstrate how various schemes will meet city requirements and satisfy the client’s bottom line, says Violet Whitney, director of product management for Delve.
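Delve’s internals are proprietary, but the general pattern the paragraph describes (generating many candidate schemes and scoring each one against competing objectives so the trade-offs are explicit) can be sketched in a few lines of Python. Everything below, from the Scheme fields to the metric formulas and the weights, is an invented simplification rather than Delve’s actual model:

```python
import random
from dataclasses import dataclass

@dataclass
class Scheme:
    """A hypothetical massing option for a site (not Delve's data model)."""
    floors: int              # building height in storeys
    footprint_ratio: float   # fraction of the site covered by buildings
    open_space_ratio: float  # fraction of the site left as public open space

def metrics(s: Scheme) -> dict:
    """Toy stand-ins for density, daylight, and walkability scores (0-1)."""
    density = min(1.0, s.floors * s.footprint_ratio / 10)        # more floor area -> higher
    daylight = max(0.0, 1.0 - 0.07 * s.floors)                   # taller blocks cast more shadow
    walkability = 0.5 * s.open_space_ratio + 0.5 * density       # mix of open space and activity
    return {"density": density, "daylight": daylight, "walkability": walkability}

def generate_candidates(n: int = 200) -> list[Scheme]:
    """Randomly sample the design space (a real generative system is far more structured)."""
    out = []
    for _ in range(n):
        footprint = random.uniform(0.2, 0.7)
        out.append(Scheme(
            floors=random.randint(3, 14),
            footprint_ratio=footprint,
            open_space_ratio=round(1.0 - footprint, 2),
        ))
    return out

def rank(schemes: list[Scheme], weights: dict) -> list[Scheme]:
    """Weight the objectives to reflect a client's priorities, then sort candidates."""
    def score(s: Scheme) -> float:
        m = metrics(s)
        return sum(weights[k] * m[k] for k in weights)
    return sorted(schemes, key=score, reverse=True)

if __name__ == "__main__":
    candidates = generate_candidates()
    # A client prioritizing density and one prioritizing daylight see different "best" schemes:
    for label, w in [("density-first", {"density": .6, "daylight": .2, "walkability": .2}),
                     ("daylight-first", {"density": .2, "daylight": .6, "walkability": .2})]:
        best = rank(candidates, w)[0]
        print(label, best, metrics(best))
```

Changing the weights shifts which scheme ranks first, which is the sense in which such a tool makes trade-offs, rather than a single “right answer,” visible to clients and approval bodies.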
PHOTOGRAPHY: © FOSTER + PARTNERS/AUTODESK
TO DEVISE ADAPTIVE facades with self-deforming materials, Foster + Partners reversed the typical workflow, starting with the desired outcome and then allowing AI to figure out how to achieve it.
Artificial intelligence offers the potential for creating environments that are more responsive to their users’ needs at all scales, including the urban scale. Cities have been collecting vast amounts of data, and companies like Google have been offering services such as Street View and Google Earth for years now. AI can harness all this information, analyze it, and use it to help make places work better. Such data are the raw material that urbanists like Kevin Lynch and William H. Whyte had to generate periodically and painstakingly to develop their ideas; now these data are continually updated and available in real time, says Carlo Ratti, who directs MIT’s Senseable City Lab and practices architecture.
His firm, Carlo Ratti Associati, is using such data to map Pristina, the capital of Kosovo, to better understand the city’s public spaces for Manifesta 14, the European cultural biennale that will take place there in 2022. Ratti’s team is using algorithms to reveal hidden spatial and social patterns and to identify key squares, streets, parks, and green areas that are either underused or misused. This past summer, Ratti applied this information to initiate a series of temporary urban interventions that offer new ways of using and reclaiming these spaces. The third phase of the project will track residents as they “vote with their feet” and show how the reconfigured spaces actually function. This feedback will then help determine which interventions—such as converting the area around an old brick factory into an “urban living room” and turning space for cars into places for people—are retained for the future development of the city. “AI can help us understand visual clues that might not be apparent using older tools,” says Ratti.
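Ratti’s team has not published its methods, but the kind of analysis described, mining continuously updated location data to see which public spaces attract activity and which sit empty, can be illustrated with a toy calculation. The place names, visit counts, areas, and threshold below are invented for illustration only:

```python
# Hypothetical activity counts (e.g., aggregated, anonymized location pings)
# per public space, alongside each space's approximate area in square meters.
spaces = {
    "central_square":    {"daily_visits": 4200, "area_m2": 6000},
    "brick_factory_lot": {"daily_visits": 150,  "area_m2": 9000},
    "riverside_park":    {"daily_visits": 900,  "area_m2": 15000},
}

UNDERUSE_THRESHOLD = 0.05  # visits per square meter per day; an arbitrary cutoff

for name, s in spaces.items():
    intensity = s["daily_visits"] / s["area_m2"]
    status = "underused" if intensity < UNDERUSE_THRESHOLD else "active"
    print(f"{name}: {intensity:.3f} visits/m2/day -> {status}")
```

A real analysis would also weigh when and how spaces are used, not just how much, but the basic move is the same: turn continuous streams of observation into a ranking of where intervention might matter most.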
Ratti cautions that the recent explosion in data collection comes with very real concerns. “Ninety percent of all the data on the planet have been created in the last two years,” he says. Cities need to be transparent about how they collect, store, and use data. They also need to wrestle with privacy issues and prevent—or at least identify—biases that might be embedded in the information they use.
Ratti sees AI being used to “turn buildings into living things,” pointing to facades in particular. With sensors that collect information on humidity, temperature, and air quality, envelopes can respond in real time to enhance comfort, reduce energy use, and maximize efficiency. “Building facades today are corsets,” he says, “but we can make them living skins.”
One firm that has been exploring AI as a tool for devising adaptive facades is Foster + Partners. Working with Autodesk, Foster’s applied research-and-development group has been investigating self-deforming materials that can change their shape without any mechanical actuation. Instead of employing motorized louvers or other such devices, these materials respond to environmental conditions in the same way an eye’s iris does to light. They do this by combining thermo-active materials with passive laminates (multilayered materials, usually plastic) and exploiting the difference in expansion and contraction rates to change the shape of the facade.
Because there’s a nonlinear relationship between the laminates’ internal forces and their behavior, designing the material to work in a desired way is remarkably complex. So Foster used machine learning to build what is known as a “surrogate model” to study all of the interactions and how the various layers would react to changing conditions. Instead of repeatedly adjusting the arrangement of laminates to get the desired deformed state, the designers started with the preferred end state and let AI figure out how to get there.
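Foster + Partners and Autodesk have not released this code, so the following is only a minimal sketch of the surrogate-modeling and inverse-design pattern the paragraph describes: sample the design space, run an expensive simulation (faked here) to build training data, fit a cheap machine-learning surrogate, then search the surrogate for a laminate predicted to reach the desired deformed state. The parameter names and the simulate_deflection formula are assumptions, not the firm’s physics:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulate_deflection(params: np.ndarray) -> np.ndarray:
    """Stand-in for an expensive physics simulation of a thermo-active laminate.
    Columns: [layer thickness (mm), expansion ratio, temperature (C)].
    Returns tip deflection in mm; the nonlinear form is invented for illustration."""
    t, r, temp = params[:, 0], params[:, 1], params[:, 2]
    return (r - 1.0) * (temp - 20.0) ** 1.5 / (t + 0.5)

# 1. Sample the design space and run the "simulator" to build training data.
X = np.column_stack([
    rng.uniform(0.5, 3.0, 500),    # layer thickness (mm)
    rng.uniform(1.0, 1.3, 500),    # expansion ratio between layers
    rng.uniform(20.0, 60.0, 500),  # ambient temperature (C)
])
y = simulate_deflection(X)

# 2. Fit a cheap surrogate that approximates the simulator's behavior.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
surrogate.fit(X, y)

# 3. Inverse design: instead of tweaking the laminate by hand, search for a
#    configuration whose predicted deflection at 45 C is closest to the target.
target_deflection = 12.0  # mm, the "preferred end state"
candidates = np.column_stack([
    rng.uniform(0.5, 3.0, 5000),
    rng.uniform(1.0, 1.3, 5000),
    np.full(5000, 45.0),
])
pred = surrogate.predict(candidates)
best = candidates[np.argmin(np.abs(pred - target_deflection))]
print(f"thickness={best[0]:.2f} mm, expansion ratio={best[1]:.3f} "
      f"-> predicted deflection {surrogate.predict(best[None, :])[0]:.1f} mm")
```

The payoff of the surrogate is speed: once trained, it can evaluate thousands of candidate laminates in the time the full simulation would take to evaluate one, which is what makes starting from the desired end state practical.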
In addition to such surrogate-modeling work, Foster is also exploring a more advanced form of machine learning that it calls “design-assistance” modeling, says Martha Tsigkari, a partner at Foster. The goal of this kind of modeling is to facilitate architectural processes that do not have definitive answers—those that require subjective approaches—and “work alongside the intuition of designers in the creative process.” The firm is trying to understand the potential of AI at different stages—from design to construction to building operation, says Tsigkari. Ideally, AI would create a continuous information loop, so feedback from the operation of a completed building would help architects with the design and construction of their next one.
AI, though, is not just for big firms and big projects. Suchi Reddy, whose 16-person New York–based practice, Reddymade, combines art and architecture, worked with Amazon Web Services (AWS) for two years to develop AI technologies for a kinetic light installation in the 90-foot-high central rotunda of the Smithsonian’s Arts and Industries Building in Washington, D.C. Called me + you, the installation was commissioned for Futures, an exhibition that opened in November in the Smithsonian’s original home, which had been closed since 2004 due to structural concerns. Visitors can speak into nine circular “listening stations” at the base of the piece and tell their “future visions.” An AI-driven system then translates the meaning and tone of the spoken words into a kinetic “mandala” of color and light in the piece’s central totem. Each person’s sentiment changes the pattern and color of the totem, creating a constantly changing collective vision of the future. A web app allows people from around the world to add their voices to the sculpture, providing readings of the global “temperature” of sentiments on the future at any given moment.
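The installation’s actual pipeline was built with Amazon Web Services machine-learning tools; the toy sketch below only illustrates the underlying idea of turning the sentiment of spoken words into color and light. The word lists, scoring rule, and hue mapping are all invented:

```python
import colorsys

# A tiny hand-rolled sentiment lexicon, standing in for a real speech-to-text
# and sentiment-analysis pipeline.
POSITIVE = {"hope", "bright", "better", "together", "green", "peace"}
NEGATIVE = {"fear", "worse", "alone", "dark", "loss"}

def sentiment_score(text: str) -> float:
    """Crude sentiment score in [-1, 1] based on word counts."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def sentiment_to_rgb(score: float) -> tuple[int, int, int]:
    """Map sentiment to hue: -1 -> cool blue, +1 -> warm amber (an arbitrary palette)."""
    hue = 0.6 - 0.5 * (score + 1) / 2   # 0.6 (blue) for negative, 0.1 (amber) for positive
    r, g, b = colorsys.hsv_to_rgb(hue, 0.8, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

if __name__ == "__main__":
    for vision in ["I hope cities will be green and bright",
                   "I fear we will be alone in a dark future"]:
        s = sentiment_score(vision)
        print(f"{vision!r} -> score {s:+.2f}, RGB {sentiment_to_rgb(s)}")
```

In the actual piece, each visitor’s contribution perturbs a shared, continuously updated display rather than producing a single fixed color, but the basic translation from language to light follows this kind of mapping.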
Reddy says her work uses “emotional AI” that blends physics, neuroscience, and data technology. “I want to integrate feelings with technology to help us engage on a human level,” she says.
Another architect using AI to explore the relationship between the built environment and psychology is Mona Ghandi, who runs Morphogenesis Lab, a cross-disciplinary program at Washington State University (WSU) that includes students in architecture, neuroscience, computer science, and materials science. With an interest in “compassionate spaces,” Ghandi and her Morphogenesis team created an AI-driven installation called Wisteria that responds to the emotions of people interacting with it. Exhibited at WSU’s Pullman campus from February to August 2020, Wisteria comprised a “forest” of cylindrical fabric “shrouds” suspended from the ceiling that changed shape and color depending on biometric data collected from the people moving underneath it. By weaving into the shrouds a shape-memory alloy programmed to respond to readings of visitors’ body temperature and pulse, the Morphogenesis team enabled the fabric cylinders to move and activate LEDs that change color. The work is “contingent on user involvement and engagement,” says Ghandi, and “illustrates the collective emotion” of the visitors at any particular moment.
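The control software for Wisteria has not been published; as a rough sketch of the pattern described here, collecting biometric readings, reducing them to a collective index, and driving the shape-memory-alloy wires and LEDs, one might write something like the following (the fields, thresholds, and actuation mapping are assumptions):

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class BiometricReading:
    """One visitor's readings; the real installation's sensors and ranges may differ."""
    pulse_bpm: float
    skin_temp_c: float

def collective_arousal(readings: list[BiometricReading]) -> float:
    """Rough 0-1 arousal index from average pulse and skin temperature."""
    if not readings:
        return 0.0
    pulse = mean(r.pulse_bpm for r in readings)
    temp = mean(r.skin_temp_c for r in readings)
    pulse_term = min(1.0, max(0.0, (pulse - 60) / 60))  # 60 bpm -> 0, 120 bpm -> 1
    temp_term = min(1.0, max(0.0, (temp - 33) / 4))     # 33 C -> 0, 37 C -> 1
    return 0.7 * pulse_term + 0.3 * temp_term

def actuate(arousal: float) -> dict:
    """Translate the index into commands for shape-memory wires and LEDs."""
    return {
        "sma_current_pct": round(100 * arousal),  # more current -> more contraction, more movement
        "led_rgb": (int(255 * arousal), 60, int(255 * (1 - arousal))),  # red when excited, blue when calm
    }

if __name__ == "__main__":
    crowd = [BiometricReading(72, 33.5), BiometricReading(95, 34.8), BiometricReading(110, 35.2)]
    level = collective_arousal(crowd)
    print(f"collective arousal = {level:.2f}", actuate(level))
```

Because the index is computed over everyone present, the piece reflects the group’s state rather than any one visitor’s, which is what Ghandi means by illustrating “the collective emotion.”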
Ghandi sees Wisteria as a first step in developing spaces that can respond to the needs of people with neurological and emotional conditions, such as autism and post-traumatic stress disorder, or that can improve the cognitive performance of children in school.
While AI is still in its infancy, architects are taking it in a wide range of directions—some of which may prove to be dead ends and others more successful. What’s clear, though, is that architects must get out in front of the technology or get run over by it.
Supplemental Materials
Harnessing AI to Design Healthy, Sustainable, and Equitable Places
By Phillip Bernstein, Mark Greaves, Steve McConnell, and Clifford Pearson