Data Centers Are Preparing for High-Density Computing and Advanced Cooling Technologies

July 24, 2024 • Ella Krygiel, BOMA International

Data centers are growing rapidly, and to keep pace with high-density power demands, decisions must be made about whether legacy data centers require an upgrade. The CBRE article, Five Considerations to Prepare Data Centers for High-Density Computing, offers tips for assessing these requirements, and its author, Barry Sullivan, Global Projects Director at CBRE Data Center Solutions, weighs in with his thoughts. Read Barry’s insights below to learn more about the thought process behind upgrading legacy data centers, the differences among advanced cooling systems and what trends we can expect for data centers moving forward.

The article, Five Considerations to Prepare Data Centers for High-Density Computing, describes the process for updating legacy data centers. How can data centers upgrade their equipment as technology continues to advance?

It’s a really exciting time for data centers. About 10 years ago, the widespread move to the cloud meant that additional cooling and power were required from standard data centers. Within the last 18 to 24 months, however, the acceleration of AI use has necessitated further changes in cooling technology, as it is needed to cool the chips in very high-density servers.

To support AI servers and applications, the majority of legacy data centers need a secondary cooling solution – either one that supplements air cooling, such as direct-to-chip cooling, or an entirely new cooling medium such as immersion cooling or active rear-door cooling. AI is real and it’s here. The mechanical and electrical infrastructure supporting and housing these technologies is running at breakneck speed to catch up with the requirements of the new AI chips. Data center design is much different than it was three to five years ago, and it is constantly evolving. CBRE, as the largest provider of outsourced operations in the data center sector, has a broad base of customer sites and uses that information to help transition legacy data centers to cloud-based, AI-ready environments.

What is the process for determining which cooling infrastructure is best? Can you break down the differentiators among the three cooling systems mentioned – direct-to-chip cooling, immersion cooling and active rear-door cooling – and how can one determine the best system to use?

As mentioned, the biggest changes we see in data centers are with advanced cooling technologies. All of these systems are quite new. As a general rule, extremely high-density installations will require immersion cooling. Medium- to high-density data centers would be supported by direct-to-chip cooling, with air cooling for the ancillary components of the servers. From a chip-cooling perspective, a liquid flows into the server on a continuous loop to extract the heat generated by the chip. Active rear-door cooling would be implemented for low- to medium-density data centers and supports the air at the server level, enabling more airflow across the circuit breakers and boards. However, there are a lot of variables to consider when implementing these cooling systems, such as the server manufacturer, external air temperatures and climate.
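To make that rule of thumb concrete, here is a minimal sketch of the selection logic in Python. The kilowatt-per-rack thresholds are illustrative assumptions rather than figures from the article, and any real decision would also weigh the variables Barry mentions, such as server manufacturer, external air temperatures and climate.

```python
def recommend_cooling(rack_density_kw: float) -> str:
    """Map rack power density to the cooling approach described above.

    The numeric cut-offs are illustrative assumptions only; real
    thresholds depend on the server manufacturer, external air
    temperatures and climate.
    """
    if rack_density_kw >= 80:   # extremely high density
        return "immersion cooling"
    if rack_density_kw >= 30:   # medium to high density
        return "direct-to-chip cooling, with air cooling for ancillary components"
    # low to medium density
    return "active rear-door cooling supporting server-level airflow"


if __name__ == "__main__":
    for kw in (15, 45, 100):
        print(f"{kw} kW/rack -> {recommend_cooling(kw)}")
```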

It is mentioned that these newer cooling technologies can enhance energy efficiency – much more so than traditional models. Can you describe what makes them energy efficient and what technologies are implemented to ensure they consume less power?

In legacy data centers, the mechanical plant and equipment required to dissipate heat were primarily chilled-water systems, air handling units and air-conditioning units, and these required a significant amount of power to run. If you immerse servers in liquid, heat dissipation is nearly instantaneous and does not require as much power and infrastructure. This process removes much of the mechanical plant you would otherwise need for heat dissipation, which reduces energy costs and, in turn, maintenance costs, because there is significantly less equipment to service, operate and maintain.
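One common way to quantify this effect is power usage effectiveness (PUE), the ratio of total facility power to IT power. The sketch below is a minimal illustration using hypothetical figures, not CBRE data: it assumes a legacy air-cooled facility whose chillers and air handlers add roughly 60 percent overhead, versus a liquid-cooled facility adding roughly 20 percent.

```python
def pue(it_power_kw: float, overhead_power_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    return (it_power_kw + overhead_power_kw) / it_power_kw

# Illustrative assumption: 1,000 kW of IT load in both scenarios.
legacy = pue(1000, 600)   # heavy mechanical plant: chillers, air handlers, AC units
liquid = pue(1000, 200)   # immersion or direct-to-chip with reduced mechanical plant

print(f"Legacy air-cooled PUE:  {legacy:.2f}")   # 1.60
print(f"Liquid-cooled PUE:      {liquid:.2f}")   # 1.20
print(f"Relative energy saving: {(legacy - liquid) / legacy:.0%}")  # 25%
```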

How can data centers future-proof their technology? When unexpected outages occur, for instance, the expenses incurred can be extreme. What can help contribute to their long-term sustainability?

There have been a lot of advancements in the resilience of data centers over the last 15 years to preserve data. Cloud computing, for example, duplicates information in the event of an outage. The biggest factor in this resiliency is holding data in multiple regions and ensuring the data centers housing that data are well operated and maintained. CBRE has a CERM (Critical Environment Risk Management) process aligned to best practices for running data centers, which reduces the risk of potential outages.

One of the considerations listed was investing in reliable maintenance and support of these data centers. How can data centers navigate staffing shortages?

We’re trying to generate more knowledge about the sector, because a lot of people don’t know about the impact of data centers. They just know that they can access the internet or social media whenever they want, not realizing that all this information is being housed somewhere. It’s all about educating people, and at CBRE we have a partnership with CNET that provides training to technicians coming from other sectors. CBRE just won two awards for diversity and talent development, which recognize our contribution to the training and growth of data center experts – and I think the market as a whole needs to invest heavily in talent to solve staffing shortages.

Where are data centers heading, and what trends can we expect to see in the future?

Over the next three to five years, I expect to see more small data centers, many of them located in major metros. And when I say small, I’m talking about boxes: 10-by-10-foot containers that will provide low-latency connections to population centers. This is one of the elements stemming from AI: learning from population hubs, which feeds into its constant adaptation and evolution, and the availability of edge computing.