Data Center Providers: Land And Power Shortages Hampering The AI Era
‘I’ve been out preaching this for well over a year now, that data center capacity is going to be an issue in this industry. ... Trying to go out and find any capacity for this, of any meaningful size right now, it’s impossible. It doesn’t exist,’ Applied Digital CEO Wes Cummins tells CRN.
Data center providers told CRN that electricity and available real estate were in high demand even before generative AI. Now the size and power requirements of GPU stacks are pushing demand to a “fever pitch,” said Wes Cummins, CEO of Dallas-based Applied Digital, an Nvidia Elite partner.
“I’ve been out preaching this for well over a year now, that data center capacity is going to be an issue in this industry,” said Cummins, whose organization has deployed 6,100 of Nvidia’s H100 GPUs across three of its data centers and has 1.3 gigawatts of capacity under development. “It’s a huge issue. If you are trying to deploy a lot of GPUs right now—and we do that in our cloud business—trying to go out and find any capacity for this, of any meaningful size right now, it’s impossible. It doesn’t exist.”
Lisa Miller, senior vice president of Platform Alliances and Global Channel at Redwood City, Calif.-based Equinix, told CRN that at the moment supply and demand are “a little out of whack.”
“So we are looking globally, everywhere, where we can start adding capacity. And that is not just Equinix,” she said. “I think many of our competitors are doing that same thing where we are all looking at how we can grow and expand.”
Alvin Nguyen, senior analyst at Forrester, said that while demand for data center space is at a high point, communities are beginning to push back against the resource-heavy facilities.
“Some areas have restricted data center development due to not having enough power to supply the data centers, residents and other businesses,” he told CRN via email. “This means the power demands for generative AI may exceed the ability of the power infrastructure to support it and the broader market as well.”
Enterprises will have only a couple of choices: go to the cloud for whatever service availability they can get, or wait for space and power to become available either on-premises or with a colocation partner, he told CRN.
“The AI infrastructure supply constraints will lead to enterprises who can get AI chips and servers to bring them either on-premises or to a colocation partner,” he said. “I am recommending enterprises go to a colocation partner, since new GenAI servers require more power and cooling than older data centers can handle, so offloading that to a partner versus building out a new data center yourself makes more sense,” Nguyen told CRN.
It’s not just power that is constraining growth, Miller said. Some of the physical compute and storage hardware being deployed tips the scales at a ton and a half.
“I don’t know if they got into the weight and size of some of the Nvidia equipment and Dell equipment, but you’re talking big. I’ll just say from GTC [Nvidia’s conference], one of the new offers with Nvidia is about 3,000 pounds,” Miller told CRN. “So if you want to pull 3,000 pounds into a customer premises, you can’t do many of them.”
Cummins added that the new AI data center arrays also have to be physically near one another to preserve data transfer speeds and cut latency.
“These need to be really close together, they need to be stacked on top of each other, they need to be networked together,” he said. “You’re moving from a world where latency outside the data center was everything that mattered to a world where latency inside the data center is everything that matters, because you’re interconnecting all of these GPUs.”
Finding a use for all the heat created as a byproduct of running massive compute in close proximity has been a top priority for Equinix. One of its data centers in Paris is pumping heat to the swimming pools that will be used at the 2024 Olympic Games, Miller said.
“I’m really proud of our Paris facility that we turned up in 2023. To make certain that we’re sustainable, we are actually taking heat that comes out of our data center and we are transporting that heat via water pipes to the aquatic facility that will be used for the aquatics for the Olympics,” Miller said. “And those are the type of things that we’re doing where we’re providing free power to heat the pools for the next 15 years. So you also are trying to find ways to take that power and the heat that comes from the equipment and now become somebody that’s really great in the community.”
There are no easy answers for the big bottlenecks around power and space, as Dell Technologies founder, Chairman and CEO Michael Dell acknowledged on stage last month at Dell Technologies World 2024.
“I get asked all the time ‘So how big is this AI thing really?’ like there’s some super-secret answer only I know. Sorry. I don’t know. I don’t think anybody really knows the answer to this question. What is the demand for intelligence? Is there a limit to the demand for intelligence? What is the appropriate investment in infrastructure to meet this potentially limitless demand? I can tell you this. The early movers are making massive bets.”
The president of Dell’s Infrastructure Solutions Group, Arthur Lewis, told CRN that the No. 1 concern Dell hears from customers is that they lack the skills on staff to meet their AI ambitions. He said that is a great place for Dell partners to step in with help.
“These skills are not everywhere. The technology is moving very quickly. They don’t want to stand up a new ITOps team. They need help here. That’s why we are very focused on delivering solutions,” Lewis said. “Partners can manage this solution. There are all sorts of white-glove deployment services, management services. I think there’s a plethora of opportunity for channel partners.”
For the near term, Lewis said Dell, together with partners, has a chance to deliver the complete AI outcome to enterprise customers.
“What we are after is we want to be able to talk to an enterprise customer and say, ‘What do you want? Leave it with us.’ The customer doesn’t have to worry about silicon diversity, network diversity or storage architecture,” he said. “You don’t need to build a whole other AIOps team for AI. ‘Tell us the outcome you want. Leave it with us. From the desktop to the data center to the cloud, what is the outcome that you’re looking for?’”