We'll keep moving here. Welcome, everyone, to our fireside chat on Power and AI in the emerging data center energy crunch. I'm Will Thompson, a member of the thematic investing team here at Barclays, and I've written extensively on the Power and AI theme. So I'm really thrilled to be joined by Marc Ganzi, the CEO of DigitalBridge. Marc has decades of experience investing in digital infrastructure. DigitalBridge, maybe I'd describe it as a relatively small asset manager but a major investor in digital infrastructure, one that's pursuing somewhat differentiated, grid-independent power solutions. You know, Marc, I think you've been pretty adamant about this topic. So maybe just a quick introduction to DigitalBridge for the audience.
Yeah, sure. So, DigitalBridge is a $106 billion AUM global asset manager. We focus on the development, ownership, and management of digital infrastructure assets. Today, that portfolio is roughly about 40% data centers, roughly about 30% towers, and the rest would be fiber and other associated forms of infrastructure. As I like to say, we're sort of the accidental tourist in the power space. We backed into being in the power space through our footprint, which is over 400 data centers. We have a power bank of about 22 gigawatts globally across nine different data center businesses. And every form of power this conference would know is tethered to one of our data centers. So we're sort of in the vortex of all of this as it's evolving.
So, Marc, I think you've quantified the AI opportunity at $7 trillion, while powering AI would require maybe $1.3-$1.5 trillion of new power infrastructure. You know, the biggest question I'm getting today is when, or if, we will hit the power wall. Does this force hyperscalers to actually cut back on CapEx? And I'd love you to dig into DigitalBridge's secured power bank, which I think you just touched on, and its venture with ArcLight. Maybe just give us a sense of your vision for powering AI?
Yeah.
And general cloud needs.
Sure. And look, the energy crunch that's coming through data centers and through power, it's really just data, right? I'm just a big believer in data. Everything I've been doing for 32 years as a CEO is data-driven. We don't make decisions without really good data. What's happening is there's a supply-demand imbalance between power on our grid and how much leasing activity is happening in data center land. There's a really interesting slide that a certain research firm put out, not named Barclays, but it was very telling in the sense that the sector has been kinda chugging along. You go back to 2022, the sector signed three gigawatts of leases. You go to 2023, that jumped to about four gigawatts. You jump to 2024, it was 5.7 gigawatts.
This is the first year the sector will sign 6 gigawatts of leases. The problem is the grid is only turning up, on average, about 5 gigawatts of incremental power per year on the U.S.'s backbone infrastructure. And leasing is going from 6 gigawatts, which will be this year's number, to somewhere around 7 next year, 8.6 the year after that, then it jumps to almost 10. And then by 2032, it goes to 20, okay, 20 gigawatts of new leasing. Meanwhile, the grid is chugging along at, you know, 5-6 gigawatts per year of new power. Last year was the first year we had a supply-demand imbalance, which means the U.S. power sector fell about 900 megawatts short of what was leased. So we started this year with a 900-megawatt deficit.
This year, that deficit grows by 1.1 gigawatts, so if you're accumulating that deficit and just doing the math, it's a 2-gigawatt deficit already, and we're in 2025. And as we look forward and you get out to that 2032 year, where the grid is turning up about 8 gigawatts and the sector's leasing 20, that's a 13-gigawatt deficit in just one year. Now, the question is, what AI number do you believe? There are, you know, three forecasts around the total amount of AI infrastructure needed to make this all work. There's the conservative guidance, which is 137 gigawatts. There's the mid-range of that guidance, which is the number we're anchored on, which is about 196 gigawatts.
And then there's the most optimistic. We always pick on Masa Son 'cause he's such an optimist, but he believes that AI will consume about 300 gigawatts. Now, if we get anywhere near that Sam Altman kind of number, the 300 number, we're in a world of trouble. But I don't think that number actually happens. I think, to your point, CapEx does curtail. And why does it curtail? Because whether it was PCs, the internet, or cloud computing, I've been a CEO across all these thematics, and all these things have a seven-year cycle, right? You get that steep slope in the first three to four years, and we're sort of in year three of AI infrastructure. It's gonna go hard for another two years, years four to five. It's gonna taper, and then it's gonna fall off a little bit.
By the way, every technology CapEx cycle for the last 30 years has followed that bell curve. So we think that bell curve lands us right at that 196-gigawatt number. Today, data center capacity in the U.S. is roughly about 60 gigawatts of compute. And again, with the sector adding 6 gigawatts of leases this year, you get to 66. And then if you follow that trajectory through 2032, you land just shy of about 190 gigawatts of power, which gets you through large language models, generative AI, and inferencing. Inferencing is kinda the next big leg up in compute. So again, I'm a realist. I'm not an alarmist. I've been around four or five different technology cycles. I really have a good feel for what our customers are doing in terms of CapEx.
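A quick way to see the arithmetic in Marc's deficit argument is to accumulate the gap between the leasing ramp and the grid additions he cites. The sketch below uses his quoted figures where available; the 2029-2031 values and the grid-addition ramp are interpolated guesses to fill years the conversation skips, so treat the output as illustrative, not a forecast.

```python
# Rough sketch of the supply-demand gap described above. Figures are
# approximations from the remarks; 2029-2031 leasing and the grid-addition
# ramp are interpolated guesses, not quoted numbers.
leasing_gw   = {2025: 6.0, 2026: 7.0, 2027: 8.6, 2028: 10.0,
                2029: 12.0, 2030: 14.5, 2031: 17.0, 2032: 20.0}
grid_adds_gw = {2025: 4.9, 2026: 5.4, 2027: 5.9, 2028: 6.4,
                2029: 6.9, 2030: 7.3, 2031: 7.7, 2032: 8.0}

deficit = 0.9  # GW carried in from 2024, per the remarks
for year in leasing_gw:
    gap = leasing_gw[year] - grid_adds_gw[year]
    deficit += gap
    print(f"{year}: annual gap {gap:+.1f} GW, cumulative deficit {deficit:.1f} GW")
```

With these assumptions, the 2025 gap is 1.1 GW (cumulative 2 GW, matching his numbers) and the 2032 annual gap lands around the 12-13 GW he describes.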
We have a very good feeling for how fast these guys are getting the return on their investment in AI, which is faster than cloud. Just to frame cloud for a second, let's go back in time, back to 2011, when the public cloud was formed. From then to today, the cloud is essentially 14 years old. We're 80% built on public cloud. We're not finished yet.
Yep.
And remember, of that 60 gigawatts I just told you about, AI is only 35%. The other 65% is public cloud. And we're just now starting to build private cloud, which is a whole new vertical. So everyone talks about AI, but one of the adjuncts of AI is data sovereignty and building private cloud and the sovereign cloud. That also means building private large language models, your own language model for AI, outside of the hyperscalers. So I'd like to frame all this with just math, not to confuse people, but really to embrace it and to understand the gap. We saw this gap about three years ago, and we started going down the path of finding other forms of energy for our data centers.
And it really started when we took Switch private about two years ago. Switch was, you know, a very interesting company that really focuses on private cloud, but we weren't so much focused on Rob Roy's passion for private cloud. I was more focused on his passion for these gigacampuses and his ability to procure power outside of the grid. I thought that was really interesting. And so it was a public stock that we took private for $11 billion. The market really didn't understand it as a story. We've gone on to quadruple the size of the company. We've done a ton of bookings, and we've built new data center capacity. But as we've built that capacity, we've been building power. And we've been building grid-independent power across a series of microgrids. And those microgrids are sourced with, you know, we work with utilities.
We have interconnection agreements in all of our microgrids. In some instances, we're building private lines from renewable power directly into the microgrid. We lease that infrastructure through the regulated utility company, but at the end of the day, we've found a way to aggregate four or five or six different sources of power into a microgrid. We've been able to create our own set of backup batteries where we store that power. Then we've optimized a 24-hour clock on how we use that power, where we're constantly trading in and out of power with the regulated utility in that geography. A great example of that is our really positive relationship with Nevada Power and Light. In fact, our two mega campuses, one in Vegas and one in Reno, together consume almost 3.4 gigawatts of power.
We've got a 1-gigawatt microgrid in Vegas. We now have a 1.8-gigawatt microgrid in Reno, and we're expanding both of those microgrids now, supplementing them with LNG. Both of those microgrids have five different sources of power: hydro, wind, solar, LNG, and grid connectivity. When people ask us, "Why are you doing this? What's the purpose of this?" we say, "Look, we just can't be beholden to one source of power." It's just not feasible for what we're doing in these campuses, particularly when we're powering NVIDIA and CoreWeave and some of these really high-power-density compute modules that we've built out. Part of this has been necessity and survival, and also our ability to embrace renewable power. The other microgrids we've built: we have two small microgrids in São Paulo, of all places.
São Paulo, Brazil, is really interesting for us. There we have a 500-megawatt microgrid and a 300-megawatt microgrid, both in the city of Tamboré. And there we lease the transmission infrastructure, but we have two sources of hydro. So we're 100% hydro across 14 different data centers. Today, we have excess power of about 300 megawatts. We sell all that power back into the São Paulo grid. And we also sell power to Digital Realty and to Equinix, our two competitors. Why? Well, we make money on it. And what we found is that building our own grid-independent infrastructure has actually been a great return because we own the data center. We have the relationship with the customer. We've figured out how to negotiate PPAs directly with them. And the excess power, we're trading in all day long.
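To make the "24-hour clock" idea concrete, here is a minimal, hypothetical sketch of the kind of hourly dispatch policy being described: bank surplus generation in batteries when prices are low, and sell surplus plus battery discharge back to the utility when prices are high. Every capacity and threshold below is made up for illustration; this is not DigitalBridge's actual system.

```python
# Hypothetical hourly dispatch for a microgrid trading with a regulated
# utility. All capacities and the price threshold are illustrative.
BATTERY_CAP_MWH = 500.0   # storage capacity (made-up number)
BATTERY_RATE_MW = 100.0   # max charge/discharge in one hour (made-up number)
SELL_THRESHOLD = 60.0     # $/MWh above which selling beats storing

def dispatch_hour(gen_mw: float, load_mw: float, price: float, soc_mwh: float):
    """Return (MW traded with the utility this hour, new battery state).
    Positive trade = sold to the grid; negative = bought from the grid."""
    surplus = gen_mw - load_mw
    if surplus >= 0:
        if price >= SELL_THRESHOLD:
            # High price: sell the surplus plus whatever the battery can give.
            discharge = min(BATTERY_RATE_MW, soc_mwh)
            return surplus + discharge, soc_mwh - discharge
        # Low price: bank as much of the surplus as the battery can take.
        charge = min(surplus, BATTERY_RATE_MW, BATTERY_CAP_MWH - soc_mwh)
        return surplus - charge, soc_mwh + charge
    # Deficit hour: cover from the battery first, buy the rest from the grid.
    discharge = min(-surplus, BATTERY_RATE_MW, soc_mwh)
    return surplus + discharge, soc_mwh - discharge

# Example hour: 800 MW of generation against 700 MW of load at a $75 price
# sells the 100 MW surplus plus 100 MW of battery discharge.
traded, soc = dispatch_hour(800.0, 700.0, 75.0, soc_mwh=250.0)  # -> 200.0, 150.0
```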
We've kinda turned a negative around. I'm not gonna tell you it's a positive yet because there's a lot of hard work to do. But we do have a really unique relationship with ArcLight. We have a fantastic relationship with them. We've teamed up strategically. We have a pipeline of about nine gigawatts of new power projects we're building with them. And it doesn't have to be a microgrid. We may just build a solar farm and have an offtake agreement with Google. We may build an integrated solar farm and data center, for example in a place like Zaragoza, Spain. So we're coming up with unique ideas on how we build power generation adjacent to data centers, or we're building our own microgrids, or we're sourcing the power and bringing it into our data centers.
But in all instances, we're tethering that to a forward 10- to 15-year commitment with our customers who are looking for power and need power. So we're taking kind of a negative and turning it into, we think, a positive, particularly for our portfolio companies.
And so there seems to be bipartisan support that the U.S. needs to compete with China on AI, yet there's no consensus on how we're gonna power it. And to your point, it seems like a push for all-of-the-above energy sources. Can you maybe talk about the priorities now? 'Cause it seems like speed to power is the priority. We've seen new policy measures like Texas's Senate Bill 6, requiring some level of onsite power. But it seems like emissions, capital costs, and electricity prices have been secondary considerations relative to speed to market. How is the data center industry prioritizing the requirements for sourcing electricity?
Again, I'll try to keep it, you know, simple. I don't believe we are in an AI arms race with China. I maybe have a very different perspective on it, which is kind of a 12-year view of watching China build their state-controlled LLM. You know, a lot was said about DeepSeek, not to take a tangent, but everyone asks my opinion on DeepSeek. I say, "Look, it's really simple. DeepSeek doesn't exist without Meta. If Mark Zuckerberg's open-source LLM doesn't exist, DeepSeek does not exist." Without it, DeepSeek would just be another adjunct LLM that runs off of China's state-controlled LLM.
Here in the U.S., we're building seven large language models, seven, privately funded through a series of different hyperscalers that are highly sophisticated, incredible data gatherers, and, generally speaking, pretty secure in terms of the structure of that data. Contrast that with China. There's one LLM being built, the state's LLM, highly controlled, highly manipulated, and it doesn't have the funding and capability that the seven hyperscalers have. But what China does have is a state regime that is very focused on power and making sure China has an edge on power. No regulations in terms of the size of their LLM, where it goes, and how it supports, you know, Tencent, Alibaba, and ByteDance. Those are all companies supported by that Chinese state infrastructure.
But DeepSeek only came to prominence by using a U.S. LLM, right? Not a Chinese LLM. They didn't use the state-owned apparatus. And so DeepSeek's first version came out. Anyone know how accurate it was? About 63% accurate, their first version. Their second version, which is now being run on the state's LLM, has dropped to 53% accuracy. So that should tell you everything about the difference between China's LLM capabilities and the United States' capabilities. I'm betting on the U.S. Now, our constraining factor is we don't have fields and fields of solar farms in the middle of nowhere, which China has done a very good job of building. They've decided to weaponize their apparatus to build as much renewable power as possible to power AI. But ultimately, and this sounds weird, an LLM needs to grow.
Large language models need to grow and learn and keep moving. If along that road you're manipulating the data, you basically destroy the real truth behind AI, which is that it has to get to that phase of inferencing where it begins to think for itself. But if you have a model that you're constantly telling what it needs to think, you've sort of corrupted the whole concept of inferencing. Now, will China get there? I don't know. As I say, not my monkey, not my circus. But we keep our eye on China because, you know, some of those companies, like Tencent and Alibaba and ByteDance, are customers in my data centers in Asia, and they're customers in my data centers in Europe. Our portfolio is a global portfolio.
But coming back to the U.S., I think at the end of the day, as most everyone at this conference knows, what I've always looked at as the limiting factor to where we go is just our PUC structure. It's just very antiquated. You gotta go state by state. A lot of people in the data center space don't understand that. We've been building infrastructure for 30 years, so I know that building towers and fiber networks and data centers is a highly localized business. You take that localization, then you put the PUC on top of it, and then you've got FERC sitting on top of that. You have these layers of regulation that I don't even think the White House fully understands or appreciates.
At the end of the day, the real gatekeepers of power are the public utility commissions in each state. That is going to be the limiting factor. We can remove all of the red tape in Washington, but until you remove the bureaucracy at the state level, each state is looking at their baseload and saying, "Okay, I'm concerned because, as you said correctly, people are waking up to the fact this is gonna hurt consumers.
So what do we need to do?" And we say, "Look, at the end of the day, if I can be a net contributor to the grid, or if the microgrids we build will reaggregate power and I can sell power back into baseload, I turn from being an enemy into a friend at the PUC level." What I do worry about is that this weaponization of the cost to consumers is gonna get politicized, and it's gonna slow us down. And I think that'll be a real problem for the hyperscalers. Look, in the last administration, our former Secretary of Energy, I met with her twice. I think she's really smart. I really like her. But her answer was, "Well, we'll just wait around and let the hyperscalers pay for it." That's never the right answer.
There has to be a solution that brings D.C. together with the PUCs, private investors like us, and the hyperscalers to build some of these grid-independent solutions, which is what we decided to just go do on our own. We were kind of left to our own devices.
Yes. We often throw the different workloads into the same bucket in terms of data centers. Can you talk about the restrictions or requirements when we think about cloud, which can be hundreds of different cloud products? And you talked about public and private cloud. And now we have AI inference and AI workloads. And there's different latency requirements. There's potentially different, you know, power fluctuation requirements. And then obviously.
Yes.
Five nines often gets brought up in terms of reliability. Can you just talk to us about the different site and location considerations when we think about those different workloads?
What's interesting is there are very distinct workloads now. They tend to sit in different types of data centers, and they're using different types of GPUs. Ultimately, you know, for AI inferencing and large language models, you wanna use the highest power GPU you can get your hands on. When we talk about, you know, NVIDIA's next-generation chips and you talk about the Blackwell chip, they're really expensive.
Yep.
But the hyperscalers wanna get their hands on them because ultimately that processing capability and the ability for that large language model to learn is much faster. Now, the adjunct to that, as most of you know as power people, is more power density.
Yep.
So you're trying to squeeze more power into a smaller GPU, which is a smaller rack. If any of you ever get the chance to tour Switch in Las Vegas, it's actually where CoreWeave recorded their entire IPO roadshow. It's where they did the IPO. And you can go into a couple of those data halls and hear these Blackwell chips, and you've never heard a sound like it. The hissing, the pitch of it, is deafening. But what it is, is power density.
Yep.
Those chips are no longer cooled with forced air. In fact, a Blackwell chip melts if you cool it with forced air. The only way you can run NVIDIA's next-generation chip is with liquid cooling, which is what we're doing at Switch. And I think the other reason that Jensen and Mike at CoreWeave have chosen Switch is, to your point, they're the only Tier 5 operator out there. So it's not five nines. It's 100% uptime.
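For context on that comparison: "five nines" means 99.999% availability, which still permits a few minutes of downtime a year, whereas Tier 5 as described here is a 100%-uptime claim. The arithmetic:

```python
# Downtime allowed per year at "five nines" (99.999%) availability.
minutes_per_year = 365.25 * 24 * 60
allowed = (1 - 0.99999) * minutes_per_year
print(f"Five nines still allows ~{allowed:.1f} minutes of downtime a year")  # ~5.3
```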
Yep.
A Switch data center has never failed, and people will pay for that. A customer will pay more rent to be there. And our liquid cooling system, our EVO system, which is our patented cooling system, is really revolutionary. We don't lose one drop of water, which I'm actually pretty excited about. I'm from Colorado, so we get a little excited when we talk about water 'cause we're losing water every day. Once we're done talking about AI's impact on the consumer, people are gonna move on to water next. That'll be the topic we're talking about a year from now.
But for right now, I've gotta just solve the problem that's in front of me, which is how to convince, you know, public utility commissions that AI is not the devil, even though right now it is what's driving up consumer prices. If you look at the data, the amount of power being consumed in data centers is impacting baseload, which is impacting consumers. So I think some of that's gonna have to come back to the state level, where unfortunately those PUCs are gonna have to rethink rates. There's gonna be so many rate cases in the next 12 months. You're gonna see rate cases in every state, and I guarantee you're gonna see very few rate cases where the rates go down. My suspicion is rates will go up, and they'll be segregated between AI rates and consumer rates.
We will begin to see a parsing of how governments charge the consumer versus how they charge data centers. Now, for us, as an owner of data centers, that's a pass-through. We don't pay for the power. If we have our own microgrid and we're producing our own power and bringing it into the data center, then we do have an offtake agreement with our customer. So, looking around corners, I do worry a little bit about that.
We get the sense that some of the utilities are driving these sort of data center-specific tariffs, right? We see it.
Yes.
In Ohio. I mean, is that what you're talking about in terms of both the public utility commissions and the utilities?
I think.
Recognizing that there is an inflationary effect on other retail and industrial customers.
Look, my friend John runs NextEra. I really like him. He's a great CEO. If I were running a big public utility company, I would probably be doing that.
Yep.
Because I'd wanna get out in front of that before I'm stuck in a rate case. "Rate case", those are the two dirty words for them. So I think the industry can stomach it, to a certain degree. I think at some point there will be pushback, and then the customers will seek their own solutions. But the reality is, if you're a hyperscaler and you're trying to build a one-gigawatt data center, you don't have a lot of solutions.
Yep.
You know? Where are you gonna get the turbine for your LNG solution or your CCGT solution? Turbines right now are backed up two years. We have a relationship with all the producers of turbines, and our forward log of turbines is set till the end of 2026, and then even we have a problem in terms of sourcing for our microgrids. But look, you know, this thing is complicated. And like I said, the one thing we have learned in the last three years is that our friends are the public utility companies. We work with them. We're interconnected to them. And most of our power is bought from them across our global portfolio. And I think what we're trying to do is create solution sets that augment that and ultimately are a net contributor.
I think if we stay in that swim lane and we keep going, we should be successful.
Do you see a situation like we've seen in Texas with Senate Bill 6, sort of forcing the hand, requiring microgrids that are interconnected, or onsite backup power generation, when the grid does become constrained?
I think, look, I think Abbott's smart. I think he knows his base. His base is the gas industry. There's an abundance of gas in Texas. We've proven that microgrids can work in Texas. The first Stargate site, Crusoe's, is half ERCOT, half microgrid. We're working on a solution in Lancaster, Texas, that's very similar. It'll probably be about 75% microgrid and about 25% ERCOT. And Texas is pretty unique 'cause there's ERCOT. For us, it's kind of a fish out of water, and so we've had to spend a lot of time with ERCOT trying to understand what the problem is. And by the way, ERCOT has its own problems.
Yep.
Which is, you know, they're on a 10-year project to completely rebuild the Texas grid. So I think the governor's being smart, because he knows you gotta have supplemental power; the baseload on ERCOT right now is already fragile, as we've seen the last, you know, three years in the wintertime. So Texas is pretty unique. And most of our grid-independent infrastructure is being built in Texas. Three of our first five microgrids are in Texas. And by the way, he's made it easy. I mean, he hasn't made it hard.
Yep.
He's been very clear about what Texas wants. ERCOT's been also very clear too. If you want interconnection with us, here's what we expect from you. I kinda like building data centers in Texas 'cause the rules of the road are quite clear.
Yep.
You may not like the cost. You may not like the final outcome, but at least they make it very clear where you can go and where you can't go.
In the last five minutes here: you made some comments earlier about the digital infrastructure cycle. We often get a lot of questions on the ROI of AI and the sustainability of hyperscalers' CapEx. You suggested we're sort of in year three of a seven-year cycle, in your viewpoint. I get the question, like, how is this different than the telecom boom-bust of, like, the early 2000s?
What's interesting is if you go back to the advent of the PC, the mobile phone, the internet, you know, mobile data, and cloud computing, those are sort of five tectonic shifts in technology. There's a slope around adoption and how long it took. Getting to widespread adoption of the PC took 12 years. It took essentially less than a year to get almost 92% of Americans to touch AI.
Yep.
So the adoption slope in AI was the fastest we've seen in any technology introduction. What's also interesting is that at the same time, the cost to produce AI, to produce a token, which is a measure of AI, is radically falling. It's fallen 40x since the inception of AI. So cost per token is down while adoption goes straight up. With the PC, the curves were much flatter: adoption climbed slowly and cost came down slowly. And in between there were these other introductions of different technologies. So what's interesting to me is the positive revenue impact, the positive ROI, in AI took three years. Public cloud took five to six years.
So if you go back to the earnings of Microsoft, it really wasn't until 2016 that Azure was producing positive EBITDA. And then it just ramped. And what's happening right now, with chat, for example, you look at the earnings from Amazon, you look at the earnings from Microsoft, these guys are all now producing positive net income from AI. We're three years in. It's early.
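For scale on the 40x figure mentioned above: assuming that decline played out over roughly the three years since the generative AI buildout began (the timeframe is an assumption; the remarks only say "since the inception of AI"), it averages out to tokens getting about 3.4x cheaper each year.

```python
# Annualizing the quoted 40x token-cost decline over an assumed three years.
total_decline = 40.0
years = 3.0
per_year = total_decline ** (1.0 / years)
print(f"~{per_year:.1f}x cheaper per year on average")  # ~3.4x
```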
Yep.
The use cases: in public cloud, there were two. You had enterprise, which is what we use for internet and document storage, and you have consumer, which is all the applications that sit on our phones. All of those run on the public cloud. When we go to AI, we actually have five different use cases, and three of them didn't exist in public cloud. You've got enterprise and consumer, which are the same as in public cloud, and you've got industrial applications. Then there are two other new verticals: data sovereignty, which is huge, and the one that nobody talks about, machine-to-machine learning.
So today there are about 30 billion devices connected to the Internet of Things, to IoT. In the next seven years, that goes to 60 billion devices. Remember, machine-to-machine connectivity means there's no human in between. You've got one model talking to another model, one AI agent over here and one AI agent over there, going back and forth. It could be a public safety network talking to an autonomous vehicle. It could be a wireless electricity meter talking to your credit card company.
Yep.
So imagine a world where you've got 60 billion wireless devices accelerating the conversation between two machines. Eighty percent of AI consumption will not involve a human being. That is staggering.
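As a sanity check on those device figures: 30 billion to 60 billion connected devices in seven years is a doubling, which implies roughly 10% compound annual growth.

```python
# CAGR implied by 30B -> 60B IoT devices over seven years (figures as quoted).
cagr = (60e9 / 30e9) ** (1.0 / 7.0) - 1.0
print(f"Implied growth: {cagr:.1%} per year")  # ~10.4%
```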
Is that, 'cause that's where part of my debate seems to be, that in my view we're now entering the generative AI era.
Yep.
That's test-time scaling on steroids.
Correct.
Then you have this potential for Tesla humanoids, which is physical AI, which again would be test-time scaling on steroids. That's the surge we're seeing in AI inference demand, maybe. Is that a fair way to think about it?
It is a fair way to think about it. But, you know, the Tesla robot for me is like an anomaly, an outlier to a certain degree, because there are robots, less independent ones sitting on a factory floor, that are a lot more productive than an Elon robot. Robotics is a part of that machine-to-machine learning, right?
Yep.
But the amount of data that will be consumed to deal with machine-to-machine learning, there we haven't even started the baseball game yet.
Yep.
We're like top of the first. The pitchers are out on the mound warming up. And there's so many revenue models that pop out of that.
Yep.
And there's so many implications for fiber and cell towers and mobile infrastructure. The ecosystem is just getting warmed up. Again, to your point, we're in the third inning of a baseball game, so there's another six innings of a lot of infrastructure spend coming.
All right. Well, obviously, I could talk to you for another half hour just to.
Thanks.
Just to cover the topic, but we're out of time. I appreciate Marc coming. And thank you, everybody, for joining us.
Thank you for having me. Appreciate it.