All right, good afternoon, everyone, and welcome to TD Cowen's 11th Annual Communications Infrastructure Summit. For those of you who don't know me, my name is Michael Elias, and I am the communications infrastructure analyst here. For this session, we're joined by Equinix, and from Equinix, we have Raouf Abdel, who's their EVP of Global Operations. This is structured as a fireside chat. We have about 40 minutes for it. I have a ton of questions prepared. I promise you, I will open it up for questions. With that, Raouf, thank you so much for being here today. I really appreciate it.
Thank you, Michael. Thanks for having me. Great to be here.
My understanding is that you have something to say before we get started.
I do. Being a public company, we have to start with, you know, I'll be making some forward-looking statements, and please be sure to check our SEC filings for any factors that may impact those forward-looking statements.
All right, perfect.
Get that out of the way.
Done.
These two over here are snickering because they were part of a public company and now they're private, and we don't have to say that.
All right, let's jump into it. Raouf, you joined Equinix in 2012, but from my perspective, you've had a major impact on the company based on conversations that I've had with both current and prior Equinix executives. For those who are less familiar with you, could you just give us an overview of your time at Equinix and a bit about your path to your current seat?
Thank you for the kind words. This month, I will reach my 13-year anniversary with Equinix. For nostalgic reasons, I went back and looked a little bit at where Equinix was in 2012. We were just under $2 billion in revenue. Our market cap was $10 billion. In the span of those 13 years, we've expanded and scaled, and revenue is about 5x what we were back then. Our market cap depends on which day you pick for the stock price; a few months ago, it was about 9x. It's more like 7.5x right now, but we'll get it back up to 9x, I think. It's been an incredible ride. I've got some former colleagues in the crowd, and they're familiar with the journey we've been on. We've scaled a lot. We've expanded our geographic presence. We've expanded our employee base.
We've expanded our data center footprint. It's been a great ride. I've had the privilege to lead our Global Operations function for about the last eight years of that. When I first joined the company, I ran the Americas operations. Some of you in the crowd may remember Charles Meyers. He was, oh yes, the President of the Americas at the time, and he's the one that hired me. He and I go back to our Level 3 days together; we'd worked together prior to Equinix. I ran the Americas for about five years. When we say operations at Equinix, it's really about the build and operate portions of the data center. It's design, it's construction. I currently have procurement and energy, which I'm sure we will get to at some point in this conversation.
I also have the team that runs the data centers day to day, maintaining the facilities and the sites, as well as supporting our customers every day.
All right, you've been in your current seat for a while, but I am curious, what are the strategic priorities for you right now and for the business over the near to medium term? As part of that, I'm just curious, where are you spending most of your time these days?
Yeah, I would say there's been a bit of a shift for both the company as well as me personally. I used to spend more of my time on the operate side of things. I think anybody that's in the data center space knows that you live on the stability of your service and your product. That was not something you ever wanted to have disruption in. Most of our customers have mission-critical deployments inside our data centers, and so maintaining reliability was always paramount and remains paramount for us. My recent focus has shifted a lot more to the build side of things, partly because we're investing at a higher level and partly because that space has become a little bit more challenging, both from an energy standpoint, securing energy, as well as supply chain.
I spend the majority of my time on talent, energy, supply chain, and the strategic mapping and planning of what capacity we're going to build where. I think you've heard about the initiative that we call Build Bolder in the company. I lead that for the company in terms of the planning, the strategy, the investment profile, and probably most importantly, where are we going to go, where are we going to build, at what levels, at what pace, and what throughput. Underpinning all that is how we're actually going to execute that build.
This is going to be fun. I want to talk to you about something that we started out today talking about on stage, which is deal sizes. In particular, we've seen enterprise deal sizes increase. We've also seen, obviously, on the hyperscale side, deal sizes increase. We're going to put aside rack densities for a second. As we think about your Build Bolder initiative, one of the things that I think underpins it is that, hey, look, your typical enterprise deal has gotten larger. As part of that, the construction motion and the standard capacity that you build need to grow in size in order to accommodate that larger deal. I'm curious, in your time in the seat, how have enterprise deal sizes evolved?
As part of that, what were the changes that you made to your standard design in terms of the number of megawatts per building or per phase? How has that evolved?
Yeah, absolutely. In my time, all of those dimensions have increased over the years. When I first joined Equinix, we were building 10 MW buildings. That was, at the time, a really big building.
It was big, yeah.
We wouldn't dream of building a 10 MW data center today. We're in the 30 MW to 60 MW sort of range for any given building. Then we're clustering them together in campuses to build multi-hundred-megawatt campuses. Historically, in the retail space, we were building just-in-time inventory and being very frugal with the capital that we were investing. We were also expanding in many, many locations simultaneously, so you couldn't concentrate that investment in fewer locations. We were managing that capacity in, call it, 4 MW or 5 MW chunks. What we've been finding over the course of the last few years is that by the time we finished a phase, we were selling it out and starting the next phase. We've changed that model a bit so that it's more capital efficient. It's easier on our supply chain. It's easier on the people that are managing the projects.
We're moving away from the notion of just-in-time because it's really hard to manage that way. In the days of building a 4 MW or 5 MW chunk, if you sold 2 MW to a customer right out of the gate, you were almost out of capacity the day you finished. Certainly in our really high-demand markets, Ashburn, London, Frankfurt, Singapore, we're building 10 MW, 20 MW, 30 MW phases, and the buildings are bigger too. When we say bigger, this is always a point of confusion. We don't mean the building is bigger. In fact, the building's smaller. What's bigger is the power. What's interesting is you still need roughly the same amount of land because the infrastructure consumes more space now, but the core and shell is actually, if anything, shrinking because the density is going up.
You pack more power into that particular building.
If we're starting with the building size, it was 10 MW, and now we're talking 30 MW to 60 MW. We'll get to the phases in a second. To that last point that you made about the footprint shrinking, obviously, the average rack density that you're architecting these data centers to is increasing. As part of that, how are you thinking about the standard rack density that you design the data center to currently? I'm more curious, how does that compare to what we saw before the beginning of this AI demand boom? Let's call it end of 2022, first quarter of 2023. How has that average rack density that you build to evolved?
I would say five years ago, we were thinking about 6 kW a cabinet.
Okay.
I would say even before the AI change and dynamic, we had stepped that up to maybe 8 kW. Now we're thinking 12 kW plus on average. I think the average is a really important point because not every cabinet is going to be 12 kW or even the big numbers that we hear about, the 100 kW to 150 kW. Those are going to be a portion of the cabinets. What we design for is an average and the capability to increase that average with more infrastructure as needed to future-proof those data centers. What's also interesting is in today's data center design, the ability to support concentrated heat load is much better than it used to be. I'll give you a couple of simple examples of how to articulate that.
In the days where we used raised floor, it was much harder to deliver a concentrated amount of cooling capacity to a given cabinet. We don't use raised floors anymore in industry pretty much.
It's all slab.
I would venture to say we pioneered that. In addition to that, the way a data center is designed today is the room is one giant plenum, as we would call it. What that does is gives you the ability not to have to manage hotspots the same way. In the past, you had to be really careful where you put hotspots.
Yeah.
In the days of raised floor, you literally had to redirect air. If the whole thing is a plenum, it completely changes how you manage cooling and gives you the ability to have 20 kW or 30 kW here and 5 kW or 10 kW right next to it. The room can be balanced a lot easier with the type of cooling design and infrastructure that we're putting in data centers these days.
You know, when you say 12 kW per cabinet, the first thing that comes to mind is the super high densities that you hear NVIDIA talk about, right? Obviously, this is much lower. The next thing I think about is, you're much better off stranding shell than you are having no space in the data center and just running out of power. From my perspective, the reason you're building at 12 kW is because you still see plenty of opportunity in that, call it, 6 kW to 10 kW range. That's going to be the standard, but you have the ability to flex up, and new cooling solutions have made it easier for you to be able to support those loads. Is that the right way to think about it?
Absolutely the right way. I would say, to add to that, we have to have the ability to flex. As with any business, you don't want to overdeploy capital and you don't want to strand capital. We don't want to build for 30 kW today and have that be stranded. The other important dimension to what you're describing is, if we get to the point where we're having to support the big numbers throughout, 70 kW, 80 kW, 100 kW per cab, you're immediately moving into the liquid cooling space.
Yeah.
In which case you aren't relying on that air distribution that's in that plenum, as I described it. You're moving into liquid cooling. The beauty of today's design is it still leverages the same chilled water loop. It just takes it from the central plant into a unit that's going to deliver a liquid versus taking it to an air handler that's going to deliver air. A lot of the same common infrastructure, it's all about how you manage and flex that central infrastructure. That's really going to be the challenge that we're all going to face in managing increased density.
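[Editor's note: a minimal sketch, with purely illustrative numbers, of the averaging Raouf describes: a room designed to a roughly 12 kW average can still host pockets of very dense, liquid-cooled cabinets as long as the blended load stays inside the room's power budget. The function and figures below are hypothetical, not Equinix design data.]

```python
# Hypothetical illustration of designing to an *average* cabinet density.

def blended_density(cabinet_groups):
    """cabinet_groups: list of (cabinet_count, kw_per_cabinet) tuples."""
    total_kw = sum(count * kw for count, kw in cabinet_groups)
    total_cabs = sum(count for count, _ in cabinet_groups)
    return total_kw / total_cabs

# Mostly 6-10 kW cabinets plus a pocket of 100 kW liquid-cooled AI racks.
mix = [(450, 6), (300, 10), (40, 100)]
print(f"Blended average: {blended_density(mix):.1f} kW/cabinet")  # ~12.3 kW
# The design average holds even though a handful of cabinets run at 100 kW,
# which is why the ability to flex matters more than the headline density.
```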
I want to dig in on that with you, and we'll get to it in a little bit.
Dig in on a lot.
I like this. This is fun.
Are we including the crowd?
I did want to talk to you about the phases. You made the comment that you're changing the way that you're looking at phases. We talked about the build sizes increasing, and we talked about how the densities are evolving. As I think about the capacity delivery schedule for Equinix, I build my model based on the number of cabinets, and those are the phases that you have for your sites. How are you thinking about changing either the quantum of cabinets or the quantum of megawatts that you deliver in a single phase?
Yeah. I'll start by saying we always think about power. We think less about a number of cabinets, because a given phase is going to have a certain amount of power that it's able to support. How many cabs you can fit in that is a function of the density of those cabs. It's simple math, and we're going to increase the size of any phase that we do from a power standpoint. I would say even in medium-sized markets where demand may not be the same, we're going to be 10 MW or higher. On the upper end, in Ashburn, for example, which is one of our highest-demand locations, we're building at 50 MW in the retail space at a time now. It wouldn't make any sense to do 10 MW in Ashburn.
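[Editor's note: a quick sketch of the "simple math" above, with hypothetical utilization and density figures. Phase sizes are planned in megawatts, and the cabinet count falls out of whatever density you assume.]

```python
# Hypothetical phase-sizing arithmetic: power is the planning unit,
# cabinets are derived from it.

def cabinets_per_phase(phase_mw, avg_kw_per_cab, utilization=0.9):
    """Cabinets a phase supports; utilization holds back engineering margin."""
    usable_kw = phase_mw * 1000 * utilization
    return int(usable_kw // avg_kw_per_cab)

for mw in (10, 30, 50):  # medium-market vs. Ashburn-scale retail phases
    print(f"{mw} MW phase -> {cabinets_per_phase(mw, 12)} cabinets at a 12 kW average")
# Doubling the assumed density halves the cabinet count for the same phase,
# which is why power, not cabinets, is the stable unit of planning.
```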
You know, one of the things that I think about, and I'm going to put this to you because obviously CapEx has come into focus, right? Particularly after the analyst day. I want to take what you said and present it to you, and I'd love your thoughts. The way I think about it is that we've seen enterprise deal sizes increase, and it feels like they've been increasing at an accelerating rate in the last year. If you're building just-in-time inventory, you're going to have to flex up in terms of your builds in the future in order to be able to handle continued increases in deal sizes. Then there's a second, I call it a twin engine. There's another dynamic, which is that we're seeing the hyperscalers start to scale inference, right?
I think there's an expectation that there will be a hybrid model, just like we saw with cloud. You need to be positioned with capacity in order to service that demand when it comes. Those two things are driving your CapEx higher. Do you think that is a correct framework for thinking about the Build Bolder initiative and the increase in CapEx that you're forecasting?
Absolutely. I mean, for us, it's about all of the above. We're not counting and banking on any one trend to be the sole reason that drives our capacity needs. Obviously, AI is going to play a factor. We think inferencing is likely to be our sweet spot. We also see a lot of digital transformation happening across the enterprise broadly. That's what's been leading up to the demand inflection that we're currently seeing for the larger footprint. I don't think AI is that prevalent in terms of infrastructure deployment today widely across the enterprise. It's coming.
Yes.
Obviously, we want to be prepared for it, and we want to anticipate it, so we're going to lean into that. That's not the only use case or scenario that's driving demand today, for sure.
Okay. I do want to transition a little bit, talk about xScale, some of the hyperscale trends. My understanding is that you oversee both the retail as well as xScale construction. Is that, do I have that right?
Correct.
Okay. What I'm curious about is, are there any big differences that you'd call out in terms of the construction management process or the go-to-market in terms of development, aside from larger, higher density, right? Any differences that you'd call out in terms of the go-to-market for building that capacity?
I think there are two big differences. One is just the sheer size and scale of the project, typically, and the second revolves around whether there's a customer in the equation at the time we're building it or not.
Okay.
When there is a customer, it's a much more demanding program management challenge. I think what you're poking at, and I can sort of read between the lines, it takes a different caliber of program management, talent, and capability to manage one of those projects versus a retail project, particularly with how we approached it in the past. However, those two lines are going to merge because we're going to build much bigger retail buildings and phases. Everything's going to be larger scale, which means we have to increase our program management capability. We're adding lots of talent, lots of people in our design and construction team to prepare and anticipate for that. We're changing something we call gearing. Gearing for us is the ratio of program management people to the project.
We used to run it fairly light, and we're realizing that given today's complexity, I think this will resonate with most of the people in the room that are building. It is much harder to build today than, say, five years ago and much harder than 10 years ago. Everything about today is harder. Getting equipment on time, failure rates, labor supply, productivity, permitting, regulation, design, everything about how we build today is harder than it used to be. For that reason, as well as just we want to be on top of the programs in a tighter fashion, we're increasing everything about how we manage projects.
You know, where I wanted to go with this is I'm curious, and I appreciate the answer in terms of, hey, on the retail side, the sizes are increasing, more akin to hyperscale, but then we're also seeing hyperscale. We weren't talking about gigawatt deals, you know, two years ago, right? The whole continuum has moved up. I'm curious, as that dynamic plays out, do you think it's increasingly appropriate to maybe separate out those two functions and have like a dedicated hyperscale build team versus having a retail team?
The short answer is it depends on what layer of the organization you're talking about. What we do today, and what we believe is the right answer (though not necessarily the answer for the ages, because we'll keep evaluating), is that the program teams are separate and dedicated per project. We have a group of projects in any given location that have this dedicated team that's more capable as well as experienced with those types of projects on an ongoing basis, because we don't build once. We continue to build in any given market. As you come up toward the top of the organization, we think having it unified, from a supply chain standpoint, from a contractor standpoint, from a planning standpoint, from a tool standpoint, from a leadership and design perspective, all still makes sense. It merges at the upper layers of the organization.
That doesn't mean you don't need unique capability on the ground, which is really where we're making the changes.
Okay, perfect. Thank you for that. For xScale, we've seen you move to bigger scale, go into Atlanta. It seems like there are other markets on the horizon. Now that you're building these large hyperscale campuses, I'm curious how the engagement with the utility changes, right? Because when you're procuring 10 MW or 30 MW, it's one conversation. When you say 200 MW, all of a sudden, it's a completely different world, right? How has that go-to-market evolved?
Yeah, you know, we used to go plan where we wanted to build a data center.
Yeah.
Secure that land, and then we'd make an application for that power. When we were talking 10 MW, 15 MW, it was a question of time. Time was defined in a year or two. You could feel confident you'd get that power.
Yeah.
Those days are gone for two reasons. One, we're no longer building in those increments. Two, the amount of power we need isn't sitting around on the grid. We are planning, and I think most people in the room that are doing data center development are too, to ensure you have a clear line of sight to that power before you take down any land or plan any data center capacity. That might mean a longer cycle, a longer planning horizon with the utility. It may mean a different level of infrastructure. In the past, if you were taking down 10 MW, 15 MW, you basically had maybe a step-down transformer inside of your building, and you had medium voltage delivered to your front door.
Yep.
It was connected to a transformer that maybe you owned, maybe the utility owned, and it was fairly straightforward.
Yeah.
Today, if we go ask a utility for 100 MW, 200 MW, 300 MW, 400 MW, you name the number, we're having to, one, plan differently. We're having to put different infrastructure in. We're building our own substations that we either own or we build and then turn over to the utility. For the most part, we're connecting at the high transmission voltage level. It's a very different proposition. Even then, we may not get utility grid power fast enough, and therefore, we're looking at alternates such as on-site generation, either as a bridge or as the first phase, and then waiting for the utility for the second phase. I think in today's world, you have to be super flexible, multi-pronged with your energy approach. That's what we're trying to do. Somebody asked me earlier in one of our private sessions, do you feel like you have it under control?
Do you feel at ease with energy? I was like, are you kidding me?
Yeah, I don't think anyone can.
It's going to be a period, the same with supply chain, right? It's going to be hard work, grinding, managing. We've built a very large energy team within the company to do nothing but manage these various dimensions, variables, dynamics, changes. The days of "I'm going to order something from the utility, wait two years, and hopefully it arrives" are gone. Forget it.
Yeah, that doesn't work.
We're structuring a special deal with that utility, and we're going to monitor every milestone along the way to ensure that in two, three years, whatever the time horizon is, it's actually going to arrive. If there are key milestones missed along the way, we're going to be engaging with the utility. It's going to take a whole different level of power planning, engagement, and discussion. Public utility regulation is also changing toward "if you don't use it, you're going to lose it." We're seeing that model emerge in a number of markets. Monitoring and keeping in close dialogue with the utility so you understand what power they've actually allocated to you is important, because you have to think about power, in the simplest of terms, in two relevant variables.
One is that the infrastructure they've brought to you has a capacity to it, but there's power that sits behind that which has to be managed by the utility.
Yeah.
If they're not planning the infrastructure, or planning to deliver the amount of power that sits behind that connection, then you get out of balance really easily. We've literally seen that. You have to be in constant dialogue with the utility. Hey, I've got a 20 MW connection. I'm using, whatever, 12 MW. Are you reserving the other 8 MW for me? Here's my projection of when I'm going to ramp into that. Capacity planning for the utilities is no easy task these days. For the longest time, a utility's load curve was basically flat.
Yeah.
With electrification coming to fruition, now they're seeing upticks that they have to get on top of from a planning and capacity standpoint.
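[Editor's note: a minimal bookkeeping sketch of the two variables Raouf separates, the capacity of the physical connection versus the power the utility has actually reserved behind it. The 20/12/8 MW figures echo his example; the structure is hypothetical, not a real utility interface.]

```python
from dataclasses import dataclass

@dataclass
class GridPosition:
    connection_mw: float  # what the delivered infrastructure can carry
    reserved_mw: float    # what the utility has committed behind it
    consumed_mw: float    # what the site draws today

    def unreserved_gap_mw(self) -> float:
        # Connection capacity with nothing reserved behind it: the
        # out-of-balance condition constant dialogue is meant to catch.
        return max(self.connection_mw - self.reserved_mw, 0.0)

    def ramp_headroom_mw(self) -> float:
        return self.reserved_mw - self.consumed_mw

site = GridPosition(connection_mw=20, reserved_mw=20, consumed_mw=12)
print("Headroom to ramp into:", site.ramp_headroom_mw(), "MW")  # 8 MW
print("Unreserved gap:", site.unreserved_gap_mw(), "MW")        # 0 MW if balanced
```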
I want to take what you just said and go a step further. As you think about the retail buildings that you're building, I appreciate that when you're doing 100 MW, 200 MW, 300 MW campuses on the hyperscale side, that's the conversation.
I want to touch on that point, but go ahead.
Okay. One of the things we hear about when we're talking about hyperscale is mandatory minimum commits, clawbacks on power, contribution-in-aid-of-construction arrangements where, to what you just said, you're putting the money up for either a transformer or maybe even some of the transmission upgrades. What I'm curious about is, at the retail level, are you being required to do that in order to get the 60 MW of power in Northern Virginia? Because if the answer is yes, then that introduces obviously a new set of questions, particularly if there are mandatory minimum commits involved.
The short answer is yes, because of the increments you're talking about. If you're building a 60 MW xScale or hyperscale facility or a 60 MW retail facility, the power dynamic to get power to that location is the same.
Yeah.
The ramp-up might be a little different, but otherwise, the arrangement you make is one similar. This is actually a good segue to the point I wanted to make, which is if you look at what we're doing in Hampton, which I know you're very familiar with.
Yes.
That is what we refer to as a hybrid location, which means we'll have retail and xScale in the same campus. We've got four buildings to work with.
Yeah.
How those four buildings will be used is a story yet to be told. We have complete flexibility. We may end up with two xScale, two retail. We may end up with one retail, three xScale, or any combination you can think of.
I want to.
We're maintaining complete flexibility with how we use that infrastructure.
I was having trouble sleeping last night because I was excited about the conference. One of the things I was thinking about is the Hampton site. I was thinking, CapEx is obviously a focus for people. You have a JV partner that is funding, let's say, 80%, or 75% in the new JV, with 25% for you. Is there a world in which it actually makes sense, from a capital-light standpoint, to lease capacity that's built by xScale for use on the retail platform?
Yes. There are various scenarios out there. I'll paint the picture of the two most likely and easiest to wrap your head around. One is the entire property and the infrastructure and the buildings are owned and on the JV balance sheet, in which case Equinix would lease a building or buildings. The other is when we go master plan, and you'll see more of these cases coming.
Okay.
Not to lean too much into it, but when we master plan a large campus, we may buy a large plot of land. We may plan the upfront utility in a common way, and we may take that parcel and split it. Some will be xScale and on the JV books, and some will be on Equinix's balance sheet. Master planned together, utility solution together, and again, it gives us utmost flexibility in terms of how we manage that, and it's probably a little bit more capital efficient too.
I mean, that to me goes back to the core proposition of xScale, or at least how I remember it being presented to me, which is that there is value in having the compute node sit on the same campus as the network node, and the interplay between the two. It's the same idea, just at a larger campus scale, is how I'm interpreting it, with some added benefits of capital efficiency and you being able to have one go-to-market motion with the utility.
The utility, the supply chain, capacity management, all of those benefits would be there. The ability to attract more network providers to that campus. It's got the full network effect proposition. The more we have multi-tenant data centers as well as hyperscale deployments all in that same general area, it's actually more operationally efficient too.
Yeah.
One of the things we're looking hard at now is, okay, in this new world where I've got a campus that supports 240 MW, what does it take to run that? What is the operating model?
Yeah.
How do we manage deliveries? How do we manage the infrastructure? How do we maintain it? I'm not going to quote numbers, but for sure, the number of people needed on a unit basis is dramatically lower in that environment than in a standalone 20 MW data center. Running at that campus scale, again on a unit basis,
Yeah.
is much more efficient.
Got it. I wanted to ask you, as we've talked about Hampton, have you secured power for that site?
We have.
Is that from Georgia Power?
There's a co-op down there called EMC, I think is the acronym.
Okay.
It's a regional player that fronts Georgia Power. It's a complex system down there, and people are going to run into this in numerous locations the more you get out into the outskirts. Basically, they're the local distribution, and what sits behind them are the transmission lines and the generation. There are three parties involved. You interface with the front-line provider, which is EMC in this case, and they go and secure the transmission and the generation to deliver you that local power.
Okay.
You strike a deal, but you have to be aware of what's happening behind that front line.
That's what I was going to ask. My understanding is the cooperatives, they don't procure generation, they don't build generation.
They buy it.
They rely on somebody else. The next question becomes, do they have the power from Georgia Power?
Yes.
Okay.
Yes.
All right. That's great to hear.
We wouldn't do that deal if we didn't have confidence that the full system was secure.
Kudos. Okay, cool. Now I want to transition in the time we have left. I want to talk about.
You can't mislead the crowd.
Exactly.
It's time for Q&A.
I'll ask one and then we'll do Q&A. All right. We'll make it fun. Here's my question for you. Rack densities continue to increase. One of the questions I get from investors is about the installed base of data centers. This came up in the enterprise panel earlier, where we were talking about the potential need to retrofit. How do you think about that? I know that you've undertaken a project, I want to say this was two years ago, to start getting your existing sites ready to support liquid cooling. I also want to talk about that and how that has gone. How do you think about retrofit, and what are you doing in terms of supporting liquid cooling capabilities within the existing fleet of data centers?
Yeah, remember what liquid cooling is. It is taking the chilled water from a central plant and distributing it in a different way, rather than just taking it to an air handler.
Yep.
The retrofit we're doing to enable liquid cooling is tapping into that chilled water system to extend it to offer liquid cooling.
Okay.
You have to just tap into that pipe, and as long as you have sufficient central plant capacity, then you can leverage that.
Yep.
Now, we can upgrade the central plant.
Okay.
As necessary. Keep in mind, in the data center environment, broadly speaking, there is a lot of headroom in terms of what people buy and what they use.
Yeah.
We are going to continue to push the boundary on what you can utilize. In all the locations that we've described, we have spare chilled water central plant capacity to tap into. If need be, we'll add more chillers and what have you, in cases where that's possible. The retrofit isn't as dramatic as it sounds because we're just tapping into existing infrastructure. What we're really trying to do, just to put it in context, is monetize the pockets of space that aren't used. Most of these centers are fairly well occupied, in the 80%+ range, right? There are some pockets that will be an opportunity.
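[Editor's note: a small sketch of the retrofit gating check described above. Liquid cooling taps the existing chilled-water loop, so the question is whether the central plant has spare capacity for the added heat load. All numbers are hypothetical.]

```python
def can_retrofit(plant_capacity_kw, current_load_kw, new_liquid_load_kw):
    """True if spare chilled-water plant capacity covers a new liquid-cooled load."""
    spare_kw = plant_capacity_kw - current_load_kw
    return spare_kw >= new_liquid_load_kw

# e.g. a central plant sized for 25 MW of heat rejection, currently at 18 MW:
print(can_retrofit(25_000, 18_000, 4_000))  # True: just tap the loop
print(can_retrofit(25_000, 18_000, 9_000))  # False: add chillers first
```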
Is that on a cabinet basis, on a square footage basis, or on a power basis?
That is on a space basis.
On a square footage basis.
On a space basis. The power is actually lower than that because there's lots of headroom. When a typical system is designed and engineered, somebody's going to plan for, let's just say, taking down 100 kW, and they'll leave 20 kW, 30 kW, 40 kW of that for engineering headroom, because engineers are conservative, as well as for growth. We monitor that trajectory and that load. Actual consumption is about half of what's contracted.
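[Editor's note: the headroom arithmetic above, sketched with hypothetical margins, showing how a 100 kW contract ends up drawing roughly half that in practice.]

```python
# Hypothetical contract-versus-consumption arithmetic.

contracted_kw = 100        # customer takes down 100 kW
engineering_margin = 0.30  # the 20-40 kW conservative engineers hold back
growth_reserve = 0.20      # room left for future growth

expected_draw_kw = contracted_kw * (1 - engineering_margin - growth_reserve)
print(f"Expected draw: {expected_draw_kw:.0f} kW of {contracted_kw} kW contracted")
# ~50 kW, consistent with consumption running at about half of contract,
# which is the monetizable headroom the operations team tracks.
```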
Okay. I said I'd open up for questions. Any questions from the audience? Yes, please.
Thanks for the comments, Raouf. Great stuff. Curious, we're starting to see some of the utilities, Dominion, Oncor, start these proceedings around large electrical loads, you know, how they can impact the grid based on sags, based on the oscillation that we're seeing with these GPUs. Nothing's come out yet, but just wondering how you guys are thinking about that, and how you're thinking about incorporating that in terms of your offering and how to protect yourself and all that stuff.
Just to be clear, we're talking about load spiking with GPUs, right?
That's correct.
Okay, cool. That was going to be my next question. Thank you.
I think it's early days to understand how that's going to work en masse, and whether you have a center that's completely filled with that load versus a blend of that plus other things where the load is somewhat stable and static. I think we're actually more worried about it down at the next level, at the UPS level, because that's what's going to tie directly to that load. I think a lot of work's being done to evaluate how that's going to play out. I'd say it's early days. I actually thought you were getting at a different dynamic, which is historically what's happened. Historically, if you were going to take down a bunch of capacity off of the grid and incremental infrastructure was needed, the capital required to do that was spread across the rate base.
I think what we're seeing more and more is that's not going to work. That's not going to fly. The rate base isn't going to just stand by for that. The PUC is not going to stand by for that. Players in the data center space are going to have to step up to underwriting that infrastructure in various different ways. That's playing out in real time as well. I think there's a lot of dynamic dimension to what's happening in the energy space right now, including how to manage the future world of spikes and variables and the like.
All right. With that, I'm looking at the back of the room. Looks like we're out of time. Raouf, it was a pleasure.
Did you leave time for Q&A?
I did. I did. Finally, I did. Thank you. Thank you very much, Raouf, for joining us. Thank you very much, everyone, for being here. That was awesome, man. Thank you. Thank you, sir.