All right, well, I guess it's afternoon now, good afternoon, everyone, and welcome to our 9th Annual Communications Infrastructure Summit. For this meeting, we have Equinix, and from Equinix, we have Jim Poole, their VP of Business Development, and Chip Newcom, their Director of IR. This session will be structured as a fireside chat, which will run for 40 minutes. I have a list of questions prepared, and to the extent that there are any questions from the audience, I'll do my best to take them towards the end. With that, Jim, Chip, thank you very much for being here. Really appreciate it.
Thank you.
Awesome. As we start things off, Jim, for those who may be a little less familiar with you, can you give us an overview of your role at Equinix, and as part of that, how your role at the firm has evolved over time?
Okay, sure. As stated, I'm a VP in the business development organization, which at Equinix is really around ecosystem extension, since that is core to the business model. I joined the company 14 years ago. I originally worked in the marketing organization, where I ran service provider marketing, and today I run a program inside of business development we call Evolving Edge, which is essentially what is the role that Equinix plays in things like 5G, subsea, satellite, and AI.
Okay, all right. As part of that, I would ask you, where do you spend most of your time focused these days? What takes the lion's share of your time? Is it one thing, many things?
It's probably two things. I hear this AI thing has become super interesting to people.
Yeah!
You know, I joke that I started working on that 4.5 years ago, and no one cared, and now it's amazing. The other thing would be 5G. I spend a lot of time on wireless.
Okay. All right. Let's transition and talk a little bit about the demand environment that you guys are seeing. 2022 was a record leasing year for the broader industry, but this year seems to be marked by some tougher macro conditions. You know, we've been hearing more hesitation on the enterprise side in terms of their willingness to lease. If we're going to put AI aside for the moment, you know, can you frame up the enterprise demand for digital transformation that Equinix is seeing, and perhaps how it compares to what you were seeing this time a year ago?
Sure, sure. You know, on a comparative basis, without a doubt, we've got inflation in a place that it wasn't a year and a half ago, and that certainly, I think, has made many customers super careful and cautious about where they spend their money. That being said, you know, the Equinix value prop to our average customer is, I like to say, an ROI argument.
It's really around: How do I get the best yield for infrastructure that I have to deploy for some reason? I know Charles used the statistic in the earnings call just recently, where we talked about the fact that, with, say, the Fortune 500, we're looking at, what, 30% digital services revenue generation in terms of contribution to those companies.
In 2022, moving to 40% by 2026. You know, the reality is digital transformation for our customer base is not so much an "it would be nice if I could." It's more of a reason for being and something that they have to invest in. While I think we're seeing people try to be cautious, I don't see any slowdown in the demand for what we provide as part of that equation.
You know, if I take that and dig a little bit deeper, what it sounds like is that even if there was a pullback because of macro conditions, it just leads to pent-up demand, and at the end of the day, they still need to move forward with digital transformation. Is that the right way to interpret it?
Well, yeah. I mean, I don't see any customer going, "Maybe I'll try digital transformation, and if it doesn't work, I'll go back to my standard brick-and-mortar." No. That does not happen, right? Either they're going to do it or they're not. The question might be the pace at which they do it. But take us as the example: with 10,000 customers, at any given moment, what portion of that customer base is in what portion of that journey? If you think about where our sales activity comes from, the vast majority of our sales is selling to customers we already have, right? They're already in the system, they're already seeing the benefit, and so it's a land-and-expand sort of equation.
We have, you know, a lot of ways of sort of, you know, looking at these moments in time where, you know, the macroeconomic environment, like, might look challenging from the, from the outside. It doesn't necessarily have a huge impact on us, because either you're digital or you're not. You're not sort of halfway, you know, in between.
Yeah.
The thing that I'd add as well is, as you think about all the various different technology trends out there, all of them point back to Equinix when it comes down to it. I think part of the durability that we've seen in our business for now 25 years is the fact that because of this broader trend towards digitization, whether it's thinking about 5G or AI or hybrid multi-cloud, all of that requires doing more stuff with Equinix, because we end up being the hubbing point for all of your networking traffic, or where your private cloud deployment is going to sit, because you want it to be proximate to the 40% market share that we've got for the on-ramps to the cloud service providers in the markets where we operate.
I think that's part of what we benefit from, is that, you know, if, if the thesis is that the world is not going to go more digital, then admittedly, that's, that's a challenging market environment for us. Even across macroeconomic cycles, enterprises right now are still trying to figure out how do they get more efficient, how do they get more digital, which means how are they going to work more with Equinix over time?
That makes sense to me. You know, sticking with the topic of digital transformation, this is a question that I asked earlier in the enterprise session. We've been talking about digital transformation and enterprise outsourcing for a while now, and I made the joke that for a while we've been talking about 70%, and it seems like when I go to these conferences, I hear the same thing over and over again. Jim, from your perspective, to use a baseball analogy, what inning do you think we're in from a digital transformation perspective? As part of that, maybe you could tell us where you think we stand in the U.S. versus other regions around the world.
Sure. Sure. Probably without being able to call the exact inning, I would say we still have a lot of game left.
Okay.
Certainly, if I had to stack rank progression, obviously North America is the most advanced, followed by Europe, then APAC. As far as how much more we have to go? You know, we use a really simple way to think about it: if you looked at our penetration into the Fortune 500, we're at 58%. If you looked at the Global 2000, we're at 41%, right?
Okay.
That is the exact profile of customer that benefits the most from being an Equinix customer. You know, at the end of the day, there is a lot of market yet to tap.
It's a long runway of future demand for you guys.
You know, along those lines, when we've spoken with Keith and Charles in the past, they've highlighted the focus on signing the right deals at the right price points. How has Equinix been thinking about what the right deal is and, as part of that, what the right price point is over the past few years?
Oh, sure. This goes back to my earlier point: the discussion that we typically have with a customer is a conversation around ROI for the deployment that they put with us, right? If you come to us and you say, "I want to be in a single site, and the guy down the street is $10 per kVA cheaper," you should probably talk to him. You shouldn't be talking to us, right? I gave an example in an earlier meeting today with the network service provider customers that we have. If I said, "Stack rank every deployment of infrastructure you have in your network by where you get the best return. Who's in the top 10? Equinix is." Right?
You know, the question we're having is: How am I helping you do your business better? So when we say get the right deal, that's exactly what we mean, 'cause ecosystem effects are additive, right? It's not like a Venn diagram, where there's this thing over here and this other thing over here.
They, you know, talk to each other. It's more like this one, and then this one, and then this one, and as you keep adding, it's incumbent upon us to maintain the value, right? We keep customers who are super interested in the idea that says, "Hey, I've got a supply chain that involves 10 different companies, and the cheapest way I can interconnect to those 10 different companies is to do a deployment at Equinix."
Yep.
That is the perfect customer. They get the most value out of that because we make that part of what they're trying to do as simple as possible.
You know, I want to stick on that, because one of the things I also think about with the ecosystem effect is, I imagine what's important is propensity to consume interconnection, right?
Yes.
Connecting to other folks within the ecosystem. You know, when you're having conversations with, let's say, a new logo, how do you frame up what the potential opportunity is on the interconnection side? As part of that, how a customer could actually be additive to the ecosystem. I'd be curious-
Sure.
How you think about that.
I guess you could do it in two, you know, two ways. One would be sort of on the consumer versus the supplier side, right? On the, on the consumer side, start with the enterprise side.
You know, you're generally looking at a customer where you're saying: Okay, you're in some phase of your digital transformation. Who are your biggest suppliers, right? That's where we always get back to, and as much as it's an overused term, I always say, "It's not just hybrid multi-cloud, it's hybrid multi-cloud, multi-network." Why? Because all of those customers have three characteristics in common. They all use multiple networks, they all use multiple cloud providers, and there is something that they do that they do not want in the public domain but that has a propensity to interconnect to the public domain, right? If you looked at our enterprise customer, the most common use case is private storage, public compute. They're fine using the hyperscalers for the compute cycles, but they don't ever want the data to stay resident in that location.
Right away, you can qualify out a whole lot of potentially bad deals by finding that customer who's like: Yep, that is exactly the profile I fit. On the supplier side, it's almost the inverse, so I'll go back to my network analogy: if you're a network operator or you're a CSP, a cloud provider, deploying at Equinix, you're fishing in a barrel. The highest-propensity consumers are the ones that sit in our buildings, and I've seen it on both sides. If you were to ask a network provider, they've got the most customers possible from doing a deployment. I could do one deployment in an Equinix campus, and I can go after hundreds of customers. You know, my joke is: A data center without a network is a refrigerator, right?
You know, at the end of the day, for them, that's a big deal. Same thing on the cloud side. If the cloud providers told you how they would characterize an Equinix customer versus an average customer, an Equinix customer is way more sophisticated and pushes way more traffic, right? In the cloud space, how do they make their money? They make their money on traffic, right? What goes in and out of the cloud.
Yes, yes. Yep.
So therefore, you know, we are the perfect sort of combination of characteristics for whether you're an enterprise looking to talk to a service provider or vice versa.
I want to take that point, build on it, and tie it to AI. When we think about what they're putting in your facilities, it's the stuff they don't want to relinquish control over. To your point, they'll use the public compute, but they don't want that data sitting in the public cloud environment. When I think about AI, I'd imagine there's sensitive data that they want to be able to do something with, but they don't want to relinquish control.
As you've spent time over the last few years thinking about AI and how this works, how do you see the enterprise deployment architecture for AI differing, if at all, from what we've seen with the cloud, essentially hybrid multi-cloud?
Yeah. Well, the reality is, at least at this point in time in the market, it doesn't actually look any different. Why? Because the vast majority of GPU capacity sits in the hyperscalers, right? If I just looked at the AI customers that we're talking to today, what are they doing? They're basically saying, "I have private data that I do not want, 'cause that's the secret sauce of my company-
Yep.
sitting resident inside of one of these AI services. However, I'm fine interconnecting to it." They're asking us, "Hey, where are all the Azure gateways? Because I want access to OpenAI," right? It's a very standard thing. On the flip side of that, where we're starting to see some newcomers come in, some of these upstarts who are starting to deploy GPU as a service, they may decide to deploy somewhere in the middle of nowhere where there's cheap power, and then the next day, they show up at us, and they're like, "I need a node in Chicago, Dallas, Silicon Valley, and D.C.," because that's where all the customers are. The training function within AI is not a horribly latency-sensitive thing.
Right? Then the next phase of my deployment, after I get my model trained and I do everything I want, is: okay, where do I put things like the inferencing functions, which is a far more distributed function? They look at Equinix and go, "Oh, you're within 10 milliseconds of 80% of the GDP of the planet, right? So where else am I gonna go?"
As we consider that, we're talking about training being less location- and latency-sensitive relative to the inference side of the house. Would you expect Equinix to play in that side of the market, to go after the training opportunity, or should we think of it as strictly inference that you'd pursue?
Well, our default is inference, without a doubt, because that is purpose-made for what we do. Just to give some insight to that, the inference function within sort of AI more generally is not a GPU-specific function, right?
Most inferencing functions run on CPUs just fine, right? A lot of that has to do with the fact that training is a highly parallel process, and that's what GPUs are very good at. Inferencing is a serial process. You ask a question, and it gives you an answer, right?
That lends itself more to distribution. However, per the last earnings call the other day, Charles did acknowledge the fact that we are now thinking about xScale,
which is essentially our exposure to the hyperscale market, which traditionally for us has been a non-U.S. activity, and we are looking at that in the U.S. at this point. We were looking at that in the U.S. already, sans the whole AI demand. Does AI have a potential influence on that? Yeah, potentially, and that's part of the calculus as far as us thinking about it. But I don't know if you have anything to add.
Well, the thing I'd add, too, is as you think about AI, it's not gonna be homogenous.
There's going to be different types of training that's required. There's gonna be different types of inference that's required. As you think about what we've been doing in AI for years already, effectively machine learning, that sits within our facilities, and you can do training where you've got structured data coming in from a manufacturing floor, or something that is easier to train on relative to unstructured data, like what you're gonna have in a large language model.
You can do that very easily and efficiently within an Equinix facility. We're already supporting different forms of AI, not necessarily the high compute needs for something like a large language model, but portions of AI are already sitting in our facilities. It's not necessarily a new function. It's just that because of OpenAI, and because of what has happened with that exploding onto the scene, it's put a spotlight on one very specific portion of AI models. We're supporting a whole host of other different types of AI applications already.
Yeah. It's important to note, and some people don't necessarily appreciate this, that solving the language problem from an AI perspective was the hardest problem they could think to go after, right? Because the variability in language is almost infinite. Right? On a comparative basis, and I use this as an example, genomics has fewer moving parts, right?
You can do genomics jobs on a fraction of the GPU capacity that you would need for something like a large language model, because they went after something that was a really hard problem to solve. It's drawn a lot of attention, but to Chip's point, it's not the only thing happening in the market.
That's a fair point. I think it's interesting that you brought up xScale when I asked about the training side of it. The way I thought of xScale is, I don't necessarily think of it as commoditized hyperscale. The way I think of it is, whatever workload is going into xScale needs to sit proximate to the retail campus, because there is some interconnection that's going to be happening with the existing ecosystem. From what you're saying, it sounds like there is going to be a subset of training workloads that will need to leverage the existing ecosystem on your campus, and as such, it makes sense to do xScale in the U.S. as you look to service that type of demand. Am I interpreting that correctly?
Well, if you looked at our xScale footprint internationally today, that is exactly what it conforms to. We do not build xScale facilities in the middle of nowhere in France or Germany, or anyplace else. They always sit in markets where we're already deployed. That is the model. We are not chasing where is the absolute cheapest place we can deploy so that we can sell at the absolute lowest dollar. That has not been the model for the xScale business from day one.
Okay. Then, just talking about the demand related to AI. You've made the point that you've been servicing these types of workloads for a while now, and they've gotten a lot more attention. As we think of the evolution of demand related to this specific type of workload over the course of the last six months, when it really became popularized with ChatGPT, have you seen an inflection higher in terms of the demand from your customer base associated with essentially leveraging AI? I think Charles said on the call that enterprises want to integrate AI. Curious what you're seeing in that regard?
Yes. Yeah. That example I used earlier, where we are seeing customers who want to access foundation models that were built in something like a hyperscaler, like OpenAI is built inside of Azure, right? Customers are specifically looking to say, "Hey, I've got data." Oddly enough, the customers that are asking are existing customers, right? Their data's already been sitting in an Equinix facility, and now all of a sudden they're like, "Yeah, I think I might want to try this out. So how do I do that?" Nothing faster than a cross connect or a fabric port to connect you to that hyperscale infrastructure as quickly as you want to.
You know, to the extent that we are seeing some new-ish things, like I said, those upstart companies coming out looking for comms nodes. We're starting to see that. That's activity that you didn't see, say, six months ago. Also, cases where you do have certain types of inference jobs that are heavy and need to be proximate. The example I've been using for people is rendering. Rendering is very video-intensive, so even the inferencing function tends to use a lot of GPUs. It's not the case that the guy in Hollywood sends the workload to Salt Lake and back again.
That doesn't make a lot of sense. Instead, they come to somebody like us, and they say, "Hey, where can I put this in the L.A. market?" Just the sheer size of the amount of data that they're trying to process requires timeliness. It's not so much a latency thing for latency's sake. It's more of a, you know, size and heaviness of the workload. You know, data gravity is a real thing.
That's completely fair. You know, it sounds like, from what you're saying, you have seen an uptick in demand associated with these kinds of workloads. I'm curious, in terms of the power density on the inference side specifically, the propensity to consume interconnection, and the cooling needs associated with it: how does it compare to the traditional enterprise hybrid multi-cloud deployments that you have been seeing?
On the inference side, in particular, it is almost completely identical to everything before. Like I said, there's that rendering case, which is a very unique sort of a case.
There's only a handful of customers that are going to need that kind of processing. For the vast majority of customers that are doing more structured-data, machine-learning-type workloads, those actually run on CPUs. That's what we do, right? If you need to put infrastructure proximate to end users for some sort of latency or performance reason, then you tend to come to Equinix, because you need to deploy that in 20 metros around the world. AI doesn't look any different from an inferencing perspective. In fact, one of the benefits we believe we can bring to this, and this is a large part of what you heard if you listened to our Analyst Day when we had NVIDIA on stage-
is this idea that Equinix facilities are all interconnected by a fabric, right, which is a network that connects everything together and connects into the hyperscalers, all on an SDN, API-driven basis. There's this idea that says, I'm gonna develop this great training thing off somewhere in the middle of nowhere, where it's super cheap to do. Sure, you can go do that. The reality is, I've got to take that algorithm and deploy it proximate to all the places where somebody's going to consume that function, and I want to do that as easily as possible.
Am I going to go out and say, "I need to go find four network providers and three cloud providers that are the same in each of these locations"? Or can I just go to Equinix, and they can tell me exactly how that's going to happen?
What I find interesting about what you're saying is, you talked about inference essentially leveraging CPUs rather than GPUs, and now you're saying that the workloads that you're seeing for AI inference are essentially very similar to what you saw in terms of hybrid multi-cloud deployments.
From that perspective, that suggests to me that, while we've been talking about higher power densities, on the inference side maybe we don't need to essentially reinvent the wheel in terms of how the interconnect-oriented data centers are architected in order to support it. What are your thoughts there?
Yeah, I mean, we've been asked the question that says: Hey, do you need to go back and, you know, retrofit a bunch of your facilities?
No.
No, we don't. The first thing to level set with is, you know, Equinix, as a retail colocation provider, exists in a demand market where demand is heterogeneous. Meaning, we don't know until we talk to the customer, what the customer wants to bring to us. If you walked into a hyperscale facility, you would see rows upon rows upon rows of the exact same thing.
If you walk into our facility, you see, you know, you name it, right? Supercomputers on one side of the room and, you know, everything else. We have to be able to accommodate liquid cooling today. We have to be able to accommodate HPC today. The average may have stayed in this kind of 4-6 kVA range for a very long time, and that's been the case. Certainly, on a go-forward basis, we expect GPUs to start being used in more and more different types of applications. We talked to NVIDIA, Intel, and AMD, and if you ask them about their HPC systems in the next, say, 3-5 years, they all go to liquid cooling, period. Right?
They are right at the point now where this is the last generation that can be air-cooled. On a go-forward basis, to the extent that HPC will be some portion of the workloads, it's not like all the regular IT workloads vanished.
Right? That's still growing, and we're still benefiting from that. The portion that HPC requires, we have to be able to accommodate, and so as we go forward, new buildings will have better capacity to handle things like liquid cooling. For example, we have a co-innovation facility in Ashburn, Virginia, where I live. One of the things we've been working on is liquid to the rack. Why are we doing that? Well, because liquid to the rack basically allows us to handle customers who do radiators on chips-
Right? Just literally, or radiators on the back of the rack, what they call a rear-door heat exchanger. Either way, I have to be able to put a cooling distribution unit on the floor, and I've got to be able to push out water to that particular rack. Those kinds of innovations will start showing up in new builds as we move forward.
That's interesting. To take that a step further, it sounds like you will see power densities in your data centers increase. Like, if you build right now to 4-5 kW per cab on average, maybe that goes to, like, 5-7, or something like that.
Right.
Well, you'll be able to support the higher power density workloads in pockets of the data centers. To your point, GPU compute is not taking over the entire data center, so there's still going to be that 5 kW workload that's gonna go into the facility. As part of supporting those higher power densities, you bring liquid cooling into the facility-
Sure.
in order to, in order to support it.
Correct.
This doesn't seem like we're gonna go to "now you need to build 30 kW per cab data centers." That's what I'm taking away from this conversation.
Yeah, that's right. It's not like a binary flip, right, where all of a sudden the whole world just decides, "Oh, everything needs to be done on a GPU." Uh-uh. GPUs, like I said, are very good at doing something very specific, which is parallelism. That is not the case for the vast majority of IT workloads. They don't need that. It will be part of the equation, a bigger part of the equation than it was in the past, but it's not the only part of the equation.
Okay. Now, as we think about bringing liquid cooling into these facilities, I'm curious about the cost associated with the cooling architecture, liquid cooling versus traditional air cooling. Is there a material difference in the cost associated with deploying liquid? If so, is there any kind of framework you can provide for thinking about it, either on a per-megawatt basis or on a per-cab basis?
I mean, it is not different enough from what we're doing already for customers today to be worth noting. It doesn't show up as a, "Oh, it's gonna cost 20% more," or anything like that. Because, like I said, we've been doing this all along. You have to think, data centers already consume a lot of water, right? That's how the cooling function happens. The fact that all of the water infrastructure is in the building, that was the biggest part of the cost to begin with, right? To the extent that a customer comes to us and says, "Oh, I wanna do 20 racks, and I need this very specific thing," okay, there might be some NRR associated with some very specific infrastructure that they need.
That may show up in, in the mix, but it's not like it's going to cost us, you know, huge amounts more money to accommodate that for a customer.
The chillers are on site already. Really, any incremental CapEx cost would just be literally bringing the liquid to the cabinet.
Yes, correct.
That's where it is. Then on the OpEx side, is there any material change between the two? I'd be curious.
Well, I mean, most of the feeding and care of the infrastructure that sits inside of an Equinix facility is done by the customer or done by their contractors. What I can tell you, just as a perspective on the market, is that one of the things the industry has noticed is that there is a skills gap in the maintenance of very high-end GPU-based infrastructure, right? Not every IT guy in an enterprise knows what InfiniBand is, or knows any of the resource management tools that you use for GPU clusters as opposed to what you would do for CPU. I think that's exactly why you're seeing some of these startup GPU-as-a-service companies.
It's a very unique set of hardware, and most IT teams aren't as good at running it. We don't materially participate in that end.
Okay, last question on this topic. This is something that I've been reflecting on as I think about Equinix. I think that the moat around Equinix's business goes back to essentially pioneering the carrier-neutral data center, right?
Getting all the networks into the facility. That then had a derivative benefit when it came time for the cloud. If the cloud had to pick one place to deploy a network PoP, well, you'd put it where all the networks are, right?
I think that translated into the market share that you have of cloud on-ramps. To the extent that AI represents the third cycle, the first was the internet, then the cloud, and now AI is the next one. How do you position Equinix to make sure that you have the market share-
Yep.
that you had in the first two legs, in the first two big cycles?
Well, I think some things are sort of nicely built in. Back to my comment about HPC systems and where they're going as far as power consumption and cooling: historically, there were sort of three buckets of demand for places you could put IT. It was on premise, it was in a multi-tenant, or it was in a hyperscaler. AI came along, and the on-premise option just disappeared, right? There is no option now to stick it in my building, because I can't get the power or the cooling, and I'm not willing to even attempt to figure that out. Instead, because of things like the skills gap, I'm more likely to go to a service provider who is set up in a multi-tenant data center or who happens to be a hyperscaler.
From our perspective, that's just fuel to the fire, right? To your point, do we think it is going to be additive to the business overall? Absolutely. In part because one of the alternatives just came off the table.
Fair point. Let's switch gears and talk a little bit about power. One of the things that I was most surprised about through my conversations today is that when I ask folks what the greatest challenge or bottleneck in their business is, they all say the same thing: utility power. As we think about Equinix, how do you position the company to ensure that you have long-term access to power to support the continued growth of that retail business?
Well, I think the start, as we think about it, is that scale really matters now, along with having the on-the-ground market knowledge and expertise. Look at our business: we're doing 53 major builds right now across 40 metros around the world. That's more than half of our metros around the world. As we're thinking about it from a corporate development function, it is increasingly important not just to think about land banking, but also about power banking.
To make sure that you have that long runway. In our largest metros, you know, we're proactively going out there and making sure that we have the 5-plus year lead time for where we think demand is going to be. I think the other big difference, as you look at our business relative to others, is that the velocity with which we sell power is completely different. Take the example of Northern Virginia, where we've got 2 facilities that we're just wrapping up, and we've got the power to be able to continue to operate there through the current challenges that Dominion is having in terms of transmission. The reason that we can do that is because we're not selling in 5-10 MW clips. We're selling at the 5-10 cabinet clip.
When we have a conversation with a utility operator, we're not taking down the same level of capacity. If you walk up to a utility provider and say, "Okay, I need 150 megawatts, and I need it in 6 months," yeah, that market's gone. The difference is when we're saying, "Okay, we need 20 or 30, and we're going to use that up over the next 5 years," that's a very, very different conversation. I think the other thing, too, as we think about certain capacity-constrained markets, take Singapore.
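The velocity difference Chip describes can be sketched with some simple arithmetic. All the figures below are illustrative assumptions, not Equinix data: a cabinet is assumed to draw roughly 7 kW, and the deal counts per year are hypothetical.

```python
# Illustrative sketch (all figures are assumptions, not Equinix data):
# how quickly a 30 MW utility allocation is consumed when power is sold
# in hyperscale MW clips versus retail cabinet clips.

ALLOCATION_MW = 30.0

# Hyperscale-style leasing: 5-10 MW per deal; assume two ~7.5 MW deals/year.
hyperscale_mw_per_year = 7.5 * 2
years_hyperscale = ALLOCATION_MW / hyperscale_mw_per_year

# Retail-style leasing: 5-10 cabinet clips; assume ~7 kW per cabinet and
# ~40 small deals of ~7 cabinets per year (a hypothetical sales velocity).
kw_per_cabinet = 7.0
retail_mw_per_year = 40 * 7 * kw_per_cabinet / 1000.0
years_retail = ALLOCATION_MW / retail_mw_per_year

print(f"hyperscale clips exhaust the allocation in ~{years_hyperscale:.1f} years")
print(f"cabinet clips exhaust it in ~{years_retail:.1f} years")
```

Under these assumptions the same allocation lasts roughly 2 years when sold in MW clips but well over a decade when sold cabinet by cabinet, which is why the utility conversation looks so different for a retail-oriented operator.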
Great example, because, you know, that has been a market with a moratorium. We're very pleased to have gotten an allocation recently. The reason that we're able to continue to drive that is because the value that we're creating for the ecosystem there and for the economy there is huge. To Jim's point about the ROIC that a customer can create within our facilities, I think it's having that conversation with the local municipalities, with the local boards, with the local utility operators. 'Cause in many cases, it's not just simply power now; it's also, can you get the approval from the local council? In Europe, for example, if you're not thinking about how you're sustainably building your data center into the broader ecosystem of an environment, you're never going to have any shot.
If you're a smaller, sub-scale operator doing two or three builds a year, yeah, it's going to be a lot harder relative to a scale operator who is building in multiple metros, building at size the way we are.
Sticking on the topic of power, you know, one of the things that's risen in popularity, at least from the conversations I have had, is cogeneration or self-generation of power in capacity-constrained markets. I believe Equinix leverages Bloom Energy fuel cells in certain markets to power part of its data centers. How do you think of the role that self-generation will play, if at all, in data centers over the long term?
It's going to get bigger. As we think about self-generation right now, Bloom, for the most part, is actually more for backup power in a lot of the markets where we're operating. In Silicon Valley, though, one of our facilities is actually operated primarily on Bloom.
Okay.
We found that's working well. Take, for example, the Irish market, where, you know, EirGrid has said, "No new data centers connect to the grid until 2028." We're still building. We just opened up DB5 last year, and we're building DB6 right now. The reason we're able to do that is 'cause we're operating those off of a gas turbine power plant that we built. The beauty of it, from EirGrid's perspective, is that over time, will we connect to the grid? Sure, we probably will. We can then also become a swing consumer for the grid for them as they're making that transition towards renewable energy, which is part of the challenge that they're having, shifting towards wind and solar and other sources.
If the sun's not shining, if the wind's not blowing, we can turn on our gas turbines and come off the grid, and from a ratepayer's perspective, it actually keeps their rates low, because then EirGrid's not the one building a power plant.
Got it. You know, let's talk a little bit about supply chain dynamics, and this is a point that I raised in the last session. As you think about it, the supply chain kind of gapped out in 2022 for a lot of the key pieces of equipment. That was in part, I think, in response to the demand shock that we saw at the end of 2021, with a little bit of COVID supply chain issues sprinkled in on top of that.
You know, now as I'm looking at the magnitude of demand that's hit the market, has the way that you think about winding up your supply chain, or maybe the capital that you're committing to lock in key pieces of equipment, changed at all as a way to hedge yourself against an expansion in lead times for key pieces of equipment?
No, this again goes back to my previous comment that having scale really matters, because we're building 53 major projects right now, and, oh, by the way, there is a long tail of smaller projects that we're doing that aren't on our expansion sheet. You know, we can then think about interoperability as we're looking at our supply chain and putting our capital to its highest and best uses. As we talked about on the Q2 call, the lead time to get a 3-megawatt generator right now is 120 weeks. If you're not in line already, you're having a lot of challenges.
For us, with all this equipment already pre-ordered and our position in line, depending on what we need for highest and best uses, as we're seeing demand flow into our markets and doing our builds, we can go back to our supplier and say: Okay, actually, that piece of equipment that we said we need you to ship to D.C., we're actually gonna need that in Chicago; or instead of having that go to London, let's send that over to Paris. That puts us in a position to be able to continue to deliver on the types of timelines that we want to deliver.
Also, given just the scope and breadth of what we're doing, you know, the hyperscalers are certainly gonna be bigger consumers of the plant and equipment associated with what we're building, but there aren't gonna be many other larger consumers of these key data center components the way we are.
All right. Let's switch gears and talk about something which I think we haven't talked about for a while with Equinix, which is what I would call tech-oriented M&A.
We saw you guys do the Packet acquisition. You're building out your digital infrastructure services suite, and, you know, somewhere along the way, we kind of stopped talking about it. I'm curious, have you backed off of your interest in growing the technology or digital infrastructure services suite via M&A, and now you're just building organically? I'm curious where that falls in terms of the priority list for the business.
Well, certainly-
I can start.
Better view on this one.
And then you can go. Absolutely. I mean, I think we mentioned this earlier: if you looked at the CAGR of digital services for us in the last few years, it's around, you know, 44%, right? Fabric continues to be the gift that keeps on giving, and is the fastest-growing product we have. Part of the reason we did Network Edge and acquired Packet was for the same thing. Why? Well, the thing that we realized was that the average customer, especially the average enterprise customer, what they're taking advantage of with Equinix is that sort of positional value of being in the right market, connected to the right counterparties, catering to the right set of customers, right?
They're trying to solve for those 3 things. What's the biggest barrier to entry for that customer? It's the CapEx associated with expansion. It's not us, right? Normally they would come to us and say, "Oh, it's a few hundred dollars per rack. That's no big deal, except I've got to go procure tens of millions of dollars of kit to go do it." By offering things like metal-as-a-service, and things like Fabric and Network Edge, what we're capable of doing is allowing a customer to essentially consume colo as a completely virtualized construct. What that does is simply speed up the whole land-and-expand behavior that is, you know, basically most of our enterprise customers. We are very committed to digital services for that particular reason.
You know, Packet was a newer acquisition. Fabric started off life as what we called Ethernet Exchange, then it became Cloud Exchange, and finally it became Fabric. What we're seeing there is that almost every single network provider, cloud provider, everyone in the market today, is all around as-a-service consumption: I need to make everything go as fast as possible. One of the things that a lot of our customers have realized is, hey, say I'm a network provider, and I'm connecting to 10 clouds. Well, how am I doing that? I'm probably doing it through Equinix. What if I want to connect to all the other customers sitting in an Equinix facility? Well, Fabric will allow you to do that on an automated basis.
My choice is either I go off and build a snowflake, and individually try to figure out, with every counterparty I deal with, how I'm gonna actually automate that interconnection process, or by default, I can just use what Equinix has already built. Part of the reason we're seeing such good demand for our digital services is that every network-as-a-service or compute-as-a-service offer requires some amount of automation going on in the background that links their systems to the counterparty they're trying to talk to. That's what Fabric does.
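The "snowflake versus shared fabric" tradeoff Jim describes comes down to simple counting, which can be sketched as below. This is a generic illustration of the interconnection math, not Equinix's actual numbers or API; the function names are made up for the example.

```python
# Hypothetical illustration of the math behind a shared interconnection
# fabric: N parties each integrating bilaterally need O(N^2) bespoke
# "snowflake" automation links, while a shared exchange needs only N ports.

def bilateral_links(n_parties: int) -> int:
    """Each pair of counterparties builds its own bespoke integration."""
    return n_parties * (n_parties - 1) // 2

def fabric_links(n_parties: int) -> int:
    """Each party automates a single connection to the shared fabric."""
    return n_parties

for n in (10, 100, 1000):
    print(f"{n} parties: {bilateral_links(n)} bilateral links "
          f"vs {fabric_links(n)} fabric ports")
# 10 parties need 45 bilateral links but only 10 fabric ports;
# at 1000 parties it's 499500 links vs 1000 ports.
```

The quadratic growth of pairwise integrations is why "just use what Equinix has already built" scales where per-counterparty automation does not.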
All right. Well, look, with that, we're just over time, actually.
Gentlemen, thank you so much for being here with us today. Really appreciate it. Again, thank you.