Equinix, Inc. (EQIX)

Goldman Sachs Communacopia + Technology Conference 2025

Sep 8, 2025

Jim Schneider
Analyst, Goldman Sachs

Good afternoon, everybody. Welcome to the Goldman Sachs Communacopia and Technology Conference. My name is Jim Schneider. I'm the data center analyst here at Goldman Sachs, and we're really happy to have Equinix here today, joined by Steve Madden, VP of Global Technical Advisory at Equinix, as well as Chip Newcom, Senior Director of Investor Relations. Welcome, guys. Thanks for being here.

Chip Newcom
Senior Director of Investor Relations, Equinix

Perfect. Thanks for having us. Quickly, before I know you've got a lot of questions, just so we cover it for my legal team, some of the things we're going to say here are forward-looking in nature. Please see our SEC disclosures for risks and uncertainties.

Jim Schneider
Analyst, Goldman Sachs

Very good. We'd be remiss to start any place except AI as a key theme across the conference, but also a driver for your business. In your role as VP Global Technical Advisory, you work with customers and partners to optimize their use of Equinix platforms. You've got a bird's eye view to that. Maybe start off by giving us some insight into how AI is shaping the customer conversations you're having and how that's evolved over time.

Steve Madden
VP of Global Technical Advisory, Equinix

Okay. Yeah. We've been on this journey of growing data, using data, doing machine learning, doing analytics, doing different things. AI came along and kicked all of that up a notch, accelerating the need to do more with data more quickly. I deal with both enterprises and providers. Either I'm having a conversation with enterprises who are trying to explore the space and understand what this means to them: where do they get started, do they get going in the cloud, do they need infrastructure? Or I'm having conversations with new service providers, like GPU-as-a-service providers: how do they get into this business and where do they go?

I think that overall, a lot of what we saw in this trajectory is just being accelerated and amplified by AI, but a lot of what they're trying to do and solve has been challenges we've already had.

Jim Schneider
Analyst, Goldman Sachs

I mean, given this intense focus on AI, can you give us an update on sort of the amount of AI workload demand you see within your data centers at Equinix, and sort of how you're positioning the company to capture the growing demand for inference in particular?

Steve Madden
VP of Global Technical Advisory, Equinix

Yeah, I'll start and then turn it over to Chip. Why don't we set the context: when a customer comes to us with a project, the project is usually going to require considerable infrastructure and connectivity. I need to manage data and have it ready and available in order to source and learn from it. I'm potentially going to need some GPUs or some AI accelerators. I'm also going to need other partners to consume models from. I'm going to be subscribing to data. I'm going to want to sell my data to people. It's actually a multi-location, multi-deployment discussion that we're having. Yes, during that conversation, they might want to deploy GPUs in a particular location. But when we talk about AI and AI deployments, I get lost a little bit.

Do you mean large LLM training in a particular facility, or do you mean a project that's related to AI? We have this conversation internally all the time. I would say all of our customers have it on their radar. Some of them are actively deploying in our infrastructure today. Some of them, on the other end of the spectrum, don't even have any plans to look at AI right now; it's just too far out for them. I think we're in a bird's-eye position where we can see what the peloton of the market is doing and where they're going with it, where the middle of the market is sitting and exploring, and where there's a tail where nothing's really happening at all.

Chip Newcom
Senior Director of Investor Relations, Equinix

Yeah. I think, to Steve's point, we are definitely seeing the leading edge of what a lot of enterprises are doing. There are specific verticals that are further along. In financial services, whether it's high-performance compute for capital markets or things like fraud detection, we've seen a lot of those use cases continue to proliferate. In other places, like healthcare, we have a couple of health tech companies working on new drug discovery using artificial intelligence on their therapeutic data. We're seeing both inferencing and training getting deployed into our data centers.

The hard part is, to Steve's point, what is, quote-unquote, an "AI deployment"? In some cases, it can actually be relatively challenging to say definitively. If it's an NVIDIA DGX SuperPOD at 120 kW, clearly that's AI. That one's easy to tag. But it might be networking associated with transiting data to a cloud service provider or a GPU-as-a-service provider to start developing a model. It might be storage associated with what they're going to be doing within a data center, again preparing to work with a third-party vendor. There can be a variety of different IT use cases going into an Equinix data center that may very well be associated with artificial intelligence but just aren't a DGX SuperPOD where you're going to go and actually train something specific. Which isn't to say we're not winning those too.

We had a press release with Block earlier this year; they were the first to deploy a GB200 stack within an Equinix data center for their own AI modeling purposes. We're definitely supporting that, but it's a snowball rolling downhill. It's not necessarily a massive avalanche yet.

Jim Schneider
Analyst, Goldman Sachs

Fair enough. I think at your analyst day, you noted that the mix of AI training versus inference you're seeing is about 50/50 today. From your perspective, how does that manifest itself in the conversations you're having with partners and customers? How do the requirements from a facilities perspective change between the two, if at all?

Steve Madden
VP of Global Technical Advisory, Equinix

Yeah. For the first half of that 50/50, the training part: in a lot of cases, that's a group of enterprises or customers that have started exploring and built something, or several things, inside the cloud, but they now have enough momentum and enough internal demand to know they're going to need a more consistent base of infrastructure. They'll deploy their own DGX PODs or their own infrastructure outside of cloud, or cloud adjacent, where they can mature toward building a factory to start producing models. To Chip's point, there are certain industries where we see that more prevalently than not. On the inferencing side, it's a combination.

We say inferencing is going to boom because, when we say inferencing, we're not just saying that an enterprise is going to take that model, deploy it at the edge, and use it. Yes, they do do that. But what we're actually seeing is that those models are useful to multiple companies, not just in their own industry but also cross-industry. Yes, they go and deploy that model and start using it to make money or save money, but they also use it as a revenue stream and offer it to dozens of other companies to help them make money. I'll give you an example. One company that's very much in energy and energy tech built a model that helps building management systems be more efficient. The building itself consumes less energy.

They're smarter about how they turn things on and off, but they keep everybody in the building happy. That's a very difficult thing to do: it's real-time data, real-time analytics, real-time inferencing at the edge. How many companies do you think are interested in something that's going to save them 30% on energy in buildings? Everybody with a building is going to care about that. They don't all want to go and build a $100 million model. Instead, we're seeing that mature models solving a very horizontal use case are being deployed more prolifically and more widely today, whereas the bespoke, very proprietary models built for fit-for-purpose occasions are coming a little more slowly. It is a bit of both.

I think that where we're excited about everything being built in these factories is going to have to end up accessing and using and interacting with the physical world eventually. It's going to be done at more horizontal scale than just the companies producing it, which is why we think inferencing is going to be so big.

Jim Schneider
Analyst, Goldman Sachs

Interesting. You know, you cover a very diverse customer base. I'm kind of curious, you know, are you seeing any kind of departure or difference between the technical and facility requirements for AI that you're seeing from hyperscale kind of customers versus the ones you're seeing from enterprise?

Steve Madden
VP of Global Technical Advisory, Equinix

Oh, yeah. Hyperscale customers build everything at scale. It's kind of in the name. They essentially design their own footprints for massive blocks of deployment. Enterprises typically do the opposite. They want smaller footprints, and they grow in different chunk sizes. Where it crosses over, with AI being the topic, is that certain infrastructure either of them buys has particular requirements around density, cooling, and so forth, which I'm sure you all know. We need to make sure that when an enterprise is buying AI infrastructure with a specifically high-density requirement, we steer them to the right area of the campus or the metro that's designed for that. We have facilities already in all of our metro campuses that can accommodate those larger footprints. We work with them on which one they need to use.

We also have to have a conversation around what kind of cooling they want to use because there isn't a single standard yet. There is a conversation to make sure we augment the environment to support that particular infrastructure, but we can do that. Whereas hyperscale, it's already kind of pre-planned, pre-deployed, pre-built into the infrastructure before we turn it over. I would just add that inferencing has a much lower requirement. When I say inferencing at the edge, I'm not talking about needing massive megawatt cooling. It's much smaller and easier to accommodate in existing facilities.

Chip Newcom
Senior Director of Investor Relations, Equinix

I'd add that part of the genesis behind our Build Bolder strategy really goes to what Steve is talking about: the amount of digital infrastructure that enterprises are looking to consume now, not just because of AI but because of a diversity of different IT use cases. The deal that used to be 100 kW is now 0.5 MW, and the deal that used to be 0.5 MW is now 1.5 MW. From a capacity perspective, given the demand signals we're seeing from our customers, we want to be building in advance of that, because it takes us anywhere between 18 and 24 months to build the next facility or the next phase of a facility.

Recognizing that we need to be forward-building in advance of the demand that we're seeing with customers, it's critically important for us to be able to be bringing capacity online.

Jim Schneider
Analyst, Goldman Sachs

From a constraint perspective, at the highest level, how do you see in the next couple of years in terms of the industry and the power needs? Are we going to run into a point where the industry just simply does not have enough power to accommodate all the data centers that are intended to be added? How is that impacting plans specifically for Equinix over the next two years?

Chip Newcom
Senior Director of Investor Relations, Equinix

I think, one, the demand is robust. There's no doubt about that, whether it's for providers like Equinix on the retail side or what we're trying to do on the xScale side. We're really trying to serve the entire gamut of the data center industry. The reality is there are constraints in the marketplace, whether that's power availability in the key metros where we're looking to operate or getting key mechanical and electrical equipment. Credit to our Chief Procurement Officer and her team: we've already got $600 million worth of capital equipment on our balance sheet to support our forward builds.

A big part of what we're doing is leaning into our ability to forward procure, and into the fact that we've got a very large balance sheet, to ensure we're in a position to build capacity. I think part of where we're differentiated is that when we're thinking about bringing new supply online, we're looking to build it in major metropolitan areas. As you think about our Build Bolder strategy, the vast majority of our capital spend is going into those largest markets where we already generate over $100 million in revenue. That's going into the Washington, D.C.-type markets, the London-type markets, into Tokyo. These are markets where we've got large established ecosystems. Power availability there can certainly be constraining.

We've been working through our corporate development efforts for years to make sure we've got the land bank and the power bank so that we can continue building. Also, on the retail side of things, we consume our capacity in a much smaller way. We're not consuming in 50 MW or 100 MW chunks the way you might with a hyperscaler, so we're not having to go back and reload with the utility quite as frequently. The big difference, too, is that when our customers come to us for 0.5 MW or 1 MW, we can continue building on a relatively consistent load ramp. Whereas when a hyperscaler puts out an RFP for 100 MW, they then go out and talk to, we'll call it, 12 different vendors to see who will facilitate it.

Those 12 vendors end up going to XYZ grid operator, and all of a sudden it looks like there's a gigawatt and change worth of capacity needed when the end demand is really 100 MW. Part of what we benefit from, as we work with the utility operators, is that we've had a very good, consistent load ramp. We've got a high SAD ratio in terms of the actual load we draw versus the load we procure. As a result, we've been in a very good position to continue to drive our growth.

Jim Schneider
Analyst, Goldman Sachs

From a competitive standpoint, how do your technical capabilities and strategic partnerships differentiate your AI offerings, especially when you're competing with some of the very large data center providers, including ones that are private equity-backed or otherwise?

Steve Madden
VP of Global Technical Advisory, Equinix

Two answers to that question. One: in a lot of cases we're talking about AI deployments inside our retail business, and our retail business is very ecosystem-centric. When the enterprise comes in, all of the suppliers they want to use are close partners of ours. All the providers, GPU as a service, models as a service, data as a service, are already there, along with data management platforms, et cetera. It's really a matter of: if I deploy in this infrastructure, all of the things I'm going to need to connect to, to wire up whatever it is I'm trying to do, even if I'm not sure what that is yet or it's going to change, it doesn't matter, because everyone's here.

Whereas if you go to a wholesale location where there might be four or five customers in the whole building, you have to bring all that stuff with you or somehow get it to you. Where the question is how do I exchange value, we're typically in the best position. If the requirement doesn't call for that at all, I would start to ask: why would you want to put it here, then? If it doesn't require all of that, maybe you shouldn't put it in our data center. Or maybe it's really: I just want it nearby. It doesn't have to be at the central point where everybody connects, but I don't want to be too far away. We do have an aspect of our business where we bring capacity online and keep it ready for that sort of thing.

We call it sort of a data hub adjacency, where we can serve that capacity with really close proximity back into that core infrastructure. I'd just argue that if the workload is going to be revenue generating, or the cost implications are significantly lower because of that proximity, you're going to want to put it here. If those things aren't true and it really doesn't matter where it goes, we would advocate that you think of it that way and put it where it needs to go.

Chip Newcom
Senior Director of Investor Relations, Equinix

Yeah, I mean, that's almost why you hear us talk about, almost like a mantra, this idea of right customer, right facility for the right outcome.

Steve Madden
VP of Global Technical Advisory, Equinix

Right.

Chip Newcom
Senior Director of Investor Relations, Equinix

There is a real opportunity cost of our capital in the sense that data center capacity is scarce. If we go out and sell a data center to an undifferentiated partner that's taking up half of the facility, that means we can't sell that to 50, 100 different customers, all of whom are going to interconnect with each other. As we think about how we're going to sell, we're very thoughtful in terms of trying to continue to curate these ecosystems. Seeding artificial intelligence and seeding all of the various different companies within that ecosystem, continuing to work with the hyperscalers, with the SaaS companies, because as you think about our business, we're really a place where both buyers and sellers of digital services are coming together to connect to each other to enable the digital economy.

We want to make sure that we're continuing to develop that within our facilities because, again, the secret sauce of our business, building a data center, that's not the secret sauce. If you've got a good general contractor, you can build a data center. The secret sauce of our business is the ecosystems and interconnection. It's that 19% of our revenues. It's that 492,000 total interconnections that differentiates our business relative to others because that is how, when you call me up on Zoom and you have to connect from Goldman's network to Zoom's network to our network and go across all the various different network service providers, every single one of those is an exchange point of data where it has to go from one network to another. That happens in an Equinix data center.

Jim Schneider
Analyst, Goldman Sachs

Do you think it's fair to say that the interconnect piece is still the most differentiated competitive advantage that you have? Do you think, you know, is that advantage increasing in importance or decreasing in importance as we move to AI?

Steve Madden
VP of Global Technical Advisory, Equinix

Increasing. I mean, the amount of data, the amount of bandwidth and connectivity, the number of participants involved, the number of people you want to exchange with, the number of partners and providers you're subscribing from, the number of people you want to sell to: they're all just growing. It's physics, but it's not rocket science. If you've got massive amounts of data that need to be exchanged in the most efficient way possible, doing it in the same building, on the same campus, is far more effective in volume, throughput, and cost than trying to run it across half the country. The increasing density of participants is what's increasing the value of the facility.

Chip Newcom
Senior Director of Investor Relations, Equinix

Yeah, I love that: "it's physics, but not rocket science," in the sense that the thing that hasn't changed with AI is the speed of light.

That has remained constant. Latency is still a very real issue. Now, for certain consumer applications, if you're accessing an LLM on your phone, the latency doesn't necessarily matter; if you're pinging a server in the middle of Timbuktu, that's okay, because to that use case half a millisecond or a millisecond doesn't matter. But as you start thinking about the world of agentic AI, where it's machines talking to machines for things like building management or any number of other use cases, latency becomes critically important. If you're connecting to multiple data sources to run real-time inferencing at an edge deployment, machine to machine, that needs to be in the major metro area where the application is going to be running.

Steve Madden
VP of Global Technical Advisory, Equinix

Throughput, the volume of throughput goes up exponentially.

Jim Schneider
Analyst, Goldman Sachs

Yeah. Fair. Maybe I'll shift to some technology questions. Power densities continue to climb. We've moved pretty quickly from 5 kW per rack to 50 kW, and now I think some people are talking about 500 kW per rack in the case of NVIDIA's Rubin.

Chip Newcom
Senior Director of Investor Relations, Equinix

Yeah.

Jim Schneider
Analyst, Goldman Sachs

What are the technical capabilities and limitations in your infrastructure for supporting these rather extreme demands? What advancements are being made to prepare for those future power and cooling requirements?

Steve Madden
VP of Global Technical Advisory, Equinix

Okay. I think of it twofold. One is that necessity is the mother of invention, right? When we didn't have customers showing up with 120 kW cabinets, the generic building infrastructure was fine. When we started to see those requirements come in, we were already getting ready for growth and increasing kilowatt requirements. For the extreme cases, you deploy or build an area of the data center that's designed to handle that particular capability. You don't have to change the whole facility; you just change an area to cope with that requirement. We've seen what those standards look like, we've got more familiarity with what customers are thinking of deploying, what that looks like as a workload, and how much more of it we think there's going to be. We have designated areas to take that on and handle it.

I would also say, on the flip side, there are chips and technologies and models coming out which require a fraction of the power we've seen before. In a lot of cases, workloads are shifting to lower-power alternatives that have the same performance without the same commitments or requirements. I think you're going to see a balance of both, where it won't necessarily go up to 500 kW a cabinet; it will start to normalize out a little, and the mix of technology types will shift over time to a different mix of power thresholds and requirements. That will reduce demand as well as improve supply.

Chip Newcom
Senior Director of Investor Relations, Equinix

Going back to what we were talking about earlier, this concept of right customer, right asset, right outcome, part of what we're doing as we're building new data centers is we are building to higher overall average power densities. Again, we're supporting a diversity of different IT workloads. Despite what some vendors might tell you, the reality of it is the vast majority of IT workloads do not need to run on a GPU right now. Core networking infrastructure is still relatively low power density, as is storage, as is any number of other applications. When we're designing a facility, to Steve's point, we may have an individual data hall that is pre-provisioned for liquid cooling out of the box so that we're ready to go for a customer.

Take, for example, here in Silicon Valley: if you go to our Great Oaks campus down in South San Jose, our SV1 and SV5 assets are core networking assets. As everyone here sends an email or does any internet traffic, it's probably going through SV1, SV5, or SV8 in the Valley. We're not going to put super high power density workloads in those facilities, because they're for core networking infrastructure. That is, proverbially, the beating heart of the internet. What we will do is build other assets around that. We'll have SV10 and SV11, which are designed for more like 5 to 6 kVA a cabinet. We're building SV18 across the road right now that's going to be north of 12 kW a cabinet.

Depending on the customer workload, could we in theory put in a single cabinet at 0.5 MW? Sure. It might be glowing like the sun, but then you'd basically have a roller skating rink around it.

Steve Madden
VP of Global Technical Advisory, Equinix

Yeah.

Jim Schneider
Analyst, Goldman Sachs

As you think about expanding the footprint of your facilities, how do you balance growth in your established interconnection markets with expansion to emerging regions? Are there any technical drivers that influence the international markets that you go into?

Chip Newcom
Senior Director of Investor Relations, Equinix

I'd say certainly as you look at our CapEx plans right now, north of two-thirds of our CapEx is going into our major markets. Those are the 15-odd markets where we generate over $100 million in revenue. We're continuing to invest in those markets in large part because we see very consistent good fill rates. That's where a lot of our customers already are, and that's where they're continuing to grow. Part of our strategy is then to seed other markets so that they develop over time to hopefully become those $100 million plus major markets. There is an element of what drives us to go to new markets that is largely speaking our customers. Case in point, we had been talking about getting into India for years, and it had been a market that had been a target for our customers.

It wasn't until we found both the right opportunity with the acquisition that we did a few years ago to get into Mumbai, combined with getting a great leader locally, that really then propelled us to get into that market. Most of our capital, as you think about our analyst day guide that we gave back in June, is going to be going to our largest metros.

Jim Schneider
Analyst, Goldman Sachs

Okay. From a JV perspective, how do you think about, you know, are you shifting the way you consider what you do in a JV versus what you do on your own books these days based on any of these technical requirements, or is it mainly driven by the kind of customers you're pursuing?

Chip Newcom
Senior Director of Investor Relations, Equinix

It's really driven by the kind of customers and the size of the deployments. As you think about what we're trying to do with our xScale product, part of the reason we set that off balance sheet is the return profile you get on a hyperscale deal. If you're signing and filling up an entire data center, with 30 to 60 MW going to a single customer, the return profile is completely different from what you're going to get in a retail colocation facility.

As we think about the highest and best uses for our capital, that continues to be our retail colocation facilities, where last quarter we generated a 26% cash-on-cash yield on the PP&E we invest, and we continue to underwrite projects we expect to be in the mid-20s, around 25%. We want to put as much of our capital to work on those assets as we can, while recognizing that we still want to support our largest customers, who are the hyperscalers. Having a more capital-efficient way to do that, where we're putting in 20% to 25% of the equity and our LP partners are putting in the balance, means we can lever it differently.

The benefit of xScale is it allows us to get a really attractive return on invested capital because of the fee income that we get and the leverage that we're taking, but do it in such a way where we're not swamping our balance sheet.

Jim Schneider
Analyst, Goldman Sachs

Interesting. I think there's a lot of people who are interested in the supply chain dynamics here. Maybe give us an update on any long poles you're seeing in the tent at this point from a cost or procurement perspective. Are there any parts of that process that are getting easier, or is everything just getting harder?

Chip Newcom
Senior Director of Investor Relations, Equinix

I'd love to say it's getting easier. Unfortunately, given the demand profile, lead times are still relatively long. In some cases, you can be talking about several years to get a generator for backup power. Credit to Ali Ruckteschler, our Chief Procurement Officer, and her team: they had the foresight relatively early in the pandemic to start forward-procuring a lot of long-lead items. Case in point, we've got $600 million worth of mechanical and electrical equipment already pre-purchased, sitting on our balance sheet, to support our current CapEx plans for building new data centers. Part of the benefit we have as one of the largest players in the space is that we can very confidently use our balance sheet and lean into our plans, where we've got 59 major projects underway around the world.

We can very confidently go to XYZ vendor and say, "Hey, we want this much capacity. We know we're going to be good for it because we've got all of these projects we're building," and then put it to its highest and best use.

Steve Madden
VP of Global Technical Advisory, Equinix

I'd add that because we're so predictable and consistent with all of our suppliers, they're more likely to want to work with us. They know we're legitimate about what we do, and we're contracting directly.

Chip Newcom
Senior Director of Investor Relations, Equinix

Absolutely.

Jim Schneider
Analyst, Goldman Sachs

Yeah. Maybe I'll close on a couple of financial questions if I could. Given everything we just said about the exploding demand profile for data centers, most people would assume that your utilization rates from a rack perspective are just going to the moon. At least in the way it's disclosed, that's not the case in terms of the overall utilization you report. Maybe give us a sense of the moving pieces that explain why utilization hasn't gone the way most people would intuitively think. What's the path to right-sizing that utilization trend? And are you considering any new disclosures that would help illuminate that for investors?

Chip Newcom
Senior Director of Investor Relations, Equinix

Yeah, you know, I think the one challenge, of course, is that utilization in a data center is a multi-dimensional metric: you've got the space capacity, the power capacity, and the heating and cooling capacity. With our cabinet equivalent metric, we're trying to do all three in one metric. Part of the challenge is also an averaging factor: in our Tier 1 markets, we're actually relatively highly utilized, while in our Tier 2 and Tier 3 markets we have more capacity available, because that tends to get consumed more slowly. Part of what we're doing right now with the Build Bolder strategy is rapidly building capacity in those larger markets because of the demand we're seeing.

We're also trying to be thoughtful about appropriately steering demand to the markets where we have capacity. Part of what the sales team is trained on now is going out to the customer and saying, "Hey, look, you know, based off of all of your capacity needs, what you're looking for in terms of partners, what you're looking for from latency, have you considered these three markets?" Look, anyone is going to be able to say, "Yes, I'd be happy to be in Ashburn, data center capital of the world." But not every application actually needs to sit there. That's, again, part of this right facility, right outcome for the right reasons that we're going after: how do you steer demand to where the capacity is, but then also continue to Build Bolder to bring capacity online in those constrained markets.

Steve Madden
VP of Global Technical Advisory, Equinix

I mean, also the sawtooth.

Chip Newcom
Senior Director of Investor Relations, Equinix

Oh, yeah. There's also sawtoothing as well. As we open up a new facility, inherently that's going to tick down utilization, and then you have to fill it back up and tick down as you open up more capacity.

Steve Madden
VP of Global Technical Advisory, Equinix

Yeah. I'll also add that infrastructure is refreshing much faster than it used to. In a lot of cases, especially with the accelerators, every two years brings a big bump in capacity and performance. That may or may not mean that customers need less infrastructure from us to run the same workload.

Chip Newcom
Senior Director of Investor Relations, Equinix

Yeah.

Steve Madden
VP of Global Technical Advisory, Equinix

They double up. Yeah, the technology is changing too.

Jim Schneider
Analyst, Goldman Sachs

Fair enough. Finally, on the capital structure, I think you've expressed some preference for raising more incremental debt rather than equity going forward. Maybe talk about that in the context of your overall preferred capital structure. How do you plan to weigh or exert the various levers at your disposal, whether that's JVs, acquisitions, new builds, and anything else?

Chip Newcom
Senior Director of Investor Relations, Equinix

Yeah, I mean, in terms of the outlook that we gave at our Analyst Day back in June, we did say our expectation is that funding our growth CapEx over the course of the next several years will come through a combination of our internal free cash flow generation after paying out our dividend, plus moderately increasing our leverage. As Keith noted on stage, our expectation, again, back at our Analyst Day, is we'll add about $8 billion of debt capital to the books through 2029. We do have capacity to go as high as 4.5x net levered while still maintaining our current BBB+ investment-grade credit rating. We think that that's the appropriate way to look at our capital structure. Again, relative to many of our peers, we actually have relatively modest levels of leverage.

Part of what we like about that is it gives us a lot of strategic flexibility where, depending on the opportunities we see in the marketplace, whether it's M&A or leaning into Build Bolder even more, we have the strategic capacity to be able to lean into those types of opportunities.

Jim Schneider
Analyst, Goldman Sachs

Great. I think with that, we're almost out of time, but thank you both for being with us. We really appreciate it.

Steve Madden
VP of Global Technical Advisory, Equinix

Great. Thanks, Jim.

Chip Newcom
Senior Director of Investor Relations, Equinix

Thanks for having us.

Steve Madden
VP of Global Technical Advisory, Equinix

Thanks for having us. Bye-bye.
