Equinix, Inc. (EQIX)

Morgan Stanley Technology, Media & Telecom Conference 2026

Mar 2, 2026

Cameron McVeigh
Vice President in Equity Research, Morgan Stanley

Okay. Might as well get started. Thank you all for coming. My name is Cameron McVeigh. I cover communications infrastructure here at Morgan Stanley. Pleased to welcome Jon Lin, Chief Business Officer at Equinix.

Jon Lin
Chief Business Officer, Equinix

Thank you, Cameron.

Cameron McVeigh
Vice President in Equity Research, Morgan Stanley

Welcome. Before we get started, I have to read this. For important disclosures, please see the Morgan Stanley Research Disclosure website. If you have any questions, please reach out to your Morgan Stanley sales representative. Okay. Done. Jon, let's get started. Equinix reported fourth quarter and full-year earnings in February, and CFO Keith Taylor called it the best quarter ever. Jon, what in your view was the main driver of this momentum?

Jon Lin
Chief Business Officer, Equinix

Yeah, I think a couple of different areas there. I'd say first and foremost, when you look at the performance of the business in terms of our bookings, which is essentially how we account for our customer sales, it was our best quarter ever. In a recurring revenue business, we're always under pressure to do that, so to be able to deliver that, and the momentum we're seeing around it, stands out. For the full year of 2025, we ended up delivering 27% growth in bookings year-over-year, and for Q4, it ended up being closer to a 40%-plus number compared to Q4 of the prior year.

It really just shows, hey, you're starting to see continued momentum in the business in terms of customer demand manifesting across a number of different dimensions. I'd also say, in terms of managing the business, we're continuing to see improvements in driving operating margins. Also, the treasury team has done a fantastic job of managing the balance sheet, continuing to do work on our capital raises, et cetera. When you put all of that together, I think it just puts us in an incredible position to win. That bookings and customer growth was also across a really distributed base. I think we had 4,500 deals across 3,400 customers.

It's not, you know, one particular segment. It translates across both service providers and enterprises, and across different segments inside the enterprise base. Really sustained demand.

Cameron McVeigh
Vice President in Equity Research, Morgan Stanley

Great. As you just laid out, this solid 2026 guidance was well ahead of market expectations and the previous guide given at the Analyst Day in June. Maybe just talk about your key priorities and key focus for the year.

Jon Lin
Chief Business Officer, Equinix

Yeah. I think three different areas there. First, continuing to capitalize on the customer demand that we're seeing out there and satisfying the requirements. It is definitely an area where, one, we've got to make sure we've got the capacity to be able to satisfy the customer requirements. Two, we've got to make sure we've got the coverage and the mechanisms from a sales and marketing standpoint to get in front of customers, hear the requirements earlier, and be able to understand and shape the demand as they're thinking about these AI requirements. Over the course of the last three years, we've been looking at AI at large, and our early read was that the industry trend was gonna be a lot of activity on the AI training side, which I think we've all seen.

There's gonna be a lot of investment into that, again, which we've all seen. Our belief was always, hey, the durable value creator for AI is gonna be around inference, and when that materializes in a real way, that's when we'll wanna step up in terms of our activity. I would say that's what we're seeing. Some of the announcements that we had at Analyst Day around our CapEx intensity, and the increase there, were a real recognition that we're starting to see the enterprise take off around durable use cases and value creation for them.

It's not just on the service provider side, where of course you'd expect service providers to invest early and build in front of where the revenue capture was gonna be. Now we're seeing that kind of long-term trend across multiple different segments and various use cases: ones that are gonna be agentic in nature, as we're all hearing about now, and how that's gonna change things around generative AI, but also core machine learning systems across different modalities that are gonna need different intersections with different data sources. We believe Equinix's fundamental value is that when data comes from different locations, whether that's geographic, whether that's in different clouds, whether that's from different counterparties, we're the logical place to interconnect all of that and really bring it to life.

We're starting to see that demand. Priority number one is keep building as fast as we can around that, keep delivering that capacity to support these requirements, and keep staying in front of customers to understand not just where those requirements land immediately, but also where they're headed into the future. Again, you've heard us talk about how we're pre-selling more than we ever have in the past. We want to be able to capture the requirements to inform our development pipeline as well. The second area is that we're continuing to drive improved operating margins for the business.

I would say, using a lot of the tools that we're looking at for the digital transformation that our customers are talking about, we're also seeing the opportunity to transform our Lead-to-Cash systems and our back-office systems to drive increased efficiency there, and really flatten the curve in terms of the growth of those costs. Then the third area is managing the balance sheet. Again, this is a capital-intensive business, so we're making sure we've got best-in-class cost of capital and access to be able to deliver services and build. A lot of our construction can be multi-year in front of when the revenue is going to show up, so we want to make sure we have the best balance sheet and structure possible to drive that.

Cameron McVeigh
Vice President in Equity Research, Morgan Stanley

Great. That's helpful. Jon, I wanted to ask on latency. You know, how often does the issue of latency come up when you're speaking to customers? Are there any particular workloads that are latency sensitive? You know, maybe how do you expect Equinix to serve a greater share of, you know, those workloads going forward?

Jon Lin
Chief Business Officer, Equinix

Certainly all the time, because we bring it up. You can think about it from two lenses for our customers' interconnection requirements: there's both the latency and also the throughput required for these workloads, for AI in particular, but across the board. The way Equinix has decided where to locate ourselves geographically, operating in 77 metros in 37 countries now around the globe, is to be closer to where the eyeballs are, right? We're not developing our data center campuses where there's cheap power. We're not, like, looking for empty farmland. We're actually located where the nexus of population and fiber connectivity exists so that we can serve those workloads.

Whenever we're out there, we like to say, well, I think we're within 10 milliseconds of 90% of the world's population, right? When you think about 10 milliseconds, it's faster than the blink of an eye. It really lets you serve almost any kind of workload that is latency sensitive most effectively, whether that's real-time interactions, whether that's gaming, whether that's financial trading, whether that's media distribution. There's a lot of different areas that we can drive around that. Those conversations come up a lot. I'd say for AI in particular, it ends up being twofold. One, when all of us are interacting with our chatbot of choice, that's not particularly latency sensitive, right?

We're all waiting for the dots there, and the token generation takes longer than the latency of the network. Where it does become valuable, though, is when you start actually dealing with multimodal systems, so different data pools coming in and being ingested by your AI. Every time that machine needs to go out, establish a new connection, find a new data source, get that data and process it, you're essentially hairpinning that traffic back and forth. When you start seeing those effects, that's when you really start to see latency sensitivity come into play. The second is any kind of machine-to-machine communication. When you're interacting with humans, we're all slow. That's okay, especially on a, you know, Monday morning.
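[Editor's note: the compounding effect Lin describes can be made concrete with a rough back-of-envelope sketch. All figures below are hypothetical illustrations, not Equinix data.]

```python
# Back-of-envelope: when an AI workflow must fetch from many data sources
# sequentially ("hairpinning"), per-round-trip latency compounds.
SEQUENTIAL_FETCHES = 50  # assumed number of round trips in one workflow

for rtt_ms in (2, 10, 50):  # assumed per-round-trip latency, milliseconds
    total_ms = SEQUENTIAL_FETCHES * rtt_ms
    print(f"{rtt_ms:>2} ms RTT x {SEQUENTIAL_FETCHES} fetches = {total_ms} ms of network wait")
```

At 50 ms per hop the same workflow spends 2.5 seconds just waiting on the network; at 2 ms it spends a tenth of a second, which is why keeping data sources within a small latency radius matters most for machine-to-machine workloads.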

When you start seeing the machines actually trying to converse with each other, if you're not able to have the lowest latency possible, that's just costing you either time to token or time to value for whatever your transaction might be. In the case of financial services, that might look like, hey, risk management calculations for end-of-month runs. That's not particularly latency sensitive.

When you're trying to do trading, though, and you're trying to understand what your next purchase decision is gonna be, or what set of currencies or commodities you might want to move during the day, the latency sensitivity can be dramatically higher. That's an example of a workload. I also mentioned that, in addition to latency, the throughput has actually become a more and more important element of the conversation. For a long time, we've been dealing with data across the network increasing.

I would say, post-COVID, it had kind of plateaued a little bit, right? I mean, HD content is HD content is HD content. 3D didn't come about. 4K streaming is there, but it's not massively relevant in terms of passing bits. The part that we, as an industry, had been thinking about was, what's gonna be the next big driver of traffic across the network? I would say AI is an example of that, right? The ability to process AI is entirely reliant on the ability to process data and actually move those workflows quickly. Every time that data is hung up on the network instead of hitting your GPU, you're costing yourself money, right?

The GPU is the most expensive thing on the planet right now in terms of bringing that to life. You'll hear Jensen talk about that tomorrow. The unit of value that is most important is making sure that these GPUs are maximized. That's why we're so focused on it: in order to maximize that GPU, you've got to make sure that data is flowing into it as quickly as possible. That's increased the amount of throughput required. For Equinix, we have a software-defined network that we call Fabric. It connects both the clouds and all of our locations together. When we started, those were 1 Gig circuits growing to 10 Gig circuits, with 10 Gig ports growing to 100 Gig ports.

Now we're upgrading fully across the network to 400 Gig ports with 100 Gig VXCs, right? That's a 10x multiple in terms of the throughput and capacity required. That can be anywhere from service providers looking to drive AI traffic and workloads into enterprise deployments in private clouds, to enterprises trying to move data and workloads from different clouds into a central data repository so that their private AI can work against it.
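[Editor's note: the port upgrades Lin describes translate directly into transfer time. A simple sketch, with a hypothetical dataset size and protocol overhead ignored:]

```python
# Time to move a dataset across a single port at different line rates.
# 100 TB is a hypothetical corpus size, not an Equinix figure;
# protocol overhead is ignored for simplicity.
DATASET_TB = 100
bits = DATASET_TB * 8 * 1e12  # decimal terabytes -> bits

for gbps in (10, 100, 400):
    hours = bits / (gbps * 1e9) / 3600
    print(f"{gbps:>3} Gb/s port: {hours:5.1f} hours")
```

Under those assumptions, the same 100 TB move drops from roughly 22 hours at 10 Gb/s to about 33 minutes at 400 Gb/s, which is the sense in which network speed keeps the GPUs fed.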

Cameron McVeigh
Vice President in Equity Research, Morgan Stanley

Great. Related to that point, during your presentation at the Analyst Day, you mentioned partnerships with NVIDIA, Dell, and Rackspace. How is Equinix leveraging these relationships? How have the partnerships evolved, and how do you expect them to help capture more of these AI workloads?

Jon Lin
Chief Business Officer, Equinix

Yeah. We're in a unique position in the marketplace as Equinix. You know, we've talked about the neutrality of our business, and that's been core to our thesis since we were founded some 27 years ago now. Neutrality means a couple of things. One is we're carrier neutral, so we invite all of the different carriers to come to our facilities and exchange traffic. It also means we're technology neutral, right? We want to make sure that we understand and can bring to life all of the most important technologies in the world for our customers. That's both from a sales perspective and, more importantly, from the perspective of how we can accommodate that technology and bring it to life for these deployments.

Our partnership with NVIDIA started maybe five years ago, right, before AI was a real thing and before enterprise AI became a real thing. What we saw was, hey, they're making incredible progress around machine learning, and that's going to continue to be more and more relevant. Well, when that infrastructure comes to life, how are we going to be able to support it for our customers in our data centers? A lot of our conversations start there. They've ended up evolving, and NVIDIA has obviously become an incredible partner from a go-to-market perspective. First we want to understand how we can help them bring this technology to the rest of the world's customers. Again, our customers are the world's digital leaders, at 10,500 customers now, roughly 50% of them service providers.

We've got all of the major cloud providers, all the major networks, all of the emerging ones, the Neoclouds as they're developing, and all of the enterprises that then need to consume those services. It's really important for us to not just understand but also be able to take an opinion to these customers. When they're asking, "How can I support these liquid-cooled GB200s? I have no idea. I don't have liquid cooling in my enterprise data center," we can tell them, "Oh, we've already thought this through. We've worked with NVIDIA for 18 months on designing solutions around this. We can operate this for you with the same reliability as any production facility, tomorrow, right?

You can go ahead and start deploying." The comfort level that comes from that, and also the value that they can then get from that technology, becomes very powerful.

Cameron McVeigh
Vice President in Equity Research, Morgan Stanley

That's helpful. You know, on the topic of liquid cooling, curious if you could provide an update on the rollout of these liquid cooling solutions in the new and existing capacity, and then, you know, just how important is liquid cooling for AI inference?

Jon Lin
Chief Business Officer, Equinix

Yeah. Equinix has about 280 data centers around the world, and around 100, slightly more than 100 of them at this point, across 45 metros globally, are able to support liquid cooling, right? We've done pre-engineering work. These include facilities of ours that are 10, 15 years old, where we understand what the central plant looks like. We understand how to bring water in the cooling systems directly to customer infrastructure. We're actually comfortable operating that, right? I'd say 3 or 4 years ago, if you asked a data center operator, "Oh, we're going to have water coming to customer equipment that's heavily electrically loaded.

Is that going to be okay?" Everyone was like, "That might be pretty challenging for us." We've spent a lot of time figuring out how you actually operationalize that, right? The plumbing is one piece, but you need to know what to do in the event of a leak, in the event of a fire, and in the event of an operational situation where, if you lose cooling capacity in that liquid, these machines are so hot that you will thermally overrun essentially within seconds. That's obviously a more complicated scenario, right? You have to be able to do this in production and be globally consistent around it. We did that work about two years ago, right?

That's when we figured out how to productize the capability for liquid cooling, bring it to life for customers, and solve for it in what we would consider a standardized, productized way. We had been doing this as one-offs for customers for over a decade, though, right? Customers would ask us, and we'd try and figure that out. That is going great. I'd say we continue to see customer demand scale now, which is super exciting to see. I think we honestly built in advance of where those requirements were going to be, and now we actually understand that pretty well. We're doing this in a repeatable motion. It's still, though, from a total percentage of deployments or customer-request basis, the minority, right? To be clear.

When we see these requirements in, what, the 3,600 transactions that we talked about, it's in the low double-digit percentages of total deployments. Call it maybe several dozen to maybe 100 inquiries that we'll figure out how to support. It's still relatively low volume. When these workloads are liquid-cooled, though, it's almost always going to be an AI deployment, with probably the most expensive gear on the planet going into those deployments. We want to make sure that we're satisfying the customer on that delivery.

Cameron McVeigh
Vice President in Equity Research, Morgan Stanley

That's great. Jon, I wanted to hit on cabinet density.

Maybe just an update on your power density across the existing portfolio versus some of the planned data centers coming online, how we should think about the average power density currently, and at what rate that might start to tick up over time.

Jon Lin
Chief Business Officer, Equinix

Yeah. It's an interesting one. Obviously, you look at, like, Blackwell, that's at 120 kW a cabinet for a liquid-cooled deployment supporting the latest NVIDIA gear, and that's going to go up, right? We all know NVIDIA is talking about 300 kW a cabinet for future architectures. That's true, and again, the deployments that we have supporting some of that are at 120 kW a cabinet. But think about the cabinets that are also supporting that, right? It might be 4 or 8 cabinets of 120 kW a cabinet of GPU capacity. It'll need 20 cabinets of storage. It'll need another, like, 5 cabinets of networking gear and infrastructure to support that.

That average can actually come down pretty rapidly. Across our new deals booked, I think we're at 6.6 kW a cabinet across the entire Equinix fleet. Our existing base, though, across all of the things that are deployed and live, is somewhere just north of 4 kilowatts a cabinet. When we think about data center design for the future, we're certainly upticking that, right? The latest design that we're deploying in Dallas, I want to say, is around 18 kilowatts a cabinet. We generally think somewhere between 15 and 20, depending on the mechanical infrastructure that we have in place, feels like the right average. The reason I say that is we'll have liquid cooling to support as dense as you want.
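[Editor's note: the blended-average effect Lin describes can be sketched numerically. The 120 kW GPU figure and the 8/20/5 cabinet mix come from his example; the storage and network densities below are assumptions for illustration only.]

```python
# Blended kW/cabinet for a hypothetical AI deployment: a few very dense
# GPU cabinets surrounded by many lower-density supporting cabinets.
cabinets = {
    # name: (count, kW per cabinet)
    "gpu":     (8, 120),  # liquid-cooled GPU cabinets (from Lin's example)
    "storage": (20, 8),   # assumed density for storage cabinets
    "network": (5, 6),    # assumed density for networking cabinets
}

total_kw = sum(n * kw for n, kw in cabinets.values())
total_cabs = sum(n for n, _ in cabinets.values())
print(f"{total_kw} kW over {total_cabs} cabinets = {total_kw/total_cabs:.1f} kW/cabinet")
```

Even with individual cabinets at 120 kW, the deployment-wide average under these assumptions lands near 35 kW, and folding in conventional colocation pulls the fleet-wide figure down toward the single-digit averages Lin cites.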

With the latest designs, I think the team was telling me it was more like 500 kW a cabinet or maybe even a megawatt. You just need a bigger pipe fitting for that, which blows my mind. The important part is it gives you flexibility, though, right? Let's say we're wrong on density and demand ends up being denser than we planned. We can support that with the liquid cooling infrastructure that we've deployed there. All it means is we strand shell, right? Honestly, the shell of a data center is the cheapest part of the construction. We want to maximize the utility feed that we're getting from the utility provider. We want to maximize our electrical and mechanical systems, so we're delivering that.

If we're wrong the other way, if you build too dense and you actually can't sell that dense, you've stranded all of your capital investment in all of your electrical and mechanical plant. That's a really bad outcome. What we've seen over the years is it's really hard to tell where the puck's gonna land. Our data center investments, when we're building them, this is not a 3-year investment, right? This is a 2-year construction project, and we're expecting it to deliver revenue for us for the next 20-plus years. We wanna make sure we're thinking about the flexibility of that infrastructure and the long-term value it can drive for us, because we're not just worried about the customer of today. We're worried about the customer on the renewal, and about the computers and workloads we'll be supporting there.

And what we've seen over time is that can move, right? Everybody has been focused on GPU density and, like, training at that density. It is certainly very dense. We've been talking with silicon startups for the last 5 years, though, and when you think about inference, even with, like, NVIDIA's acquisition of Groq, the AI inference chips are actually much less dense, right? The compute required to drive that, the complexity of the silicon, is significantly lower. When you think about inference, a lot of that can be air-cooled, that can be lower density. You know, 5 years ago, everyone was saying, "Hey, Arm is gonna, you know, replace all of the x86 chips in every data center.

You know, densities are gonna go down to 4 kW a cabinet," and maybe that'll happen, right? We wanna make sure we're designing, though, so that we can go ahead and drive revenue and value for the long term.

Cameron McVeigh
Vice President in Equity Research, Morgan Stanley

Got it. Jon, I'm curious how you think about, you know, price elasticity of your customer base. You know, monthly recurring revenue per cabinet has been going up and, you know, would you expect this to continue over 2026?

Jon Lin
Chief Business Officer, Equinix

You know, I think there's definitely some amount of price elasticity there. I think it's certainly something that we're always trying to identify and figure out, "Hey, where are the opportunities for us?" It comes from a couple of directions. One is, you know, our pricing per cabinet will go up, like, inflation is a real thing around the world, right? Like, our input costs in terms of data center construction, development, all of that, you know, it has been going up over time and probably will continue to. Utility costs are highly variable. That can go up over time.

The density of the cabinet itself: we're delivering, again, more kilowatts into a cabinet than in the past, so that can drive pricing and yield up on a per-cabinet basis. Our interconnection offerings continue to drive more and more value for our customers. Again, one of the reasons Q4 was so great for us, when Keith says firing on all cylinders, is that interconnection actually started to reinflate in terms of growth. We're starting to see that go up, and that drives more dollars into that yield. I'd say overall, it's a complex algorithm to get to exactly how far you can push that.

I will say the part that's very interesting over the course of the last 2 years, especially around AI: given the importance of the workloads being driven there and the value creation being done inside of that infrastructure now, and given that the pricing of the GPUs in that technology stack is significantly higher than your standard web servers of the past, when more and more value gets created out of that deployment, there's a lot more elasticity. It's going to cost more to make sure it runs reliably, is always on, and can be connected to all of the data sources that matter.

Like, people are like, "Okay, I'll pay up to be able to go ahead and make sure that I'm maximizing my dollar value out of this massive capital investment that we're putting in."

Cameron McVeigh
Vice President in Equity Research, Morgan Stanley

Great. Jon, I wanted to be sure we mentioned power and talked about it. On the last earnings call, Adaire had mentioned that 3 GW of developable capacity is powered or close to being powered. Can you talk about the power procurement environment and potential challenges Equinix is seeing at the moment?

Jon Lin
Chief Business Officer, Equinix

Yeah. On a global basis, it's interesting, and for the first time you see the mass media talking about data centers all the time. You know, infrastructure's sexy again. That's great, except when it's negative press. It's fascinating, right? I'd say, one, you have to be a long-term thinker in this space to have the relationships with the utilities that matter, and they have to trust you to know that you're going to be there. Candidly, a lot of money has come into the data center space from players who aren't long-term operators. They're putting a lot of weird requests in, and as regulated utilities, the utilities have to respond to those, right?

They're not allowed to just say, like, "I have no idea what you're talking about, and I've never seen you before. I'm not going to give you power." They have to respond. That creates a lot of challenges for the entire market: how do you sort between all of these different requests that may, at the end of it, have one customer underneath that load, right? You end up seeing GW worth of power requests coming into one particular market or jurisdiction.

Again, many of us in the industry are like, "Oh, we know exactly which hyperscale customer is behind that requirement." It's not actually, you know, 2 GW; it's maybe 500 MW, and there are this many people all trying to sell to that one customer, trying to get power certainty on land. We've been working with the utilities, and I'm really, really happy with the work that we've done there, to just say, "Hey, raise the bar," right? Actually make it a little bit harder to get access to the utilities. Actually put processes in place that require payments for these load studies, so it can't just be random land developers doing that.

We wanna make sure that it's real, credible, long-term providers making these requests, so that we can get some of the noise out of the system. Again, do it fairly. That's obviously very important for us, and we're very serious about our neutrality. We wanna make the rules the same for everybody, but just raise the bar a little bit so that you can get some of the noise out of the system, and that's been a great help. We're willing to do that, we're willing to invest in all of the development, and we've been very loud and open with the communities at large, along with the utility providers.

The data center industry needs to make sure that we're paying our fair share of what we're doing, whether that's net new generation or the transmission costs that we're creating, so that we're protecting consumer ratepayers from paying for what we're doing as an industry, right? One part of that is just, from an ethos basis, we take it very seriously. Again, we're gonna be long-term owners and operators of these facilities. Our people live there. I live in Northern Virginia. I see the data center development. I want to make sure my kids are having the benefit of the tax revenue, et cetera, around that. So we're gonna be there for a long time, and we're really deeply partnered with the communities.

The second part is, if we look more selfishly, or more commercially, at that: if we're doing bad things and consumer ratepayers end up being impacted, you're gonna get regulated. That's not a good outcome, and you're starting to see the pushback now, right? I would say some players working in this industry weren't thinking as long-term around that, and you start seeing people say, "Oh, we should put a moratorium on these. These data center things are gonna impact consumer housing." I would just say it's an opportunity for the industry as a whole to kind of grow up, take this seriously, and work with government, right? Work in close partnership and say, "We need to do this."

One of the good things that I'm seeing out of the administration right now is that there seems to be a lean toward making sure we're protecting consumers from the impact of this net new load. We've been working with utilities, whether it's ComEd or PG&E. You might have seen some announcements that went out over the past couple of months where, because we're working with the utility and taking load against generation that was already there, it actually reduces the consumer ratepayers' price, right? Again, we're paying for substation development. We're paying for upgrades to the transmission system to be able to take that.

If you have unused generation that's out there in the system, well, the consumer is paying for that as well, right, in terms of a higher unit cost on their power. So the ability to have that load is useful. You're starting to see more and more happen in this space though, right? I think, you know, there's a lot of talk right now about, hey, if you're gonna bring large load onto portions of the grid, you've got to go ahead and make sure that you're actually kind of bringing your own electrons, so to speak, creating net new generation around that. Like, total believers in that, right?

We're huge believers that this industry as a whole needs to pay our share of the way, whether that's ourselves or our customers. The total economic benefit that we're driving around this is high enough that we should be able to take that on, right? I think that's a really important message to communities at large when they hear about data center development. I'm sure all of you have been covering some of that and watching the space. There's a lot of pushback: what are these things actually doing? It's like, yeah, the data centers are not just producing, like, cat pictures and stuff. Some are, but really, it is the fundamental driver of economic activity now, right?

Digital infrastructure is the factory of today. It's the mall of today. It's the economic center, because so much is happening inside of these facilities. It's really important for us as an industry, and for our investors, et cetera, to understand that and be able to tell that to your stakeholders: "Hey, this is actually how GDP is created now, right?" A meaningful portion of it is gonna show up digitally, and that's what's happening in the data center. It's actually coming online, converting power into money for somebody. That money, though, generally isn't going to the data center developer. It's going to the customers inside that data center.

Cameron McVeigh
Vice President in Equity Research, Morgan Stanley

Great. Just to follow up on that point, how is Equinix internally thinking about using grid connections versus some of these behind-the-meter power solutions? You think about fuel cells or turbines, so how does that fit into the strategy?

Jon Lin
Chief Business Officer, Equinix

Yeah, I mean, we're operating, like I said, in 77 markets across 37 countries, so it's really an all-of-the-above strategy. In the vast majority of cases, we want to work with the local utility, right? Grid power is the best solution because you end up creating more of a shared resource across a broader set of constituents. In cases where the grid is not able to accommodate our power requirements, we have actually created net new power generation. We're looking into the future around that as well: "Hey, how can we scale that up? How can we do that?"

On a global basis, you know, post World War II and the industrialization of the global economy, we'd actually generally seen power utilization and draw go down, right? For, I want to say, a 40-year period. Only recently has it started to inflect back up because of electrification of the grid, electrification of vehicles, more efficiency around heat pumps. All of that's driving electric utilization up, and then data centers as well. That was before AI; for the first time, you were actually seeing draw increase across the globe. Now you layer on top of that all of this data center activity associated with AI and the electricity consumed by it. That's net new load that's coming on.

For a long time, again, because utilization was going down, those power plants were still there, right? You hear about all these shuttered power plants. Well, they were created because we were smelting aluminum. You know, I will say, from an environmental-impact standpoint, a data center is far, far better than having aluminum smelting in the community you're operating in. That's kind of the replacement we're talking about now: fundamentally converting power into financial benefit.

Cameron McVeigh
Vice President in Equity Research, Morgan Stanley

That's helpful. We have a few minutes left. I wanted to open it up to any audience Q&A, if there are any questions. No? You can think about them. I'll ask one.

Jon Lin
Chief Business Officer, Equinix

All right.

Cameron McVeigh
Vice President in Equity Research, Morgan Stanley

Yep, go ahead. Up in the front. There's a mic runner here.

Speaker 3

Hi. If you could just talk a bit more about the 20-year life of the assets and how you see that playing out? Obviously, the CapEx you've spent, your 2-year build cycle, and then you're talking about the 20 years. In terms of the upgrade cycle you expect to have to do to make that 20-year life cycle work for the assets, can you just talk a bit more about that?

Jon Lin
Chief Business Officer, Equinix

Sure. Yeah, I would say it's 20 years plus, right? Again, most of our assets we actually think about as essentially perpetual. You know, we've constructed the data center, we're putting in the maintenance CapEx to keep it going, and it's going to be generating revenue beyond our planning horizon, I guess is the way I would say it. Some of that does require maintenance CapEx, right? Obviously, we're putting that in on an annual basis.

There are also requirements for some of the longer-lead elements. We've started categorizing redevelopment CapEx to say, "Hey, if we have to replace the 15-year plant," some of the core data center infrastructure, whether that's the generators or some of the electrical switchgear or the central chiller plant, that could be a 15-year asset. We might be able to put enough maintenance in there to get a little bit longer out of it, but at some point, you're going to want to refresh that.

The plus side of all of that work is that there have been so many efficiency gains, and so much better understanding of how to do that mechanical and electrical work more efficiently, that we can upsize it, right? In a lot of cases, take our DC2 facility, which is going through some of this redevelopment right now: we can add incremental power availability into that site from both efficiency and from upsizing the infrastructure, which means we can actually generate more revenue out of the same facility, right? That's how we think about it from an underwriting perspective.

Yeah, I mean, we're essentially continuing to model our ability to generate revenue out of these assets for a very, very long time.

Speaker 3

Hello. Could you briefly touch on xScale? If you think about it five years down the line, has it become a bigger part of your overall business compared to how you thought about it a year or two ago?

Jon Lin
Chief Business Officer, Equinix

Yeah, it's a great question. I think yes is the answer. You know, when we started xScale, our customers had been asking for it for a long time. For folks who don't know about xScale: our core business for Equinix on the data center side is multi-tenant facilities that we call retail, right? In any given facility, we'll have anywhere from dozens to multiple hundreds of customers inside one data center. A large part of the market, like the hyperscalers and some of the other large customers, may want an entire facility for themselves, right?

When we look at our average expectation on returns across where we're deploying capital for our core business, I think the target we've said publicly is 25%; it's in the mid-20s and up range, right? Again, it's a beautiful business. When you're developing a single-tenant facility for a customer, your costs are pretty well known, right? It's pretty apparent to the market at large, and yields on those facilities are generally, call it, in the high single digits to low teens. You can take some leverage on that and work it up into the mid-teens. For two reasons, then, Equinix had not wanted to pursue that as one of our growth vectors.
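For reference, the leverage arithmetic described here, a high-single-digit unlevered development yield "worked up into the mid-teens" with debt, can be sketched with hypothetical numbers. The 9% yield, 50% loan-to-value, and 5% cost of debt below are illustrative assumptions, not Equinix figures.

```python
def levered_yield(unlevered_yield: float, ltv: float, debt_cost: float) -> float:
    """Return on equity after applying leverage.

    The equity slice earns the unlevered yield on the whole asset,
    minus interest owed on the debt-financed portion, divided by
    the equity fraction (1 - LTV). All inputs are hypothetical.
    """
    return (unlevered_yield - ltv * debt_cost) / (1.0 - ltv)

# Example: 9% unlevered yield, 50% loan-to-value, 5% cost of debt
roe = levered_yield(0.09, 0.50, 0.05)
print(f"{roe:.1%}")  # 13.0% -- mid-teens territory
```

With no leverage (LTV of zero) the formula collapses back to the unlevered yield, which is the high-single-digit to low-teens range described above.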

One, it can be highly capital-intensive with a much lower return. Two, from a structuring standpoint, because you want to apply so much leverage to that, we thought we wouldn't be able to get to, or maintain, investment grade if we were doing a bunch of hyperscale development. That's largely been true, right? When you look out at the data center landscape, there are maybe two, or, depending on how you squint, maybe four public players right now. That's the reason why, right?

When we looked at it, though, our customers were asking us loudly enough for this product that we said, "Okay, let's see if we can figure out how to do this." We ended up creating a joint venture development structure. Our first one was with GIC, the sovereign wealth fund in Singapore. We did one with PGIM. We announced one recently in the U.S. with both GIC and CPPIB, the Canada Pension Plan Investment Board, together, for $15 billion. That's where we're solving for the hyperscalers. So yes, in the sense that our initial commitment for xScale was $7.5 billion of equity.

We've tripled that in total size now with the equity commitment from CPP and GIC, and we're executing against that. I'd say it's delivered what we were hoping, right? One, it gives us capital efficiency while still letting us solve for customer demand in the ways that we wanted to. It's also given us quite a bit more scale in terms of our procurement capability on utility as well as equipment. It's been good, and we'll continue rolling forward with it.

Cameron McVeigh
Vice President in Equity Research, Morgan Stanley

Excellent. We are at time. Jon Lin, thank you so much.

Jon Lin
Chief Business Officer, Equinix

All right. Thanks, all.
