All right, I think we'll get started. I'm Ryan Gravett from the research team here at UBS. Our next speaker is Jon Lin, Executive Vice President and General Manager of Data Center Services at Equinix. Jon, thanks for being here.
Absolutely. Thanks for having me.
Yeah, so just to kick things off, it would be great to just get to hear a little bit about your background and really what you're focused on for the company, especially as we head into the new year.
Absolutely. First off, I've got to read a disclosure here from our IR team: some of what I talk about today may contain forward-looking statements. Please read our SEC filings for more information about factors that could affect those statements. So I've been with Equinix close to 15 years now, and I've worn a number of different hats. Most recently, I was the president of the Americas before taking on this role about two years ago, with global responsibility for our entire data center services platform. You can think about that as ownership, accountability, and overall P&L and strategy for everything that's physically related to the Equinix business.
So our real estate footprint, our capital expenditures in terms of new builds, the markets that we're entering, our physical interconnection products like Cross Connect, all of the work we're doing in supporting our customers there, and a lot of work coordinating with our go-to-market teams around the ecosystems that we're building.
Gotcha. That's a great overview. So maybe to kick things off: AI is definitely on everyone's mind right now, and you announced some wins at the last earnings call. Maybe you could just give us an update on what types of AI deployments are showing up in the pipeline right now, and, more broadly, how does Equinix participate in the opportunity from here?
Yeah, we don't think about AI as really a new thing, right? Certainly, generative AI has created a lot of energy and enthusiasm for a new wave of potential use cases, but we've been supporting our customers' workloads around AI and machine learning, and around how to use data more effectively for their business, for a number of years now, even in our partnership with NVIDIA around LaunchPad, which we launched at our prior Analyst Day event about 2.5 years ago, creating AI availability as a service so that our customers can really start trying out new technologies and applying those models to their businesses. Largely speaking, we think about this as another ecosystem that can develop.
So our focus then becomes, for the AI-as-a-service providers and the GPU-as-a-service providers, making sure that we're addressing their needs in terms of network connectivity, both their delivery and latency characteristics, as well as access to the multi-cloud. So you saw us make announcements around CoreWeave and Lambda. They look like many of our other service providers: they need access to network, they need access to the cloud providers, and we want to make sure we're landing those nodes so that the enterprises can access their deployments. On the enterprise side, I would say there's just tremendous opportunity, because at the end of the day, AI is fed with data. And where do we focus?
Well, we've been focused on making sure that our enterprise customers have access to the most reliable transport and storage for their most critical data needs, whether that's a hybrid multi-cloud environment, streaming their data into and out of AWS, or Azure, or Oracle, or Google, but also making sure that they've got control of their most important crown-jewel data. So they have the ability to own the assets and underlying infrastructure, whatever their sovereignty requirement might be, or privacy requirement, or security requirement.
So having all of that together becomes the logical landing spot for a lot of AI training workloads: being able to take that data, ingest it, take a public model, train it against their enterprise data, and then deploy it from an inference perspective across our 70 metros around the world. And when you think about the nature of that AI, we think about it as really private AI for the enterprise, on private infrastructure, so that they have command and control of their data sets, and it never leaves their prem.
Gotcha. At an industry level, it seems like what we're seeing right now is huge leasing numbers, particularly on the wholesale side, and a lot of that seems to be to support training environments for gen AI. You mentioned deploying inference nodes across your facilities in different markets. Do you have a timeline for when you expect that to really start to pick up and show up for Equinix?
I think time will tell. Certainly, a lot of the activity right now is, one, making sure that we're positioned to help our customers with their training requirements, being able to build the models and do that tuning. And then over time, as inference is deployed, that ability to take it and get real business impact and results is starting to manifest. Again, it depends on the use cases. In some cases, we've been doing this for our customers for a long time, whether that's for the automotive industry, or for content and media and all of the inference they're doing around workloads like predictive ad targeting, or recommendations and suggestions for retail or travel.
So we've been helping support a lot of this for a long time. On the training side, I'd say, again, there's a huge amount of activity, and that's why we've been thinking about our xScale program in the Americas. We probably don't want to take this onto our core retail balance sheet, but should we be leaning in further to help support our hyperscale customers as they're thinking about these really large workloads and really large campuses? The answer to that is yes, and we've talked about it publicly: hey, we're leaning in on structuring that, with more to come in the new year around exactly what that'll look like, what locations, et cetera.
Yeah. Can you talk to how pursuing these xScale builds for training environments helps the core interconnection franchise and business of Equinix?
Yeah, one is that when our customers are asking, and they're asking loudly, we want to help support them. Our hyperscale customers are critical to our overall ecosystem, and making sure that we're supporting their requirements, and having that capacity and capability available to the rest of our ecosystem of customers in a very transparent, high-throughput, highly secure manner, just adds a lot of value to the overall platform. So we want to make sure we're helping them sustain that growth.
I think there's also an important part of this program, which is, when you think about the scale that we need to develop to support these workloads, it's important for us to be able to help these customers and ourselves, because it gives us the relationships we need from a supply chain perspective to operate at much greater scale than the retail footprint we'd otherwise be deploying. A lot of that is working with our vendors to secure the long-lead items for data center creation, where we can use both our balance sheet and our capital partners' balance sheets to secure all of that together in a combined pool.
Gotcha. Maybe just finishing up on xScale, you started some builds in the U.S., which historically hasn't been as big of a focus. I mean, any way to frame how meaningful you think xScale can become in the U.S. and kind of what options you're looking at today?
Not yet; I'd say more to come as we finalize that. Our first generation of xScale activity has been a great success, with $8 billion of capital committed and 800 MW of projects in flight. We're really happy with that. One of the early theses was that we wanted those locations very proximate to our retail campuses, combining that xScale footprint with our retail environment to create scale there. As we look into the future, though, the sheer magnitude of the requirement is actually going to require us, in some ways, to think further afield.
We also think there's a great opportunity to combine this capability with being much more forward-leaning about how we're thinking about power sourcing. When you're dealing with tens or even the low hundreds of MW, you can just be a regular off-taker with the utility and not really need to think differently. Now that we're thinking in the multiple hundreds of MW for these scaled deployments in metros, it really changes the relationship that we'll need to have with the power providers.
Gotcha. And then in terms of the broader demand trends you're seeing right now: on the last earnings call, you called out that there's still some customer caution out there, given the macro environment. Is there any way you could characterize where we are in terms of how the pipeline's looking, sales cycles, book-to-bill, those metrics?
Yeah, overall, what we're seeing is continued vigor on the sales front, in terms of both pipeline and overall funnel, as we look on a multi-quarter cycle. There definitely is still choppiness and hesitancy around some of the macroeconomic concerns. But coming into this year, I think a lot of our customers were generally surprised that pricing had gone up across the entire IT stack, inclusive of our own services, but also every other service they're purchasing, and that had impacted their budgets. At this point, much of that is already priced in, so there's greater visibility: we're moving away from just triaging what's right in front of us to, okay, how are we thinking about investment into the future now?
There's better clarity into next year.
Gotcha. Okay. An interesting disclosure you had last earnings was that you're churning out cabinets at around 4 kW.
Mm-hmm.
You're bringing in new cabinets at 5.76, somewhere in that range.
Yeah.
I mean, where do you see that metric going as AI and higher-density workloads in general become a greater portion of the mix? And what can Equinix facilities today handle, for lack of a better word?
Great question. Over the years, we've seen densities continue to trend upwards, but in the last year in particular, the velocity of that change has probably increased more than it had in the prior four or five years. In the past, it was kind of normalized inside of that cabinets-billing number, and now you're seeing an outsized impact because of the increase in density. In our existing facilities, we've done a tremendous amount of studies on how we can support the densities our customers are asking for. In over 45 metros, we have facilities that can actually support liquid cooling.
We've looked at what it will take to support these requirements, and that liquid cooling can deliver close to 40 to 70+ kW a cabinet, which is tremendously high when you think about the average at around a 4 kW install rate. What that means for us is it gives us comfort that our facilities are able to support the workloads our customers are asking for. We've got the capacity they need.
And what we've seen over the course of this year is just an increase in the amount of requests and inbound around the density being required, whether it's an AI workload or a high-performance compute workload, and we're able to land those successfully. We can solution that without a tremendous amount of problems or issues, and we're continuing to build that. I would say, historically, across the entire industry, liquid cooling has been an incredibly bespoke offering. Each deployment needs to be solutioned without a lot of standards involved, with many different, very small-scale vendors helping support the servers, the plumbing, the infrastructure, and the coolers.
We've been working with all of the vendors in that stack to, one, try and define a better set of operational standards. A lot of the server manufacturers sell the box, and they're not worried about who's going to run it, how it's going to operate, and what its life cycle is. So we've been working very closely with Dell, HPE, Intel, AMD, NVIDIA, et cetera, to say, "Well, let's think about this on a long-term basis.
How can we make sure we're setting these up for success for the duration of the life cycle of the infrastructure?" And then, two, we've actually been investing in pre-provisioning some of the work around both electrical distribution and mechanical, getting the liquid, essentially the chilled water we need, closer to these deployments, so that when a customer asks for it, we can deploy it very quickly instead of it being a bespoke custom project every time. So you'll hear us make more announcements around our productization and standardization of liquid cooling in the weeks ahead.
I guess, for new builds, are the design specs changing meaningfully now, maybe to have more uniform availability of liquid cooling?
It certainly is a huge part of our thinking, and has been for about two years now. We've realized that having chilled water availability just increases the flexibility we need to get the cooling where we need it. Whether we're delivering that via air or directly via liquid, having that chilled-water central plant gives us the capability to do it, so it's been part of our design thinking for a number of years now. What I would say is, we're starting to increase the size and capacity of our mechanical plant and our planning there, and increase the size and capacity of our electrical plant, to give us the flexibility to go even bigger, even denser, in more locations across that footprint.
And I guess just, are there any implications around CapEx spend to support that kind of infrastructure, and then, I guess, in turn, the rents that you're able to charge for it?
Great question. Overall, what we see is that when we can build denser and with more scale, that ends up translating into better cost efficiencies for us from an overall yield perspective. That being said, from a pricing perspective, liquid cooling is not something that is generally out in the market, and our customers understand that there's a premium associated with building it, having it available to them, and operating it with SLAs and reliability standards, where we're class-leading and, in a lot of ways, industry-defining. So what we've seen is customers aren't asking for discounts associated with that liquid cooling or density. They realize they're going to need to pay for that.
Gotcha. And then you had mentioned some of these GPU as a service providers starting to show up in the pipeline in terms of their on-ramp-
Yeah
- requirements. Can you just talk to the ecosystem benefits you think these on-ramps can provide, what the interconnection density looks like, and how that compares to the more traditional cloud on-ramps we've seen in the past?
Yes. It's still early to say. Right now, what we're seeing is they show up and they look similar to a cloud provider, and probably denser than a traditional SaaS-type provider, in that they actually need more connectivity to multiple cloud providers. When you think about the GPU-as-a-service providers, they need access to wherever the customer's data is, and that data probably either lives in a cloud environment or needs to be transported via high-throughput connectivity from one of our data centers somewhere around the world. So we're seeing interconnection density higher than a typical SaaS provider, but in the range of what you would think of for a large-scale cloud provider. It's still early days, though, right?
I'd say we're seeing new emerging use cases. We're really excited about the work with the GPU-as-a-service providers and these AI-as-a-service providers. But we think about that entire ecosystem: both the infrastructure for the GPUs and the capability for customers to train on it. We also think about the model-as-a-service providers, right? Having pre-built models available to sell to enterprises, or to provide to enterprises on a license basis, and then converging that with the enterprise's data itself.
How can they make sure they're doing that in a secure environment that the enterprise feels safe about and trusts, training that together, and then deploying it on an inference basis, again with some inference-as-a-service providers? In many cases, that inference can be done on classic CPUs, so it can be delivered either as colocation or via our Metal offering.
Great. Then as we just think about the broader interconnection trends in the business today, I mean, Cross Connect volumes have been a bit below trend recently.
Mm-hmm.
I mean, can you talk to the drivers there and, you know, whether you see them as more temporary or longer lasting and just kind of what's going on in terms of volumes?
Yeah. I think there are probably two or three different elements there. One, as a factor of the macroeconomic environment, you're seeing the optimization effort that's happened over the last year and a half or so among the service providers. As they've grown and scaled their technology, moving from 10 gig up to 40 gig or 100 gig now, the attitude in the past, as with many folks, was: if it's not broken, don't fix it. So they've just gone ahead and bought the new 100 gig connectivity, in some cases left those 10 gig connections behind, and now they've started doing a little bit of that grooming work.
What that means and translates to for us is that the number of counterparties each of our customers is connected to is still very healthy and strong, and in fact continuing to grow, but the number of Cross Connects may have gone slightly down in some of these cases. The value the customer is receiving is actually higher, which again gives us a strong feeling about our pricing leverage and our ability to make sure we're getting fair value for those. On an overall interconnection basis, we're seeing healthy trends around Fabric utilization, so virtual connections versus Cross Connects, and the ports provisioned and the port volume we're driving with our customers are trending up in a healthy way.
We see those ports as a leading indicator: after the customer has gotten the port, the number of virtual connections should trend up as a follower, so we're optimistic about the road ahead.
Great. And any sense of where we are in that optimization and grooming timeframe? Or is it something that can continue for a bit here, just given what's-
Yeah, we haven't seen that next scale of bandwidth on the horizon yet, right? So I think we're getting close to the end of that normalization. In the past decade, call it, we've had a continuing set of trends, whether it's the move to high-def television, or the move to mobile broadband, or the move to streaming everywhere, or the pandemic: every couple of years there's been a big cycle that ends up pushing interconnection needs and density quite a bit. And I would just say, candidly, we're not sure yet what that next one is.
What we're seeing is that the data flows and streams from AI activity are quite high bandwidth, so we're optimistic that could translate into a trend. But it may be that, as we continue to work with our customers in the ecosystem on these requirements, we're still looking to see where that next bigger spike, the one that creates outsized amounts of interconnection growth, is going to come from.
Gotcha. Just in terms of some of your other digital services-
Mm-hmm.
Thinking about Network Edge and Metal, can you talk to the uptake you're seeing for those products and where they are in the adoption curve at this point?
I'd say it's still relatively early in the cycle for us. There's been a great amount of energy and investment for us in discovering the use cases that are clearly differentiated for us. When we're deploying these services, we want use cases that can fundamentally win in the market, right? Just like we do for colocation, we're not looking for undifferentiated, bulk users of compute. It needs to be customers that value that compute, that want the same interconnection profile and latency characteristics in the facilities we have.
Because otherwise, if it belongs in a public cloud and is relatively generic, it should be in the public cloud, in a relatively generic environment. We're not trying to compete with general-purpose compute as we think about Metal; we're really trying to make it easier for customers to consume the Equinix core value in more locations. What we're seeing is that message resonates well with the enterprise market. And as you think about the new personas we're tapping into, moving away from traditional infrastructure buyers to application owners and developers in that practitioner community, and teaching them that knowing and having control of where your infrastructure lives can fundamentally drive massive performance gains for your application and also optimize the cost, that ends up being a really powerful message.
So that really resonates. On the Network Edge side, we're seeing continued nice uptake there. It feels like a core performance network node and consolidation play: click to buy, or deploy via API, very easily across the entire global footprint. That's a pretty resonant message with our existing infrastructure buyers.
Great. The other big theme in the data center industry right now is power availability, which I guess in some respects is related to AI. Can you give us an update on where things stand in the major data center markets today on power procurement?
Yeah, I think we feel like we're in good positions around the world in terms of our land bank and our existing growth requirements for the core retail footprint, paired with our power availability. It has definitely gotten more challenging. Singapore is a great example: that was obviously a tightly constrained market where we were able to show the value of what we bring and get a forward allocation that will come to us in a couple of years as we develop that data center.
Ireland continues to be an overall challenge in terms of power availability. We've been able to solve for that with some on-site power generation, but I think the implication in the longer term might be a slowdown in growth there. Across all of our other major metros, though, we're feeling very good. For the xScale portion, where we're driving much larger power requirements, it is quite challenging in market, right? And so for the folks in the room who are on the wholesale provider side: God bless you, I know that's incredibly hard work. Thank you for helping pave the way in our discussions with the utility providers.
I think we all need to continue to work together to help them understand what these ramps look like, and actually help make the investments we need to from a national infrastructure perspective to get grid connectivity more reliably available around the world. That's a partnership. But when we're thinking about xScale in the U.S., we're probably going to be looking at campuses built around unique relationships we create with the power providers.
Yeah, I was gonna ask about that. It sounds like maybe the retail side of the business isn't seeing as much of an impact, but on xScale, is this dynamic changing where you're planning to build facilities outside of the core data center markets that we're all used to?
Yeah, I would say yes. Broadly speaking, I think all of us in the industry are seeing that we're going to have to, for a couple of different reasons. One is power availability. The second is regulatory and local community engagement: you're hearing in the media and the news all the pushback, to some degree, in Northern Virginia about how much more data center is needed. Well, we want to go where we're valued, right?
I think there are plenty of communities around the country that love the investments that data centers bring, love the job creation, love the economic impact and benefit we can bring with the infrastructure we help develop there. And so we're going to continue to make sure we're developing responsibly, where we can drive long-term value for the community.
And you mentioned on-site power generation. Are there any other alternatives you're looking at in terms of getting power to the site?
We're looking ahead across a number of different areas there: on-site power generation with gas, using fuel cells, using turbines. We're looking at closer proximity to storage for renewable solutions. We're looking at longer-term solutions around nuclear. I'd say we're in this for the long haul, right? When we think about Equinix, we're thinking about our investments in a data center on the basis that we're going to be in that location for the next 15 to 30 years, so let's plan from that perspective.
But we're also thinking, just as a company, about our obligation: how do we make sure we're building this infrastructure in a way that we can grow for our shareholders but, just as important, sustainably for the planet? That changes our lens to some degree around how we think, in a very methodical way, about the engagement we need to do and the alternatives we can consider into the future. Again, I think our planning horizon is probably longer than most from that perspective.
Gotcha. Given these dynamics on the power side and the broader supply-demand dynamics in the industry, it seems like we've seen quite a bit of an inflection on pricing, maybe more so on the wholesale side, after a number of years of downward trend. Where do you see pricing for the core interconnection, retail-oriented business trending into the new year?
I think you'll see continued strength on both from a pricing perspective. Customers are, candidly, like all of us, accustomed to higher pricing across the board. Inflation is real, and across the entire IT stack, prices have gone up, whether it's servers, SaaS licenses, or security services. Customers understand that that's part of the course of business now. And again, much of that is baked into their budget and planning; they understand there will be a continued uptick there.
From the data center side, we're still a pretty small portion of the overall IT spend, and particularly when you think about high-performance compute and some of these AI workloads, the cost per kW is even higher than your traditional compute or storage infrastructure. So we feel like customers value our ability to house that infrastructure, cool it effectively, and deliver it sustainably, and we think we'll continue to have some pretty strong pricing power there.
Gotcha. Any change in outlook on escalators at this point? Is that kind of folded into the-
We feel comfortable about how we're able to achieve that: making sure we're keeping rate with inflation, or in some cases keeping rate with inflation and also leaving ourselves open-ended at the back end of the term, to make sure we can continue to move pricing up.
Gotcha. And maybe just lastly on pricing: the PPIs, the increases from higher utility pricing in Europe, have flowed through this year, but is there any update you can provide on hedging as we look into 2024, and what happens with the power price increases you've implemented this year?
Yeah, overall, what we've said very clearly is that for the pricing we flowed through to our customers, if our utility pricing goes down, and our hedging strategy is certainly showing we're likely to have some utility pricing movement next year that's beneficial to ourselves and our customers, we're going to give that back to customers, right? This is empty calories from our perspective; there's no margin associated with it. So by giving our customers that notice and transparency, they know we're keeping our word. We're still in the midst of locking down the utility rates. But I would also say utility rates are still pretty volatile across a lot of markets, right?
So while on a global basis we're seeing some good downward trending across many markets, in some markets, even in Western Europe, we're still seeing increases. It's a bit of a mixed bag. But the important part is communicating well with our customers, helping them plan during their budget cycle around what they can expect from us, and being really, really transparent in the dialogue. I think we've built a lot of trust there, and we're continuing to lean in.
Great. And maybe just to wrap up: something you're focused on is efficiency benefits and driving more scale in the business. Obviously, we had some fluctuations in margin this year with the power increases you just discussed. But can you talk to the opportunities around driving operating leverage in the business and the areas you're most focused on in that regard?
Yeah. I think there are a couple of areas we think about there. One is making sure we're operating our sites as efficiently as possible, both for the benefit of driving margin up and from a sustainability perspective. Every watt of power we can save ends up driving benefits to the planet, because waste is the enemy of everyone. We're very focused on that across our operations team, and also through a lot of our software engineering around understanding all those data flows and actually using them in our own AI and machine learning simulations to understand how we can tune our control systems to drive more efficiency.
In terms of the workflow for our employees and for our customers, we want to continue to drive better automation and digitization of those processes so that, one, it reduces the amount of rework; two, it reduces cycle time, so the customer actually gets results faster; and three, it gives us more predictability and data around what our staffing levels may need to be, depending on when customers are scheduling work. And we've seen great success in driving our customers' interactions with us away from being bespoke, on the phone or via email, to going into the portal, where we have very complete data sets.
Around 90%+ of our interactions with existing customers are now driven via the portal, and I think we're going to continue to extend that with APIs, so we can be inserted into their workflows. Once we're embedded into the fabric of a customer's entire digital life cycle, we feel like that's a huge differentiator: one, it makes it very easy for us to be part of their team and their workflow; and two, it creates a switching cost, making it harder for them to move to another vendor, which creates less churn.
Great! That was great. Thanks, Jon. We'll end it there.
All right. Thanks so much.