Good morning, everybody, or good afternoon, depending on where you are. My name is Brandon Nispel. I cover communications infrastructure for KeyBanc. This is a 25-minute fireside chat with Equinix. With us, we have Steve Madden, Vice President, Digital Transformation and Segment Marketing, and Katie Morgan, Senior Manager, Investor Relations. Katie, I think you have a safe harbor for us.
Yes, very exciting. Some of what we'll be talking about today is forward-looking in nature, so please check out our SEC filings for those risk factors. Thanks, Brandon.
Excellent. Thank you both for being here. Steve, let's start with you. Maybe for people that aren't familiar with you and Equinix, what's your role at the company, and what do you spend your time focusing on?
Sure. Equinix is not Recursion nor Pure Storage, but at Equinix, my team does two things. We do thought leadership and research, so the Equinix Research Group reports into me, publishing indexes and insights. But I also have a team of people who are deployed in every country around the world, helping customers on the ground with implementing their own transformations. So I get the best of intelligence and wisdom coming together for a joint vision.
Got a lot less to look at here.
Yeah, yeah. How many people were coming today?
Let's talk about one of the things that you spend a lot of time on, if that was a sufficient background: the GXI report.
Yes.
Tell us about the GXI report. How do you use it? What type of insights do you derive from it?
It's been interesting. Equinix is in kind of a unique position in the market, in that we're the nexus of where all traffic and value and economic exchange happens, so we were trying to understand, how do you measure the growth of the digital economy? GDP doesn't really help you with this; it doesn't really show you anything. So we started by using interconnection bandwidth, and the growth of interconnection bandwidth between different counterparties, as a proxy to say, "Well, that means this much data is being exchanged across, you know, these different providers." We're now in our seventh generation of this report, and it's been, like, 89.9% accurate over the last few years.
And basically, what it's showing today is that, you know, we've got a tremendous number of enterprises, a tremendous number of providers, and we can now monitor and show the value exchange across the platform globally. So service providers use it to figure out where they should build that new capacity, whether to really expand into new regions, new markets. Enterprises use it to see what other companies in their own industry are doing, use it as a benchmark or a baseline. Investors use it to sort of look at other companies that are in that space and see where growth is going to go. So we have a wide following around that report, and it's become a lot more than it used to be.
It used to be just interconnection bandwidth, but now we actually monitor and track a lot of different macro trends in the market.
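For illustration only, here is a minimal sketch of the proxy idea Steve describes: add up provisioned interconnection bandwidth across counterparties and track its growth year over year. All counterparty names and figures below are hypothetical placeholders, not GXI data.

```python
# Minimal sketch of the "interconnection bandwidth as a proxy" idea described
# above. All figures and counterparty names are made up for illustration; they
# are not GXI data.

bandwidth_by_year = {
    # year -> {counterparty: provisioned interconnection bandwidth in Gbps}
    2021: {"cloud_provider_a": 400.0, "network_b": 250.0, "partner_c": 50.0},
    2022: {"cloud_provider_a": 620.0, "network_b": 310.0, "partner_c": 90.0,
           "partner_d": 40.0},
}

def totals(year):
    """Return (total Gbps exchanged, number of counterparties) for a year."""
    peers = bandwidth_by_year[year]
    return sum(peers.values()), len(peers)

t_prev, n_prev = totals(2021)
t_curr, n_curr = totals(2022)
growth_pct = (t_curr - t_prev) / t_prev * 100

print(f"Interconnection bandwidth: {t_prev:.0f} -> {t_curr:.0f} Gbps "
      f"({growth_pct:.0f}% growth), counterparties: {n_prev} -> {n_curr}")
```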
Can you talk about some of the takeaways from the latest report, maybe how some of those takeaways have evolved over the last couple of years?
Yeah. I mean, it used to be we were showing network adoption, we were showing cloud adoption, so obviously we were talking about cloud first for, like, four years. And we were showing how much uptake there was, and how many enterprises, most of our customers that we studied in the market, in fact, are using more than one, if not three, hyperscalers. So hybrid cloud is the architecture of choice. But lately, what we've been watching is how it's moving away from just being about networks and clouds, and toward how companies are doing business with each other. The number of different companies that, on average, a company is connecting to is quadrupling and growing much, much faster.
So now an average enterprise will peer and share traffic with 16 other enterprise companies, which is where you start to see that the economy is shifting to being electronic. They're just doing it privately and peering at, you know, at Equinix. So we're watching the growth of how many counterparties you're doing business with as a measure of the growth of the economy. And the other one is a mass migration, or over time a significant migration, to subscription-based services. Most of the commercial market is, you know, a digital subscription economy. You think about the music industry and everything else: you subscribe to music. We're seeing people now look at software and hardware and infrastructure in the same way, and they want to move to a usership model, not necessarily an ownership model.
The index is highlighting, I think, the growth of ecosystems and the shift to a usership model. Those are the two big ones.
Let's double-click on that, because I think one of the things you wrote about more recently is that enterprises want subscription services, even on physical infrastructure.
Yeah.
So, help us unpack what that looks like and what that means for Equinix.
Yeah. I mean, we've had the idea of, like, on-demand computing and stuff for decades, but what's happening more recently is the confluence of several things. Refresh rates and the rate of change have gone up dramatically, right? We're shifting and changing and refreshing infrastructure very, very quickly. Power cost, power pricing, energy, et cetera, have gone up. And then you've got things around sustainability: if my refresh rate is going up, then I'm refreshing hardware twice as fast, with e-waste, waste management, getting rid of the equipment, shipping the equipment, buying the equipment. It's getting to the point where it's hugely cumbersome for enterprises across multiple locations or multiple countries to actually run and manage their infrastructure themselves. And so in a lot of cases, that's why we, you know, followed our own trends and said, "Look, why not just put the infrastructure there?" And just tell them, "Hey, it's already there. You can just subscribe to it if you want to."

But after COVID, when basically people didn't have capacity where they needed it, there was a shift towards subscribing to providers to do, you know, telehealth, telework, telemedicine from home, and working from home. And I think that once enterprises figured out that they could subscribe to dedicated infrastructure that's their own, they're the only ones using it, but just on a subscription model, they didn't go back. And so now we're just seeing an influx of that, and AI is accelerating it, because the GPUs are refreshing every two years, they're really expensive, and it's a cumbersome infrastructure that's actually very technical to put together.
Why not just have someone give it to you for a rate, rather than having to deal with it yourself?
How is that subscription shift informing the services and product offerings for Equinix, whether it's Metal or, I think you recently announced, NVIDIA DGX?
Yeah. Well, so we work across two different paradigms. One is we think of it as an ecosystem. We're not just focused on Equinix and the customer. We have suppliers, we have providers, we have managed service partners and other partners, and we obviously have enterprises. So we're trying to make sure that all of them can be successful on the platform. We have to make sure there are just as many providers available at Equinix to sell to enterprises, which makes the platform valuable to them, but then enterprises need to come in and find everything they want to subscribe to. So if they want to subscribe to a particular technology from a supplier, we're trying to make those technologies available in a software-consumable way, right? That's what Digital Services is, and Metal is one of those, for example, along with Fabric, routers, et cetera.
The technology functions as building blocks in a software-defined way. But then the providers can be consumers of those, too. So providers, like some of our larger NSPs, are offering cloud services to their customers, which come straight through our own infrastructure. And the enterprises come in and consume both. So we're looking at it as a consumption model: it's valuable to the enterprises, it's valuable to the providers, but what are the building blocks and things that are slowing people down? How can we activate that to go faster by providing those things on demand? And then where are we just really enabling providers to sell on top of the platform?
Got it. Well, let's talk about AI a little bit. Since you're sort of the voice of the customer, and you have your ear to the ground, what's the latest that you're hearing in terms of AI from different customers? And maybe split it between different verticals.
Sure.
The customers that you talk to.
Sure. Well, yeah, there's what you read and there's what's actually happening. What you read is that everything's about ChatGPT, large language models, fear of missing out, and building large GPU farms. In practice, that's not really happening. If anything, we've seen AI and ML progressing for, like, the last 10 years, and I think there's now more of a concerted effort towards using and leveraging inference models that help solve a specific problem. I'll give you an example. I can have a lot of buildings I need to manage, and building management systems cover everything from power, elevators, lighting, waste management, water usage, et cetera. There is a building management system infrastructure. You can train an AI model on how to manage a building.
You can now sell that model to anybody who needs to manage buildings. So essentially, what's happening is enterprises are looking at commoditizing the functions that they're really good at, and then reselling that capability back to other companies that aren't so good at doing that. So the inference engines are pushed out to the edge, so that you can use them to say, "I want to be able to manage waste management in a food store." Okay, how many food stores are there? Lots, right? So they can all use that. Or, I'm trying to solve logistics problems. How many companies have logistics problems? I'm trying to conserve energy.
These are the things we're seeing: the models that are being trained are being pushed back out into the industry, like middleware, and consumed by multiple companies that just use the trained engine to run things more efficiently. And that's all operational improvement. It's rarely, you know, new business value, new revenue. It's all operational improvement. But some of these companies are improving their EBITDA by, like, 50%, just by solving massive issues they couldn't solve before, because they couldn't process the amount of data and actually correlate enough knowledge to get their arms around it. So we see airlines, we see food stores, we see autonomous vehicles. We see a lot of different industries that are implementing these technologies for real business value today.
And for the enterprises that don't know what they're doing, they're trying things out in cloud. They're trying to learn how models might work. And the ones that get to the place where they're like, "I know what I want to do," move out of a cloud exploration model into a do-it-yourself model, where they want to own it and operate it themselves, because they don't want to put their data in the public cloud. They want to run it on their own data, and that's mostly because they're terrified of data leakage, data protection, and data value. But if they don't want to own it, then they subscribe. They subscribe to CoreWeave, Lambda Labs, you know, whoever's already there who can provide that infrastructure to them.
Or if they don't have the skill sets, then they go to a full managed service, which is what DGX is. So we have to make sure that whichever way they want to do it, and whoever they want to use to do it, that's gonna be available. And, you know, the most lucrative place for us is in the retail space.
Got it. And as you talk to customers, and I think you alluded to it in your response there, are customers trying to use AI more for revenue purposes, driving revenue growth, or driving operating efficiencies? Are there other examples that you could give?
Yeah, it's both. I mean, every company has been running two parallel initiatives: either an efficiency initiative or an innovation initiative. Like, when I used to work on Wall Street, at JPMorgan and Goldman Sachs, it was run the bank, change the bank, right? Run the bank, you want to improve operational efficiency. Change the bank, you want to increase or provide new capabilities. And everyone's still doing the same. And for the last five years, you know, this whole digital transformation push has been pushing towards new ways of running IT. But if it didn't change the outcome of how the business worked, it was just expensive: I just did all this work, I re-platformed on the cloud, and nothing materially changed in my revenue.
Whereas the companies we're seeing be most successful are starting with: What's the business problem I'm trying to solve? And then, what's the fastest way for me to see the return on that business value? Which is why operational efficiency is the low-hanging fruit, because they already have the expertise and knowledge of how their business runs. So once they learn how they can apply this technology to start making decisions for them, they can accelerate that benefit right away. And then I think you're gonna see the second part of what AI brings, which is: I have to rethink how I deliver value to my customers. I've got to rethink who my partners are and who I'm doing business with. And I've got to rethink how I deliver and exchange that value in the physical world.
That's where people are building new products and services with AI embedded. I think it's easier to go after operations first, because you can use the savings to help fund what your next innovation path is gonna be.
Sure. Sure. Just pausing here to see if there are any questions from the audience. No, not yet. Prep your questions. So I think one of the things people wanted to understand as we moved to, like, a cloud world is, you know, what percent of enterprise apps are gonna end up in the cloud? Is there a similar way to think about AI adoption, granted it's early, in that regard?
Yeah. I mean, one of the things we've got to be careful of is we talk about AI like it's a standalone thing that only exists in one location. Most of what enterprises and providers do is have a distributed infrastructure, and a certain percentage of it sits in Equinix, a certain percentage of it sits in cloud, some of it sits in other wholesale data centers, some of it sits in their own data centers, right? It's a model of percentages of deployment across those locations at a certain point in time. And there are reasons why some workloads would be in one and not the other. So it's not an either/or, right? It's kinda like: Okay, I'm gonna deploy an AI infrastructure. Where am I gonna need to connect it up? How many different clouds do I want to access with this thing?
Where is the data gonna reside? Who am I gonna be buying data from? Where am I collecting and supplying data from: the edge, a factory, wherever? So there's a whole data transport, data management pipeline that has to be built first. And then, once you know that, where you're gonna put the AI compute to learn on that data, or transact or inference on it, depends on response times and where it's needed. So when we have the conversation, we say, "Yeah, you know what? What you're doing, you should put that in the cloud. That's the best place for it right now. Come back in maybe a year or so." Others are like, "No, you don't want to put it in the cloud." For starters, the storage is 60% cheaper when you build it outside of cloud.
Egress costs are 71% cheaper if you're not transacting data in and out of cloud. But if that's your architecture, and the majority of what you're gonna be doing is that, then don't put it in the cloud, because it's not a good place for it. So we sit down and understand: what are they trying to do, where are they trying to deploy it? And the space is big enough that there's gonna be AI everywhere. Like I said, it's gonna be built into business operations as usual. It's gonna be as prolific as databases. So it's still a distributed infrastructure. And I know we're sort of focused on the large-footprint, high-density cabinets and what that's doing, but there aren't very many enterprises, at least, that are even interested in that.
I think it's service providers that are building up capacity to sell back to enterprises that are building those large footprints.
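To make the storage and egress percentages above concrete, here is a back-of-the-envelope sketch. Only the 60% and 71% reductions come from Steve's comments; the baseline cloud unit prices and the workload size are assumed placeholders.

```python
# Back-of-the-envelope sketch using the percentages quoted above ("storage is
# 60% cheaper ... outside of cloud", "71% cheaper egress costs"). The baseline
# cloud unit prices and the workload below are hypothetical placeholders.

CLOUD_STORAGE_PER_TB_MONTH = 23.0   # USD per TB-month, assumed baseline
CLOUD_EGRESS_PER_TB = 90.0          # USD per TB transferred, assumed baseline

STORAGE_SAVINGS = 0.60              # storage 60% cheaper outside of cloud
EGRESS_SAVINGS = 0.71               # egress 71% cheaper outside of cloud

def monthly_cost(storage_tb, egress_tb, in_cloud):
    storage_rate = CLOUD_STORAGE_PER_TB_MONTH
    egress_rate = CLOUD_EGRESS_PER_TB
    if not in_cloud:
        storage_rate *= 1 - STORAGE_SAVINGS
        egress_rate *= 1 - EGRESS_SAVINGS
    return storage_tb * storage_rate + egress_tb * egress_rate

storage_tb, egress_tb = 500, 200    # hypothetical monthly workload
print(f"in cloud:      ${monthly_cost(storage_tb, egress_tb, True):,.0f}/month")
print(f"outside cloud: ${monthly_cost(storage_tb, egress_tb, False):,.0f}/month")
```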
One of the things that Charles called out on the call is there's still just some cautiousness from customers overall. From your perspective, what does that mean? Are these customers that traditionally would deploy CPUs now rethinking their investment? Like, what does that mean for you when you talk to customers?
It's a push and pull in IT. Usually when you're trying to start a new initiative, the business doesn't come along and just give you a bunch of new money. You're gonna have to look at your portfolio and say, "Maybe we won't do that," or, "We'll slow that one down, or we'll do something else," in order to make money available for this other initiative. And at the same time, you know, a lot of those other initiatives had a pretty clear return, and I'm not sure they understand yet what the return is for AI.
Until they have a really good grip on what the return is gonna be, having to sit in front of an investment committee and justify spending, you know, double-digit millions of dollars on AI with no clear path to what you're gonna get by when, it's taking time. They need better business cases. I think that's why you see a lot of the MSPs being tapped, because they're doing things in healthcare and, you know, the fast-food industry, or different things that make sense. If they can get an MSP to help them with, "We're just gonna do these basic things with these existing models to get this existing return, that's where we're gonna start," then they can launch a project. But like I said, it's so new.
People are still catching up to: Okay, what am I gonna do with this technology? How am I gonna deploy it? What am I gonna need? How do my skill sets change? Where do I need infrastructure? Do you already have a digital infrastructure? Because if you don't, your AI is not gonna work. You're gonna need to build that, too. And so it's a refocus on strategy, quite frankly. The ones that already had infrastructure are already going. Those that were already behind are still behind. And so the bar just got raised with the demands of AI.
When you think about some of the use cases, what type of environment do they need? And are customers at this point demanding liquid cooling and increasing power densities? Like, what does that look like from an impact perspective?
Yeah. No, exactly. So there are two things there. One is that we have the overarching data center facility, we call it the shell or the major box, and it's capable of handling, you know, a certain number of megawatts. When we allocate space within those facilities, it's based on the requirements of that deployment. So if the requirements of the deployment are a high kW per cabinet, then we use different gear, right, when we put that cabinet together and put it in. If it's not, then we just put it in a regular hall with, you know, air, and it's fine. And we're able, to a certain degree, to reallocate and reassign the space to handle most of the requirements we're getting today.
We do know that once we start seeing the trending averages of kW per cabinet go up, we design the facilities accordingly to start getting towards that. We know that after it gets up to a certain amount, it's predominantly gonna be recommended for liquid cooling, even if the customer wasn't necessarily asking for it. We talk them into it and say, "Look, this is why." But if anything, for us, anything over, like, 10 kW is fine. At 20 kW, we start recommending liquid cooling, which could just be back-of-door: the cabinet's back door is replaced with one that's got, you know, liquid cooling in the door, so as the air blows through it, it cools it that way. But then when you get to, like, 30 kW and above, it's usually direct-to-chip.
So inside the computer, it's liquid-cooled, and inside the cage, there's a system, and that system is connected to our own cooling.
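As a rough illustration of the rule of thumb Steve just described, here is a minimal sketch that maps cabinet density to a cooling approach. The thresholds are his approximate figures from the conversation; the function and sample densities are just illustrative.

```python
# Rough encoding of the rule of thumb described above: standard air cooling is
# generally fine below roughly 20 kW per cabinet, rear-door (back-of-door)
# liquid cooling is recommended around 20 kW, and direct-to-chip liquid cooling
# at roughly 30 kW and above. Thresholds are the approximate figures from the
# conversation; the function itself is just illustrative.

def recommended_cooling(kw_per_cabinet):
    if kw_per_cabinet < 20:
        return "air cooling"
    if kw_per_cabinet < 30:
        return "rear-door liquid cooling"
    return "direct-to-chip liquid cooling"

for density in (8, 22, 45):
    print(f"{density} kW/cabinet -> {recommended_cooling(density)}")
```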
Are there different applications that require 10 versus 20 and 30?
No, it's not so much the application. It's really the number of GPUs, at what frequency you're going to run them, and for how long. And like I said, you've got differing use cases. With AI, if it's training large, specialized language models, that's so many permutations and combinations that you need a lot of chips for it, right? And that's why NVIDIA is pretty strong there. But once the model's trained and you want to use it for business transactions, you don't need GPUs for that. It can run on a regular x86 or an Intel chip.
So we've kind of got to step back and say, "Do you understand that the training is what drives the GPUs, but the storage doesn't need GPUs, the networking doesn't need GPUs, and the inference engines don't need GPUs?" So we have a conversation around what their infrastructure is going to look like, and we'll place the GPUs accordingly. So even if they train outside of retail, in, like, a wholesale facility or in cloud, the trained model comes back, right, into our facility and infrastructure, and they run it there, because that's the steady-state part of what they're doing.
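To illustrate the training-versus-inference split described here, a minimal PyTorch sketch, assuming a toy model and random data rather than any real workload: training runs on a GPU when one is available, and the trained model then serves inference on a plain CPU.

```python
# Minimal sketch of the split described above: the training phase is what
# drives GPU demand, while the trained model can serve inference on ordinary
# CPUs. The toy model and random data are placeholders, not anyone's workload.

import torch
from torch import nn

train_device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1)).to(train_device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Training loop: this is the GPU-hungry phase.
x = torch.randn(256, 16, device=train_device)
y = torch.randn(256, 1, device=train_device)
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Inference: the trained model runs fine on a plain CPU.
model = model.to("cpu").eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 16))
print(prediction.item())
```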
So it's not like we need a wholesale shift from CPUs to GPUs?
No, I mean, it's sort of the thing: if you go to AWS re:Invent, you come out of it thinking Amazon's going to take over the world, right? Because they're geared towards the way they see the world going. If you listen to Jensen, you know, we're going to need a lot more GPUs. We are, but we're not replacing existing networking, storage, and compute equipment anywhere in the field at large scale, because effectively, it's still needed to do what it already does. And in a lot of cases, it's more efficient and more mature and more robust, and a lot more cost-effective than the alternatives. Not to mention, there are new suppliers coming down the line.
They'll take a while to get in place because they don't have the software stack that NVIDIA does, but it's going to change over time. And so we've been very hesitant about large-scale, mass switches of anything at that scale, because typically the ecosystem would have to shift dramatically, and we don't see that happening.
We have a few minutes left. Are there any questions in the audience? No. I want to ask you about pricing. We've been in a really strong demand, you know, constrained supply dynamic, and I think the question that I get most often is, can Equinix, as a colocation provider, take price? There's obviously a trade-off for customers in terms of deploying in the public cloud. Where is that equilibrium, if you will, and how does that inform, you know, pricing when you're talking to customers?
Okay. Well, first of all, it's: why are you putting that particular workload or infrastructure in Equinix versus a cloud? Like I said, it's not an equal basis. If it's low latency, a high volume of data exchange, regulatory risk, security, business critical, it tends to live in Equinix. If it's none of those things, we recommend you don't put it in Equinix; you put it somewhere else. And so when you get down to pricing, it's how valuable is the workload to the customer. Which is why, if it's something that's mundane, like a time entry system for their employees, don't put that in Equinix. It doesn't belong there. Put that in the cloud. But if it's something like payments platforms or, you know, something that's revenue generating, it typically does end up in Equinix.
And so, you know, if you've got a customer that's digitally mature, and they understand that this is a cost of revenue to their business, the conversation goes a lot smoother, because they have to be in multiple countries, they need this kind of bandwidth, and they need it tomorrow. Great. And price is just what it is, right? Like, you're not having a conversation on a quote-matching exercise. But if you do have a workload coming in, and all they want to focus on is the cost, we step back and go, "Well, hang on. Is this really where you should be putting it?
Because, you know, that's not really what the retail space is for." And I don't think AI or ML-related workloads are any different, other than that large-scale footprints that require huge amounts of power typically want to see provider-based pricing, which is lower than retail-based pricing. In that case, that's why we have an xScale business. It's like, put that over there, because the pricing model and economic model is designed for that. But our retail space is actually for people who are getting business revenue benefit out of what it is they're doing.
Got it. With that, we're just about out of time, so if there are no more questions? No. Steve, thank you very much. Katie, thanks.
Thank you for having me.