Equinix, Inc. (EQIX)

Bank of America Securities Global A.I. Conference 2023

Sep 11, 2023

Operator

Ladies and gentlemen, the program is about to begin. Reminder, this webcast presentation is for Bank of America clients only. If you are a member or representative of the press or media, please disconnect now. Thank you. At this time, it's my pleasure to turn the program over to your host, David Barden.

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

Thank you, and thank you everybody for joining us. Really appreciate you being here for this part of the 2023 inaugural B of A AI conference. My name is David Barden. I head up communications infrastructure and telecommunications research for Bank of America, based here in New York, covering the U.S. and Canada. For today's session, we're super pleased to have with us Justin Dustzadeh, the Chief Technology Officer of Equinix, and we're joined also by Katie Morgan, Senior Manager for IR and Sustainability. So thank you, both of you, for joining us. We really appreciate you being here today to talk about the AI topic. Justin, this is our first time meeting, so thank you again for being part of our inaugural AI conference.

Maybe for the benefit of all our viewers who have diverse backgrounds, maybe you could share a little bit about your role at Equinix and how your responsibilities are intersecting with the subject at hand, which is the unfolding AI evolution.

Justin Dustzadeh
CTO, Equinix

Well, thanks for having me, Dave. Regarding my role as the Chief Technology Officer, I lead a team responsible for driving the company's technology vision, architecture, and roadmap. I also partner closely with my peers to lead technical innovation and software transformation initiatives, as well as our engagements in external software ecosystems and developer communities. As technology is moving faster than ever before, a big part of my team's job is naturally to keep a pulse on emerging and disruptive technologies, as well as on major industry shifts and operating model changes, so that Equinix remains prepared for the future and can continue to leverage technology leadership and innovation to best serve our customers and the broader digital ecosystems that we enable.

Now, on the topic of AI: since I joined the company about four years ago, AI has consistently been one of the top focus areas for my team. We have put significant R&D effort into closely monitoring and assessing the impact of AI on the evolution of digital infrastructure, and also how AI can best be leveraged inside Equinix to further increase operational efficiencies and enable an improved customer experience. In fact, over the last few years, we have consistently called out AI as one of the top five technology trends to impact the future of digital infrastructure, including the fact that AI infrastructure is becoming increasingly distributed and moving toward the edge. I continue to believe that AI is poised to transform virtually every industry, just like electricity did some hundred years ago. Pretty exciting times ahead indeed. Back to you, Dave.

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

So thanks, Justin. That's actually a good tee-up. I want to get back to that, but it's interesting, you kind of hinted at the two sides of AI as it relates to data centers. Specifically, one is the evolution of the AI itself, the engines, the inference, and then there's the adoption side, what AI will do for businesses in transforming industries and the economy, and how that might also affect the data center. So we'll get back to that. But just at a high level, you know, we've had a number of conversations today.

We're going to have another set of conversations tomorrow, and there seems to be a spectrum across, you know, the technology industry, where for some, the, you know, kind of surprise of GPU adoption, large language models, generative AI, is very much a today thing. And then there's another set of industries where one thing has to happen, and then another thing happens, and then eventually that industry can benefit. So just to be crystal clear about where you see Equinix on that spectrum: as the CTO, what is your planning horizon, your assumption set, regarding how machine learning, deep learning, and generative AI will actually impact a business like Equinix, and maybe even the data center space, larger picture?

Justin Dustzadeh
CTO, Equinix

That's a great question, Dave. I think maybe first, a few words on terminology. So as you know, generative AI, which is very popular these days, refers to a type of deep learning that can produce new content, such as text, images, audio, video, or even code, based on what is described in the input, and ChatGPT is a great example of that. Now, deep learning, which typically includes training and inference, is itself a subset of machine learning, and machine learning is a subcategory of artificial intelligence. So in terms of pace and timeframe, AI, as you know, is certainly not a new concept. The origins of AI and machine learning actually date back to the 1950s.

While we have seen multiple waves of breakthroughs in AI since that time, with the resurgence of AI a decade or so ago, the pace of innovation has significantly accelerated, and new breakthroughs such as convolutional neural networks, Transformers, and gen AI have been developed at an unprecedented speed. So over the next decade, I believe that the pace of innovation will actually be even faster, and the ongoing research and development work that is already happening on the next generation of AI, including artificial general intelligence, or AGI, will result in even more new approaches, paradigms, and architectures. So there is definitely a major industry shift happening as we speak. Now, when it comes to AI infrastructure, on the hardware side, we have seen a continued progression of Moore's Law, which predicted a doubling of computing performance and efficiency approximately every two years.

Moore's Law has now held for almost 60 years and has served society really well, enabling personal computing, mobile, internet, and digital transformation for the enterprise. But what's more remarkable is that in the AI space, over the last few years alone, we have seen a faster-than-Moore's-Law increase in demand for compute, storage, and interconnection capabilities. As a data point, if you just look at the number of parameters used in the state-of-the-art AI models as one way of measuring the model size, we see that the parameter count in these large models has actually grown 10,000 times bigger in just four years, between 2018 and 2022. So such demands and pretty stringent compute requirements have resulted in a wide range of innovations for AI-optimized infrastructure capabilities.
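
As a back-of-the-envelope illustration of those two growth rates side by side, here is a minimal Python sketch; the 10,000x and roughly-two-year figures come from the remarks above, while the derived model doubling time is simple arithmetic, not a company disclosure:

    import math

    # Moore's Law: performance roughly doubles every 24 months.
    MOORES_LAW_DOUBLING_MONTHS = 24

    # Parameter counts in state-of-the-art models grew roughly
    # 10,000x between 2018 and 2022, i.e. over 48 months.
    growth_factor = 10_000
    period_months = 4 * 12

    doublings = math.log2(growth_factor)               # ~13.3 doublings
    model_doubling_months = period_months / doublings  # ~3.6 months

    print(f"Model size doubled about every {model_doubling_months:.1f} months,")
    print(f"versus every {MOORES_LAW_DOUBLING_MONTHS} months under Moore's Law.")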

In addition to these innovations at the infrastructure level, we also see significant advancements in the AI software ecosystems, including many new commercial and open source-based AI platforms and machine-learning-as-a-service capabilities that significantly simplify the build and deployment of distributed AI workloads. In fact, this is further unlocking the democratization of AI, because it's becoming much more accessible to a much wider range of developers and AI practitioners today. Now, how will Equinix participate in this process? What we see across the board is that there is an accelerating appetite for companies to quickly integrate AI into their products, services, and operations, which is driving increased demand for data center capacity as a broad range of service providers globally extend and scale their infrastructure. Equinix has actually been seeing AI-related projects and wins in our opportunity funnel for several years now.

Specifically, we see many organizations thinking very hard about how they can handle their own proprietary data and models, with a strong desire for many to maintain tight control over that data, but at the same time be able to seamlessly interconnect it across multiple clouds and across a distributed infrastructure. We are actually continuing to segment the AI market more deeply, because I think the range of use cases will be pretty broad, and we see this as an area where we will work even more closely with partners to capture incremental growth opportunities in the AI space. Back to you, David.

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

So I want to dig into that more deeply, but just kind of at a higher level, when we think about the opportunity facing Equinix: at your June Analyst Day, I think you shared a view that by 2026, you know, the AI addressable infrastructure market might be a $60 billion annual opportunity, and that Equinix's serviceable addressable market would be some portion of that, at about $27 billion. So I want to ask a few questions about these numbers that have kind of been floated, because frankly, they're some of the few numbers that people have been prepared to put out there to size the opportunity. First of all, where do these numbers come from?

How is the AI opportunity distinct from the more generic, kind of digital transformation opportunity that's around us? Let's start with that.

Justin Dustzadeh
CTO, Equinix

Great question. So at our Analyst Day back in June, we outlined the growing market opportunity in front of us. Across digital transformation, there are trillions of dollars of infrastructure moving toward a hybrid multi-cloud architecture, and much of that opportunity today is geared toward our data center portfolio. So looking out to 2026, starting with overall global IT spending of $5.8 trillion, we continue to see more of the overall IT spend move into the infrastructure markets, which we can support today. Now, the digital infrastructure market includes three main buckets: data center infrastructure, as-a-service infrastructure, and AI infrastructure, together amounting to $665 billion in total available market. Now, distilling that market opportunity down to what is addressable by Equinix, we see that roughly $140 billion falls within our available market.

Currently, our serviceable available market is largely being driven by our colocation and data center services, and some of our future market opportunity will come from starting to tap into the AI opportunity, which we estimated at around $21 billion of the $140 billion market available to us. We see that we are now at the earliest stages of a massive opportunity around AI, which has rapidly gone from experimentation, in many cases, to a significant opportunity. The market size and serviceable available market reflect market sizing by third-party providers, such as MarketsandMarkets for the AI infrastructure market size, as well as our team's own research looking at our current products and offerings. And that's how we came to the serviceable available market that you asked about. I hope that answers your question.
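
For readers tallying how those figures nest, here is a small Python sketch using only the numbers quoted above; the percentages are derived ratios for illustration, not additional company disclosures:

    # Market-sizing figures as quoted above, in billions of USD (2026 view).
    global_it_spend   = 5_800   # overall global IT spending ($5.8T)
    digital_infra_tam = 665     # data center + as-a-service + AI infrastructure
    equinix_available = 140     # portion addressable by Equinix
    ai_opportunity    = 21      # AI slice of Equinix's available market

    print(f"Digital infra TAM: {digital_infra_tam / global_it_spend:.1%} of global IT spend")
    print(f"Equinix-available: {equinix_available / digital_infra_tam:.1%} of that TAM")
    print(f"AI slice: {ai_opportunity / equinix_available:.0%} of the available market")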

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

Why did you land on MarketsandMarkets as being the source for this information?

Justin Dustzadeh
CTO, Equinix

Probably I'm not the right person to answer that question. Maybe Katie, who is joining us from the IR team, can answer that now or later.

Katie Morgan
Senior Manager for Investor Relations and Sustainability, Equinix

Dave, good question. I'd say overall, we use a variety of third-party resources to size the market opportunity. And so that was the one the team landed on for Analyst Day, but we use a number of different third-party providers to size the market opportunity out there for us.

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

So as you think about that service addressable market that you laid out at the Analyst Day, how do we think about Equinix exploiting the addressable market and turning it into realizable market opportunity?

Justin Dustzadeh
CTO, Equinix

Yeah, our serviceable available market today represents the market that we can reach and target based on our current operations. As I mentioned, much of our serviceable available market today is largely driven by our focus on colocation and data centers. As we continue to evolve and advance our product offerings across both our data center services and our digital services portfolios, we can continue to tap into the emerging AI opportunities. Also, it's important to note that AI workloads rarely happen in a silo. At Equinix, we are uniquely positioned today to provide the digital infrastructure required to support our customers' needs, not only for AI, but also for their broader digital transformations and the adjacent infrastructure elements that need to go together with AI.

We provide customers with compute, networking, and storage, and with AI still in its infancy, especially in the enterprise space, our serviceable available market will continue to grow as we augment our geographic reach and our offerings. As always, though, we continue to pursue putting the right customers with the right applications and workloads in the right data centers, and this holds true for AI workloads as well.

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

So, but in terms of exploiting that $21 billion market opportunity, is it a kind of linear progression toward taking share of that $21 billion opportunity in 2026? Or is there some sort of moment in time when it evolves and kind of all cascades down on the industry, and on Equinix in particular?

Justin Dustzadeh
CTO, Equinix

My short answer is that I think it's gonna be a continuum. It's a very fast-evolving landscape. The enterprise AI use cases are developing, and there is definitely a maturity curve that we see being developed across the industry. In terms of what that curve will look like over the next three, four years, we will have to see how the market develops. But as I mentioned, we are confident in our ability to help our customers in their AI journey, especially with enterprise use cases. We have the fundamental building blocks in terms of compute infrastructure, storage, and networking, and the data, the vast majority of the data in the enterprise that needs to be processed for AI, is already in our footprint, at our data centers.

So, again, maybe I'll let Katie speak to the specifics of what that chart or graph will look like over the next couple of years. But, again, it's gonna be a continuum, and, as I mentioned, with the existing capabilities that we have in place, with the ongoing work to expand our data center and digital services portfolios, and with the R&D investments that we are putting in place to further improve and modernize our technology stack and capabilities, I have no doubt that we will definitely be on that journey alongside our customers. Katie, any additional points that you wanna mention?

Katie Morgan
Senior Manager for Investor Relations and Sustainability, Equinix

No, very well said, Justin.

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

And again, we're all, like, referring to Katie to, like, bless everything we're saying here. But I think it's fair to say that when you guys came out with the 8%-10% type of, you know, growth rates that you're looking at, that you're kind of basing the guidance on, I think you specifically said that there's no kind of specific AI component related to that outlook. And so, to the extent that, I know Katie's, like, taking a deep breath, she wants to jump in here. And to the extent that, of this $21 billion, there's a non-zero portion that's realizable through time, I mean, that's kind of an incremental piece of that. Am I wrong?

Katie Morgan
Senior Manager for Investor Relations and Sustainability, Equinix

I would refer back to Analyst Day in June, when we said on stage that, you know, from now through 2027, we expect to deliver our long-term revenue guide of 8%-10%. Certainly, you know, AI could be incremental upside and opportunity to that, but, you know, our long-term guide is for 8%-10% revenue growth per year through 2027.

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

It's all incremental?

Katie Morgan
Senior Manager for Investor Relations and Sustainability, Equinix

... it could serve as an accelerant to the growth, as we talked about, but our long-term guide is 8%-10%, as we gave at Analyst Day.

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

All right. So we've kind of sized up a few of these things, kind of talked a little bit about the basics. So this is kind of where the rubber meets the road, I think, a little bit for the market, where, Justin, you know, I guess we're getting ready for a wave of kind of AI training engine development, which is gonna require a lot of new investment, likely for, you know, dedicated new facilities, which are, you know, optimized to address some of these power density requirements that the training stages require.

So, if you agree with the argument that the first instance of kind of AI development in the data center is the training part, before we get to the next part, which is the inference and applications adoption part, first of all, how will Equinix participate in this? And would you agree with the assertion that, given that super dense data center development isn't necessarily your wheelhouse, you're not necessarily gonna be the first data center guy to benefit?

Justin Dustzadeh
CTO, Equinix

Great question. Well, as I mentioned earlier, AI is a pretty broad umbrella, and within AI and machine learning use cases, when we look at deep learning, yes, training and inference are specific stages of deep learning. But across the board, there are many more ML/AI use cases that have a very wide range of requirements in terms of power density and in terms of compute, interconnect, and storage capabilities. As I mentioned, I think while AI plays an increasingly critical role for businesses to stay competitive, I believe AI will be one of the many pieces that will enable businesses to succeed in the digital age. I personally believe that AI will deliver optimal value when seamlessly integrated with the rest of the end-to-end digital infrastructure architecture.

While some AI training use cases, specifically large or very large models, can introduce new infrastructure requirements, building dedicated data centers for AI with brand-new designs might not necessarily be an optimal approach. Today, we are closely investigating the emerging AI-driven requirements in terms of power density, cooling, and more performant interconnect and storage capabilities. We continue to evaluate next-gen technologies in these areas, both internally and with our partners, to stay ahead of the curve. It is also important to note that not all AI workloads are equal, and as such, there won't be a one-size-fits-all approach to meet the wide range of requirements.

In our view, the model size, the training dataset size, the cleanliness of the dataset, the requirements around retraining frequency, and specific requirements around compliance and security will altogether determine the right training solution for a particular use case. For example, our xScale portfolio today could well be suitable for large-scale training requirements, such as gen AI or large language models. And our retail portfolio and the digital edge can offer an ideal home today for use cases where the AI models need to be frequently retrained, and where strong security and compliance requirements need to be met. Demand forecasting is a good use case, risk analytics is another, and there are many other enterprise use cases that we see across the board.

So, I hope that answers your question, but I would just summarize it by saying that AI workloads come in many different shapes and forms, and there is a very wide range of infrastructure requirements, both across training and across inference. And again, for AI to offer the best value to the business, it has to be part of an overall digital infrastructure architecture. It has to sit close to where the sources of data and the users of data are. And again, depending on the use case, whether you need to retrain or whether there are latency requirements, the optimal architecture can be quite different.

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

Okay, I think this might dovetail into another question. For those on the webcast, if you have questions, there's a little bar, I think, underneath our presentation here, where you can type in questions, and they'll pop up here so I can try to feather them in. Oh, and they're coming in already, so you guys already figured it out. So, okay, we'll get to some of these.

So, the thing that I think people think, and Justin, you maybe just talked a little bit about it, is that there's a monolithic sense of the training engine being this hyperdense, you know, 40 kW-plus per rack, liquid-cooled data center occupant, and that your facilities were simply never designed to allow for that. And while the occasional hotspot might be able to be accommodated, you know, you can't really scale it inside your existing portfolio, and maybe it doesn't even exist on your existing build-out roadmap. But your xScale program, as you say, is much more of a kind of raw material. It represents an opportunity to do greenfield things that are more designed to address those types of builds. That may be right or wrong, but-...

You pointed out, and I think many people believe, that the inference element, which is typically going to be multiples of existing traditional CPU compute, but likely a fraction of training density in terms of power consumption per rack, that these are the things that could more easily live at some meaningful scale inside the Equinix facilities as designed, and then obviously inform future development at the margin. So, do you agree with the idea that Equinix, because of its latency, interconnection centricity, you know, density of customers, is more of an inference play than a training play?

Justin Dustzadeh
CTO, Equinix

I would say it's an and, and we can come back to that. In addition to the elements of training that we talked about, and the many training use cases that we can support today and that will be supported as part of our technology roadmap, let's talk about inference for a second. As it relates to our retail portfolio, I think, as we briefly talked about, we believe that our retail portfolio today will be an ideal home for inference, particularly for use cases where the AI models are highly dynamic and where the insights generated from these models are more real-time and mission-critical in nature, because these really drive very stringent performance and latency requirements.

Overall, even though we are in the early days of AI adoption within the enterprise, we are seeing wins. For example, as we discussed on our Q2 earnings call in early August, we have seen specific instances of interconnect to support AI. In fact, we had a pretty significant win in Q2 in the AI space with an AI-as-a-service provider that put their network nodes with us to really drive multi-cloud connectivity and to support inference and interconnection to the cloud. Now, from a customer lens, as we talk with customers and many AI practitioners, some customers are saying that they have private data that, for a number of reasons, they want to retain control of, but they also want to be able to connect to AI services.

Customers can use interconnection today at Equinix to be a direct cross-connect away from the hyperscaler of their choice, leveraging our 40% market share of cloud on-ramps in the markets where we operate today globally. On the other side of the equation, you're seeing some newcomers enter the market to deploy GPUs as a service, and these players might deploy their training in a low-cost power market.

But the fact is that once their model is trained and they think about inference, which, with scale, usually requires a much more distributed footprint, they look at Equinix, where our broad geographic footprint of data centers across the world means that 80% of the population in North America, Western Europe, and many large Asian metros today lives within a 10-ms round-trip network latency of our facilities. So with that kind of geographic reach, latency, and proximity to the population, Equinix is the logical place to put inference nodes. And we believe that cloud service providers are likely to play a significant role in AI, and we already have a deep relationship with them, as well as with the enterprises that want to consume those cloud services.
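
As a rough sanity check on what a 10-ms round trip implies geographically, here is a minimal sketch; the 10-ms figure comes from the remarks above, while the fiber refractive index is a textbook assumption (light in fiber travels at roughly two-thirds the vacuum speed of light), ignoring routing detours and equipment latency:

    # Speed of light in vacuum, km/s, and a typical optical fiber
    # refractive index (an assumption, not an Equinix figure).
    C_VACUUM_KM_S = 299_792.458
    FIBER_INDEX = 1.47

    def max_fiber_distance_km(rtt_ms: float) -> float:
        """One-way fiber distance reachable within a given round-trip time."""
        one_way_s = (rtt_ms / 1000.0) / 2.0
        return one_way_s * (C_VACUUM_KM_S / FIBER_INDEX)

    # A 10-ms round trip corresponds to roughly a 1,000-km fiber run.
    print(f"{max_fiber_distance_km(10):.0f} km")   # ~1020 km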

We also believe that new ecosystems could be created by the demand for AI, and we are working to actively seed those ecosystems in our facilities. And finally, our view is that with scale and adoption, inference will be increasingly an edge play, and infrastructure providers with densely interconnected ecosystems and distributed footprints will be best positioned to serve those types of workloads going forward.

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

So we've got a number of questions that have come in. Before we kind of start hitting those, and again, if you guys want to just type in something, I'll try to get to it as we move into the final phases here. But I guess this is a big question for us, Justin, which is, you know: data centers like Equinix's have been designed to 110-120 W per square foot types of density, which is what exists today, and you guys have expressed an inclination toward a little bit more density, but not dramatically more. Does the data center industry represent an enabler or an obstacle to AI adoption? And does power represent a limiting factor in the pace at which it's possible to adopt it?
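
To relate the two density figures raised in this conversation, here is a rough conversion sketch; the 110-120 W per square foot and 40 kW per rack numbers come from the discussion, while the gross floor area attributed to each cabinet is a hypothetical planning assumption that varies by design:

    # Floor density as designed, W per sq ft (from the question above).
    floor_density_w_per_sqft = 120
    # Gross white-space area attributed to one cabinet, including aisles
    # and clearances (a hypothetical planning figure; varies by design).
    sqft_per_cabinet = 30

    avg_kw_per_cabinet = floor_density_w_per_sqft * sqft_per_cabinet / 1000
    print(f"Average budget: ~{avg_kw_per_cabinet:.1f} kW per cabinet")        # ~3.6 kW
    print(f"A 40 kW AI rack is ~{40 / avg_kw_per_cabinet:.0f}x that budget")  # ~11x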

Justin Dustzadeh
CTO, Equinix

Great, great questions. So, Equinix is an enabler for AI, for sure, and will continue to be an enabler for AI. And, as we have shared, at the crux of AI, it's really about data and the ability to optimally process the right datasets and share the insights from that data processing... and data is mostly generated and consumed at the edge. And today, Equinix is well-positioned to support the building blocks for AI, again, both for training, which requires proximity to data, and for the distribution of the insights in terms of inference, which will increasingly be an edge play, as we talked about.

So at Equinix today, we provide the most efficient ways for enterprises to effectively move their data from their private infrastructure to end users, as well as from their private infrastructure into the multi-cloud environment and to the SaaS providers. These are all capabilities which have been core to our 20-year history. We don't believe that there will be a significant need to retrofit facilities. We certainly continue to invest in emerging technologies and architectures for our next-gen design to ensure it's evolving and keeping pace with the market and the latest innovations. One of the strengths of our retail business, for example, is that when you serve a very broad range of customers with differing density requirements, you're actually able to sort of dense up and extract more from the system over time.

On the xScale side of the equation, it's a little bit more challenging, as you typically allocate the power for the building to one or two customers. And within our co-innovation facility in Ashburn, we are actively investigating and testing various technologies, including liquid cooling, to further optimize our current designs and be able to implement them as a more standard capability in our go-forward builds. Now, in terms of power availability, with power being a critical element of what Equinix delivers to customers, we take a long-term lens on our power procurement planning by securing power ahead of breaking ground on our developments. In some cases, we are working with municipalities or local utilities that are facing resource constraints around power to secure our build capacities, for example, in Northern Virginia or in Singapore.

We also partner with utilities to share a roadmap of our own future power requirements. Because our fill rates are pretty predictable, that can be done pretty well. We are leaning into our sustainability agenda as well, to demonstrate to municipalities and governments that Equinix is and will be a responsible user of the resources in the community. In contrast to wholesale and hyperscaler players, our incremental power demands in any given year are smaller in scale, but they have actually consistently grown over time. In certain markets, such as Silicon Valley or Dublin, we have also implemented self-generation, for example via Bloom fuel cells as a primary power source.

Because of our retail-focused model, we have multiple options around power availability as we continue to expand our footprint and bring up new locations in the coming years. Now, in addition to power distribution and availability, sustainability is obviously top of mind, in terms of how we measure our power consumption and ensure the right type of renewable energy coverage. In discussions with AI practitioners, as I mentioned before, the key decision drivers for them today are the location of their datasets, data gravity, and a distributed footprint, which all play well into Equinix's retail colocation sweet spot. I think we are still in the early days of AI deployment for the enterprise, and as innovation continues to develop in this space, I think we will see a pervasiveness of AI use cases and workloads across many new verticals and industries.

As AI infrastructure continues to get more distributed and move toward the edge, we actually see wide adoption of hybrid multi-cloud architectures for AI use cases, which, again, lends itself very well to our strengths. We very much look forward to continuing the AI journey with customers, and if you have any additional questions at this point, I'm happy to answer them. Thanks very much, Dave.

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

Okay, tons of questions. So one question, which actually I don't really get the beginning of: Cisco was earlier talking about the evolution from InfiniBand to Ethernet. Does that create any obsolescence risk for your interconnection business, as you talk about the necessity of low-latency interconnect with respect to AI?

Justin Dustzadeh
CTO, Equinix

My short answer is, I don't see that as an impediment or an issue that would make our interconnection capabilities obsolete. Yes, we are closely and actively investigating and monitoring these technology trends. Interestingly, there's a lot of innovation in the networking space these days, because networking, if not done right, could become a bottleneck in AI training and inference, and, as you know, interconnection and highly performant interconnect capabilities have been core to our business model for many years. So the short answer is no; the evolution and the trend making Ethernet one of the viable technologies in the interconnect space is a natural thing that we see, like many other developments in the industry.

That is in no way a showstopper or any impediment for our product portfolio.

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

So another question is, with respect to the rising density of power consumption in the data center: obviously power exists, right? But in order to increase the power density of a data center, it requires an increase in cooling capability, maybe a diversity of cooling technologies. It requires an increase in N+1 generation capability, diesel backup power, battery backup power. So could you talk about the step-function increase in capital expenditure intensity for building, you know, an ideal Equinix hybrid AI/regular data center, versus what Equinix maybe thought these data centers would look like from a capital intensity standpoint five years ago?

Justin Dustzadeh
CTO, Equinix

Yes, I can talk about several elements. The short answer is that I don't think there is one single formula that works for all use cases. I think the keyword that you mentioned is that it's gonna be a hybrid, a mixed environment. Different use cases will require different types of energy and power requirements.

As the CTO, I spend a lot of time with my team and with my peers looking at how we can actually leverage software and AI to create a much more dynamic architecture, such that we can measure power consumption and distribution at a very granular level and effectively create a feedback loop, where we can act upon those insights and control how power is distributed and how it's consumed, and, based on different use cases and different requirements, even use different sources of energy for different use cases. It's a very fascinating space that is fast developing. It's really the application of AI to the physical infrastructure, so we are investing R&D efforts in that space.

But to answer your question: because neutrality is a kind of tenet of our business model, we are really trying to enable as many use cases as possible on Platform Equinix. And different use cases have different requirements in terms of power density, the flexibility between different sources of power, and the granularity at which power needs to be measured and modified or configured differently.
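
As a conceptual illustration of the measure-and-act feedback loop Justin describes, here is a minimal sketch; the telemetry hooks, cabinet IDs, and thresholds are all hypothetical stand-ins, not Equinix's actual control software:

    import random
    import time

    # Hypothetical telemetry and control hooks, stubbed for illustration.
    def read_power_kw(cabinet_id: str) -> float:
        """Return the current draw for one cabinet (stubbed with noise)."""
        return random.uniform(7.0, 10.0)

    def throttle_workload(cabinet_id: str) -> None:
        print(f"throttling non-critical workloads in {cabinet_id}")

    CABINETS = ["cab-01", "cab-02"]   # hypothetical cabinet IDs
    DRAW_CAP_KW = 10.0                # assumed contractual draw cap per cabinet
    HEADROOM = 0.9                    # act before the hard cap is reached

    def control_loop() -> None:
        """Measure per-cabinet draw and act on the insight: the feedback loop."""
        for cab in CABINETS:
            if read_power_kw(cab) > DRAW_CAP_KW * HEADROOM:
                throttle_workload(cab)

    for _ in range(3):                # a few sample iterations
        control_loop()
        time.sleep(1)                 # in practice, re-sample on a fixed cadence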

Katie Morgan
Senior Manager for Investor Relations and Sustainability, Equinix

And Justin, I would just add on. Dave, as you think about it, it's something we've done over the course of our history: managing across a number of different customer workloads. You know, when you go into our facilities, it's not a homogeneous, you know, set of workloads. You have a number of different customers with a number of different types of workloads and draw caps and things like that. And so it's always kind of like playing Tetris as we lease up a facility, where we may have what we call today a very dense deployment of, call it, 10 kW per cabinet in a certain spot of the data center floor.

Next to that, we'll put a networking deployment where we have 2 kW per cabinet to kind of balance that out, but it's something we've always managed through across our portfolio.
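
Here is a toy version of the Tetris placement Katie describes, pairing dense and light cabinets so the row average stays within budget; the 10 kW and 2 kW values come from her example, while the 6 kW row budget is a hypothetical figure:

    # Cabinet draws from the example above, in kW; the 6 kW-per-cabinet
    # row budget is a hypothetical design figure for illustration.
    DENSE_KW, LIGHT_KW = 10, 2
    ROW_BUDGET_PER_CAB_KW = 6

    def row_average_kw(cabinets: list) -> float:
        """Average draw across the cabinets placed in one row."""
        return sum(cabinets) / len(cabinets)

    # Alternating a dense compute cabinet with a light networking cabinet
    # keeps the row average at (10 + 2) / 2 = 6 kW, exactly at budget.
    row = [DENSE_KW, LIGHT_KW] * 4
    print(row_average_kw(row) <= ROW_BUDGET_PER_CAB_KW)   # True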

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

So I have lots of follow-ups on that, but I wanna keep asking the customer questions. So, one question, Justin: you mentioned this idea of data gravity. And, you know, one client kind of notes, and I'll throw out an example, that Meta has been reported to be investigating a gigawatt deployment in Wisconsin because the land is cheap, the power is available, and there seems to be an indifference, if you will, to the notion of data gravity, because Meta can create its own data gravity. So are you at all concerned that, you know, as large language models, built in huge data centers, move data center gravity away from city centers by necessity, your advantage somehow dilutes down through time?

Justin Dustzadeh
CTO, Equinix

The short answer is, I don't believe so. I think the pie is big enough for a lot of use cases. There are definitely use cases that can benefit from a large training kind of deployment in a place where maybe power is more accessible, or where that use case does not necessarily require a lot of interconnection to other datasets. And typically, that's the case with applications like maybe ChatGPT, on public data, I would say, as opposed to enterprise private data. And in many discussions with customers, their data is actually their most valuable asset, and they want to keep it where it is, and they wanna continue to update it, and they wanna continue to augment it with other datasets.

Moving that data out of where their compute is to a remote place, then doing all that computation, and then sending the inference workloads back to the edge, I don't think that's the most natural process. Again, AI practitioners today say: How can I leverage my dataset that is sitting at Equinix or in the cloud today, and how can I just augment that and derive new insights from it, and make sure that it stays private, it stays secure, and that I don't have to do a lot of extra work to do AI on it? Again, in the example that you mentioned, that use case might benefit from that particular deployment type.

But across the board, we are seeing that a lot of AI practitioners are saying: How can you help me leverage my data? How can you leverage your compute, storage, and interconnect capabilities for me to do AI training and inference where I am today, and in close proximity to cloud providers within single-digit latency?

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

So Justin and Katie, thank you for being with us again today. As we wrap up, you've outlined all the opportunities and all these things, and you've produced a lot of numbers, and we appreciate that. What I wanted to do is really tackle, I think, the lingering criticism, which is that AI has changed everything. That everything we used to think could be done in a data center can no longer be done there. That it requires far more density of power and far more ingenuity in cooling, and no data center that exists today could possibly handle it. Your roadmap couldn't possibly handle it.

All of this has to be done by brand-new people coming in with brand-new tools; Equinix is a dinosaur and can't really participate. Justin, take us home.

Justin Dustzadeh
CTO, Equinix

I have a slightly different view. As we mentioned, AI is not necessarily a new thing, and we have seen a continuous progression of Moore's Law that has really enabled so many different use cases. And I continue to believe that as you look at the end-to-end AI/ML workflow, there are so many ways to optimize the output that just throwing power density at the problem might not be the single bullet that solves all the problems, and sometimes it might actually make things less optimal. So, again, across the board, we are seeing that today there is a mix of hardware technologies and compute requirements that can successfully meet the wide range of enterprise AI use cases.

The fundamental building blocks continue to be compute, networking, and storage, and when these things come together at the right place, close to the data that matters and close to the ecosystem participants that matter, all of that together actually defines the success or the failure of an AI deployment model. Again, I'd be happy to continue the conversation and share the experience and the learnings that we see across the board with the 10,000 customers that we have on our platform. Every one of them is looking at ways to leverage AI, and again, there are recurring themes that we are seeing and are able to share with them and with the community. Again, AI is about where data is.

AI is about bringing compute to the right dataset and doing that privately, especially in the case of the enterprise, and being able to elastically increase your inference capabilities and have inference where it's needed. Obviously, we are not saying that there won't be any new power density requirements; we are working on these technologies, and we are enabling them at our data centers as well as at our co-innovation facility in Ashburn. I hope that answers your question somehow.

David Barden
Head of Communications Infrastructure and Telecommunications Research, Bank of America

That's a great place to leave it. Justin and Katie, thank you so much for joining us. We really appreciate you being part of our first-ever global 2023 AI conference. It was really, really fun to talk to you guys. Looking forward to staying in touch.

Justin Dustzadeh
CTO, Equinix

Thank you very much.
