Good afternoon, everyone. I'm Samik Chatterjee. I cover hardware and networking companies at J.P. Morgan, and I have the pleasure of hosting the Arista team here with me. We have Chantelle Breithaupt, who's the Chief Financial Officer, Ashwin Kohli, and Liz Stine from Investor Relations. Thank you all for making it to the conference, and thank you to the audience as well for joining. I'll direct a few questions.
Just let me know whoever wants to take it, but I do want to start with a question that we've been asking most of our companies: share your thoughts on how you think the next 12 months will shake out. When you think about your main end markets, where do you think they will be in terms of spending intent, or in terms of an overall positive or negative outlook compared to where we are today? For you especially, it probably makes sense to go by vertical, whether that's the hyperscalers, the specialty cloud providers, or enterprise. If you want to break it down that way rather than give generic macro comments, we'd definitely appreciate that as well.
Yeah, sure. Good afternoon. From the perspective of the next 12 months, if you listened to the earnings call we had on February twelfth, you saw that we raised our guide for the year to 12%-14% growth, and with that comes your question, Samik, of what we see. When we raised the guide for the year, part of that was looking at the optionality of different ways to get to that outcome, and that's robustness across many of our vertical segments. If you take cloud, enterprise, and the specialty providers, those three segments, we've seen good traction across all three. So I think we're excited by what we're seeing from that perspective.
Mostly that was in the sense of traditional cloud, classic cloud, and enterprise. I have a feeling you'll probably ask us something about AI, but we'll wait and see. From an AI perspective, that's kind of a tertiary adjacency. That would be something to think about going into 2025, with the $750 million revenue target that we expressed constructive optimism towards. And so we're excited.
Mm-hmm.
I don't know. Because-
So let me just follow up on that, Chantelle. The optimism on that front definitely stands out when we compare to 2023, when the cloud companies were not spending as rapidly. When you think about the change from that time horizon, what's changing for the cloud companies themselves that's bringing them back to invest in the traditional infrastructure? Is it purely a function of inventory digestion, and that's now behind us? Or is there something more technology-related or structural that's getting them to come back and re-engage with the traditional infrastructure? I know we are not talking about the AI side just yet, but what's driving them to come back and invest in this? How do you see that change?
Yeah, I think there are a few things in that conversation. Part of it is just the timing of the conversation, with fiscal calendar years, working through budget cycles, planning cycles, et cetera, and gaining visibility. We talked about visibility going from three or four months to five or six months, so that's encouraging. I think the hypothesis, speaking on their behalf in the sense of what we see, is just continuing that refresh cycle, continuing the investment aside or adjacent to their cloud CapEx and their AI CapEx conversation. So I think it's part of their refresh cycle that we're seeing, and then just having set plans and intention with that within our fiscal year 2024.
Okay.
Yeah.
Okay. I'll digress for a bit, and we'll get to the AI-related questions, which we obviously have a ton of. But one of the things that investors have come up and mentioned to me a few times, I did want to get out as a question and get your comments on. For an incredibly stable management team that you've had over the years, investors have been caught a bit by surprise by the magnitude of changes more recently, including Anshul's departure and changes to the board. That's definitely got investors more curious about what's happening. How do you reassure investors that there's no change in the roadmap, and everything is as they should expect in terms of Arista executing the way it always has, which is to execute better than expected, right?
How do you reassure investors on that front?
Yeah. A couple things I would mention. Given the size of Arista and the length of time Arista's been a company, having some of this tenured staff for anywhere between eight, 12, 16 years is actually, in the tech industry, quite a long tenure. You had a couple of things happening around the same time that are not related, and maybe that created the impression that there's lots of change. They're very specific changes. Regardless of that, we have a ton of bench strength across all the items, and it allows the leadership that was there before to rise up, do things, and add to their careers in ways they couldn't have before.
So, in the sense of mentioning Anshul leaving, we have great capacity and capability. We have Chris Schmidt, who's leading sales. We have Martin Hull from a product engineering perspective. We have John McCool, who has engineering and operations, so a ton of strength that's been there a long time as well. We have all the people that you know and love working on the tech roadmap, and that hasn't changed, including Andy and Hugh, who we're very excited about. So I don't think any one departure changes the direction of the company. Coming from bigger companies, we have a structure and a matrix to help bolster what we need to bolster: tech roadmap, leadership, sales engagement. We have Ashwin here as the Chief Customer Officer-
... you know, working with the team and the enterprise. So from that perspective, the enterprise is covered. So we're excited. We feel we have the leadership we need, the relationships we need, the engagements, and-
Yep.
You know, and we have the confidence to raise the guide in that environment, so I'm optimistic.
Okay, great. Moving to AI, and before we get into the demand roadmap here, we're also asking all of our companies how they're adopting it internally. Where do you see the tangible improvements in internal operations coming through?
Yeah. No, that's a great question. Good afternoon, everyone. So, we've made investments to figure out how we can leverage AI internally, both from a development standpoint and from my team, which is a lot more field operations-focused. On the development side, while they're coding, you can actually see efficiencies from there. From my team specifically, the goal from field operations is to go figure out, how do we help customers make money, and how do we help customers save money? And so the goal is, okay, how do we get better documentation, better support, better alignment, automation of designs, and support out there?
We've obviously got our internal tool, which is called Ask AVA, and we've been enhancing that and leveraging it for a number of different products internally in order to drive efficiencies from there.
Great. So let's move to the demand side, but I do want to first hit the biggest debate that investors have been focused on, which is Ethernet's opportunity relative to InfiniBand. Can you talk about where Arista is winning against InfiniBand, and what drivers are leading your customers to choose Ethernet?
Yeah. No, great question, right? So, specific to AI, typically, when a customer is thinking about AI, they're thinking about what's called the back-end infrastructure side. And so customers have a choice: they can either go with InfiniBand in that scenario, or they can go with Ethernet as an option. And when they're looking specifically at Ethernet, they want to make sure they're going to a non-lock-in, non-proprietary, standards-based architecture, both from a hardware side and a software side, in order to scale out their back-end infrastructure. That actually drives their total cost of ownership much lower, which is of interest to a lot of customers.
Additionally, when they're looking at Ethernet, there are several metrics from a technology perspective they want to look at as well. They want to look at whether it's a lossless architecture, whether it actually does good priority flow control, whether it handles congestion well, and there are several other metrics that customers want to look at when they're deploying Ethernet. And this is not only from a day-one perspective, which is the day you actually buy the equipment, but also how you roll this out in what's called lifecycle management. So when you're thinking about lifecycle management, you want to be thinking about, okay, how do I automate the infrastructure? How do I do code upgrades?
Because in all networks there is software, and there are going to be bugs in software. So the question comes to this: How do I do a seamless code upgrade in my AI infrastructure without actually bringing down the AI infrastructure? Because when you do, the GPUs are not active, so you want to be able to do a software upgrade that is what's called hitless. Additionally, you may have security patches on the software as well, so you want to be able to patch the infrastructure for any security patches, once again hitless, so there's no impact to the business, specifically on the GPU clusters. The other thing customers are thinking about, in order to lower the total cost of ownership, is visibility.
Can I actually manage my AI infrastructure in exactly the same way as the other use cases that I have? So let's say, for example, they have data center, they have campus, they have routing. They want to make sure that they're using the same tools, the same people, the same processes to manage that infrastructure as well. So these are all very key points that anybody who's thinking about making an investment choice on an AI fabric wants to be considering.
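[For illustration only: a minimal Python sketch of the day-one and lifecycle checklist Ashwin describes above. The field names and the example values are hypothetical, not an Arista tool or specification.]

```python
from dataclasses import dataclass, fields

@dataclass
class FabricChecklist:
    """Hypothetical day-one and lifecycle criteria for an AI Ethernet fabric."""
    lossless_pfc: bool            # priority flow control so RoCE traffic is not dropped
    congestion_management: bool   # e.g. ECN-style congestion signaling
    hitless_code_upgrade: bool    # software upgrades without taking the fabric (and GPUs) down
    hitless_security_patch: bool  # security patches applied without business impact
    unified_tooling: bool         # same tools/people/processes as data center, campus, routing

def gaps(fabric: FabricChecklist) -> list[str]:
    """Return the criteria a candidate fabric does not yet meet."""
    return [f.name for f in fields(fabric) if not getattr(fabric, f.name)]

# Example: a fabric that is lossless on day one but weak on lifecycle management.
candidate = FabricChecklist(True, True, False, False, True)
print("Gaps to close:", gaps(candidate))
# -> Gaps to close: ['hitless_code_upgrade', 'hitless_security_patch']
```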
Yeah, and just to add to that, Ashwin, six months ago we were having this debate of Ethernet versus InfiniBand. Two quarters ago, we started talking about these five AI opportunities that we were invited into; four of them have chosen Ethernet to move forward. That means these customers are putting engineering resources behind the development and wanting to see Ethernet service those back-end AI clusters, so that shows the momentum. And on the most recent earnings call, Jayshree went on to talk about these clusters progressing through the pilot stage.
So we've talked about 2023 being the trials, 2024 being the pilots, with some of those pilots scaling up to tens of thousands of GPU connections, and then moving on to production in 2025, which is where our $750 million target comes from. And those have aspirations of scaling well beyond that, up to 100,000 GPUs, maybe beyond. So I think that shows the progression; Jayshree's been very good at detailing over the last couple of quarters the progress we're making with some of these larger AI opportunities.
Yep, great. Maybe let's extend that discussion a bit more towards Spectrum-X, which has come up a lot in conversations, Liz, as you're aware as well. How do you see them as a competitor? Arista is winning with the four out of five you've talked about, and obviously it's a choice by the customer to adopt Ethernet, but then Spectrum-X obviously gets included as an Ethernet option. So how do you think about your competitiveness against that product?
Yeah, I mean, I would try to simplify that down into two big buckets, right? One being hardware and one being software. Customers obviously want a choice. Once again, back to Ethernet, whether it's going to be proprietary or non-lock-in, Arista's delivered value there for the last 20 years. When you actually look at the hardware specifics, depending on the scale of the AI fabric that you're trying to build, you want to look at, okay, is the traffic leaving the rack, right? That's the first aspect of the scale that you're looking at.
If it's going to be leaving the rack, and if it's going to a small cluster of GPUs, then you want to figure out, do you want what's called a lossless fabric? Arista has a solution there, which is Jericho-based. That gives customers the confidence that if traffic is going from one rack to another, we'll guarantee that they're not going to be dropping packets. So they can use the Arista 7700 or 7800 chassis to move traffic in what's called a single-tier AI fabric. The question then, like Liz said before, is if you're going to be scaling to tens of thousands of GPUs, if not hundreds of thousands of GPUs, scale becomes a massive factor. At that point in time, you want to look at a two-tier architecture.
And so what differentiates us from Spectrum specifically there is that Arista has a Jericho-based platform that allows us to scale, and not only at the single-tier fabric, which is a lossless architecture. Customers can use the Jericho-based family of products, or they can use a Tomahawk 5, or they can go to what's called a two-tier architecture and use the Jericho platform in the spine layer, which actually allows them to scale. So that's the hardware differentiation that Arista is offering our customers. From the software side, it comes back down to basics, right? Arista's been very good at our software platform, which is called EOS. EOS allows and gives our customers...
And I always tell our customers this: any network should just work. It should not be just a specific use case. Whether you have software that is used for the data center, or you've got a use case for the campus, or for routing, AI is simply another use case as well. And so what we've been really good at, in differentiating our software value, is being able to use the same code across multiple use cases, and I can articulate this very simply. I was having a conversation earlier today, which is: imagine you've got a MacBook Pro, you've got an iPad, and you've got an iPhone. Different hardware, but they all have the same look and feel across the different types of platforms.
So the users don't care which platform they're using. They can use it across all of them. It's the same apps. Very similar to what Arista does, right? Different use cases, same software, which gives our customers the value to go leverage from there as well.
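[For illustration only: a back-of-the-envelope Python sketch of why the two-tier, Jericho-in-the-spine design matters for scale. The port counts below are hypothetical radixes, not a specific 7700/7800 or Tomahawk 5 configuration.]

```python
def single_tier_capacity(switch_ports: int) -> int:
    """Single-tier fabric: every GPU NIC connects directly to one switch/chassis."""
    return switch_ports

def two_tier_capacity(leaf_ports: int, spine_ports: int) -> int:
    """Non-oversubscribed leaf-spine fabric: each leaf splits its ports half down
    (to GPUs) and half up (to spines), and each spine dedicates one port per leaf,
    so the spine radix caps the number of leaves."""
    gpus_per_leaf = leaf_ports // 2
    max_leaves = spine_ports
    return max_leaves * gpus_per_leaf

# Hypothetical radixes, chosen only to show the order of magnitude:
print(single_tier_capacity(576))                          # ~576 GPUs behind one big chassis
print(two_tier_capacity(leaf_ports=64, spine_ports=576))  # 18,432 GPUs across two tiers
```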
So one other thing that I would add to that, and thank you, Ashwin: never underestimate the importance of the software, right? If you talk to any of these guys that are building large AI clusters, it's a mission-critical application, right? It's a competitive advantage. If the network doesn't work, the GPUs are not talking to each other. That is a very important factor. So we've gotten a couple of questions like: well, if it's a single application, does software really matter? It definitely matters, and it matters even more when it's a mission-critical app, and especially because these resources are so expensive, right? You want to get the most utilization out of those GPUs.
Mm.
So EOS has always been built on this culture of quality, right? We don't ship a product before it's ready. We will not risk melting down a customer network in order to hit a specific date. That's just not the way that we've decided to build products. And EOS, and Ken and his team, are committed to this culture, so that Ashwin can tell customers, "It just works," right? There is a quality aspect to our code that is a competitive advantage, and that remains true even in AI networks.
Okay. Maybe just following up on that, you talked about the four out of five wins. I just want to get more nuance on that. When you think about those, is it a win against InfiniBand, or do you also see Spectrum-X as a competitor there that you were able to displace? And in the marketplace today, when you go to your customers, are you seeing Spectrum-X being offered as a solution already, or is it more something that will be offered in the future?
Yeah. So if you look at the four out of five, we've talked about these five AI opportunities where four of them went Ethernet and one of them stayed InfiniBand. Within that Ethernet group, as some of you have probably seen, there are public postings around one of those deployments, with one of our customers outlining what it looks like at the leaf layer and putting the Arista 7800 at the spine. Ethernet in general has always been a competitive landscape, right? There have always been multiple players, and really, even the advent of Spectrum, that's not a new player. Those assets have been around for a while.
I think when we look at the front-end network, where we've shined, classic networking, take AI out of it, even through the cycles you can see the market share numbers. Our job is to continue to develop and execute on the products that are going into this next cycle, which includes these AI use cases and AI opportunities. So, like I said, it's always a competitive landscape; there are always multiple players. I think we view this AI cycle no differently: our job is to show up and execute with best-of-breed products.
And, you know, as you've seen through the last cycles, I think that we've been happy with our share, and now we have to go do it again, right?
Okay. Let's talk a bit more about UEC and the performance improvements that allows you to offer your customers. You've talked about a set of products that will probably launch next year and be more UEC-compliant. What are you envisioning those performance improvements to be, and how does that change your thinking on what your win rate will be?
Yeah. Simply put, there have been third-party reports indicating that you'll actually get about a 10% improvement in performance, and it's all around two things, right? Job completion times, which are very critical in an AI infrastructure, and then the path from point A to point B as well. I think those are the two things you should be looking at whenever the products do come out, right? That's where you'll actually see the performance improvements.
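[For illustration only: a toy arithmetic sketch in Python of what a roughly 10% job-completion-time improvement could mean in GPU-hours. The cluster size, job length, and job count are made-up numbers, not anything Arista has disclosed.]

```python
# Hypothetical cluster and workload, chosen only to make the arithmetic concrete.
gpus = 10_000                 # GPUs in the cluster
hours_per_job = 100.0         # baseline job completion time
jobs_per_month = 5            # training jobs run per month
jct_improvement = 0.10        # the ~10% figure cited from third-party reports

baseline_gpu_hours = gpus * hours_per_job * jobs_per_month
saved_gpu_hours = baseline_gpu_hours * jct_improvement
print(f"GPU-hours freed up per month: {saved_gpu_hours:,.0f}")   # 500,000 GPU-hours
```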
Yeah, and I think some of the stuff that the UEC is working on is really about getting that ecosystem ready. So, like Ashwin said, the end-to-end path: that includes the NICs, the switches, every hop of that network. As far as initiatives go, it's really around congestion management. Obviously, in managing the congestion in these AI workloads, you don't want packets colliding, you don't want drops, you don't want congestion; you want to get the most use out of your GPU resources. And then load balancing, right? How do I spray... how do I effectively utilize all of the available bandwidth within the network?
I think those are the goals and initiatives that UEC is working on day one. And we're expecting some sort of draft later on this year.
Mm-hmm.
Is it a stopgap to customers continuing their pilots? Obviously, there are a lot of customers that are still working on their pilots today. But again, it's about enhancing Ethernet to better service this new AI use case.
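[For illustration only: a minimal Python sketch contrasting classic per-flow ECMP with the packet-spraying style of load balancing the UEC work targets. This is a toy model, not UEC or Arista behavior; the uplink and flow counts are arbitrary.]

```python
import random
from collections import Counter

UPLINKS = 8  # equal-cost paths from a leaf toward the spine (arbitrary)

def per_flow_ecmp(flow_id: int) -> int:
    """Classic ECMP: every packet of a flow hashes to the same uplink, so a few
    large 'elephant' flows can pile onto the same link and create congestion."""
    return hash(flow_id) % UPLINKS

def per_packet_spray(_flow_id: int) -> int:
    """Packet spraying: each packet independently picks an uplink, spreading load
    evenly across all available bandwidth (reordering is then handled elsewhere)."""
    return random.randrange(UPLINKS)

flows = [101, 202, 303, 404]                     # four large AI flows
packets = [(f, seq) for f in flows for seq in range(1_000)]
print(Counter(per_flow_ecmp(f) for f, _ in packets))     # at most 4 uplinks carry traffic
print(Counter(per_packet_spray(f) for f, _ in packets))  # all 8 uplinks share the load
```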
Okay. Let me ask you one more, and then I'll open it up to the audience. On the compute side, there's this big trend within the industry towards AI accelerators, towards adopting custom silicon beyond the NVIDIA ecosystem, right? How do you think about your win rate on those? How does that change the overall market opportunity for Ethernet relative to when a customer is using, or more locked into, the NVIDIA ecosystem?
Yeah, you take it.
I mean, I would say that an open ecosystem is good for everyone, right? It's good for Ethernet in general, the more these accelerators come out with support for Ethernet. It's good for customers, giving them a choice. Especially these larger customers, they don't want to be locked into a vertical stack, and they don't want to be locked into a single vendor. So choice allows them to pick best of breed for the application that they're going for, and when they can pick best-of-breed GPUs, they can pick best-of-breed networking, and they can have all the building blocks that best service their use case and their application.
And if I can add to that, outside the win rate, this is not the first time there's been a conversation around InfiniBand and Ethernet, right? InfiniBand's been around for the last 15-plus years. So if you want to think about it, almost 15 years ago there was a use case for InfiniBand around low-latency, high-frequency trading applications. You had a bunch of HFT customers that were using InfiniBand to try to get the lowest latency, high bandwidth, and lowest drop rate for market data, trading execution, and trading portfolio servers. And then those same customers said: "Okay, we don't wanna be locked in.
We wanna use something which is open standards." That's what Liz talked about, and so they actually migrated to Ethernet. That was about 15 years ago. Roughly speaking, about 10 years ago, there was another separate use case where customers were thinking, "Okay, we have Fibre Channel for our storage environment," which is, once again, a proprietary fabric with vendor lock-in. And the goal was, "Okay, can we migrate to something which is open-standards based?" And they ended up using Ethernet for storage, and storage over Ethernet, or what became HPC clusters, basically from there. And today, it's exactly the same conversation, right?
So this isn't a first. This is a third use case in almost 15 years where you can see customers may try a specific technology, and then they'll migrate over to Ethernet as well, right?
Interesting. Let me open it up to the audience. Okay, there are a few hands already, so whoever gets the mic first, I guess. Thank you.
Hey, guys. So I'm gonna get a little technical here 'cause I'm having trouble following some of the commentary. Just first, the 10% better than InfiniBand, that's data from Broadcom when they released Jericho3-AI, right?
Yes.
And you talked about that being part of a fully scheduled fabric, but the example that you gave at Meta, that's not a fully scheduled fabric. That's kind of a three-tier NIC architecture. So of the seven... I guess I have two questions. The first one is, of the $750 million that you've announced, how much of that is actually fully scheduled fabric-based? 'Cause the Meta one is not.
I don't know.
Well, we're not giving pieces and parts of the $750 million. What we're committing to is that it's a glide path to reach that number in 2025, based on what we're working on with the customers, but we're not giving the pieces of what's in there.
I'm just asking about the adoption of fully scheduled fabric, 'cause you've been talking about Ethernet not being proprietary... but fully scheduled fabrics are. You can't connect a Broadcom fully scheduled fabric to a Cisco one, and even global load balancing for traditional networks: Broadcom's global load balancing and adaptive routing doesn't work with Cisco, it doesn't work with Spectrum.
Yeah.
So, I mean, I don't see the benefits of Ethernet in this at all. So just if you can clarify for me, 'cause you've talked of fully scheduled fabric, and then you went to the NIC, and then you... I'm having trouble following. That's the question.
Yeah, no problem. Okay, so your question is basically, inside a fabric, do you use load balancing end-to-end? Correct. I think that's where the confusion is coming from.
Right, but it's implemented differently. Like, you can't use Broadcom global load balancing with a Spectrum-X or with a Cisco Silicon One, right? It's not, it's not-
Yeah, so typically what happens is load balancing happens on a hop-by-hop basis. When the traffic arrives on a specific switch, the switch decides how to spray the traffic on either the downlink or the uplink. It's got nothing to do with the vendor. So you can actually have vendor one at the first tier and a second vendor at the second tier, and if a customer wants, they can flip that around. The load balancing algorithm, and how the traffic sprays going northbound or southbound, doesn't actually impact how the vendors interoperate here.
That's separate from congestion notification, which could be separate, and then within the priorities, you would have to do that as well. So you're absolutely right there. But you can use different vendors. We have many, many customers today, even in the front-end networks, where they build data centers and not everybody deploys Arista everywhere, even in the data center. We have some customers who may decide, okay, data center one with vendor one, data center two with Arista. There are a lot of other customers that say, "You know what? Maybe I wanna use Arista in the spine layer and another vendor at the leaf layer, or even vice versa." And so you want to give the customer the choice.
It's got nothing to do with the vendor, and that's the whole point of Arista.
Hmm.
Arista does not want to lock a customer into a proprietary way of doing cabling, a protocol, a fabric. That's the whole point of doing that. So you're absolutely right.
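[For illustration only: a minimal Python sketch of the hop-by-hop load-balancing point above. Each switch picks an uplink purely from packet header fields plus local state, so no cross-vendor protocol is needed between tiers. The 5-tuple, seeds, and port counts are made up.]

```python
import hashlib

def local_next_hop(five_tuple: tuple, uplinks: int, local_seed: str) -> int:
    """Each switch hashes the packet's 5-tuple with its own local seed and picks
    an uplink; the decision never has to be coordinated with the next hop's vendor."""
    digest = hashlib.sha256((local_seed + repr(five_tuple)).encode()).digest()
    return digest[0] % uplinks

# A RoCEv2-style 5-tuple (src IP, dst IP, src port, dst port, protocol) — illustrative.
pkt = ("10.0.0.1", "10.0.1.7", 49152, 4791, "UDP")

leaf_choice = local_next_hop(pkt, uplinks=4, local_seed="tier1-vendorA")    # leaf from vendor A
spine_choice = local_next_hop(pkt, uplinks=32, local_seed="tier2-vendorB")  # spine from vendor B
print(leaf_choice, spine_choice)  # two independent, per-hop decisions
```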
But fully scheduled fabrics are proprietary, unless your fully scheduled fabric would work with Cisco's, right? Is that the correct way to think about it?
Yeah, it depends on the customer itself, right?
Mm-hmm.
No, no fabric should be proprietary from there. I absolutely agree with you. Yeah.
Thanks for the presentation. I'm curious to hear your thoughts about customer concentration. This is one of the risks that we as investors all immediately spot; I think 45% of your revenues are split over three customers. Is that something that keeps you awake at night? Are you thinking about ways to diversify away from that, or has that just always been the case and we should simply accept it?
Yeah. So thank you for your question. It doesn't keep me awake at night in the sense that we appreciate their business, but we always think about how to grow the denominator, to your point. So diversifying in the sense of growing enterprise, growing the other customers that we have, growing the specialty providers, that's always a goal for us, and as you grow the denominator, they become less of a concentration. Given where we're at, and you've seen the outlook on the CapEx from these customers, for now, for fiscal year 2024, we've given the guide we have, knowing the things we know.
We're very thankful for those customers, and we know we'll continue to diversify across the segments, but keep very true to our product innovation with EOS and the hardware we have for networking.
We have one more question.
Thank you for the commentary about the examples of prior InfiniBand solutions getting absorbed within Ethernet. My question is, do you think that AI networks will ultimately be co-mingled with the existing IT infrastructure? The reason I ask is that in prior examples, both InfiniBand and probably any other interconnect technology, whether it's Token Ring or T1 or SONET or SCSI or whatever, they've all been absorbed by Ethernet, because Ethernet was the overarching majority of the interconnect and the majority of the knowledge. So running a Fibre Channel network or a T1 network or a SONET network independent of an Ethernet network didn't make a lot of sense.
But in this case, it's kind of the other way around, where the AI network is not only driving such elite-level performance, but it's also at a scale that's much bigger than any of these historical examples. So I guess, do you think those two networks may actually stay isolated longer than we've seen with historical proprietary fabrics relative to Ethernet?
That's a great question, right? The first thing I would say, a very simple answer, is that networks are not built as islands, right? In every example that you gave, whether T1 or SONET or Fibre Channel, every part of the network needs to connect to every other part, right? So even if you take the AI back-end use case, at some point in time, for any size of customer, whether cloud or non-cloud, a very large enterprise customer or a small enterprise customer, if they're building an AI fabric at the back end, they're going to need bandwidth, right? And at some point, that demand for bandwidth will not stay only in that back-end network. It's going to come out, and it's going to go into the front-end network as well, right?
Or it could actually go out to any part of the infrastructure, or even leave the infrastructure as well. So the goal would be to make sure that whatever use case you build using Ethernet, which is non-proprietary and not lock-in, all these use cases co-mingle together, and it's very, very easy to run them and actually bring down the total cost of ownership. So over time, you may start with a proprietary fabric, but what you'll actually find is that you'll need a different team, different operations, maybe different tools to manage the rest of your business versus a very small use case.
And then it's up to customers how quickly they conclude that and say, "Okay, let's go standardize on Ethernet," which is the examples you've given, and just go build an Ethernet fabric everywhere... Hopefully that answers your question.
Let me try and squeeze in a couple of questions before we have to end. You have a certain share with the hyperscalers in their front-end networks, right? When I take it to the hyperscalers, or the customer set that you're working with that's now using Ethernet in the back end, how do you think about the share you have in the front end translating to the back end? Do you see areas that would drive higher share versus lower share? How should we think about that?
Yeah, I mean, I think it's probably a little early to start-
Yeah
... start picking apart the share on the back end. I think that, as Ashwin pointed out, as the back end scales, and if there's business process around it, it will obviously also drive investment in the front end. The back end is a little bit different just because there's the IB-versus-Ethernet question, and then Ethernet itself is also a competitive landscape. I think we're still trying to figure out the exact sizing of that, and every day that changes a little bit, right? So I don't know that we're necessarily looking at it from a market share perspective.
I think we're still looking at it from, all right, here's this interesting use case that's driving the next generation of products; let's go execute, and let's win our fair share, right?
Okay. A couple of questions on the $750 million target that you have for next year. Firstly, how should we expect that to progress beyond 2025? Technically, one would expect the size of deployments to grow from there, so there's a compounding effect and you accelerate growth. Or should we be thinking, no, it's more linear, because it takes time to scale and it takes time to get new wins? Just help us think through that. The second part is, you largely classified that $750 million as being with large hyperscalers. How should we think about the opportunity with the tier two clouds, and the size of that?
Yeah, I'll take the first one, and then we can go through the second. So in terms of what the $750 million in AI revenue in 2025 could become, to your question, I think a few things have to materialize or have some more data points behind them. For us, we're very specific in that definition being related to the back-end AI clusters, just to be super clear, so we don't put a bunch of things into this AI definition. But I think the customers, as they build out these AI infrastructures, as they start to showcase how to monetize them, and based on those monetization conversations, how quickly their peers pick up and what they would like to do with it, will dictate how fast they go.
Regardless of that, the whole ecosystem has to come together, back to the point of timing. I don't believe everything would line up in one quarter to create a step-function change in growth, but I do think it'll be maybe not linear, maybe not a hockey stick, but somewhere in between. Because it will take timing of the ecosystem: their own teams internally, their CapEx approval cycles, the ecosystem of power and cables and all the things related to it. That timing is out of our control. What we can control is making sure we have the right products and the right team and the right focus on helping them be successful. That's the part we can control. And then we'll see. We're very hopeful, but I think we're still in early innings on this conversation.
Yeah, and on the four out of five, we've said it's actually a mix of customers, right? Both tier one and tier two cloud. I think Jayshree also highlighted on the earnings call that some of the AI activity with the enterprise and with more of these tier two clouds was good. And what does that look like? Everybody's got an AI initiative, right? Every enterprise has an AI initiative and is figuring out exactly how they're going to solve for that. Do they start maybe in the cloud, and then do they build on-prem? So I think a lot of those conversations are in the works and happening. I mean, I know that you have a lot of them with your customers on the enterprise side.
And it’s good we’re being invited to those conversations, right?
Yeah. I know we've run out of time-
Okay.
So I'll wrap it up there. Thank you.