From this room. I'll just read some brief disclosures. If you need to see any disclosures, please check out morganstanley.com/resource-disclosures or talk to your Morgan Stanley sales representative. And for those who don't know me, the least important person on the stage, I'm Meta Marshall. I cover networking here at Morgan Stanley. We are delighted to have Jayshree Ullal, CEO of Arista, with us, and also Chantelle Breithaupt, the new CFO of Arista, so welcome her to the stage. So Jayshree, Arista has had a remarkable year as AI and the networking needs of AI have come to the forefront. However, for all the value accretion, Arista has not yet seen a lot of that back-end AI revenue. What do you see as the AI networking opportunity set, and what makes you confident that Arista can capture it?
You know, it's, first of all, a pleasure to be here, Meta. I enjoy these conferences because you ask these questions that make me think. And no matter how prepared you are, you still have to think, right? So I know there's a lot of AI fever going on, but I just want to first put this in context and say it is a very gratifying moment when I look back at 2023 and see how we have performed as a company in multiple ways. You talked about the back end, but let's first talk about the front end, right? The cloud networking and how we are now addressing that market. And just last week, we got some data that said we are number one, not just in high performance, but number one in overall data center switching.
That's never happened before in the history of the industry, that a company can capture that kind of position. So thanks to our customers. Now, going back to your question on the back end, I think what's going on in AI is that the killer application is obviously training. And large language model training requires a significant focus on job completion time. When you're training billions and billions of parameters, this is super important. And you can throw all the GPUs you want at it, but this fleet of GPUs, while it's running all of these training models and all the collective operations, has to have a very predictable, low-latency network to make that possible. So this isn't easy, and we're very much in the first innings of it.
We've, as I've described many times, been in early trials and pilots, but the level of activity is very similar to what we experienced with cloud networking 10 years ago. So I think we have a whole decade run here, and we're only in the first innings. So I'm pretty confident that a best-of-breed network vendor like ourselves, with our software and platform expertise, will excel in this. And at the same time, we have to work with a broad ecosystem. And that's what's going to take time. So I'm confident, but I also know it's taking time. It's taking one to two years, often, for our customers to stage these pilots and go and connect these best-of-breed GPUs, the NICs, the switches, and really make a seamless AI networking system work.
OK. InfiniBand has captured the vast majority of kind of that early AI training networking opportunity, seeing remarkable growth over the last year. However, back of the envelope math would kind of indicate that the InfiniBand leader is charging upwards of two times markup on that technology. That would seemingly create a great opportunity for Arista. But just how do you make inroads against the technology vendor that's bundling product?
Yeah, I think there are two forms of selling in AI: the vertical stack and the horizontal, right? The vertical stack says, I just get everything from one vendor. I get the peace of mind. I don't have to think about it. However, the horizontal stack, where you get the best-of-breed components, really favors these large deployments. No intelligent customer, and I don't want to say the others aren't intelligent, they're obviously looking for convenience, would possibly have the fox guard the henhouse, if you will. You've got these super expensive GPUs. You're spending millions, if not billions, of dollars. Any kind of downtime on that is a problem. And so why wouldn't you put in the absolute best network you can: reduce downtime, increase availability, improve instrumentation, improve security, and build AI at scale with a real professional IP network running our EOS stack?
So I think InfiniBand or any non-Ethernet technology has its moment in time. I happen to have been through several of them: ATM, Token Ring, FDDI, now InfiniBand. And they certainly are very good in certain use cases. But as I said in the last earnings call, four of the five customers we talked to have now chosen Ethernet over InfiniBand. We were very much outside looking in a couple of years ago. I think Ethernet's here to stay, and everybody's going to compete. InfiniBand's the battle. Ethernet's the war.
So some of that is just the ecosystem around Ethernet is broader. Some of that is that you guys have made kind of changes over the past year to kind of close some of those gaps with InfiniBand. Just what are some of the gaps that you've helped close, and what are kind of the other ecosystem pieces that need to come through?
Yeah, really good question. Ethernet's always been around. As Bob Metcalfe, the founder of Ethernet, would say, never bet against Ethernet, right? So me too. But the beauty of Ethernet lies in the fact that it's familiar. There are billions of nodes installed. There are a lot of tools. There's interoperability. There are standards. But you always have to tweak it to do something better, right? And so in the classic cloud networking world, to go back to analogies, we had to build an active-active topology and make equal-cost multipath work over N-way, where N-way could be 64 or 128. And that just built an incredible scale of 1 million going to 1 billion servers, non-blocking. It had never been done before. And that's how we power Azure or Meta or many cloud networks today.
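To make the N-way multipath idea concrete, here is a minimal sketch, purely illustrative, of how equal-cost multipath picks one of N uplinks by hashing a flow's 5-tuple. Real switches do this in hardware with vendor-specific hash fields and seeds; the function and field choices here are assumptions for illustration only.

```python
import hashlib

def ecmp_next_hop(src_ip: str, dst_ip: str, src_port: int,
                  dst_port: int, proto: int, n_paths: int) -> int:
    """Pick one of n_paths equal-cost uplinks by hashing the flow 5-tuple.

    Hashing keeps every packet of a given flow on one path (avoiding
    reordering) while spreading many flows evenly across all uplinks.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % n_paths

# A leaf with 64 equal-cost spine uplinks, as in the N-way = 64 case:
print(ecmp_next_hop("10.0.0.1", "10.0.1.7", 49152, 443, 6, n_paths=64))
```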
Something similar needs to happen with AI, where you need to look at the problem and say, OK, what is it about these AI clusters? They're data-intensive. They're compute-intensive. And you're continuously going through a compute, exchange, fetch, reduce cycle over and over again. And so the network has to respond to that in a few ways. First of all, you've got to have a very non-blocking architecture. You've also got to have a high-radix architecture that can support all of these GPUs talking at the same time, so you can have multiple senders sending to one receiver. And then how do you deal with the congestion that creates? How do you respond to that congestion control? Packet latency is interesting, but message latency end to end, all the way from the GPU to the application, is what matters, and the network becomes fundamental to that.
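Because training steps are synchronous, job completion time is gated by the slowest flow, not the average. A toy model, with made-up numbers and the assumption of a perfectly synchronous collective where every GPU waits on every flow, shows why one congested path stalls the whole cluster:

```python
import random

def iteration_time_ms(flow_times_ms):
    # In a synchronous collective (e.g. an all-reduce), the step finishes
    # only when the slowest flow finishes, so job completion time is
    # governed by the worst path, not the average one.
    return max(flow_times_ms)

random.seed(7)
flows = [random.uniform(9.5, 10.5) for _ in range(511)]  # well-balanced flows
print(f"balanced:  {iteration_time_ms(flows + [10.4]):.1f} ms")
print(f"congested: {iteration_time_ms(flows + [35.0]):.1f} ms")  # one hot path
```

One hot path out of 512 roughly triples the iteration time for every GPU in the cluster, which is why congestion control and load balancing dominate the discussion.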
And then speaking to the ecosystem, there are things you have to do on the end-system side, on the host side with the NIC, where typically you had RDMA, or remote direct memory access, over Ethernet. A reboot of that is required to do more flexible packet ordering and spraying of these packets. Dynamic load balancing, to make sure you get equal access to all of these GPUs, is important. So there's a ton of tweaks to Ethernet that are happening. The beauty of Arista's architecture is that a lot of this we've already been working on. We have congestion control. We have dynamic load balancing. And we'll be working with a suite of NIC vendors to make the UEC compliance happen.
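As a rough illustration of the packet-spraying idea she mentions (not Arista's or the UEC's actual algorithm), the sketch below round-robins a single large flow's packets across all uplinks. Per-flow ECMP would pin that elephant flow to one path; spraying balances it almost perfectly, at the cost of out-of-order arrival that the reworked NIC must tolerate.

```python
from collections import Counter
from itertools import count

class PacketSprayer:
    """Toy per-packet spraying: round-robin packets across all uplinks.

    Unlike per-flow ECMP, a single elephant flow gets spread over every
    path; the receiving NIC must then handle out-of-order packets, which
    is the 'reboot' of RDMA-over-Ethernet described above.
    """
    def __init__(self, n_paths: int):
        self.n_paths = n_paths
        self._seq = count()

    def path_for_packet(self) -> int:
        return next(self._seq) % self.n_paths

sprayer = PacketSprayer(n_paths=16)
usage = Counter(sprayer.path_for_packet() for _ in range(10_000))
print(usage)  # ~625 packets on each of 16 paths: near-perfect balance
```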
OK. So at your recent Analyst Day, you outlined a $750 million AI back-end revenue target for 2025.
Up from zero, I might add, right? That's a tough one.
You've also noted that four out of the five large AI trials have settled on Ethernet, or at least the latest ones have. Just what are the gating items to when we start to see some of that revenue? Is it Ethernet? Is it optics? What part of the ecosystem is the biggest gating item?
As you can expect, to have the courage to step up to a number like that must mean we're in some trials and pilots. What it takes in all of these trials and pilots is to first understand the cluster of GPUs they're trying to start with. Most of them don't start with tens of thousands or even thousands; they start with a few hundred. So the first use case we saw a lot of was the 7800 AI spine, which can connect 576 GPUs in a single 7816 architecture, or you can dual-home them to get over 1,000. So you can imagine we're in a lot of trials doing that right now, pushing and moving millions and billions of parameters, right? And this is intense testing that goes on continuously. The next step is they go, OK, wait a minute. I finally got my GPUs. I want to do more.
So how do I scale that to a two-tier leaf-spine architecture where, again, I can take the spine architecture that's very familiar to most of our customers and extend it to two tiers by adding the AI leaf? This can go from 1,000 GPUs to several thousand GPUs. Then you'll hear from us more this year on the next step, which gives me confidence in that number: if you have to go to more than 10,000 GPUs, to 30,000, to hundreds of thousands, you can build a multi-tier architecture.
Arista is working on a distributed Ethernet spine, which I shared with you at the Analyst Day, where we can bring in a single-stage architecture, sort of the best of a leaf-spine and a single spine, if you will, and scale up to a distributed 30,000 GPUs. So we're working on that. All three of these approaches, we hope, will contribute in some fashion to that big goal we have.
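The scaling steps described here are mostly radix arithmetic. A back-of-the-envelope sketch, using assumed port counts rather than actual Arista configurations: in a non-blocking two-tier leaf-spine, half of each leaf's ports face GPUs and half face spines, so capacity is roughly (leaf ports / 2) times the spine radix.

```python
def two_tier_gpu_capacity(leaf_ports: int, spine_radix: int) -> int:
    # Non-blocking two-tier leaf-spine: half the leaf ports face GPUs,
    # half face spines, and the spine radix caps the number of leaves.
    gpus_per_leaf = leaf_ports // 2
    max_leaves = spine_radix
    return gpus_per_leaf * max_leaves

# Single-stage spine: one 576-port spine serves 576 GPUs; dual-homing
# gets past 1,000, matching the figures quoted above. Two-tier numbers
# below are illustrative assumptions, not an Arista spec sheet:
print(two_tier_gpu_capacity(leaf_ports=64, spine_radix=576))  # 18,432 GPUs
```

This is why "several thousand" falls out of a two-tier design and why anything beyond roughly 10,000 to 30,000 GPUs pushes toward multi-tier or distributed-spine topologies.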
OK. So the optics ecosystem has been a gating item to upgrade cycles before. The InfiniBand leader has taken to kind of direct relationships with some of the optics vendors. Are there steps that you can take to kind of speed some of these innovations across the ecosystem?
Yeah. Our approach in optics continues to remain the same: really encourage the innovation here and work with all of the optics vendors. Sometimes the customer wants the optics integrated with our products, and we have that suite of offerings for 10 gig, 40 gig, 100 gig, now 200 and 400 gig. And of course, we'll extend it to 800 gig. One of the most exciting things, I think, on the optics front is what we've done with Linear Drive, right? There's been a lot of talk about co-packaged optics, but that's a difficult thing to maintain and troubleshoot. The beauty of Linear Drive is, especially for smaller distances, like within a data center or an AI cluster, if you can actually remove the DSP and extend the drive through your own electrical SerDes technology, this can go a long way.
This is going to play an important role in AI as well, especially within a data center, but even for kilometers of reach between data centers. I think this is going to be a key innovation in optics.
OK. So maybe one of the surprising things in the last earnings call was the GPU leader talking about 40% of their data center revenue coming from inference, or what we would consider kind of the front end of the network. They're also talking a lot about their Ethernet ecosystem. We traditionally think of the front end of the cloud network as Arista's domain. A big question I get from investors is, why should I not be concerned about all of this commentary about another Ethernet vendor entering the ecosystem with a closed system?
Well, I think you just said the right words: with a closed system. So once again, if somebody wants a proprietary something or a free bundled something, whatever, you can do that. But without naming customers, we just took out a very buggy Ethernet software stack from a vendor like that and replaced it with ours, because it just didn't work. There's something to be said about being in the industry for 15 years and building our operating system with breadth and depth.
And what I mean by that is the breadth of features, all the way from Layer 2 to AI to IP scale, and the depth in the sense that this is the third time we are gutting our architecture: building not just a state-subscription model in our software, but moving it from SysDB, a publish-subscribe model, to NetDB, to now a data lake architecture that can take different forms of data. I think there is no AI strategy without a data strategy. And the data strategy starts with your software stack. So we could throw all the hardware at the wall, but without a good, reliable software stack, I think it's difficult to build large AI clusters. Certainly, you can do some experiments.
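To illustrate the publish-subscribe state pattern referenced here, below is a toy sketch. It is emphatically not Arista's SysDB or NetDB code, just the general shape of the idea: agents publish state under a path in a central store, and any agent subscribed to that path reacts, so the agents stay decoupled from one another.

```python
from collections import defaultdict
from typing import Any, Callable

class StateDB:
    """Toy publish-subscribe state store (illustrative only).

    Agents publish state under a path; subscribers to that path are
    notified on every change. Decoupling agents through shared state is
    the general pattern behind publish-subscribe network OS designs.
    """
    def __init__(self):
        self._state: dict[str, Any] = {}
        self._subs: dict[str, list[Callable[[str, Any], None]]] = defaultdict(list)

    def subscribe(self, path: str, callback: Callable[[str, Any], None]) -> None:
        self._subs[path].append(callback)

    def publish(self, path: str, value: Any) -> None:
        self._state[path] = value
        for cb in self._subs[path]:
            cb(path, value)  # subscribed agents react to the state change

db = StateDB()
db.subscribe("interface/eth1/status", lambda p, v: print(f"route agent saw {p}={v}"))
db.publish("interface/eth1/status", "down")  # link-state agent publishes
```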
OK. I'm going to move on from AI networking. But I wanted to see if there were kind of questions about AI networking more specifically before I moved on. OK, we have a couple questions. Is it possible to have a microphone? Maybe just shout it, and we'll repeat it.
Shout away.
Hey, Avery. So if you want to look at your next slide.
Yeah.
If you look at NVIDIA's GPUs, they have really ramped and accelerated. The networks that they're using, it took us five to seven years to build something like this.
It took us five to seven years to get through the 400 gig cycle.
I could hear you better without the mic.
It took us five to seven years to build 400G. We're already at 800G next, and AI is on 1.6. I guess the question is, as AI becomes a more critical workload, does that pull up the rest of the traditional CPU side and the rest of the network?
Yeah. So just for those of you who didn't hear, the question is that GPUs are going at crazy speeds; it's moved from 400G to 800G to 1.6T. Is the network, meaning the network interface cards and the switches, going to keep up? Absolutely. And I think the industry has been slow because it hasn't had a use case, right? So I think we're going through three transitions. The enterprise is still transitioning to 100G. The cloud is still transitioning to 400G. And the GPUs will absolutely transition to somewhere between 400G and 800G. And the reason I say somewhere is that some people may deploy multiples of 400G to achieve that 800G, and some people may wait to go straight to 800G.
Currently, a lot of the pilots we're seeing are on multiples of 400 gig, because even though the GPUs have come along in SerDes technology, the NICs are still running at multiples of 400 gig. For sure, from Arista's point of view, we will be fully ready to support 400 gig and 800 gig this year, with an eye towards 1.6 terabits. So we're ready. Keep those GPUs and packets coming. But you're absolutely right to point out there's this intermediate thing called the NIC that's slowing all of us down. And this is, again, maybe a reason why I say 2025 is a good year, because some people may settle for 400 gig training trials. But if you want that really good, seamless 800 gig going to 1.6, I think this transition is going to happen faster than any of the prior enterprise and cloud cycles, in compressed time.
Usually, it took a decade. This is going to take one to three years.
Did we have another question up here?
Let's hope your mic works. Yeah.
Would you expect your software stack, I mean, your software-plus-hardware solution, to have more of an advantage at 800G or 1.6T compared to your peers trying to use maybe an open-source Sonic solution?
Yeah. I think I heard your question. To what extent is our software stack an advantage as we move to higher speeds? And what is the role of Sonic? Is that a good paraphrase? OK. So first of all, our software stack, founded, built, and bulletproofed, I call it 15 years new, not 15 years old because we keep improving it, right? And one of the beauties of this stack is the data plane is programmable, the control plane is programmable, the management plane is programmable. All three have to work together. And so our stack is fully ready to support 400 and 800 gig today and will be 1.6T ready when silicon shows up. And same thing with the management and control plane. They don't always have to run at the data plane speeds.
But they have to keep up and distribute the data, whether it's structured data, unstructured data, flow data, contextual data; visibility and correlation, all of that becomes important. We embrace Sonic. Sonic isn't for everyone and not for every use case. And we contribute to Sonic. Sonic is, in many ways, a subset of our software stack, where in some use cases you can deploy Sonic for smaller, less taxing, less feature-driven networks. And a number of the cloud networking customers that we support either have Sonic or FBOSS running in one use case and EOS running in many use cases. So I wouldn't be using it for my most mission-critical networks, and neither would many of our customers. But we certainly coexist with it and co-develop with them.
Did we have one more question? Yeah.
Hi. With 1.6 terabit optical transceivers coming on at the end of this year, do you see any of your customers, the four out of five choosing Ethernet, are they trialing linear drive now?
Not yet. Not yet. So optics at layer one tends to be ahead. And that's why you're seeing the transceivers first, because once they push the technology, then we all can push the electrical SerDes technology. To get to those kinds of speeds, I'd have to go from a 50G PAM4 SerDes, like I have today, to 100G, to potentially a 200G SerDes. So I don't see that on our horizon this year. But I certainly see it in the 2026 time frame.
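The speed ladder behind this answer is just lanes times lane rate. A small illustrative calculation, assuming the common 8-lane port configuration:

```python
def port_speed_gbps(lanes: int, lane_rate_gbps: int) -> int:
    # An Ethernet port is built from parallel SerDes lanes:
    # port speed = number of lanes x per-lane signaling rate.
    return lanes * lane_rate_gbps

# The SerDes progression described above, assuming 8-lane ports:
for lane_rate in (50, 100, 200):  # 50G PAM4 today -> 100G -> 200G SerDes
    print(f"8 x {lane_rate}G SerDes = {port_speed_gbps(8, lane_rate)}G Ethernet")
# 8 x 50G  = 400G
# 8 x 100G = 800G
# 8 x 200G = 1.6T
```

So moving a port from 400G to 800G to 1.6T without adding lanes requires exactly the 50G-to-100G-to-200G per-lane jumps mentioned in the answer.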
All right. Perfect. So maybe moving, we'll get to you in a minute, moving to the non-AI side of cloud for a second. You're coming off of two phenomenal years with the Cloud Titans, 78% growth and then, I think, 26% growth, where upgrades drove meaningful expansion. You note that you still think the Cloud Titans can grow as you wait for some of these AI use cases to come in 2025. Just what kind of visibility do you have into 2024?
Look, it's their bread and butter. So of course, they have to keep investing here. And you all have seen the projections. Most of our customers have increased their CapEx. The question is, what are they spending that CapEx on? And I think the first thing to remember is, while they will continue to invest in their bread and butter, and we will grow at the rate they grow, they have all pivoted in some form or shape to also AI. So that big CapEx number has some cloud and some AI, right? So we will get our fair share on the cloud. And it isn't uncommon after two or three outsized years for them to take a breather. And I think this is a breather year for us on the cloud. I've said that many times. Nobody likes to hear that. But that's the reality.
But it's not a breather year for us on AI, which is where they're pivoting. So I think the sum of the two will give us good success with our cloud customers. In terms of visibility, on one hand, the supply chain was no fun. But on the other hand, it was a lot of fun because we were getting one-year visibility. I think we're back to six-month-type visibility. And that's why we try not to get ahead of our skis, and we try to do this one quarter at a time.
OK. GCP has been kind of a potential opportunity people have asked you about for a number of years. Just how does AI change how you view the likelihood of this relationship expanding?
You know, I think GCP continues to be a good partner. The company was formed because we were inspired by Google. They came to us one time, way back in our origins, and said, give us a non-blocking network. Give it to us at 10 gig. And give it to us at $100 support. And not a single commercial vendor could do that. And that's how Arista was formed. But because they couldn't get it from anyone, they actually built a lot of their inside network themselves, as you know, using some proprietary OpenFlow methods. I've been very impressed with the progress Google has been making, not just with their TPUs, but with some of their Gemini and graph technology. We also have a great deal of opportunity with them outside of their intra-data center network. Time will tell how well we will do on the AI side. But I'm hopeful.
OK. Cisco, HP, Juniper are all making bigger points about their silicon strategies. Optics is becoming a bigger bottleneck. You have been a big proponent of merchant ecosystems from the beginning. But are you seeing enough development there to keep the portfolio at the forefront, as it always has been?
Yeah. Look, I'll go back to history again. Back in the day when Cisco started its silicon efforts, there just wasn't any Broadcom or Marvell or Intel or any merchant silicon. So you had to build it yourself. Same thing for Juniper; they had to do the same for their service provider routing products. But I think those companies have now consolidated. You can see HP and Juniper on one side, two big guys, and Splunk and Cisco on another side. They're doing networking, but they're going in other directions as well. That leaves us as the pure-play networking innovator, which is very exciting. And we've got multiple frontiers to explore. Clearly, the more options we have on merchant silicon, the better; silicon diversity is going to be very important to us. Clearly, Broadcom is an extremely good partner, and they literally co-develop with us. They work with us on features.
They work with us on pricing. And I have nothing but good things to say. So unless those things change, why would I go off and do something that's not in my sphere of expertise? So I tend to complement what they're doing with things I can do, whether it's accelerating programmable silicon or FPGAs as needed. And so until I see a real gap, it's not broken. There's nothing to fix. If I start to see gaps or particular opportunities, I'll certainly look to build more of our own.
And so we talked a lot up front about how people don't love vendor lock-in, and all of the reasons why people have been big customers of Arista as we've gone along, and your participation in these open ecosystems. That makes a lot of sense for the big public clouds. But I think as we go towards tier twos or enterprises or even any of your other customer base, they're not experts.
Yeah. They don't have the staff of 1,000 engineers.
Right. They don't have a staff of 1,000 engineers. And so how do you view the opportunity there, particularly when it comes to AI versus some of these more bundled solutions?
Yeah. So I certainly think it's early innings on AI for everyone. But we are starting to see the large clouds, or the tier two clouds, and large enterprises have the engineers, have the thought leadership. They're going to go work with us on it. The enterprise customers also have some very smart people. They just don't have the staff. So over there, I think we will tend to be more educational. We will work with them. The importance of giving them the instrumentation, the manageability, the high availability, the security will be much, much more important. So in the case of the advanced thought-leading customers, the early adopters, they'll figure a lot of this themselves. And they'll have the staff. In the case of the enterprise customers, they're very intelligent. They often don't have the staff.
We will have to give them more Arista-validated designs for AI and take them through sort of the small, medium, and large clusters and how you build them. But don't underestimate them. Their networks may be smaller for AI, but I think they will be up and coming.
OK. Then moving on to enterprise. Across this earnings season, many networking vendors have seen a pullback as order patterns normalize and macro creates a headwind. I know you guys have always said you're not a great enterprise macro bellwether. But just how are you seeing the enterprise opportunity? And how are you taking advantage of this environment?
You know, I'm going to turn this question over to Chantelle, since she's been very quiet. But I know you had a chance in the last two months to meet a lot of our sales leaders and enterprise customers. Do you want to say a few words?
Yeah, I'd love to. Thank you. Thank you, Jayshree. I think that from an enterprise perspective, we're very excited. It's one of the things I was pleasantly surprised by coming into the company, the opportunity that we have. And as I've gotten to learn, we have a fantastic portfolio that's ready to meet our enterprise customers where they're at. And I think we're making a lot of great progress on kind of a new-logo, land-and-expand playbook, which has proven out very well. If you listen to Jayshree's prepared remarks from the last earnings call, we estimate about 20% penetration of the global top 2,000 accounts. So I think there's lots of room to grow from a share-gain perspective.
It's working whether we enter through the data center and then move to the campus, or enter through the campus and move to the data center. Financial institutions, health care: there are some specific industries where we've made really great progress. And so we're very pleased and excited. And the things that are happening in the market take time, and the roadmaps get confused. We've heard from customers, "give us your roadmap, because we're not clear where others are at." And so we're happy to meet them. And we see it as an opportunity.
I think it's perfect timing because clearly, Arista captured a lot of the early adopters, as Chantelle was saying. But now we've got the fast followers and the mainstream enterprise interested in us too because, as I said earlier, we're pretty much the only pure-play networking vendor available. So it's a good time for us, notwithstanding macro and everything else. There's such an incredibly large TAM of $60 billion or at least $25-$30 billion in the enterprise. It's ours to execute.
Got it. You guys have set out, I mean, we talk a lot about AI, but you've set out rather aggressive campus targets as well. You just mentioned disruption in the space; we've mentioned HPE, Juniper. Just how does that change your channel strategy, in that there are a lot of channel partners now looking for other partners to sell?
It does. But I want to sort of separate our channel strategy for the large enterprises from sort of the mid-market. So I think from a strategy on our enterprises, we continue to invest. We have 9,000 customers who bought our data center products. And not all of them have yet tried our campus products. We've got plenty of opportunity there. So I would say our go-to-market strategy doesn't need to dramatically change over there. As we go into the mid-market, we will have to see more changes. And this is where channels come in because while the direct large enterprises know more about us, there are many small channels that don't. So absolutely, we will invest in both. But with an eye towards the larger enterprises because that's where we're well known.
OK. Your COO, Anshul, announced last week that he was going to take a leave of absence. Can you just speak to workarounds in the near term? I have a lot of questions from people as to any details we can get.
Yeah. Well, first of all, Anshul is just a tremendous asset to the company. He's been with us 16 years, from the young age of his 30s to the middle age of his 40s, and he's been working nonstop. I think he deserves a break, deservedly so. And I'm looking forward to seeing him back. I miss him already; in the first week, all his work has come to me, so I've got to find some delegation. I took a leave of absence when I was at Cisco for several months and came back rejuvenated, so I hope he will too. But I think it's going to be good training for many of us at Arista, where we still function like a startup and have single points of approval, whether it's Anshul or Chantelle or myself.
We'll be distributing that responsibility and, at the same time, looking forward to his return. I think developing leadership 2.0 is important for Arista in terms of scale. Chantelle's a very good example of that. I'll be looking to hire somebody who works for Anshul as a cloud sales leader. We'll be looking to bolster our AI strategy. We have had the good fortune of 15 years of continuous leadership for the most part, except for CFO. While I would love for that to continue another 15 years, I have to be realistic about whether it will or not. We're just all getting older and wiser.
The only thing I would add, as a peer to Anshul, is that I've had the pleasure to get to know him, and he's a pleasure to work with. We're all here to support him as peers, and we will make sure he feels comfortable in his leave of absence, that we will help him and have his back.
OK. I wanted to open it up to questions again. Yeah, we still had a question back here.
I just had a quick question: the vertically integrated player in the space has talked about scaling their proprietary NVLink solution up to around 500 processors. And the interconnect between each processor is many multiples of what's available on either InfiniBand or Ethernet. Could you just talk about how you think about a solution like that? Is that a mainstream type of solution? And how does that impact Arista's opportunity?
I think I understood your question. Can you just repeat the beginning? Did you speak about a specific?
Yeah. I was talking about essentially, NVIDIA has planned to scale their InfiniBand up to a two-layer architecture and up to 500 processors.
Yeah, yeah, yeah. OK. In order to answer your question, I'm going to step back a little and describe the anatomy of an AI network, if you will, and maybe that'll put it in context. So often, you just have a two-socket or four-socket CPU to do all the querying and indexing and all of that. That remains. Then you have a fleet of GPUs, and this is where our most successful GPU vendor comes in, to do all the data crunching for millions, billions of parameters: training, inferencing, et cetera. And then usually, you have to talk from that server out into the network through a transmit layer, typically a NIC. And then you either scale up or scale out. A lot of the scale-up happens with either NVLink or InfiniBand, two proprietary technologies, for a limited number of machines and processors.
And usually, it's sort of a scale-up of tensor traffic. We're really not involved in those kinds of rail-based designs within a server. Where Arista really comes in is once you have to connect and integrate a whole slew of servers, so server-to-server communication across an AI network. So what companies do within a server in a rail-based design, whether it's PCIe or CXL or NVLink, or when they connect HPC networks with InfiniBand, that's going to keep happening. That's a compute-oriented approach. Where we really come in is with sort of an AI networking approach across the servers.
Did you have a question?
Jayshree, earlier you mentioned startup and culture and single point of approval. Can you just expand on that? And second point, when I think Arista, I think Andy. So can you spend a little bit of time talking about Andy, his involvement with the company, the culture, and the business that he's built? And if he was here, what would he say for two or three minutes?
Yeah. Well, it's hard to be Andy, so I won't try to do that. But I will tell you, the culture at Arista has always been one of the strongest attractions for me to be here. We're very much a founder-led company, built by engineers. Both Andy and Hugh are very active, very committed to the company. Andy's off the board right now, and I don't think he'd really enjoy being chairman or having fiduciary responsibility; that's not Andy. He's getting deeper, diving more into AI architectures, the optics, the silicon. So if he were here, he would tell me, my god, I've never seen this kind of progress in silicon and AI advancements. And he's very actively at work on that. Ken Duda, our other founder, is very committed.
You may also have seen the introduction of Hugh Holbrook, our Chief Development Officer, who was employee number four or five in the company. These are guys who just live and breathe Arista. We're very much an engineering-led company. If Andy were here, he would tell you that for all the hardware he builds, Ken is the creator of the software, and Hugh brings these platforms together from an AI perspective. What was your other question?
I was going to add, if I can, on the culture, because I'm three weeks in, in the sense of the role, or two months if you count the beginning. The thing I really appreciate about this startup culture, for a company of this size, is that it hasn't been overcomplicated. I've worked at probably two of the biggest, most matrixed companies, and I've learned from them. But I can tell you, there's an elegance to not overcomplicating things, because it drives accountability very clearly. And so I hope you guys appreciate that's a really nice part of a $6 billion startup culture. To me, that's one of the things I've been pleasantly surprised by from that perspective.
It's also about keeping it flat. We don't have five people to get approvals from. A lot of it just went to Anshul or me or Chantelle or Ita.
Got it. We had a question here.
Hi, Jayshree. Meta Marshall here. You've put out some good indications on where cluster sizes are going into, say, 2024, I think. If some of these trials are successful, assuming they will be, for the AI Ethernet back end, you might have mentioned, say, 30K clusters, and even on perhaps past earnings calls up to 100K. There was a reference architecture showing up to 165K at your investor day. What should we think about as how big these could get into the second half of 2025? Is 100K, 200K, or even higher reasonable?
I think on the extreme training side, 100K, 200K would start to become a normal number. But on the distributed training or inference side, I think 10K-30K would be plenty. So you're going to see six of one and half a dozen of the other because not everybody is going to build the mainframe of AI. And a lot of them may build the client-server equivalent of AI. So you will see both.
And that 100K to 200K, is that just timing? Is that more of a 2025?
2025, 2026. Yeah.
Do we have any last questions? All right. OK.
Hi, Jayshree. You guys have the best $6 billion networking company in the world by far.
5.9, I think.
Yeah, for now. The one concern that I do have is you also have the best product mix, customer mix, deal size, transactions per customer. When you look forward at customer mix, product mix, and deal size, as you move to the enterprise, that's going to change. As the market switches from front end to back end and all these transitions, you may lose some more of the high-end chassis, where you're crushing the market; you're now the number one player in chassis. I guess my concern for the company isn't that it's doing great. It's that it is so great that the incremental growth may not be as great. And Anshul always talks about product mix and customer mix as what drives the business.
My question is, when you look at product mix, customer mix, and your margins and your growth rates, do you think that this can still be a 10% grower at 40%+ margins? Thank you.
Yeah, I do. The reason I do: unless we go do some wild acquisitions, in which case the 40% is off the table because we might spend some money there, if we went the way we mostly do right now, the organic way, first, I want to say we still have three major markets we're pursuing. This is heavenly for somebody who didn't always have three major markets to pursue: we've got the cloud, now we've got AI, and we've got the enterprise, including the service provider and routing markets to go along with it. At the same time, our architecture is uniform across all three. And you talked about the great products, which is really a packaging mix of different kinds for each of these.
So I'm so lucky to have three transitions going on at the same time: 100G, 400G, and 800G. Who else has that? And so if you believe my $60 billion TAM, and here I am sitting at $5.85 or $5.9 billion, shouldn't I be growing to achieve more of that TAM, especially at a time when I'm pretty much the only pure networking company with a best-of-breed platform? So in theory, I should. In execution, I need to keep doing better. Not too many companies have rapidly gone from $5 billion to $10 billion and proven that; I happen to have worked for one that did. And undoubtedly, we'll have to invest in different types of investment models. I think Chantelle's nodding her head vigorously there. And holding on to or loving our productivity model for cloud and AI won't be the same as the enterprise.
Naturally, we will put a lot of investment into the enterprise go-to-market. It got slowed down a little; I would have liked to do it faster, but during the supply chain crisis, it made no sense to invest more when we couldn't ship more. We'll get back on our front foot for that.
All right. We know contracts are great opportunities.
Thank you.
All right. We're at time. But Jayshree, Chantelle, thank you so much for being here.
Thank you, Meta.