Arista Networks, Inc. (ANET)

Wells Fargo 8th Annual TMT Summit

Dec 4, 2024

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Perfect. So why don't we go ahead and get started? I'm Aaron Rakers, the IT hardware analyst here at Wells Fargo. I'm pleased to have Martin Hull, who is the Vice President and General Manager of the Cloud and AI Platforms business at Arista, as well as Brendan Gibbs, the Area Vice President of Product Line Management at Arista. I think you joined not too long ago?

Brendan Gibbs
VP of AI Routing and Switching Platforms, Arista Networks

Yeah, I just joined earlier this year in March.

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Perfect. So we're going to talk a little bit about this thing called AI and networking, and we're going to see where that goes from there. But first of all, thank you for joining us. I think where I want to start is at the high level, because I get this question all the time: how do we think about the intensity of networking in AI, especially as we go from what used to be, not that long ago, 10,000 GPU clusters to 100,000-plus GPU clusters? How do you guys define the network intensity, maybe relative to what traditional networking looks like?

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

Feel free to go wherever you want with that.

Brendan Gibbs
VP of AI Routing and Switching Platforms, Arista Networks

Sure.

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

So I'll take that first. I mean, what a surprise, we started with AI, right? The investor community is hyper-focused on AI, which I think is completely appropriate. We have committed to $750 million of AI back-end revenue for fiscal year 2025. On the most recent earnings call, Jayshree and Chantelle reiterated that, but they also communicated that there's an incremental pull-through of the front-end network, which is an additional $750 million.

So the AI back-end network target for us is $750 million for fiscal year 2025, which is calendar year 2025. As for the network intensity of that AI back-end network: there's been a shift from how people would build traditional data center networks on the front-end, where you'd have a degree of oversubscription.

The total amount of networking in a normal data center isn't the sum of all the parts. You don't expect everything to be working at full speed all the time. You pivot to the AI back-end network, and it's the opposite. The expectation is that the GPUs are going to be running at 100% throughput all at the same time and constantly for hours, days, or weeks.

So the network has to be architected with that paradigm: I need more networking than the sum of the parts. I want to put in additional capacity so that I can deal with a device, link, or optic failure without causing any outage or performance impact to the compute, that GPU cluster performing these high-performance jobs. So that network intensity, that network paradigm shift, has happened in the AI back-end networks.
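A minimal sketch (Python, with hypothetical port counts) of the sizing difference described here, an oversubscribed front-end leaf versus a non-blocking AI back-end leaf:

```python
# Front-end fabrics tolerate oversubscription; AI back-ends are built
# non-blocking, with headroom on top. All values here are hypothetical.
def uplink_ports_needed(downlink_ports: int, oversubscription: float) -> int:
    """Uplink ports a leaf needs for a given downlink:uplink oversubscription."""
    return round(downlink_ports / oversubscription)

LEAF_DOWNLINKS = 32  # e.g., 32 x 400G ports toward servers or GPUs

print("front-end, 3:1 oversubscribed:", uplink_ports_needed(LEAF_DOWNLINKS, 3.0), "x 400G uplinks")
print("AI back-end, 1:1 non-blocking:", uplink_ports_needed(LEAF_DOWNLINKS, 1.0), "x 400G uplinks")
# Operators then add spare links/spines on top of 1:1 so a device, link, or
# optic failure does not stall a job running the GPUs flat out for weeks.
```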

When you think about the ratios, though, the ROI, sorry, the total spend on a GPU node is an order of magnitude higher than the spend on a traditional compute-only node. The ratio of how much networking, from a dollar perspective, has gone down, I would say, because the GPU cluster cost has gone up so high. We have more networking in the back-end, but as a ratio of the spend, it's gone down; I think it's still in that mid to low single digits. The other aspect on the AI back-end network, and you said we'd see where this goes, is the interconnect: the optics and cables.

One of the large spend items is those optics. There's a CapEx and OpEx aspect to them. Anything we can do that makes the compute as close as possible, sorry, makes the network as close as possible to the compute, means I can use lower-cost optics or cables that get the CapEx down. If they're consuming less power, it gets the OpEx down. If I make them more reliable, it brings the reliability, the uptime, and the quality up. So those are aspects that we've been looking at in terms of how do we make the AI back-end networks perform better.

And this is where we bring our value proposition about engineering and architecture and design and multi-standard interoperability. So that's how I think about the network intensity. It's very intense. It's not oversubscribed; it's undersubscribed. And that's why we are just as excited about AI back-end networks as it seems all of you are.
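The ratio Martin cites can be made concrete with a toy calculation; the dollar figures below are hypothetical, chosen only to match the "order of magnitude" relationship he describes:

```python
# Hypothetical spend figures: a GPU node costs roughly an order of magnitude
# more than a compute-only node, so the network's share of spend shrinks even
# as the absolute networking content grows.
cpu_node, cpu_net = 15_000, 2_000     # traditional server + its networking share
gpu_node, gpu_net = 300_000, 12_000   # GPU node + its (larger) networking share
print(f"traditional: {cpu_net / (cpu_node + cpu_net):.0%} of spend on network")  # low teens
print(f"AI back-end: {gpu_net / (gpu_node + gpu_net):.0%} of spend on network")  # low single digits
```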

Brendan Gibbs
VP of AI Routing and Switching Platforms, Arista Networks

Yeah. Martin, if I could just jump in.

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

Yeah.

Brendan Gibbs
VP of AI Routing and Switching Platforms, Arista Networks

Martin, you were talking about the relative spend on a GPU or server compute node relative to the network. I think one of the things that's so exciting for me and for Arista overall is that because the majority of spend and the majority of power is on GPUs, the relative leverage of the network is outsized: a best-of-breed, well-tuned, high-performance network can unleash the capabilities of the GPUs.

And that's even though it's a much smaller fraction of the spend and a much smaller fraction of the power. As one of the global leaders in not only AI but also overall Ethernet, that really speaks well to the Arista opportunity for helping these customers get the most out of their massive spend on GPUs. The way I like to think about it is that networks are essentially the nervous system of an AI cluster (and each GPU, of course, is like a little neuron).

Networks are required, and Arista builds the best networks. We have a technology advantage, we believe, and a leadership capability with our best-of-breed software that then helps differentiate and make that network perform better, so it fits very well with what you said, Martin, in terms of the intensity of server versus GPU, but a best-of-breed network then makes the compute even better.

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

So I can't help but ask this, because I'm a numbers guy and I look at spreadsheets. I mean, mid- to low-single-digit, first of all, that is just the network switch spend component. That's not the optics and the cabling and everything, correct?

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

Yeah. When we talk about our 2025 AI number, it's the switch infrastructure, not the optics.

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

That mid- to low-single-digit is comparative to what historically? I think Jayshree and I, we've talked maybe in the past, low-teens, mid-teens. What is the traditional?

Brendan Gibbs
VP of AI Routing and Switching Platforms, Arista Networks

Yeah.

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

So I think it's in the high single digits. And I think it's because the cost of the GPUs is an order of magnitude higher for the GPU clusters.

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Understood.

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

That's why it's moved a little bit, and it's always difficult to try and figure out exactly where it is. We've got a portfolio of products, from shallow-buffer fixed systems to deep-buffer modular. The ASP of a port is variable, so when you calculate that ratio against the compute spend, it's going to move a little bit.

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Yep. That's perfect. So back at your event in mid-2024, you had Hock Tan get on stage. We talked about these scale-out cluster deployments growing exponentially, right? Like 10,000 going, I think he showed a slide, to 60,000 GPU clusters, 80,000 GPU clusters. Now we're talking 100,000. Where does this go in your mind? The radix scaling continues; we're going to see that obviously continue to advantage Ethernet, maybe, in these architectures. But simple question: where do we go from here?

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

So there's a couple of aspects to that one, right? And I think that the networking technology can continue to keep pace with these customers' ambitions for a few years. Then you look at other limits, space and power. There's a lot of discussion in the industry about how do we get access to more power, how do we get real estate, how do we get buildings.

And so I think what will become a limiting factor on the size of some of these clusters is going to be power availability and physical space. Once you hit that, I don't just build one large cluster; I build multiple large clusters and then interconnect them.

So we create pods, islands that are then interconnected. If my jobs need to stretch across them, they can, and I can segment. And then we're going to have synchronous and asynchronous parallel technologies running across these areas. So how big does it get? There are some people out there, leaders of these companies, talking about very, very large numbers, hundreds of thousands of GPUs.

It's not necessarily clear whether that's all a single cluster or a total aggregate number of GPUs across some number of clusters with an interconnect. We can build out, and we have built out, front-end networks where we measure our tier-one customers as having more than a million servers.

That's a very small set of customers. If you start talking about having a million GPUs, again, that's going to be a very small set of customers who are going to have a lot of expertise in how to build both front-end and back-end networks. But it's not unrealistic that you could have customers with a million GPUs. We've already seen public references to hundreds of thousands, but they're not necessarily all in one cluster.

Brendan Gibbs
VP of AI Routing and Switching Platforms, Arista Networks

But also, you have to think about what the GPU is in this context. I mean, what is it connecting at? Because you could go, like Martin said, from 32K to 100K to a couple hundred K. If they're all at 100 Gb or 400 Gb, you could grow out, scale it as a scale-out model.

You can also think about this as each GPU continuing to improve in terms of performance. Maybe it goes from 400 Gb to 800 Gb to 1.6T, and you grow from that perspective. We're going to be ready regardless. All of the systems we're selling now are 800 Gb ready. Today, that really means 2x400: you have stamped-out GPUs where each GPU is connected at 400 Gb, and each one of our ports does 2x400. Eventually, GPUs are going to be capable of processing 800 Gb.

We're already ready, but technology will progress. Just like Martin said, you could scale out hundreds of thousands, multiple hundreds of thousands, or you could scale in terms of performance per GPU or both.
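A small sketch of the 2x400 breakout Brendan describes; the port count is hypothetical:

```python
# Today's 800G switch ports typically run as 2x400G breakouts toward GPUs.
SWITCH_PORTS_800G = 64                  # hypothetical fixed-config system
gpus_at_400g = SWITCH_PORTS_800G * 2    # each port split into two 400G links
gpus_at_800g = SWITCH_PORTS_800G        # same box once GPUs consume 800G natively
print(f"{SWITCH_PORTS_800G} x 800G ports -> {gpus_at_400g} GPUs @ 400G, or {gpus_at_800g} @ 800G")
```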

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Yep. That's fascinating. I mean, it feels to me like one of the other elements of this might be, and Martin, I think you were alluding to this a little bit, that it's maybe not singular clusters we think about, but how we then go and connect disparate clusters. That's not in the 750, would you argue? Like if you start connecting over data center interconnect or geographically dispersed clusters, that's another element of this?

Brendan Gibbs
VP of AI Routing and Switching Platforms, Arista Networks

So if it's connecting multiple back-end networks together with an interconnect, that would only be a very small percentage of it, but it would be included in that back-end networking number. If you transition to the front-end and you're using your existing data center interconnect, how do we spot that anyway? And that's one of the things we've said as we go forward: we can talk about the 2025 number, but there will come a point in time, whether it's 2026 or 2027, where between AI and data center, can we really spot the difference? Because the products that we sell are the same. So it's only when we're tightly engaged with the customer, and we understand the relationship and what they're using these systems for, that we can identify it as AI. But even today, if you ask about the data center, how much is leaf, how much is spine?

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

The same products.

Brendan Gibbs
VP of AI Routing and Switching Platforms, Arista Networks

So there will come a time (and I don't want you guys to ever stop talking about AI) when it's harder and harder for us to identify pure AI as against AI-adjacent. Connecting multiple clusters is already a thing now, because Martin hit on one of the key pragmatic realities: space and power. I mean, we can think abstractly that 100,000 GPUs means X ports and so on, but you're talking about hundreds and hundreds of thousands of cables and optics. The sheer amount of space that requires is pretty staggering. Power is measured in gigawatts.

You pragmatically have to connect things at longer distances just because you can't physically cram all the boxes and the cables and the optics in one physical space.
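A rough back-of-envelope count behind the "hundreds and hundreds of thousands of cables and optics" point; the links-per-GPU figure is an assumption for a non-blocking three-tier design:

```python
# A non-blocking three-tier fabric adds roughly one inter-switch link per GPU
# per tier boundary (assumed; real designs vary).
GPUS = 100_000
LINKS_PER_GPU = 3        # GPU->leaf, leaf->spine, spine->super-spine
links = GPUS * LINKS_PER_GPU
optics = 2 * links       # upper bound: a transceiver at each end of every link
print(f"~{links:,} links, up to ~{optics:,} optics")  # ~300,000 links, ~600,000 optics
```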

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

That makes sense. Maybe I should have started here, but I mean, it seems like it's played itself out, or at least I think the investor mindset understands where we're going. But the delineation between Ethernet and InfiniBand, for those that haven't seen it, you guys did a recent webcast and talked a lot about this. But where are we at today in that transition point? Why is that transition point happening? Does InfiniBand go away? Your views on that architectural debate.

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

Yeah. So as a technology company, we are 100% committed to Ethernet. And so that 750 million number is an Ethernet number. There's no InfiniBand in there. If we look at the landscape today, there's a single vendor of InfiniBand. There's multiple vendors for Ethernet. You can get into the commercial aspects of this one and say that these large customers and their supply chain and sourcing teams want diversity of supply chain.

Right? We all worked that out five years ago: if you didn't have diversity of supply chain, you were left with some problems. So supply chain diversity, and technical innovation across a multi-vendor, standards-based community, gets you to a better outcome. We've seen this over the last 50 years, since Ethernet first became a technology. So Ethernet will win out over time. And then we start to debate how much, how fast, when, who, where.

There's some other aspects to it, and that is that there's an Ultra Ethernet Consortium, which Brendan can speak to, in terms of making some enhancements to Ethernet. Ethernet is a great technology for AI today, but that's not to say you can't improve on something that's really good. So there'll be some improvements to Ethernet coming in the next year or so; I think it's about a couple of quarters away.

Then we come down to commercials. The cost per Gb or the cost per bit on Ethernet tends to be lower. Part of that is due to this multi-sourcing and the ecosystem that innovates in that technology space. So InfiniBand today is deployed in a lot of AI clusters. We've also seen a lot of the very large customers come out publicly and say that they are committed to Ethernet.

And part of that is because they're building their own accelerators, their own GPUs, XPUs. You can roll the names off; they've come out and said they're building their own in-house accelerators. None of those has an InfiniBand adapter on it; they're all Ethernet-based. So over time, as these customers roll out their own homegrown technology and take control of their own destiny, it's inevitable that Ethernet wins out over InfiniBand. When and how is something we can constantly debate, but I think it's inevitable that Ethernet wins out.

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Yep. That was the conclusion we came to as well. One of the things, and I cover NVIDIA as well, and we'll get to Spectrum and talk a little bit about the competitive landscape there: RoCE v2 has been out for a while, right? RDMA over Converged Ethernet. NVIDIA talks about SHARP, what they can do with in-network compute around things like congestion control, in-network reduction, etc. Does Ethernet have that capability? Just to segue into the Ultra Ethernet Consortium and what needs to be done to take Ethernet that next step.

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

So I'll quickly talk about the RoCE v2 side of it, and then I'll pass over to Brendan to talk about UEC and the enhancements that are coming. RoCE v2 is not a new technology. I've been at Arista for 13 years, and I remember customers asking about RoCE v2 at least 10 years ago: "Will this work? Do your products support it?" This was 10 years ago.

So RoCE v2 is not a new technology; it's been embedded in our products for at least a decade. That's not a new enhancement, it's been there, whether people have been using RoCE just for RDMA or for early AI-type clusters. So it's great; can you do better than great? What's happening in the Ultra Ethernet Consortium, Brendan? Sorry, I'm asking questions for you.

Brendan Gibbs
VP of AI Routing and Switching Platforms, Arista Networks

Yeah. Perfect. Well, I mean, first, I want to recognize that everything we're doing today with Arista Ethernet for AI, all of the $1.5 billion that we projected for 2025 AI revenue, none of that requires Ultra Ethernet. So what we're talking about is the fact that Ethernet today is well suited for the purpose with RoCE v2. It just works.

That said, it's complicated. So what we're seeing with the UEC, Ultra Ethernet Consortium, they're inventing something called Ultra Ethernet Transport, which is UET. It's more of a good-to-great scenario. So we're not reliant upon it, but we can anticipate improvements in the future. What the consortium is really trying to address is the complexity and cost associated with Ethernet. And keep in mind that Ethernet's already way lower cost than InfiniBand. What we're trying to do is further improve it from there.

So the reality is you can build large-scale networks with Ethernet today, but there's a complexity associated with that because now you have to do complex load balancing, complex congestion management, and the tuning of these parameters to assure a lossless transport so you don't drop packets is complicated.

So what we're trying to do with UEC, well, first of all, I think we're expecting the first specification, which should be meaty and weighty, probably in the Q1 timeframe. Don't hold us to that, but that's what the consortium is looking at. There are over 100 member companies, so this is a significant initiative with lots of different vendors, Arista being one of the initial founding members. What we're really anticipating is that Ultra Ethernet is going to bring a few things. Number one, it's going to bring higher scale at lower cost.

It will also bring higher reliability with more modern congestion management, and it'll also bring integrated security. I'll start with that latter point. There's really no integrated security whatsoever in InfiniBand. What Ultra Ethernet is going to do is bring native encryption to the entire AI workflow; it'll just be part and parcel of the flow of any UET-compliant capability.

And what this is going to do is especially help multi-tenancy. Think about any cloud provider offering GPU-as-a-service: they're going to have multiple tenants with their own AI workloads. Having, as part of the design of the transport, separate group encryption, so that one tenant's workload remains encrypted and completely separate from another tenant's workload even though they're on the same network, is going to be part of the advantage Ultra Ethernet has, not only over today's Ethernet, but of course over InfiniBand.

I think we're also going to have, like I said, more modern congestion management. One of the reasons we build lossless networks, as Martin was saying before in answer to your intensity question, essentially no oversubscription, line-rate performance, is that there's a large tax to be paid if you drop a packet.

Because AI is fundamentally a collective solution, you're only as strong or as fast as your weakest link. If you start dropping packets, you need the entire workload, the entire collective, to pause, and there's something called Go-Back-N: go back to a prior state and resend. That slows down the entire workload, the job completion time, or JCT, you might have heard of; it slows down the entire AI workload.

What Ultra Ethernet is aiming to do is essentially eliminate that tax of a dropped packet. Of course, we still don't want to drop packets, but sometimes things happen, whether because of a failure or because of congestion. So Ultra Ethernet will deliver mechanisms for very rapid acceleration, so you get to wire speed quicker, and also much faster recovery from any sort of failure.
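A toy model, not a RoCE or UEC formula, of the Go-Back-N "tax" Brendan describes, where one drop forces a replay of roughly everything outstanding behind it; the window size is an assumption:

```python
def goodput_fraction(loss_rate: float, window_pkts: int) -> float:
    """Approximate useful fraction of sent packets if each loss replays a window."""
    return 1.0 / (1.0 + loss_rate * window_pkts)

for loss in (1e-6, 1e-5, 1e-4):
    print(f"loss={loss:.0e}: goodput ~{goodput_fraction(loss, 10_000):.1%}")
# Even rare drops inflate job completion time (JCT) for the whole collective,
# which is why these fabrics are tuned to be lossless in the first place.
```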

Then the first point I raised, like I said, going backwards, is more scalable and cheaper. These are all related. Part of the way Ultra Ethernet will achieve that fast scale-up and scale-down capability is by moving to what they call packet spraying on the individual NICs.

This should make the NICs even cheaper, because, and this answers your question about SHARP and that sort of thing, part of the way some vendors offer their solution is with what's called a DPU: not a traditional NIC, but basically a higher-performance smart NIC with a DPU on it, such as Pensando at AMD or NVIDIA's BlueField-3.

These tend to be higher-cost, higher-power solutions because they push some of the intelligence right down to the NIC. Ultra Ethernet will obviate the need for that. It'll offer packet spraying, which gives you very high-entropy load balancing over the network, pushed down to the NIC.

So you get lower-cost NICs, which gives you much higher scale for your network overall. So again, this is a good-to-great scenario. Arista is leading in AI and Ethernet today, and Ethernet is really going to be the choice for future AI. Ultra Ethernet will just make that even better: more secure, cheaper, more scalable.
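A simple sketch contrasting per-flow ECMP hashing with the per-packet spraying described above; flow, path, and packet counts are hypothetical:

```python
import random
from collections import Counter

PATHS, FLOWS, PKTS = 8, 4, 1000   # a few large "elephant" flows, as in AI collectives
random.seed(7)

# ECMP-style flow hashing: every packet of a flow sticks to one path, so a
# handful of large flows can pile onto the same links.
ecmp = Counter()
for flow in range(FLOWS):
    ecmp[random.randrange(PATHS)] += PKTS

# Packet spraying: packets of a single flow are spread across all paths,
# giving high-entropy load balancing.
spray = Counter()
for flow in range(FLOWS):
    for pkt in range(PKTS):
        spray[pkt % PATHS] += 1

print("ECMP load per path: ", [ecmp.get(i, 0) for i in range(PATHS)])
print("spray load per path:", [spray.get(i, 0) for i in range(PATHS)])
```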

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

You gave me a lot to unpack there in a future report, so I'm not going to do it today; I'm going to keep this simple and high level. Version one of the standard releases in Q1, so what does that mean for product timing? When does it show up in products?

Brendan Gibbs
VP of AI Routing and Switching Platforms, Arista Networks

One of the smart things they did, and it makes this a complex answer, is that a lot of these things are optional, because the spec is meant to allow today's products to be compliant. Everything that we're shipping today is Ultra Ethernet compliant.

There are features that will push onto the NICs. Now you're going to ask, "When will the Ultra Ethernet compliant NICs start to ship?" I don't know. The spec, like I said, is expected in Q1. A lot of the NIC vendors, like Broadcom, AMD, NVIDIA, are all members of UEC; that's up to them. As far as features on the network side, we'll have to see once the spec is finalized, but I wouldn't expect new hardware; it's basically software upgrades to existing products.

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

Exactly right. For some of the control plane and signaling of these features, it'll be a software upgrade to existing products. So you're not reliant on new hardware or ripping out and replacing everything; it's an incremental software upgrade, so people can retrofit existing networks into UEC. And as I said, it's optional; if you don't have it, it doesn't turn on.

And one other point, just to finish off what Brendan was saying, we said there's over 100 companies in the Ultra Ethernet Consortium. It's end customers, systems vendors, and silicon vendors, making this a true consortium of the players who are most interested in the future of Ethernet.

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Yep. That's perfect. I want to go down the path of the competitive landscape in the time we have left. So maybe the first question, getting into the competitive landscape: there are all kinds of estimates out there about how large this market is, right? I know 650 Group is oftentimes referenced, and I think their TAM estimate has been like $20 billion; I think it was massively raised in mid-2023. Your 750, right? You have like a 40% market share in high-performance networking.

750 doesn't seem commensurate with that level of market share when there's $20 billion overall. I'm just trying to...

Brendan Gibbs
VP of AI Routing and Switching Platforms, Arista Networks

Yeah, I think you've got to dig into a lot of the details. So I would say it's $750 million back-end, but also $750 million front-end, so I would say $1.5 billion for us.

When you look at the $20 billion, that subdivides into a lot of pieces. A big part is NICs. A big part also, surprisingly, is optics and cables, and then there's the networking portion. The networking portion is maybe $6 billion-ish in 2025, of course growing from there, out of that $20 billion 650 Group specifically cited. And we're forecasting $1.5 billion next year in terms of revenue.
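Doing the math Brendan invites, using only the figures quoted in this exchange:

```python
tam_total = 20e9   # 650 Group AI networking TAM: NICs + optics/cables + switching
switching = 6e9    # the "maybe $6 billion-ish" switching portion for 2025
arista_ai = 1.5e9  # $750M back-end + $750M front-end target
print(f"implied share of the switching slice: {arista_ai / switching:.0%}")  # ~25%
```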

You can do the math. What I would say, though, is that the exciting thing about what 650 Group and other analysts are seeing for market share, for the broader TAM, is two things. Number one, we're seeing a lot of growth projected from 2025 to 2028, and all of that is coming from Ethernet; InfiniBand is forecast to be flat to down. So you asked earlier, is InfiniBand going to go away? No, of course not. Customers have it; they're going to want to continue to infill that.

But the growth is all coming from Ethernet, which really speaks well to the opportunity ahead of us. Like Martin said, we're completely Ethernet-based, and we're a leader in global Ethernet for data center and for AI, so that's great for us. The second thing I would note is that if you look at not only AI but overall spending for data center, it's an "and."

So you're still seeing consistent growth in traditional Ethernet-based data center, plus the extra spend for AI Ethernet. So there are multiple vectors for continued growth, both of which we're going to continue to compete vigorously for. I see multiple avenues of potential growth for us there.

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

So this question popped into my head as you were talking. A lot of this growth, I think, is widely anticipated. But do you think we have fully appreciated the refresh cycle of these deployments? Because a lot of this has been net-new greenfield deployments. How do you guys think about it?

Because that's a big part of Arista's traditional business, right? You go through these server refreshes in the core business. Should we not think that these refresh cycles would kick in on these big GPU cluster opportunities three, five years from now? I don't know.

Brendan Gibbs
VP of AI Routing and Switching Platforms, Arista Networks

Well, there are a couple of things I would say to that. Number one, as Martin said, eventually AI and data center are all going to be the same sort of thing. So if you think about the long tail of enterprises, which are just getting started thinking about AI, they're not in large-scale deployments.

For a lot of them, it's going to be a refresh of their data center. They're going to refresh the front-end for inference or other sorts of things, and it's going to look like a refresh of the data center. So as the number one market share holder for global Ethernet, that refresh, even if it's driven by AI, is still going to be an opportunity for us.

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

Yeah, so that's the blurring of the line. What's traditional?

Brendan Gibbs
VP of AI Routing and Switching Platforms, Arista Networks

Exactly. It definitely blurs, and I think you also hit the nail on the head on another advantage of why Ethernet's going to win: as customers look to upgrade their back-end, 100 Gb to 400 Gb to 800 Gb, Hopper to Blackwell to whatever comes next.

You're going to have an opportunity to upgrade the network, and that network can then move from the back-end and get reused. So if you have a 400 Gb switch, put that in your front-end and upgrade the back-end. That's a refresh cycle for us, but it's also an opportunity for customers to have consistent operations, a consistent platform, to help them with the TCO of the network. You don't get any of that with InfiniBand. That's another reason customers are betting on Ethernet, and it's another reason we're going to benefit from that.

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

That makes a ton of sense. I think I asked this question on the last earnings call: does Arista winning the back-end just pull in the front-end, or vice versa? Wouldn't there, in theory, be a goodness to having both Arista front-end and back-end? Isn't there a better-together architectural view on that?

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

Clearly, we would certainly support that idea. So you're right, and it's like the front-end is pulling in the back-end at the moment. If you're using Arista in the front-end of your data center today and you're looking for a partner for the AI back-end, then clearly you're going to at least call on us.

Incrementally over time, once you've got that AI back-end, if we secure it with Arista technology, that incrementally helps us. So these things are complementary, and we're never going to push back on using the same vendor in both places. But you do have that supply chain diversity, right? So there are choices; one of the benefits of Ethernet is that choice. So we have to be earnest and attentive with our customers, making sure we understand their short-, medium-, and long-term plans.

We continue to do that, right? And then you talk about what's the upside. That's why we communicated that $750 million for the front-end. That is an identifiable pull-through from the AI back-end, characterized as the upgrade pull-through cycle.

And Jayshree on the earnings call referred to the fact that it depends on what the front-end network looks like today: whether it's an all-new deployment, where there's a lot of pull-through; whether it's an upgrade; or whether it's just a retrofit of certain parts. So we'll see that spectrum. Between 30%, 100%, or 200% pull-through, we kind of put the midpoint on it: if it's $750 million on the back-end, it's about $750 million on the front-end.
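The midpoint arithmetic Martin describes, using the quoted pull-through range:

```python
backend = 750e6
for pull in (0.30, 1.00, 2.00):   # the 30% / 100% / 200% range cited
    print(f"{pull:.0%} pull-through -> ${backend * pull / 1e6:,.0f}M front-end")
# The midpoint view lands near 100%: ~$750M front-end on $750M back-end.
```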

It does get harder and harder to characterize. We'll be able to do that in large deployments. But take the enterprise customer Brendan referred to, deploying a new AI-driven enterprise application: if they have a conversation with us about building a new data center, is that AI-driven, or is it just enterprise-application-driven? The lines definitely get blurred there. In a few years' time, as we go down that path, I think we're going to continue to see these rollouts and these upgrade cycles, but what the trigger is will be harder to tell.

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Perfect. So I'd be remiss if I didn't ask you guys about the competitive landscape. I mean, you guys have executed superbly in competitive, high-performance traditional networking for many years. This AI dynamic obviously has NVIDIA as part of the equation with Spectrum-X. What have you seen competitively in that regard? And maybe I'll throw out there, in the five minutes we've got left, other names too, like Scheduled Fabric or Silicon One. How do you characterize the competitive landscape?

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

So we are eternally grateful to NVIDIA for helping stimulate this AI market with the GPUs. But when we talk about networking, we've got our networking technology and NVIDIA has theirs, and obviously we will compete on that AI back-end. If you look at the existing high-speed networking market today, which we characterize as 100 Gb or higher, we have the number one market share, and Cisco has the number two market share.

So clearly they're a strong competitor there, and we continue to do what we can to gain share, and hopefully they don't do the same. So I'd say NVIDIA and Cisco are two of the large competitors out there. Given their history on the compute side, you can't discount HPE, and you can't discount Dell, because they've got that ability to attach compute to networking, and they have a networking portfolio as well.

If you go outside the U.S., you have some of the smaller Chinese vendors out there as well. So that's how I'd name them. But Cisco has the number two market share at the moment on the AI back-end network, and NVIDIA is clearly a strong presence there as well.

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Yeah. I'm going to put this out there because I hear him talk about this all the time; I think Jensen brought it up on the last call. He talked about Spectrum-X providing 1.6 times the performance benefit of traditional Ethernet. I think the key to that is "traditional" Ethernet.

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

Non-optimized Ethernet. Versus what, our EtherLink product platform? So that's not a competitive comparison, right? It's always interesting to have somebody say that their product's better than something else without necessarily publishing all the test results to show how they got there.

We continue to work with our customers on designing, deploying, and tuning the Ethernet networks that they have. And there's many customers out there now saying that, A, they're using Ethernet, B, they're using Arista Ethernet. So clearly they're getting the results that they like, and we don't necessarily have to go around and point out failings of other companies.

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Yep. That's perfect. So look, we've got a couple of minutes left. I'm just going to ask, at a high level: in the engagements you have, the discussions you have with customers, how do you characterize the durability of demand in this environment? I mean, there are these vectors: power availability, and recently, over the last few weeks, the question of whether we've hit a plateau in scaling of large language model performance. But how would you characterize what you see from a demand durability perspective?

Brendan Gibbs
VP of AI Routing and Switching Platforms, Arista Networks

The first thing I would say from a demand perspective, and then I'll pass to Martin, is that the customers are starting to see ROI. I mean, you can look at multiple different hyperscalers who have taken the most aggressive, quickest leaps into this, and they're starting to see the business ROI.

That's critical, of course, because no one's going to keep spending billions of dollars on this if they're not seeing the business benefits. So we're starting to see it: Google has said publicly that 20% of their coding can be done with AI; we've seen Meta say they've had more successful algorithms from AI than from kind of the former cookie-based model. You're seeing multiple different scenarios; you guys can all see these public reports.

So the customers themselves are seeing the benefits from making the investments, which gives them the impetus to continue investing, which in turn gives us hope and anticipation of continued success, because AI doesn't seem like it's going to go away. We're starting to see...

...AI getting more and more entrenched in the longer tail of enterprises. Even though it's earlier days, you can start to see key use cases arising that are going to drive more durability into their needs: financial services with fraud detection, life sciences maybe accelerating drug trials or finding key interactions they might not otherwise have found. So these are key, tangible business benefits that speak to the durability of AI as a really sustainable type of technology investment.

Of course, we have invested significantly ourselves at Arista in having the broadest technology portfolio, the broadest product offering, with the highest-quality software. The more AI gets consumed, the longer AI lasts, the more we think we're going to benefit.

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

Then you come to the technology side of it. We announced, as you referred to, mid-year, our 800 Gb Ethernet portfolio. That's now starting to ship in production from our manufacturing; John McCool referred to that on the last earnings call. As we go into calendar year 2025, for AI it's going to be an 800 Gb cycle. The industry is already talking about what comes after 800 Gb, and the answer is 1.6 terabit. It seems fairly easy; we just keep doubling numbers.

That's predicated on 200 Gb SerDes: an upgrade of the underlying technology in the silicon that goes into the systems and into the optics, to get us to 1.6 terabit without doubling the cost. That's the reason people do this.

The industry is already talking about what's after 1.6 terabit. We can all do the math, as Brendan referred to earlier: 3.2 terabits. As we go from 400 Gb to 800 Gb to 1.6T to 3.2T, we believe we can keep up with the growth of the AI clusters, the language models, and the pace of evolution of the XPUs or accelerators,

whether those are commercial or homegrown accelerators. As customers start to build out higher-performance, larger clusters, we think the networking can keep up. We go from 7 nanometer to 5 nanometer to 3 nanometer.

And I don't think the networking is going to go below 3 nanometers in the next few years, even though the CPUs and the GPUs are already moving to 2 nanometers. So networking normally stays roughly one process node behind where the compute is. So we've got a path to follow the compute process nodes. We've got a path to go from 100 Gb SerDes to 200 Gb SerDes to go from 800 Gb to 1.6 to 3.2.
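A tiny worked example of the SerDes lane math behind those port speeds; the eight-lane port assumption is for illustration:

```python
def port_gbps(lanes: int, serdes_gbps: int) -> int:
    """Port speed is lanes times per-lane SerDes rate."""
    return lanes * serdes_gbps

for serdes in (50, 100, 200):
    print(f"{serdes}G SerDes x 8 lanes -> {port_gbps(8, serdes)}G port")
# 50G -> 400G, 100G -> 800G, 200G -> 1.6T; the next doubling yields 3.2T.
```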

So we can see that pathway. Is technology going to get in the way of the AI clusters? I don't think so. Are the AI clusters going to continue to get deployed if there's tangible business ROI for the hyperscalers or for the enterprises? Absolutely.

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Yep. Martin, Brendan, thank you so much.

Martin Hull
VP and General Manager at Cloud and AI Platforms, Arista Networks

Thank you.

Aaron Christopher Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Appreciate it.
