Okay, great. I think the mic's a little low. Thank you, everyone. I have the pleasure of hosting the next session with Arista, and we have with us Martin Hull, who's the Vice President and General Manager of Cloud and AI Platforms, as well as Rod Hall, ex-J.P. Morgan and now in Arista finance. Thanks, Rod, thanks for being here as well.
Sure, Sami, good to be here.
Thank you both for the time. I'll start with a very easy, you know, softball question to hit at. A lot of the conversation over recent months has moved very quickly to scale-up networking. Mm-hmm. We understand AI is an incremental revenue opportunity for the company, but just help us think about how you're sizing the TAM, scale-out versus scale-up, and we can go from there. Particularly the part that's addressable to Arista.
Yeah, let's level set and make sure we're all talking about the same things. In this AI networking explosion, about which clearly we're very excited, and I think all of you are, most of the questions on the various investor calls are primarily about AI. What is the AI networking that we're seeing? The primary part of the AI networking is tens, hundreds, thousands of GPUs in a physical location, and you build a high-speed, full-mesh interconnect network between those. That's what we call the scale-out network. That's that full interconnect, tens, hundreds, thousands. Jayshree talks about customers getting towards a hundred thousand. In front of that is the traditional, existing data center, which is connectivity to the outside world. That's what we call the front-end network. There's a front-end and there's a back-end. The back-end is the scale-out.
What we're increasingly seeing is that at a rack level, where you might have multiple GPU enclosures, you want to be able to provide additional high-speed local connectivity between those enclosures at a single rack or a pair of racks. That is introducing this new technology of scale-up rather than scale-out or front-end or back-end. It's a high-speed interconnect that is potentially 4x or 8x higher speed than the scale-out, but it's constrained to single digits of compute nodes, GPU clusters. Because it's higher speed, fewer ports, you could do some math and say that the scale-out TAM is roughly equivalent to the scale-up, but scale-up is an emerging market. It's not here yet. It's new. It's nascent. Two years ago, we were talking about scale-out being a new emerging market. Now we're in year two, moving towards year three of that.
I forget exactly how long since ChatGPT burst onto the scene, but it feels like about two years. These are incremental TAMs, and the scale-up TAM is incremental on top of the scale-out, but it's later. I don't want to put numbers on it. Other people can put numbers on it, but it is incremental. When we think about how that relates to Arista Networks, for the first phase of that, it's going to be primarily driven by proprietary technologies in year one, maybe into year two. You're then going to start to see the introduction of an Ethernet technology for that scale-up use case. Once it's an Ethernet technology, it becomes a real addressable TAM rather than just this market sizing exercise.
We think maybe 2028, maybe sooner, maybe later, it's difficult to tell this far out, but we think in 2028 that scale-up networking for Ethernet becomes a realistic addressable market for us. I didn't put numbers on it, but roughly equivalent to the size of the scale-out network.
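As a purely illustrative back-of-envelope, not Arista's sizing: the rough equivalence between the scale-out and scale-up TAMs follows from per-GPU bandwidth going up roughly 4x to 8x while the switching domain shrinks to a rack. The deployment size, attach speeds, and tier counts in the sketch below are placeholder assumptions, not figures from the discussion.

```python
# Back-of-envelope only; every number here is a placeholder assumption, not guidance.
gpus = 100_000                      # example deployment size
scale_out_gbps = 800                # scale-out attach per GPU (assumption)
scale_out_tiers = 3                 # leaf/spine/super-spine tiers traversed (assumption)
scale_up_gbps = 4 * scale_out_gbps  # "4x or 8x higher speed" per GPU, in-rack
scale_up_tiers = 1                  # a single in-rack switching tier (assumption)

# Switch bandwidth provisioned per network, used here as a rough proxy for spend.
scale_out_switch_bw = gpus * scale_out_gbps * scale_out_tiers
scale_up_switch_bw = gpus * scale_up_gbps * scale_up_tiers
print(scale_out_switch_bw, scale_up_switch_bw)   # same order of magnitude
```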
Okay. You expressed the overall confidence that Ethernet is really where the market's going by the time you get to 2028. We saw this play out on the scale-out side as well, where we started with InfiniBand, and over time Ethernet has proven to be the more popular option for customers. Is your expectation that scale-up is also primarily Ethernet, just driven by what you saw in scale-out, or are there other reasons, including differentiation in the technology, driving that expectation that Ethernet is eventually where this industry goes on scale-up as well?
Some of the answers to those kinds of questions depend on when, not if. We've seen this within the scale-out, right? It has predominantly moved to Ethernet. There's been a crossover point. If you asked me a year ago or two years ago, and you investors were asking similar questions, will Ethernet win over InfiniBand? We were quite confident in our yes. I don't want to get quite so confident on the scale-up. It's still very early. Will Ethernet be an option? Yes. We've seen this with the introduction of new chipsets from Broadcom, the Tomahawk 6s, the Tomahawk 5 Ultras. One of the positionings for those chipsets is for that scale-up use case. There's an Ethernet option. If you're deploying scale-up networking today, you're probably doing it with the predominant supplier of GPUs, which is probably going to drive you towards using their choice of technology, NVLink.
Over time, as you get a choice in the GPUs, customers will express a preference for using something that's open, flexible, multi-vendor, over being encouraged to use a single vendor proprietary technology. There's been some moves within NVIDIA to open up some of their IP blocks so that other people can put them into their silicon. It still makes it a closed technology. We can have that debate. Do I see Ethernet becoming a significant part of the scale-up? Yes. How much share and how fast is where we'll have that debate for the next couple of years. Scale-up, absolutely. As you see new GPU vendors come in, GPUs, accelerators, TPUs, whatever you want to call them, I don't see any of those embedding these other technology choices. They embed an Ethernet choice. Scale-up is inevitable.
Whether it's Ethernet or not, we can have that debate like we did about InfiniBand versus Ethernet for two years.
Or about five other technologies versus Ethernet. We kind of know how that all played out, because you had scale economics at play as well. We'll see, like Martin says, but history has proven that Ethernet tends to be the one that ends up succeeding.
Maybe before we jump into some of these scale-up, scale-out questions and specifically to your customers, one of the other questions I get a lot from investors is how we should think about the between-data-center, or DCI, opportunity for Arista Networks. How do you play into that? What does that addressable market look like for you?
Yeah, so Data Center Interconnect is an interesting technology. I've said this before many times. When I launched our 400G portfolio three years ago, four years ago, the primary use case for 400G was for Data Center Interconnect between large multi-tiered data centers using 400G ZR technologies. At the same time that I launched that 400G portfolio, I said the secondary use case was this thing called AI and ML. People looked at me and said, for what? AI and ML? Back then, it was all about DCI. Now we're on the other side of that. The predominant use case for 800 Gb is AI. Nobody disagrees with that. The secondary use case is for Data Center Interconnect. If I'm building out a campus of data centers, I've got six, eight, twelve buildings in a local geography, I need to provide high-speed bandwidth between those physical buildings.
It might only be a kilometer or two apart. Ideally, I'd have my big supercluster stretched infinitely across all those buildings, but in reality, I've got finite bandwidth. I'm going to have to design clusters, bubbles, zones, and then mesh them as best I can. We are going to see an introduction of Data Center Interconnect technology for joining buildings together in a metro or a campus. Then we go to metro distances. Maybe we have to think about, I've got these buildings that are 20 miles, 30 kilometers apart. Can I use Data Center Interconnect? Can I get access to fiber? How fast can we push that technology? There's going to be growth in that Data Center Interconnect because people are constrained by how big you can build a data center, how much power you can get on a campus, how much cooling, how much technology.
That's forcing people into multiple buildings and stretching those buildings distances apart. Data Center Interconnect does become a key driver for a growth in the AI segment. The other debate about that is, the technologies that we announced at 400G, our R series, switches, routers, fixed and modular, were perfect for 400 Gb Data Center Interconnect. The products we've announced for 800 Gb are the same technologies. They're perfect for Data Center Interconnect. They're now just being used on that backend network, not necessarily the frontend network. We do see Data Center Interconnect as an interesting slice of that AI segment.
Okay, any way you would quantify it relative to the opportunities within the data center that you have?
It's always going to be a slice, because you're not going to put full-mesh bandwidth between those buildings. Then you're typically using more complex systems. You're maybe designing to a different set of rules. I don't want to size it. If we look at it, Data Center Interconnect for AI is going to be used by the big players, which you should probably ask about next. The bigger players will have these buildings in a similar location. When we talk about smaller customers who have got a single location or single-digit locations, Data Center Interconnect is less relevant for them. That's how it'll split. Is there a use case? Even when there's a use case, it's a small percentage of that total aggregate spend.
Got it. Okay. Moving to the large customers, I think on the earnings call you said two of your Cloud Titan customers are at or near the 100K GPU cluster size in terms of deployment. As we look beyond 2025, or even into the second half of 2025 and into 2026, what does the growth trajectory with these customers look like? Do you just continue to scale with them, or do you start to hit a plateau in terms of the deployment pace? Where do you go next with these customers that are already close to the 100K GPU cluster sizes?
Yeah, so we can refer back to the prepared remarks on the earnings call from Jayshree and Chantelle. Right. Those top customers are still on track for 100K GPUs. I don't think we've given a hard date, but we did say we expect them to be there before the end of the year. All of them are going past 100K. There's no slowdown in the demand for AI. There's no slowdown in demand for accelerators. There's no slowdown in demand for networking for those accelerators, for those AI clusters. As we go into 2026, those customers will continue to grow. We also expect to add other customers incrementally. We may not give as much detail about them. We don't see the AI growth slowing down in 2026. Based on publicly shared TAMs, I don't think anybody's seeing it slow down in 2027.
You also look at the public companies that are talking about their CapEx budgets 2025 into 2026. They're incrementally increasing their expectation of their spend. I think what gets challenging with some of that is that you can actually have a CapEx budget and be unable to use it all because there isn't enough building space, power, cooling, physical infrastructure, and accelerators to satisfy this demand. The largest consumers of AI, as I say, are incrementally increasing their CapEx budgets. Whether they're an Arista Networks customer or not, that's still a good thing. The AI market is continuing to expand, and then we can take our fair share of that.
Okay. One thing I'd just add to that in terms of background: the reason that we gave some detail around these customers is because we started off with zero share in backend, and we wanted to let people know, hey, we've got traction in backend, which now we feel pretty confident we do have. We no longer feel we need to disclose as much in terms of what customers, where, how big, et cetera. The other thing I would add is that the third customer, we did say, will achieve scale, and I wouldn't get caught up in these GPU numbers either, because all that's meant to convey is that we're at scale. We're in production with these customers. The third one will be there, we've said, early next year.
The fourth one is this InfiniBand customer that, you know, that's a slower burn, and we haven't really said much about the ramp on that. Just to be clear about that and give a little bit of background as well.
Maybe the follow-up question was going to be on the fourth customer. You obviously classify them as a large deployment or a Cloud Titan, even relative to the backend. There must be some level of confidence that they will eventually hit that size cluster on Ethernet as well. Maybe the question is, is there visibility that they will get to that deployment size with Ethernet in 2026, or is there limited visibility on that front with that InfiniBand customer at this time?
I think the debate only really happens around timing, speed of progress. We are happy with the progress we're making as a technology, happy with the speed of growth of those clusters. That's going to depend on things that are potentially outside our control. We're talking about 2026. It's a year, year and a half from now. We are extremely happy with the success. We're happy with our progress. How fast the customer chooses to go is up to them; we'll go as fast as they want to go.
Okay. Maybe I'll introduce one more nuance into that question. How much of the confidence on that customer, or the limited confidence in 2026, is driven by the InfiniBand-to-Ethernet transition itself, versus the choice between going from InfiniBand to Spectrum-X and going from InfiniBand to an open, multi-vendor Ethernet system?
Yeah, I think that the key decisions are made that Ethernet's the answer. Not to say that any technology organization or any large customer can't revisit those decisions on a daily, weekly, monthly basis. I don't want to get too far out ahead of our skis here. No, the decisions have been made that Ethernet is the right technology. That's not the doubt.
Okay. Maybe moving to the enterprise and NeoCloud category, where you specified 25 to 30 customers versus 15 prior. That would indicate you're seeing a significant step up in engagements with the sort of Tier 2 category of customers. Is there something in terms of timing that helped this quarter, or should we expect a similar continued ramp with these Tier 2 customers? How much more headroom do you see on that front?
Going back probably a year, year and a half, we were saying that every large enterprise has to have an AI strategy. That was a year, year and a half ago. That AI strategy could be, we'll put an AI project into somebody else's infrastructure. It could be that we need to have a business model, a business plan. It could be that we spin up a technology group internally to go build a pilot. Now what we're seeing, a year, 18 months later, is a number of organizations who are starting to progress from that discussion and conversation into pilots, trials, and production.
They do range from, as I say, enterprise customers who will be putting in a small cluster, tens, dozens, hundreds of GPUs, to organizations who have got access to facilities and buildings who are now spinning up AI GPUs as a service. They're putting, again, relatively small clusters into many of their existing locations. Maybe they've got access to power and cooling, and there's some subleasing going on back to other customers. When we talk about AI as a service, we talk about enterprises who are technology-centric or technology-focused. They will be starting to do AI pilots and trials now. Some of those might have a second phase and a third phase, but they're not going to be a multi-year rollout like a hyperscaler or a Cloud Titan would do. They just don't have that scope. It's like saying how many data centers can any organization have?
If your business isn't data centers, it's a single digit. Pivot that back and say, let's look at these NeoClouds and sovereign wealth funds. Yes, they're also making investments, but they have to get the funding in place, and they have to get access to facilities and power. They're going to be that second wave or third wave, and they're probably in phase one, and hopefully there's a phase two and phase three with them. They are starting later. From the NeoCloud perspective, they won't necessarily be as big as the largest players worldwide, but they are going to be significant. Some of those names would be known to you if you're studying this space. Then we have the enterprises, the tech-centric organizations, and effectively Tier 2 service providers, Tier 2 hosters.
Between all of that, there are tens to dozens to hundreds of customers who are deploying a couple of thousand GPUs in a data center. That's the scope of the scale. When we say 25 to 30, that's an estimate at the moment. Yes, we should expect that number to grow from here to the end of the year. Incrementally, each quarter, we should be adding new opportunities, new wins.
With the hyperscalers, where you had a good 18-to-24-month period of pilots to production, is it pretty similar when it comes to the smaller-scale customers in terms of the engagement before you start to see material revenue out of that in production?
There's no single answer to some of those questions, in that if it's a relatively small deployment and there aren't milestones and step functions, then it's a normal transaction. For others, where you've got phases of rollouts and milestones, then yes, you're going to see a similar cadence of trials to pilots to production. You're going to fit in all of that.
This is another one of those things where there's a perception, and potentially we've gotten questions from investors, about whether we can be as successful with these smaller cluster sizes as we have been with the big ones. Again, we're disclosing some of this to let people know, yeah, we feel like we've got good traction, good momentum there. Some of that same type of dynamic is going on from a communication point of view, just to give a little bit of background to it.
Okay, let me open it up and see if anyone in the audience has a question.
Kind of product related, but can you just walk through the pros and cons of a customer deploying a disaggregated scheduled fabric solution, like the Jericho and Ramon boxes, versus a more traditional leaf-spine design?
Okay, I'll try. You said disaggregated scheduled fabrics, that's DSF. That architecture is productized in our 7700R4 series, where you have two sets of fixed configuration devices. You have an edge leaf switch, and you have a centralized fabric switch device. That architecture is exactly the same architecture you have in the fully modular 7800. There's no difference between the architecture. There's no difference to the forwarding, the daily life of the packet, behavior, characteristics, features. What you're doing is you're allowing the customer to physically position that leaf disaggregated switch next to the compute and then have a single set of connectivity to a fabric tier. It looks like a leaf spine physically. You cable it like a leaf spine physically, but you get the benefits of a single modular chassis.
The tipping point for going from a single modular chassis to a disaggregated solution is 576 ports of 800 Gb or 1,152 ports of 400 Gb because I could try and build a bigger chassis, but most people wouldn't be able to get it through the doors of their buildings. You think I'm joking?
I know you're just.
You walk around our headquarters, those things are lined up like monoliths. We could build a larger chassis, but it's not practical. We took that concept of the modular chassis and stretched it. We can have 128 fabrics, and we can have, I forget exactly how many distributed leafs, we can scale that to more than 6,000 ports. I've got my modular chassis stretching to the sky. That's the architecture. You compare that to a Tomahawk-based single-chip architecture. Tomahawk's a great forwarding architecture. Tomahawk 5, Tomahawk 6 gives me 51 Tb of switching or 102 Tb of switching in a single device. That's great for 51.2 Tb of local I/O. Once I go past that, I need two of them. Actually, you can't do it with two because you lose half the bandwidth. To go from one, the next stop is six.
To go from 51 Tb to 100 Tb of I/O is six switches. Then it's 12, then it's 24. We scale this up to 512, 1024, 2048. It's just a mathematical progression. It's simple until you run out of ports. That first-hop switch can have half of its I/O to talk to compute and half of its I/O to talk to the second tier. When I can't split it up into any more granularity, I need to add a third tier. In contrast to that DSF architecture, I need to add in another layer of cables, more racks, more power, more cooling. We can have a conversation about optimizing for power, having LPO-class optics that halve that power at the optics level. The trade-offs are most of what I just said. Right? How big a cluster do I want to build? What's my future-proofing?
Do I want a VOQ architecture that gives me consistency? Do I want to stay with fixed-configuration devices with a higher radix, but be limited in how big I can build a two-tier network before I need to go to a third tier? We actually find customers deploying Tomahawk-based leaf switches and Jericho-based spine switches. We estimate that most customers are probably putting that kind of architecture in. Some who are scaling a little lower will be more than happy with a two-tier Tomahawk-only architecture. There are cost trade-offs, there are power trade-offs, there are other things in there. That's quite a long answer, but I think I got most of your points.
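As an illustrative aside, not something the speakers walked through on stage: the two-tier leaf-spine arithmetic in the answer above can be sketched in a few lines. The chip capacities (51.2 Tb and 102.4 Tb) come from the discussion; the port speed and the 50/50 non-blocking split of leaf ports are assumptions made for the sake of the example.

```python
# Illustrative two-tier (leaf-spine) scaling arithmetic for fixed-radix switches.
# Chip capacities are taken from the discussion; port speeds and the 50/50
# non-blocking split of leaf ports are assumptions for this sketch.

def radix(capacity_gbps: int, port_gbps: int) -> int:
    """Front-panel ports one switch exposes at a given port speed."""
    return capacity_gbps // port_gbps

def two_tier_max_endpoints(capacity_gbps: int, port_gbps: int) -> int:
    """Non-blocking two tiers: each leaf uses half its ports down (endpoints)
    and half up (spines), so the ceiling is R leaves times R/2 endpoints each."""
    r = radix(capacity_gbps, port_gbps)
    return r * (r // 2)

def two_tier_switch_count(endpoints: int, capacity_gbps: int, port_gbps: int) -> int:
    """Leaves plus spines needed for a non-blocking two-tier fabric."""
    r = radix(capacity_gbps, port_gbps)
    leaves = -(-endpoints // (r // 2))        # ceiling division: each leaf serves R/2 endpoints
    spines = -(-(leaves * (r // 2)) // r)     # leaf uplinks spread across R-port spines
    return leaves + spines

# A 51.2 Tb class chip at 800 Gb gives 64 ports per box. Doubling the I/O of a
# single box takes 4 leaves + 2 spines = 6 switches, then 12, then 24, and the
# two-tier ceiling is 64 x 32 = 2,048 ports (the 512/1024/2048 progression).
print(two_tier_switch_count(128, 51_200, 800))   # -> 6
print(two_tier_max_endpoints(51_200, 800))       # -> 2048
```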
It's a radix argument going with DSF versus.
There are break points, right? I can have a single switch, and then I hit the scale limits, and then maybe DSF is interesting in the middle. There will be a point at which even DSF can't scale that big, and I need to go back to a three-tier network or a four-tier network and do Data Center Interconnect. DSF fits into the sweet spot at a certain size. Maybe some evolution of that will allow DSF to scale more in the future. Let's see. Yeah.
Mic, please, so everyone on Zoom can hear you. Let's open mic nine, please.
Oh, perfect. Thank you. Just back to the tiering question, I was curious: once we go past 100K GPUs in a single cluster, do you think we'll need three tiers on the networking side, or how do you see that evolving?
This magic number of 100K. At the Broadcom launch of Tomahawk 6, they used 131,000 and something. I can't remember the last three digits. That was based on a two-tier network of Tomahawk 6s with all the accelerators running at 200 Gb. You can get to 131K with a two-tier of Tomahawk 6s. If you want to get past that, you need Tomahawk 7. I didn't just announce it. Yeah, right? You need whatever follows on. You need more than 100 Tb. Otherwise, you can't get past it. That's physics. You could just go to 100 Gb for every compute node, but no, that's not what people want to do. If you want to do 400 Gb, your 131K comes down. I can't answer the question about how to get past 131K without knowing how many GPUs you've got and what their connectivity is.
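For reference, a minimal sketch of the arithmetic behind that figure, assuming the same non-blocking two-tier math as above: a 102.4 Tb switch gives 512 ports at 200 Gb, and 512 leaves times 256 downlinks each comes to 131,072 accelerators.

```python
# Reproducing the "131,000 and something" figure cited from the Tomahawk 6 launch:
# a two-tier fabric of 102.4 Tb switches with every accelerator attached at 200 Gb.
capacity_gbps = 102_400
attach_gbps = 200
r = capacity_gbps // attach_gbps       # 512 ports per switch
print(r * (r // 2))                    # 512 * 256 = 131072 accelerators

# At 400 Gb per accelerator the same math gives 256 * 128 = 32,768,
# which is why "your 131K comes down" at higher attach speeds.
```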
People are then looking at data locality, moving these clusters into smaller pods. I can build a pod, a cluster of, let's say, 100K. Let's say 96K. Then I have four of those, and I connect them together with a full mesh. The data locality means that I don't necessarily have to have 200,000 or 400,000 in a single cluster. You then get into failure modes, troubleshooting, operational challenges. Connecting those pods and clusters together is a technology question of its own. It's kind of like Data Center Interconnect if they're across buildings. If it's in a building, I'm not going to have full cross-sectional bandwidth between all four pods if I don't need it. It is a high incremental cost to put in that third tier of non-blocking bandwidth.
Just in terms of the scale-up opportunity, I think the competitive advantage in scale-out in general has been the combination of hardware optimization and software. As you think about scale-up, do you think that one of those is more important than the other?
What we've seen over the last decade is that our relationship with our key customers has meant that they come to us for our best-of-breed hardware designs, even if they choose not to run our software. There are a number of reasons for that. Efficiency of design: our engineering team actually produces lower-power systems compared to designs that look identical on paper. Above that, between the hardware and the software, there's this hidden middle layer of intelligence, whether it's power management, CPU management, link efficiency, link training, SERDES, identifying unknown errors before they become a problem. That middleware of value comes through EOS software, but a customer that's running their open operating system is still using that same middleware to control our hardware. The efficiency of the hardware design, absolutely. If you put two designs side by side, I'd always stand behind ours for obvious reasons.
I think that interaction between our hardware and our middleware intelligence, that's fundamentally everything we've done in the company for the last 20 years about how we program the hardware, manage the hardware, identify issues, even down to the manufacturing processes that we use to make sure the customer gets a high-quality product. If you have one and the competitor has one, you put them side by side, maybe they behave very similarly. Get 1,000, get 10,000, get 50,000 of these things, you start to notice the differences. Ultimately, in the scale-up backend network, hardware design and software management of that hardware design, the quality, the traceability, all the other intelligence that we put into our products will still be an advantage.
I mean, there's a strategic element to that too, which is the lock-in NVLink provides. Ethernet releases that lock to some extent, or makes it less strong. That's another part of the case for Ethernet. Like Martin said, 2028 is more the year we would start to expect, and maybe see, a little bit of something happening there.
I would just be curious to get a sense of how you think the UEC, with, I think, moving a lot of the kind of routing and traffic control functionality to the NIC, would affect Arista Networks' product strategy going forward.
In June last year, when we had our Investor Day in New York, we talked about all our hardware being UEC-ready then. The UEC 1.0 spec that came out doesn't change the products we have. The Tomahawk 5, the Tomahawk 6, the Jericho 2C+ and Jericho 3s, and everything in between, they're all UEC-ready or UEC-compatible. When you come back to the question I answered before about radix and scale, UEC doesn't fix how many I/Os a chip has got. I still have to build this very large network. If it was my money I was spending on a network for AI, I'd want to go best in class, best in breed. When we think about the percentage of spend on the network infrastructure, I think optics are more than half of that.
I'm not going to save anything by going cheap on my network if I have to put a third tier in. That comes back to the question of what other advantages these deep-buffer systems have. It's a safety belt, belt and suspenders. I can use all the UEC features that are out there, and hopefully they're perfect and nothing ever goes wrong. But when a link fails, when an optic fails, when I, for whatever reason, have some links that are a bit variable, don't I want the intelligence and the smarts and the buffering so that I can actually investigate, troubleshoot, remediate without just pointing at the two end points and asking which one of them messed it up? I want intelligence in the middle. Right?
A month ago, two months ago, we talked about our AI ops, the EOS advantages you have for monitoring and troubleshooting within that network infrastructure. I'm going to want to have the best network that I can get. Okay.
Okay. Maybe moving to, not Tomahawk 7, but Tomahawk 6. With the recent announcement of the chip, what does the typical gestation period look like in terms of Arista working with Broadcom on a new chip and getting a product out to customers? Do you see any major changes from the Tomahawk 5 generation that would have implications on market share for Arista?
Broadcom launched Tomahawk 6 two months ago now, in June. I was part of one of the launch videos that they published, so clearly I knew about it before they launched it. I actually have one sitting on my desk at home. It's a mechanical sample, don't worry. I've got a Tomahawk 6. At Arista, we don't pre-announce products. We don't tell you when a product's going to come out. We have that conversation with our customers under NDA. We're working on joint development. We were working on these joint development activities before Broadcom announced the chip. We can't physically get started on the engineering until their first samples turn up, then we get to higher quantities, and we can't get to production until they do. Broadcom has their own release process from samples to production. That will typically take anything up to a year.
You would realistically expect that our production of any product based on their silicon would align to their historical timing for samples to production. I don't want to pre-announce an Arista product, and I certainly don't want to speak to how long Broadcom may or may not take on this version of the chip. When we get to whatever that point is, we expect to have a variety of products designed in cooperation with the customers that we're working with and for more general purpose markets. I think at the Tomahawk 6 generation, the leading edge is going to be quite out ahead of the mass markets. We do expect to have a variety of products designed for the right customers and the right use cases. In that scenario, we would absolutely expect to get our fair share of this market opportunity.
You're probably going to ask me, what's my fair share? My number won't agree with your number. We are very excited about Tomahawk 6, the innovations around 800 Gb and 1.6T. There are a few new interesting features in that silicon, which we'll unleash through software, and then we'll ship the products when we've completed our development and the product and the customers are ready. What you sometimes see, and we've seen this with our joint developments with customers, is that we might actually be shipping a product and not have told the public about it. It's going to that customer for their use cases. You've seen us do this; there's history of this, right? Just be careful sometimes with how you read some of these things.
Okay. One of the questions we often get from investors on this front, although we haven't really seen it historically, is why customers don't pause a bit when they know 1.6T is about to ship. They still continue to buy 800 Gb, or right now they still continue to buy 400 Gb while ramping on 800 Gb. Do you expect to see customers pausing at any point, and what is the explanation for why they don't, even though they know there's a higher-bandwidth solution coming?
There is the micro answer and there is the macro answer. 100 Gb technology is still shipping in very high volume. Why? Why isn't everybody going to 400 Gb? Most people do not need 400 Gb. If I do not need it, why would I pay for 2x of bandwidth? It is not the same price, but it might be the same price per bit. If I do not need it, why would I buy the next highest speed? It is going to be driven by the use cases. Is there a use case of 1.6T? Yes. I need optics. I need GPUs. I need NICs. I need the whole ecosystem lined up behind that. I need to have it in my hands. I need to test it. I need to qualify it. I need to start a new pilot. I start planning a rollout and go forward.
What am I going to do to my business in the interim? Put it on pause for a year, 18 months? What if the technology that I am hoping for in the future slips, gets delayed? You have taken a significant risk. People will keep on deploying 800 Gb because it is here and it is shipping in volume. Some customers are still deploying 400 Gb because they started rolling it out on a multi-year evolution. You cannot just call time on that and switch to 800 Gb without another qualification cycle. You are going to see these overlapping waves of technology from 400 Gb to 800 Gb to 1.6T and 3.2T. They are coming quicker. It is good. Can customers afford to hit pause and wait for a year? Not often.
There might be some parts of their infrastructure where they say, "I have no plans to go to 800 Gb over here." Over here in the corner, where it is a new technology or a new deployment, I will start with 800 Gb. You are going to see these waves, and, as I say, there are micro answers and there are macro answers. 800 Gb is growing rapidly, but 400 Gb is not in decline. These are incremental. 1.6T comes out with the 200 Gb SERDES. We do not see 800 Gb dropping and we do not see 400 Gb dropping. Quite frankly, 100 Gb is still stable, based on 25 Gb SERDES. All of those different speeds can coexist in the market at the same time. If you do a sum-product of ports and speeds, the bandwidth shipping in the industry is going up year over year over year. These are all growth opportunities.
Okay, I'm going to try and rapid fire through a few questions here.
I'll try and rapid fire my answers.
I'll jump around in terms of topics. Apologies for that in advance. CPO, going back to scale-up, how should we think about the need for Arista products to support CPO to gain opportunities in scale-up? How critical is supporting CPO to eventually seeing share in scale-up?
The lowest cost, lowest power connectivity for short distances is copper. For scale-up opportunities, it may not even be CPO. We've got no objection to CPO, COBO, LPO, NOVO, whatever you want to call these technologies. We've got no objection. We will do what our customers want, but we have to be convinced that it's the right technology. If it is, we'll absolutely implement it.
Okay. Blue box. You've talked about, or at least Jayshree's talked about the blue box opportunity. Give us a sense. What does it look like? How is it different from what you're doing now?
I think it's more a case of us describing what we're already doing. I can't remember where the question came from about hardware and software and the advantages and differentiation. Blue box is an Arista product that's got all our engineering intent baked into it, all our engineering and manufacturing diagnostics, software, intelligence, reliability, manufacturing DNA, all baked into that product. That is the Arista blue box. That's a rapid-fire answer. That blue box is that. I think you'll hear a lot more about that at the analyst day and towards the end of the year. We believe that our products differentiate in the market with or without the EOS software.
We haven't talked enough about our hardware advantage, including that middle layer that Martin Hull talked about. We want to talk more about that because we're in hardware mode now. The engineering challenges are ramping so rapidly, you probably wouldn't believe it if we were to dive into that. We want to be a lot clearer about that advantage that Arista Networks has. We have people like Andy Bechtolsheim working super hard every day. Most people know who that is, and he's a pretty good hardware engineer.
Just to be clear, what you're saying is it's the hardware plus a middle layer of software or firmware, for lack of a better term, minus the EOS.
Right.
Yes.
What does the margin structure on such a product look like? Once you take EOS out, how materially does it impact your margin structure?
There is a margin structure on a product like that. No, we're not going to talk about margin structure there. I mean, we try to get paid for our value, though. We will say that. We do have customers who will pay for that added value, and we do add value.
Let me ask it another way. How different is it going to be, from a margin structure perspective, from the white box companies? Where does the differentiation on margin come in?
You get paid for value, and if you're adding value in the hardware layer and this middle layer Martin talked about, then you get paid for that. We aren't going to quantify the differential between those two things. You won't see us do that.
Okay. Moving to the front end, one of the things that you mentioned, I think on the earnings call as well, is that there's definitely a pickup overall, not only in backend investment but in what you're seeing on the front end. We get this question a lot: what's driving the investments on the front end? Can you correlate that purely to investments in AI, or is there a non-AI driver that's now driving the upgrades on the front end? And when you're seeing the investment on the front end, is it a volume investment, or are customers upgrading from 400 Gb to 800 Gb?
Very rarely would a customer go back to a production data center, turn it off, rip out the infrastructure, and replace it with 400G. What they do is look at the next new deployment and say, "That's going to be the new design. I'm going to put the new design in this building." Whether they went from 100 Gb to 400 Gb or from 400 Gb to 800 Gb, these aren't really upgrade cycles. It's the net new wave of the new technology. For an enterprise customer, if I only need two data centers, then I might start an evolution and transition within one of them, and I might do that upgrade cycle. Back to your question, the growth in the front end is multifaceted. There's definitely a pull-through from the back end.
That is, as we're seeing more and more inference, we're seeing public reporting from customers about the impact this is having on their front-end data centers and their wide-area and backbone networks. There's a growth in traffic as we all increasingly use these AI tools and resources, or as AI-as-a-service and enterprise customers start deploying. Yes, there's growth in traffic on the front end. For a few years, there was this rush to AI, and maybe there was an underinvestment in some of the front ends, so some of that will be a little bit of catch-up. Catch-up, growth of traffic, and then remediation of five-year-old and six-year-old technology, which is the technology refresh cycle that's always there. Those are the drivers.
We're also seeing, in the enterprise, a little bit of repatriation of traffic that may have moved to the cloud. Maybe it's moving back from the cloud, and there's some of that going on as well. All of those different waves are happening at the same time, and we're having different customer conversations about what their drivers are.
Got it. Okay. Last one. There's been a lot of focus recently on sovereign opportunities. You had this bunch of announcements coming out of the Middle East. Notable to investors has been that one of your larger peers has been mentioned and is participating, whereas Arista Networks has been visibly absent on that front. How do you think about the opportunity and tapping into it? Is it going to be through partnerships with the larger partners that find their way into those announcements? How do you think about Arista Networks' position to tap into the sovereign opportunity?
Yes. We would look to partner with the technology companies that have already been announced for that. We don't necessarily announce something until it's meaningful, real, and in the rearview mirror. Don't get carried away by the fact that we may or may not have been named in any particular announcement. We might be involved. We might not be involved. I wouldn't necessarily read into the headlines that we're not there. Sovereign wealth funds are definitely interesting in this space, and we're fully engaged.
Okay, I will wrap it up there. All right, thank you for the time.
Thank you.