Really fast, low-latency switches in the first category, leaf-spine in the middle, which we're doing, and scale-across is where, in fact, the combination of our 7800 chassis and many types of leaves will really help us succeed. I'm going to skip a couple of these slides in the interest of time, and I know Hugh and Andy will cover it, but I do want to cover one thing: although we all get access to the same silicon, power is king and queen. Arista builds very optimized drivers and some of the best hardware designs, and you can see here that this can translate to $3.5 million a year, or $15 million in savings over four years, which is stunning. You can literally get your switches for free at that point if you get the power right, right?
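To put a rough frame around that power math, here is a back-of-the-envelope sketch. The per-switch wattage delta, fleet size, and electricity price are illustrative assumptions chosen only to land near the savings quoted above; they are not Arista figures.

```python
# Back-of-the-envelope power savings; every input is an illustrative assumption.
WATTS_SAVED_PER_SWITCH = 500   # assumed efficiency delta vs. a less optimized design
NUM_SWITCHES = 10_000          # assumed fleet size
USD_PER_KWH = 0.08             # assumed electricity price
HOURS_PER_YEAR = 24 * 365

kwh_per_year = WATTS_SAVED_PER_SWITCH / 1000 * HOURS_PER_YEAR * NUM_SWITCHES
annual = kwh_per_year * USD_PER_KWH
print(f"annual: ${annual / 1e6:.1f}M, four-year: ${4 * annual / 1e6:.1f}M")
# -> annual: $3.5M, four-year: $14.0M, the order of magnitude quoted above.
```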
That's Arista's maniacal focus: even on the same platform from the same chip vendor, Arista can design it better, and that's a big deal. Same thing with 800 gig linear drive optics. Once again, we were able to save millions of dollars per year because you're getting rid of that DSP and you're co-locating the switch and the optics, the electrical and the optical, as close together as you can, and this is becoming very, very popular at 800 gig. I would be remiss if I didn't talk about the features that make EOS hum.
We are working on a suite of features not only on the back end that you've heard of, like cluster load balancing, high availability, and dynamic congestion control, but the front end is now getting stressed, and we need multi-tenant scale, we need encryption scale, we need streaming telemetry, a feature we call STANS, to make all this work. The back end is putting pressure on the front end to make the AI network fully work, and therefore the Ethernet portfolio is not just about high-speed hardware, but very rich features. Many of you have been asking me, but what about the white box, Jayshree? You must be competing against them, and you must be wondering how to deal with that challenge. First, I want to take you back in history. We've dealt with white box since the beginning of time, and of course, they'll exist.
ODMs, JDMs, cheap, cost-effective platforms at 10% gross margin or whatever it is, may be good enough for someone. That's not Arista. Arista is trying to provide value, and value means good price, good performance, and good capability, right? Having said that, we have worked on co-development, so we don't look at this as build versus buy. We often work with our customers on a build-and-buy strategy, so we can co-develop with them and make them better. We've done that with SONiC, we've done that with FBOSS, and we've done that, you can see, across three platforms already: the 7388, the Meta XPAC, as well as the DSF 7700, which was an Ethernet platform. To us, our strong partnerships are key to rapid deployment of our products and rapid co-development of their products. Both are very important for us.
Why would they work with us if they can just go to a white box? To understand that, I first need to share with you the Blue Box philosophy. You have a NetDL foundation; that's our EOS stack. On the hardware, you typically have a switch abstraction interface. I'm not telling you anything new right now; these are all things we have. You have validation tools that come from us or from the customer. Often, the customer deploys hundreds of engineers to do that, or we do it for them. In the enterprise, we have a set of deployment guides and tools for our customers. What's missing? The missing layer that makes Blue Box really hum is this diagnostics layer, which is getting more and more difficult, between your software stack and your hardware.
It's not just the drivers and the switch abstraction interface. It is the ability to create an environment and a foundation so that we can run any NOS: an open NOS like SONiC or FBOSS, or our own NOS, Arista EOS. It's this diagnostic layer that is fundamental and strategic to creating the Blue Box. What is the Blue Box? It is a suite of features that we call NetDI, Network Diagnostics Infrastructure, that allows us to work between that highly complex multi-layer hardware we have and the EOS software layer to give enough troubleshooting, validation, signal integrity, L1 events for optics and cables, L0 events for passive components like flash and memory and power supplies, deployment checkers, and control trackers for our manufacturing teams, and to bring this all together at massive scale. It's a suite of functions we've actually been working on for many years. It's a well-kept secret.
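To make the layering concrete, here is a minimal sketch of the kind of event record a diagnostics layer like NetDI might surface to the NOS above it. The classes and field names are hypothetical illustrations, not the actual NetDI schema.

```python
from dataclasses import dataclass
from enum import Enum
import time

class Layer(Enum):
    L0 = "passive"   # flash, memory, power supplies
    L1 = "physical"  # optics, cables, signal integrity

@dataclass
class DiagEvent:
    layer: Layer
    component: str     # e.g. "PowerSupply2" or "Ethernet3/1 optic"
    metric: str        # e.g. "temperature_c" or "rx_power_dbm"
    value: float
    threshold: float
    timestamp: float

def crossed(event: DiagEvent) -> bool:
    """Flag events that cross their threshold for the NOS layer above."""
    return event.value > event.threshold

evt = DiagEvent(Layer.L1, "Ethernet3/1 optic", "temperature_c", 78.0, 70.0, time.time())
if crossed(evt):
    print(f"[{evt.layer.value}] {evt.component}: {evt.metric}={evt.value} exceeds {evt.threshold}")
```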
We've had, you know, just on EOS alone, 30,000 man-years. We're running at any given time 300,000 diagnostic tests a day, and this is a large part of it. If you want to build anything at scale, and not just build a throwaway box, the Blue Box is fundamentally enhanced by this firmware and diags layer. It's a huge work of art between our hardware, our troubleshooting teams, our firmware, and our EOS layer. We're very proud of it, and you're going to hear more about it.
Think of it as if you have a product, which is our hardware, and you have a NOS, open or closed, and you have AIOps on top of it; the green layer is all the stuff behind the scenes that we are doing between silicon, power controllers, FPGAs, booting things up, the control plane, data plane, and management plane, how they talk to each other at crazy speeds, high-speed SerDes, and making it all work. In fact, just as NetDL has its own state database, this NetDI has its own little mini one too, to work with NetDL. It's a mini-me of NetDL for the hardware layer and the L1 functions we have to do.
It's very foundational, and this Arista Blue Box complements the work we've done with state in NetDL and NetDI to really bring these kinds of functions: tools, diagnostics, signal integrity, quality control, secure boot loaders, passive flash component management, active cables, optics, loopback management, and finally, dashboards for deployment. It's amazing. It's all the secret behind what we do in switching that most of you don't get to see, but most of us get to work on. When you put this all together, you see the Arista advantage: you have the NetDL architecture, you have the diagnostics now, with NetDI coming in, you have the actual hardware that you see running all of this, and then, increasingly with CloudVision, you'll see more and more AI-driven predictive tools that go beyond the topology, telemetry, and observability we're all doing already to offer natural language processing and queries for different types of events.
I'm so excited about this that, finally, I want to share with you that we are a rare breed of company. When I put this slide together, we were at $150 billion market cap, but wherever we are, it took us record time to achieve our first billion in 2016. We went public in 2014, and I think it's going to take us record time to achieve our first $10 billion. Our commitment to you for next year is $10.5 billion in 2026, which should be 20% growth on extremely large numbers. What does this consist of and why? We think the TAM for this in 2029 is north of $100 billion, so we had better capture our fair share and keep growing from that point on. It would only be fair with all the innovation and technology we're doing.
As part of that number, there are going to be two very fast-growing markets. The campus, which includes the branch, we think is going to grow at 60% with the addition of VeloCloud, and we're aiming to go from that $750 to $800 million number this year to $1.25 billion, trying to add another $500 million there. Ambitious goal, and we're signed up to it. Of course, the AI market, which, as I described to you, now has to include both the back end and front end, will converge to grow at anywhere from 60% to 80%. If we end the year at $1.5 billion, it's an 80% growth. If we end the year a little lower, then it's a 60% growth, but somewhere in that range, we're looking to achieve this number.
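As a quick sanity check of the implied arithmetic, using only the figures as stated above (this is a reading of the numbers, not guidance):

```python
# Sanity-checking the growth math; all figures are as stated in the talk.
campus_base, campus_target = 0.8, 1.25           # $B: ~$750-800M growing to $1.25B
print(f"campus growth: {campus_target / campus_base - 1:.0%}")  # ~56% off the $800M base

ai_target = 1.5                                  # $B: "if we end the year at $1.5B"
ai_base = ai_target / 1.8                        # base implied by 80% growth
print(f"implied AI base: ${ai_base:.2f}B; at 60% growth: ${ai_base * 1.6:.2f}B")
```

The 60% case lands around $1.33 billion, which is why a finish a little below $1.5 billion corresponds to the lower end of the range.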
Out of our $10.6 billion, two fast-growing markets will definitely contribute to achieving this, and I'm very excited because I think for the first time in a long, long time, we're seeing, you know, we always worried about what's the TAM, what's the market, what's the acquisition we have to do. We're seeing something that's sustainable for multiple years. We're doing that with the foundational technology we have with NetDI, with NetDL, with AVA, and for those of you who'll watch the video at the break, we're also doing it with a suite of partners that goes beyond NVIDIA. Last year, we announced the NVIDIA partnership, and no doubt, they're a market leader in AI, but it's going to take a whole ecosystem of innovators to do it, not just ourselves, but, you know, working with the LLM models, working with other GPUs, working with storage.
I would like to invite Fred to show you a little demo of the Agentic AI and how we've simplified it.
Please welcome Fred Hsu.
Thanks, Jayshree. What I'm going to show you here is a demonstration of our Agentic AVD and how it can simplify workflows for our operators. What I'm going to do is ask the agent here to add a new VAST Data storage endpoint to our network, and the agent's smart enough to know what are our best practices, what are the features we need to turn on, and make this storage network flow really well. Additionally, we've partnered up with VAST Data to be able to call out to their APIs and configure the storage device as well. Not only am I setting up the network, but I'm also going to set up my storage endpoint.
We can do the same thing with another partner, so I'm going to have this also reach out to Pure Storage and create a virtual IP so I can set up a whole Pure Storage environment and configure the network as well. Now, building this on top of our AVD framework gives us two really big advantages. The first is that everything's built off of a data model. This helps us constrain the LLM and reduce the chances of hallucinations when we're generating these configs. The second thing you can see here is that as it generates the configs, it also generates network tests. That's an extra layer of safeguards: if we do something wrong, we'll catch it once we actually get into deployment. We'll tell the agent now to go ahead and deploy those configs and then run the tests.
We've now pushed these changes out to our network, and we get our test results back saying that yes, everything went well, so the network's fully deployed. Just to double-check, we can look at the VAST Data dashboard, and we can see that our connection's been established. Very quickly, we've deployed an entire new storage network and taken what usually takes maybe weeks or months and knocked it down to hours and minutes. What's more, if you're a storage person or someone who's not necessarily a network expert, you're now able to configure and deploy things on the network without necessarily having all that expertise. Thanks.
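To make the demo's data-model point concrete, here is a minimal sketch of the idea: the LLM proposes structured data rather than raw config text, invalid output fails validation before anything reaches a device, and each change carries a generated test. The class, field names, and test command are hypothetical illustrations, not the actual AVD schema or the VAST or Pure Storage APIs.

```python
from dataclasses import dataclass
from ipaddress import ip_address

@dataclass
class StorageEndpoint:
    hostname: str
    ip: str
    vlan: int
    mtu: int = 9214  # jumbo frames are typical for storage fabrics

    def validate(self) -> None:
        ip_address(self.ip)                        # raises on a hallucinated address
        assert 1 <= self.vlan <= 4094, "invalid VLAN"

    def to_test(self) -> str:
        """Generate a post-deploy reachability check alongside the config."""
        return f"ping vrf STORAGE {self.ip}  # expect success after deploy"

# The agent's LLM emits structured data, never freeform config text:
proposed = StorageEndpoint(hostname="vast-node-1", ip="10.10.20.5", vlan=120)
proposed.validate()   # constraining the LLM: bad output fails here, not in production
print(proposed.to_test())
```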
You see, I remember stressing over this when we were doing Fibre Channel over Ethernet and storage emulation, and it was days, it wasn't even hours. Thank you, Fred. This is a real demonstration of what we can do with AVD, which is our Arista Validated Designs. Before I end and transfer it over to our new President, I just want to thank you all. I think it's been an exciting journey, and I'm a worrywart. I always worry about the next quarter and the next year, but I think what's going on here at Arista Networks and as an industry has been transformational and will continue to be for many years to come. Thank you.
Please give a warm welcome to Kenneth Duda.
Oh, thank you all very much for coming. Really appreciate the chance to talk to you here. What I'd like to focus on is AVA, our Autonomous Virtual Assist, and what we think can be achieved in AIOps, AI for networking. Of course, AVA is built on a foundation, and I'm going to start by talking about what that foundation is. Many of you have seen this before, but I'm going to say it all again because the foundations really matter. This is structural competitive advantage right here in the architecture of the EOS stack. In the EOS stack, on top of the switch hardware, we have the NetDI layer of diagnostic infrastructure that Jayshree told you all about. I'm not going to go into so much detail on that in this talk, but it's a lot of stuff.
High-speed signaling, signal integrity, dealing with power and cooling, all the different scenarios, making sure the switch is really going to work under lots of operating conditions, dealing with single event upset when subatomic particles from outer space come in and hit the switch ASIC. Is your switch going to survive that? All of this sort of low-level hardware integrity validation, this layer of software, is a major source of value of the Arista Blue Box platform, regardless of what operating system is running on top. In the EOS stack, naturally, on top of NetDI, we run EOS, one operating system for all of the use cases across the infrastructure. EOS feeds into our Network Data Lake, on top of which we run CloudVision and AVA, the Autonomous Virtual Assist.
Having this consistent architecture across all domains of the network is a major competitive advantage for us because one OS, one architecture, is just better for the customer. This infrastructure has got to work. Reliability is critical. One of the main enemies of reliability in infrastructure is too many different configurations, too many different versions, too much complexity from having all that variance across your infrastructure. I've talked to tech leaders, networking infrastructure leaders. One of them at one of the largest banks in the country told me that from our competitor, he's running more than 200 different operating system versions. He has to track all of this, all the differences between them, the little flukes and differences in the protocols, the bugs, the security vulnerabilities. What a nightmare. With EOS, you have one OS to learn, one image to qualify.
This is actually really important: one API to automate against, because one problem you have as an operator is that you have all these different operating systems, they're all a little different, and your automation systems have to cope with all those differences. If you make it all the same, you do it right once, you do it right everywhere. It's just easier to configure, there are fewer mistakes, you get more reliable operations at scale, and you can address more use cases this way. Having one OS across the whole domain is just better for the customer. It's also better for us. Can you imagine if you're a software engineer dealing with that menagerie of different software versions? How do you test all of this? How do you make sure it all works properly? At Arista, we have one image to test.
We run tens of thousands of tests every day, tests running fully autonomously. That means software testing software, 24 by 7. We can all go on vacation, and the software is still being tested. These runs test against every hardware platform, every branch of code, all of our older releases, and the work in progress, the new features being developed, all autonomously tested continuously. It all comes down to this principle: there's a development team at Arista Networks that's responsible for quality. We don't say, oh yeah, we write the code, but these other guys, the QA guys, they're responsible for making sure our code works. No. The software developers are responsible. When you give people both the mandate and the responsibility to take ownership of the quality of their code, you get better code, and that's what we've done.
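As a small illustration of why the autonomy matters, the coverage is a cross product that multiplies out quickly; the platform, branch, and suite names below are an illustrative subset, not the actual matrix.

```python
from itertools import product

# Illustrative subset; real coverage spans every platform, branch, and release.
platforms = ["7060X", "7280R", "7388X", "7700R", "7800R"]
branches = ["4.31-maint", "4.32-maint", "4.33-maint", "trunk"]
suites = ["routing", "dataplane", "management", "upgrade"]

jobs = list(product(platforms, branches, suites))
print(f"{len(jobs)} combinations, before multiplying by the tests in each suite")
```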
Sometimes people ask me, what's your evidence that EOS quality is actually that much better than your competitors'? I'd like to offer you a model here, which is the model of the iceberg of bugs. Imagine bugs organized into a gigantic iceberg floating in the water; poking out above the surface are the CVEs, the tip of the iceberg. Remember, not every bug is a security vulnerability, but every security vulnerability is a bug. The CVEs are simply the visible subset of bugs that are publicly reported, categorized, and classified. If you look at the size of the CVEs, maybe that says something about the size of the whole iceberg of bugs. Now, if you look at the public databases, you'll see that Arista EOS has dramatically fewer security vulnerabilities than other network operating systems.
The tip of our iceberg is one-tenth the size. What does that say about the size of the overall iceberg? I wish I knew how big their iceberg was. Unfortunately, it's not publicly disclosed, but I believe it's probably about 10 times bigger. If we've got one-tenth the CVEs, we've probably got one-tenth the bugs of other types. Certainly, from talking to customers and to the field about people's experience, I think that bears this out as well. You know, we can talk about fancy features all day long, but at the end of the day, what the customer cares about the most is: is my network working? That's what we care about the most as well, which is why quality is always our highest priority. Let me go back to the EOS stack and the software foundation. On top of the switch, we run a standard Linux release.
Alma 9 right now is the release we're on. On top of Alma 9, we run NetDB. This is an Arista database that contains all of the state of the switch, everything from hardware attributes, power supply voltages, temperature sensors, fan speeds, control plane stuff, what's going on with BGP and MAC learning and IGMP snooping, and management plane activity as well, network authentication, that sort of thing. All of that state of how the network is running is stored in NetDB across all of the switches. In about 2014, we suddenly realized, wait a minute, we've got all this information in the switches. What if we stream that all out of the switches continuously, generating a stream of updates indicating the state of each device into a common scale-out database, which we call NetDL?
All of the updates stream out, and NetDL winds up with a time series, a historical record of every state in the network across all of the devices in one place. This state foundation is so valuable. Having all of your state in a common representation, in a common infrastructure layer, enables provisioning, security, compliance, telemetry, and of course, AI. NetDL actually contains multiple types of state. There's the low-level state about the switches: interfaces, counters, the stuff that was actually streamed out, the flows, the events on the switch, link traps when links go up or down, temperature events, things like that. NetDL also maps all of those lower-level concepts to higher-level concepts: users, devices, applications, services, and incidents.
These are higher-level ideas that come from the ability to observe across the whole network and also to bring in data from other sources, including vCenter, OpenShift, Kubernetes, DNS, TLS header inspection, OAuth, and RADIUS servers. All this information comes into NetDL along with information from the switches, regardless of where the switches are. They could be virtual switches running in the public cloud, switches on your campus or in your data centers, or all the way across the WAN service provider core. This is again the advantage of having a common architecture across every domain of the network. You bring all this information from all these different places into a shared common database that then supports end-to-end visibility, end-to-end uniform provisioning, and a consistent treatment of network upgrades and software updates, of security incidents, of CVE handling. All of those things are unified.
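Stepping back to the mechanics of the stream itself, here is a toy sketch of the append-only idea: each device's state updates accumulate into a central, queryable history rather than overwriting the latest value. The paths and API are hypothetical stand-ins, not the actual NetDB or NetDL interfaces.

```python
import time
from collections import defaultdict

# Toy time-series store keyed by (device, state path).
history = defaultdict(list)

def stream_update(device: str, path: str, value) -> None:
    """Append each update rather than overwrite: NetDL keeps the history."""
    history[(device, path)].append((time.time(), value))

stream_update("leaf1", "interfaces/Ethernet1/counters/in_octets", 1_024_000)
stream_update("leaf1", "environment/psu1/voltage", 12.02)
stream_update("leaf1", "environment/psu1/voltage", 11.87)

# Any past state is queryable, not just the latest sample:
for ts, v in history[("leaf1", "environment/psu1/voltage")]:
    print(ts, v)
```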
That same NetDL is the foundation for AVA, our Autonomous Virtual Assist. Now I'd like to finally talk a little more about AVA. AVA, just from the name, the A's are very important. AVA's not a chatbot, okay? Chatbots just sit there and wait around. You type in your question, get back your answer, and they go off and do something else. No. AVA is an autonomous agent running all the time in your network, always watching, always trying to understand what's normal, what's common, why is that happening, how does this compare to that, looking at events, trying to figure out what's important, what's changing, what it might need to alert the operator about. AVA's autonomous in that respect. Very importantly, the second A: AVA's an assistant. The media talks endlessly about how AI is coming for all of our jobs. Maybe someday.
I don't believe this current generation of technology is taking away any network engineer jobs. Not yet. It's not ready. What it is ready to be is a fantastic assistant that can help the network engineer, help the network operator deal with the complexity of their network, deal with all the different tools that are available, and just how hard it is to operate in a modern environment. If you look inside AVA a little bit, AVA's constructed in layers. At the lowest layer is NetDL, of course. NetDL is the state foundation for AVA.
On top of NetDL, we run the AVA runtime, which includes LLMs, standard off-the-shelf, a context engine that builds the prompting for the LLMs based on context elements that come from observing what's happening in the environment, a tool manager that helps manage all of the different things that AVA can do to get more information or even make changes to the network, talking to telemetry systems, obviously CloudVision telemetry, but also third-party systems, a policy and safety engine for the obvious reasons, and finally an MCP client so we can connect to arbitrary MCP servers within an AI environment. On top of the AVA runtime, we build specific agents for specific functions, state machines and prompting engines, history reducers for each of the different areas of network operations.
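A skeletal sketch of the loop those layers imply: the context engine builds the prompt, the LLM decides the next step, the policy and safety engine gates every tool call, and observations feed back in until the agent can answer. Every name here is a hypothetical illustration, not the actual AVA API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str            # "tool" or "answer"
    name: str = ""
    text: str = ""

def run_agent(question, llm_decide, tools, policy_allows):
    """Skeletal agent loop: context -> LLM decision -> gated tool call -> repeat."""
    context = [f"user: {question}"]
    for _ in range(10):                       # bound the loop for safety
        action = llm_decide(context)
        if action.kind == "answer":
            return action.text
        if not policy_allows(action):         # policy/safety engine gates every tool
            context.append(f"denied: {action.name}")
            continue
        context.append(f"{action.name} -> {tools[action.name]()}")
    return "escalate to a human operator"

# Toy wiring: one telemetry tool, a scripted stand-in for the LLM, allow-all policy.
tools = {"get_link_status": lambda: "Ethernet1 down"}
script = iter([Action("tool", name="get_link_status"),
               Action("answer", text="Ethernet1 is down; check the optic.")])
print(run_agent("why can't Alice reach the internet?",
                lambda ctx: next(script), tools, lambda a: True))
```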
First, Ask AVA: basic question and answer based on our knowledge of what's happening, drawing on our documentation, the user's network, bug database entries, CVEs, and tech support history. All of that is available to Ask AVA. Monitoring, of course, is always watching, like I said. AVA provisioning is an assistant for making configuration changes to the network; I think you saw an example of that in Fred's demo. Finally, AVA troubleshooting. When things do go wrong, how do you put the pieces together and figure out what's happening? AVA troubleshooting is an assistant to help respond to incidents and network issues. In summary, we have a multi-domain operating model across the entire estate, from the cloud to the campus and everything in between, with a single consistent OS and consistent state management, giving the customer consistent operations as a foundation for their environment.
Now what I'd like to do is turn this over to Todd to talk to you about how we harness this architectural foundation and these shared elements across the whole estate, and really focus in on our strategy around campus. I'd like to welcome Todd to the stage.
Thank you. Now coming to the stage, Todd Nightingale.
Don't go anywhere. You're coming right back. I really appreciate it. This is an amazing event. My name's Todd Nightingale. I'm the COO here at Arista Networks. I'm new, and I cannot tell you how excited I am to be here, of course, at Investor Day, but most importantly here at Arista Networks. It's been a phenomenal experience to get to look across the whole business, from the technology to the go-to-market and, of course, the operations. One of the most amazing opportunities I think we have in front of us is the campus TAM, and I'm not sure we always get to talk about it as much as we'd like to, but it really is an enormous and profitable business for us, and we are in so many ways just getting started.
This is a part of the network that for years was the largest spend across the industry, and today more than ever, it is ripe for modernization, and truly for Arista Networks' flavor of modernization, the kind of differentiation that we provide. There's a ton going on. There's an explosion of devices thanks to IoT, but also a high diversity of smarter and smarter devices with more and more intense networking needs. There's phenomenal focus right now on spend and OpEx across every industry, and that's putting pressure on NetOps efficiency. There is a real acceleration in attack velocity, and when we're talking about CVEs, we have to talk about how we secure these networks and how we deliver zero trust networking. All of this adds up to a need in the market for the kind of innovation that Arista Networks has always delivered.
There is no longer really such a thing as a network that is not mission critical. We used to think of mission critical networks as being for military sites and tier one hospitals, but I assure you, every hotel whose Wi-Fi went down when guests needed it thinks its network is mission critical. Every retailer who couldn't process a payment believes their network is mission critical. Every school during testing week, every university, every manufacturing plant building for a surge. It's 2025: every network is mission critical, and now more than ever is the time for us to take the reliability that Arista Networks has always been known for, and the foundation of EOS that's made it possible, and bring it to the campus network. This isn't new.
Arista Networks has been innovating in this area and pushing towards this surge in campus for years, and this has been our strategy: delivering that truly always-on network, truly bringing mission critical, always-on networks to the entire industry. That is our focus in so many ways. That is who we are. The campus also means focusing on zero trust networking. Jayshree, I think, alluded to our strategy in the best possible way: providing best-in-class networking security, whether it's on the firewall side, segmentation, or the NAC solution we've invested so much in, and not forcing our customers into a SASE solution or identity solution that only comes from us, giving them the choice to partner with Palo Alto or Zscaler or whoever they might pick. That strategy is helping unlock this market for us, and so is a focus on zero touch operations.
This is the idea that you should be able to deliver a truly mission critical network, a four-nines or five-nines campus network, without armies of people, without having to be in the Fortune 100 with thousands and thousands of network engineers. It's building on the foundation of EOS to deliver that kind of reliability, exactly what Ken is talking about, but it's also building the technology we need to compete in this campus market, where we are a relative newcomer, and reducing and removing all of those roadblocks so we can compete in every deal and deliver Arista quality for every network. That innovation has been going on for years here, and it's an exciting time right now because we're really starting to see that unlock. The Wi-Fi portfolio started as an acquisition seven years ago, of Mojo.
There's been phenomenal innovation velocity here, and we are now sitting on one of the most complete portfolios in the world. We have not just indoor and outdoor APs, but high, medium, and low offerings in all these areas, and external antennas. No matter how sophisticated an RF deployment you want to put together, or how simple you want that install to be, we have hardware offerings for you, and they run at the highest reliability of any Wi-Fi on the market. Wi-Fi is near and dear to my heart, and I'm telling you, I put this in my house. I have been compelled by this solution. Arista's switching is second to none, but data center switching doesn't drop into the campus by itself. There's been an enormous amount of innovation across the campus switching environment, bringing EOS to the campus, and it is an incredibly powerful solution.
We've always had SP and large campus routing, but the VeloCloud acquisition really closes the loop and completes the puzzle. In fact, it's this continued investment that has filled in every hole in the campus portfolio and now leaves Arista with a complete networking stack. These are some of the key investments that we've made. AGNI is our NAC solution. For many years, we've been putting R&D into this, and AGNI is incredibly powerful. It provides network access control, and it allows the Arista networking stack to leverage all of the security posture assessment from third parties and provide best-in-class network security using Arista technology. There's the Wi-Fi acquisition I just talked about, which is obviously incredibly powerful, but the VeloCloud acquisition, bringing SD-WAN, is an incredibly important part of the total solution.
It's important because while Arista has had best-in-class routing for large campuses, connecting headquarters sites from continent to continent, some of the highest performing routers in the world for service providers, et cetera, we have had a hole in our portfolio for the branch, for the small office, even the home teleworker, and VeloCloud fills that hole. It allows us to connect those branches over broadband and bring Arista technology into every single site, and for customers who want to make a single architecture choice for the network, now, whether they're running the largest university campus in the world or they want to deploy at the smallest branch office or coffee shop, we have a solution for them, from routing to wireless to switching. It's an incredibly powerful acquisition. I'm super excited. They arrived the same day I did. It was like, it's kismet. I love it.
The VeloCloud acquisition also brings something special to our go-to-market. We've been investing in bringing up a channel, especially this year, and we're starting to see solid momentum in that channel, both systems integrator and service provider. We've been expanding our direct sales motion, and it's amazing to see the momentum, especially in large strategic accounts, downtown major New York financial headquarters, like Jayshree said. The VeloCloud team brings in an MSP motion. They've done a ton of their business traditionally through managed service providers who provide an all-in-one managed offering, and it gives us really two phenomenal opportunities.
It gives us the opportunity to take those managed service offerings and bring all of the Arista technology through that, and of course, to take the VeloCloud technology and bring that through this kind of burgeoning campus channel that we're building today on the Arista side and have been building for years. The key here is this investment, both in leveraging the EOS technology for the campus and developing new technology on that EOS platform for the campus. One of the biggest roadblocks in these large campus deployments, especially in education, but any multi-floor office, has been stacking. We've put a lot of investment in delivering campus stacking. Our version of that is called SWAG. It's going to be coming out soon.
This gives us an opportunity to deploy the highest density sites by being able to stack not just one or two, but dozens of switches together, manage them as one switch, and really have that cluster of campus switches operating as a single switch. It's been a competitive issue for years. Now, bringing stacking to the campus at Arista is enormously powerful. For someone with new eyes, it's amazing to see the innovation velocity on EOS that made this possible. It's a remarkable innovation, and it removes an enormous roadblock in the market. One of the cornerstone differentiating features of EOS in the data center has always been hitless upgrades, the ability to upgrade the EOS firmware in a data center without taking any downtime.
It's something that I had a hard time getting my head around when I first learned about it, and watching it in action is remarkable, so much so that I had to run tests to prove to myself it was real. Bringing hitless upgrades to both wired and wireless means that we no longer have to plan for downtime, or suffer outages because we waited to upgrade as bugs and security vulnerabilities became too critical. Hitless upgrades mean we can realize that promise of zero touch operation, that we can maintain the most secure software on campus networks around the world, and that we can do it while maintaining perfect uptime, delivering on the promise of Arista: the most reliable mission-critical network in the world. We've seen an enormous amount of investment and focus on this concept of zero touch operations.
Ken mentioned it: CloudVision, and the NetDL framework it's built upon, is a remarkable tool that allows you to manage data center and campus networks, something that no one else in the industry has, and I really don't think anyone else in the industry will have anytime soon. CloudVision and the whole suite of management products at Arista have flexibility: you can deploy them in air-gapped networks, or deploy them in the cloud with some of the simplest, most straightforward functionality. NetDL, and that idea of seeing the complete state of an entire network in a single platform, gives enormous power to network operators today. The opportunity that we have with AVA to use that data lake to deliver truly differentiated AI assist is phenomenal. I'm excited to bring Ken back on the stage to give us a little sneak peek of AVA.
All right, thanks Todd. All right, this is a very quick, early technology demo of what AVA can do, and the scenario here starts with just a sort of chatbot-style interface, but again, it's not a chatbot underneath; we route the question to the right agent based on the content. Here the question is: what can you tell me about Alice's device? And "you" is misspelled. One of the things I love about LLMs is they just do not care how you spell, okay? You can spell it any way you want, and they figure it out. We typoed in our demo and hey, it just works, so you know, whatever. What can you tell me about Alice's device? Ask AVA understands this calls for some telemetry, makes a telemetry query, and brings up a bunch of information.
You can see the actual screen kind of underneath. I've called out what I think is the key information in the larger window, to try to make it readable. Here are some details about the device. Here's the host name. Here are the IP addresses that are in use. There's a MAC address. Here's how it's connected to the network, and it offers you some follow-ups: you want to check on this or check on that. Actually, it can't reach the internet. Oh, okay. This is an incident. We create an incident record and invoke the AVA troubleshooting assistant. The AVA troubleshooter wants to run the following action: ping from this location to that location. Once the operator allows that action, we look at the resulting traffic and see that sure enough, nothing's getting through from the switch to the internet. It's not Alice's device that's the issue.
The problem is actually more widespread. There's a little bit of back and forth here; I've skipped some of the details. After doing some other pings and trace routes and looking at some configs, the AVA troubleshooter concludes: I've reviewed the access list configuration on a leaf switch that's involved in the flow. It appears there's an access list named Rogue Device List that contains a deny statement for the following subnet. Since Alice's device has the IP address it has, it falls within the denied range. This is likely the reason the device can't reach the internet. It goes on to offer more things, and there's a further conversation. The point of this is that the troubleshooting assistant, I think, is going to really change the game for how quickly and easily people can resolve these kinds of problems.
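That final deduction is simple enough to show directly: a membership test of the host address against the denied subnet. The subnet and address below are made-up stand-ins for the demo's Rogue Device List entry.

```python
from ipaddress import ip_address, ip_network

denied = ip_network("10.20.30.0/24")   # hypothetical deny entry in the access list
alice = ip_address("10.20.30.117")     # hypothetical address for Alice's device

if alice in denied:
    print(f"{alice} falls within denied range {denied}: no internet access")
```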
I want to leave you here with the Arista way. Everything we do is based on our architectural foundation and our culture of innovation. The most important thing, again, is that thing at the top: my commitment to you and to all of our customers. We are never putting your network at risk so we can ship some shiny new feature sooner, okay? We're always taking the time it takes, whatever that is, to make sure that when we ship it, it actually works. Thank you very much.
Returning to the stage, Rudolph Araujo.
Thanks, Ken and Todd. We'll take a quick 15-minute break, allow you to stretch your legs, grab some coffee. The restrooms are right around the corner as well, to your right when you exit the back doors. We'll be back for a deep dive into AI networking and how the power of Ethernet is transforming AI networks with Hugh and Andy. Quick break.
Ladies and gentlemen, our show is about to restart, so please start making your way back to your seats. Thank you.
Ladies and gentlemen, our program is about to begin. Please take your seats. Thank you.
At OpenAI, we're at the forefront of the profound AI momentum reshaping the world. AI workloads have fundamentally altered our approach to compute infrastructure, requiring a different kind of network than what we use for general purpose computing. For an AI job, the network must be exceptionally performant, providing massive bandwidth with high efficiency to maximize every GPU cycle and prevent bottlenecks. This is where our partnership with Arista Networks has proven invaluable. Arista Networks' solution delivers predictable latency and reliable connectivity across multiple network paths, improving our job completion time.
Our mission at Anthropic is to develop safe and trusted AI solutions that businesses and people depend on. Claude, our AI assistant, helps millions of users with everything from complex coding and research to detailed analysis and creative work. Our key requirements for next-generation AI data centers are straightforward: scalability and speed of deployment, performance measured in throughput and latency, security without compromise, and the reliability that our users demand. That's why we partnered with Arista Networks as the leader in high-performance networking.
At AMD, our focus is on providing the foundational computing technologies to power everything from large-scale AI training and inference to AI-enabled edge and client devices. As AI workloads grow in size and complexity, a highly optimized compute and network infrastructure becomes essential. That's why we're excited to partner with Arista Networks, a proven leader in high-performance Ethernet-based networking solutions.
With VAST's disaggregated, flash-native architecture combined with Arista's AI networking in one tightly integrated solution, you're investing in a platform that's ready for the scale and complexity of AI now and well into the future.
Arista provides an Ethernet fabric with the intelligence and observability to handle AI scale, and Pure Storage delivers flash-native storage that feeds GPUs without ever becoming the bottleneck. Together, we have built AI solutions that are fast, transparent, and ready to scale without compromising performance or visibility.
Together, Arista Networks and Broadcom are building the networking solution and foundation for the AI era. We're doing this with shared values, deep technical excellence, and a commitment to open innovation. We are deeply proud of what we achieved and even more excited about what's next.
Penguin Solutions and Arista Networks share a passion for enabling the leading edge of AI innovation. Together, we're solving the complexities of building and operating massive AI infrastructures and delivering platforms that may truly change the world.
ARM and Arista have a shared vision to unlock next-gen AI performance. We're co-designing and optimizing together, compute and networking side by side. Together, we're enabling everything from hyperscaler AI training to edge inference. Thanks again to the Arista team for the partnership and for continuing to raise the bar on what's possible in this new AI era.
This partnership combines Anthropic's AI expertise with Arista's networking excellence. Together, we're building a future where AI is not only powerful but also dependable and secure.
I'm excited by our partnership with Arista Networks as we accelerate towards the future and look forward to our ongoing work together. Thank you.
Returning to the stage, Rudolph Araujo.
Wow, what an amazing group of AI thought leaders. Speaking of thought leaders, it is my great honor to welcome on stage Andy Bechtolsheim to talk about AI and Ethernet networking.
I don't need to tell you what a unique moment of history we're in. Wait a minute, these are the wrong slides. Sorry, guys, wrong slides. This is embarrassing. No, Analyst Day slides, not internal NDA slides. Sorry for this. How did this happen? Not possible.
These are the slides that I have, Andy.
No, no, no, no. No, no, no, no. How did this happen?
Hugh, you want to go up?
Okay, we'll do you, Hugh Holbrook, next and we'll reverse roles.
Is the clicker up there? Okay. These are not my slides. Oh, here we are. Okay, great. No, I don't have the... No, the... There it is. Okay. All right. This is advancing, but that is not advancing. Okay, great. Hi. I'm not Andy. My name is Hugh Holbrook. I'm the Chief Development Officer at Arista. I mean, it's really an honor to talk to you all. Thank you all for coming. I'm here to talk about AI
and cloud networking. I've spent a bunch of time both on internal development and talking to customers and standards bodies, working on AI and networking, from platforms and network design to software. I want to tell you about what we're doing. First of all, we've got the Arista Etherlink portfolio, which is really a suite of technologies, both hardware and software, to make AI networking better. This is for all parts of the AI network: the front end and the back end, the scale-out, the scale-up, and the scale-across. It's platforms and software purpose-built for AI. That's Etherlink. In terms of the platforms, just on the hardware side, we have a range of switches, from the 7060s, which are targeting scale-out and scale-up.
These are the lowest-power switches, with the lowest-power, most reliable, lowest-cost optics. We've got the 7800: a high-radix, modular, deep-buffered, feature-rich chassis, useful in the scale-across and also in the scale-out dimension. We have the distributed Etherlink spine, the 7700. These are the three major product families in the Etherlink portfolio. One thing I want to talk about is the front-end network. There's a front-end network and a back-end network; the world of AI networking gets bifurcated that way. The back-end network is the network that provides just the GPU-to-GPU connectivity. Today, it's almost all RDMA, typically RoCE. It's high-speed GPUs doing direct memory access to GPUs. That's the back-end network. Also in the back end is the scale-up network, which is just inside the chassis.
Just GPUs talking to GPUs inside the chassis or inside a rack: that's scale-up. Scale-out is the back end, GPUs talking to GPUs. Then there's a front-end network, which is kind of the lifeblood that feeds the AI compute fabric, and which is, I would say, equally important and actually quite a bit more complicated than the scale-out fabric in terms of functionality. The front end is kind of the gateway: it connects to storage, compute, cloud, WAN, and the back end, and front-end performance is critical for both training and inference. If you look at what the front-end network does, it connects the AI fabric to all these different things that are part of AI jobs: local storage, general-purpose compute, cloud storage, the internet, the corporate network. I, of course, used AI to help me explain some of these connections and why they're important.
The first thing I did was ask this query, like, where is the storage for a RAG? A RAG is a database that is used as part of inference to get real-time data. The first thing you note is that when I asked this question, it searches 94 sites. This is Gemini. It runs out and it searches 94 sites to, in real time, answer my query. I need good internet access, right, for inference. I have to have solid internet access from wherever I'm doing the inference from my AI cluster out to the internet. This is going through, maybe it's Equinix, maybe it's going straight to Google, maybe it's going to Azure, whoever my provider is, or maybe it's going across my internal network.
The thing I was actually trying to do was show this, which is that RAGs are typically stored either in cloud storage or maybe in local storage. If I'm going to cloud storage, my AI inference job has got to connect to GCP or Azure or Amazon via S3 to get to that cloud storage, with all the protocols, security, VLANs, and route advertisements necessary to connect to those cloud providers. Then there's inference with KV cache offloading, a technology you might have heard of, where I run a query, I'm partway through it, and then I pause and get a cup of coffee or just think a little bit before asking my next question. That GPU is not going to sit there idle waiting for me to come back with the next query.
It has to get loaded with somebody else's context, like Andy's next query or Ken's next query. There is a whole bunch of context, and it's gigabytes and gigabytes of data (a rough sizing sketch follows below) that have to get paged out of the GPU and into storage. That's going into local storage, which is typically not on the back-end GPU-to-GPU network; it's typically connected on the front-end network. It's a storage cluster, which has different needs. The storage servers may have different requirements, and different kinds of switches are necessary. General-purpose compute is super important for training. General-purpose compute is not where you're doing the compute for the training per se, but it's where you're pre-processing the data. I have reams and reams of data. I'm sure you've heard that LLMs are trained on trillions of words or trillions of tokens of examples.
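To see why those contexts run to gigabytes, here is a rough KV-cache sizing. The dimensions are illustrative of a 70B-class model with grouped-query attention, not any specific deployment.

```python
# Rough KV-cache sizing; all dimensions are illustrative assumptions.
layers, kv_heads, head_dim = 80, 8, 128
bytes_per_value = 2                      # fp16
context_tokens = 32_768

per_token = 2 * layers * kv_heads * head_dim * bytes_per_value   # K and V
total_gb = per_token * context_tokens / 2**30
print(f"{per_token / 1024:.0f} KB/token -> {total_gb:.1f} GB per 32k-token context")
# ~320 KB per token and ~10 GB per context: paging it out is a serious traffic source.
```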
That data all has to get pre-processed somewhere using tons of algorithms. It's coming from the whole corpus of the internet, or it's my internal customer database, or my engineering database, or whatever I'm training or fine-tuning my models on. That is going through general-purpose compute. At the same time, I may have data that I'm accessing. Maybe these are RAGs. Maybe it's queries to my internal databases. Maybe I'm doing Agentic AI, where I'm going to have an agent that is actually going and doing something inside my enterprise. It's got to reach out to my enterprise network. That's probably not in the same data center. I may well have confidentiality, security processes that require that data to be hosted in my corporate database. Maybe it is just naturally stored there. It could be distributed across my corporate WAN.
I need access from the AI network to the corporate network. That may have security, it may have access protocols, I may have VLAN access, I've got segmentation, I have gateways I'm going through, I've got firewalls. All of this is the purview of the front-end network. For this front-end access, there are all these protocols that we've been developing for EOS that are part of the solution. Multi-tenancy is super important. There are lots of clients on my network, and I have to keep their traffic separate. This is obvious if I'm a service provider, like a Microsoft or something like that: I have different customers, and of course, I have to keep their traffic separate, and the data being sent into the AI fabric has to stay secure. Microsoft can't have their customers,
I don't know who it is, customer X and customer Y, Boeing or Walmart, have their data cross, right? They have to be kept separate, and they have to be kept separate all the way to the WAN. High availability is extremely important: keeping things alive, keeping access up on my gateways, keeping all the connections running. Availability of the AI traffic is important. With all of this WAN traffic going to Azure, going to different gateways, I have to do route advertisements, and I'm steering traffic; there's MPLS involved in many cases. This is because I'm accessing the internet, my metro network, my corporate access network, and my AI fabric is touching all of that. Confidentiality is important, and it can happen in a couple of different ways. It can be segmentation, separating traffic in different ways; I'll talk about that.
It can be encryption: encrypted on the endpoints, or encrypted in gateways if my endpoints aren't doing encryption. Many times inside the fabric, there's high scale, right? There can be many, many GPUs. Doing scale right is important: route scale outside the data center when I'm talking to the internet, policy scale if I'm doing filtering, and ECMP scale if I'm going very wide because I'm connecting a lot of GPUs through a lot of tier-two switches. Observability at scale is important with all of this. These are the features, observability, telemetry, routing, that come together in the AI fabric. So there is a part of the network, the back-end network, which is relatively simple in terms of protocols; there is sophistication there, but in terms of routing, it's not the most complex routed environment.
That has to connect to the rest of the world, and that is not a simple connection. Segmentation is something I've talked about. This is just keeping different customers separate, be they actually my customers or just different corporate clients that have different security profiles. My finance data has to be kept separate from my research data, which has to be kept separate from my engineering data. I may have siloed projects where I can't be crossing things. We have multiple tools for doing this. VXLAN EVPN helps with seamless provisioning, standing up subclusters of GPUs within a data center, and can preserve that segmentation in the scale-across fabric, going from data center to data center, if I have my job spanning multiple data centers, either for performance or for resiliency. We can also do IP address segmentation.
We have multiple techniques for that, which we've been developing over the years, and we have customers deploying them to do segmentation without the overhead of a VXLAN header on the packets. We're using 802.1X, a technology that we've been developing and continue to develop, for GPUs to be able to identify what job a GPU is part of so that we can put it in the right network segment. Encryption can be important in the network, especially if I've got data that is segmented somehow within the data center. When I leave the data center, I don't necessarily have the same control or the same comfort about the confidentiality of the data on those links. I want to have wire-rate encryption.
Many enterprises and larger clouds have policies that when traffic leaves the data center, it has to be encrypted one way or another. The way to guarantee that is to have link-by-link encryption. This trusted segmentation is important functionality that we've been building. As I said, that segmentation happens within the data center; the secure connectivity happens within the data center. But larger jobs and larger AI clusters are now spanning more than a single building or a single site, across a region, because I can only get so much power and so much space in a single site. It's not uncommon to have multiple data centers within a region talking together. All of that multi-tenancy, all that segmentation to keep customer A from customer B or to keep finance separate from engineering, has to be extended to the WAN. It also has to be extended to the other data center.
It has to be extended to Amazon. I have to keep those separate paths of traffic going to Amazon or to S3, all the way to Azure, wherever I'm going. Whatever security I've got, whatever confidentiality I've got, that has to be extended all the way from the scale-out to the scale-across fabric. I want to talk today about STANS, which is a secure traffic analyzer. It's a feature that we've developed that is doing telemetry at the host, at the top-of-rack switch, and at the spine. We've got telemetry from an AI agent running on the host. We've got telemetry being exported from the first-hop switch and the last-hop switch.
That covers the sending host and the receiving host, and also the spine of the network, gathering information about what jobs are running, what hosts are running, how flows are performing, pulling that data together, and putting it in CloudVision. We get data about, I don't know if you can see the tiny little box here, but about security applications, what tenants are running, what hosts are running, how they're performing, tunnels, etc. That all is pulled together.
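To make the correlation idea concrete, here is a minimal sketch of joining host-agent telemetry with switch telemetry by flow. The record shapes, field names, and the 80% threshold are assumptions for illustration; this is not the actual STANS schema.

```python
from collections import defaultdict

host_records = [    # telemetry exported by an agent on each GPU host
    {"flow": ("10.0.1.5", "10.0.2.9", 4791), "job": "train-42", "tx_gbps": 380.0},
]
switch_records = [  # telemetry from first-hop, last-hop, and spine switches
    {"flow": ("10.0.1.5", "10.0.2.9", 4791), "switch": "spine-3",
     "queue_depth_pct": 92.0},
]

# Join the two views of the network on the flow key.
by_flow = defaultdict(lambda: {"switches": []})
for r in host_records:
    by_flow[r["flow"]]["host"] = r
for r in switch_records:
    by_flow[r["flow"]]["switches"].append(r)

# Flag flows whose path shows congestion so they can be reported or steered.
for flow, view in by_flow.items():
    hot = [s["switch"] for s in view["switches"] if s["queue_depth_pct"] > 80]
    if hot:
        job = view.get("host", {}).get("job", "unknown")
        print(f"job {job}, flow {flow}: congested at {', '.join(hot)}")
```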
That is a foundation to build upon, to extend the secure segmentation and to do all of the traffic management that we need to do to build a reliable, solid-performance AI network: detecting misconfigurations of the endpoints, doing a good job of load balancing, identifying congestion and hotspots, being able to report those and steer around them, being able to manage our tunnels and our peers if we're connecting to an Equinix or to another router on the internet, be it in my corporate network or outside, detecting DDoS attacks and microbursts, and then connecting to storage. All of this is the foundation that the secure traffic analyzer builds, and it lets us implement these things to deliver better value. I wanted to talk briefly about platforms. Andy will talk about this. I'm going to go really fast.
We have a suite of platforms for the Etherlink portfolio, and each one is optimized for a different role or a different set of roles in the AI fabric. That diversity is valuable. We've got single-chip systems, typically one or two or maybe four RU, for scale-up and scale-out fabrics that are optimized for power, cost, and speed. We've got the chassis for the largest tier two networks. These can be useful as a spine in a scale-out fabric, as an edge device, or as a kind of central layer in the scale-across network. That scale-across network is the network that connects multiple sites when I have a regional network of data centers put together. The DES systems, the 7700s, are for a rapid, seamless AI back end; we have these deployed at some customers.
It has the data plane attributes of a single router, a single device, and the load balancing of one, but in a distributed fashion that can scale out to 256 or more top-of-rack switches and support networks of 4,000 and more GPUs. We have a deep portfolio of edge devices that can play that role at the edge of the data center. These have deep buffering, routing, MPLS support, large routing tables, tunnel capacity, and built-in encryption support for connecting front-end clusters together and connecting clusters to the cloud. We have multiple custom designs for scale-up, and I can't say much about them. These are designs inside racks, at rack scale. This is an example of something that is representative of the kind of thing we're building.
We have a broad set of platforms that are optimized for a range of AI use cases. I want to talk briefly about the Ultra Ethernet Consortium, which is something that I was personally quite involved in. I chaired the technical advisory committee. I'm on the steering committee. The Ultra Ethernet Consortium published the 1.0 spec back in June. They're continuing to do work. The consortium was founded by these 10 companies at the top; Arista was among the founding set of members. There are now more than 100 members. The goal of the Ultra Ethernet Consortium is to advance Ethernet in the service of HPC and AI. Ethernet is already very successful in AI and HPC networks.
The goal is to take anything we can do to make it better and do that. There are more than 100 member companies and 1,000 participants. Arista is quite active in it and was a founding steering committee member. Ultra Ethernet is supported by Arista switches. There is a lot that happens in the NIC in Ultra Ethernet. There is some functionality in the switch. Just an example here is packet trimming, which is functionality that we've put into our switch to support Ultra Ethernet. It's technology that lets the transport protocol detect lost packets faster and more reliably. The Ultra Ethernet transport itself is a standards-based transport protocol for AI that does out-of-order delivery, congestion control, and security. This will make things better. There will be other transport protocols as well.
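A rough sketch of the packet-trimming idea: when a queue is full, the switch forwards a truncated, marked copy of the packet instead of silently dropping it, so the receiver learns of the loss immediately rather than waiting for a timeout. The constants and structures below are illustrative assumptions, not the UEC spec.

```python
TRIM_BYTES = 64      # how much of each packet to preserve (assumed)
QUEUE_LIMIT = 4      # tiny queue so the demo below actually trims

def enqueue(queue, packet):
    """Queue the packet normally, or trim it if the queue is full."""
    if len(queue) < QUEUE_LIMIT:
        queue.append(packet)
        return None
    # Trim: keep the header bytes, mark the packet, and (in hardware) send
    # the stub on a high-priority queue so the receiver can NACK at once.
    return {"seq": packet["seq"], "bytes": packet["bytes"][:TRIM_BYTES],
            "trimmed": True}

queue = []
for seq in range(6):
    stub = enqueue(queue, {"seq": seq, "bytes": bytes(1500)})
    if stub:
        print(f"packet {seq} trimmed to {len(stub['bytes'])} bytes")
```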
This is not the only one, but this is going to be, I think, one that will have some impact starting later this year and more so into 2026, I expect. Our 2025 products, specifically the 7800s, the 7280s, and the 7660s, will have support for the Ultra Ethernet capabilities that are needed to run the Ultra Ethernet transport. We have a broad range of merchant silicon that we use. I'm not going to spend a lot of time on it, but like the platforms, each of these is foundational in different platforms, and they're optimized for different use cases: highest scale, a perfect scheduled fabric, power and cost optimized, super low latency optimized. We use these judiciously, when we need to, to build the best platforms that our customers want.
I want to talk about a metric that I've started to call TTFJ, or time to first job, which is a really important metric for our customers. Time to first job is what I consider the time between when all my equipment shows up on the dock, ready to be built into a data center, and when I run my first job. That is provisioning the switches, testing everything, testing the software, making sure that I'm getting good performance end-to-end, debugging cables, figuring out which fans aren't spinning, making sure that everything is inserted properly. It's a huge problem. These customers are super, super focused on this because having those GPUs somewhere between the dock and the first job is just burning money. It can be really expensive. At an estimated street price of $30,000, just to pick a number, if I have 10,000 GPUs, that's $300 million.
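To put a rough number on what each idle day costs, here is a minimal back-of-the-envelope sketch; the amortization period is an assumption, and the prices are the speaker's example figures.

```python
gpu_street_price = 30_000     # dollars per GPU (example figure from the talk)
gpu_count = 10_000
amortization_years = 4        # assumed accounting life of the hardware

capital = gpu_street_price * gpu_count            # $300M, as stated above
idle_cost_per_day = capital / (amortization_years * 365)

print(f"stranded capital: ${capital/1e6:.0f}M")
print(f"approx. cost of each idle day: ${idle_cost_per_day:,.0f}")
```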
It's a very expensive asset that's sitting there waiting to get running. And once it's up, there's no budget for downtime. The problem here is that new switches come at the same time as new GPUs; NICs, optics, everything comes up together. We need a super solid foundation. The features have to be ready. They have to be debugged when the silicon is ready. That is what we are totally focused on. That's what we've been very good at so far. I think that's a compelling advantage that we have here at Arista Networks. Things that we've done to optimize for time to first job: quality, just our relentless focus on quality. I'm sure you've heard Ken talk about quality, or me, or Jayshree. Super important to us. Telemetry, just visibility. What is happening? What is going wrong? Why isn't this working?
Super important to have the visibility, fine-grained visibility. We have many, many features that do that. The other thing we do is we provide deep sharing at the platform layer in our code across silicon families, on things like optics and fan management, platform commands, and telemetry. This is something you cannot bolt on after the fact. It results in better time to first job. I've got a picture that shows this. If you look at the left, and this is kind of like how things were always done before, the gray stuff is the software that's shared. It's things like routing and BGP and SNMP, a bunch of shared stuff. Then there's a bunch of stuff that's specific to the silicon: programming TCAMs, programming ACLs, powering things on, managing the PHYs, initializing silicon, polling counters, all this stuff.
What we did at Arista Networks was we said, you know what? There's a bunch of stuff in there that can typically be shared. It comes in an SDK from the vendor, but we can share more of it. Polling counters, I don't need different code for different chips. Sure, the counter registers are different. Doing that efficiently, storing it in shared memory, using DMA engines to poll the chips, storing it, making it visible, we can do that in a shared way. We did that across the board with fine-grained state machines, programming the ACLs, programming the TCAMs. The very lowest level here, this white stuff, needs to be unique. The registers on the chips are different. We can share more of the code than what's typically done. We can do it across vendors that compete with each other.
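A minimal sketch of that layering, with invented class and register names: the polling loop and storage are written once, and only the chip-specific register layout is swapped in. This illustrates the idea; it is not Arista's actual code.

```python
class CounterPoller:
    """Shared logic: walk every port, read its counters, store the results."""

    def __init__(self, register_map):
        self.register_map = register_map   # the only chip-specific detail
        self.counters = {}

    def poll(self, ports):
        for port in ports:
            self.counters[port] = {
                name: self.read_register(port, offset)
                for name, offset in self.register_map.items()
            }

    def read_register(self, port, offset):
        # In real hardware this would be a DMA read; stubbed for illustration.
        return 0

# Only the register layout differs per silicon family (offsets invented).
JERICHO2_REGS = {"rx_packets": 0x100, "tx_packets": 0x104}
TOMAHAWK_REGS = {"rx_packets": 0x2000, "tx_packets": 0x2008}

CounterPoller(JERICHO2_REGS).poll(ports=["Ethernet1", "Ethernet2"])
CounterPoller(TOMAHAWK_REGS).poll(ports=["Ethernet1", "Ethernet2"])
```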
Of course, Broadcom and Marvell and Intel and Cavium are not going to share code with their competitors, right? But we can inside Arista. We can share that code. That gives us better-tested code on day one when we start deploying our switches, because we already tested it on the previous platforms. Jericho 2 can be shared with Tomahawk, can be shared with Trident. We can share code there. That gives us better quality, faster features, one team working on the same code base. The result is better. It improves our time to first job. I think that is one of our architectural advantages, in addition to what Ken talked about, that is less talked about but I think is equally important, honestly. I want to briefly touch on the Blue Box. This is my last slide.
I'll hand over to Andy. The Arista Blue Box, as Jayshree talked about: there's the white box, which you know about, and the Blue Box is the Arista version of it. It gives you a choice of OS. I can run EOS, SONiC, FBOSS, or any of those, or a variant of them, or multiple of them if I want. It's built on top of NetDI, which Jayshree talked about, which is used to validate not only the hardware but the low-level firmware, like the thresholds for power, the optics tuning, the programming of the fan speed algorithm. These are tricky system-level components that are hard to get right and that can fail in the field. They can fail in the field years after deployment. They can fail when the vendor makes what seems like a transparent change or a process change.
We need to be able to detect those and support that. We're hardening components. The Blue Box is fully supported by the EOS software team, the hardware team, and the diags team. You've got the backing of Arista on it and the choice to run the operating system that you want on top of it. Arista's Etherlink products are optimized for AI in many ways that I talked about. I believe that we have an architectural advantage for AI that gives us an improvement on time to first job from features, quality, software architecture, and Blue Box. That is everything I have. I am now going to hand it back to Andy to tell you about a lot of the details underpinning what I talked about today. What Andy has to say is totally fascinating. Thank you very much.
Okay. Is this working? All right. I want to talk to you about the truly extraordinary opportunity that's ahead of us. You know the numbers. You know the CapEx numbers are going up every day, apparently. If you think about it, we're still at the early innings of this journey where not just the workloads are scaling and getting bigger exponentially, but the models themselves are evolving. Suddenly, it's not 10,000 GPUs per cluster or 100,000, but it's a million. The requirements in terms of how you implement these very large-scale data centers, including how the network supports these very large data centers, are really paramount. A few years ago, a typical AI cluster was 4,000, 8,000, 16,000 GPUs. InfiniBand did that. InfiniBand stops kind of at that level.
I don't actually know a customer that's not planning on hundreds of thousands, and in some cases, millions of GPUs that are tightly interconnected in a scale-out network. The second thing that's changing here is that the bandwidth per GPU or XPU is going up dramatically with each generation. The numbers here are forward-looking statements, meaning the next version of XPU perhaps is a 12.8 Tb scale-up and 800 Gb scale-out. The one after that will double that. The one after that will double or quadruple that. If you do the math on a per-cluster basis, you're going from something like 100 Pb to 200 Eb. That's a number that's 1,000x bigger within one campus-sized data center. Think multiple gigawatts. As was just discussed, we primarily use three silicon architectures to support these various requirements.
Starting on the lower left is the Tomahawk Ultra, which is a brand new chip that is the lowest latency Ethernet switch on the planet, 250 nanoseconds. It's really custom-designed and optimized for scale-up applications. The Tomahawk line, where we're shipping Tomahawk 5 today, have Tomahawk 6 in the lab, and have Tomahawk 7 coming shortly, can span tens of thousands of XPUs in a two-tier network very efficiently. Then there's the Jericho architecture, which has by far the most scalability of all, and we've successfully deployed it in both the modular chassis form factor and the disaggregated switches. Talking about what we can contribute here and what's really important to our customers: number one is power reduction. The power bills, of course, you know what they are. The real issue is that whatever power the network consumes takes away from the power available for the GPUs, which make the money, right?
Basically, the network is, for better or worse, a tax on the delivery of the cycles that the power is supposed to pay for. Every 1% power improvement on the network means that in a large data center, like one that has, I don't know, 100,000 chips, you get 1% more GPUs. The first step is that the latest switch silicon is always more power efficient than the previous generation. Thus, there's this incredible pressure to get new silicon into the market in volume as quickly as possible, in sort of a ramp that nobody has ever seen before. The second thing has to do with the transmission of bits. Copper cables have essentially zero power, but they only work within the rack. Beyond the rack, you need optics. There's a lot of emphasis on how we can reduce power for optics.
We have been an industry leader in promoting the adoption of linear optics, known as LPO, linear pluggable optics. We now have multiple customers that have deployed these things successfully in volume. The short summary is that linear optics is one-third the power of fully retimed optics in the next generation. Thus, you can get three times as many optics for the same power if you go linear compared to fully retimed. Another factor that's a little harder to explain is that the larger the radix of these switches, the fewer layers you need in a network, and the less power it consumes. We're all in on these maximum fan-out, high-radix networks to accommodate most building-scale deployments in two tiers rather than three tiers. The final one, liquid cooling, saves power because there are no fans. It's between 5% and 10% at the system level, depending on temperature. That's an important step.
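The arithmetic behind those two claims can be sketched in a few lines; the wattages and radixes below are assumed round numbers, not measured figures.

```python
# Optics: "one-third the power" means ~3x the optics for a fixed power budget.
retimed_optic_w = 18.0                  # assumed fully retimed 800G module
lpo_optic_w = retimed_optic_w / 3       # the one-third-power claim
power_budget_w = 10_000.0
print(f"retimed optics per budget: {power_budget_w / retimed_optic_w:.0f}")
print(f"linear optics per budget:  {power_budget_w / lpo_optic_w:.0f} (~3x)")

# Radix and tiers: a two-tier leaf/spine of radix-R switches attaches roughly
# R^2/2 endpoints, so a higher radix keeps large fabrics at two tiers.
for radix in (256, 512):
    print(f"radix {radix}: ~{radix * radix // 2:,} endpoints in two tiers")
```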
Talking about liquid cooling, this is not a product announcement, but more of a directional view of what's coming. We're designing 100% liquid-cooled switches that plug straight into an ORV3-style rack. The picture on the right is the rear of the chassis: it plugs into the busbar and the liquid connections, and that's it. It's like a line card in a big chassis. A lot of our development is focused in this direction. In addition, we're designing fully liquid-cooled switch racks that offer up to 32 payloads of these fabric switches, with patch panels and management switches and power shelves. This is all optimized for the liquid-cooled data center to enable high-density switch configurations. Separately, we have engaged with multiple customers on multiple projects on custom switch designs for their custom AI racks, which are customer-specific.
These are very large, 500 kW going to megawatt-class racks that have a lot of internal switches, for scale-up in particular. They use copper cables internally for the lowest power and highest bandwidth connectivity. These projects are being done in very close collaboration with our largest customers. The whole focus is to minimize the time from design to volume deployment. On pluggable optics, I don't know how much you've followed the market there, but there's this endless debate about what's the best optics and lowest power. If you can eliminate the retiming DSP, it is lower power. It is surprisingly also more reliable. There are fewer components that can fail. We do see a lot fewer link flaps with linear optics than with traditional retimed optics. The most important thing is it is the lowest power optics.
There is no other optics that is lower power, including co-packaged optics, which is essentially the same components, just placed in a different part of the chassis. What people like about pluggables is that they support any kind of technology from multiple vendors, including future microwave ideas and slow-and-wide optics and whatever comes next. If you do co-packaged, you're tied into a vertically integrated single-vendor stack. The most important request from our customers is that they're spending too much money and they're asking us to reduce TCO. It's true. The way we can help is minimizing time to volume for next-generation silicon.
We're all in on liquid cooling to reduce power, eliminating fan power, supporting the linear pluggable optics to reduce power and cost, increasing rack density, which reduces data center footprint and related costs, and most importantly, optimizing these fabrics for the AI data center use case. What we call the purpose-built AI data center fabric around Ethernet technology is to really optimize AI application performance, which is the ultimate measure for the customer in both the scale-up and the scale-out domains. Some of this includes full switch customization for customers. In other cases, it includes the power and cost optimization. We have a large part of our hardware engineering department working on the things on this slide. I'm spending all my time on this topic here. This is all the slides I had. Thank you very much.
Now, welcome Chantelle Breithaupt.
Hello, everyone. Very nice to see you. Always a tough act to follow, Andy, but I'll give it my best shot. Thank you for being here for the last part of today's official agenda before the panel. The thing I'll focus on in these slides is giving you a different perspective on where we see momentum for Arista. We were talking at the break, actually, and some people were mentioning, do you ever step back and just realize how far you've come as a company? Coincidentally, we do have this slide in here to show the momentum since the IPO in 2014. You can take a look through. There's 52x market cap, and that's actually higher if you take it today versus the August cutoff point we used, and 15x the TAM. Very excited to see the results to date.
What I would like to leave you with on this slide is the thought of the tenacity and the conviction Arista has to execute on its intentions when it comes to pure-play networking. Just a quick summary of some of the key financial metrics for you over the last five years. You can see which ones resonate with you. My two favorite children on this slide are gross margin and operating margin: even though you have fluctuation and volatility in the gross margin at different times, depending on what's happening with mix and inventory, you still see our ability to deliver operating margin expansion. I think that's a good testament to the fact that we have a very efficient and effective business model that we're very proud of.
Now let's get into the different aspects of building momentum. The first slide I have for you here is to set the foundation when it comes to TAM. Jayshree mentioned at the beginning the TAM crossing that $100 billion mark as we get to 2029. You can see in the last two years, we've had a 75% increase in our TAM, very much bolstered by the AI conversation, but equally so in the campus and branch segment of the TAM. Together, we're looking at $105 billion by 2029. Super excited by this foundation to give us the growth foundation that we need going forward. Now let's talk about building momentum in cloud and AI. I'll start with an external view first and then talk about Arista's ability to deliver in this market.
AI is such a big space, with all kinds of projections, that you need a framework to gravitate to. One I like is the 650 Group framework, which talks about different waves of how AI is going to come in. You can see the dollar values coming in, going from foundational models and content creation to agentic AI growing to $1 trillion between 2025 and 2028, and then wave four, autonomous transportation and robotics, humanoids, another trillion from 2027 onward. Even if these are just indicative of the opportunities, we can see that Arista Networks is very well set up to play in this space. I'll give you three reasons why I think Arista Networks is very well positioned, perhaps uniquely positioned, to maximize our potential in this space.
You can also go more specific to cloud data center infrastructure, with the CAGR for CapEx being about 16% from 2025 forward. Pretty robust numbers no matter which ones you look at. The three reasons I'd give you why we're uniquely positioned to take advantage of this space and to do very well: the first one is all the stuff that was spoken about by Jayshree and Hugh and Ken and Andy, in the sense of our product portfolio, our software capabilities, our NetDI, scale-out, scale-up, scale-across. Very excited by what we have to offer in products and solutions. The second one is the thought leadership Andy talked about when it comes to minimizing or optimizing the total cost of ownership for the AI data center buildouts. He talked about the silicon. He talked about liquid cooling, linear optics, rack density, and optimizing Ethernet fabrics.
Very much a playbook we can use with our customers. The third one is that great set of AI partners we announced this week, in the sense of working with them in the community. Three really compelling reasons, I think, for why Arista Networks is going to do very well in this space going forward. Now we can switch to the equally important question of how we're going to build momentum in enterprise. This one's not as cyclical. It's not as volatile, not as big. It's a little bit slower to grow, but quite a steady Eddie in our portfolio. Very excited for Todd and the ideas he shared with you earlier on campus specifically. You can look at our expansion of customers. If you look at the last 12 months of customer growth over FY 2022, some pretty demonstrative results, I think, in the sense of growing customers internationally.
We're starting from a pretty fair place when it comes to share in campus and enterprise generally. We feel we have lots of share to go get. The way we're going to do that is through the three growth drivers you see on the right: acquiring new logos, land and expand, and then this whole new AI use case when it comes to enterprise and agentic AI. One of the things I wanted to do, sticking with enterprise, is look at some outside-in feedback that we received. We're super proud when we get these kinds of sentiments shared back to us. This one's from the Gartner Group. On the left-hand side, you can see it talks about the size of the companies that are considered, the industries, and the geography.
Very well represented across many different domains. The right-hand side is the thing I think we're most proud of. It talks about ease of doing business. It talks about data center and wireless. A wide breadth. I thought this slide was very indicative of the things we're proud of, because we only focus on networking and try to do it very well. Again, sticking with enterprise, a different perspective on why we're confident we can continue to take share in this space comes from two examples we have with customers in the enterprise space. Customer one, a U.S.-based very large insurance company. This is a story of going from data center to campus. You can see the journey from 2015 to now. Just the stickiness with the customer, continuing to take share of wallet.
If you compare 2025 to 2015, that's a 2x expansion of their spend with us. Then you go international. Here we have a campus-to-data-center win. To me, that's really great news because it validates, one, our brand; two, it's international. Actually, a third one is it shows that our product portfolio is there to serve some of these largest customers, this being a widespread financial institution. I'm very happy to see that. We've seen a 6x expansion of share of wallet over five years. We'll continue to see what we can do to help serve that customer. The third space for building momentum is what we call specialty providers. In this category, you have the telcos, the SPs, the streaming services, the networks. Now we have this great benefit of putting NeoClouds in this space, in this category.
You can see on the left some of the NeoCloud growth drivers, very specific curated needs. You can see the great CAGR predicted by the 650 Group, 28% from this year forward. Lots to consider in that category for us. When NeoClouds have the ability to have open, best-of-breed conversations in their RFPs, we absolutely want to play in that space. The thing they come to us the most for is our experience with the larger cloud titans: how can they have that experience, how can they have those outcomes, with all the things mentioned before me. I think we're uniquely positioned there to have those conversations. We're excited to continue to do so. Now switching to the other part of the P&L. Those were to cover how we get to the top-line numbers that we've been discussing.
Here are some insights, I think, into margin drivers, which are some of the conversations we have. If you look at operating margin, one thing we're very focused on is our commitment to innovation, quality, and reliability. We'll always be in this 10% to 13% range of revenue on the R&D side. We're very committed to that. On the gross margin side, we've had lots of conversations last year and this year about what's driving gross margin. If you take a look back over time, you can see that E&O as a percentage of revenue has fluctuated from 1% to 6%. Some of it was during COVID, but some of it's inherent in our business model. We have one-year lead times and two quarters of visibility. We have to lean in at any point and make an educated guess about what we should be doing.
We don't always get it right. E&O is a part of it. We have this 1% to 6% range and just wanted to provide some insights because we've had some dialogue on that. The last one is customer mix, which is the second category of what affects gross margin the most. Mix, just to be clear, is the mix between the three categories: cloud and AI, enterprise, and specialty providers. There's also a mix within those categories. As hard as we try to keep it simple, there is mix within them depending on the macro environment, the use case, et cetera. Just to give you some thoughts from my perspective on how that works as we talk about the guide at the end of this presentation. Capital allocation framework: not much changing here. No need to change what we think is not broken.
From this perspective, you know, we have organic investment. We have the share repurchases. We have marketable securities investment. Then we have what we call tuck-in M&A. Hopefully, you've seen during the last 12 months that we do lean in for share repurchasing at the right opportune time. We've done so. You've seen that we do tuck-in M&A where we think it's strategic, with the VeloCloud acquisition. We are demonstrating that and we will continue to, but no major change in this so far. I also want to talk about building momentum from an inside-the-company, scaling-processes perspective. I wanted to bring some insight to all the things we're doing on supply chain resiliency, because I think that's important. Todd coming in, working with him and his team and the CISO from a security perspective.
We're looking at how we have optionality in who we work with and what kind of breadth we have in the supply chain across vendors. We look at location and make sure, with all the different tariff scenarios, that we have mitigation from a geo-risk perspective. The last one is the security aspect. Arista Networks is in there at all the different steps to ensure that there is no issue when it comes to our customers receiving our products and services. Now we can talk about building momentum when it comes to outcomes. I know Jayshree mentioned earlier the FY 2026 outlook. There you see that 20% growth at $10.5 billion. Very, very excited about that.
Gross margin, we're going to keep at 62% to 64%, based on the mix that we know and the other drivers like inventory. Then operating margin at 43% to 45%. We have a lot of new leaders coming in to join our team, and a lot of thoughts as to how we scale the company. We're going to leave some room for Todd, Tyson, Hugh, Andy, et cetera, to decide what we need to do to keep that top line growing. We'll continue to update that as we go through to the February call to see if anything changes from that perspective. Very proud of some of the highlights in the dials below that you see: 10,000-plus customers, an 87% Net Promoter Score, which is fantastic, way above the industry average.
You can see our share by country, too. Just a blend of metrics to give you an idea of the breadth that Arista Networks is getting to, back to the first slide and where we came from 11 years ago. On building momentum in the long-term model: from 2023 to 2026, we're committing to that 20% CAGR with the guides that you've seen. From 2026 to 2029, now we're talking a little bit of the law of large numbers. If you take that at 15% growth to 2029, we're talking $16 billion plus. Going beyond the mid-teens, we don't feel comfortable at this point. We'll see as it progresses what that mix will be. Gross margin is a fairly wide range of 60% to 64%.
That's to give us some room from a mix perspective, mostly given all the AI conversations we're having and what will be in our world when it comes to 2027, 2028, and 2029. Operating margin at 43% to 45%, leaving some room for investment if we do find a different way to scale the company. On percentage of revenue, back to the point, we'll always be in that 10% to 12% for R&D, keeping sales and marketing at 5% to 6%, and G&A between a point and a point and a half. Hopefully, that's helped to demonstrate the momentum on the top line, between the outside opportunity and our ability to execute, and given you some insights into how our gross margin moves across the different elements. Very excited about what that does to our long-term model and all the opportunities there.
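As a quick arithmetic check of the $16 billion figure, compounding the 2026 outlook at the stated 15% for three years:

```latex
\$10.5\,\mathrm{B} \times 1.15^{3} \approx \$16.0\,\mathrm{B} \qquad (2026 \rightarrow 2029)
```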
We're grateful for the opportunity and very excited about the possibilities. Thank you.
Returning to the stage, Rudolph Araujo.
Thank you, Chantelle. We'll now proceed to the final item of our agenda today. We have our panel for you to ask questions of. For this session, please raise your hands as we go through. The hands are already coming up as we go through the questions. One of my colleagues will come find you with a microphone. We ask you to limit yourself to one question so that we're respectful of all of the other folks that have their hands raised up. Along with Jayshree, Ken, Todd, and Chantelle, please welcome on stage our newest executive, Tyson Lamoreaux, who is our Senior Vice President for Cloud and AI. We'll now proceed to open it up. Our first question comes from Ben Reitzes from Melius Research. There's a microphone.
A mic is coming to you.
Thanks. It's a pleasure to be here. Thanks a lot for the question. Congratulations to you all, you know, seeing it from the IPO. This is a really neat day. Just for Jayshree: I don't want to talk about the numbers you put out, but a couple of your peers have talked about acceleration for next year. Hock Tan has been very upbeat about his business, which includes XPUs. Jensen can't really accelerate necessarily from where he is, but his numbers are huge. He's talked about a 50% kind of CAGR for the CapEx. Networking is going up as a percent of the overall spend on compute and networking. I'm wondering, are you seeing all these trends? I know you typically give conservative guidance. Qualitatively, though, this networking acceleration, is there potential next year for networking to accelerate?
If it does so, is there any reason why you wouldn't benefit and whatnot? Just parsing that through, are you seeing the same things in networking becoming more strategic and that potential?
Hey, Ben, that's got to be the longest question on record I've been asked.
That's it.
OK. The short answer is we're absolutely seeing a lot of momentum in our business, in particular in AI, in particular the combination of the back end and the front end. Timing is the hard thing to predict, right? We see this as a multi-year phenomenon. This has been a critical, crucial year for production migration from InfiniBand to Ethernet. We're seeing that. We're also seeing the advent of not just scale-out, but scale-across, and then scale-up coming in probably in the 2027 or 2028 timeframe. When you add all that up, there is a level of, what's the right word, buoyancy and excitement and enthusiasm. We want to stop short of that and be realistic about how much of that will translate to numbers in 2026 versus 2027. Is it a one-year phenomenon or a multi-year phenomenon? I would say, Ben, it's a multi-year phenomenon.
When Broadcom and Hock Tan express enthusiasm on a chip, we've already bought the chip. We've made the purchase commitments for it. We get it a year later. We translate it into systems. That translates into customer revenue. The delay from his enthusiasm to customer revenue can easily be 18 months to two years, right? That's another important thing to remember. Look at this as a multi-year, really transformational piece. One thing I can say personally is, you know, having been here the longest, along with Ken, we're not living quarter by quarter right now. We are getting at least a 6 to 12 month view. That's a good feeling. We believe a lot of that is because people have to plan their AI centers well in advance, especially the power and the space and the building. Absolutely, I think 2026 can be a good year.
You would normally argue the law of large numbers, you know, we shouldn't get too ahead of ourselves. I think we are going to continue to experience not just double-digit growth, but at least mid-teens growth. We'll see. It might get better.
Thanks, Jayshree.
Thank you.
Our next question comes from Amit Daryanani from Evercore ISI.
Thank you. Good evening, everyone. Thanks a lot for doing this. I guess just a question on the AI infrastructure side. A lot of the cloud companies are looking at different ways to deploy disaggregated network fabric. There's a scheduled approach that I think helps or favors Arista Networks more prominently and a non-scheduled one. Do you think cloud customers like Meta, for example, will skew one way or the other? Just talk about the pros and cons around that. That would be helpful. Chantelle, how do I think about your deferred number in the context of this 20% revenue growth? Do you see that continuing to increase, or is the growth really going to come from there?
I think when it comes to deferred, which is a very common topic that we have a lot of dialogue on, not much has changed in the sense of the mechanics. What has changed is what's going into it: remember, it's use cases, new products, new customers. If you think about the new use case being AI and Jayshree just mentioning this 18 to 24 month kind of timeframe, it's going to take time to work through. The growth could be next year or the year after, in the sense of things even coming through for this year. That's the way to think about it. Then you have what's going in. The balance sheet side, the P&L side, will be what's accepted by the customers in that timeframe. Of course, some of it's in the 20% guide for next year.
It's in our 25% guide for this year, TBD on what we see on the acceptance side.
All right, I can talk about that.
Yeah, I'd love it if you can. Please go ahead. Is your mic working?
No, you said.
This one works. If we think about disaggregation scheduled in the network, scheduled on the endpoint, I think everybody in this room can appreciate the amount of ongoing research that continues in AI. What becomes the hot thing for training a model or pushing more workload to inference kind of quickly dies. New techniques are coming every day. Traffic patterns are shifting rapidly. The semiconductor space is seeing increased levels of competition. NVIDIA isn't going to be the singular solution forever. You've got kind of a lot of investment. This is, again, a multi-year kind of long game. We're going to see things pay off at different points in time. I think the thing that Arista Networks has going for it is this view of the solution as a holistic thing and products, software, systems, teams, technical capabilities that can meet customers where they are.
I think that that is a differentiator for the business, is the quality of the people and the way in which we engage in these deep technical partnerships. Is it going to be scheduled fabric? Like we've got a product, we're shipping that. I don't see that slowing down. That's a very successful solution right now. A lot of customers love it, are continuing to buy it, continuing to invest. Andy Bechtolsheim and the team have invested in hardware innovation to continue to push that forward in novel and unique ways. Been there first and are going to continue to push that. I think when we start pushing out to the endpoint scheduling, again, it's the same kind of notion that the fabric needs to be there. The scale-up and the scale-out needs to be there. The network doesn't really change.
I think those are just as big opportunities, if not bigger opportunities, for us because of the way we're going to work with customers. We're going to prop them up and take our lessons learned from working with the big folks, driving the innovation, but cascade that across the entire, what I would call the AI practice in general: enterprise, service provider, cloud. It doesn't matter. There's a lot of commonality there that we're going to drive leverage on.
I'm going to give him an A for doing all that on his fourth day in the job. Congratulations.
Thank you.
To add to what Tyson said, to answer your question, we co-developed the scheduled fabric DSF 7700 with Meta. Naturally, it's being used quite a bit. There are going to be different use cases, and for some of them, it's going to be a simple single-switch implementation that needs no scheduling. The operators have to figure out how to schedule it, right? There are use cases that Tyson, when he was a customer, deployed. We absolutely need the scheduled fabric there. Otherwise, you're going to need tons of resources to schedule the fabric with people, right? It really is going to depend on what your philosophy on this is. I'll buy a cheap switch and then throw people at it, or I'll buy a slightly premium switch and throw fewer people at it.
I continue to see those two architectures living on for a long time, and it really depends on the use case.
Thank you. Next question is right next to Amit. Aaron Rakers from Wells Fargo.
Thank you for taking the questions. Yeah, Aaron Rakers at Wells. I want to ask about the numbers a little bit, right? If I look at the numbers you gave out there, $10.5 billion, and I'm going to get them wrong, $2.75 billion of AI, and you've got $1.25 billion of campus. If I look at that relative to what you guided this year, it would imply that the non-AI, non-campus number looks more flattish. I'm curious: is that just conservatism? Is there an element of mix within AI that is hard to discern? And Jayshree, why do you think scale-up is not a 2026 story, more of a 2027 one? It seems like you alluded in the presentations, a couple of them, to having done some custom work with some customers. Why is that timing not next year?
Two questions, Aaron. Let me answer your first one. Anytime you have enthusiasm and fast-growing markets, those are taking over. We think our run rate business may not be negative, but it'll be slower growth. Have we modeled it exactly correctly? No. We're not expecting the rising tide for all boats. Some of the boats will just bob along. We're not being conservative. We're being realistic. Some of them are going to grow faster. It is true that we don't know quite how to count the frontend AI revenue. We've always struggled with it. If it has a lot of AI traffic, it's going to go in the AI bucket. If it's a more traditional classic use case, it's going to go in the cloud bucket. Maybe we don't exactly know until the year progresses. What was your second question?
Just the scale-up, the timing.
Yeah, scale-up. As Andy would have pointed out to you, there are many projects we're working on in scale-up. The most important thing I can tell you in scale-up is that the fundamental low-latency chip requirements would be met by the next generation of Tomahawk or Ultra, which is still in the labs right now. The first thing is just pure availability of these chips, which will be 2026, right? By the time we put it on the board, make it available, work with a compute fabric, validate a rack, and figure out if it's co-packaged copper, retimers, SerDes, optics, et cetera, it's going to be December 31, 2027. That's the reality of how things work with the software and everything. There's a second reality, which is standards itself. Some people may just go with it like they go with proprietary NVLink. There's still a lot of confusion on scale-up technologies.
You've got the NVLink. You've got the UALink. Then you've got the Etherlink, Arista family of products, right? I'm really counting on Hugh and the leadership from UEC and the entire consortium to define this better. I think Broadcom's done a fantastic job of putting out the scale-up Ethernet spec. Realistically, all these things take time to sort out. Hence my view on 2027, right, and 2028 and beyond. I fully agree with you that there'll be a lot of proof-of-concept trials earlier than that.
Thanks, Jayshree. The next question is from George Notter at Wolfe Research.
Hey, George.
Hi, guys. Thanks very much. This is George Notter. My question is just on the enterprise initiative. You know, as I kind of look at your expectations going forward, thinking about some of the margins, I'm kind of wondering if there's like a big push into the channel here. Obviously, the channel is fairly expensive in terms of margins. It's been something I think you guys have looked at and wrestled with for years and years. I'm just wondering if there's any kind of pivot here in how you think about the channel.
The wrestling is real. It is top of mind for sure. We've had kind of an early-days channel effort for a little while. At the beginning of this year, we put out a program that has gained real traction. We're starting to build some momentum there, not just in processing deals through systems integrators, but actually seeing demand gen come from the channel in true deal registrations. The real power of that, and I think the value to us, is using this as a force multiplier on our sales team so that we can keep our cost of sale down and put so many more feet on the street. That new logo deal registration is incredibly valuable to us for exactly that reason.
Helps us keep that cost of sale where it is while driving up the coverage of the sales team and letting us start to approach accounts below that global 2,000 that we cover so well direct. As you start to creep down there, it's true that there's a little bit of margin that hits the channel. The benefits in discounting tend to more than make up for that. We'd still see the enterprise business as margin accretive, channel or not.
Just as a follow-up, is this like a full frontal assault, you know, on Cisco, you know, everywhere and anywhere through the channel? Or is this measured, certain partners, certain channel strategies? Or, you know, it creeps over time in terms of your intention and how deep you get into the channel? How do you think about just pacing?
Cisco looks at itself as a technology provider beyond networking. I don't know if you want to talk about a full frontal assault. There should be no network where we can't provide a best-in-class solution at Arista. Any way to reach those customers, we're going to go after that. If that's a full frontal assault, then yeah, that's what it is.
Yeah. The only thing I want to add, just back to the margin question, to your question, George, is that yes, some of it is enterprise. It still stays within that 5% to 6% range of revenue, so more dollars, but the same percentage. There's actually probably a bigger range on the R&D side, in addition to the enterprise piece Todd talked about, for all the other innovative things we'll need to do to ensure we hit that top line, just to round out the margin differential.
Thanks, Chantelle. We'll move right next to Samik Chatterjee from JP Morgan.
Thank you. Maybe I can ask you on Blue Box; you tried to highlight what you're trying to do on that front. How should we think about the relevance here in terms of what's the mix today of that opportunity? As you look forward, is the relevance of that product going to increase? Is it more required to compete with white box, or actually to regain share lost to white box? Just a quick one, another one: in terms of the partnerships you highlighted, OpenAI, Anthropic, are those opportunities for Blue Box or more broadly across the portfolio? Thank you.
OK. First of all, we will always coexist with white box because there's a business model there, whether it's 10% margin or just a basic ODM, where they're not looking for the premium, the value, the features. This is not an attack on white box. This is how we coexist with white box. As you know, there are a number of our customers who just love our performance, love our features, but might need the flexibility of not always requiring every bell and whistle of EOS, particularly in some use cases, like a leaf switch that's just connecting general purpose compute, or where the use case is very simple and they're just doing some layer 2 functionality. We have already installed Blue Box with NetDI with some of our largest cloud titans in certain use cases.
We can see that expanding to some of the specialty tier 2 providers where they want to play with SONiC, or they want to do EOS and SONiC in their labs and see what the delta might be. To be able to do that without making an either/or decision, to say, OK, I'll do this, but I have a hybrid strategy where if I don't want this, I can always load EOS, I think is very powerful. They're not giving up something to get something. Obviously, the pricing won't be as cheap as white box, but it won't be as expensive as EOS either. The choice model, the economics model, and also the total cost of ownership, if you add the CapEx and OpEx, will be very favorable to the Blue Box for simpler use cases. That's kind of how I see it.
You asked the question on the partners. Obviously, you wouldn't expect these partners to stand up and say nice things about us if they weren't working with us. Every single one of them is working as an ecosystem partner with us. We expect to do more and more with each one of them.
Thanks, Jayshree. We'll go to Meta Marshall from Morgan Stanley.
Great, thanks. Maybe one topic that wasn't touched on today is just a lot of talk in the atmosphere about OCS and just kind of how you see OCS versus your opportunity and kind of the opportunity people have been talking about there. Maybe I'll just stick to one question there.
Meta, I was asked to specifically not speak on that topic. This is an Arista analyst day. We'll stick to Arista topics. I think you all know Oracle is part of our cloud and AI titan category. They've been a very important and strategic partner and customer, and will continue to be. We look forward to a multi-year vibrant partnership with them. I'll leave it at that.
Next question is Michael Ng from Goldman Sachs.
Good afternoon. Thank you so much for the question. I wanted to ask a little bit about the traction that you're having with NeoClouds. Chantelle, during her presentation, talked a little bit about there being a point in time when those conversations occur where they're seeking best-of-breed. What milestones are you looking at to measure your traction with these types of customers? Just a quick follow-up, if I could: the gross margin range over the long term of 60% to 64% is a little bit wider than you gave at the last Analyst Day. Maybe you can talk about some of the thinking in providing that wider-than-historical guidance. Thank you.
Yeah, sure. Happy to help. If anyone has any comments, jump in. From the NeoCloud perspective, I think there are a few things we're doing intentionally. One is to ensure we're going out proactively to understand where the opportunities are from a global perspective, because a lot of these conversations are global and international. Fortunately, we have some coming to us first because they understand, and Tyson, you could probably speak to this, all of the great experience that we have with the hyperscalers. They want that. They want to understand: how do I get that quickly? How do I get that outcome? How do I get that performance?
If I can jump in on that, I think that's exactly right. What I see when I'm talking to NeoClouds is the vision, the desire to operate at cloud scale without having built that thousand-person team or a full custom management solution. They need help with the automation. They need help with the visibility designs, with the CloudVision framework, because they don't have the time to build all of that themselves the way the hyperscalers have. We're seeing very good traction there.
Yeah, I think the only thing I would add on that is, you know, if you look at it then through a kind of a similar lens to full frontal assault, I think we want to win every one of these deals. We have the stack to do it, and there's recognition of that, I think, amongst all the folks who are investing in AI, whether they're NeoClouds or even enterprises. It's for all the reasons Ken talked about, but it's also all the hardware innovation that was talked about earlier. I can say from firsthand experience, making selections around who your suppliers and partners are going to be, for me, in prior lives has generally been pretty easy working for big companies. You take that experience, and it's accretive to future experiences. That doesn't go unnoticed amongst industry players, and I think that helps build momentum.
It's the credibility of the team that's doing a lot of work here, and I think people are getting wise to it. I think we're evolving kind of how we're thinking about it and engaging here. I think, as I said, we want to win all these deals. We're going to go out and get after it. I'll let Chantelle talk about the margins.
Oh, you don't want to take that question? OK. I think for the margin range you spoke of, the 60% to 64%, that's absolutely to allow ourselves room at this point in time, this far away from that time period, to see what the cloud, NeoCloud, and AI mix will be in that number. That's exactly what that's giving us room for, nothing more than that. It assumes the current tariff scenarios, et cetera. It's really just that mix. How big will cloud AI be? Because that has a different mix than enterprise AI in those years.
Wonderful. Thank you.
Thank you.
Thanks, Chantelle. We'll go to Tim Long from Barclays.
Thank you. Sorry, sticking with AI, kind of related to that last question, and maybe I'll go back to Meta's question. The very impressive growth: could you talk a little bit about the distribution and how diverse that revenue outlook will be next year? Obviously, you have a few very large Cloud Titan customers. Just curious, when you're looking at the growth into next year, do you think the AI bucket will be more diversified? And the question from before, related to that: curious about your outlook on optical circuit switching and the impact on the spine part of the network, if you could touch on that as well.
I'll take the first part, and Jayshree will take the second. For the first part, the diversification: I think your question is which kinds of customers will be delivering that AI target of $2.75 billion in 2026. That's going to be a combination of the ones that you know, the large hyperscalers. If you recall back to our last earnings and our prepared remarks, we talked about 25 to 30 enterprise and tier 2 AI customers, and that list is growing. It'll be part the hyperscalers you know and part this growing enterprise and tier 2 market. We're very excited about that. Jayshree will go on, too.
I do want to add that while these 25 to 30 will be meaningful, the large hyperscalers, or Titans, as we call them, are going to be significant contributors to that 2026 number because they're building such large clusters. Coming back to optical switching: gosh, it's been around for a long time. We're certainly aware of one customer that has deployed it. We're not seeing it as mainstream. In fact, I would go the other way and say customers are looking for pluggable choices, whether it's pluggable optics or pluggable copper, and not locking themselves into one type of technology. We don't see that as the mainstream way to scale up or scale out. We do see electrical switching going all the way. When they're trying to determine distances and flexibility of layer one capabilities, certainly different types of copper and optics come in.
I wouldn't say that's a predominant architecture.
Thanks, Jayshree. We'll go to David Vogt from UBS.
Thanks, guys, for taking my questions. I have two, if you will. Jayshree, Andy talked a lot about power consumption from your customers. What are the practical implications from a competitive dynamic perspective, from a power perspective, if we go from three-tier to two-tier, pluggables to CPO, LPO? Talk about where you see yourselves performing competitively relative to your peers. The second question is for Chantelle. You mentioned a very small contribution from these 25 enterprise customers. Over the longer term, what's the roadmap? What are the hurdles for enterprise to be a much bigger portion of the mix, particularly on enterprise AI, not on the campus side, but more on the AI side? Thanks.
Do you want to do that first?
I'll do the first one, yeah. You should think about a three-tier network as being, in physical space and power consumption, roughly 20% to 40% less efficient than a two-tier network using the scalable solutions that Arista has. That's a big deal. Back to optical circuit switching as well: customers want to spend every penny that they can on GPUs and nothing else. The pressure is there: power optimization, floor planning, space planning, density of network. That is where Arista really does shine. I think you can go across all of the competing solutions out there, and you can drive and fine-tune. LPO is not broadly qualified elsewhere. If you want to do white box, even for a small cluster, those savings are meaningful for those customers, and they're waiting on the ecosystem to catch up on the software side to be able to enable them.
We invest a lot in qualifying optics, pushing ahead, driving all that efficiency. The 20%-40% matters. Those are deal-making decisions, right? They look at those, and they say, OK, that's a difference maker. We think we fare very well against competitors in that regard.
Just to add to that before Chantelle goes: in my LPO slide, I showed you that on top of reducing the tier, if you add linear drive optics and get rid of the DSP where the distances allow, there's another 20% of power savings, $3 million-$5 million a year. Add that up, and it adds to a lot of money, right? That's based on, I think my slide showed, a 1,000-switch configuration, which is quite small, actually, in the large scheme of things. You can get a real collapse of OpEx by reducing tiers and a real integration of optics to save money and power in a very significant way.
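[Editor's note: to make the order of magnitude concrete, here is a minimal back-of-envelope sketch in Python. Only the 1,000-switch configuration and the $3 million-$5 million range come from the remarks above; the port count, DSP wattage, and fully loaded cost per watt are illustrative assumptions of mine, not Arista's figures.]

switches = 1_000           # the roughly 1,000-switch configuration cited above
ports_per_switch = 128     # assumed high-radix 800G chassis
dsp_watts = 8              # assumed DSP share of an 800G pluggable module's draw
cost_per_watt_year = 3.5   # assumed fully loaded $/W/year (energy, cooling, provisioning)

watts_saved = switches * ports_per_switch * dsp_watts    # about 1.0 MW of DSPs removed
dollars_per_year = watts_saved * cost_per_watt_year      # lands in the $3M-$5M range
print(f"~{watts_saved / 1e6:.2f} MW removed, ~${dollars_per_year / 1e6:.1f}M per year")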
I think, at least, I'll start, and I'm sure Todd will have a view specifically on campus and AI. Generally, for enterprise and AI, what we hear from our customers is just some growing pains. There's definitely intention, even, so far, some mandates. You hear some CEOs in some of the enterprises mandating an AI outcome or an AI implementation. What do we hear and see? Is it on-prem or off-prem? Is it in the cloud? Are we going to have it here? Is it training? If it's inference, is there a specific ROI that the board's looking for? All those things just take growing pains, take time. We see intent. We see mandates. We see that there's a little bit of confusion, congestion on how to make that happen. There's definitely intent in the conversations.
I'll give you some examples. The education and university systems are big adopters of AI. The banks, and you guys probably know this better than I do, the financial institutions, are looking at AI. Health care is looking at AI. There's definitely intention there. I do think there's a little bit more scrutiny on the ROIs, on the use cases, and probably expectations in some of their future years about what cost they expect to take out on the employee side with AI. I think that's where the rubber will meet the road. Do you want to talk about the edge, maybe?
On the enterprise AI deployments, we're engaged in many, many of those conversations. Largely, the enterprise is making a build versus buy decision. These are still early days. They can start off by buying AI capacity from the Titans, from the NeoClouds, and make those decisions later on. We plan on being the premier solution for them whenever they choose to get going. They're making a build versus buy decision right now.
One quick comment I can't resist on the two-tier versus three-tier. This is a topic we've been wrestling with since, I don't know, 2010 or something like that. Our success in the hyperscalers is very much related to the success we've had building these very wide, very flat networks, because two-tier is so much cheaper than three-tier. If you get a high-radix switch, if you get the modulars that have the internal fabric, our distributed Etherlink switch, and the rapid reconvergence routing stack, these things work together. AI, of course, on both the front end and the back end, requires these same architectures. It fits really nicely into something we've done for many, many years now.
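[Editor's note: a rough counting exercise shows why flat wins. This is a textbook non-oversubscribed Clos sketch of my own, not Arista's sizing math, with radix values picked purely for illustration.]

def two_tier(r):
    # r leaves (half of ports down, half up) and r/2 spines, no oversubscription
    return r * (r // 2), r + r // 2          # (hosts, switches)

def three_tier(r):
    # classic r-ary fat tree: r pods of r switches each, plus (r/2)**2 core switches
    return r ** 3 // 4, 5 * r ** 2 // 4      # (hosts, switches)

for r in (64, 512):
    (h2, s2), (h3, s3) = two_tier(r), three_tier(r)
    print(f"radix {r}: two-tier {s2 / h2:.4f} vs three-tier {s3 / h3:.4f} switches per host")
# Two-tier needs 3/r switches per host versus 5/r for three-tier: 40% fewer
# boxes to buy and power, roughly in line with the 20%-40% figure cited earlier.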
Thanks, Ken. We'll go next to Tal Liani from Bank of America.
Hi. I'm not going to ask you about the margins. I want to ask you about Blue Boxes. Does it open opportunities with customers you're currently not serving, meaning customers who are buying white boxes, who have a white box philosophy? Can you penetrate these customers with Blue Boxes? Second, you touched on it when someone asked here about Blue Boxes, but how much of an opportunity for Blue Boxes do you see outside of the big cloud Titans? Is it at all addressing second-tier baby clouds, even the enterprise? Or is it strictly or mostly for the large clouds? Thanks.
Thanks, Tal. You can always ask me another question on margins. I think the way to look at Blue Boxes is, there's no doubt that if people have the staff, it's a lot easier to do, right? No question about that. It is naturally appealing to our large cloud Titans and AI Titans for very specific use cases. This is why you've seen us largely deployed there. I think there is an intriguing element for some of the smarter enterprise staffs or NeoClouds who want to try that for choice and flexibility, where they want to lean in on their expertise. They may not have the staff to do that. We are starting to see that. It's not meaningful yet in numbers, but it is an important innovation area and sector for us, so that they have choice and flexibility.
I have in my mind a very good example of a customer who started with a white box, couldn't get it to work, and has now adopted our Blue Box. It all happened in a span of six months. It'll probably show up; right now they're in lab trials, and we'll see the deployment next year. When it happens, it can happen either because they're already an existing customer or, as in this particular case, because they call me directly. They said, we can't get this to work. Can you help us? We helped them. They jumped into it immediately. I think there's a perception that Arista Networks only does premium and white boxes are only cheap. Nobody knows us yet in this hybrid state. We're hoping there'll be more and more of those customers and use cases. Is there a target for the contribution of Blue Box?
Is that what you said? It's in my $2.75 billion target. How much of that it'll be, I don't know yet. We'll see. Please refer some customers to us.
Thanks, Jayshree. We'll go to Simon Leopold from Raymond James.
Thank you very much. I wanted to ask about your purchase commitments, because over the last several quarters, they've been growing significantly. It doesn't look like we're in a supply chain crisis. How should the analysts think about it? I think in the most recent quarter, it was up more than 70% year-over-year, yet you're only forecasting 20% growth. How do we align these commitments? How much of this is safety net? What constraints are you facing? How should we think about those numbers?
Yeah, it's a great question. Thank you. I would say some of it's a safety net, and some of it is to ensure we have the capacity, but that's not the majority. You have to think about, and I think, Jayshree, you were referring to this earlier, that the journey from purchase commitment to transaction to acceptance to showing up at Arista can take a few years. Is it indicative of future transactions? Absolutely, or else we wouldn't be doing them. You just need to be a little cognizant of the timing, in the sense that it's not 12 months. We're talking 24, maybe 36, at the outside. It's a longer lead time before it shows up as revenue.
That's a very good point, Chantelle. In addition, I'd just add that besides the multi-year cycle, that's not a one-year cycle, many of our components have greater than a year of lead time. Many are brand-new products, and we can't buffer them enough if a customer suddenly wants them. We're trying to lean into more satisfaction of lead times, particularly in the enterprise and campus segment. It's easier to plan in the data center because we know they have something going, but it's harder to plan in the enterprise and campus. You'll see us leaning in on more investments in purchase commitments, both in AI, where things spin up suddenly, and in the enterprise, where there's more of an expectation of shorter lead times.
OK. Thanks, Jayshree. We'll go to Atif Malik from Citi.
Hi, Atif.
Hi. Thanks for doing a super informative session. I just have kind of a dumb question. When Broadcom talks about SUE, or Scale-Up Ethernet, and you guys are talking about Etherlink, they both have Ether in them. Can you talk about the difference? Are they trying to take the customers down a different path than you guys?
Yeah. Broadcom's SUE, Scale-Up Ethernet, is a concept spec on how Ethernet can be scaled. We are huge supporters of that. Arista's Etherlink is our AI portfolio for scale-up, scale-out, and scale across. We can optimize all of our features and our hardware for Etherlink. It's our branding name for our portfolio of products using Broadcom chips and Broadcom implementation.
I want to comment on one specific technology, which is the distributed Etherlink switch, a scheduled cell fabric. That makes use of a very different technology than Scale-Up Ethernet. Scale-Up Ethernet is all about optimizing latency, minimizing the packet sizes, and cutting the delay down to nothing, because the back and forth between these components within the GPU chassis is everything. The distributed Etherlink switch is all about scale-out: how do I get the most nodes connected through a shared fabric without creating any bottlenecks? They're actually very different technologies addressing different parts of the AI use case.
Thanks, Ken. Any other questions in the room? Right over there.
Thank you for taking my question. This is Yung Pu, from BNP Paribas, for Carl Ekman. I have a question about your CPO strategy. I think you recently discussed, maybe last week, that you are agnostic to CPO and can provide it if customers want it, which I interpret as meaning you have the ability to make a CPO switch using a merchant ASIC, probably from Broadcom. I believe some of your peers are already doing that. Just before, Andy was talking about LPO and implied that CPO switches are vertically integrated. Can you maybe help me understand that? What's your CPO strategy? What's your progress on that? Are you making any samples?
We thought we'd bring Andy Bechtolsheim back to answer the question. Go ahead, Andy.
We are non-religious about CPO, LPO, whatever it is. However, we are religious about one thing, which is the ability to ship very high volumes in a very predictable fashion. To put this in quantity numbers here, the industry expects to ship something like 50 million OSFP modules next calendar year. The current shipment rate of CPO is zero, OK? Going from zero to 50 million is just not possible. The supply chain doesn't exist. Even if the technology works and can be demonstrated in a lab, to get to the volume required to meet the needs of the industry is just an incredible effort. If you think of the math, how many modules you have to make every day, every hour, to hit this 50 million going to 100 million quantity, it's surreal, right? It works in the context of pluggable modules.
The industry ecosystem is not ready for this. OK, I'll leave it at that.
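[Editor's note: spelling out the rate behind Andy's 50 million figure, simple arithmetic on the number stated above.]

modules_per_year = 50_000_000
per_day = modules_per_year / 365     # roughly 137,000 modules every day
per_hour = per_day / 24              # roughly 5,700 every hour, around the clock
print(f"{per_day:,.0f} per day, {per_hour:,.0f} per hour")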
Thank you, Andy. Well said.
Thank you very much.
I think the other piece I'd add to what Andy said is, look, we're all about standards. If you look at today's CPO implementations, you've got one version from NVIDIA, you've got another version from Marvell, and you've got a third version from Broadcom. CPC, co-packaged copper, and pluggable optics deliver all these capabilities richly without locking you into one implementation, which our customers care about. We'll embrace them all when they get a little more mature and ready.
As for when customers will want them: customers aren't really asking us for them. They like our solution. 200-gig SerDes is fine. We can scale it. It's reliable. They know how to operate it. I think customers struggle with the CPO paradigm because, from an operator perspective, what we call the blast radius, the unit of failure and the size of that failure, increases. There's genuine concern about that. I think a lot of the early CPO interest is still in trials, to understand it. The supply chain has to catch up. If the customers get comfortable with it and say, we want it, obviously, the rest of the industry is going to get behind it, including us.
Thank you very much. I have a follow-up here, if I can. I'd like to ask about your 2026 target from a different angle. You have this massive $4 billion of deferred revenue, including $1.8 billion of product deferred. Let's say just the product deferred gets delivered in 18 months, and the service deferred comes out over three years. If we do that, you already have something like 20% revenue growth locked in.
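[Editor's note: a rough reconstruction of the questioner's back-of-envelope in Python. The deferred figures are as stated in the question; the revenue base is my own assumption, and, as the response below argues, deferred refills rather than simply emptying out.]

total_deferred = 4.0       # $B, as stated in the question
product_deferred = 1.8     # $B, assumed released over 18 months
service_deferred = total_deferred - product_deferred   # $B, assumed over 3 years

annual_release = product_deferred / 1.5 + service_deferred / 3.0   # ~$1.9B per year
revenue_base = 9.0         # $B, an assumed ballpark for the current run rate
print(f"~{100 * annual_release / revenue_base:.0f}% growth 'locked in'")   # ~21%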
Are you assuming all $4 billion comes out in 2026? You shouldn't.
If we just think about the...
100% is not going to empty out, right? Just so we're thoughtful about that.
Yeah, not empty out, but say the product deferred is...
I think the important way for you to think about this is that deferred will come out and deferred will go in. There'll always be a net deferred. I realize you think it's particularly high now, but if we emptied all of that, why isn't the number $15 billion or whatever you want it to be? Let's be realistic, right? Let's be realistic that there'll always be deferred, short term and long term. Let's also be happy that we are agreeing to grow to $10.5 billion of revenue two years ahead of schedule. If we can do better, we absolutely will, which would be a sign of more customer momentum. I think it's too early to tell when 2026 is four months away and the end of 2026 is 15 months away. We'll see how it plays out. Deferred will come in, and deferred will go out, next year and the year after and in subsequent years.
Understood. Thank you very much.
Thank you.
We've got time for one last question.
One last one, OK.
Oh my god, I got the last question?
OK.
It's got to be an important question.
Better be a good one, Nick.
All right. First of all, a really great basket of innovation. Really, congratulations to all of you guys. I do have one question on NetDI and AVA. I think it's one of the best innovations that I heard about today. The question is, why centralize all that data? Why didn't you do an agentic model where you put agents into the switches and have AVA interact with the switches, versus centralizing? I'm sure you have some really great reasons for it. That's just curiosity.
Yeah, the reason for that is that the decision-making around what an AI agent is going to do depends on the full network state. You just don't have the perspective from one device to know what's happening in the overall network and what actions make sense to take. We absolutely give AVA tools to reach into a switch. In fact, you saw in the demo where one switch was sending pings across the network to another endpoint. The AVA agents are able to take action from the point of view of a given switch when that's what's called for by the overall scenario. But pushing the decision-making out there isn't needed or helpful, because full network context is required to make good decisions.
Nick, just to add to that on NetDI: NetDI is fully distributed. It's not just centralized. Every switch has an expression of a diagnostics infrastructure, and then, in aggregate, we can manage it. I think you need to think of it as centralized management with distributed data and control. There are a lot of mini NetDLs, or NetDBs, and NetDIs sitting in the switches. To take any kind of sensible action, you've got to do it across the entire scale of the network. Thank you, Nick. It was only fitting that you and I talked about leaf-spine, and now we end with you on beyond the spine.
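[Editor's note: a purely hypothetical sketch of the pattern described above, centralized decision-making over aggregated state with per-switch tools for execution. None of these names reflect Arista's actual AVA or NetDI APIs.]

from dataclasses import dataclass

@dataclass
class SwitchState:
    name: str
    link_errors: int

class CentralAgent:
    """Decides from full-network state, then executes via per-switch tools."""
    def __init__(self, fleet):
        self.fleet = fleet                   # aggregated state from every switch

    def diagnose(self):
        # The decision needs the whole picture: which device is the outlier?
        worst = max(self.fleet, key=lambda s: s.link_errors)
        return self.ping_from(worst.name, "spine-endpoint")   # reach into that switch

    def ping_from(self, switch, target):
        # Stand-in for a per-switch tool (e.g., an on-box ping) invoked centrally
        return f"{switch} -> ping {target}"

fleet = [SwitchState("leaf-1", 0), SwitchState("leaf-2", 42)]
print(CentralAgent(fleet).diagnose())        # acts from leaf-2's point of view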
All right, thank you, everyone. That concludes our panel. It also concludes our Analyst Day. Thank you so much for joining us.