Okay, morning, everyone. I think we're about to get started. Just a brief intro. I'm Mark Newman, Bernstein's US IT Hardware Analyst. I'm joined on the stage at the far end by my colleague, Daniel Zhu, who is Bernstein's new Networking Analyst. Daniel and I are delighted to welcome Arista's John McCool. John is Arista's Chief Platform Officer and Senior Vice President, as well as Special Advisor to the CEO. John has been at Arista for nine years and, prior to Arista, had more than 35 years of executive networking experience, including serving as SVP at Dell EMC and 17 years at Cisco.
On my left here, Rudolph is Arista's Head of Investor Advocacy and has been at Arista for nine years, with another 13 years of networking and cybersecurity experience prior to Arista. Given the huge shifts we're seeing in the space, it's a great honor to have both of you here today. Thanks very much for joining us... [crosstalk]
Thank you for having us.
I'll get started, and then I'll hand over to Daniel. Perhaps a very high-level question just to kick off on AI. The AI opportunity is clearly huge. Can you talk about how you're viewing the AI opportunity, how it's evolving, and risks and opportunities for Arista?
Sure. You know, I think clearly Ethernet has a place in this AI opportunity. If we went back a couple of years ago, there was a question: what role does Ethernet have in what we call the back-end network, connecting GPUs within clusters? We've made substantial progress, not just as Arista, but as an industry, in the standardization and development of that back-end network.
The interconnect of these clusters, which we call the front-end network, is also growing tremendously, along with the challenges of power consumption. People want to connect multiple clusters across large numbers of physical locations to access power effectively. That's the framework in which we think about the Ethernet opportunity.
I mean, I think the other thing that is clearly manifest is that these networks are getting larger, right? Apparently you can't throw enough compute at the problem. Now, how do you build these larger and larger networks that are still just as efficient? I think there's lots of room for innovation there. We've done well for the last 20 years by innovating and continuing to drive the performance bar higher and higher, and in AI networks, that shows up even more.
Thanks for that. I guess kind of adding on to that, how are you seeing the evolution of the AI opportunity between front-end and back-end, and also between scale up, scale across, and scale out?
Sure, you want to go first...? [crosstalk]
Sure, yeah, I can start. One is, we're, I suppose, lucky in the sense that our products are very fungible, right? The same products can work in a front-end network or a back-end network, and what that gives customers is flexibility. It's a little bit harder for us now to parse out what is front end versus back end. And customers, I think, are increasingly finding that these networks are kind of blurring together, right?
You know, I used this analogy in, in one of our group meetings earlier today, where, you know, if you're on your Instagram app and you're trying to look like a cowboy, you know, is that front end? Is that classic data center? Is that back end?
Like, where does that line get drawn, right? That's kind of one thing. The good news is all of that growth has been great for our business. Talking about scale up, scale out, and scale across, a couple of things to remember: scale up today is really within the context of the rack, but it's starting to go beyond a rack, right? This is the highest-speed network, if you will, the one that connects to the memory. But it is also, in some sense, the simplest network, because there's not as much complexity involved there.
That's an opportunity that today is really captive. Since the bulk of the accelerators being used out there come from NVIDIA, it's really NVLink, a proprietary protocol, that's used. There's not much of an opportunity today, but there are some efforts underway; Ethernet for scale-up networking is a term that you'll hear quite a bit. We view that as an opportunity for Arista, probably in 2027 at the earliest, not something that we include in our TAM right now. Scale out has been the bulk of that AI revenue.
That's where you're connecting across racks, maybe even across buildings that are relatively close to each other, right? Scale-across is something that's come up fairly recently, let's say, over the last year, but it's frankly a concept that's existed for a while, right? Because if you have data centers across multiple locations, how do you interconnect those data centers?
AI brings some unique challenges there; you've got to think about things like routing and encryption and traffic engineering once you start going between sites. That, for us, is a very exciting opportunity because it takes a unique set of platforms, and there are not many companies that have those.
It plays to a strength we've had in traditional cloud, where we saw the same phenomenon: how can I connect as many CPUs as I can in a data center? Once we were able to scale ECMP very wide, a single data center couldn't provide enough power. That was actually our impetus to develop routing, to add another layer across data centers. The same thing is happening with GPUs, and power is even more constrained, so you want a broader number of data centers that look like one logical GPU cluster.
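For readers less familiar with ECMP, here is a minimal sketch of the idea John is describing: hash each flow's five-tuple and use the result to pick one of many equal-cost uplinks, so flows spread evenly across a very wide fabric. The field choices and spine names are illustrative, not Arista's implementation.

```python
import hashlib

def pick_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
    """Classic ECMP: hash the flow five-tuple so every packet of a flow
    takes the same path, while different flows spread across all paths."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return next_hops[digest % len(next_hops)]

# Illustrative: 64 equal-cost spine uplinks; a given flow always maps to one.
spines = [f"spine-{i}" for i in range(64)]
print(pick_next_hop("10.0.0.7", "10.0.9.3", 51515, 443, "tcp", spines))
```

Because the hash is deterministic per flow, packets within a flow stay in order, which is why this scales so wide inside a single data center before power, not fan-out, becomes the limit.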
Got it. You know, just double-clicking on that, because I think the scale-across opportunity is a little bit less well understood by investors. How do you think investors should think about sort of dimensioning that opportunity? How do the competitive dynamics differ from, you know, the opportunities within the four walls of the data center?
Sure. Maybe I'll start with the competitive dynamics. These are highly sophisticated networks: they need fast routing convergence, they need capabilities like end-to-end encryption, and they need support for multiple optics types, with variations depending on the physical locations of these different data centers. It's extremely sophisticated.
It plays to the strength that we have in our modular platforms, which have a Virtual Output Queuing architecture with deep buffers to deal with the distances involved, and it plays to our traditional strengths. As Rudolph said, very few suppliers can compete in that kind of environment.
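As a rough illustration of the Virtual Output Queuing idea mentioned above: each ingress keeps a separate queue per egress port, so one congested output can't head-of-line block traffic headed elsewhere. This is a conceptual sketch; the credit-grant scheduling shown is a simplification, not Arista's scheduler.

```python
from collections import deque

class VOQIngress:
    """Toy Virtual Output Queuing: one queue per egress port at each ingress,
    so a backed-up egress can't head-of-line block traffic for other egresses."""
    def __init__(self, num_egress_ports):
        self.voqs = [deque() for _ in range(num_egress_ports)]

    def enqueue(self, packet, egress_port):
        self.voqs[egress_port].append(packet)

    def schedule(self, grantable_ports):
        """Send one packet toward each egress that granted credit this cycle."""
        sent = []
        for port in grantable_ports:
            if self.voqs[port]:
                sent.append((port, self.voqs[port].popleft()))
        return sent

ingress = VOQIngress(num_egress_ports=4)
ingress.enqueue("pkt-A", egress_port=0)   # port 0 is congested, no grant
ingress.enqueue("pkt-B", egress_port=2)   # port 2 is free
print(ingress.schedule(grantable_ports=[1, 2, 3]))  # pkt-B still goes out
```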
Just a reminder for the audience: we do have an app, and you can submit your questions there. We'll make some time at the end for audience questions. You can enter them in the app, or if you prefer to ask live, we'll offer an opportunity to do that with a microphone as well.
Yeah... [crosstalk]
... [inaudible]
I think kind of, you know, speaking a little bit about architectures, I think within AI, we've seen a shift from AI model training being very dominated by pre-training to one that has more of a component of kind of post-training and test-time compute, as well as inferencing ramping a little bit more. How does that impact sort of the network architecture design that you guys are seeing, and how does that impact your networking opportunity?
I think the increase in this post-training activity brings broader storage and more machines into play, which really drives that next layer of the architecture and that front-end network, as well as pulling data from different data centers and that scale-across capability. When we move to an inferencing environment, we think a little bit about the user interaction, which adds a latency component to the question. You input something, it has to be computed in the back-end network, and then it gets back to you.
It really interacts across the whole dimension of the network, and we see, you know, enterprise customers thinking through their next-generation campus networks and how that will impact them. We had, you know, one discussion in the early days of training, where one of our large hyperscalers talked about the things they hadn't thought about.
They were so focused on that back-end network, they really hadn't thought about the dimensioning of the front-end network and how that was impacted by pulling in the storage data. One thing I thought was interesting was the wide-area network. If all the content that we're getting from AI is individualized now, we're not all watching the same cat video; we're watching different cat videos. It can't be cached.
Some of these wide-area networks were designed with caching capability to, you know, kind of eliminate some of the bottlenecks on the back end. There'll be pressure all the way to the user, you know, as this becomes more of a widespread opportunity on inferencing.
There are also things like RAG that are, you know, kind of expanding the aperture of what connectivity is needed, right? It's not as simple as, "Look, I've got these 50 or 5,000 or 50,000 GPUs that I'm just trying to interconnect," which is, in a sense, a simpler problem.
Once you start having to go beyond just the resources that you control and you own, and you're trying to connect to a service on the internet, maybe you're trying to connect to your database that sits in a different location, it just opens up more complexity. I think one thing that, you know, I'd say as a takeaway is complexity has been Arista's friend, right? Because more complex networks require higher levels of R&D, higher levels of engineering. You know, all the way from Andy and Ken, who started this company, I think we've shown that, you know, we can innovate, you know, better than most.
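As an aside on how something like RAG widens the connectivity footprint: each query can fan out to a retrieval store that may sit in another data center before the model generates anything. A purely hypothetical sketch, where embed, VectorStore, and llm are stand-ins rather than real services:

```python
def embed(text):
    """Stub embedding: stands in for a real embedding model call."""
    return [len(word) for word in text.split()]

class VectorStore:
    """Stub retrieval service: in production this is often a network hop,
    possibly to a database in a different data center."""
    def search(self, query_vec, top_k=3):
        return ["doc: switch specs", "doc: GPU cluster notes", "doc: optics FAQ"][:top_k]

def llm(prompt):
    """Stub generation: stands in for the back-end GPU inference cluster."""
    return f"answer grounded in {prompt.count('doc:')} retrieved docs"

def answer_with_rag(query):
    query_vec = embed(query)                      # local/front-end compute
    docs = VectorStore().search(query_vec)        # retrieval hop across the network
    prompt = "\n".join(docs) + "\n\nQ: " + query  # augmented prompt
    return llm(prompt)                            # inference on the back end

print(answer_with_rag("How are AI clusters interconnected?"))
```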
Is there still a lot of experimentation going on with the cloud customers in terms of how they deploy GPUs and networking, you know, in response to power and cooling requirements? Or are you seeing more of a consensus on the direction forward in terms of how...
I think... [crosstalk]
They use that?
There's experimentation on one angle: the intensity of the power consumption and the cooling becomes a question, even within a single customer, of where I'm deploying and what kind of capability I have in that data center. If you have a modern data center with liquid cooling and a lot of power, I'm trying to achieve the highest level of GPU density within that form factor. I may have legacy data centers that don't have any liquid cooling.
There, I'll still want to air cool but still have some capability. There's a broader diversity of physical form factors to meet the market's needs than I think we've seen before. I don't know whether that gets better. You know, people had started standardizing cloud on not just two-tier leaf-spine, but the width of the rack and x86 servers. There just seems like there's going to be more diversity, because we're on the edge of that power and cooling dynamic.
You know, some might take offense at the word experimentation, I guess, but the pace of innovation, which I think is the point you're making...
Mm-hmm.
is not slowing down, right? To give everyone an anecdote: we introduced our 400 gig platforms in 2019, and we introduced our 800 gig platforms in 2024, so call it five years. It's definitely not going to take five years to go from 800 gig to 1.6.
What that is leading to is a lot of experimentation and trying out different things, because ultimately, what they're trying to optimize is power utilization, time to first token, job completion times, being able to generate more tokens per dollar, more tokens per kilowatt. They're willing to try pretty much anything... [crosstalk]
The utilization of the GPUs.
Mm-hmm.
You want to maximize the utilization of the very expensive asset.
Exactly, exactly.
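To make those optimization metrics concrete, here is a back-of-envelope sketch. Every number is a made-up placeholder, not a figure from the discussion:

```python
def tokens_per_kwh(tokens_per_second, cluster_power_kw):
    """Tokens generated per kilowatt-hour of cluster power."""
    return tokens_per_second * 3600 / cluster_power_kw

def tokens_per_dollar(tokens_per_second, dollars_per_hour):
    """Tokens generated per dollar of hourly operating cost."""
    return tokens_per_second * 3600 / dollars_per_hour

# Hypothetical cluster: 1M tokens/s, 2 MW draw, $5,000/hour all-in cost.
print(f"{tokens_per_kwh(1_000_000, 2_000):,.0f} tokens/kWh")
print(f"{tokens_per_dollar(1_000_000, 5_000):,.0f} tokens/$")
```

Anything that raises GPU utilization, including a more efficient network, moves both ratios, which is why customers keep experimenting.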
I guess since we've talked a little bit about sort of the technology roadmap and 800 gig versus 1.6, you know, can you talk about sort of, you know, the timeline of that rollout and kind of what you guys are seeing, you know, with that sort of transition?
I think you just hit it. It's become more compressed; we saw 100 to 400 to 800 compressing, and it's going to go faster. We've seen a lot of announcements of silicon for 1.6. There's one thing that we have learned, and we've gotten this question before. By the way, when I came into the business, we had really secured a substantial position in market share at 100 gig, and everyone said, "What's going to happen at 400 gig? What's going to happen at 800 gig?" We've been able to layer on those technology transitions based on a consistent software architecture and methodology in development.
We'll always have to add some new things and capabilities as the speed increases, but it's been pretty straightforward, and I think we've led that deployment. The critical time is from first silicon to thousands of GPU connections. A lot happens in that window, so you'll see us tend to wait until we really have deployments before making announcements around the next generation.
Got it. Continuing with the technology roadmap a little: we're starting to see a lot more talk about co-packaged optics. In the past, I think you've talked about how the failure rates of optics are a big bottleneck for CPO adoption, and how replacing these DSP and retiming chips with linear-drive pluggable optics actually captures a lot of that; it's a pretty effective substitute. Is that still true with the 1.6 cycle, or are you starting to see more evolution here?
Yeah. I'd like to answer this question by coming back to why co-packaged optics began and the concept behind it. There are really two things that are intriguing about co-packaged optics, two problems it was trying to solve. One is power consumption. If I can shrink the distance, and also the number of chips, between the switch chip and the wire, whether cable or optics, I save power. That's a good thing.
The second problem was, in each one of these speed transitions, the laws of physics decrease the distance that I can run on a printed circuit board between the switch and the front end. The ultimate shrink is really putting optics on the die. I think there...
We would say there's some inevitability to the trend of shrinkage, but we've been able to defy that in a couple of ways. The first observation we had is that the improvement in the switch chip's signaling technology, which was designed for co-packaged optics, allows you to eliminate the DSP in the optics component itself and preserve the operating model that the hyperscalers use: pluggable optics from multiple vendors. If something breaks, I can replace it.
You get most of the power consumption savings that you'd have with CPO. We were able to push out that transition one, and we believe two, generations, but there will ultimately be a time when, given the laws of physics, bringing the optics closer to the switch chip is inevitable.
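A rough sketch of the power argument behind linear pluggable optics: removing the DSP from each module saves watts per port, which compounds across thousands of ports. The per-module wattages below are illustrative assumptions, not Arista or vendor specifications:

```python
# Illustrative per-module power (watts); real figures vary by speed and vendor.
DSP_MODULE_W = 15.0      # assumed: 800G pluggable with a full DSP
LPO_MODULE_W = 7.5       # assumed: linear pluggable optic, DSP removed

ports = 10_000           # a hypothetical AI fabric's optical port count
saved_kw = ports * (DSP_MODULE_W - LPO_MODULE_W) / 1000
print(f"~{saved_kw:.0f} kW saved across {ports:,} ports "
      f"({(1 - LPO_MODULE_W / DSP_MODULE_W):.0%} per module)")
```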
Okay.
One thing I would add is, you know, the biggest concern around co-packaged optics is that optics fail most often, as you pointed out. If an optic fails and the optic is actually on the switch, on the board itself, you now have to take the whole switch out of commission, right? That means every workload connected to that switch is paused, shut down, or has to restart, et cetera. The reliability requirements of the switch have just gone up exponentially, right? Again, complexity is our friend at Arista.
We've got the best, the sharpest engineers. We feel really good about engineering for that co-packaged era, but I think what customers are telling us is, "Kick that can down the road as much as possible," to John's point.
The fans, power supplies, and optics tend to be the things that people want to replace on the fly. You may get into a situation with co-packaging where maybe there'll be some improvements in the overall reliability of the lasers, and maybe networks will be designed to be redundant to some path failures, so you can delay the serviceability requirements of those switches, but there will have to be some operational adaptations.
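The serviceability concern can be framed in simple probability terms: if a switch co-packages many optical engines and any single laser failure implicates the whole box, the switch's annual service probability compounds. The failure rate below is invented for illustration:

```python
def annual_service_probability(per_optic_fail_rate, optics_per_switch):
    """P(at least one optic fails in a year), assuming independent failures."""
    return 1 - (1 - per_optic_fail_rate) ** optics_per_switch

# Assumed 1% annual failure rate per optical engine, 64 engines per switch.
p = annual_service_probability(0.01, 64)
print(f"{p:.0%} chance the switch needs service this year")
# With pluggables you swap one module; with CPO the whole switch is implicated.
```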
Got it. I guess continuing with the theme of, you know, emerging technologies, you know, sort of optical circuit switching is a technology which has actually been around for a while but remained fairly niche. At least we're starting to hear more talk about it, and, you know, I think in particular, there's talk about scale adoption by one of the hyperscalers. Can you talk a little bit more about what you guys are seeing in the space and sort of how that would impact Arista's business?
Yeah, I think you framed it well. There's been one customer in particular that's used optical switches in some large-scale deployments, and you can think about them as having massive scale. It's not equivalent to switching or routing; it's not a per-packet decision on which way the packets go. It's more of an interconnect construct. In some ways, think about replacing patch panels: being able to allocate and move different elements of your compute and GPU environment at a more granular level.
As some of these deployments and other customers get larger, they can have the benefits of that kind of technology. I think there are some elements of TAM growth, because more people will be able to use those technologies, but it's still a different animal than traditional switching and routing. If you want to add anything...
I think, to Mark's point, this fits in that experimentation bucket, right? Like, is there some amount of power savings? Is there some amount of performance improvement I can get? Even at this customer that has been using optical circuit switching for a while, once they get beyond a certain layer in the network, it is all Ethernet, right? Because the flexibility that Ethernet gives you is just tremendous.
The variability in the supply chain, the diversity of suppliers, all of that, I think, brings its own advantages. I don't think we view those as necessarily competing. I think the TAM for optical circuit switching is significantly smaller than for Ethernet switching.
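To make the distinction concrete: an optical circuit switch holds a static port-to-port mapping, like a remotely reconfigurable patch panel, while a packet switch makes a decision per packet. A toy contrast, with invented port names:

```python
class OpticalCircuitSwitch:
    """Like a reconfigurable patch panel: a static in-port -> out-port map.
    No per-packet decisions; reconfiguring changes the whole circuit."""
    def __init__(self):
        self.circuits = {}

    def connect(self, in_port, out_port):
        self.circuits[in_port] = out_port

    def forward(self, in_port, packet):
        return self.circuits[in_port]  # same answer for every packet

class PacketSwitch:
    """Ethernet-style: examines each packet and picks an output per packet."""
    def forward(self, in_port, packet):
        return hash(packet["dst"]) % 32  # toy per-packet lookup

ocs = OpticalCircuitSwitch()
ocs.connect("rack-1", "rack-9")
print(ocs.forward("rack-1", {"dst": "anything"}))   # always rack-9
print(PacketSwitch().forward("rack-1", {"dst": "10.0.3.4"}))
```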
Got it. You know, we've kind of been talking about a few technologies that sort of, you know, are real, but maybe less disruptive than some investors might fear. You know, are there any technologies that sort of are on your radar that you think kind of are being underappreciated by Wall Street?
I'd say we touched on this earlier. The scale-across opportunity, I think, is very, very exciting for us, right? To repeat a little bit of what we said earlier, it takes a unique set of technologies: it requires those deeper buffers, it requires that Virtual Output Queuing, it requires encryption.
Probably the biggest misconception I've heard out there is that people assume a switch port is a switch port is a switch port, right? To John's point from earlier, a low-latency switch port is very different from a scale-across switch port. That nuance, I think, sometimes gets missed.
Got it. I guess if we zoom out and kind of, like, talk a little bit more about the platform, right? You know, I think at the Analyst Day, I actually asked Jayshree, you know, "Why is it that Arista was essentially the only networking company that sort of entered the market after the dot-com bubble and saw sustained success?" She highlighted that data center switching has really been kind of the foundation of Arista's strength for a long time.
You know, that's a category which has been one of the fastest-growing and most important components in a data center, probably all the more so now. Can you talk about how you're leveraging that strength in switching to build a platform that supports other products and services?
Maybe just to amplify Jayshree's answer. In 2010, 2011, it's hard to imagine how small these hyperscalers were. It was a classic case of an underserved market and a new competitor coming in with hyper-focus, not trying to take enterprise networks or service provider networks, but serving the emerging hyperscalers. Their problem was, they had very smart people, sophisticated operators, but they were trying to connect together millions of compute nodes with a very small team.
How can I put my agent on your networking device like I do with my servers in a Linux operation, and build a centralized management stack? How can I have resilience and scale, and be able to scale out without bespoke, different operating systems? Those were the architectural premises that we developed at Arista.
The hyperscalers had their own centralized management stacks, and they encouraged us to be able to stream data to those stacks. We took that concept and built something called CloudVision, which is a management stack for people who don't have the wherewithal to build their own. And back to your question on AI: this ability to scale and manage is extending, as we speak, into campus environments.
If you think about a university, it's very easy to picture a new student coming in with three devices they roam with, right? Maybe an iPad, a laptop, and perhaps a watch, and they're moving around campus. Now imagine them all coming to see a sporting event that seats 20,000 people. All of a sudden, you're up to 60,000 MAC addresses in a very small place. It starts to feel like connecting GPUs and CPUs.
Mm.
We've taken, in our Wi-Fi stack, concepts that were developed and standardized in the data center, like EVPN and VXLAN, and with our Vespa architecture, using Wi-Fi 7, we're able to build these large Layer 2 roaming networks that fit the campus. It's a similar problem, except these people move around; they're not stationary like racks. The operating principles of a single management stack and a single operating system in your data center and campus are consistent with what we've done in the data center. It very much plays to this whole consistency and scale piece that we've focused on.
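For context on the EVPN/VXLAN piece: VXLAN (RFC 7348) wraps the original Layer 2 frame in a UDP packet carrying a 24-bit network identifier, which is what lets a Layer 2 domain, and a roaming client's MAC address, stretch across a routed fabric. A minimal sketch of the encapsulation:

```python
import struct

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348): flags byte 0x08 marks a
    valid 24-bit VNI; the result rides inside a UDP packet to port 4789."""
    assert 0 <= vni < 2**24
    header = struct.pack("!BBBB", 0x08, 0, 0, 0) + struct.pack("!I", vni << 8)
    return header + inner_frame

# Toy Ethernet frame: destination MAC + source MAC + payload.
frame = b"\x02\x42\xac\x11\x00\x02" + b"\x02\x42\xac\x11\x00\x03" + b"payload"
packet = vxlan_encap(frame, vni=10042)
print(packet[:8].hex())  # 08000000 + VNI 10042 in the top 24 bits
```

EVPN is the BGP-based control plane that tells every switch which MAC addresses live behind which VXLAN tunnel endpoint, which is what makes roaming at that scale tractable.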
Yeah. Maybe one thing that I'd add is, what customers really liked about us in the data center was also that the cost of ownership of an Arista network is fundamentally lower, right? It fails less often, it's easier to maintain, it's easier to operate, the level of automation, et cetera. If you think about it, who has even less money to spend on networking? It's the enterprises, because, you know, they might have one networking person.
If you're a large bank, for instance, which, you know, many of you know, probably work at banks, and you've got, you know, 5,000 ATMs, you don't want to be sending a tech to reboot a switch at every ATM every few weeks, right? I mean, it's just not cost effective.
You want that reliability, you want that centralized management, you want that automation, and I think that's the advantage we've been able to bring from our cloud learnings, if you will, to the rest of the world, right? We call it the modern operating model, and it's been very, very effective for customers.
It's hard to imagine now, but in networking, customers were very scared to upgrade because they could run into regressions: something that used to work, I've upgraded it, now I broke it. With the concerns around security and security alerts, being able to move quickly is really important. It was really critical in the data center, but it applies to the entire network.
Got it. Now that we're talking a little bit more about enterprise: where are you seeing enterprise AI adoption inflect the most, and how do those use cases differ from what we've seen with the cloud titans?
Yeah, maybe I'll start.
Sure.
You can add in here. I think we definitely see different verticals with different approaches. The consistency is that an enterprise may have certain amounts of data that they want to monetize or take advantage of, and they build out their own AI pieces. In the financial vertical, it could be around security, credit card handling, and fraud. In healthcare, it could be detection of anomalies around your health, or radiology. We see these vertical applications emerging in AI specifically where a company may have a lot of data that they want to take advantage of.
Yeah, I mean, I think the healthcare example is a great one, right? We have customers that will have conversations with us like, "Look, we want to take advantage of this AI wave, but we're concerned about our patient privacy. How can we ensure data is segmented correctly, and there's encryption on the wire, and we've got threat detection on the wire, et cetera?" We've got a whole slew of services and software capabilities now that we've built within the switches that can provide that, right?
I'd say the biggest trend we're seeing with customers is that fewer of them are trying to build their own training clusters, because I think they recognize that the amount of CapEx investment it takes is incredibly high, right?
What they also realize, and this touches on something John said earlier, is inferencing is very much a latency-driven game, right? If you're going to ask a question and it's going to take two hours to come back with a response, you might as well not have it, right? How do you optimize that? Well, you can try and bring AI to the edge, if you will.
You're already starting to see questions about, okay, you know, is there going to be an accelerator on each of these devices you have running in front of you? You know, can we bring the inference compute as close to the edge, right? Whether it's putting it in a data center that doesn't belong to you, but closer to you, or putting it in your own data center.
Those are some of the trends we're seeing. I mean, we talked about the over 100 800-gig customers that we have now, and that's from just about a year of the product being out there, right? Obviously, there aren't 100 hyperscalers, so there's a small number of hyperscalers in there.
There are some of these neoclouds and specialty clouds, but also a fair amount of enterprises in there, in the verticals that John mentioned, right? Insurance, financial services, healthcare, the educational sector is up there, manufacturing; some of those folks are trying to think about the whole AI factory concept.
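A rough sketch of the latency-budget logic behind pushing inference toward the edge; the distances and compute time are illustrative assumptions:

```python
def round_trip_ms(distance_km):
    """Approximate fiber round-trip time: light in fiber ~200,000 km/s,
    so roughly 1 ms of RTT per 100 km, ignoring queuing and hop delays."""
    return 2 * distance_km / 200_000 * 1000

# Hypothetical comparison: a far-away region vs. a nearby edge site.
compute_ms = 120  # assumed model compute time to first token
for site, km in [("far region", 4000), ("edge site", 50)]:
    print(f"{site}: ~{round_trip_ms(km) + compute_ms:.0f} ms to first token")
```

The propagation term is fixed by geography, which is why moving the inference compute closer to the user is one of the few levers left once the model itself is tuned.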
Got it. You know, we spent a lot of time on AI, and we kind of have touched on this already, but, you know, campus networking has obviously been a huge area of focus for Arista. Can you talk a little bit more about sort of, you know, the value proposition and kind of the go-to-market in scaling that business?
Yeah, I think it's important to understand that, from an Arista perspective, campus is part of our enterprise go-to-market. We have very intimate engineer-to-engineer relationships with the large cloud providers, and we've had, from the beginning of the company, starting with the financial vertical, an enterprise go-to-market. The only thing we could sell when we started was low-latency switches. We added the broader data center products, and we've continued to add to that.
We don't necessarily go to a customer or target a customer because they're a campus player. They buy a lot of networking equipment, networking is important to them, and then our sales team will look for whatever opportunity exists within that account.
Maybe their Wi-Fi refresh is coming up first, or maybe it's a data center opportunity, but they have more arrows in the quiver, if you will, to go build out an enterprise network. That's how we go about it. We were pushed by our data center customers: "Why aren't you going into campus?"
They want the same benefits of a single operating system and the same operating model in the campus pieces. We took that to heart. When we were ready, we built our campus LAN switches organically. We did add a Wi-Fi component that, I would say, we've Arista-fied into the architecture, and we've gone about it that way.
I mean, I think it's important to keep in mind that campus dynamics are also slightly different. The refresh cycles can be seven to 10 years; in the data center, you'll typically see a refresh cycle of even three to five years. That's one thing. It's also very rare that someone says, "Okay, you know what? Come in and replace my entire campus, my switching, my Wi-Fi, my SD-WAN," right? It's much more piecemeal, right?
Like, they'll say, "Okay, you know what? You won this business, but we're gonna start with you in one building, and then maybe a few months later, you get the second building." It's a much slower grind in that sense, right? With that said, I think we're very happy with our growth, right?
I mean, we told the Street that we were targeting $750 million exiting the year, and then we acquired VeloCloud, which John can talk about in more detail, since he's heavily involved there. So, we bumped it up to $800 million. We were able to meet that target, and now for this year, we've set $1.25 billion, right? We actually got this question earlier.
I mean, that's pretty aggressive growth in a market that isn't growing as fast, right? The reason we can do that is we have only about 3% market share in that campus market. It's much easier to grow at a faster clip when you're a share taker, right?
I think we feel very confident about becoming, frankly, one of the larger players in campus in the next year or so. Again, there's one big player in the room, and the name starts with a C and ends with an O. Then there's HPE/Juniper, which, especially with the merger now, is probably in the teens, percentage-wise, in market share.
Beyond that, it's been quite fragmented, and we feel like we've got a pretty good opportunity to kind of, you know, grow that business. It's an exciting opportunity. It's a different sales motion, it's a different pace than the data center.
In many ways, I think customers care about the same things, right? They want a network that just works, right? You're not having to reboot stuff. You're not having to throw people at it over the weekends. You know, things of that nature.
I wonder if I could just follow up on that. You mentioned HPE/Juniper, obviously a big merger there. Has that created an opportunity for Arista? How is Arista benefiting, or are there any risks from this acquisition that you see for your company?
It's a discontinuity in the market, and that always presents an opportunity. There have been customers who used Juniper and HP as alternate sources; now they don't have an alternate source, particularly around the Wi-Fi piece. Customers are thinking about that as they move forward. It's an opportunity to create a conversation, or have a conversation, but it hasn't fundamentally shifted our strategy. We're not reacting to that event. We're still focused on the bulk of the market share and what we're bringing to market, as opposed to changing our direction relative to that.
Do you see the combined scale and breadth that the new HPE has in networking as a potential threat to Arista? Potentially them coming in, gaining a bit more share in your areas or?
No. No. I mean, what we've seen is that it leaves us as the largest pure-play networking company that can focus on every need in an enterprise, and that's a pretty big TAM for us. As we said, particularly in the enterprise on the campus side, we have very low market share, so we can be a disruptor in that space. On the other side, there's some rationalization, and there are other pieces, storage and compute, that take focus away from networking. So, I think overall, we view it as an opportunity.
Mm-hmm.
Yeah, I mean, I think again, in this market, even if you go back to the data center market, size is not always an advantage, right? Frankly, it's one of the things Jayshree always keeps us honest about, right...? [crosstalk]
Can be a disadvantage.
Yeah. Even though we've grown, we've got to keep that nimbleness and that start-up kind of mentality, which is why we tend to operate very efficiently and have stayed focused on pure-play networking, right? Because when your attention is divided across storage and across compute and things like that, it's very easy to move resources to whatever is the flavor of the day.
Mm.
Customers, you know, don't have patience for that, right? I think in some sense it helps create opportunity. To John's point, it hasn't fundamentally changed the dynamic, and the largest player by far is still Cisco, right? I think the opportunity is still there for, frankly, both them and us to take share.
Right. Great, so I had a follow-up question on memory. A bit of a hot topic these days, so... [crosstalk]
We waited 30 minutes to get there... [crosstalk]
Yeah. Yeah, well, we had to ask about networking, too. Your CEO, Jayshree, talked about memory being the new gold and the pricing environment being horrendous, and there's lots of talk about exponentially higher pricing. My question really is: how is Arista positioned versus other OEMs here? Are you able to secure enough memory, or are you memory constrained right now?
We're not memory constrained. As she mentioned, pricing has gone up. We've made some targeted changes to our pricing to address that, and we've also changed our internal price structure to make sure we have continuity of supply.
Yeah. I mean, if you noticed, our purchase commitments have been going up, and this is not specific to memory. Memory is a relatively small part of a switching BOM, right? We're not a server manufacturer; for a server, memory is a bigger part of the BOM, so impact-wise, it's less of an impact for us. Also, I think one nuance that maybe the Street doesn't entirely appreciate: the more complex the switch, the more memory.
For instance, the chassis switches: there's more memory in those, but as a percentage of the BOM, it's actually a lower number, because there's a lot more in that switch than just memory. My point being, we're maybe not as impacted as a server maker.
Mm-hmm.
With that said, it's clearly an industry-wide constraint. I think our customers get it. To John's point, where we need to, we have the ability to do these targeted price increases on the more memory-intensive SKUs, and the customers get it. It's always a negotiation; this is not a we-say-it-and-they-just-do-it kind of thing. At the same time, because we've got that free cash flow and a lot of cash on the books, it's allowed us to make these investments to get in line, right?
Because right now, I think the challenge is, how do you get in line for that supply? Pricing aside, we don't believe we're supply constrained in terms of being able to meet the demand we anticipate, and our guidance is still valid.
Do you have rough guidance on what portion of the bill of materials is memory?
We haven't.
Yeah... [crosstalk]
We haven't disclosed that, and it varies by... [inaudible] [crosstalk]
Servers, you know, it's going from 10% up to, like, 30% now.
Just because of the... [crosstalk]
Yeah.
Just because. Yeah, the pricing... [inaudible] [crosstalk]
PCs also from 10% to 30%, roughly.
Yeah.
I mean, you're definitely in the single digits, I think.
Well, we haven't disclosed that... [crosstalk]
Disclosed that. Yeah... [crosstalk]
The percentages are going up just by the nature of the price increases.
Right.
I would also just add, in terms of being able to get in line, our relevance in the cloud market with top-tier customers helps; suppliers understand where we fit in memory.
Mm.
They're also cognizant of the golden screw: if there are a lot of GPUs but they can't connect them with switching, that wouldn't be great. I think suppliers are aware of that position, and obviously, we have a strong balance sheet that helps us make those multi-year kinds of commitments where needed, right?
Right. Your strategy is to pass on these price increases to customers and try to maintain your... [crosstalk]
Well, some of it. I mean, we've been able to absorb some of it. I think Jayshree said this, and Chantelle said this on the call, right? Jayshree actually signaled the memory thing at our last earnings call, right? I think everyone was a little bit surprised she brought it up. In many ways, it shows that she was almost prescient about where this was going. It has actually gotten worse...
Mm.
since then. We've been able to absorb it for a while, but at some point, we can't, right? The good news is our customers are some of the biggest buyers of memory; the hyperscalers need more memory than pretty much anyone else, so they get it. But no one likes a price increase, and we don't like a price increase, either. It's always a tricky thing. The question is really about our margin guidance.
We feel very comfortable with that 60% to 64% gross margin guidance for the year; we can fit right within that using a variety of mitigations, right? Whether it's absorbing some of it, obviously the purchase commitments, and then some targeted price increases where needed.
Great. Thanks for that. Before we hand over to the audience for questions, just one question I would ask you: what do you think Wall Street is missing in your company? Is there something you'd like to highlight that you think Wall Street is not paying enough attention to?
You know, when we have these conversations, even the discussion today, we think from an investor point of view about TAM and opportunity. We tend to segment things, right? Front-end network, back-end network, campus. From an Arista perspective, that kind of segmentation, especially in the enterprise, loses track of our value proposition.
Our value proposition, when we walk into an enterprise customer, is: "I can now deal with or execute any use case you have today, whether it's a distributed enterprise, wide-area network connections, connecting GPUs, or connecting together your virtual hybrid networking environment, and I can do all of this with a single operating system that is proven to be extraordinarily low in defects versus our competition, and simplify your operating model." We go to market with that value proposition to an enterprise customer.
They tell us something about what's coming up in the next 18 months in terms of their changes, right? Whether it's a management network in the data center, or now, with VeloCloud, interconnecting a wide-area network in a distributed fashion. I just wanted to make that point and connect these different network use cases and areas to what our sellers are really going to market with.
Right. Yeah, maybe from my perspective, what I would add is that networking is incredibly technical and complex, right? Maybe we need to do a better job of simplifying that for the audience. I think people sometimes assume, like we were saying earlier, that a port is a port is a port; it's just a pipe, how much complexity can there be? There is a lot, and because of that, I think people sometimes assume that networking is also a zero-sum game, right?
Take the question earlier about optical circuit switching. Just because optical circuit switching is seeing some growth doesn't mean that Ethernet-based switching is declining because of it, right?
It's not a zero-sum game. I think especially in this AI era, it is definitely a rising-tide kind of setup. You're going to see all kinds of companies do well, because of that experimentation, because of the pace of innovation, et cetera.
Great, appreciate that. Any questions from the audience? I don't currently see any in the app. Feel free to stick up your hand. We've got a microphone here. It can come to you. Any questions? It's all super clear. Oh, we have one right here.
Thanks for the presentation, guys. Question: of the things that are out of your control, what is the biggest thing on the medium-term horizon that is out of your control or constrains growth? Is it power? Is it memory? What are you doing about it? How are you thinking about it? What mitigations are you putting in place?
I think power is definitely up there, right? Customers want to build these as quickly as possible, and power is probably becoming the biggest constraint. To some extent, you're right, we don't control it. What we do control is whether we can get more power efficient and help customers get more power efficiency out of their clusters, right? The network is not a big consumer of power, but it is a consumer of power. Some of the stuff John talked about earlier, in terms of reducing the power utilization from optics by 50%, right, with our linear pluggable optics.
Again, we don't sell a ton of optics, by the way, which I think is another thing the industry misses, but we are definitely thought leaders in innovating when it comes to optics, right? Andy Bechtolsheim is considered essentially one of the smartest minds when it comes to optics, and he's been pushing this idea of linear pluggable optics, eventually co-packaged optics, et cetera. That's one thing.
Even if you look at the switches themselves, right? I love this comparison: take the same Broadcom chip, say a Tomahawk-based chip, put it in our switch, and put it in another branded vendor's switch. Our switches tend to be 15% to 30% more power efficient than the competitors'. That 15% to 30% that you can now save, you can apply to more GPUs.
Why is that? How can you save power?
Uh... [crosstalk]
How is that...? [crosstalk]
Because we just... you know, it's the kind of stuff that John touched on earlier, right? For instance, better signal integrity in the boxes, being able to design the hardware so that you're being as efficient as possible, getting rid of some power-hogging components because you've got that better signal integrity. Just a variety of things.
You can tune the output power to the characteristics of the channel and minimize it. More direct control over the SDK on the software side lets us do some optimizations.
Mm-hmm. Mm-hmm.
Thermal integration, with how we do heat sinks and cooling, to run the fans at lower power. Lots of little things that add up to some pretty...
Mm-hmm, mm-hmm.
substantial changes. Back to the memory question: it's not 100% within our direct control, but we definitely have an opportunity to influence it with some of the things we mentioned, you know, our engagement with the large cloud customers; our momentum in the market is influential in helping us get memory.
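To put that 15% to 30% figure in context, a back-of-envelope sketch; all wattages and counts are invented placeholders:

```python
# Assumed: 500 fixed switches at 2 kW each in an AI pod; savings illustrative.
switch_count, watts_per_switch = 500, 2_000
for efficiency_gain in (0.15, 0.30):
    saved_kw = switch_count * watts_per_switch * efficiency_gain / 1000
    gpus_powered = saved_kw * 1000 // 1200   # assumed ~1.2 kW per GPU
    print(f"{efficiency_gain:.0%} gain -> ~{saved_kw:.0f} kW, "
          f"roughly {gpus_powered:.0f} more GPUs' worth of power")
```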
Thanks for that. Any other questions? We have one question over here.
They're making you work the microphone today.
Over by the other end.
Thanks. There was a large AMD/Meta deal announced yesterday. Is that additive to your TAM? I don't know if you can comment on that specifically, but then more broadly, once a large compute deal like that gets signed, is someone like Arista consulted as that discussion is happening, or do you come in later in the process? How does the timeline work for when the networking piece comes in?
Yeah, I would say, look, we've been talking about the diversity of GPUs playing to Ethernet as the environment for running AI. We largely believed, even back in the InfiniBand days, that that would happen.
You know, AMD didn't even have a GPU in the market at that point. We've just seen transitions like this; people want a diversity of endpoints, and they want a consistent way to operate and connect those endpoints. I think it's validation that there's going to be a multi-GPU environment. It's an external event, and it's probably more long term.
You see us making longer-term supply agreements, and I think people are having to think through longer-term agreements in this environment to make sure that they have the capacity to grow. I think it's all consistent with what we have baked into the model.
I mean, I guess in general, the biggest advantage a diversity of suppliers in any area gives is that it eliminates the strategic lock that a single vendor might have had before, right? Like someone saying: "Hey, if you buy my GPUs, the network comes for, quote-unquote, free, or you get the GPUs sooner because you bought the network." I think it opens up the market, right?
I mean, the good news is all of these third parties, whether it's the branded accelerators or the homegrown ASICs that are getting built by some of the larger players, they're all Ethernet-based, right? That is a market that we can compete in.
We feel very good about winning in a best-of-breed kind of fight. It eliminates some of these, you know, go-to-market-type impediments that we would have run into otherwise. I think it's a good thing. It's directionally something that we've always planned for, because customers always tell us, "Look, we don't want to be locked into one vendor for anything," you guys included, right?
Like, even with us, right from the early days of the cloud, we've known we've had to coexist with other networking vendors. It's not at all surprising that these customers would not want to single-source something like compute, either.
All right. Any other final questions from the audience? We're almost out of time. We maybe have time for one more if there is any other question. I don't see any hands. Last chance. Daniel, you have any other question on your side?
Yeah, I mean, I think we can just go back to that high-level AI discussion, right? Jayshree has really talked about addressing a $100 billion TAM. As part of that, Arista has guided to AI networking revenue doubling, from $1.5 billion in 2025 to, I think the number is, $3.25 billion in 2026. I just want to get a sense of how much of that expansion is coming from hyperscalers spending more versus Arista being able to capture a bigger share of that opportunity, compared to competitors, than you might have anticipated before.
It's hard to parse; it's a combination of both. Definitely hyperscaler growth, and our architecture growing within that, is a key component. I think we've also been effective at picking up some of the new companies and people that are starting to emerge in AI. I don't know if you have anything... [inaudible] [crosstalk]
I mean, we've talked about potentially even one or two more 10% customers, right? I think we are definitely seeing a diversity of customers coming to the table. Also, we're very happy, you know, Meta was referenced earlier, right? We're very happy with our partnerships with our existing 10% customers. We don't name them anymore; they're customer A, customer B, but we still love them just the same, right? I think business is good on all fronts, and not to forget the enterprise, right?
I know this was more AI focused, but even within the enterprise, we've got customers, like we said earlier, that are starting to spend a relatively significant amount, not as large as the hyperscalers, but a significant amount of dollars associated with AI coming out of the enterprise, in non-traditional places that you might not expect.
Great. Well, we're right out of time. Thanks everyone for joining us. Thanks very much for joining us on the stage today.
Thank you, Mark.
Absolutely. Thanks, Mark.
Thanks.
Thanks, Daniel.