Good morning, everyone. Thank you again for joining UBS's Tech Conference here in Arizona. I think this is probably the premier event on our conference calendar. We're excited to have with us today Arista Networks. With me today, we have Chantelle Breithaupt, CFO, and from Investor Relations, Rudolph Araujo. We're going to talk for 30 minutes about topics that everyone is focused on given what's going on in the marketplace. We're going to start with your outlook for 2026.
Great.
Roughly six to eight weeks ago, you provided an outlook for 2026. I think in usual Arista fashion, the outlook was viewed as relatively conservative. Maybe let's start there and talk about how you see fiscal 2026 or calendar 2026 playing out. I think it helps investors understand your view and your perspective on how we get there. Like, I think in the past, you've talked about multiple vectors, multiple ways to get to the number. Maybe let's start with 2026, and then we can drill deeper into each of the different businesses.
Yeah, great. Thank you. Hi to everyone in the room. Good morning. Thank you, David, for having Rudy and me here today. We're very excited. You know, we're very excited about how we're finishing FY25, which will carry us into FY26: at the midpoint, looking at almost 27% growth, having started the year at 15-17%, with operating margin at 48%. Fantastic year. We're proud of that. For the first time ever, we talked about the following year as early as August, having our analyst day in September. Usually, it's in November, and we're already stating a goal of 20% growth going into next year. On the component parts of your question, you know, we're very excited about two goals that we've laid out within that target of 20% growth. The first one is our campus target.
You know, we're aiming to do $800 million in FY25, and we're setting a goal for ourselves of $1.25 billion next year. That's roughly 50% growth going into next year. Super excited, starting from 5% market share. Looking at the AI center target, combining the front end and back end, we're exiting FY25 at $1.5 billion plus. We'll see how we finish the year, and we're laying out a target of $2.75 billion for next year, which is anywhere between 60-80% growth for AI. Two very clear targets, setting the direction for the team and the company. Now, the component parts. You're saying, okay, those are really great growth rates, but you're giving us 20%. We will not guide the following year assuming everything hits 100%.
What we'll do is look at the year as we go into the February earnings call with two quarters of visibility, and we'll continue to guide the year as we see it play out. We're excited about all the pieces and components of the company.
I want to dig into each of those components separately, but obviously we have to start with AI and infrastructure. This year, front end plus AI-centric back end, $1.5 billion. You guided to $2.75 billion. I guess the question that we get from investors all the time is, when we think about CapEx trends from the large hyperscalers, your two largest and a third large player, and when we think about your inventory and your purchase commitments, how should we think about the correlation, or the relationship, from those macro data points to how you're thinking about 2026 from a visibility perspective, and how that underpins your forecast for next year?
Yeah, it's a great question. If you think about some of these fantastic CapEx numbers that you've seen in 2024, 2025, and now going into 2026, the duration from that announcement to Arista recognizing revenue has not really changed too much. It goes from the announcement, to us receiving a design win, to us building it out, and then eventually to us recognizing the revenue. That timeframe can be as much as 24 months. If you're hearing CapEx numbers now, you're talking, for us, 2026, 2027 type of revenue. Take that combined with our deferred revenue growth: we exited Q3 with deferred revenue growth of 86%. In deferred revenue, we have timeframes of 12 to 18, even 18 to 24 months. So you're looking at 2026, 2027.
I think that, understanding the duration and the complex environments we're working within, it takes a lot of power, cooling facilities, cabling, and employees to make some of these largest, you know, really phenomenal deployments happen. That is how you see it. You see it in the purchase commitments, you see it in the deferred revenue, you see it in our guide, you see it in our 2025 actuals. We are very excited by that robust environment that we're seeing.
To your point, can I walk that back a little bit?
Sure.
You talk about the CapEx that we're seeing now really starting to filter in in 2026, 2027, given the roadmap of 12 to 24 months. Is it a fair assumption that what we're going to see in 2026 are programs, design wins that will turn into revenue recognition for you in 2026, that have already been specced and designed, you've been designed into them, and you're just waiting on basically approval from large customers to rev rec them? Is that right?
For the AI component, what we're speaking about.
For the AI component.
Yeah, the larger AI deployments for sure. The hyperscalers, some of the NeoCloud, some of the larger ones. That's absolutely what's in that category.
How would you characterize, if we had this conversation a year ago, which I think we did, the visibility and the timeline from, you know, RFP to design win to rev rec? Has there been any dynamic change in that relationship, or is it too ad hoc, customer by customer, to have a nice, neat, tidy framework for how we should think about it?
Yeah, yeah, I think that our relationships remain strong. We're very grateful for them. We're very excited to have some of the deepest, funnest, geekiest tech conversations that there are. That has not changed. What has changed is the size of the clusters, the environments, the architectures, the differences in designs. All those things have changed because everyone's trying to figure out the most optimal way to squeeze every possible bit of power savings out of what we're putting together. What's changed is the complexity. What's changed is the speed and cadence. What's changed are the power constraints. What's changed is the amount of optics in some of the two and three tier designs. All those things are more complicated, but what hasn't changed is Arista being a trusted partner for some of these largest projects. Now it's just time to make it happen.
You know, we have a great example of one customer that needed a thousand people to come in to put in the optics, for example. Like these things take time, but we're very excited.
Since you brought up that the clusters are getting larger, you know, if we go back again six, nine months, we talked about four to five large customers building out larger and larger clusters, upwards of 100,000 GPUs or accelerators. Where are we today with these large customers that you've talked about pretty consistently over the last two to three quarters? As we go into 2026, how much of the incremental growth that we're going to see in AI-centric revenue is coming from these large customers versus the tail that you have? I think you've talked about a very large tail of 30 to 40 other customers. How should investors think about the weighting of the contribution next year from these deals?
Yeah, I think it's going to be a mix for sure. Of the four pilots that we speak about, they're all on track as we expect, from a timing and a how-it's-going perspective. Three of the four are anticipated to be within this year. One might happen, as Jayshree likes to say, on December 32nd, but very close. The fourth one is a customer who's intentionally going from InfiniBand to Ethernet, and so we're very happy to be on that journey with them. All is as expected. You'll have some of the deferred coming out, which is some of the larger customers. You'll have some new projects coming in. And this long tail can include some of the great NeoCloud conversations.
You know, we're very excited when the NeoClouds come to us, when it's an open, best of breed architecture conversation, to say: we've seen the great partnerships you have with hyperscalers. We'd love to have that. We need to differentiate ourselves within the NeoCloud and sovereign AI community. Help us differentiate based on our network and our design. And Rudy, you can probably speak to that.
Yeah, I mean, I think with the NeoClouds, individually they're probably not spending at the same levels as the hyperscalers, right? But they all add up. I think that long tail, as you called it, is a pretty thick tail in that sense. Then to Chantelle's point, what they're realizing is the network can actually be a pretty big differentiator for them in their offerings to their customers, right? Because at the end of the day, if you're just another GPU-as-a-service vendor, what's the value prop that you're delivering to your customers? So they want to have these deep conversations about how we can draw every ounce out of the network, in terms of shrinking job completion times, for instance, or time to first job, right?
Those are the kinds of conversations we end up having with them.
Presumably on the hyperscaler side, these are large training environments for the most part; it's very early days in terms of inference. Is that the same characterization you would make for NeoClouds, that they're training models? Or are you starting to see maybe the early stages of, hey, we actually have to think about what inference is going to look like? You know, Arista, how can you help us on job completion time, et cetera?
Yeah, I mean, some of them I'd say are really optimizing for inference from the get-go, right? Because they realize that maybe that's the opportunity. The thing that's unique about inference is you want to be as close as possible to the person that's asking the question, so to speak, right? A lot of these NeoClouds, and especially sometimes some of the sovereign clouds, are trying to be that last mile. They are actually building from the ground up to optimize for inference rather than for training, because they figure maybe training will happen at the hyperscalers, right? It's a variety of use cases, David, but I wouldn't say inferencing is lagging behind.
Is that a different type of conversation than historically? If you think about optimizing for inference from day one versus, let's say, pre-AI, obviously front end, traditional workloads, I think there was a fairly, not standard, but well understood roadmap in terms of what customers were trying to solve for. Does inference require you to bring to bear different technologies and skill sets? Or is it leveraging what you've built over the last decade plus and extrapolating it into this optimization for inference?
Yeah.
Sorry, go ahead.
I'd start with the framework we've been trying to use, because we're still so nascent. I don't think we have an exact model; I think it's fluid, and I think it's changing. The one thing we try to look at is, for every dollar of backend spend, not that backend and frontend are strictly training and inference, but let's use that as a proxy, we see, you know, $0.30 to $2 spent on the frontend inference side. The reason being, it depends on their policy of who needs to be in the office: how much does AI need to be at the edge? Where does it actually have to push the information to? What's the data mesh architecture that they're using?
If the customer had just recently refreshed their traditional frontend data center, let's now call it maybe an AI center, they might not need as much. But if they're on a refresh cycle and they're trying to pursue their AI agenda, that could be one driver. We feel there's probably a much stronger inference opportunity over time on the frontend, where we're very well known as a branded vendor. We're super excited about both. I think you heard Ken say on the earnings call, outside of China, we're probably the leading, if not one of the leading, vendors to have frontend and backend optionality in the portfolio. We're super excited by the general opportunity.
To that point, Chantelle, is that a competitive advantage? Ken's point on the earnings call is you can bring to bear the entire solution, frontend, backend. You know, and obviously there are some new vendors who are primarily focused on backend, who, on a legacy basis, missed this huge cloud demand in the frontend. Does that give you sort of a competitive moat or an advantage when you're having these conversations, or is it just still a best of breed for the backend? Like how much of the bundling dynamic actually comes to bear when you're thinking about a relationship with a customer or a new customer?
The great thing is, the backend network is a net new TAM. You know, if you think about the TAM, we've gone from $60 billion to $70 billion to $105 billion over two years. A large part of that is this backend AI coming in. I think it gives us an advantage, and more than one vendor can grow. We feel we have an advantage when it comes to portfolio options, when it comes to being agnostic to the accelerator, the chip, LPO, CPO. A lot of optionality, a lot of great choice for the customers, if choice is allowed and it's a best of breed decision. We're not sure what's offered under a bundled scenario, because we wouldn't see it. We do know that when we are in front of the customer doing a proof of concept, we have a very high win rate and chance of winning.
Got it. Maybe just one other question on how we're thinking about 2026. When we look at 2026, how important is it for your roadmap to have new silicon, right? There's a lot of discussion about, you know, 1.6 coming down the pipeline. You're very closely tied to Broadcom; obviously, it's been a great relationship, a great partnership. I think on the last call, there was a little bit of concern about maybe some supply chain variability, you know, impacting your ability to ship and hit targets. How do we think about that potential dynamic playing out in 2026?
Yeah, I want to just clarify: there was no intention to say there's anything at risk for Q4, FY25, or FY26. Just to be sure, since I have the audience here, there's no intention there. The intention was to say that it would be naive, we'd be remiss as an industry, not to acknowledge, hey, there's tightness in the system. There's capacity tightness. Some people talk about memory; that's a small part of our business. We leaned into some purchase commitments. We wanted to make sure that in 2026, and going into 2027 with some of these new products coming in, we were going to be well positioned, and we feel that we are.
That's part of our purchase commitment increase, along with just pure demand increase, because we have one-year lead times and because we deal with a very well-executed machine in Broadcom providing us the chips. You know, we don't pre-announce. You can be sure that if there's a new technology, we're working on it, but we don't pre-announce until it's ready.
I was going to come back to financials later, but since you brought up purchase commitments and preparing. At your investor day a number of months ago, you gave a gross margin guide. Correct me if you feel differently, but I think investors interpreted some of it as an attempt to future-proof your gross margin from this dynamic that you referenced. Is there, in your mind, a buffer in your gross margin outlook for 2026 that takes into consideration the challenging supply chain dynamics, whether it's memory or other components? How should we think about your initial outlook for 2026?
Yeah, so the guide for FY26, just to remind everyone, is 62-64%, which we think is still a great gross margin range. That is purely based on what we anticipate the mix of end customers to be. We have, you know, basically cloud and enterprise, if you want to make those the two big buckets for the end customer segment. That just means if it's more of a 62, it's more cloud-heavy. So that's what's in there, to your point, David. We feel that our supply chain team has done a fantastic job getting ahead of any potential price increases with multi-year agreements and with dual-source, multi-vendor sourcing. From that perspective, that's not a price component pressure.
Customer mix. Going along these lines, if I disaggregate your 2026 outlook, I think some of the feedback that we've gotten, and you probably have too: if I look at your AI-centric targets and what you're thinking about for campus, going from $800 million in 2025 to $1.25 billion in 2026, it does not imply a lot of growth in the more traditional Arista business, where you've been historically a share taker.
Right.
How should investors think about that? Because if that business does grow faster, that's traditional enterprise, which comes at a higher gross margin, and that would skew us towards the higher end of the gross margin range.
Right. You are spot on. Nothing has changed in my style or Jayshree's style. We would not give a guide that assumed 100% of everything went very well. That has not changed. We will start the year with our 20% growth, and we will continue.
Right, and the margin will fall out accordingly.
We will continue to update, based on mix, what the margin profile is. You know, we are being consistent through this very frothy timeframe. I think that style works even better nowadays, given what is going on around us: make sure we are pretty clear, give a number we know we can hit, and then continue to build from there.
Sticking on enterprise for a second, enterprise AI presumably is virtually nothing in your numbers for 2026. How are you thinking about that roadmap from your customers' perspective, given how successful you've been in traditional enterprise data center, taking a lot of market share over the last decade? What do you see from your customers from a roadmap perspective? Do you have any sense of the problems they're trying to solve and how they're thinking about this? Is it a 2027 dynamic, given visibility might take 12 months for them to get there, or is it a little bit further out from your perspective?
Yeah. Go ahead.
There is some enterprise. You know, we talked about, like you said, 30-40 other customers, and that's not all NeoClouds, right? There's some enterprise in there. It is, to your point, smaller. I think enterprise AI is playing out in two ways, right? One is that inferencing is a bigger deal for them. Most of them seem to be inclined to use the hyperscalers for training and then do inferencing either on-prem or, you know, near-prem, so to speak. The other thing is it's influencing their campus decisions too, right? Because that last mile ultimately is what affects the end user experience. Wi-Fi 7 upgrades, for instance, are happening maybe faster than you might have seen with previous generations. Wi-Fi 7 requires more power, which means Power over Ethernet switch upgrades.
It is actually playing out in two different ways, I guess, you know, something to keep in mind.
Okay, that's a good segue to campus. Obviously in campus we're going through a bit of an upgrade cycle; we're seeing it from the biggest player, which has a big refresh. You've talked about taking your campus business from $800 million, as I mentioned earlier, to $1.25 billion. Correct me if I'm wrong, but when we've done the math, it looks like your campus business should be margin accretive to the portfolio, gross margin accretive. Is that accurate? Is that fair? I know you've given a lot of investment opportunity to Todd and his team to grow this business. How should investors think about that business and the priority of campus within the Arista portfolio?
Yeah. We are super excited. Todd is here to help, and we're really, really happy that he is here. You mentioned the goal, $800 million to $1.25 billion. That comes a couple of different ways: new logo acquisition, and land and expand. On new logo acquisition, Todd and the team are definitely talking about, okay, what is our plan? You have heard other people talk about great refresh cycles coming over the next two years, which we absolutely see as an opportunity for us to go and work with those customers. I will get to the margin point in just a second. On land and expand, we are super excited that, you know, we have the portfolio and brand recognition such that now we are winning campus first and then landing and expanding into the data center. That is fantastic in our opinion. The margin really depends on the end customer mix.
We do have campus in hyperscalers, and we have campus in enterprise. Enterprise generally is more margin accretive, so the more it's enterprise campus, yes, it would be accretive. Now, that's $1.25 billion against $10.5 billion, so it builds from there. This is slow and steady. Again, campus for us is our slow and steady, rinse-and-repeat motion, like data center over time, whereas the hyperscalers and the NeoClouds move in bigger steps.
Why do you think you're winning with campus first in some cases, when historically it had been, hey, we lead with the enterprise data center product, people understand the technology we bring to bear, and it's extrapolated to the campus? Not to say that you're not still winning that way, but it sounds like you're winning more often now with campus first.
It's not necessarily more often, but to give you an example, we do sometimes win campus first. There are a couple of reasons. One is we've been discussing with the customer, trying to get in there, and from their perspective, now they're coming up on a refresh cycle. Or we see tailwinds from competitor actions: either customers are confused by the roadmap on some of the mergers and acquisitions, or they're maybe not clear on the intentions of a company that's also offering, you know, networking, with all the other distractions. I think we're just getting in now that we're seen as a true player in the field. We're only 11 years post-IPO. You know, first we had to declare cloud, data center, high performance, and now we're saying campus.
When we declare a goal and a number, it is absolutely an intention. AI centers and campus have our attention.
Got it.
All the other stuff does too, but we're intentional.
You mentioned competitor uncertainty. Obviously, we're a year and a half to two years into a large competitor announcement, though the deal hasn't quite closed. I would imagine it's hard to tag a specific deal as won because of uncertainty in the market, but are you seeing, you know, more RFPs? How should we think about your ability to win given that degree of technological uncertainty between those two competitors that are now one? Could it be meaningful over the long term while they work to integrate two different technology stacks?
It is meaningful, and we're absolutely hearing, we would be happy to have you come in now, Arista, because we're not sure of the incumbent's go-forward strategy. Those are absolutely calls that we're getting, and we're happy to take them. We get in with a proof of concept, and then we have a high win rate.
Does Todd have the marching orders to go out there and find great channel partners? I know that's part of the expense, adding more channel partners. Obviously, you're never going to be Cisco from a channel partner perspective, but what are you looking for from a partner per se? What are some of the metrics Todd is focused on to build that footprint for your business?
Yeah, so speaking on behalf of Todd, if he were here, he'd say he's focused on international as he's coming in, and he has a great global background from his prior roles. He's focused on this campus refresh cycle. The distinction we're working through is, we're not going to do 80% through partners; that's not our goal, but we will do a bit more. We're not looking to go down to SMB; we're looking to stay at the top tier of enterprise. We're balancing, you know, partners helping us with fulfillment and partners helping us with leading and landing deals. That balance is what we're working through. We're very much direct-led with our sales team, which does a great job. It's more about expanding the footprint, you know, the number of customers we can talk to.
To the point I think you made earlier about the confusion with customers, there's a similar confusion in the partner community too, right? Because partner programs have been changing, et cetera. What we're trying to do is just be a good partner to them. Ultimately, if you deliver great technology, it makes them look good, right? It gives them services opportunities, things of that nature. That's the overall strategy.
Got it. Okay, I wanted to hit campus and get that out of the way. Coming back to data center, there are a lot of moving pieces, whether it's scale up, scale out, scale across, maybe with a little different nuance than DCI historically. There were a lot of announcements coming out of OCP about ESUN. How are you thinking about that within the context of your longer-term view? I think I might have asked Jayshree this on the call, but when you think about scale up versus scale out and scale across, obviously scale up has not been an opportunity for Arista.
Right.
How are you thinking about, I don't know if you can rank order them, but how should investors think about the opportunity there, for a market that had been effectively nonexistent for Arista over its 10-plus years of existence?
The scale up market.
Scale up, start with scale up.
Yeah. So we're super excited. You know, for us, this is a similar playbook to two years ago, talking about InfiniBand moving to Ethernet. Now we're talking about another technology moving to Ethernet. We go through a similar cycle of an ecosystem, a community, coming together to say, let's have open standards, let's get it to Ethernet. That's the FY26 timeframe Jayshree was articulating: it's going to take about a year or so for this ecosystem to come together and the standards to be ready. We're absolutely excited coming into 2027. You know, we have some great thought leaders in the company who have views on how this could be a great opportunity. It's not in our $105 billion TAM, so it'd be accretive.
We're not sure how big it is; we're sizing it up, so we don't have an answer just yet because it is so new, but we're working on it. As we come out and work with the community, we'll be a player, and we're super excited to participate.
I mean, the other thing about scale up that adds some degree of variability is that it'll manifest itself in a rack-style form factor, right? So that's the other part that needs to be figured out too.
Yeah.
Obviously you're coming at scale up fresh, and, you know, a large GPU provider is bundling and coming into networking from a different angle. How do you think about your position in scale out and scale across as that GPU provider looks to bundle? I'm just trying to think through this, because we get this question a lot.
Sure.
You know, the competitive dynamics, obviously they're selling NICs, they're selling switches, they've got software, full stack solutions. Just want to get a sense for how you're thinking about the competitive landscape.
Yeah. I think there's always going to be more than one player. If there are two, that's not abnormal; two's fine. There needs to be competition in the market, and the market's growing such that everyone can grow. Where do we win? You know, we win when best of breed choice is an option. We win when the customer's looking to be agnostic to the chip, the accelerator, CPO, LPO, with a broad portfolio that allows designs of two-tier, three-tier, scheduled and non-scheduled fabrics. With the breadth of the hardware, the EOS software, and the agnosticism to all the components I mentioned, we generally find customers want best of breed choices and don't want to be locked into a proprietary head-to-toe design.
Now, sometimes commercial factors will cause them to make different choices, and perhaps once those arrangements are done, they'll come back to a best of breed option. Even that aside, we feel we have a ton of runway and TAM to go get that will, you know, give you growth that we find exciting.
Does your Blue Box solution play into the strategy? How do we think about Blue Box in the portfolio of what Arista brings to bear, from best of breed to maybe a little bit of a different offering with Blue Box? Is it a response to White Box, or a response to bundling? How do we think about how that plays?
Yeah, it's a great question, and a good chance to position it. Blue Box has been in our vernacular in the sense of us using it with hyperscaler customers that run their own NOS, their own FBOSS or SONIC. We've been doing that for a while in the scale-out and scale-across environments; it's in our run rate. Scale up is also a Blue Box opportunity, and I'll let Rudy talk in a second about why, technically, that's more operational. Blue Box is not a response to White Box; they're two different areas. Blue Box actually came to be when our larger customers wanted dual-source optionality, so they could enjoy the hardware of Arista underneath, with all the NetDI operating firmware, but be able to put their own software on top.
They could say, I have an Arista network box here with EOS, and I have one here with SONIC. It gives me dual sourcing; I can move across as I need to. That was the use case for Blue Box, different from the White Box market. Did you want to add anything?
Yeah, I think on the scale up side, maybe the opportunity for Blue Box is that scale up networks are less complex in a sense, right? There's probably a lower software load there. It gives customers the ability to have a very thin software layer on top of a highly reliable piece of hardware. I think the important thing to make sure people understand is that the Blue Box hardware is not any different from the standard hardware you would buy. The hardware differentiation is still there. The NetDI differentiation is still there. It's only a question of whether you have your own OS or not.
Do you think that ultimately there's an opportunity to run EOS in scale up? Or, I would imagine, to your point, you probably don't need it; it's just going to run SONIC or something else.
Yeah, or maybe even a scaled-down version.
Even a lower version.
Yeah, I mean.
Like a degraded version of SONIC.
A scaled down version of SONIC.
We just create a new term scale down.
There we go.
We are super excited about the pieces that we can contribute from the NetDI hardware side. That is a good use case for Blue Box.
Got it. I guess what I'd like to do in the minute remaining is give you an opportunity to touch on things that are important to the story that haven't come across, either on the earnings call or here today in any of your meetings. I'll give you an opportunity to kind of.
Yeah, thank you. Close it out. So, as a company, we've never been more excited. I've been here two years, but you heard it from Jayshree, who's been here from the beginning: she just sees a great roadmap ahead for Arista. You know, we have a style which you, as our community, know well. We've already guided 20% for next year, ending this year at close to 27% growth, with, you know, 43-45% operating margin next year versus 48% this year, depending on the investments that we make. We're super excited. We feel the market has enough room: $2.3 trillion spent on AI over the next five years. We appreciate your faith in us as we take advantage of this AI opportunity.
Great. Thank you, Chantelle. Thank you, Rudolph. Thank you, everyone.
Thank you.
Thank you. Have a great day.
Have a great day, everyone.