Hello, and welcome to Needham's 21st Annual Technology, Media, and Consumer Conference. I'm Ryan Koontz. I'm real excited to host Arista Networks here today, joined by CFO Chantelle Breithaupt and John McCool, SVP, as well as Rudy Araujo, VP of IR. Welcome, guys. How are you doing?
Great. Thank you.
Doing well.
Thanks for having us on.
Excellent. Super. Well, Chantelle, let's start with the excellent quarter you guys had. Can you maybe walk us through how you felt about the results, obviously beating expectations, and what some of the key demand drivers in your first quarter were?
Yeah. Thank you. We were very excited as a company to report our results; I'm very proud of them as a team. We're very pleased with the 35% revenue growth. If you take that, Ryan, together with the change in deferred revenue, it's actually 54% forward-looking growth.
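The growth math Chantelle describes can be sketched with a quick calculation. The dollar figures below are illustrative placeholders, not Arista's actual amounts; only the 35% and 54% rates come from the conversation.

```python
# Illustrative sketch of "growth including change in deferred revenue".
# All dollar figures are hypothetical.

def yoy_growth(current: float, prior: float) -> float:
    """Year-over-year growth rate."""
    return current / prior - 1.0

prior_revenue = 100.0       # prior-year period revenue ($, made up)
revenue = 135.0             # recognized revenue, up 35%
deferred_increase = 19.0    # year-over-year change in deferred ($, made up)

# Recognized revenue alone grows 35%...
print(f"{yoy_growth(revenue, prior_revenue):.0%}")                      # 35%
# ...but adding the increase in deferred lifts combined growth to 54%.
print(f"{yoy_growth(revenue + deferred_increase, prior_revenue):.0%}")  # 54%
```

The point is simply that revenue recognized in the quarter understates total demand when a growing slice of bookings sits in deferred.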
Yeah.
Very excited by that, right? We raised FY 2026 again, to the $11.5 billion. We've raised the year twice in the last two quarters, which is great. I encourage you all on the call to go look at our latest earnings deck, where we also raised our three-year CAGR outlook to 20%+.
Yeah.
A lot of great news, a lot of great momentum. Super excited. You know, as Jayshree said, we've never seen more demand than we see at this time. We're really encouraged. We're encouraged by the progress we see in AI and the adoption of our 800 gig Etherlink portfolio.
Yeah.
We're proud that we're winning new logos across all our customer sectors. You know, the proofs of concept for campus are working really well. We have a high win rate where we can get in there and look at the campus, where we have only 3% market share today.
Yeah.
EPS growth, I think, was stellar. To sum it all up, we're super excited and feeling the momentum.
Excellent. Within the quarter, you know, hyperscalers, your big customers, obviously a big contributor there, some of your NeoClouds, I'm sure, can you maybe walk us through your thoughts about how your different customer segments contributed in Q1?
Yeah, I think, as we said, it was a really great quarter across the board, in the sense that we saw some really strong performance from the AI specialty provider segment, which of course is across-
Yeah
the AI portfolio.
Awesome.
I think that's continued to do well. From that perspective, I think all of those segments are showing really great signs for the year.
Super. Yeah, you mentioned the nice step up in deferred. I think it was up, what, 100%?
Yep.
Year-over-year is just incredible. I mean, can you remind us again, what's going into deferred and what's driving that acceleration here for investors that they should be paying attention to?
I think it's really important for investors to understand, and to go look at the earnings deck, because we have some analysis and trending that I think helps position where deferred is for Arista over time. Just to remind you what goes into deferred: we talk about the $3.5 billion of AI revenue in the P&L this year, and most of what's sitting in deferred is AI use case and product. You have to look at both combined to see the AI demand we're seeing across the board. It's not that we just put it in once and it ages over 2 years; things come into deferred and out into the P&L, so it's rotating, it's refreshing.
From that perspective, I think it's important to look at the earnings deck pages we have. It's a sign of growth and momentum, in the sense of the AI demand we're seeing.
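As a toy illustration of that rotation, a deferred balance can keep growing even while amounts are steadily recognized into the P&L. All figures here are made up for illustration, not Arista's actual rollforward.

```python
# Toy deferred-revenue rollforward (all figures hypothetical, in $B).
# New deferrals flow in; recognized revenue flows out to the P&L,
# so the balance keeps "rotating" and "refreshing".

balance = 2.0
quarters = [
    (1.2, 0.7),   # (new deferrals in, recognized to P&L out)
    (1.5, 0.9),
    (1.8, 1.1),
]

for inflow, recognized in quarters:
    balance += inflow - recognized
    print(f"deferred balance: ${balance:.1f}B")

# The balance grows ($2.5B -> $3.1B -> $3.8B) even though revenue
# is recognized out of it every quarter.
```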
It's clear. Incredible demand environment right now for yourselves and a lot of your peers and ecosystem partners, I'm sure. With regard to your full year, raising that to 28%, what were some of the puts and takes there? Coming out of the quarter, I think some investors were maybe a little disappointed that you guys didn't pull the second half up as much. How would you characterize your constraints on raising the full year a little more, relative to your supply environment?
Yeah. I don't think the year guide was necessarily tied to the supply environment for this year. I think John McCool will take us through the supply environment a little bit during your questions; it's a when, not an if, to us. I think you have to take it the inverse way. Arista raising twice: we never assume 100% of everything's gonna work in our guidance philosophy. I would take it as, hey, Chantelle Breithaupt and Jayshree Ullal raised twice in the last 2 quarters, we're only in May, so let's keep watching the future quarters; if everything can fall into place, we'll see where we can get to.
I think that's how I would take it versus always trying to meet what people expect from us from a guidance perspective. That's not new.
Totally get it.
Yeah.
Trying to be conservative as you guys are, and, you know, beat and raise. We like that. Excellent.
If I had to choose a philosophy, I would choose that one.
For sure. John, how would you characterize the supply environment and some of the puts and takes and, you know, what you guys have been successful in doing? You obviously tie up a lot of capital in terms of commitments with your strategic big partners like Broadcom. Kinda walk us through your thoughts on some of the things that are going well and some of the challenges you're feeling right now in supply.
Sure. Let me start with the going well. If you think about Chantelle's comments on deferred, the actual supply chain team had to deliver both the deferred revenue plus the revenue that was recognized.
54% growth year-over-year.
Yeah.
We have a lot of busy folks, and our suppliers are very eager to support us to fill this demand. That said, the entire supply chain environment with the AI demand is really pressing capacity at the fabs.
Sure.
First, we saw that with memory and how that played through; there are a lot more fabs for memory. On silicon, there's some concentration in those fab cycles. That's what we're up against: an environment that's really booming across the board, and we really have to drive that, turn it into finished goods, and ship. We're really comfortable with our execution so far, and we'll continue to drive it.
In regard to your updated guidance, do you feel like you have everything you need in place in terms of supply chain to execute on the guide and hopefully chase some upside there?
We're very comfortable with the guide on both margin and revenue for sure.
Excellent. Can you maybe walk us through how you use purchase commitments to really lock in that supply? Your purchase commitments were up pretty strongly.
Yeah, I mean.
Is that around memory?
Yeah, it's across the board, in the areas where there are constraints, specifically as we look ahead into 2027 and what we're seeing from the demand environment. It also helps us in the short term get much tighter with those suppliers in terms of our demand today.
Yeah. The only thing I would add, John, to your comments is that that raise to $8.9 billion is across everything. The majority, I would say, is chip related, and it's a 52-week lead time, right, John?
Yeah.
We're leaning, to your point, into '27, getting ahead of making sure that's sorted for what we need to deliver in '27.
TSMC guys are gonna be pretty busy, I think.
I think they've been busy for a while.
Yeah. Great. Well, maybe let's shift into customers and what's happening in the AI domain. It's been a huge success story for you. I remember back a couple of years, a lot of questions about what Arista's role in the AI back end would be. You're certainly proving that out and knocking down some great wins. I think on the quarter you announced that your fourth AI customer had moved their back end from InfiniBand over to Ethernet. What brought that customer over, and how do you feel about that going forward across all of your AI accounts?
It was the same trend we've seen on other ones. That particular customer was kind of early on.
Yeah.
Had made some investments in InfiniBand and, you know, the full NVIDIA stack. The compelling reason to move to Ethernet is that it's multi-vendor supported and agnostic to the endpoint, so NVIDIA GPU, AMD, or something you build on your own inference engine.
Sure.
Same capability as your front-end network and your back-end network, so there's operational simplicity around it. I would just add, you're seeing the same dynamic around NVLink and ASON starting to come up, the industry rallying around that, and while that's not a near-term thing in terms of revenue, I think it also has bearing on some of the architecture going forward.
Super interesting. Wow. Maybe, Chantelle, talking about price a little bit: I think you have some price changes to offset the memory costs. How is that flowing through the income statement?
Yeah, back to John's point, I think the supply chain team's done great work over the last few quarters, and you've seen various companies talk about memory in different quarters over the last 3 to 4. Our approach and strategy was to understand the market, understand where we thought the pricing was gonna go and the cost to us, and, the adage, measure twice, cut once. We wanted to be sure we had a pretty good line of sight and visibility into what the cost was gonna be for us for the next few quarters. With that, Ryan, we did a price increase, which customers never love, but at least it was a well-known topic.
Our goal was to be margin neutral over the time of the backlog converting to new orders. I would say that kept us in our 62%-64% gross margin guide, which we've been talking about for, I think, 3 or 4 quarters now. I'm super excited we've been able to hold it through this. We've earned the value and trust of our customers to have those commercial conversations, so we're pleased with the result, but we never like to deliver a price increase to the customer. I think margin neutral.
Sure.
It is standard, and that's allowed us to keep our guide. Yeah.
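A margin-neutral price increase reduces to a small calculation: raise the price just enough that the original gross-margin percentage is preserved after the cost increase. The unit price, cost, and 20% memory-driven cost bump below are hypothetical; only the 62%-64% guide range comes from the conversation.

```python
# Hedged sketch with hypothetical numbers: the price that keeps gross
# margin unchanged when an input cost (e.g. memory) rises.

def margin_neutral_price(old_price: float, old_cost: float, new_cost: float) -> float:
    """Price preserving the original gross-margin percentage,
    where margin = 1 - cost/price."""
    old_margin = 1.0 - old_cost / old_price
    return new_cost / (1.0 - old_margin)

# Hypothetical unit: $100 list price, $37 cost -> 63% gross margin,
# the midpoint of the 62%-64% guide range mentioned above.
price, cost = 100.0, 37.0
new_cost = cost * 1.20          # assume memory drives a 20% cost increase

new_price = margin_neutral_price(price, cost, new_cost)
print(f"new price: ${new_price:.2f}")             # $120.00
print(f"margin: {1 - new_cost / new_price:.0%}")  # still 63%
```

Because margin is a ratio of cost to price, holding it constant means the price rises by the same percentage as the cost.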
Sure. I mean, everyone in the hardware industry is feeling that one pretty hard. Are you guys exposed to both DDR4 and DDR5 across the portfolio?
Yeah. Not the same equal weighting, but yes to both technically. Yeah.
Great. As you think about your AI revenue bogey here at $3.5 billion, which is raised from $3.25 billion, does that count just your four large customers and not others you've added, the new logos?
Yeah. The way I like to look at the $3.5 billion is as a percentage of $11.5 billion: we're talking 30% of our revenue. Not too many years into AI, that's a pretty good win rate. It covers more than the 4 customers-
Okay.
-you heard a couple of quarters back that you could technically count about 100 customers across NeoClouds and some large enterprises. The 4 we talked about, the 4 initial pilots, were just to take you along the journey with us as investors and-
I see.
-even the community, how it was going with the larger ones. It covers-
Yeah.
-more AI, which we think is the predominant use case. We're proud to work with some of the largest customers, but also some of the smaller ones-
Yeah.
that are starting to get into their inference conversation.
I'd like to explore that, if we could. Maybe discuss how the NeoCloud procurement model might differ from a hyperscaler-type model, and then maybe talk about inference. Could you guys dive into that?
Sure, yeah. I can probably jump in on that one, Ryan. I think the NeoClouds have evolved a little bit, right? When they started, a lot of it was looking for a rinse-and-repeat, reference-architecture kind of model. Frankly, they needed allocations of compute-
Right.
-which, you know, bound them on networking decisions, right? I think what they realized is they were all starting to look the same, right? How do you differentiate in a crowded NeoCloud market, in terms of what you bring to the table?
I think what they realized is the network is actually pretty critical to that differentiation, right? Because the network can mean the difference between job completion times taking longer or shorter. It can mean the difference between power utilization being higher or lower. It could be the difference in time to first token when you start talking about inferencing, et cetera. What we're starting to find now is that they're realizing they can't just be bound into these agreements based on what's good for compute, because having the best and fastest compute is no use if you've got a subpar network, right?
Yeah.
That is opening up the market for sure. I think the other thing is they are definitely not buying at the same levels as the largest hyperscaler, right? That's, you know, perhaps stating the obvious. They are also the folks that need a little bit more hand-holding, right?
Sure.
-they don't necessarily have the experience of having built these large cloud networks, et cetera. We're also seeing, you know, a higher attach rate with CloudVision, for instance.
Oh, yeah.
Our full capability that gives, you know, AIOps kind of visibility into it. That's a little bit of an interesting dynamic there relative to the hyperscalers.
Super interesting. Maybe to follow that up, you guys have talked about both, you know, scale across as well as scale out, kind of that dynamic. Can you maybe unpack that for us a little bit and, you know, where you've come from and how you see the market evolving in the back end there for you in terms of opportunity and what's driving the revenue?
I think we saw a similar dynamic at the beginning of cloud. Once we were able to connect all the servers in the data center on a 2-tier spine-leaf architecture, the question was: how do you expand when you don't have the physical capacity in this data center for more CPUs? Now it's GPUs. Scale across is becoming kind of the next generation of the leaf-spine architecture, with a universal spine to interconnect. We see it in all different use cases, but specifically where you want one logical AI cluster across multiple physical instances.
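The physical-capacity ceiling John describes can be sketched with back-of-the-envelope leaf-spine math. The 64-port radix and non-blocking assumption below are hypothetical illustrations, not a specific Arista product or design.

```python
# Rough host capacity of a 2-tier leaf-spine fabric, assuming a
# non-blocking design: half of each leaf's ports face hosts, half
# are uplinks, one to each spine. Radix is a made-up example.

def two_tier_hosts(radix: int) -> int:
    """Max hosts in a non-blocking 2-tier leaf-spine fabric.

    Each leaf uses radix/2 ports for hosts and radix/2 uplinks;
    each spine port connects one leaf, so at most `radix` leaves.
    """
    downlinks_per_leaf = radix // 2
    max_leaves = radix
    return max_leaves * downlinks_per_leaf

print(two_tier_hosts(64))   # 2048 hosts with a hypothetical 64-port switch
```

Once a cluster outgrows that bound (or the building's power and cooling), you either add a tier or, as described above, interconnect multiple physical fabrics under a universal spine so they behave as one logical cluster.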
Scale across sounds like it's in the pretty early stages of build out. Is that right?
I would say so, relative to scale out. I know, Rudy, you're gonna jump in; this is a very exciting topic for us. The only thing I would say is that it's fairly new, but you can see how it might be required, because a lot of this is when you run into these federated space, land, access-to-power, and cooling scenarios, and I think that's part of what's driving that need. Rudy, I know you wanted to say something.
Yeah. No, I was gonna say exactly that. The other side of that coin is inferencing, which you brought up, Ryan. Inferencing is increasingly about how you get as close to the edge as possible, right? If you're sitting in New York right now and you've got an agent running on your device and you're trying to interact with a model, you don't wanna be waiting seconds, right?
Yeah.
You want it to be far more real time. That's the other thing that's going to drive scale across, I think. We're very excited about it, partly also because it takes a very unique product set to be successful there, right? We've shown success there. We've got the products. Frankly, a lot of the competitors we would run into in scale out don't have the product set to compete in scale across, right? I think we feel very good about it, but it is relatively early, just to the point Chantelle and John are making.
Sure. Yeah, it's great. You know, lest we forget about good old cloud, front-end cloud. I think the back end has pretty much sucked all the air out of the room and the attention. Maybe tell us what's happening in the front-end networks these days. That's been your core business, where you've been such a dominant leader for so long. How is the front end changing to adapt to some of the new AI requirements, and what are you seeing happening in the boring old cloud business?
I mean, it's funny, right?
Forgotten, at least in the analyst world or in the-
Yeah.
-world it is. I mean, for us, it's certainly a tremendously important piece of the puzzle, right? From an upgrade cycle perspective, if I can call it that, most customers on the front end are still in the 400 gig era, right? At least at the hyperscale level. Some of them are maybe even in the 100, 200 gig era, because frankly, the applications we're using today are working just fine, right?
Yeah.
We're in a web conference; we're all coming in nice and clear over those 200, 400 gig networks. But it is gonna trigger a cycle, especially as this agentic traffic pattern becomes more commonplace. Probably next year is when you start to see the front-end networks get upgraded. We're certainly having conversations right now; it's still maybe more in the planning phases, but maybe next year is when you start to see the 800 gig kind of upgrade, just as the AI clusters start to go into the 1.6 era, right?
Right.
-at a generational gap between the front end and the back end, kind of naturally, if you will.
The other thing, by the way, is again, inferencing is having a huge impact there. Frankly, we didn't touch on this, but inferencing is also having an impact, and agentic is also having an impact on the enterprise side.
Wow.
-bit about that as well. I know it's not front end, but I just wanted to kind of get a-
I'd love to hit on enterprise too.
Yeah.
That's great, Rudy. Yeah, on the enterprise side, you're having these early discussions, I'm sure, with big Fortune 50 types and financials, and I can imagine those are the leaders leaning in here. Where are those discussions with you about architecture and build-out? Who's gonna hold their hand to go do these sorts of things? Do you think the traditional guys, your competitors that are maybe more deeply entrenched in enterprise, are gonna get their fair share there? How do you think about the enterprise AI build-out complementing where they already have quite a bit of private cloud out there, where you guys are very strong?
Yeah. One thing I'll start with, and the team can chime in: we've heard from some of the larger customers, because part of that $3.5 billion is with enterprise customers, as we mentioned, that as they go through their refresh cycles and think about new data center builds, campus and data center, to be fair, they're asking what their AI landscape is, what their AI goal in the company is. One thing we're finding is really great in the conversations: when you have 1 operating system like EOS across all of those components, it's super easy to have agentic AI sit on top of that, 'cause you're only having to hook into 1 operating system.
You can imagine having an agentic AI kind of portfolio where you're trying to go over different operating systems: not as easy to hook in, not as easy to be a ubiquitous experience. Like, Ken thought about that maybe 20 years ago, not sure, but he designed something-
-perfectly built for it. Those are things we hear from customers. We're hearing them pull their campus refresh earlier to try to get into this, to get ready for at-the-edge agentic AI inference. Rudy or John, anything you wanna add?
Yeah. No, I think, Chantelle, you hit maybe the most important thing I hear from customers, right? They're realizing that AI within the enterprise is not just a campus issue, it's not just a data center issue; it goes across the branch and the campus. Having, to Chantelle's point, that unified operating platform across all of those is really something that's exciting. Interestingly, the other thing they're asking us about, and we've heavily invested in this, is our AI strategy for, you know, AI for networking, right? Not just, we've talked a lot about-
Yeah.
AI, but AI for networking and AIOps. We've continued to invest there in AVA, which is our autonomous virtual assistant, driving better outcomes for customers and being able to do everything from root cause analysis to helping them automate as much of their network operations as they like, right? This is not about, hey, you don't need a network operator anymore. It's about how we can augment what the network operator is doing and make their life more efficient, if you will.
Really great, Rudy. Let's shift gears to one of my favorite topics coming out of OFC, which was the XPO announcement. I went in very excited to hear about it 'cause I hadn't done a ton of work on it myself. I was probably a little behind. Wow, I walked away from OFC and saw just the broad industry endorsement and was so impressed. Can you maybe walk us through kinda how you got there, how you built such strong support and what it means to Arista for that to be, you know, adopted as an industry standard?
Well, we have a very passionate founder, Andy Bechtolsheim, who.
You do.
over many decades has really led the definition of many MSAs working with these partners.
Okay.
It goes, you know, way back in terms of his development. The most recent before this was OSFP.
Okay.
We anticipated that at some point the thermal dissipation would be such that the conventional state of the art would not be able to contain it. That really was an enabler of the products you see today and our leadership in those deployments. We're seeing the same thing happen as we go to 1.6 T and beyond: the OSFP form factor was great, but it won't be able to carry the industry into the next generation. He's been working with the team here at Arista and the optics team with those partners, building it out, and we engage with them in testing-
Yeah.
-and definition. It's been a really great collaboration with the industry.
Yeah. It's a super cool solution. Go ahead, Chantelle.
No, it's okay. I don't want to stop you from saying it's super cool. What were you gonna say?
No, go ahead.
I think we're very proud because it's industry leading, right? As far as what it means for us, it's open to the industry, and everyone may play competitively in it. For us, again, it's another thing where we've been kinda ready; we've got our product portfolio already thinking about it for the next generation of things. I think it positions us well to be ready for something that the whole industry, hopefully, and our customers, more importantly, will benefit from for years to come, right?
For sure.
You know.
Yeah.
The other aspect of this that I think sometimes doesn't get fully absorbed is the front panel density this can add, right? Being able to shrink those racks opens up a whole bunch of other scale-up technologies, from R-RF to MicroLEDs, et cetera, that are far more power efficient, right? It opens up a somewhat tangential but super important benefit as these clusters get more and more dense and front panel density was becoming the limiting factor, right? Not to forget fiber. And the sheer amount of sheet metal, the less sheet metal you need, and the structural costs: frankly, those aren't paying the bills, right? Like, they're just-
Sure.
That way it will.
Yeah, I think.
Nice.
I think the stat we would throw out there is a 40%+ footprint reduction, just to give the audience an idea of what we're talking about.
Yeah. Rudy, you're saying not just for scale-out, but there are other applications for that same technology that you can embed in that XPO.
Yeah. Scale up is probably the most interesting one because-
Yeah
-we're getting to the point where, if you want the easy definition, right, if you're within the rack, it's scale up; if you go outside the rack, it's scale out. Well, we realized that we need more compute in that scale-up domain. Now we're thinking of ways to go across multiple racks but still stay within that coherent-memory kind of scale-up domain. Technologies like this are going to enable that, right? Because it gives you a lower-power way to interconnect these across a larger physical distance, where copper is really gonna struggle, right?
Yeah.
Not only is there a direct benefit from an optical perspective, but I think it opens up other avenues. And like everything Andy does, it's completely open-standards based, nothing proprietary. There's no, "Hey, you have to have Arista to get this to work." To Chantelle's point, it really continues to show the thought leadership that we bring to the table and how we move the industry forward with these open standards.
Super cool. When do we start seeing, you know, products in customers' hands and supply chains start to ramp? What's the timeframe that this starts to impact the industry?
I think this is really something that will probably be most impactful in the 3.2 T era, right?
All right.
At 1.6 T, you're still going to continue to see it, and frankly, I think if Andy was here, he'd tell you OSFP is not going away, right?
Yeah.
The vast majority of optics out there for the foreseeable future will be OSFP, especially when you count the enterprise and things like that. For XPO specifically, you'll start to see the first products really come out next year, and really large ramps in that 3.2 terabit era, where its value is much more impactful and where, to John's point, OSFP would struggle.
Right.
Yeah.
The only other thing I would add is that, about a quarter ago, everyone thought 3.2 T was when we'd have to go to CPO, and I think this shows we now have longevity even at 3.2, and that CPO's not required.
Exactly. Great. Maybe let's shift into the NeoClouds. I know we touched on them already, but you talked about a more consultative sale. Can you maybe unpack that a little bit for us in terms of what your role is in working with them? They're probably not as deep and sophisticated as some of these hyperscalers that have been doing this for decades. How is Arista's role evolving with the NeoClouds and AI specialists?
Yeah, I can start, and the group can chime in. I personally think NeoClouds are a fascinating segment. Every one of them is different and a pleasure to work with, and they all come to us differently. Generally, they come to us because they know we have the bigger cloud experience, and they would like more of that, like what Rudy and John described. There are different scenarios we come into. There's a scenario where the team has experience and they know us, and it's a best-of-breed, open conversation. We have a high win rate there because our Etherlink portfolio suits that very well, for all the things Rudy mentioned.
There's the example Kenneth Duda mentioned on our earnings call, where on the first try, with a capable team but in a different scenario, they went with a different solution, and now they've come to us because they realized their incumbent technology and architecture didn't serve their scale-out needs. Kenneth Duda did a very good job, I think, articulating that scenario. We see those, and we see cases where, when people feel emboldened to go to different XPU vendors, they come to us because they know we're agnostic to the XPU. They come at us from different angles, but we are known as a great provider for their AI needs. I don't know, Rudy and John, if there's anything you want to add.
Yeah, in terms of the selling process, the only thing I'd add is, look, what's done well for Arista is this deep engineering-to-engineering partnership, right? Having the folks that are working on some of these largest networks be in that consultative selling process with the NeoClouds is something they appreciate, because, frankly, they're breaking new ground, and they're trying to do this at the speed of light, so to speak. What normally might have taken a year or 2 years to ramp, you don't have that luxury anymore. The only other thing I'd add, and I'll tie this back to your previous question about optics, right?
One of the things they appreciate while working with us is we don't lock them in: "Oh, well, if you want to do this, you have to go down the CPO route," right? We still give them that optionality. If they want to go down CPO, we're thinking about open CPO and things of that nature. XPO is obviously there. LPO is an option. Traditional OSFPs will be viable. They don't feel as locked in, and I think that's something they're really beginning to appreciate, because, as I was saying earlier, ultimately they've got to bring dollars and cents in the door from their end customers.
To be able to do that, they need to be able to differentiate from the NeoCloud down the street, right? How do they do that if they're all running the exact same?
Yeah
architecture, if you will.
Yeah.
Sure. Makes sense. I wanted to go back and touch on agentic. It's going to come up more and more, and I think it's really starting to impact companies' core business with all the different agents being rolled out. How is that affecting your customers' architecture? We hear about more CPU content, locating CPUs closer to GPU clusters. How does this affect Arista's demand equation and the conversations you're having with customers about agentic infrastructure?
Do you guys wanna start or do you want me to start?
Sure. Yeah, I can start.
Yeah.
Chantelle kind of touched on this a little bit earlier, right? People are starting to think about, okay, what does this new bandwidth pattern on the network look like, right?
Yeah.
You know, in our traditional world, it's a little bit more bursty, right? You go to a ChatGPT, a more chatbot kind of world of LLMs, so generative AI: it is still bursty, but maybe the bursts are slightly bigger, if I can use that term. Then you get your agentic thing, and these things are talking to each other all the time, right?
Yeah.
How do you kind of manage through that process? As Chantelle touched on, right, Wi-Fi 7 is becoming, you know, maybe a sooner cycle than you normally would have seen with these Wi-Fi generations.
Yeah.
They're also thinking about things like what that means from a Power over Ethernet perspective, right? Now you've got all these smart devices across your environment that are also becoming agents themselves. How do you enable them? How do you enable them to be securely connected via the network?
Yeah.
Zero trust networking is another aspect of the conversation they're having. And frankly, the bread-and-butter stuff around things like routing, encryption on the wire, and multi-tenancy, stuff that most people would have thought was boring until maybe 6 months to a year ago, is becoming super important now. Again, as I said earlier, we're one of the few folks that has the products to play in that space, right? We like our shot as we get into that. It's not that there are no competitors, but it is a different competitive environment than I think you see with, say, Cisco and Aruba.
Sorry, go ahead, John.
Just one of those boring parts that Rudy talked about that's baked into the architecture: whenever we see a mission-critical application, we see a thrust in those customers looking at observability. What's happening on my network?
Yeah.
You can imagine all these agentic communications, this new type of workload. What kind of visibility can you provide customers?
Yeah
-to see what's actually going on? That's gonna be pretty important as well.
Is that where your telemetry kind of has a lot to offer?
Absolutely. Not just the telemetry itself, but the architecture, the state-based architecture that enables streaming information, is going to be critical.
Yeah.
That's a big differentiator for you guys. I've heard a lot about that. You know, with regards to 1.6T coming, you know, on the long way to Tomahawk 6, how is that going to impact your mix and demand? Is this just a normal generation, or does this bring any kind of accelerant to the demand equation to keep up with all this bandwidth?
Well, just to be clear, given the way you asked the question, we're not pre-announcing anything.
Yeah
-you know, you can count on us to be ready when the cycle's there, with the great products we usually deliver. Take that as we'll be ready when the market's ready and everything comes together from an ecosystem perspective.
Sure.
I don't know if there's anything, Rudy or John, that you'd add in the sense of accelerating at this time. You know, clearly there's demand in the portfolio and the customer set. Rudy, anything you want to add?
Yeah. I mean, you know, I guess just one interesting data point, right? Like, I think we released our first 400 gig product, and John will probably keep me honest here, in 2019, give or take. You know, the 800 gig products came out in 2024, right? Call that, what, five years.
Yeah.
It's not gonna take five years to get to 1.6T. I know we're not pre-announcing product, but I can tell you it's not gonna take five years, right? People are already talking about 3.2T. In that sense, the pace of innovation is absolutely rapid, right? Like, I don't think it's anything we've seen in the world before. Frankly, you know, that's why when Jayshree says, look, the demand environment is nothing like she's ever seen before, and by the way, she's seen a lot in this business, that says something. In terms of how customers think about it, this is the next generation of networking. The uniqueness, of course, is that the cooling infrastructure is changing, right?
Absolutely
We're now gonna see a mix of air-cooled and liquid-cooled.
Right.
That's, I think, an interesting dynamic that customers are still trying to get their hands around, because this is breaking new ground for everyone, right? I mean, it's part of the reason why, you know, Chantelle talked about new use cases, new products, et cetera. This is some of that complexity that comes in, right? It's not just about, can we ship the product out the door? It's, can the customer also absorb that and get value out of it, right?
That's where I think, you know, we come in to partner, and Tomahawk has worked very well for us. You know, we see no reason to change from that, right?
Yeah. This shift to liquid-cooled data centers, I mean, I didn't see it coming that mechanical engineering would be so cool again.
It's back.
It is. Just wrapping up here, let's talk briefly about campus and your progress in campus and Wi-Fi a little. Can we touch on that briefly?
Yeah, I would love to. You know, I think last year was our first real external accountability model, and we hit that revenue of $800 million. We raised the target this year to $1.25 billion, fifty-five percent growth, for ourselves with you guys to hold ourselves accountable.
Yeah.
Super excited. You know, you've heard others talk about there's a great refresh cycle happening in the next couple years.
Of course.
We're definitely participating. We're winning in campus accounts where we're not even in the data center yet, which validates our portfolio. Our proof of concepts are very well received. I think we're super excited about campus. Our portfolio is there to meet customers' needs, including this agentic AI conversation. Rudy or John, anything you wanted to add? We're right on time.
To put that 55% in context, right? Like, it's a market that's growing single digits.
Yeah.
Admittedly, we're starting from a low base. I mean, our share is probably in the 3% to 5% range today. I mean, that is just tremendous growth. Sorry, I think, you know, just as much as we're excited about the AI opportunity, the campus opportunity is incredibly exciting, especially because there are far more customers building campuses than there are customers building AI, right? It's a different selling motion. It's a little bit slower of a ramp, et cetera.
Yeah.
It is a long game, and we're in it for sure.
That is great. Well, super. I mean, I really appreciate you guys joining today. Any last comments you wanted to make, Chantelle, to investors, just wrapping up, you know, where Arista is delivering?
We raised our year twice in the last two quarters. We had 54% growth including the change in deferred revenue, exceeded EPS expectations, and we'll continue to deliver value to you guys. Hopefully you're as excited as we are about the demand that we've conveyed to you today.
For sure. Well, thanks for joining. Really appreciate it.
Thank you. Have a good day.
Bye.
Bye.