
Barclays 23rd Annual Global Technology Conference

Dec 11, 2025

Speaker 1

I'm the Barclays IT hardware and comm equipment analyst. Thank you so much for joining. Very happy to have Arista Networks with us. We have Mark Foss, who runs operations and marketing, and Hardev Singh, GM of the cloud and AI products. So we're getting a few different pieces of the business, a few different perspectives here. Thank you so much, guys, for joining.

Mark Foss
Senior VP of Operations and Marketing, Arista Networks

Thanks for having us.

Yeah, we're going to bounce around a little bit, but maybe we start with Hardev. When we're looking at cloud and AI networking, there are a lot of debates about competition. We've always been talking about white box; it's always been there, but it's not going away as a discussion point. And obviously, over the last several years, given what NVIDIA has done with Mellanox, they've increased their presence a lot, first with InfiniBand, now somewhat with Ethernet. So maybe you could just walk us through how you view competitive differentiation and dynamics, and what is it that helps Arista win?

Hardev Singh
General Manager of Cloud and AI Products, Arista Networks

Sure. I mean, I'll start by saying that the AI momentum is strong, right? There's a lot of activity in this AI space, and we're very excited about it. I think our differentiation and our value hasn't changed from the past. You know, with AI, you just have a different set of requirements. You have the scale-up. Let me frame where we really play today, right? So if you're looking at scale-up with NVIDIA, that's a closed system right now. We don't play. So most of where we are playing now is the scale-out, which is the network to connect all these accelerators. And now also you hear about scale-across, which is once these cluster sizes become really big, you need to connect them to each other. It's DCI in the traditional world.

That's another segment of that AI space where we play very, very strongly because that's routing, that's a chassis product for us. But what we're also excited about is the non-NVIDIA accelerator ecosystem picking up, right? That opens up more opportunities for us for scale-out, for sure, as well as scale-across, but also potentially in the future for scale-up, right? With these non-NVIDIA accelerators, you have the scale-up, which is going to be Ethernet, whether it's ESUN or UALink, the flavors of Ethernet. That opens up a new TAM for us. Obviously, I don't think it's material in 2026. It's more 2027. 2026 will be more trials, pilots. That's another opportunity we're excited about, so.

Mark Foss
Senior VP of Operations and Marketing, Arista Networks

And in terms of our differentiation, Arista is a software company, and it all starts with EOS, which is our software. It's a really unique operating system. It all starts from the architecture, a multi-process, state-sharing architecture, very, very high quality. It's been deployed in some of the largest networks on Earth. Automation, visibility, a single-binary image, that's our main differentiation. And then on the hardware side, we use off-the-shelf chips, but the way we design our hardware is very efficient. So our power draw is generally about 25% less than the equivalent products out there, which adds up when you're deploying a lot of products at scale. And with power being a big top-of-mind issue, it really helps to have a lower-power-draw switch.
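To put the power-draw point in perspective, here's a back-of-the-envelope sketch. Everything below, fleet size, per-switch wattage, electricity price, and PUE, is an illustrative assumption, not an Arista figure; only the roughly-25%-lower-draw claim comes from the conversation above.

```python
# Illustrative annual electricity cost of a switch fleet at a 25% lower
# per-switch power draw. All inputs are hypothetical round numbers.

def annual_power_cost(num_switches, watts_per_switch, usd_per_kwh=0.10, pue=1.3):
    """Yearly electricity cost for a fleet, including facility overhead (PUE)."""
    kwh_per_year = num_switches * watts_per_switch * 24 * 365 / 1000
    return kwh_per_year * pue * usd_per_kwh

baseline = annual_power_cost(num_switches=5000, watts_per_switch=2000)
efficient = annual_power_cost(num_switches=5000, watts_per_switch=2000 * 0.75)

print(f"baseline:  ${baseline:,.0f}/yr")   # ~$11.4M/yr at these assumptions
print(f"efficient: ${efficient:,.0f}/yr")
print(f"savings:   ${baseline - efficient:,.0f}/yr")
```

Since cost scales linearly with draw, a 25% lower draw saves 25% of the power bill at any scale; the absolute dollars only become significant once the fleet is large, which is the speaker's point.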

Okay. You mentioned scale-up and scale-across. Maybe let's start with scale-across. I was talking to Ciena before about some more optical pluggable wins. How do you see Arista playing from a product standpoint? And maybe talk about whether it's latency, speed, distance. Where is it that there's going to be a higher content of Arista in that scale-across architecture?

Hardev Singh
General Manager of Cloud and AI Products, Arista Networks

So scale-across is a new term, but traditionally we call that DCI, which is Data Center Interconnect. In an AI world, it is connecting a bunch of AI data centers that are physically in a metro region or across regions. So we're very excited about that opportunity. We play very strongly there with our EOS feature set as well as our 7800 chassis platforms. You mentioned coherent technology. Once the distances become large, you need coherent optics. ZR technology is what most customers use and are looking at today, at 400 Gb going to 800 Gb. Yeah, if the distance becomes very large, then you have a latency impact. So it needs to be architected well so that your AI workloads are not going across these long links too much. But that can be solved from an architecture perspective.

But yes, from a scale-across perspective, I think we are a very dominant player there.
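For context on why distance starts to matter in scale-across designs, here's a minimal sketch of fiber propagation delay. The group index and the distance tiers are textbook approximations I'm assuming for illustration, not figures from the speakers; real links add further delay from amplifiers, FEC, and routing hops.

```python
# Rough one-way and round-trip propagation latency over a DCI fiber link.
# Light in single-mode fiber travels at roughly c / n_group, which works
# out to about 5 microseconds per kilometer one way.

C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.468        # approximate group index of standard SMF

def fiber_latency_us(distance_km, round_trip=False):
    """Propagation delay in microseconds over a fiber span."""
    one_way = distance_km * FIBER_INDEX / C_KM_PER_S * 1e6
    return 2 * one_way if round_trip else one_way

for km in (10, 80, 500):   # metro campus, typical ZR reach, long-haul
    print(f"{km:>4} km: {fiber_latency_us(km):8.1f} us one-way, "
          f"{fiber_latency_us(km, round_trip=True):8.1f} us RTT")
```

At metro distances the added delay is tens of microseconds, which most AI traffic tolerates; at hundreds of kilometers it reaches milliseconds, which is why the speaker stresses keeping latency-sensitive workloads off the long links.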

Okay, great. And then maybe the second topic is scale-up, and we'll talk about it in a little bit as Blue Box, because I think that's part of the target audience for that technology. I know you guys have had these Blue Box types of deployments already, but maybe walk us through how those products are different from a fully EOS switch and how they're different from an EMS ODM switch from, like, a Celestica or whoever. Talk to us about the software and hardware differentiation with both the higher-end switches and the lower-end switches.

I mean, I touched on scale-up slightly earlier; the opportunity for us would be from the non-NVIDIA accelerators. In scale-up, the complexity is really in the hardware design, because every accelerator has different technical characteristics, right? The link budgets, the signal integrity, the HBMs that are connected to these accelerators. So it's really a custom switch. You can compare it to an NVLink switch, right? That's what that product looks like. From a software feature set, you could say it's slightly light, but the few features that you need, reliability, congestion control, latency, are super important. So slightly light in feature set, but those features have to be very robust, and it's way more complex from a hardware design standpoint. The way we think about it, I think it plays into Arista's advantage because we are good at doing complex hardware designs.

Mark Foss
Senior VP of Operations and Marketing, Arista Networks

Yeah. Blue Box today is mainly deployed in front-end networks at really a handful of players out there. And the real differentiation there is you have diagnostics, better signal integrity, better power draw. There are probably a few others I'm missing, but that's generally the consensus there. Then they can use a third-party operating system on top of that.

Okay. So I mean, back to the scale-up. As far as guideposts, when we see all this TPU growth or Trainium, that's most likely going to be a big impetus to push scale-up to Ethernet, and that opens things up. Is that the right way to think about it?

Hardev Singh
General Manager of Cloud and AI Products, Arista Networks

It's a net positive. There will be a pull-through effect for Ethernet vendors, yes.

Okay. Got a lot more in cloud, but maybe we'll go over to campus for a little bit.

I didn't want to have that one campus question today.

Here's your first one. Let's see if we can do it. Really good guidance, 50%-plus growth for next year. So it's kind of accelerating after a few good years, which is normally not what happens in the campus world. So maybe walk us through how that's happening, and still very low market share. So how much runway is ahead?

Mark Foss
Senior VP of Operations and Marketing, Arista Networks

Yeah. Sure. Arista started shipping product in 2008. We were focused on data center and large clouds, the largest networks on Earth. When we got to the 2018 timeframe, many of our customers came to us and said, "Hey, we love the quality of your software. The automation is unprecedented. We love CloudVision and the visibility that we have in the network. Can you please develop some products for the campus end users?" So we listened to our customers, and in 2018, we started delivering PoE switches. We acquired a company that did Wi-Fi. And slowly but surely, we've been chipping away at share, and we're targeting the high-end enterprise.

So if you look at the campus market, which is about a $30 billion market, about half of that is the large enterprise, which is roughly the Global 2000, if you will. So our product set is really focused on that segment of the market. And we've slowly chipped away there, and we found that customers were really looking for a viable second-vendor alternative. Generally, Cisco is the big player in the large enterprise, and they've got about 75% share there. And customers were really looking for a good, viable alternative that had high-quality software and automation and visibility. So they kind of came to Arista with open arms, and we're slowly gaining share. Of the overall campus market, we're probably at about 2.5%. If you look at the large-enterprise segment, we may be approaching 5%.

If you look at our data center share, right now we're at about 30%. So could we eventually get there in campus? Anything's possible. It will probably take longer to gain that much share in campus, just because in the data center market, if you win a couple of these big clouds, you can get 10% share pretty quickly, but there's no cloud titan in campus. So you've kind of got to nickel-and-dime your way up there. But right now, it's been high growth, and it's primarily because of share gain.
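As a quick consistency check on the share figures just quoted, the arithmetic works out like this. The round numbers come from the conversation ($30 billion TAM, roughly half large enterprise, ~2.5% overall share); the simplifying assumption that essentially all of that campus revenue sits in the large-enterprise segment is mine.

```python
# Sanity check of the campus market math quoted above, using the round
# figures from the conversation. Purely illustrative, not reported results.

campus_tam_b = 30.0                 # total campus market, $B
large_ent_b = campus_tam_b / 2      # large-enterprise segment, ~$15B
overall_share = 0.025               # ~2.5% of the overall campus market

implied_rev_b = campus_tam_b * overall_share           # implied campus revenue
implied_large_ent_share = implied_rev_b / large_ent_b  # share if it all sits
                                                       # in large enterprise
print(f"implied campus revenue: ${implied_rev_b:.2f}B")
print(f"implied large-enterprise share: {implied_large_ent_share:.1%}")
```

The 2.5%-overall and approaching-5%-of-large-enterprise figures are mutually consistent under that assumption: 2.5% of $30B is $0.75B, which is 5% of the $15B large-enterprise segment.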

Yeah. And there was a new announcement this week incorporating a little bit more AI. Talk about that a little bit.

Yeah. No, it was a couple of things. It was more about increasing the domain for mobility. So I think we can support up to 500,000 endpoints. And then also we have this autonomous virtual assistant, which we got from one of our security acquisitions, and we're applying that now to campus networks. So it's been really good.

Okay. All right, great. We touched on the $1.25 billion campus target. Maybe back to AI and the $2.75 billion target, another one with really good growth. Obviously, you have two very large customers and a third that's probably getting very large. You've discussed these four deployments, the scale deployments. So maybe give us an update there. And how do you think about Arista's positioning as we go to more dense deployments and bigger GPU counts? Does the advantage that EOS gives become even bigger in those types of deployments?

Hardev Singh
General Manager of Cloud and AI Products, Arista Networks

Yeah, I think the growth we're showing from this year's number on AI and then going to $2.75 billion is a very healthy increase. Yes, you have the large guys building these massive clusters, 100,000-plus, but I think there's also equally strong momentum in the neo clouds or the tier two clouds, baby clouds, whatever you want to call them, as well as enterprise. Some of the verticals like financials or automotive research are seeing use cases for building AI clusters. Yes, they're going to be significantly smaller in size compared to the big guys, but they all add up. I think Jayshree mentioned in the last earnings call that we have 20 to 30 new customers in this second tier of AI customers. So strong engagements. With these guys, the value of EOS becomes even more important, right?

They are going after the same AI use cases, whether they're providing AI as a service or running their own AI workloads. The robustness of the network is key, and they want to pick a best-of-breed network. They probably don't have an army of engineers like the hyperscalers do, so they depend on Arista, for example, even more to help them build that AI network at the quality they want.

Mark Foss
Senior VP of Operations and Marketing, Arista Networks

For these large cloud networks, we've been working with these guys for a decade or even more on building out large, scalable clouds. So we were really the logical choice when they started investing in AI and needed an Ethernet portion for interconnecting their GPUs. They know us very well, and they love our operating system. We were the logical choice to be selected for the Ethernet interconnect.

Hardev Singh
General Manager of Cloud and AI Products, Arista Networks

And just to add, the engagement model is also slightly different. With the tier two clouds or enterprise, they have a requirement. There's a time to market. All right, Arista or anybody else, do you have the network? Can we get this up and ready and get it going? With the hyperscalers, it's very different. They are always at the cutting edge, trying to adopt bleeding-edge technology. So with them, it's a close partnership. You work with them on these requirements. You almost co-develop with them, and then you reach a stage of, okay, it's an RFP, because they need to have dual vendors in whatever role they want to go after. Then you probably get a design win, then you co-develop the product with them, and then you ship, deploy, and all of that. So it's a very different engagement model.

And I think the partnership we've had with these hyperscalers is built on close trust. It goes a long way. There's value not just in the product, the hardware and software, but there's a lot of value they see in the partnership and the history of sometimes guiding them in the right direction, choosing technology or defining new products with them.

Okay, great. Yeah, would love your perspective. One of the things that goes unnoticed by the casual observer when comparing revenues with other companies is that you guys obviously have a very large deferred product revenue balance. I've covered this space for a long time. I've seen this stuff before with newer technologies, but can you just level-set with us? What is causing that? What are the features or specifications that the customer is looking to see in the network before revenue can be recognized?

Mark Foss
Senior VP of Operations and Marketing, Arista Networks

Yeah, yeah. I think every customer is different, to be quite honest, and as we've been growing and deploying larger projects, the acceptance terms can be a little bit longer sometimes. Our acceptance terms can range anywhere from 30 days to 18 months, and the deferred balance is growing, but there's stuff that's going in and out every quarter, so it's not like it's the same accounts or the same projects in there every quarter. Everything goes in and out, but I think it's a function of the growth, and it's a function of just larger, more complex projects, is really what it is.

Okay. And then maybe sticking on the data center and AI, a little bit on the go-to-market. So could you touch on maybe like neo clouds and sovereigns that are getting a lot more attention? Are those harder to get at-bats? How are you making sure that you're involved? Because some of the competitors there, like a Cisco or an NVIDIA, might have a country presence or whatever for a long time. So maybe walk us through that part for some of the newer customers that can be some decent scale.

Yeah, you got to keep your eye out for these because sometimes these could pop up quick. You've never even heard of these guys. But we have some salespeople that are really savvy. They are constantly looking at the news. They're constantly doing research and figuring out where this spend is happening and where these neo clouds could be popping up. So.

And NVIDIA can bundle, even if their networking is not as strong as yours. How does that dynamic work? How do you fight that dynamic? I'm sure the more sophisticated ones are making independent decisions.

Yeah. Yeah, no, for sure, and it's been a very valid strategy for NVIDIA. I think at the hyperscalers, it's generally more difficult to deploy a bundling strategy. They're very savvy, and they've got a lot of buying power, so they will generally go with what they want. The smaller companies are definitely more prone to the bundling strategy. But at a lot of these neo clouds, a lot of the engineers have been around the block. They've often come from the hyperscalers, and they move around. So these guys know what they're doing, and oftentimes they don't allow that. The bundling strategy definitely can work. But generally, the more sophisticated and experienced the networking people are, the more they will choose to disaggregate and go Ethernet.

Hardev Singh
General Manager of Cloud and AI Products, Arista Networks

I think the trend of InfiniBand now really going down and Ethernet picking up helps the Ethernet players, including us, and gives customers, the smaller ones, even more confidence to adopt Ethernet.

Now, is that fully because the consortium worked together to match technical capabilities with InfiniBand, and maybe customers don't want a proprietary-type solution? And is the sense that there still will be places where InfiniBand makes sense, but in general, it's going to be a downward trend?

A combination. A bit of that, a bit of them seeing the big guys deploy it at scale and talk about it publicly. So it's a combination of a few things that make them feel very confident. And I think also NVIDIA is talking more and more about Spectrum Ethernet. So there's a change in the tune there as well. It's a combination of things. Like Mark said, these guys are smart. They move around. Yeah.

Mark Foss
Senior VP of Operations and Marketing, Arista Networks

And these guys like to dual-source as well. If you have a proprietary technology, you can't dual-source it, and they don't like to get locked in. They want to have choice and be able to have leverage with their vendors and things like that.

Okay. Maybe, Hardev, talk to us a little bit about speed migrations, 400 to 800, 800 to 1.6T. Where are we in that continuum now? I know some customers are going at different paces for sure, led by some of the very large hyperscalers. Sometimes it's led by hyperscalers that you're not a major supplier to, so it might look different. But how much do those transitions matter? And do you generally see those transitions as opportunities to take more share or get involved in more use cases? Maybe just walk us through that dynamic.

Hardev Singh
General Manager of Cloud and AI Products, Arista Networks

I'll start by saying 2025 is done now. 2026 is probably the year of heavy 800 Gb deployment, right? And I'm talking specifically about AI, but I think it's even more important with AI to adopt the next gen as fast as you can. So if we as a company can be first from a time-to-market perspective to get the next speed out, that actually gives you an advantage, a head start, because you will get into those qualifying cycles early, with big or small or medium-sized customers, and then you just grow from there. So yes, we are. Andy talks about it. We're heavily investing in our next-gen 1.6T-based products. There are new technologies that come along with that. There's rack-scale infrastructure. There's liquid cooling. There are technologies around how to bring the power down.

So, LPO at 1.6T, we still feel confident we can get LPO to work. There's co-packaged copper. There's CPO. All valid technologies that are under evaluation. And I think that's one way to show differentiation. So I'll sum up by saying, yes, getting to the next speed first and fast is a differentiation.

Mark Foss
Senior VP of Operations and Marketing, Arista Networks

One thing I'd like to add is it's really good to already be qualified at the lower-end speeds with your software, because about 90% of the qualification is related to software. So if you're already qualified at, say, 400 Gb, when you move to 800 Gb, that goes really fast, right? As soon as the 800 Gb products were available, we saw a really fast transition from people deploying 400 Gb in AI to 800 Gb. It almost happened overnight for the new deployments. As soon as it was there, people just moved over. That was because people had already qualified the software.

Hardev Singh
General Manager of Cloud and AI Products, Arista Networks

We believe in shipping first and not announcing first. That's been a consistent strategy.

Mark Foss
Senior VP of Operations and Marketing, Arista Networks

Yeah.

Interesting angle to take. You did mention liquid cooling. I was at your Analyst Day, and I think there was a slide that showed a liquid-cooled switch. It was probably up there briefly. What's the timeline there? What are the power savings? What's the differentiation? And do you think that's going to become as big a topic for switching as it has for, say, servers?

Hardev Singh
General Manager of Cloud and AI Products, Arista Networks

I think liquid cooling is going to ramp, and it's going to ramp fast. But I think it's a function of the data center builds. If I'm a customer, for example, who's planning to build these data centers, that can take time, right? Location, power, construction, all of that. And if I'm building my next-gen data center that's going to go live in 2027 or 2028, I probably want it to be 100% liquid-cooled, because I can save all the ambient air conditioning, all that effort, and design it differently. At that point, the network has to follow compute. The compute today is already liquid-cooled, the NVL72s or even the other accelerators. So at that point, for the network rack, there'll be two things. You'll go to a rack-scale level. It's no longer a single switch.

It's a rack of switches, fully liquid-cooled, and they'll complement the compute, which is already there. So you'll have one liquid-cooled infrastructure in the data center, and the network is part of it. But don't assume everything's going to be liquid-cooled. I think there'll be a combination of air-cooled and liquid-cooled, because there's existing data center investment where air cooling is in place. So it'll probably start off with a mix of air-cooled and liquid-cooled, and then eventually liquid cooling takes off.

Okay. You mentioned LPO and CPO. We get a lot of questions about the impact of those technologies, as well as OCS. Could you just talk about, one, the different interconnects, and then two, OCS, and how you see those impacting the switching and routing markets?

LPO, again, is probably thought leadership from Arista to the industry, right? Andy Bechtolsheim is the one who came up with this concept technology and got the industry to adopt it, and now at 800 Gb, it's in high-volume deployment, right? You're probably talking to the optics suppliers. And it has a big benefit to the customer in CapEx cost, because they're cheaper since there's no DSP, and in OpEx cost, because it's lower power, so I spend less money on power, which means I can actually put more compute in, right? We are confident that even at 1.6T, we can get LPO to work. So we're working on that. CPO is another promising technology. It's been around; this is probably the second or third or fourth generation. It's again addressing the same thing, which is how do I reduce my cost and power?

There are a few things to solve around serviceability and such, but I think the industry will get there, so we will embrace it when that maturity comes, and there are other technologies too, like co-packaged copper. CPO also has a use case in scale-up in the future, not just scale-out, right? So it's a promising technology. You mentioned OCS. On OCS, Google has done a great job, with probably years of investment to get to the stage they're at. Whether that gets adopted outside of Google remains to be seen. I don't know how much time that would take, because there's a heavy engineering lift on the software side to get the technology working, and more importantly, you need to have the scale to make that work, right? So.

Okay. Great. We probably have time for one more, Mark, for you with your operations hat on. The company was early on with purchase commitments, and I don't know, five years ago, someone was saying, "Wow, these numbers are massive. What are they doing?" And they were needed, and they're still needed. So maybe talk to us about where you are in capacity and component availability, and what's the risk that there's some disruption in the next year or so, because the demand is just so high.

Mark Foss
Senior VP of Operations and Marketing, Arista Networks

Yeah. Yeah. No, we try to do our best to mitigate risks. I think one of the biggest purchase commitments we have right now is with the chips, because the chips are anywhere from 38 to 52 weeks out. So we're constantly having to pre-order these. And one of the big challenges is figuring out the mix a year in advance. With all these different chip architectures, you've got to figure out which is going to be the highest-volume one a year from now, and it's more of an art than a science. Then for the rest of the supply chain, compared to the whole COVID situation, where demand went down and there were supply chain issues, now the demand is really high. So it's a little bit different.

So there's constantly little challenges here and there in the industry. There'll be a shortage here, a shortage there, and we jump on those, and we'll buy from third-party brokers every now and then if we need to. And we just do our best. But yeah, there is continuing to be a lot of demand out there, and we're just managing the best we can.

Okay. Great. Well, I think we're up against time. So thank you both so much for joining. Thank you, everybody.

Thank you, too.

Hardev Singh
General Manager of Cloud and AI Products, Arista Networks

Thank you.

Mark Foss
Senior VP of Operations and Marketing, Arista Networks

Thank you.

All right.
