Okay, folks, thank you very much for joining us. This is Simon Leopold, Raymond James, our Semiconductor and Data Infrastructure Analyst. Day two of our TMT+C conference here in New York. Very pleased to have the team from Arista Networks with us for a fireside chat. We have with us Chantelle Breithaupt, SVP and CFO. We have Ashwin Kohli, who is the Chief Customer Officer, and Rod Hall, who is supporting the IR team in comms. Is that the right way to...
And finance.
And finance, wearing many hats.
Yeah.
But formerly sell-side analyst, so sympathetic member of the group up here.
Our team.
I do appreciate having IR people who have sat in our seat because they understand what we go through. Not that you don't, but it's a little bit different. I'd like to maybe see if we can kick this off to help out investors who surprisingly might be new to Arista. I want to see how you like to introduce the company to a potentially new investor.
Yeah, so thank you. Hi, everyone in the room and on the webcast. Thank you for that great question. If people are not familiar with Arista: welcome to a best-of-breed, pure-play networking company. We're coming up on 12 years past IPO. We'll be reaching $10 billion as we get into next year with our 20% growth estimate. It's been a tremendous run in the sense of best-of-breed technology, working in AI, data center, cloud, enterprise, campus, many, many different aspects of the industry. And there's never been more demand for networking to be as great as it can be. So we're very happy to play in that sector.
So I think one of the interesting aspects is the company is becoming multidimensional, which is, I think, important. So how do you think about describing the multi-year opportunity in the dimensions that are all appropriate?
Yeah, so we had our Analyst Day back in September, and it was a great time to reintroduce and reset, because of this frothiness we're seeing overall in the networking industry. We're looking at a $105 billion TAM, and that's the outlook. It was $70 billion the year before, so a great increase year-over-year from 70 to 105 billion, and that encompasses many parts. It encompasses AI, data center, cloud, enterprise, campus, and I'm sure there's a few others, observability.
Routing.
Routing, and so I think from those perspectives, a great opportunity across those, and over the next few years, we look to capitalize on that $105 billion. We did a great job against the incumbent competitor in front-end data center networking, and now we're the market share leader, branded and unbranded, when it comes to front-end networking. We're the only named vendor outside of China with material share in both front-end and back-end AI networking. So I think there's many different ways you can position this, but I would start there, I think, Simon, and go from there.
Great. I think one of the things that sort of characterizes your business is the leading customers. It's understandably a double-edged sword to have high concentration, but to have high concentration with companies that grow and spend a lot of money. How do we see the company's evolution changing that mix? You've had as much as 40% of revenue from two customers in the past. How do we think about what the concentration and the customer mix might look like several years from now?
Yeah, I think that we've very, very much appreciated our two 10%-plus customers, and we're very happy to work with them. We've learned a lot. Our engineering teams have grown up with their engineering teams. So we would never take that for granted and have always benefited from it. But as the company looks to grow, as we get to the $10 billion mark next year, and looking out to the future years to your question, we are absolutely looking to diversify that revenue. You've seen us in the enterprise, some great growth in enterprise. Campus, we have an $800 million revenue target for 2025, going to $1.25 billion in 2026. And from there, at steady state, that's only 5% market share. So we'll do the same thing we did in data center, rinse and repeat, on the campus side.
So you can see diversification both by law of large numbers and by where we're playing in the field outside of the two customers you're referencing.
And maybe broadly, how do you see AI playing into the demand side? So currently, there's some more debates about kind of this idea of, oh, AI is a bubble versus, no, it's not a bubble. It's this guy's winning, that guy's winning. What's your take as sort of an industry participant on the overall market and then sort of the supply and demand setup for you?
Yeah, and you guys absolutely participate in this conversation too. I like to look at external data so it's not just what we think. I like to use the 650 Group's reporting in the sense of what this AI market is. We don't think it's a bubble, just to answer that question definitively. There's an estimated $2.3 trillion of spend between 2022 and 2035: $300 billion on content and large language models, a trillion on Agentic AI, and a trillion on autonomous and robotics. $2.3 trillion of spend. That absolutely gives us an opportunity to play and participate in that. So those are the themes we see in the AI sector, to your question, Simon. I think we look forward to playing in both the front end and back end.
We've said that for every back end dollar, there's a $0.30-$2 pull-through on the front end as the companies and enterprises lean towards Agentic AI. Anything you guys would like to add?
Yeah, I mean, I could add to that as well, Simon, right? So if you think about the business specifically on AI, the opportunity in the future is much larger than what we have right now. Most of the focus is around a certain set of cloud customers, which are building these LLM models. To your question, I would say in the next 5-10 years you're going to have what I call a hybrid model, which is SLMs. SLMs are going to go to the edge, which is the campus, where Arista has a footprint as well. And so that demand for data, not only into the Cloud but also at the edge with the SLM models, and actually marrying those two together, is going to be a huge driver moving forward.
Yeah, and just to round out specifically the industries we're seeing lean in: we've seen a phenomenal lean-in by the education industry, financial industries, healthcare industry.
Sovereigns.
Sovereigns and neoclouds, which are being their own providers and having to differentiate. One of the banks I had dinner with last week, they have 320 AI use cases that they're working on and looking at us as a value partner and how to make those use cases come to life. So just to be specific, it's not just headlines.
So, I want to unpack maybe two aspects of this. One is sort of the nature of the customer. So much of the spending is done by these hyperscalers, but we hear about sovereigns, we hear about enterprise adoption, neocloud. So if we think about the market or the business opportunities for Arista, how do you see that evolving? So sort of what are your dependencies now, and how do you see that shifting over time towards sort of these more diversified customer bases?
I'll take neocloud, and maybe you can take enterprise, Ashwin and Rod. From a neocloud perspective, it's an exciting market. Neocloud and sovereign, let's just call it neocloud for the sake of nomenclature. For the neoclouds, what they really appreciate is when the conversation is open to best of breed, and it's not a conversation we're shut out of because of investment from our competitors or a commercial model bundling things together in a way that we can't compete on. If it's open to a best-of-breed conversation, what the neoclouds and sovereigns really enjoy is our deep, deep hyperscaler experience. They're like, how can we get that? The reason they want that is because they need to differentiate their models to their customers. How can we help them differentiate through our product portfolio, our hardware, our software, our EOS?
They come to us for our hyperscaler experience. We have had some really great designs. I think at Supercomputing, Core42 announced that they worked with ourselves, Broadcom, and AMD on a really great architecture. That is an example that is now public. We let our customers speak for themselves about who they are using. We are very excited by the neocloud opportunity.
Yep. Yeah, I mean, I can probably give you examples to complement the neocloud story over here. On the enterprise side, I was with the CIO of a large hospital just recently, and he was basically telling me that typically in the past, when you did an MRI or an X-ray, it would take a cardiologist a few days to get the data in, analyze it, and then give it to the patient. Today, this one hospital is doing it in minutes. What they're doing is taking the MRI scans and uploading them, the AI LLM is basically doing all the analysis, and then the results go straight back to the cardiologist within a few minutes. That is a massive change, and this is just healthcare itself.
You talk to a pharmaceutical company, a completely different industry, and over there, typically, you would think, hey, let's go do AI inside the Cloud. They're actually doing this on-premise. So they've actually bought GPUs, and I'm seeing a big trend on the enterprise side, where typically enterprises were thinking about AI last year, and they were trying it out inside the Cloud. Today, what they're saying is, okay, I'm going to buy between 2K and 4K GPUs, put them on-prem, and let me go figure out the typical use cases, which is what this pharma was doing.
And then they will actually train on-premise, which is a big change from the training they were doing inside the Cloud, just to bring the cost down and turn things around faster; it was a lot cheaper for them. So there are many, many examples like this.
I want to follow that one up because this is a topic that, when we were picking up coverage of the semis, we were debating internally as well. I want this to be open and objective because I haven't made up my mind, so I don't want to give you the impression that I have. There's this ongoing debate about how AI evolves in enterprises and organizations: what they do themselves on-prem versus when they employ the Cloud. And I imagine to a degree you don't care, as long as it goes on an Arista box, but I have to imagine you have an opinion, you've got some thoughts as to how this plays out. I'm curious your take on what happens on-prem or private cloud versus public cloud in AI adoption.
Yeah, yeah. No, that's a great question. What I see today, Simon, is very clearly, if an enterprise is trying to get their feet wet in this AI space, they will do training inside the Cloud. And what they'll try to figure out is, for that specific use case, can they do the job of training inside the Cloud? Once they've done the training inside the Cloud, then they'll bring the job back on-premise for what's called inference, and they'll do it inside their private cloud itself. So they'll run that inference, it goes from the back end to the front end, the front end is inside the data center, and then it reaches out to the rest of the network itself.
What's interesting is that will then go into not only a single data center, but a bunch of campus sites as well. What will be interesting, and we don't know because I'm seeing this at its infancy right now. When I see this, I think about car manufacturers, oh my God, retail, insurance, manufacturing, banking, HFTs. I mean, we've got hundreds and hundreds of customers looking at this space globally. And what they're trying to figure out is, okay, if we can do the training on-premise, can I do this quicker, faster, or less expensively than I could in the cloud? I think it's a little bit too early to say where that's going to land.
It's probably going to take another three to five years before that definitively happens, because there's a big cost on-premise to do this in the private cloud: power, space, cooling, all these factors really come into play. So, to be seen, but there is certainly interest in enterprise. I can see this playing out over the next few years.
Yeah, and the only other decision tree I've seen in addition, because we speak to many different customers at different venues, is the other thing I found is a bit of a decision tree or catalyst is who's kind of owning AI in the enterprise. Sometimes it's the CIO and the proper networking team, and sometimes it's a CTO or someone who's come in just for AI, and they do it within a closed team. And so sometimes, Simon, it's who's owning it in the customer and how broad are they taking it. And I think that also determines where and when they do on-prem versus in the cloud.
Yeah, yeah. I also want to explore a little bit this question about the back end versus front end. So I think you've given us this broad range, 30%-200%. And the way you've offered forecasts, you just lump it all together. Part of what I'd like to explore is, when customers are making decisions, are the decisions on the back end independent of the front end? Because you've been in the front end for a long time; it's the back end that's new. Or is there some direct correlation? Why such a wide range? Why do we lump them together? How should we really think about the logic there? Because early on, we were all sort of focused on measuring just the back end.
Yeah. I'll take the front-end back-end correlation, then maybe you want to take the fungibility side of it. What we found as we started this journey 18 to 24 months ago is that sometimes you'd have a clear example with the customer: this is for a back-end use case, and it was super clear. But what we found is that the pull-through, the $0.30-$2, is really about what's at the edge, as Ashwin mentioned too, the kind of employee usage demand that's increasing. And so if you've recently upgraded your network in the company, that's where you might spend $0.30 to bring up the demand capacity that you need.
If you're on a refresh cycle and you haven't refreshed your data center or spent in a while, that's where you might be at the $2, to make sure you're building in the capacity for the demand that's coming from all those use cases we just mentioned. Now, the line between front end and back end gets a little bit less clear sometimes.
Yeah, and it's tough, Simon. Before, we were very definitive about AI being built on the back end, and there was this clear distinction between back end and front end; it's not so clear today. So if a customer places a demand with us for the switches, and this is the beautiful thing about our technology, you can basically use it either in the back end or the front end. As Chantelle was saying, it's very fungible. We are proud of that because it's the same hardware, it's the same software, and it's the same support. And so we give customers a choice to do whatever they want.
And so if their strategy in the business changes, to say, hey, listen, I want to use this kind of application for the back end, and maybe I want to use a different type of LLM or a different type of SLM, they're able to do that with an Arista fabric, specifically an Ethernet fabric, which is very open, doesn't lock them in, is non-proprietary, and is scalable: it scales out, it scales across to multiple data centers. That's what I'm really proud about in what we offer customers. It gives them a choice. So this fine delineation between back end and front end starts to blur over time.
I'd also just add to that, as you look at some of the new data coming out, some of the new studies: there was a Nested Learning paper out of Google that suggests you're going to start to train in the front end as you perform inference. So you have a blurring of the lines even between the training and inference functions in the front end and back end. As you move out over the next few years, you'll see those two things morphing together more and more. And that's why we start to talk about them as sort of the same thing.
Yeah. I can give you one piece of anecdotal evidence as well. I was with the CEO of a very large storage company, and he had no idea what a back end and a front end were. Imagine that. We use the terms very often on the networking side, but for him, it's like, wait a second, a network is a network over here. I've got storage connecting over here specifically for AI workloads. And so he goes, if I'm connecting my storage to a network, and I'm obviously promoting us over here, but he said, I always want to make sure it's Arista, because you have the deep buffer switches, you make sure you don't drop a packet on the storage side specifically. So for him, he had no idea what front and back end were.
Interesting.
Interesting, right?
So we just call them AI centers.
Yeah.
So I want to also talk a little bit about sort of the different decision processes and biases of the sort of market verticals. So hyperscalers, sovereigns, neos. So at least two dimensions, I want to see if you can unpack. This is a little bit of a long question. In that we've heard the hyperscalers and this concept of Blue Box, help us understand what that means, the implications. The one around sovereigns that I want to explore is recently several compute suppliers, GPU platform suppliers, have talked about the sovereigns being much slower than others because they're governments, they're bureaucratic. Wondering if that's been your experience, if that affects networking the same way it affects the compute side of the house. And then enterprises, the question is sort of along the lines of where are they getting the money?
Yeah. So do you guys want to take Blue Box and I'll take sovereign?
Yeah, I can. So the difference.
I'm sure that goes to three questions.
That's fine. We'll keep them contracted.
I'll keep you honest.
So I'll try to make it as simple as I can, the difference between Blue Box and White Box. I think that was your first question. My answer breaks down into three simple buckets: hardware, software, and support. On the White Box side, the hardware comes from an OEM White Box manufacturer. Typically, those manufacturers don't care about software. Their strategy is: I want to make the cheapest hardware, and you can slap any software on top of that box. That software, which is the second bucket in the White Box strategy, is typically SONiC.
And so what a customer will do, a very large customer, is buy the hardware from the White Box OEM, take the software code, and customize it for their own use case. And then they have to figure out the third bucket, which is, oh crap, how do I go support this? And it's not only support for what I call day zero; then it's day one, which is what happens next year and the year after, because that hardware platform is going to have a generation gap. So today, they may have bought an 800 gig White Box. It's going to go to 1.6T, it's going to go to 3.2T at some point in time. They have to figure out how that story evolves over there.
Now, if you look at the Arista story with the same three buckets: on the hardware side, we've taken care of the White Box, the OEM vendor. Arista has hundreds and hundreds of hardware engineers in our labs basically building the most beautiful box out there. Why is that very differentiated from any other White Box vendor? We stress test the hell out of those boxes. We do earthquake testing, we do fire testing. I was talking to one of our testers, and they're so maniacally focused that from the time the switch arrives on the UPS truck to the time the pallet drops on the floor, they're asking, is there enough shock absorption on that? That doesn't happen on the White Box side. So that's the story of the beautiful architecture of the Arista hardware.
Then you marry on the software side, and we give customers a choice. This is what Jayshree alludes to: what's the beautiful thing we do with our customers? You can put EOS, or you can put FBOSS or SONiC on there. We give customers a choice. The last thing, which is the most key thing that most customers want, is support. Support happens not only where you pick up the phone and call 1-800-Arista, hey, I've got a problem, go help me out, but on the engineering side as well, where our developers sit down with the customers and say, how do we go architect this together? That is a hugely different value-add strategy and a scale story, where we take all the headache away from our customers, versus them having to go figure this out themselves.
Very few customers try to do the White Box strategy. You can probably count out non-cloud entirely. On the Cloud side, a very, very small handful of customers are doing this. So that was a long story, but I wanted to explain the differences between the two strategies. Sorry, Chantelle.
Oh, no, no apologies. It's an important message. And then on the sovereign side, what we've found with these customers, and we very much enjoy working with them as they lean into their AI strategy for their countries and their states, is a high intent to be successful, but it does take a little longer. I think the inherent cycle time is longer because they need to bring together the right team, business model, and operating model to make it happen, where they haven't really done this before. If you take large hyperscalers, they do this every day, all day long. Some of the neoclouds and sovereigns have to bring together the team, so they look to corporations, and they look to the industry, and they either bring them into the country or, for some of them, go globally.
And so you have really smart thought leaders coming together for the first time, working together, having to figure out how they make this happen. That's just an org model that takes some time. Then they have to work through their decision-making process, working with vendors like ourselves. And then the funding model; funding's usually there. There have been some instances where the funding gets a little bit lost in the shuffle, so we've seen some dropout, as perhaps you know yourself. But generally, the funding's not the cycle-time issue. It's more the org model and the team coming together to make decisions that are very important for that sovereign state.
And how have you incorporated that into your guidance? So when you've given us the forecast for the year, did you initially account for sovereigns taking longer, or is there any kind of delta or shift? The reason I'm asking that is it's relatively new to me being an outside observer hearing this point, which is logical, but hearing this point from the compute folks.
Yeah, we've definitely taken an assumption on them when we think they will happen versus perhaps the customer may think they're ready. And usual Arista style, we're pragmatic about that timing.
It's always a bottom-up approach. That's the way we look at things.
I want to ask some technology-oriented questions. One of the interesting opportunities that sounds like it's a couple of years out is moving switching architectures into scale-up. Much of the focus today is on scaling out: rack to rack, rack to storage. But scale-up looks like it opens new opportunities for companies like Arista. Maybe your thoughts on that opportunity, a timeline, a TAM; how should analysts think about that in their multi-year time horizon?
Yeah, so for us, I'll start with maybe the bigger picture on this. In the $105 billion TAM we announced in September, we do not have an assumption on scale-up. We're still looking at that as the industry works with the ESUN consortium to move the opportunity to Ethernet. So as we size up the ESUN timing and the market, we'll add that to our TAM, Simon; it's not included yet. But we do think it's an accretive, sizable opportunity. From the perspective of working with the consortium on ESUN, it's similar to the InfiniBand-to-Ethernet conversation we had two years ago. So we're very excited to see that open up. From a technology design standpoint, it is an "and," distinct from scale-out and scale-across. And maybe you guys wanted to comment on that.
Yeah, I mean, Simon, you probably saw the scale-up solution from a single vendor announced about 6-12 months ago. You've now got the same type of proprietary solution announced from a second vendor as well. I think what's going to happen over time, and this is to Chantelle's point, is customers will want choice. And so they want to do things in a standards-based way. And that, for Arista, is just going to be a positive. As that matures over the next 24 months, it's going to be on us to demonstrate to customers that if they're looking for an Ethernet-based solution in a scale-up scenario, they can choose Arista.
Yeah, I was just going to check and make sure Ethernet was sort of the protocol of choice.
Yeah, I was just going to say one of the things I'd like to make sure we also emphasize when this question comes up is the hardware design capabilities of the company. Like Ashwin was saying, we have hundreds of hardware design engineers, and these people are best in class, I would say, and so there are a lot of challenges coming along with scale-up, water cooling, all these kinds of things that are not that easy to engineer, and Arista is very well positioned to do that for customers, so that's another thing that I think we bring to the party when scale-up comes around.
So, I want to make sure we hit on the enterprise campus opportunity, because your forecast suggests phenomenal growth off a small base, but campus switching is a fairly large, established industry. What sort of strategy? What's the timeline? Is it sort of, hey, we're Arista and we're here, and that's sufficient? What gets you that kind of growth?
We would never just say "Arista's here" and assume that's sufficient. It's very much top of mind and part of our DNA. The great thing is that once Arista declares an intention, you can assume the whole company's behind it, because we only focus on networking. Starting in 2025 with a 5% share, similar to other things we've done in Arista's history, we will, I won't say slowly, but we will chip away. The reason I would say it's a little bit longer of a timeframe is because you're talking five, seven, nine years of a refresh cycle. We've had others in the industry demonstrate and say there's a great opportunity for refresh coming up in the next few years. We're very excited about that. We've pulled together the portfolio.
COVID set us back a little, I think, from the portfolio perspective, but now we've come out of COVID, with the acquisition of VeloCloud bringing the SD-WAN solution together. So we have the portfolio internally. We've got our go-to-market strategy. Todd Nightingale coming in has been a phenomenal resource in leadership, working with Ashwin and team to say, hey, this is how we're going to approach the go-to-market. Maybe use the channel a little bit more. We always get asked about the channel: are you going to over-rotate to the channel? We're not going to over-rotate, but we will use it more when it comes to campus. Anything else you wanted to add, Ashwin?
Yeah, no, I mean, if I had to summarize this, the first thing would be investments in products, both on the campus wired and wireless side. And we're making investments, we're making announcements on new products all the time. The second is the go-to-market strategy: Chris and I investing in the direct focus of those teams, not only in the U.S. but globally as well, and not only for data center, but campus as well, and routing and AI. And then the third thing would be the channel. Chris and I both focus on the go-to-market strategy for campus through the channel. It's a long play over here, Simon, and it's going to take time to evolve as well. But we're happy with the results so far.
Right, so we'll do new logo acquisition, and we'll do land and expand. And one of the things, at least as CFO, that I look for is, are we winning campus-first? And we are winning campus-first, and sizable deals, some happening actually in the city. So from that perspective, we go in and land and expand. Once you win campus-first, it's validation of the Arista brand and the portfolio. I've seen customers go from a campus win to the monitoring fabric, and then we land a data center, and then we go back the other way. So it can be a great land-and-expand opportunity.
So I sort of feel like I heard a slightly different message, and it could be that I just missed it, admittedly. But in the past, when we've done our channel checks, we typically hear channel partners say, yeah, we see Arista, but they're channel unfriendly. And so it sounds like that's a change for you to become more channel friendly. How far along is this? Is this something that's been ongoing and I missed it, or is this a more recent exercise in terms of your strategy?
I think the way I would position it on behalf of the company is it's more of the same, because we use the channel now. I think it's the mix of what you do that's channel-fulfilled versus channel-led. So I would consider us very friendly when it comes to channel-fulfilled.
I didn't mean to offend you.
No, no. No, no, it's okay. I'm okay with this. But channel-led, I think, is where other companies have gone further; they're more channel-led, and we're more channel-fulfilled. But we do lean into curated channel-led where we need it to happen, either in specific geographies or in campus. So I guess you can keep checking in with them. But we're very much working with them, because campus is more of a go-to-market with the channel as well as direct.
What percentage are campus first? How do you characterize that?
Yeah, we don't disclose that. Sorry.
Time flies when we're having fun. Our time's just about up. So I always like to close with the following, which is, what do you feel is either the least appreciated or most misunderstood aspect of the Arista story?
I think right here, right now, given the growth of AI: everyone in this room probably has a different assumption, but I assume you all have AI growing, which is very much network-heavy as the demand increases. There's room for many, many vendors to grow, and it's not always an or, it can very much be an and, and so we'll continue with our Arista style. I would say look for our guidance, look for our deferred revenue growth, look at our purchase commitments, and those will be indicators for Arista. But we're very, very excited for the next 5-10 years on this journey.
Great. Folks, thanks for joining us. Thanks a lot. We'll have a breakout session.
Thank you.