Good afternoon, everyone. Thanks for joining us here at the UBS Tech Conference. I'm David Vogt. I'm the hardware and networking analyst, and we're excited to have with us Arista Networks, Anshul Sadana, Chief Operating Officer. Before we get started, Anshul, let me just read a quick disclaimer from UBS. For important disclosures relating to UBS or any company that we talk about today, please visit our website at www.ubs.com/disclosures. If you have any problems, you can email me later. With that out of the way, Anshul, thank you for joining us.
Thank you, David.
I'm sure you don't need to read any disclosures. I think we're good?
I think we're good.
We're good. Perfect.
All risk factors apply.
So besides raising guidance and, you know, taking targets up, we won't get into that. Okay. So I think, you know, we had Cisco here earlier, we had other companies here earlier. Maybe just to level set where we are today with Arista. You know, I know you just had an Analyst Day fairly recently, where you set out preliminary targets, or a framework, for fiscal 2024 and a long-term guide. But I think there are some investors who are a little bit unclear on how we got here. I've talked to a couple of people over the last couple of weeks. So, you know, I think the shift from Arista, you know, basically architecturally taking share in the hyperscalers over the last couple of years, caught a lot of companies by surprise.
Maybe we could start there and talk about kind of what you do differently from a solution, software-based architecture, and then how does that lead us to where we are today? I know you, we're going to talk about AI, but I want to kind of level set and set the table first.
Absolutely. I didn't expect any AI questions anyway. But, you know, over the last 15 years at Arista, we've grown in data center networking especially. I'll come to campus as well. But we started out with building what we believed was the best solution for the whole world for data center networks. We call it cloud networking. It included a change in design. We went from a classic three-tier access-aggregation-core, which was the de facto standard, to a leaf-spine design, which is more of a distributed scale-out architecture. It lends itself really well to cloud computing, but no one in the industry wanted to do that. And to do that, you had to build very high-speed products.
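To make the scale-out point concrete, here is a toy Python sketch of how a two-tier leaf-spine fabric is sized. It is illustrative only, not Arista tooling, and the 64-port switch in the example is a hypothetical round number; the point is that capacity grows by adding leaves and spines rather than by buying a bigger core box, as in the three-tier design.

```python
# Illustrative sketch (not Arista tooling): sizing a two-tier leaf-spine fabric.
# Port counts are hypothetical; real designs add failure domains, breakouts, etc.

def leaf_spine_capacity(leaf_ports: int, spine_ports: int,
                        oversubscription: float = 1.0) -> dict:
    """Each leaf splits its ports between servers (downlinks) and spines
    (uplinks); at 1:1 oversubscription the split is 50/50. Every leaf has
    one link to every spine, so the spine port count caps the leaf count."""
    uplinks = int(leaf_ports / (1 + oversubscription))  # ports toward spines
    downlinks = leaf_ports - uplinks                    # ports toward servers
    return {
        "spines": uplinks,              # one distinct spine per leaf uplink
        "leaves": spine_ports,          # each spine gives one port per leaf
        "servers": spine_ports * downlinks,
    }

# Non-blocking fabric built from 64-port switches:
print(leaf_spine_capacity(64, 64))
# -> {'spines': 32, 'leaves': 64, 'servers': 2048}
```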
We were the first to market with 10 Gbps, then 40 Gbps, then 100 Gbps, pushing the envelope not just as consumers of merchant silicon, but as drivers of merchant silicon. We work with our partners, like Broadcom or Intel, and drive their roadmap, telling them what we need on behalf of our customers. We coupled that with a beautiful system design that is, by far, I would say, the most efficient in many ways, whether it's signal integrity, which is how we are getting to linear-drive optics, or power efficiency, lower power matters to everyone, or higher quality. And then running a software stack that is very unique and differentiated from all the legacy stacks out there, including the way we keep all of our state in our database, inside our software and memory.
As a result, you know, small bugs, whether it's a memory leak or a small crash of an agent, don't bring down your network. You just have a small process restart, and the system continues to forward packets as if nothing happened. Initially, our competition pooh-poohed us. You know, like, "Hey, this is a new kid on the block, and this is not going to succeed." But the Cloud Titans, as we call them, not only embraced it, they partnered with us. We evolved that architecture over several generations to a point today where we do a lot of co-development with our biggest customers. It's a very unique situation. You know, typically, you have a vendor-customer relationship. We don't have that.
We have an engineering partner-customer relationship, and quite often we are telling the customer what their roadmap should be, not getting some RFP and getting surprised by it and so on. We've outexecuted our competition clearly in all of these areas and built on that. That was on the cloud side. We did the same approach to the enterprise, but the enterprise needs a little bit more help on the stack, especially with respect to deployment and automation. That's where we built our software suite called CloudVision, which runs on EOS, which is our operating system on the switches. CloudVision runs independently to manage and automate your entire network, and now CloudVision can run both on-prem or as a managed service in the cloud.
As a result, we can cater to many, many different types of solutions, which has allowed us to expand into different verticals, different geos, different parts of the network, including now campus. That's really what the story has been for us for the last 15 years or so.
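As a concrete illustration of the state-separation idea described above, here is a minimal Python sketch, hypothetical and greatly simplified relative to EOS and its state database: each agent keeps its durable state in a central store rather than only in process memory, so a crashed agent can simply be restarted and resume from that state while packets keep forwarding from the already-programmed hardware tables. The class and method names here are invented for illustration.

```python
# Hypothetical sketch, not EOS code: state lives in a central store,
# so an agent restart recovers state instead of resetting the switch.

class StateStore:
    """Stand-in for a central state database shared by all agents."""
    def __init__(self):
        self._tables = {}

    def write(self, table, key, value):
        self._tables.setdefault(table, {})[key] = value

    def read(self, table):
        return dict(self._tables.get(table, {}))

class RoutingAgent:
    """A process whose only durable state lives in the store."""
    def __init__(self, store):
        self.store = store
        self.routes = store.read("routes")       # recover state on (re)start

    def learn_route(self, prefix, next_hop):
        self.routes[prefix] = next_hop
        self.store.write("routes", prefix, next_hop)

store = StateStore()
agent = RoutingAgent(store)
agent.learn_route("10.0.0.0/24", "leaf1")

# Simulate a crash: the process dies, but the store survives, and the
# dataplane keeps forwarding from hardware tables in the meantime.
del agent
agent = RoutingAgent(store)                      # supervisor restarts the agent
assert agent.routes == {"10.0.0.0/24": "leaf1"}  # state recovered, no outage
```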
Great. So that's a great place to start. So maybe we could start with the Titans. So obviously, the Titans have been a critical part of the business. I think in 2022, it was disclosed to be around 43% of revenue. This year, it's probably around 40% of revenue, so you've grown exceptionally strongly with those partners. How do you think about... You mentioned co-engineering and sharing the roadmap and helping them kind of understand what they need going forward. How has that relationship evolved today? And since you mentioned AI, with regards to their AI roadmaps, like, how are you involved in what Microsoft is doing, and Meta, and others within that vertical, in terms of thinking about the next couple of years, or even five years, for that matter?
Yeah. We are in a very privileged position in partnering with these customers. You know, I was in a meeting recently with one of our Titan customers, along with Andy Bechtolsheim, our founder and chairman. After the meeting, we were talking about it, and quite often we like to talk about what the future could be like. This was one of those meetings where we felt we defined the future. That's what the world will be doing five years from now. That's how clusters will be built, that's how power will be delivered, that's how the fiber plant will be structured. You're talking about 2027 architecture. We do that quite often. Now, after that meeting, the customer's view was that this was the best meeting they'd had in the last 12 months.
Now, this is a networking team, and they've been circling some really tough questions on what happens in the future as you get to 200 Gbps SerDes, as the cluster size increases. How do you change connectivity? What about the latency? What about the different cables out there and skew, the skewing of data across different cable lengths, and so on? All the way to automation, monitoring, security, deep buffers versus shallow buffers, low latency, helping the application stack get there faster and use the GPUs a lot more efficiently. We're able to do that with pretty much all of our customers, all of the hyperscalers, the Titans. And as a result, we have this trust with the customer. Very open relationship. We understand that they want to be multi-vendor.
There's no point in trying to lock them in, because once you do that, they work really hard to unlock themselves and go somewhere else a few years later. We are enjoying this growth with the Titans so far, and I think for many years to come.
Does that roadmap visibility, or that co-engineering visibility, change with AI versus maybe traditional legacy workloads, where, again, you had strong product vision, you know, EOS, CloudVision, merchant silicon helped drive sort of the direction? But given the complexity, whether it's power consumption, whether it's structuring the nodes, has that visibility changed with AI, in terms of, not order visibility, but roadmap visibility? What I mean by that: do you have a better sense today of what the next five years look like than you would have had, five years ago, of the subsequent five years?
I think to some extent, what's happening is the focus on the future is a lot greater given the investment and the criticality of these AI clusters to the business. So customers are engaging. In the past, it used to be roughly a three-year roadmap vision. Now it's becoming five years. Not necessarily because we know the future that easily, but because the physical build-outs, a 100 MW building with liquid cooling is far more complex-
Right
... today to think about versus going from a 10 MW building to a 30 MW building eight years ago. So just the nature of the problem and the complexity is making our customers think harder, and making us think harder as well. And as I mentioned earlier, a lot of these discussions result in us shaping the roadmap for our suppliers as well, which is critical. And, you know, we've been in this position for many years, but now I feel that the pace of innovation has actually picked up. There's so much happening in AI, and it changes so quickly, that on one hand, you're thinking about a five-year plan, and on the other hand, you're not sure whether the next six months are going to work out as you thought or not.
Got it. So maybe just to clarify how you think about AI for Arista, and we were having this conversation earlier, Cisco has, I think, a slightly different view of their AI business. Their view is, you know, if it's silicon, if it's optics, if, you know, they upgrade the DCI because there's more data traffic going through because of an AI workload, that in their mind is sort of AI. But I think you and Jayshree and the rest of the team have a much stricter, more stringent definition. Can you kind of walk through how you're defining it? Is it just the back end, you know, the back-end part of the network-
Yeah
... that's AI today? And does, how does that expand for you over time?
David, I believe this is very much in the context of the $750 million goal we gave as well.
Correct. Correct. Right, within your goal.
For 2025. Now, look, we participate with every major cloud customer and Tier 2 customer out there. So if there's a large AI build-out going on somewhere in the United States, there's a good chance we're already involved with that customer in one way or the other. If you start counting everything as AI, there's nothing else left. So of course, 100% of our revenue is AI, if you count it that way. But quite often, when we ship a product, whether it's a top-of-rack switch or a deep-buffer 7800 spine, it's not clear to us when we ship the product: is this gonna get deployed as an AI cluster, or as a backbone, or a DCI network, or a Tier 2 spine, or a WAN use case?
In some cases, we can find out by talking to the customer, but it's not easy to account for it in the system. So the $750 million goal, that's purely back-end cluster networking for AI. We try our very best to track it as well as we can. For 2025, we feel really good about that number and our ability to track it. Over the long term, is it going to be easier to track? I don't know. We'll find out as generations of products change.
Got it.
But for the next two years or so, three years, it seemed like the right thing to do. We also want to set the right expectation, because of where we are in the journey with AI and Ethernet, and where 800 Gbps especially is. You know, we are right at the cusp of a product transition and a speed transition for our customers. This time, the speed transition is not coming from DCI or compute or storage, it's coming from AI. We know that part of the market really wants to switch to 800 Gbps as quickly as possible. So it is a little bit easier to track as well. But our numbers are purely back-end networking, which is our switches with any relevant software, but no optics, nothing else added on top.
Right. And presumably, what you're shipping right now that's AI-related is all training-related? Or is there a sense that there are inference use cases that, you know, maybe show up in revenue in late 2025? Just how do we think about maybe bifurcating the market in terms of training versus inference, and what your customers are using the equipment for?
Today, most of our AI deployments are with the large Cloud Titans.
Right.
The large Cloud Titans haven't yet reached a point where they have discrete training clusters versus inference clusters. While some of them are just talking about it or just starting to do a little bit of that, most of the large clusters today, based on the jobs they want to run, can be used for training or inference. So there are times where they take a very large cluster of 4,000, 8,000, 16,000 GPUs and run it for training on one model for three to four weeks. They can use the same cluster for inference, and the job scheduler will automatically just carve out mini clusters of 256 GPUs, running smaller training or inference jobs for a few hours, and so on. But these are not discrete build-outs so far. Does that happen in the future? There's a lot of talk about it.
Maybe in two to three years. I'm not sure how quickly that will happen, especially with the Titans.
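To illustrate the sharing pattern Sadana describes, here is a toy greedy scheduler in Python. It is a sketch under invented names and numbers, not any Titan's real job scheduler: the same physical GPU pool can run one multi-week training job or be carved into 256-GPU mini clusters, which is why training and inference build-outs aren't discrete today.

```python
# Toy sketch (hypothetical scheduler): one GPU pool, two usage modes.

def schedule(total_gpus: int, jobs: list) -> list:
    """Greedily pack jobs into the pool, biggest GPU request first."""
    free = total_gpus
    placed = []
    for job in sorted(jobs, key=lambda j: -j["gpus"]):
        if job["gpus"] <= free:
            placed.append(job)
            free -= job["gpus"]
    return placed

pool = 16_384

# Mode 1: the whole pool runs a single training job for weeks.
print(schedule(pool, [{"name": "train-llm", "gpus": 16_384, "weeks": 4}]))

# Mode 2: the same pool is carved into 256-GPU mini clusters that each
# run for a few hours -- no separate physical build-out needed.
minis = [{"name": f"job-{i}", "gpus": 256, "hours": 3} for i in range(64)]
print(len(schedule(pool, minis)))  # -> 64 mini clusters fit
```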
Got it. So does that mean, economically, that's a different sort of business model for you, in the sense that maybe there's an opportunity to put more of your switches and equipment closer to the edges of the network, outside of the hyperscalers, as training becomes less of the total mix and inference becomes a bigger part of the overall mix, and you can perform inference in smaller clusters, further away from the data center, closer to the edge of the network? Does that broaden the market opportunity for you from a, quote-unquote, "AI perspective"?
Yeah. The question had a very strong assumption in there, and I want to call it out: that inference will happen at the edge. I think that question is still to be answered. I just honestly don't know the answer. It could happen in the cloud, it could happen on the edge of the cloud, or it could happen on the edge of the enterprise as well. A lot of this also comes down to licensing of training models, who owns the data, and issues related to data privacy. There are certain industries, like healthcare and medical, where, just because of laws, it may be hard to put all the data in the cloud. There are many other industries where it may be easy.
I think the cloud will be more efficient at it than trying to do it on a discrete two-rack or four-rack cluster on the enterprise edge. But having said that, I think, number one, every non-NVIDIA GPU that I'm aware of, including the accelerators some of our customers are building on their own, or what our competition is about to present to the market, is pretty much all Ethernet. And many of them are conceding that NVIDIA is fine at training, but saying all of these other processors will be good at inference. If that works out, that's really good for Arista, too.
Right.
Because wherever they are, they need Ethernet switches. Inference also needs networking, and we have a really good shot at that, too.
So can I come back to that assumption that you just called out? So, you know, a lot of companies are talking about bespoke models that are unique to their own datasets, where maybe they don't want to keep them in the public cloud for governance reasons, privacy reasons, and they want to have maybe that inference closer to the end customer or the end use case. So it doesn't sound like you're convinced that's a longer-term sort of driver of AI, either use cases and/or spend. You think companies, healthcare companies or other companies that have, you know, privacy-focused datasets, are gonna continue to work within the large Titan or hyperscaler community at this point?
Well, I'm not doubting at all that inference is a massive use case coming to us. It's going to happen. AI is going to turn every industry upside down. The question is, why would the cloud let go of inference? They can do bundling, they can be smart about pricing, they can do discrete build-outs. You know, the cloud customers have done build-outs for different governments of the world, where it's a private build-out just for that-
Right
... one entity. No one else has access to it. So why can't they repeat some of these models for other use cases as well? Or improve their edge, too. There was a battle between certain service providers and certain cloud companies, in marketing speak on edge computing, a few years ago. Some SPs came out and said: "You know, come to us, because we can offer you one-millisecond round-trip time to any 5G base station." And one cloud company was at a conference, I won't name them, but they're very popular, and they said: "Come to us, we can give you 700 metro PoPs all around the world with one-millisecond round-trip time." Five years later, I think we know who won.
Right.
So I think a lot will change, which is why this whole model, that training will be done by a few companies, you license the model, go on-prem, run your inference engine there, assumes a static world. The world will change fast. There'll be more competition, there'll be more services offered by the cloud companies, there'll be more-
Okay
... services offered by startups in the enterprise trying to succeed, and I don't see the future as-
Got it. Okay. Because we hear often from enterprise customers, you know, data storage, ingress, and egress fees are a pretty considerable consideration. So being beholden, or trapped, for lack of a better phrase, within a hyperscaler, to get your data out, to put it back, to train it, to run inference, is pretty expensive. So, you know, obviously, enterprise doesn't have sort of the unlimited budget that the hyperscalers have. So that's why, you know, there is some thought that maybe you could be a little bit more cost-centric if you are focused on smaller clusters, more bespoke models, at the edge of networks.
I think it comes down to the enterprise stacks being really savvy, the operators being really savvy. If they can truly take advantage of that, it will work.
Right.
It's not that I'm convinced that cloud will win. I'm just not sure which direction it will go.
Got it.
Because if the issue is data in and out is too expensive, cloud will just reduce those costs, those prices, and then what? There will be competition that will just keep on evolving-
Got it
... in this manner.
So when you think about sort of the use cases for AI, how are you thinking about how it affects sort of legacy workloads and demand? Whether it's... I don't know if you want to define it as a legacy switch, one that's not AI-centric, which I know is pretty difficult to-
Sure
... draw that line in the sand. What's not AI? What is AI? But is there any way to think about what the workload spend on legacy applications looks like versus AI? Is this completely additive? Is there a portion of the spend that's somewhat cannibalistic in your mind? And how do we think about, you know, where the priorities are today? Clearly, it's AI-centric today, but do we get to an equilibrium where it's a little bit more balanced in terms of capital allocation priorities?
Yeah... You know, our founder and chairman, Andy, in one of our customer meetings just two years ago, told a customer, "You know, this is what people used to do with legacy 100 Gbps, but for 400 Gbps, this is what we are shipping."
Right.
And I would tell him, "Andy, the customer is still buying it, don't call it legacy." So, same comment here: we call it classic compute.
Okay.
No disrespect to Intel and AMD, they're innovating as well on the x86 side. But the recent three or four quarters' worth of trend has truly changed the CapEx model, and customers are spending every penny they have on buying GPUs and connecting them and powering them. They don't have any CapEx dollars left for the rest. But can we maintain the status quo for the long term? I don't think so. A couple of reasons. Number one, CPUs for classic workloads, for VMs and so on, are going to be far cheaper than buying expensive GPUs. GPUs are great for matrix calculations or mathematical functions, but not for everything else you're running a standard application for. Enterprises will keep moving to the cloud.
Cloud companies often build ahead, competing against each other, but at some point they run out of capacity if they're only spending on GPUs. At some point, they'll come back. They don't want to lose all that business either. But enterprise is also spending more on AI, so they have fewer dollars to move to the cloud right now. I think over time, that will smooth out just a little bit.
Okay.
Not as harsh as it's been. But the classic cluster of compute, storage, top-of-rack, fine. Right now, there is less investment going on there and a lot more in AI. Net net, I think we're de-risked: whichever side wins, we'll do well. So I don't think it changes any material outcome for us. Maybe AI is actually more dollars, just given the bandwidth intensity that's needed-
Right
... and is good for us. But even if customers came back, we'll be okay.
Yeah, I mean, I think, you know, we look at companies like yourself that are, you know, in a position of having a much stronger foothold with the hyperscalers than some of the, you know, legacy network companies that kind of missed some of the trends.
Calling them legacy is okay.
Sure, I'll call them legacy. But, you know, obviously, there's a reinvigoration effectively, right? And there's a lot of discussion that, you know, the largest broadly defined networking company has wins with three of the four hyperscalers. And I think you said publicly at your Analyst Day, you know, obviously, you guys welcome the competition, and you'd expect to remain sort of competitively successful. Do you think there are other entrants? Like, how does white box play into this AI strategy? Obviously, they were a big player in the prior cycle. Given the complexity, how does that play into, you know, what hyperscalers, or even enterprises, are doing within AI today?
Yeah. So we touched on this a little bit at the Analyst Day as well. You know, the companies that everyone associates the most with white boxes also happen to be our largest customers. If they were just using white boxes, they wouldn't be customers. We partner with them very, very well. And for the last decade or so, the industry has largely been at a status quo. You know, Amazon and Google started building their own switches 15, 20 years ago for various reasons. Long discussion, we can have that later. But when Meta had to make that decision around 2013-2015, they decided: let's build, because they want the learning as well, but also buy from a good partner.
We partnered really well with them, and have done multiple generations of products that are co-developed with them to the same spec, and I think they found a really good match over there. The cadence of networking products has roughly been one new generation every three to four years for the last 15 years. Now, with AI, the world is moving faster, and with 100 Gbps SerDes and 200 Gbps SerDes coming soon, and the chip and the power, the signal integrity, the linear-drive optics, the software stack, the tuning of load balancing and congestion control, RDMA, UEC specs being added on top, things are actually getting far more complex very quickly. In the next 24 months, there'll be more products introduced into the market than what has been introduced in the previous four years.
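On the load-balancing point: the sketch below is a toy Python illustration, not Arista's implementation, of why AI fabrics need tuning beyond classic per-flow ECMP. Training traffic tends to be a small number of long-lived, high-bandwidth flows, and hashing each flow onto one of N uplinks can leave some links carrying several elephants while others sit idle.

```python
# Toy illustration of per-flow ECMP collisions with a few large flows.
import random
from collections import Counter

UPLINKS = 8

def ecmp_link(flow_id: int) -> int:
    """Per-flow hash: every packet of a flow is pinned to one uplink."""
    return hash(flow_id) % UPLINKS

random.seed(0)
flows = [random.getrandbits(32) for _ in range(8)]  # 8 elephant flows
print(dict(Counter(ecmp_link(f) for f in flows)))
# Typically some uplinks carry 2-3 flows while others carry none.
# With thousands of small flows the same hash spreads load almost evenly,
# which is why ECMP sufficed for classic workloads; AI fabrics add smarter
# schemes (dynamic, flowlet, or packet-level load balancing) on top.
```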
As you very well know from all the layoff news, none of the cloud companies are increasing their headcount right now. They're also limited in resources, and it's an opportunity cost. Do they invest in building more of their own, or do they partner with someone and invest their resources maybe in an AI application that will get them a lot more revenue, or in security for public cloud, and so on? Not only have we found a balance, but we are at a place where the cloud companies want to depend more on us, not less. At the same time, they do have some religion on this topic. I don't expect white boxes to go away completely. I think the market will mostly maintain the status quo.
If anything, it will swing just a little bit in favor of companies like us, that are good at developing with these companies, rather than the other way around, and I think we just stay there.
Got it. So can we maybe move down a step and touch on Tier 2 cloud, right? We always talk about the hyperscalers. There's been, in your definition, some re-segmentation of hyperscalers. I think Oracle, OCI, has been sort of called out based on their server count. What are Tier 2 cloud players doing today, and what does the opportunity look like for you there with regards to their investment in AI? And is the landscape any different with, you know, competitors, whether it's large networking companies or white box? 'Cause, you know, we hear about Microsoft CapEx continuing to go up. Meta, maybe not so much. But just maybe help us understand how you would define what's happening within the Tier 2 cloud ecosystem.
Yeah. So, Oracle used to be in our Tier 2 cloud segment, but as you said, based on the number of servers and the size they're at now, it felt right to upgrade them to the Cloud Titan category. But other Tier 2 clouds are mostly serving their own space. You know, it might be a software-hosting company that caters to millions of enterprise customers who come to their cloud for their software services, or their software stack as SaaS, and we do really well in those as well. A lot of the Tier 2 cloud is also evolving to offer AI services, especially because sometimes these days, even Tier 1 cloud has no capacity to take on other customers. Some of the cloud companies are saying, you know, take a step back: when EC2 came to the market, you could rent a computer by the hour.
Today, not every cloud is letting you rent a GPU by the hour. Their opportunity cost is just too high. You have to sign a multi-year contract if you want a GPU cluster, and just use it for multiple years yourself. The Tier 2 cloud is finding an opportunity in that ecosystem, saying, "Hey, you know what? There's some open space here. Let me offer my services, too." And on top of that, some of the AI startups that are offering their own cloud services are building on their own as well. And so we're finding a very good match and opportunity there. But just to set expectations, that's a smaller segment than the Titans. The Titans are way bigger.
Right.
But we do well in this space.
Do they have enough capacity or availability of GPUs to really meet that spillover demand, that excess demand, right? So if I think about what NVIDIA is shipping, I would imagine the top five or six companies account for 80%, 85%, 90% of GPU capacity today. So I'm just trying to get a sense for how you're seeing that play out.
Yeah. So some of these companies also have either their own processors or non-NVIDIA GPUs, and offer other services that they can within that spectrum. I think that's actually doing okay for us as well. But just like the previous comments on Tier 2 from a few years ago: Tier 2 cloud is just like the Cloud Titans, just smaller. They're typically ex-Google, ex-Microsoft, ex-Facebook people at these companies. They've already been customers. They like working with us. They like automation. They don't like a legacy stack. They do things exactly the way a bigger company does, just at a smaller scale, and we do fairly well. I think that will continue to stay strong as well.
Got it. So with the time that we have left, I wanted to maybe just touch on enterprise, since it's been a key driver of the business the last couple of years. You know, you've taken your software, your hardware stack, and just kind of replicated the success in the hyperscaler community within enterprise. You've taken a lot of share. How do you define sort of the opportunity today? I mean, you've been growing high-20s%, 30% in the enterprise. The market doesn't grow anywhere close to that. So, you know, we get pushback from a lot of investors saying, "Look, you've picked the low-hanging fruit, where people know Arista, EOS, CloudVision, they know the hardware." How do we think about, you know, maybe across a cycle, what the enterprise looks like for you, putting aside campus for a second?
You know, when we were just getting started, one of our competitors was Force10, and Force10 never attacked the big customers. They went to small HPC shops, they went to universities, they went to customers I'd never heard of, before they even approached a Fortune 500 customer. That is what I call low-hanging fruit. What we've done is the opposite. We've gone after the hardest, toughest customers first and won them over from the competition. These sales cycles have taken five to 10 years. Now, the next round is actually a bit easier, but these customers are not as big either, so it's a longer tail of enterprise. But we've seen customers come to us saying, "Hey, Arista, we've only heard good things about you. We are fed up with some legacy stack we have. It's causing outages, or we have subscription-related challenges.
We just want to come over." We are winning over there. So I think enterprise will just continue growing. We're gaining share. We're nowhere near as penetrated as we are, let's say, in the Titans.
Right.
So a long way to go. But that's on the data center side. We're also growing in enterprise campus. In enterprise campus, we're getting started from very small numbers, and our CloudVision, EOS, our switches, our Wi-Fi fit really well for these customers' needs as well. But these customers have a slow rollout, typically seven years to refresh and so on. So it'll be a long tail, but it just keeps on growing. That's why we feel pretty good about our enterprise space. Remember, data center networking plus campus networking added together is a $50 billion TAM. This year, we're doing just over $5.5 billion in revenue. We have a long way to go.
No, yeah, I get it. But I look at campus, you know, and what other companies have tried to do versus Cisco, and yes, Cisco has been a share donor over time, but, you know, getting more than 2%, 3%, 4% market share has proven to be very difficult for competitors over, you know, decades. So obviously, you've been very successful going from zero to your $750 million target, which you reaffirmed a couple weeks ago. You know, do you need to invest more in the channel? Whether it's, you know... I know you're not going to be like Cisco, but, you know, where do you need to get to from a channel perspective to really have this business be, like, a multi-billion-dollar business for you?
You know, the Global 2000, Fortune 500, maybe even Fortune 1000 customers, we can address with a direct sales force. The fulfillment is through the channel, but we address and sell through our direct sales force. For the rest of the market, the mid-market, we absolutely are more dependent on the channel as well. We're doing more with the channel internationally, and even in the US, I would say the smaller regional partners have become really good channel partners for us. The bigger channel partners are often dependent on the rebate dollars and so on from the bigger companies. They would need to see enough pull from the market, from customers, before they will pivot. I think we are starting to get there.
Got it.
We feel good about our opportunity there, too.
In the limited time that we have left, let me just ask you: is there anything we didn't cover that you think maybe is misunderstood by the market or the Street at this point? I think your story's been pretty well discussed the last couple of months, with AI as sort of the winner here, at least that's what the market's indicating. But I just want to give you an opportunity to maybe touch on anything that maybe is not fully understood at this point.
No, I think we've covered it all between the earnings call, the analyst day, and our discussion today.
Got it. Great. So I think we'll just end it there. Thank you, Anshul. Thank you, everyone, and have a great day.
Bye.