I've known John McCool for many, many years. He's now at Arista. He was at Cisco. We used to talk about the 6500, Catalyst 6500, when my hair was less salt and pepper. Now, the good thing about John is we can go deep into technology, and we can speak about basically what drives the market, et cetera. Like always, I have a list of questions. We're going to make it interactive. If you have a question, please raise your hand, and we have a microphone, and I'll just switch the microphone to you, because I'm sure that you have as many questions as I have. I want to start from the end, John. First of all, welcome.
Thank you.
Thank you very much.
Thanks for having me.
I want to start from the end. NVIDIA drops a good bomb and says that they're growing so much in AI and generative AI. The question is: as much as you can see, what's the role of Ethernet in generative AI, and what's the role of InfiniBand? Meaning, is this an opportunity for the Ethernet switch guys like yourself and others, or is it just going to be kind of interconnected with InfiniBand, and there is less of a need for Ethernet switches?
Sure. Let me kind of talk about that $11 billion number...
Yes.
Customers are, first of all, in early phases of deploying the technology, investing in applications for AI and trying to figure out how they're monetizing it. I think it's important, despite that number, to think about where we are in this cycle. It's very early, very exciting. The other aspect we see is that because GPUs are so expensive, people really want to make sure each GPU is being utilized, not stalling, and that's driving interest in the highest speed grade of Ethernet. The second piece is that they're very interested in the non-blocking nature of our VOQ (virtual output queuing) technology in our higher-end platforms, like the 7800. All that's good stuff. When you get into how many of those GPUs are connected together with NVLink, which is NVIDIA's proprietary technology, there's an industry variant called CXL.
You can build small clusters of GPUs and CPUs connected to memory through that. You can build a bigger cluster with InfiniBand or Ethernet, and then at some point, you're going to have to connect that cluster to Ethernet and drive bandwidth in and out of it. This whole question of InfiniBand versus Ethernet is around: okay, if I'm moving from NVLink to a cluster, how big is that cluster? We view that InfiniBand will take it a certain way, but at some point, as those clusters grow larger and you're interested in capabilities like multi-tenancy, where you want to share that NVIDIA cluster with multiple tenants, and that might even be multiple properties within your Cloud, the inherent segmentation technology in Ethernet becomes more important. There's some breakpoint in there of cluster size, InfiniBand versus Ethernet, that we're still trying to figure out.
When we talk about generative AI, we speak about a training infrastructure and an inferencing infrastructure. Is there a difference in the use of Ethernet for training versus inferencing? Meaning, you know, right now, everything is in the center of the Cloud. It's relatively small. We're playing with it. We're writing songs to our loved ones. We're not using it yet for applications. Once we get past that and find applications to drive generative AI, how do you view, first of all, from an architectural point of view, the inferencing portion, and then how do you view your Ethernet participation in the inferencing portion of generative AI? Is there a difference or not?
I'm not sure that I've seen a difference personally, but it could be at this point. Getting things in and out of that cluster is going to be extremely important.
Got it.
The speed at which you can do that, and then the latency of the interaction, becomes critical.
Is Ethernet giving something that InfiniBand cannot give? Meaning, is there any advantage of Ethernet switches, that you can accomplish something that you cannot accomplish with InfiniBand?
Absolutely. The scale. What we hear from customers is also the familiarity with Ethernet and the consistency of operation between clusters for AI, general-purpose computing, et cetera. That universal aspect. The commercial aspect is very important too: it's a competitive market in Ethernet, with multiple suppliers, while InfiniBand is a captive technology at this point and a single-source solution.
Got it.
Those things come into play.
Got it. I have a general question, and then I'm going to go back to basically, more Arista. The general question is, whenever we think about dependency on a single vendor, the market over the last 25 years went from a Cisco-centric market into a lot of, you know, new players like yourself and Juniper, and everyone is addressing different sides of the market. The dependency moved to a single source for the semiconductor, right? How is that going to change?
They're not. Right. I think semiconductors are hard, and they've gotten harder as you've gone down technology nodes from 16 nm to 7 nm, and now people are talking about 5 nm and 3 nm. The silicon processes are more expensive. Networking's still a very small piece of the spend at the silicon level, right? People are investing in GPUs and CPUs at much higher scale. You want to leverage those technologies from those other markets to build your networking silicon.
Right.
That's where we've really benefited as a company by going to a merchant silicon approach. Many of us built silicon for networking in prior lives. It just got to a point where you had to amortize substrate costs, HBM memories that are on chip, multi-chip modules, and be able to leverage that. It's difficult and expensive, and if you don't find a home in a large Cloud provider, it's very difficult to keep up with the expense.
Got it. Last year, Cloud Titans grew very, very fast for you, 128%. We've seen a slowdown already. I mean, we can talk about a slowdown of orders, or a slowdown in how companies are drawing from backlog. At the end of the day, we've even seen public statements from Cloud companies saying they're gonna slow down some investments. This morning, Ciena was very explicit and said that, from the contracts and what they're seeing at the optical layer, Cloud providers are slowing down deployments. They're pushing out deployments. Things that were supposed to be deployed now are being deployed in the future.
What is your outlook for Cloud Titans at two levels? Number one is the actual need, the deployment. Forget how they order; they might order a little bit every quarter, or they might give you a three-year contract. In terms of deployment, what's your outlook? Are you seeing a slowdown, or potential for a slowdown, in actually deploying products? The second thing is the other side, the ordering factor.
Understood.
How will that work?
Well, I mean, it was a phenomenal year for Cloud in 2022. We're still saying that Cloud is gonna be a significant mix for us in 2023. When we got on the call, just looking at the customer engagements we have and the projects we're involved in, we reconfirmed consensus revenue growth of 26%. Looking out to 2024, as lead times have shrunk, that visibility has gotten shorter as well, commensurate with the lead time. We'll see what happens. Yeah, I think there is a lot of excitement around AI, and we'll see how that picks up. We've been involved in a lot of those use cases so far, and we'll see how it goes.
Got it. Is there a risk that Cloud Titans stop this? When I say stop, I take it to extreme just for the sake of asking you a question, but it could be slowed down. Stop investing in their network for a while because they just have enough capacity?
I think we've seen a long-term trend of consistent growth of those backbones and networks to keep up with demand. You know, are they cyclical? Is there some nature to that? We don't know. I mean, we've seen maybe one cycle in our entire lifetime as a company, but there are probably some aspects of that.
Got it. Okay. We spoke about AI at the beginning. Whenever we talk to the semiconductor guys, it's one picture, but when we talk to the switching guys, they kind of temper our expectations and say, "It's not a big contributor this year, maybe in the future."
Right. Right.
How does the evolution play out? Think about the next three years. Without a timetable, what do you think the steps of AI's contribution are gonna be? How will Arista participate over the years, and what needs to happen for you to participate big time in AI?
Sure. I think, you know, people are moving from trials to these early deployments. We've certainly, from a software stack perspective, made some enhancements, if you will, to make AI work better, like we've done with other workloads coming over to Ethernet. In terms of a tipping point: when you start seeing announcements from end customers on how they're monetizing AI and how that's impacting new business, they'll be looking at wider deployments, and at the efficiency and scale of those deployments relative to their ROI. That's when I think you'll start to see this Ethernet momentum around AI pick up.
Got it. What kind of applications are gonna drive AI? Again, today, I'm using it. Every birthday of my loved ones, I'm using it.
I wish I were smart enough to answer that question. That really is the great question: how it's gonna be deployed and what it's gonna disrupt, right?
Right.
The things we've heard so far are just kind of amazing, anecdotally, but I don't know.
Got it. Okay. On the conference call, you spoke about reduced visibility from Cloud Titans. You touched on it a little bit. What are the components of reduced visibility? What does it mean, reduced visibility?
Sure. If I go back pre-COVID, we talked about visibility of about six months from a procurement standpoint, maybe longer from an architectural and product investment time frame. As lead times started to extend, the Cloud folks in particular, who were buying chips and processors and saw this earlier, extended their planning horizons way beyond what they used to because of the issues with supply chain. Enterprise took a little longer to realize what was happening, or maybe even to admit that they were going from very short lead times to these extended lead times. As lead times increased, visibility grew commensurate with those lead time increases.
Yeah.
Now, as the supply chain's gotten more predictable, let's say, I don't want to say normal, but more predictable, and those lead times start to condense, visibility goes down with it.
Got it. Now, you've given guidance for the year. First half 2023 versus first half 2022, you're gonna grow 41% per guidance. What's left for the second half is about 14%. We're going from 41% to 14%, and a lot of that is tough comps. What is the risk, and I'm not looking for a number, I'm looking for a qualitative discussion, that you go 41% first half, 14% second half, negative first half 2024? Meaning, could there be a scenario where we see a reduction of orders, and a reduction of deployments, not just orders, as the backlog is drawn down, and 2024 is actually a negative year instead of a growth year?
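The interviewer's 41%-to-14% arithmetic can be sanity-checked with a quick back-of-envelope calculation. The prior-year half-by-half revenue split used below is an assumption, purely illustrative, so the implied second-half growth is a sketch rather than a statement about Arista's actual figures:

```python
def implied_h2_growth(h1_prev: float, h2_prev: float,
                      h1_growth: float, fy_growth: float) -> float:
    """Back out the second-half growth implied by full-year guidance,
    given the prior year's half-by-half revenue split."""
    fy_target = (h1_prev + h2_prev) * (1 + fy_growth)
    h2_revenue = fy_target - h1_prev * (1 + h1_growth)
    return h2_revenue / h2_prev - 1

# 41% first-half growth, 26% full-year growth, per the discussion above.
print(implied_h2_growth(1.0, 1.0, 0.41, 0.26))  # even prior-year split -> ~0.11
print(implied_h2_growth(0.8, 1.0, 0.41, 0.26))  # back-half-weighted -> ~0.14
```

With an even prior-year split the implied second half is about 11%; a prior year weighted toward the second half pushes it toward the roughly 14% cited, which is why the tough-comps question matters.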
Right. We haven't talked about 2024. I'm glad you're not looking for a number.
Yeah.
You know, I think that we still see a lot of interest from Cloud customers, and we also have an enterprise business.
It's also not a binary, then again.
That's where we've seen growth, and, you know, we're optimistic on our growth projections as a company.
Got it.
Yeah.
Okay. I'm not gonna get an answer from you.
You're absolutely not gonna get an answer.
I've been working for a long time to do that. Okay. Your title is Chief Platform Officer. What does it mean?
I have two things: hardware development at Arista, as well as manufacturing and supply chain.
Got it.
I've spent a lot of time in the last couple of years on that latter part of my...
You're the one to blame for the supply constraints?
Yes.
I think we've already spoken about the Cloud Titans, so there's no need to go back to it. I wanna talk about service providers, I wanna talk about enterprise, and I wanna talk about second-tier Cloud.
Okay.
Let's start with service providers. How do you participate in the service provider market? High-level question.
Sure, sure. We participate through both switching and routing, conventional products. We've done really well, I think, in the data center portion of those service providers. With routing, we saw some very early wins in service provider that, looking back, we'd categorize as greenfield opportunities: not necessarily needing the legacy features and detailed MPLS functionality, kind of looking forward. In fact, I think the limiter was probably people not moving to those architectures as quickly as we expected, right?
Yeah.
We've seen more change in that market, which is good. We've also really built out our routing stack with more detailed functionality, so we're highly engaged in that market. Still, it's moving slower for us than we would like or had hoped, but we continue to focus on it.
Right. Early on, the functionality of your router was more limited, meaning you went after certain opportunities. You didn't go after the entire routing market. How do you envision yourself five years down the road?
I think that the architectures in those service provider stacks have to change to more modern Cloud-like architectures.
Got it.
So, rather than focus on, you know, being the 19th vendor, if you will, to build out that legacy stack...
Yeah
We're kind of focused on that next generation architecture.
Got it. Where, just to keep on routing and going back, where is your routing today being deployed? Meaning outside of service providers, is there a demand for the router also in the other markets?
Absolutely. The reason we went into routing was really the Cloud providers. They got to a point where, building their data centers, they wanted one logical data center but couldn't actually fit it in one physical plant. They added another tier, we called it the Universal Spine, and effectively interconnected with multiple routers. The legacy approach was two high-powered routers, very expensive, interconnecting data centers for redundancy; we went to an N-way backbone routing between sites. That was our entry. We extended it to the enterprise.
The enterprise TAM for routing, for backbone, is much smaller than top-of-rack and distribution switches, but it's highly strategic. You have site-to-site recovery, disaster recovery. Maybe I'm a company that has multiple assets, multiple companies, and I'm just controlling the backbone; they want to interconnect them but segment them. That's been really important. Service provider is the third, which has been more about the legacy protocols and not moving as quickly as we'd like.
Switching to the enterprise market: first, what are the trends there? Did they go through the same kind of capacity constraints throughout 2021 and 2022, and are we now seeing the same kind of trends we're seeing in the Cloud, or is it more normalized growth without the big ups and downs?
It's kind of like the housing market, right? You have all these different enterprises, and they all make the decisions to buy or sell at different times. They don't aggregate on a technology cycle as pronounced as Cloud. As for general trends: they had kind of 40 Gb in the data center and 10 Gb to interconnect their wiring closets, and the modern technology is kind of 100 Gb and 25 Gb. Some of the access points were running on 1 Gb interconnects, and now with Wi-Fi 6 and Wi-Fi 7, you wanna upgrade those to 5 Gb ports. The other thing that's happening is more IoT infrastructure. If you're a hospital, you're connecting mission-critical equipment that isn't in the data center but out in the hospital itself. That needs to be secure; security is a top-of-mind issue in the campus today. That's all happening.
Got it. Okay. I'm looking at the time; we're good. I wanna talk about go-to-market before I go back into technology. The go-to-market to service providers is different, Cloud is different, and enterprise is different.
Right.
Do you feel that you have all the components needed to address all these opportunities? If not, where do you put your go-to-market focus? The question is at a higher level: I want to understand whether the company's focus is entirely technology, or whether go-to-market is also a focus for the company.
Absolutely, go-to-market's a focus.
Yeah.
I think sometimes when people look at campus-
Yes.
They think of that as a low end or mid-market.
Right.
Our focus is on the top Forbes Global 2000 or higher companies.
Right.
What networking needs do they have holistically? We started in that market just as a data center point player, then we added routing, and then we added campus, but that's the focus, right?
Got it.
As the portfolio has gotten broader, those companies view us as a credible alternative to the incumbent, as an enterprise networking company. Before, our sales team would have to wait for the data center refresh, and if they had just started their job and missed that opportunity, it was three years until the next refresh. Now they have a security opportunity, a campus opportunity, a routing opportunity. That's...
Got it.
That's where we're focused. It's one holistic focus on that market, and what can I sell into that?
I understand very clearly your value proposition to Cloud, and I understand very clearly your value proposition to service providers. For the enterprise, I'd ask you to articulate it at both levels, both the campus environment and the non-campus, data center environment. What value do you bring to someone like Bank of America, a big financial institution? And what value do you bring to smaller customers?
Yeah. In the Cloud, we think the value proposition is high-performance networking.
Yeah.
In fact, our origins in the Cloud were with customers who had very few people to operate a network with millions of users. There was an operational efficiency argument for Arista entering the Cloud, and it's the exact same thing in the enterprise. How can I deploy and be operationally efficient? How can I do upgrades? What's the cost of quality in my software? If I get a security alert and have to upgrade, can I upgrade quickly? All those aspects of ease of use and operations are what we sell into the enterprise.
Got it. And campus, is it the same?
Same. It's exactly the same. In fact, I can operate my campus network in the same manner as my data center and my routing infrastructure, using CloudVision and automation. The other thing we've done, maybe a little differently in the enterprise, is look at the architectures happening in the Cloud, how Cloud providers manage their infrastructure and the tools they develop on their own, and help enterprises down that road to automation. Most of them aren't ready to go to full automation like a Cloud provider, but there is a discrete number of steps they can take to get away from the command line interfaces that are pervasive in our industry toward a fully automated stack.
Focusing on campus, I want to understand: who are the customers you're going after? Are these your existing data center customers, where you give them streamlined operations with the campus? Or could it be greenfield, where you're not selling to the data center but you're trying to sell campus?
The plan was that our data center customers would be the first customers of our campus products, and some of that was true, but we were surprised that there were a lot of other prospects. Some of them we had been calling on for a data center opportunity, but they weren't ready.
Yeah.
They actually took us first in campus. As you deal with companies maybe outside of the financials...
Yep.
There are just more campus opportunities in a lot of these enterprises, because they've already outsourced a lot of their workloads to the Cloud.
Got it. I'm going to stop here for a second before I continue. Is there any question from the audience? No? Good, that gives me more time. 400 Gb as a driver: first of all, what's the target market? Who is deploying 400 Gb, and what kind of a driver is it?
Sure. I think the initial 400 Gb deployments we saw were data center interconnect, the top tier of the network where I'm aggregating all the bandwidth and interconnecting sites; that was a driver to 400 Gb. Now we see some AI: how do I connect these clusters and get performance into them? We've also seen use cases outside of the Cloud: financial verticals, media and entertainment.
Got it. Is it a big revenue driver, or is it... I mean, on one hand, 400 Gb is a driver because, in absolute terms, it's more dollars. On the other hand, if it replaces 100 Gb, then for every four 100 Gb ports, you sell one 400 Gb port for the price of about two and a half.
Right.
It could go both ways. Is it a revenue driver, or is it more of a technology driver? That's what I'm trying to understand.
You know, market analysts measure us all in ports and port share, right?
Yeah.
We tend to think about the platform generation. The silicon's gone, you know, to 25 Tb; you can use that silicon to connect a lot of 100 Gb ports, half as many 200 Gb ports, or a quarter as many 400 Gb ports. All our products have those variants today.
Right.
We're pretty agnostic on the port selection. We just would like you to buy a 7800.
Got it.
There's been more disparity in this cycle around the port types.
Right.
When people moved from 40 Gb to 100 Gb, it was one big click, because it wasn't as elegant technically to split the ports. Now we see customers make different choices based on legacy optics, what they have in their fiber plant, et cetera.
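The port arithmetic John describes, one ASIC generation fanned out as many slower ports or fewer faster ones, can be sketched in a few lines. The 25,600 Gb/s capacity below is an assumption (a round number in the ballpark of the "25 Tb" he mentions), not the spec of any particular Arista product:

```python
# One switch ASIC's total capacity, split across different port speeds.
ASIC_CAPACITY_GBPS = 25_600  # assumed ~25.6 Tb/s merchant silicon part

for port_speed_gbps in (100, 200, 400):
    port_count = ASIC_CAPACITY_GBPS // port_speed_gbps
    print(f"{port_speed_gbps}G -> {port_count} ports")
```

Same silicon either way: 256 ports of 100G, 128 of 200G, or 64 of 400G. If a 400G port sells for roughly the price of two and a half 100G ports, the same chip yields fewer revenue dollars as customers move up in speed, which is the interviewer's earlier point.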
Got it. There's something I don't understand in the market, which is market share. You know, Cisco is the legacy provider in switches. They've been losing share for many years, but they still have a respectable market share. Yet when I look at 400 Gb market share, you have 40% and they have 6%. What is driving 40% market share for you and only 6% for Cisco? Maybe the market share data isn't precise, we've seen before that it isn't, but the gap is so significant that there must be something to it, and I'm trying to understand it.
Arista has done well with high-performance networking.
Yes.
Kind of since inception, and I think the 100 Gb market really was an inflection point for us. We built on that momentum with merchant silicon; we think that approach was the right one. Add the quality of the software stack that goes with it and the ease of operational use, and we're just running that playbook and trying to execute on it as well as we can. We're very happy with the share results we've gotten.
Got it. Great. As Chief Platform Officer, what challenges do you see in the next three to five years? What do you focus on? Where do you put your money in terms of R&D and product evolution? Three years from now, when we look back and discuss it, what do you think will have changed in the market?
If I look up the hill in terms of performance and capability, it looks like it did four years ago. I mean, there is just more, faster. It's unbelievable.
Yeah.
That's gonna bring new technical challenges. You know, we've fought challenges around signal integrity; now it's about thermal design and how we can pack these things together and get performance. I also think during COVID we learned a lot about supply chain, how much we knew and how much we didn't know. As the world's gotten more complicated, that's an area where we have to continue to be prepared, both for risks and for new capabilities and challenges.
Got it. One topic that we hardly discuss anymore, and I think it's still important, is white box. There are some vendors that are doing more white boxing, some Cloud companies doing more white boxing, some less. What's your view on white box switching, white box routing as a threat?
Yeah, it's been pretty static. I mean, there are two large Cloud customers that drive a significant portion of that spend. Those two large customers are both multi-vendor, one of them having other OEMs.
Yeah
One having white box. You know, I think there hasn't been much change. I think the threat of white box going into more of the enterprise or even SP has diminished.
Right
As the software stack availability is limited, right?
Right.
The options there.
Right.
The large Cloud customers have their own software stacks and their own supply chain teams, so there's quite a bit of additional investment you have to make to make white box happen.
In the case of a white box customer, and we're not gonna mention names... Google. Forget Google, forget the name of the customer. In the case of a white box customer, how do you participate? I'm keeping it at a high level, without a customer name. I want to understand: if the customer is choosing white box for the leaf, or for the low end, does it mean Arista is not in the network at all? Can you still work in an environment where parts are white box?
Absolutely. I mean, it's interoperable, and there are some areas where maybe the software stack is different or they need different capabilities, so we have some opportunity in those white box environments, for sure.
Same question about AI. That's gonna be my last question because of time. I've mentioned names, but it doesn't matter, I'm not referring to specific names: Google is using its own GPU, Microsoft is using NVIDIA GPUs, different GPUs. Do you have different opportunities when the GPU is different?
I think the network is sufficiently abstracted from the GPU that it won't make much difference.
Got it.
You know, similar to what we saw with different CPUs over time.
Got it.
We've got to connect the cluster somehow.
Got it. Okay.
That's my viewpoint today.
Good. With that, I wanna thank you.
Thank you.
Thank you very much. Appreciate it.