Thanks, everybody, for joining us. We have the pleasure of having Ita and Liz from Arista with us. I will try to actually open it up to see if anyone has a question, but, as a lot of people know, I tend to ask a lot. But thanks for joining us, ladies. Really appreciate you guys making the trip out.
No worries.
Ita, congrats again on the early retirement announcement.
Thank you.
Looking forward to working with you for the next couple quarters, but before, you know, retiring, we're gonna have to pester you a lot on this little topic called AI that everybody wants to know about. You know, as we think about AI sizing here, a lot of people look at your supply chain. Broadcom talks about, you know, $200 million last year going to $800 million this year for their AI chips.
Obviously, there's a lag on the supply chain thing, but as we start thinking out over the next, you know, year or two, how should we think about that lag in the supply chain from your angle with those AI chips? And is there a way to think about how big AI could be in the next one to three years, kind of whatever time horizon you wanna have us think about?
Yeah. So it's pretty hard to bridge kind of from the Broadcom numbers. I mean, obviously, all of those chips don't come to us. You know, if you think back to our Analyst Day last November, which was kind of pre-ChatGPT, et cetera, we were talking about AI then, right? So the 7800 product set that we have, you know, is a good AI vehicle and is deployed in some AI use cases today. Not kind of the post-GPT backend discussions that we've been having more recently, but definitely there's some AI traffic that's traversing some of that equipment today. The problem is we don't really know what's an AI use case and what's not, right? Because that chassis has been deployed in DCI, and it's deployed in spine switch situations as well, right?
So that's one factor: it's not as simple as, you know, how do you count it? I don't know how Broadcom is counting it, honestly, in terms of where those go. So firstly, they don't all come to us. Secondly, we've got lots of use cases that consume those chips. And again, in part, you have the supply chain, and you have the fact that, you know, you're still living in a 52-week lead time world on silicon, for now. So you're definitely going to be dislocated, right? So I think it's hard for us to tie back to those numbers. I understand why everybody wants to do that, but it's not something that we can quantitatively thread through to their numbers. Yeah, the sizing of the market, I mean, the industry folks are trying to do that.
You know, I think 650 Group sizes it at $2 billion to $3 billion in 2024, growing pretty aggressively beyond that. Again, I'm not sure that anybody really knows exactly what that's going to be. We think about it as, you know, AI is going to underpin kind of investment and innovation around the network for the next, you know, three to five years, which is great, right? We're fully engaged in that. But, you know, in terms of being able to slot in and say, "This is exactly what's going to happen," that's not clear yet, right? We've talked for a long time about 2024 being a kind of, you know, digestion year for cloud. I think that's still our view.
That underpins the comments made at the last call, and I don't know if there's anything about AI right now that changes that dynamic. We think it absolutely can be a driver for the future, but it's going to take some time to get there.
Yeah. So we frequently talk about, you know, networking, especially high-end networking where you guys play, as speed needs speed, right? So does AI actually accelerate this 800G roadmap, understanding we're just now seeing real material 400G deployments today? Or is that 400G system actually good enough to handle the AI-based workloads?
I mean, currently 400G can be deployed into those use cases and is being deployed in those use cases today, right? But obviously, you want the fastest speeds that you can have, right? If you're going to generate more and more data, you're gonna want more and more speed. So everybody's going to want 800G to go faster, right? But that in and of itself is not going to make it go faster, right? There's still a set of work that has to be done. I mean, you've finally had silicon delivered, so we're working kind of on the systems now, and there's customer qualifications, et cetera. So it does take time to get there.
There's definitely, you know, emphasis on moving those dates as fast as you can, but still, from experience, it just takes time to actually get to a quality product that you can qualify and ship in volume.
Yeah. So what do you think about the capacity that's required for the average CPU-based application versus the average GPU application today, let alone down the road? And do we have to worry at all about a substitution effect within your revenue base, kind of CPU versus GPU, or CPU versus AI workload, essentially?
I'm going to take the first part of that question then-
I think from a capacity standpoint, if you look at CPU connections today, they're connected at 10, 25, maybe 100 gig at some of these larger customers. All those CPUs don't necessarily talk at the exact same time either, right? So there can be some oversubscription built into the network. But when you're talking about GPUs, these things are connecting at 200, 400 gig today, and they want to go as quick as possible. They all have to speak at the same time. So from a network capacity standpoint, there are much greater demands on the network from a GPU processing standpoint, right?
So, you know, when you're building out an architecture for an AI cluster, you're looking at a non-blocking architecture, which means that all nodes can talk at the same time at full capacity, or maybe even undersubscribed, right? So that you have the capacity in case of a failure. So I think they're definitely driving much greater demands on the network, even just from a raw GPU standpoint.
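(As an aside, to make the fabric math concrete: below is a minimal sketch of the oversubscription calculation being described, using hypothetical port counts and speeds. None of the numbers are Arista design guidance.)

```python
# Illustrative fabric-sizing sketch (hypothetical numbers only).
# Oversubscription ratio = server-facing (downlink) bandwidth divided by
# spine-facing (uplink) bandwidth on a single leaf switch.

def oversubscription_ratio(down_ports: int, down_gbps: int,
                           up_ports: int, up_gbps: int) -> float:
    """Downlink-to-uplink bandwidth ratio; 1.0 or below is non-blocking,
    meaning every attached node can transmit at line rate simultaneously."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# CPU rack: 48 servers at 25G with 8 x 100G uplinks. CPUs rarely all talk
# at once, so some oversubscription is tolerable.
print(f"CPU rack: {oversubscription_ratio(48, 25, 8, 100):.2f}:1")    # 1.50:1

# GPU leaf: 32 GPUs at 400G with 32 x 400G uplinks. GPUs in a training job
# burst at the same time, so the design target is 1:1 (or undersubscribed,
# to leave headroom for failures, as mentioned above).
print(f"GPU leaf: {oversubscription_ratio(32, 400, 32, 400):.2f}:1")  # 1.00:1
```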
So I think the number one question I've gotten this quarter on you guys is: Why won't this all be InfiniBand-based when you talk about AI workloads? So, you know, what advantage does Arista's Ethernet have over NVIDIA's InfiniBand, especially when you include some of the other pieces of the NVIDIA portfolio that can help the capacity and scalability of what they've got?
Yeah, I think that really comes down to scaling these AI clusters, right? So, you know, historically, maybe InfiniBand had some technology advantages. It scaled to 40 gig faster. It had RDMA. If you look now, you know, we have RDMA over Converged Ethernet, which has been deployed in many of these high-performance compute clusters for the last 10 years. If you look at the sheer technology roadmap, we're at 400G going to 800G, and 1.6T is coming down the road, too. So I think technology-wise, you know, Ethernet has taken what it wants from the InfiniBand world. And then when you talk about scale, and you think about, like, the scalable technologies, Ethernet is what's deployed today.
It's multi-sourced, which is very important to these large cloud providers. It's multi-tenant, which is very important if you're going to take these GPU clusters, carve them up, and service multiple customers. It's open standards-based. It's the same operational model that they have today. So I think there are a lot of benefits as these AI clusters scale out. Then it goes back to how big it is really going to be, right? And I think that comes down to whether you believe AI is going to be as pervasive as everybody wants. Like, everybody's going to want their data center to be AI-enabled.
So you guys have a huge advantage here. Your thesis early on was, you know, to focus in on the cloud and cloud-based workloads, and where you're seeing a lot of essentially the picks-and-shovels deployment today is with those hyperscalers, including two of your largest customers. And the big question that I think is on everyone's mind is: Does that networking mix actually change when you think about AI-related CapEx versus the traditional kind of CPU-related CapEx? So, you know, how should we be thinking about that mix shift of the percentage going towards switches and networking gear in sort of this AI world?
Yeah, I mean, that's a question that we get all the time as well, right? And, you know, I think you don't have a lot of quantitative data yet. Usually, you get this data from, you know, white papers, et cetera, that are published by customers over time, after they've had some experience and have deployed some stuff in their network. We haven't really seen a ton of that yet. Again, the industry analysts are starting to talk about it. You know, we always talked about networking being kind of high single digits, 10% of the overall data center spend, and they believe maybe with AI it's more like 15%, right?
Those numbers are thrown out there, but again, I don't know that you've had real validation yet from a real-life, large footprint where that's been measured on a consistent basis. But that's kind of where the industry guys are putting it.
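(As a rough worked example of what that estimate implies: if networking moves from roughly 10% to 15% of the data center bill, each data center dollar carries about 50% more networking content. The build size below is hypothetical; only the percentage shares come from the industry estimates mentioned above.)

```python
# Hypothetical mix-shift arithmetic; only the percentage shares come from
# the industry estimates discussed above.
build_capex = 1_000_000_000  # $1B hypothetical data center build

traditional = build_capex * 0.10  # networking at ~10% of spend historically
ai_build = build_capex * 0.15     # networking at ~15% under the AI estimate

print(f"Traditional networking: ${traditional:,.0f}")              # $100,000,000
print(f"AI-era networking:      ${ai_build:,.0f}")                 # $150,000,000
print(f"Content uplift:         {ai_build / traditional - 1:.0%}")  # 50%
```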
Got it. And as we think about the cloud and hyperscaler potential refresh, and even some of your large enterprises, right? 2017, 2018, that was a good couple of years for you guys, given the 100G kind of cycle was really ramping then. The hyperscalers have kind of moved towards five- to six-year depreciation cycles, depending on which one you want to talk about. But, you know, why wouldn't we also see that refresh of those systems on top of potentially some of the AI spending that you could get? So, you know, what's the feedback on the balance between what they need to refresh versus what they need to invest, and how does that impact you?
Yeah, I mean, we can't forget they grew triple digits last year, right? There's been a lot of spending that's kind of already happened, and they'll be big contributors again this year, right? And then after that, I think it just comes back to where their priority is going to be, right? And that's going to be driven by, you know, how much new investment do they need to make? How much of that is AI-driven versus, you know, refreshing their existing footprint? I mean, on Microsoft's call, they were clear that they're going to end up having to make sure they invest to support Azure in the meantime, while they're kind of building out this new AI footprint, right?
So I think you're going to see some mix of that. Exactly what that mix is, we'll have to wait and see. But, you know, they've been making big investments over the last year, and this year as well, into kind of their existing footprint.
Got it. And circling back on AI, you know, we frequently hear about you guys selling it externally, but on the internal side of things, as you think about your own costs, are there areas where you'd leverage AI that you are currently testing or planning, kind of from an internal perspective?
Yeah, I mean, the biggest place that it would move the needle would be around all of the software development resources. Maybe a little bit on the hardware side, where, you know, you're using it in chip design, testing, that type of stuff, right? And I think Ken is looking to see, you know, what's truly valuable in terms of deploying it in that area, because that's where most of our large spend items are. So that'll be the focus, I think, to begin with: is there something that can be done to make the software team more efficient? We're always constrained in terms of, you know, the talent that we're able to hire and find at the level that he wants to have it.
If there's something that can make that more efficient, there's definitely a lot of things on the list to go work on. So that's probably the most important place, but I don't think he's concluded how he wants to think about that or how we would deploy it yet.
Yeah. And I think one of the surprising things, including for myself, admittedly, this year is actually the resilience of your enterprise install base, let alone capturing new customers. So I guess, what's enabling you guys to, you know, gain share within the enterprise specifically? And then I've got a follow-up around campus after that.
Yeah, I mean, it's a couple of things, right? I think there's a refresh in the enterprise. We've always talked about kind of, you know, the digitization, transformation, whatever you want to call it, but we are actually seeing enterprises more and more reliant on their IT infrastructure in a real way, such that it impacts their business if it doesn't work, right? And that causes them to want to make an investment, and then we have an opportunity to go and show the differentiation that we bring: how much more reliable and how much easier it is to manage an Arista footprint, because of the single image across both campus and data center, and because of the visibility and all the other things that we can provide, right?
I think that's it. Yeah, there's a pull from an enterprise perspective, and then we're more ready as well to go play in that market. Our CloudVision is way more fully featured than it was before. It's doing a very nice job of managing everything across an enterprise footprint, and customers are really seeing value in that, right? I think it's the combination. The need is there, and then we're just more ready. We've got kind of campus, data center, routing. We're able to address it better from a sales perspective, and we're continuing to add salespeople to that. I think it's the right time for us to be more successful and be more targeted in terms of those accounts.
Yeah. So back in May, we were chatting, and you actually said campus was starting to become, like, half the spend in those organizations, I believe, at that time. So what is actually enabling you guys to get in there? There's just this question as to why Arista has the right to win in that campus environment. And do you feel you're still on pace for the $750 million goal that you outlined at the last Analyst Day?
Yeah, I think, look, the reason why we win in campus is the same reason why we win in the data center, right? Again, campuses have become more complex. Even if it's an office footprint, there's hybrid work, there's all the different ways that people want to access the network, et cetera. So that's way more complex, but it's not just offices anymore; it's retail, it's hospitals, it's colleges. Again, the level of complexity has just gone up, and that's driving this need for a more simple approach to the network, so that it can be managed efficiently and the complexity doesn't break it, right? So I think that's a big driver of it.
It all comes back to the same thing of just high quality, and then simplicity in a good way, right, in terms of being able to manage the network.
Yeah. So, obviously, AI has been a big topic, but now as we start to turn the page towards the fall here, the question we're going to get is: What does 2024 look like? You're not going to guide here, I don't think, but I think you started on a little bit of that at Piper this week. So walk us through the puts and takes that we should be thinking about for calendar 2024. You had mentioned, you know, a digestion period, potentially. What else should we be thinking about?
Yeah, I think, look, we'll do more of this, obviously, in November at the Analyst Day. You know, Jayshree has long had, probably since the Analyst Day last year, this goal of being able to grow at least 10% in a year when cloud is muted, or, you know, kind of in that down or more muted part of the cycle, right? So we talked about this a little bit on the call right at the end, but that is a goal that we're kind of starting to feel better about, and obviously, we'll talk more about the different ways that you can get there in November at the Analyst Day.
But, you know, one assumption, obviously, is cloud is more muted, and then you're relying more on enterprise and some of the other cloud parts of the business to drive the growth, right? But again, whichever scenario we take, it's never going to be the perfect answer of what actually happens, right? So we need to have multiple different ways to get there. And we'll come back and address those a little bit more at the Analyst Day. But, you know, I think that's something that's firming up in our minds, that it's possible even if you assume that cloud is muted.
Is there anything to think about on the purchase commitments or inventory turns? Like, where do you actually see inventory turns in kind of a normalized state for you? Or, you know, where do purchase commitments go in a normalized environment, since the way you've pointed us to think about it is purchase commitments and inventory together?
Yeah, so the purchase commitments are coming down nicely, right? And they should, right? Because we want to step back out of the contract manufacturers' supply chain, now that everything is returning to normal, right? You know, in order to kind of make it all work, we had to step in and backstop, if you like, purchase commitments on pretty much the entire BOM, which we never normally do, right? So now we're kind of coming back out of that, and we'll be left with the key components that we have always buffered, that we buy directly. That's what you're seeing in that raw materials line on the balance sheet.
A big chunk of that is silicon, and, you know, the size of that is going to be very much linked to what the lead times are for those parts, right? And right now, that's still at 52 weeks. Does it get better? We'll have to see, right? But that will drive that number. So the purchase commitments are coming down. They need to continue to come down, and we're very focused on doing that, making sure we're taking advantage of every lead time reduction that we can get. On the other side, you have raw materials, which is going to be driven by those lead times. And then, you know, on the finished goods, days of inventory is still not back to where it was pre-COVID.
The number is bigger, but obviously the business is much bigger, too. So, you know, as things get better, we'll probably add a little bit more there just to get back to kind of those same days numbers, and then we need to think about the raw materials and where those lead times end up.
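(For reference, a minimal sketch of the days-of-inventory arithmetic behind that comment; all dollar figures below are hypothetical.)

```python
# Days of inventory = inventory / annual COGS * 365. The point above:
# holding days constant, the dollar balance scales with the business.
# All numbers here are hypothetical.

def inventory_for_target_days(target_days: float, annual_cogs: float) -> float:
    """Dollar inventory balance implied by a days-of-inventory target."""
    return target_days / 365 * annual_cogs

for annual_cogs in (1.0e9, 2.0e9):  # a business roughly doubling
    bal = inventory_for_target_days(60, annual_cogs)
    print(f"COGS ${annual_cogs/1e9:.0f}B -> 60 days of inventory = ${bal/1e6:,.0f}M")
# COGS $1B -> 60 days of inventory = $164M
# COGS $2B -> 60 days of inventory = $329M
```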
Got it. There's about five minutes left. If anyone has a question, please feel free to raise your hand. I'll call on you. Otherwise I can keep going.
How hard is it for you guys to get chips these days from your suppliers? I know they're all pretty tight, with TSMC on board. So how long of a lead time or wait is it? [audio distortion]
Yeah, I think, you know, for the key components, the key chips, it's always been a longer lead time. That's currently, like, 52 weeks. But there was predictability in terms of what you were going to get and when you were going to get it, and I think that's still the case, right? So, you know, we give them visibility, they make commitments, and they've been executing those commitments pretty well. The problem children were, like, the small parts that nobody really expected, where we hadn't taken real action to resolve early on, because we didn't understand that that was going to end up being kind of the golden screw. But that is much better.
I mean, that's really coming back to normal now.
Yeah, thanks.
Yeah, maybe from my side. So obviously, GPUs are a clear focus with some of the data center build-outs. And, you know, typically you see those deployed up front, before the networking gets built out around them. But is there a way to think about the timeline from GPU deployment to when you start needing to really accelerate the networking purchases? You know, how do you think about the difference between GPU [audio distortion]
Yeah, I mean, simplistically, you'd say, you know, once you deploy those GPUs, you're going to want to make sure you've got enough networking around them, because that investment is so significant relative to the network that you're going to want the network to be there and not be a bottleneck in any way, right? You know, I think what's happening right now, in terms of those first deployments that we're seeing, is at least a good chunk of those are going in with InfiniBand, because that's kind of the bundle that you're buying. So again, that's why, you know, I think it does take some time before we start to see AI be a big driver.
Jayshree talks about, you know, trials with customers now around infrastructure, pilots next year, and then you start to ramp to volume beyond that.
Maybe just keeping on that AI theme, and this is probably more for you, Liz, because I think most forget that you're an engineer at heart. Obviously, you guys are the picks and shovels of AI here, and actually the systems behind it. But how should we think about, potentially, you know, other AI-based products coming out of Arista, or leveraging AI in your products?
Yeah, I mean, if you look at our AVA, which came out of the Awake acquisition, right? So this is our network detection and response product. We actually have some AI and ML capabilities that we utilize in AVA to detect anomalies within the network. So it utilizes all of the EOS data, the state that we have from the architecture, and it's able to process through what is normal activity, what is abnormal activity, what needs to be flagged, what needs to be remedied. And so we're offering that today in AVA. You know, I think we'll continue to work on those types of product lines and see what else we can do with the data and the raw state data that EOS captures.
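(For flavor, here is a toy sketch of the kind of baseline-versus-anomaly flagging being described. This is generic z-score logic over made-up counter data, not AVA's actual algorithm.)

```python
# Toy anomaly flagger: mark samples far from an interface's own baseline.
# Generic z-score logic with made-up data; not how AVA actually works.
from statistics import mean, stdev

def flag_anomalies(samples: list[float], threshold: float = 2.5) -> list[int]:
    """Indices of samples more than `threshold` std devs from the mean.
    With only ~10 samples, a single spike tops out near z = 2.85, which is
    why the threshold here is 2.5 rather than the textbook 3.0."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]

# Made-up per-minute flow counts for one interface; index 6 is the spike.
flows_per_minute = [120, 118, 125, 122, 119, 121, 950, 123, 117, 120]
print(flag_anomalies(flows_per_minute))  # [6]
```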
Yeah. Anything to call out in terms of, you know, sales cycles for this year? Do they actually start to get better in the back half of the year as we lap some of those tougher sales cycles? You know, is that picking up at all, or anything to think about as we go into 2024?
In terms of seasonality or?
Just in terms of, you know, customers sweating assets for longer, and those deals that you expected maybe to close getting pushed out potentially. Or is that, you know, starting to slow down, and are we actually seeing those deals, especially on the non-AI, non-hyperscaler side, actually close?
Yeah, I don't know that we've seen any huge difference there. I mean, obviously the lead time change is a big deal from a customer perspective, because instead of having to make decisions for 12 months from now or even longer, you're starting to get back to maybe, like, a six-month lead time, so you can make decisions in a closer horizon, right? So that's definitely a factor; some customers hesitate to decide something for 12 months from now. Once those lead times get shorter, yeah, we'll lose some visibility, but from a customer perspective, I think it's really important, right? It gives them a lot more flexibility, whether that's the enterprise or the cloud, right?
I mean, having to wait 12 months to get a switch once you have a need, that's just too long. So it really is good to start to see those lead times come down, just from a pure business perspective.
We're coming up on time here. You know, it's a little bittersweet for me, but obviously, you've announced that you're going to be, you know, stepping down, retiring here, and congrats again.
Thank you.
I guess, why retire now? And given that you're, you know, involved in the CFO search, what is Arista looking for out of the next CFO?
I mean, why? Honestly, because I can, right? I mean, it's something I've been thinking about a little bit more. I've got a lot of family in Ireland, and I spend a lot of time there, so it will just make that whole thing a lot easier. What are we looking for? I mean, again, you've met the team. You know the team, right? It's a very smart team, and it's a culture where, you know, everybody is humble and focused on the business. So finding somebody that fits that, that can help the team and bookend the team going forward, I think that's the key, right? So culture is a big part of the search, for sure.
Awesome. Well, thank you very much for making the trip out, and congrats again on your retirement, and looking forward to the next big thing out of Arista.
Okay. Thank you very much. Thanks, everyone.
Thanks, everyone.