Hi, good afternoon, good morning, depending on where you are. I am Mike Genovese, the Cloud and Communications Infrastructure Analyst at Rosenblatt Securities. Thank you for joining us for the third annual Age of AI conference. Super excited today to have executives from Arista Networks, ANET, a really, you know, a company with an exciting history, but also an exciting future. Not that long of a history, but it's been, it's been a really good one. Today we have from the company the Chief Platform Officer, John McCool, who I think is as cool as his name sounds. We have the fun one at the company, Liz Stine, the Director of IR. John and Liz, thanks for being here today.
Thanks for having us.
Thank you.
All right. Well, let's get into the questions. For the investors who are joining us here on the webcast, there's a tool on the right-hand side of your viewer screen to submit questions, which will just be emailed directly to me. I have a 100% hit rate so far of asking the questions that I get on the fireside chats, and I expect that to continue. So, so, John, let's talk about, you know, 2023, the second half of 2023. I mean, you guys are having a really good year, more than 25% revenue growth. I mean, well more than that actually, you know, but then you've also talked about push outs and, you know, push outs, pull ins, and there's a little bit of caution in the environment this year.
I think, you know, web spending's, or Cloud Titans spending's, probably supposed to get even better next year. But help me kind of describe the 2023 environment, where there's these push outs, yet you guys are growing, you know, in the, you know-
Yeah
30% and more. Yeah, you know, talk about 2023 before we talk about 2024.
Sure. You know, if, if you think about, let's talk about 2021 and 2022.
Mm-hmm.
We saw significant growth in the Cloud Titans sector, right? I think part of that was, with supply chain and the constraints, the Cloud Titans specifically recognized that, you know, we were in a difficult environment on supply, and recognized that much earlier than the enterprise customers did, who might not have direct access to semiconductor companies. We saw a lot of growth in those two years, continuing into 2023, but at the same time, starting to catch up on supply in the beginning of the year, and starting to meet rollout and deployment schedules. We've been, you know, very cognizant of driving those products into the market, consistent with our customers' ability to roll them out and put them into the environment. At the same time, you had the enterprise deployments starting to kick in, and we were able to take some of that capacity and deliver to our enterprise customers who've been waiting on product for some time.
Yeah. I definitely wanna get into the enterprise. I think we'll talk more about the, the cloud and, and specifically the Cloud Titans. First, I mean, you know, I wanna get your perspective from, from inside the industry, 'cause from outside the industry, it seemed like all of a sudden with ChatGPT, just, you know, really just a few months ago, that everything changed, and all of the CapEx priorities of the cloud changed, and they were all of a sudden overnight doing something completely different than they were doing before. You know, from inside the industry, things probably look different than what I just described. Just talk about how AI has, has impacted spending so far. I mean, we're gonna talk about the opportunity, you know, that's ahead of you on AI.
Yeah.
Just again, for 2023, sort of, you know, from your perspective, how has AI really changed things in 2023?
Sure. I mean, I think we would agree, you know, in some ways, that ChatGPT was a, a defining moment. At the same time, we, you know, we'd been involved in customers who were engaging in AI, doing trials, starting to think about how they would deploy. I think at that moment, when this became a very public announcement, people had to take a look at the projects internally, how were they doing, think more aggressively about how they were gonna monetize them, and really kind of announce their plans, and tip their hand, if you will. It really became kind of a shift from kind of an internal-focused effort to being more external and public.
At, at the same time, I think they were kind of rationalizing what that means to their infrastructure going forward, and how they refocus on driving those initiatives. I think what we've seen more recently is an understanding that that's gonna require, you know, continual investment in the AI piece, but also, along with their core networks, to be ready to put in those AI clusters, along with just continuing to sustain and run the business.
Yeah. I mean, as you think about 2024, do you expect then that, you know, as I kind of call it, that these cloud guys will, will walk and chew gum, in terms of they'll be able to, you know, focus on the general core cloud and get, get what they need done for AI? You don't think AI is gonna basically suck up all of the air in the room?
That's a good question. I think we've seen, you know, over the last six months, kind of that understanding that you have to do more than one thing, and customers starting to react to how they're going to deploy and make those investments going forward. I think we'll see that filled out even more strongly through the back half of 2023.
Okay, before we get into sort of the 2024 and beyond outlook for cloud, let's go back to the enterprise business for, for a second, because, you know, you made that comment earlier, where you said, you know, "And then this year, the enterprises started to deploy these, these large networks." Let's, you know, I guess double-click more on that, right? I mean, why did that happen all of a sudden?
Yeah, I think this might be one that wasn't all of a sudden. We've been working on the enterprise for some time. I think that the, you know, the enterprise was slower than the cloud folks to recognize the impact of COVID on the supply chain, and perhaps there was some disbelief in those extended lead times. They're starting to deploy. I think, at the same time, there's a very unique Arista piece to this, as during COVID, I think our teams were able to get in front of more enterprise customers. Within different market segments, different verticals, we've seen movement of folks from different IT teams. We were always very strong in the financial vertical, media and entertainment.
Moving into healthcare and industrial use cases, where before we probably didn't have the recognition, who's Arista, in those kinds of accounts. As more people have used our products and moved to other companies, I think that there's less headwind for us just being able to insert. At the same time, we've built out the portfolio over the last three years, from kind of data center, data center routing, to campus use cases, network visibility. We just have more to offer, and effectively, our sales teams are getting more at-bats to bring Arista kit into the environment.
Okay. Well, I mean, yeah, so let's stick on enterprise for a while before we get back to the cloud stuff. Talk about, talk about more of the things that you have, of the more complete solution. You know, what other pieces do you need to add there? Then the, the other part of this that goes with it is, today, in your enterprise business, how much is direct touch, and how much is distribution? How will that change over time?
Okay, maybe I'll take the latter part of that question first. Our enterprise go-to-market effort is very focused on Fortune 2000.
Mm.
We, we have to drive a, a direct preference in those accounts, get them to understand what we have in the portfolio, what our unique advantages are. The go-to-market is very much Arista-led, with partner, fulfillment and deployment. You know, we have gotten a much stronger engagement with the channel as a result of those deployments, and it wouldn't be atypical for Arista to focus on a Fortune 2000 account in a region, and that partner who's deploying to see other opportunities that might be smaller and pull Arista into those accounts.
We think with this approach we still have a lot of headroom, both in new account penetration and in the large accounts that we're in; they have significant network TAM that we're just beginning to tap into, so we have room to grow, even with the accounts that, that we're installed in today. At some point, you know, I think we, we may look at a mid-market type of approach, which would be more of a distribution- and channel-led approach, but we're, we're not there today. Now, regarding the portfolio, you know, I think we were recognized early on as a data center player, and that allowed our entry into specific verticals that were data center heavy, right?
Financials, media, entertainment. As we built out the routing stack, we could then address data center-to-data center interconnect, with the virtual router, and data center to cloud. With the campus, we, we broadened that out to PoE use cases, connecting IoT equipment, network visibility with one of our acquisitions, Big Switch. You've kind of seen this use case build out. More recently, the WAN transit portfolio was a piece where we didn't really address the connection of a large enterprise and their backbone into remote offices or branches or retail locations; we added that last year. We've strengthened that with our introduction of network identity, with our AGNI product, as we call it, to be able to identify users coming into the network.
The Awake technology we've coupled with our campus engagement, which gives us a unique way to profile people who are coming into the campus environment from, from unknown locations and identify them. I think we have a very strong, broad portfolio now.
Yep
Across all pieces of the enterprise network. Obviously, there's, you know, some filling out and building out of the portfolio we can do within that, but I think all, you know, typical use cases, at some level, we, we're able to address.
I didn't hear you talk about Mojo or Wi-Fi. Did you, was that just you just left, did you-
I just said there were too many words. For folks working on Wi-Fi at Arista, I apologise for that. No, that's been, you know. I think one of the reasons we, we did that acquisition when we went into campus is, is we realised you had to have both a wireless, and fixed-
Wired portfolio to even engage in some of those portfolio bids. Yeah.
Like, I think one of the, the unique things and the differentiating thing about Arista is that we've been able to expand our product portfolio and enter these new markets, all while keeping that common operating system, that EOS.
Mm-hmm
And that common management platform, CloudVision. These are the things that, you know, our customers see value in, from the fact that you can run the same operating system end-to-end, that you can manage it, and you can get visibility through CloudVision end-to-end, from your network, rather than coming out with a new product portfolio that has its own operating system or own management tool.
Mm-hmm. Oh, great. On the, on the second quarter earnings, I heard the company. I hope I'm not mischaracterizing this, but I sort of heard the company say, "Well, you know, low double-digit revenue growth, probably about where consensus is at 11%, is, you know, a reasonable target to have," basically. I mean, that's what I basically heard from the company. I think the quarter before that, the confidence wasn't as high to say anything about 2024. It makes sense that, you know, as we're closer, we get a little bit more visibility.
You know, as you think about whatever the growth rate for 2024 is, and cloud as a driver, enterprise as a driver, just talk about them relatively as drivers in 2024 of growth of the company.
Yeah, I mean, maybe just kind of step back at, to the comments. You know, obviously, moving into Q2, a little more visibility. We did talk about, Jayshree talked about double-digit growth as, as our target. As we get into our Analyst Day in November, I think you'll see more color around that 2024 projection. We'll have, obviously, more visibility from our customers. In terms of mix, I don't think we're ready to call that. I mean, typically, what we've seen going into a new year is we'll have multiple ways to get there-
Mm-hmm
Which will be important. I think we do expect our enterprise momentum to continue into 2024, I think that's, that's a piece of it. We'll have a good idea, too, what, what cloud looks like, and probably more color around this AI and core use cases, and how that plays out.
Yeah. Well, just, just starting at the sort of the, the CapEx for cloud, and, and for Cloud Titans in particular, for 2024, it seems like for the ones who've talked about 2024, they, they either raised it or they, you know, they pushed out CapEx from 2023 into 2024, so that the 2024 growth is higher than before, 'cause 2023 is maybe a little lower and 2024 is higher. You know, I'm not suggesting that the overall CapEx should, should line up with your revenues. I understand that year-to-year, that's different. Over time, they'll look the same, but year-to-year, they're different.
I'm just saying as a general backdrop, what looks to be an over 20% cloud CapEx growth, to me, you know, I wonder what you guys are thinking internally for 2024, seems, seems pretty solid. I know that we'll, we'll definitely get into the, the AI discussion and the timing of the AI discussion for, you know, Ethernet in the back end and what that means. It just seems like entering 2024, I guess, instead of talking about it, I'll ask you, what do you think, what do you guys think about cloud spending in 2024?
Yeah, I mean, we, we follow the announcements, and it's generally, you know, it's good to see it improving rather than decreasing. As you said, it's been difficult for us over time, in a specific duration, to do a complete correlation of that spending, you know, which includes, you know, outlays for new data centers, building buildings, pulling fiber. You get down to the IT portion of that CapEx, it's a smaller piece, and within the IT piece, networking's about 10% of the mix, right? The timing, I think, and the correlation of those annual spend numbers is a piece that has to get dialed in a little bit better.
Generally, I think we're happy with the overall macro environment, and, you know, I think we'll learn more as we get towards Analyst Day.
Okay, cool. You know, this might be, I don't know, you know, I don't know if other people start the questions about AI here, but actually, let's start by talking a little bit more about the Ultra Ethernet Consortium. I mean, is that something that you guys? I mean, you're a founding member.
Yes.
Was that your idea to start it, or your, your idea with some other companies? I mean, just tell us more about the background of that.
Yeah. I mean, we, we've been involved in many of these Ethernet industry consortiums to, to kind of move the base of the standards forward. The standards committees work on their own timeline, but move much more quickly when more than two, three, or four people come together to offer recommendations on how to adopt those standards or new standards that might move the needle forward. I think if you look at the collection of folks that are involved, some of them are our competition, some are involved in the GPU space. We've worked together before. This isn't kind of an unnatural act to, to move the state of the art forward. The, the opportunity for Ethernet is what we call the back-end network.
If you look at these AI clusters, they're constructed with a number of ports that face the user in some way, and I, I think that's uncontested that those ports would be Ethernet ports.
Yeah.
What's, what's interesting to the Ethernet community is they are combined together in a cluster for GPU-to-GPU communication to do training, run algorithms, et cetera. That, that interconnect, I think, represents a new opportunity. Neither InfiniBand nor Ethernet was purpose-built for AI; neither was designed with AI in mind, and there's enhancements and capabilities that can be added to make, make those technologies work better with GPUs.
Yeah, I mean, just some of the standards for Ethernet, could you talk about the ones that are needed, that you kinda wanna develop quicker? Can you just talk about some of those features?
Yeah. I, I think the general problem statement is you have a high-value asset now connected to the network, with the GPUs. Incredibly expensive, large amounts of memory, and you wanna get full utilization out of that asset, so you don't want them to stall. It, it drives the bandwidth requirement higher, which was already on the Ethernet roadmap to, to 800G and such, so that's good. There's things that you can do around, you know, balancing the load, dealing with congestion inside the switches, giving visibility to the end user on the utilization of various GPUs that are connected in the cluster. So those are the kind of aspects that would be added, that the, that the group is looking at.
The other piece is just making it simpler to deploy Ethernet for these use cases, so making it a little bit more plug-and-play than it would be today. That's the set of enhancements, I think, that the consortium is gonna address.
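[Editor's note: to make the load-balancing point concrete, here's a toy sketch of our own, not anything Arista or the consortium has published. Classic ECMP hashes each flow onto one uplink, so a handful of large AI "elephant" flows can collide on the same link, while per-packet spraying of the sort being discussed spreads load evenly. Link counts and flow sizes are made-up numbers.]

```python
import hashlib

LINKS = 8          # uplinks in an ECMP group (assumption)
FLOWS = 8          # a handful of large, long-lived "elephant" flows
FLOW_BYTES = 100   # each flow carries the same volume (arbitrary units)

def ecmp_link(flow_id: int) -> int:
    # Classic ECMP: hash the flow's identity (stand-in for the 5-tuple)
    # to pin the whole flow onto one uplink.
    digest = hashlib.md5(str(flow_id).encode()).digest()
    return digest[0] % LINKS

ecmp_load = [0] * LINKS
for f in range(FLOWS):
    ecmp_load[ecmp_link(f)] += FLOW_BYTES

# Per-packet spraying: every packet can take any link, so load evens out.
spray_load = [FLOWS * FLOW_BYTES // LINKS] * LINKS

print("ECMP per-link load:   ", ecmp_load)   # lumpy: hash collisions
print("Sprayed per-link load:", spray_load)  # perfectly even
```

With only a few large flows, hash collisions leave some links idle while others carry two or three flows' worth of traffic, which is exactly the stall scenario the GPU-utilization concern points at.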
Okay. I mean, I think you said, I'm trying to remember from the 2Q call, that there are currently trials of back-end. I mean, with hyperscale customers.
Yeah.
You're in trials for back-end Ethernet?
So, you know, I think if we look back at Q1 with all the excitement of AI, I think we, we actually wanted to characterize where the state of the industry was. I think there were a lot of, lot of people wondering, "Why isn't Ethernet there today?" You know, "Are deployments happening in Q2?" And I think that what we did in this announcement was try to lay out the framework for how we see things progressing. We talked about first half of 2024, trials. Second half of 2024 being more proof of concept, so people are actually deploying the technologies. And then in 2025, really being the opportunity for, you know, a meaningful revenue transition in AI, assuming that AI is the ubiquitous use case that everybody believes it's gonna be.
Are these, going to be 800G Ethernet switches going to these networks?
I, I think there's a definite interest in 800G, but, you know, our 7800 product today and some of our Tomahawk series products today are being used in kind of AI clusters. There's not a binary kind of event here, but with the new capabilities that'll be added in 800 gig, things will get better, and I think more interesting in terms of bigger deployments.
Yeah. You know, I mean, I think that, well, I can only speak for myself. I wanna say the financial community, but maybe I could just speak for myself, that, you know, we're getting better at sort of counting the number of, you know, GPU ports that are gonna need to be connected optically to other ones, and, you know, what these things cost, and sort of figuring this out. You know, the size of the Ethernet opportunity, though, for you guys, I guess I don't really know how to think about it. Do you have any advice for the back-end opportunity? You know, number of switches, capacity that's needed, or, you know, compared to before? Obviously back-end didn't exist before, so I don't know what we'd compare it to, but, you know, my question is, can you help on this kind of, you know-
I-
Modeling question?
Yeah, I think this math is hard. You know, if I look at kinda the rollout of Ethernet into telephony, right? You could count the number of phones on the desks, and this is how many, and what's the replacement rate. The challenge here is the AI use case itself will be very diverse, depending on what application you're, you're trying to run. You know, what are you actually building? The use cases will vary in the number of interconnects between the GPUs and how many front-facing ports out the front of it, so we believe there's gonna be a lot of diversity in that answer.
There's a lot of, I think, quantification of the value to the end customer and how much they can monetize it that also has an effect on this. We're, we're far from standardization in this industry. You know, we're, we're in the very Wild West days of people building things out, sorting out what's optimal for them, and it could be very use case driven, customer by customer, on how they, they see the value.
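[Editor's note: for readers who still want a starting point on the modeling question, here's a rough back-of-envelope sketch. Every number in it, the GPU count, NICs per GPU, switch radix, and the non-blocking two-tier topology, is a hypothetical assumption of ours, not a figure from the conversation.]

```python
# Hypothetical back-of-envelope for back-end fabric sizing.
GPUS = 8192
NICS_PER_GPU = 1     # one back-end network port per GPU (assumption)
RADIX = 64           # usable ports per switch (assumption)

def ceil_div(a: int, b: int) -> int:
    return -(-a // b)

# Non-blocking two-tier leaf/spine: half of each leaf's ports face GPUs,
# half face spines; spines are sized to absorb all the leaf uplinks.
endpoints = GPUS * NICS_PER_GPU
leaves = ceil_div(endpoints, RADIX // 2)
spines = ceil_div(leaves * (RADIX // 2), RADIX)

print(f"{leaves} leaf + {spines} spine switches "
      f"({(leaves + spines) * RADIX} switch ports) for {GPUS} GPUs")
# → 256 leaf + 128 spine switches (24576 switch ports) for 8192 GPUs
```

This is exactly the kind of math John cautions about: change the NICs per GPU, the oversubscription ratio, or the cluster size and the switch count moves substantially, which is his "lot of diversity in that answer" point.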
I guess my question now, like, I have a question here, which is, are there more than three, and I don't just mean, you know, for you guys, that you're selling to, but, I mean, for the entire industry, are there more than three important customers here? You know, or is it, I mean, it might not even be all the Cloud Titans, as far as I can tell. You know, and then beyond the Cloud Titans, are there important AI customers? I don't wanna sound ignorant, but I really, so far, I see this focused at three Cloud Titans in particular, and I wanna know if I'm wrong about that.
I, you know, I think this could go more broad than just, just the Cloud Titans. I mean, that's obviously where our focus is.
Mm.
We think they'll have strong advantages of scale right off the bat. We think they have interesting use cases. A lot of people believe that actually having the data is important, because what's the value of AI if you don't have the data to feed it? That's where our focus is. There could be, you know, smaller use cases in enterprises or government that require smaller clusters, and maybe they're just interconnected with NVLink, let alone InfiniBand or Ethernet. Yeah, I think there'll be a lot of diversity, but specifically for Arista.
Yeah
W e're focused on that cloud segment.
Well, do you see the opportunity with both of your large. 'Cause I was kind of excluding, I think, your largest customer from this, and I, I might be wrong about that, but to me, it seems like the ones who are really building this infrastructure are the ones who sell it, you know, to other enterprises, right? I mean, they sell workload processing capacity to other enterprises, and, you know, I think your largest customer's not really in that business. So my question, though, is, for AI networks, is there a huge opportunity with both of your biggest customers, or is it more one than the other?
I don't wanna get too customer-
Yeah
Specific, I would say certainly there's definitely, you know, an infrastructure-as-a-service type of opportunity.
Mm
S imilar to what we saw with just computing. There's also internal use cases in large customers on how they're gonna monetize AI for their own business.
Yeah. Well, yeah, that's helpful. All right, well, look, I've been trying to avoid the question, 'cause I'm a little bit bored by it by now, but maybe some people on the line aren't, so I, I have to ask it. You know, 'cause I really see it as a matter of when, not if, with InfiniBand. Maybe just give us a little bit of color on how InfiniBand did get this much traction so far in the networks that have been built. Is it just 'cause it was available and there, and that's, that's why we saw it?
You know, tell us the reasons why, you know, Ethernet, has to be the technology of choice, you know, for the long term, I think.
Sure.
Yeah.
I, I have to give you the brief history of time here just to answer that question.
Mm-hmm.
Just for people's background, InfiniBand was also started with a consortium, but wasn't standardized by IEEE. That consortium was folks that were involved in high-performance computing; I believe Intel, Mellanox, and HP were involved in that. At the time that happened, Ethernet wasn't very motivated for the data center or high-performance computing. It was operating at 1G, and wasn't addressing some problems. Two things that InfiniBand did were very important for the industry. One, they adopted 10G interfaces before Ethernet, and they had a roadmap to 40G and beyond. The second one was something called RDMA. This is moving data from an application out to the network, and they worked with the Linux community to make that happen. Ethernet came along and co-opted those technologies.
You saw Ethernet move to 10G, 40G, and at the interface and bandwidth level, both InfiniBand and Ethernet are moving relatively in lockstep. You know, 800G is coming on Ethernet, and I'm sure InfiniBand has some kind of roadmap to 800G. 'Cause the investment is so high on those interconnect technologies, they're being standardized across the board. Also, Ethernet adopted RDMA through a standard called RoCE, RDMA over Converged Ethernet. That's built into operating systems today. Ethernet also developed the ability to have a lossless fabric, which was a key part of InfiniBand. Some of those more potent high-performance things have been co-opted and integrated into Ethernet already.
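[Editor's note: the lossless fabric John mentions is commonly built on Priority Flow Control (PFC, IEEE 802.1Qbb), where a congested receiver tells its upstream neighbor to pause a traffic class before the queue overflows, so RoCE traffic is never dropped. Here's a toy single-queue sketch of that mechanism; the thresholds and rates are purely illustrative, not from the conversation.]

```python
# Toy illustration of PFC-style lossless behavior on one queue.
QUEUE_LIMIT = 10   # buffer capacity (made-up units)
XOFF = 7           # signal PAUSE when the queue reaches this depth
XON = 3            # signal RESUME when it drains back to this depth

queue = 0
paused = False
drops = 0

for tick in range(100):
    # Sender offers 2 packets per tick unless paused; receiver drains 1.
    if not paused:
        for _ in range(2):
            if queue < QUEUE_LIMIT:
                queue += 1
            else:
                drops += 1   # would be a tail drop on a lossy fabric
    if queue > 0:
        queue -= 1
    # Receiver signals PAUSE / RESUME based on queue depth thresholds.
    if queue >= XOFF:
        paused = True
    elif queue <= XON:
        paused = False

print(f"drops with PFC: {drops}")
# → drops with PFC: 0  (pauses keep the queue below its limit)
```

The same loop with the PAUSE logic removed would overflow the queue and count drops, which is why RDMA traffic, which tolerates loss poorly, wants this kind of fabric, whether InfiniBand native or Ethernet with PFC.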
That said, Mellanox was acquired by NVIDIA, and with the crypto craze, InfiniBand became an interconnect of choice for those, those crypto clusters, and a very convenient way to roll out GPUs. I, I kind of think there's a technology piece of this, but there's also a, a consumption-
Mm
Piece of this question, and, you know, one supplier being more vertically integrated and selling kind of a solutions-based approach, versus people who wanna consume from multiple vendors, who may be cooperating, but also are competitors, and have a more diverse environment.
Okay, so that's great background, and that, that actually, that was not boring, not to me. Not to me. Maybe to some people, but.
Did somebody fall asleep today?
Yeah, somebody, somebody fell asleep, but I actually woke up.
Good.
So, but, now the question of, you know, why do so many of us, I think including you, too, think that it's, you know, not a matter of if but when? That, you know, as the volumes ramp, as the models get larger, as the data centers get larger, it almost inevitably has to be Ethernet.
Yeah. I mean-
So, so why is that?
I think first you have to believe that this AI thing is real, and it's gonna scale and be really big.
Mm.
If, if you have that as your backdrop, right? You want an interconnect that's ubiquitous, you want multiple vendors, you want a rich ecosystem at all points of the infrastructure, down to the GPU. You have some cloud customers today building their own GPUs, right? We do believe there's gonna be some diversity there. The InfiniBand technology is really owned by one company. There's no merchant silicon or multi-vendor approach to that. We also think people will be interested in doing the same things they do today with compute.
Have multi-tenancy, so they can cut up their network or GPU investment into multiple buckets virtually, and then either lease them to enterprises, or even for their own use, they might have different use cases, and wanna just build that one large cluster and be able to flex that over time. They might want GPUs in diverse places, like an edge environment. All those things that we've seen, you know, that you needed from a technology point of view to deploy, are already existent in Ethernet. That's kind of our view. If this isn't big, and it's just gonna be a small number of clusters and proprietary, then, then maybe what we have is good enough, but I, I think this has legs.
Yeah. How should we think about the fact, I mean, when Cisco just reported, right, and they talked about $500 million in orders, I think that they were talking about for chips, right? I mean, for Ethernet fabrics, I think they're really talking about something that would compete with your supplier before it would compete with you. I just, I mean, you know, the fact that Cisco is more vertically integrated in their business, and that, you know, they've always been one of your key competitors, how do you think about your focus on what you do versus their, I guess, a little bit broader focus, or much broader?
Yeah, yeah. I mean, there's portions of their business we don't compete in at all.
Yeah.
We just compete in that networking core segment, right? I, I don't think that we have seen the strategy being much different than it has been. A lot of announcements around Silicon One, but that's consistent with how Cisco's done business for years, developing their own silicon and integrating that into products. The optical, they always had an optical business. Optical has become more of an integrated business with optical components. We, we don't compete with them in the optical space. We support multi-vendor, including Cisco. You know, in terms of execution, I don't think we've really seen them be a chip supplier. They're still competing with us on a system-by-system basis, if that helps.
Okay. Then, you know, I guess the, the, I mean, the, the, the relationship with NVIDIA, how would you characterize how you, you know, interact with them? Because I think that in the future, in this open environment, they'll be supplying a good, a good portion of the GPUs and a, a smaller portion of the networking, and you'll be supplying a bigger portion of the networking, and you guys will probably have to get along, right? So how do you think. I mean, what's your relationship with them like?
I think that, that's a good way to frame it. I think we gotta get along. We wanna support anything that's connected to the network. To the extent that they wanna sell GPUs into an open environment, we're happy to engage, and I think we don't have a... no, there's no friction in the relationship. Obviously, there's this InfiniBand versus Ethernet piece today, I don't think that we're focused primarily on different things as companies, right?
Yeah. Well, I, you know, I guess I don't know the company as well. I, you know, I don't cover that sector, but it seems to me that they're, I mean, they're clearly enjoying benefiting from the Mellanox acquisition, but are they, you know, do you see them taking steps to kind of extend the lock-in period? I don't know, but I'm wondering if that's something you guys see out there, if you're competing against a vendor who's, you know, trying to hold onto this monopoly longer, or if they're, you know, really, I mean, supporting their customers, who they're willing to open up to. It's not, I guess it's not up to them.
I mean, my question is, is really, well, I'm stumbling over my question here, but how do you see them operating out there? Are they gonna, are they gonna play nicely?
I mean, look, I think they're competing on more of a vertically integrated approach, which doesn't just include the network, but, but all the way up the stack to what they're doing with CUDA, and really trying to take advantage of the interest to, you know, deploy quickly and roll out racks and a system that you can build applications on. That seems to be their focus, and InfiniBand comes along with it. To the extent customers wanna push against that and open this up to be multi-vendor, I think that's gonna be a very customer-driven approach, and I would anticipate that, that we all get along.
Okay, great. Let's back it up and take it to a higher level. We, we got a question from the field. We're not right up against time, but we're getting, we're getting towards the end. I think that this question kind of gives us a chance to combine a lot of what we've talked about and, you know, go back up to kind of the summary view. It is from a longtime shareholder who wrote in and said, "Arista is clearly winning in its markets from enterprise to the Cloud Titans, what is the main reason for this? Is it the radical simplification of the solution versus competitors? I sometimes struggle to explain this to my other portfolio managers and people not as close to the market."
This guy wants to tell his colleagues what makes Arista special, and why you guys are successful in the market.
I struggle as well, 'cause the answer is so boring. It's hard to imagine, but deploying networks is difficult, and has been fraught with a lot of frustration for the people who operate the networks. You know, as we sell into either a cloud provider or an enterprise, somebody has to get up every day, run that network, make sure it's secure, deal with upgrades, deal with adds, moves, and changes, and their life has not been that great. Networking was very slow to evolve to a software-based approach with APIs, so you couldn't add your own management stack or do integration on the switches. It was also fraught with a lot of bugs.
A story we get from multiple customers is, "I went to do an upgrade because there might have been a security vulnerability in the kernel, and that upgrade didn't go well. First, the system crashed, and I had downtime. When I finally got up and running, a feature that previously worked no longer operated." We just focus on that problem by the way we've constructed our software, the way we automate the testing. I think the opportunity and challenge for our sales team is, it's easy to say that I have better quality, but everyone says that. You actually have to experience it. They'll encourage a customer to adopt Arista in a use case. "Please call our customer support, call them up, ask them some questions," that's the second ask. It's just a better user experience at the fundamental level.
Yeah. I mean, you're obviously focused on the highest-performing networks, and I'm still kind of struggling to understand how far out to the edge in the enterprise you go. There's a focus on the Forbes Global 2000, who are probably the most likely to have their own data centers, if they do that, or the biggest campuses. I guess just over time, should we expect you to keep moving farther and farther out to edge solutions? You know, as you mentioned earlier, you have so much opportunity just in what you already do. Help me think about how far to the edge you'll go, because to me that implies much more distribution, right? That doesn't really seem to be your focus in the near term, but if we really look over the long term, do you think you'll be?
We, we-
You know, much more?
We see that as more a function of the distribution of the account sizes that we go after. If you go after a Fortune 2000 company, typically they're driving standards to be used globally across all their operating space.
Mm-hmm.
They'll want a provider like us to address as many use cases as possible, so they can reduce the number of vendors that they work with. The edge is definitely important. If you think about healthcare, you have, you know, large hospitals, but they're connected to regional centers. The strip mall down the street from me has a Sutter Health, a little building there, that needs an edge deployment. So it's vital. It's, you know, key to their business. Now, as you go to sort of less global, more regional mid-market providers, there could be a different go-to-market approach, but we're not saturated with that Fortune 2000 yet.
Yep. Just a couple things on the business model to wrap up here. When we think about leverage going forward, you know, 2024, 2025, revenue growth, gross margin expansion, or operating leverage, what do you think will make the most impact on the operating margin?
You know, I think that's difficult to break down. We've talked about gross margin, and really driving our gross margin back up. Part of that's mix, but we're also recovering from some of the supply chain pieces, so there's definitely opportunity there. You know, I don't have any other color-
Yeah
on that.
Yeah. So on Ita, I imagine that the search must have started, you know, for her successor. What's the timeline on the transition?
I think, you know, Ita's committed to helping us through the transition. I don't think we've announced any specifics around the timeline at this point. Liz, I don't know if you have any-
Yeah, I think it was more around the intent to retire in 2024, from the earnings call. She will be helping us find her replacement.
You know, I mean, you've got a very strong balance sheet. You guys generate cash in your business. Inventories seem like they have to come down from a high point. They haven't come down quickly yet, but at some point they really do have to drop, and that'll be cash. I guess, you know, you're the Chief Platform Officer, so maybe capital allocation's not your area, but we have Liz here.
I, I'm more on the-
I mean, what should we think about? Like, are you guys gonna become a bank, or what are you gonna do?
I mean, I think, you know, it's important for us to maintain a healthy balance sheet. We do have some aggressive competitors. We've got to fund the business, and also invest in the business, whether that's, you know, R&D, sales hiring and targeted sales and marketing, et cetera, or M&A. Although, you know, finding that perfect M&A is hard. Everybody's always asked us if we could do something big, and I think bigger is a little bit more of a challenge. I think we've given you some good ideas around our M&A strategy with the types of M&A that we've done, kind of these smaller, tuck-in, network-adjacent deals. You know, because we're really focused on the network, right?
We wanna have, you know, the right to play in those spaces. And obviously, return cash to shareholders, which we do. On the share buyback, we still have some to execute for the current phase, and then we'll obviously work with the board on what that looks like moving forward.
Yeah. Well, great. I really enjoyed talking to you guys for 45 minutes. I learned a lot, and it was interesting.
The Ethernet versus InfiniBand wasn't boring?
No, it wasn't boring. In fact, I think we could have another conversation about it another time.
Yeah.
John, thanks so much. Liz, unless there's anything else you guys wanna say before we go, I think we will call it here.
Appreciate the time and the opportunity, Mike. Thank you very much.
Yeah.
Thanks, Mike.
Same. Okay, thanks to everybody for joining us, and I look forward to seeing you at more firesides. Thanks. Bye.