
BofA Securities Global A.I. Conference

Sep 12, 2023

Operator

Ladies and gentlemen, the program is about to begin. A reminder that you can submit questions at any time via the Ask Questions tab on the webcast page. At this time, it's my pleasure to turn the program over to your host, Tal Liani.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Hi. Good morning, everyone. Thanks for joining us. Over the last two days, we've been hosting multiple companies to speak about the entire AI value chain. We heard from contract manufacturers, we heard from semiconductor companies, we heard yesterday from Cisco about their silicon, and now I'm very pleased to host Jayshree Ullal, President and CEO of Arista, to speak about Arista. There are so many questions I want to ask her, but before I start, I want to welcome Jayshree to our small virtual conference. Thanks for coming, Jayshree.

Jayshree Ullal
President and CEO, Arista Networks

Tal, thank you for having me. As you said, it's been 25 years together. Look forward to many more.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Yes. I can guarantee you it's not gonna be another 25, not for me. I want to start with kind of a high-level question. By the way, I'm telling the audience what I told Jayshree before: we're not gonna talk about the quarter, we're not gonna talk about the numbers. We just want to focus... This is an opportunity to speak with Jayshree about AI, about the readiness of the company, and about her views of the market. I want to talk about the evolution of Arista over the last few years in the context of how you support participation in AI, in generative AI.

Jayshree Ullal
President and CEO, Arista Networks

Yeah. So, you know, I think you all know Arista very much as a pioneer of cloud networking, and we are most known for pushing the envelope of scale in networking, whether it's, you know, the front-end network and the amount of traffic, and the patterns for Leaf-Spine topologies to connect thousands, if not hundreds of thousands or millions, of CPUs, virtual machines, containers, right? But while this has all been going on, there's been an incredible phenomenon that started as recently as last year, and Arista has been working on it with some of our leading customers: what I call Arista 2.0, where we are now building a platform that's not only capable of carrying large workloads in the data center, but really what I call centers of data.

The centers of data may be in a campus, may be in a routed WAN environment, and may be in a branch. But now, given the topic of today being AI, there's another whole interesting area emerging: what I call the back-end network, which Arista has traditionally not participated in. And so I think the next phase of Arista is really building a platform for all of what we've done already for the cloud and Web 3.0 era, but now bringing that to bear as AI clusters in the back end of the network.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

What is the back-end network? As far as I understand right now, the choice of technology is InfiniBand. How do you participate-

Jayshree Ullal
President and CEO, Arista Networks

Yeah

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

in the future?

Jayshree Ullal
President and CEO, Arista Networks

Yeah. Yeah, absolutely. As you all know, InfiniBand's been around 25 years. It's been well recognized, you know, through the InfiniBand Trade Association, which doesn't have a broad set of vendors. In fact, in many ways, it's a vendor of one: NVIDIA. But they have been delivering consistently for several years on high-performance compute. But now, when you look at the role of InfiniBand and Ethernet for AI, first I think it's important to step back and ask: Why is this even relevant? Because there's a massive AI data exchange going on, where the AI workloads and demands on the network are both data- and compute-intensive.

In fact, the workloads are so large, and the parameters of the matrix are so distributed across thousands, hundreds of thousands, sometimes millions of processors, that you have to look at both the large language models and the recommendation systems, LLMs and DLRMs, and how you share all of your parameters across these thousands or millions of processors. And so this, as I said, requires you to do a constant compute-exchange-reduce cycle, and the volume of data that's exchanged is so significant that any slowdown in the network with these expensive GPUs will ultimately impact the applications. A poor network is a poor choice, and as you rightly point out, there are two very good choices.
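To put rough numbers on that compute-exchange-reduce cycle, here is a minimal back-of-envelope sketch in Python. The model size, precision, and GPU count are assumptions for illustration, not figures from the discussion.

```python
# Back-of-envelope sketch of the compute-exchange-reduce cycle described
# above. All numbers are assumed (a hypothetical 70B-parameter model).

def ring_allreduce_bytes_per_gpu(num_params: float, bytes_per_param: int,
                                 num_gpus: int) -> float:
    """Bytes each GPU transmits in one ring all-reduce of the gradients.

    A ring all-reduce moves roughly 2 * (N - 1) / N of the gradient
    buffer through each GPU's link per synchronization step.
    """
    buffer_bytes = num_params * bytes_per_param
    return 2 * (num_gpus - 1) / num_gpus * buffer_bytes

sent = ring_allreduce_bytes_per_gpu(num_params=70e9, bytes_per_param=2,
                                    num_gpus=1024)
print(f"~{sent / 1e9:.0f} GB sent per GPU per gradient sync")
# At a 400G (~50 GB/s) link, that is several seconds of pure transmission
# per step, which is why a slow path anywhere stalls every GPU.
```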

Today, the most commonly used technology, bundled with NVIDIA GPUs, is InfiniBand, but I believe the future very much belongs to Ethernet, and I have never bet on a non-Ethernet technology, although I worked on many of them: ATM, FDDI, Token Ring, to name a few. You know, those worked in the file-print-share environments, but as Ethernet gets stretched and subsumed for AI applications, you need additional capabilities. So in general, I would say you need a mission-critical AI network, and neither InfiniBand nor Ethernet is fully achieving those goals today; both need a lot of optimization going into the future.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Got it. And why, just in simple terms, why did they start AI with InfiniBand and not with Ethernet? And-

Jayshree Ullal
President and CEO, Arista Networks

Yeah, so I think it was very difficult to reimagine a high-speed transit network when the real problem right now is large language models, training, inference, GPUs. So the natural connection with the GPUs from NVIDIA became InfiniBand. And, you know, the upshot here is wire-speed delivery of packets, large synchronized bursts of data, and I would say especially latency, which has been a strength of InfiniBand. But when you go back and look at Ethernet and InfiniBand, even over the last 10 years, historically, Ethernet has always lagged a little behind InfiniBand. When InfiniBand was doing 40G, you know, Ethernet was doing 10G. When InfiniBand went through DDR, EDR, HDR, NDR rates and was always doubling, Ethernet was always behind, and that's changed dramatically in the last year.

Today, you can push the envelope of Ethernet at 100G, 200G, 400G, 800G, and you can see a path to 1.6T. This is what I think makes Ethernet a natural, standards-based ecosystem, with a wide range of capabilities as well as troubleshooting techniques. As you know, the Ultra Ethernet Consortium is on a mission to enhance the capabilities of AI and HPC using Ethernet. The advantage of Ethernet is a no-brainer. It brings broad economics to wide deployments, familiarity of tools, and then obviously we can push the envelope of silicon geometries with Moore's Law, with all of the silicon vendors we work together with.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Got it.

Jayshree Ullal
President and CEO, Arista Networks

One of the other things is that when you were building InfiniBand clusters, you only needed to worry about L2 subnets. Now, as you start to build a back-end network and you build these clusters, you also have to think of the uniformity and connection to the front-end network. The ability for Ethernet to be a routed protocol and run over IP is a tremendous advantage. If you can get all the advantages of a back-end AI high-performance network with Ethernet and connect it seamlessly to your front end, then one plus one is far greater than two.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Right. So how is Arista - how is it gonna play out for you? You're focusing on Ethernet, and you're talking about maybe different flavors of Ethernet. How long does it take? How do you participate in the build-out of AI and generative AI in the intermediate term, and then how do you participate in the longer term? What needs to happen -

Jayshree Ullal
President and CEO, Arista Networks

Yeah, yeah

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

for you to participate in the longer term?

Jayshree Ullal
President and CEO, Arista Networks

Yeah. I think you have to parse the problem and look at it differently in different cases; it's different strokes for different folks. If you're really building a small cluster within a server rack, I don't even know if InfiniBand or Ethernet plays into that. If it's, you know, 100 nodes or so, you're just gonna connect with an internal I/O of some kind, or almost a bus technology, not a network. And that may be PCIe, CXL, or NVLink at the back end, right? But when you start to talk about thousands and thousands of nodes and needing AI at scale, those AI jobs are really going to stress the underlying network, and improving job completion time is very critical.

So Arista's focus has been on making sure we can work with the GPU vendors, and NVIDIA is our friend there, and with the NIC vendors, again coming not from Arista but from different vendors, be it Broadcom, NVIDIA, or Intel. And then really applying the right scale, you know, for the different traffic patterns. And I'll give you a couple of examples. In the 1990s, when we talked about scale, it was just Ethernet with Spanning Tree, because you were mostly detecting loops. In the 2000s, when we talked about scale, you had technologies like MLAG at Layer 2 or ECMP at Layer 3 that allowed you to build scale with active-active paths in the Leaf-Spine.

In the next phase of an AI network topology, you need a heck of a lot more packet spraying and load balancing, where you can allow every flow to simultaneously access all paths to a destination to improve the job completion time, because your entire completion time is affected by the last packet, and that's the worst culprit. So we're doing a lot of work to bring dynamic load balancing and packet spraying, and this is something that's also being endorsed by the UEC, the Ultra Ethernet Consortium. Another thing that's very, very important as we enhance Ethernet is having the right monitoring and visibility techniques.
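A toy simulation of the flows-and-paths point; this is an illustrative model, not Arista's dynamic load-balancing implementation. With a handful of large flows, per-flow ECMP hashing can pile several flows onto one path while others sit idle, and job completion time tracks the busiest path; per-packet spraying keeps every path near the ideal load.

```python
# Toy model (not Arista's algorithm): per-flow ECMP hashing vs
# per-packet spraying across equal-cost paths. Completion time is
# bounded by the most-loaded path -- the "last packet" problem.
import random
import zlib

random.seed(0)
NUM_PATHS = 8
NUM_FLOWS = 16          # few, elephant-sized AI flows: the hard case
FLOW_PACKETS = 10_000

# Per-flow ECMP: every packet of a flow sticks to one hashed path.
ecmp_load = [0] * NUM_PATHS
for flow in range(NUM_FLOWS):
    path = zlib.crc32(f"flow-{flow}".encode()) % NUM_PATHS
    ecmp_load[path] += FLOW_PACKETS

# Per-packet spraying: each packet independently picks a path.
spray_load = [0] * NUM_PATHS
for _ in range(NUM_FLOWS * FLOW_PACKETS):
    spray_load[random.randrange(NUM_PATHS)] += 1

ideal = NUM_FLOWS * FLOW_PACKETS / NUM_PATHS
print(f"ideal load per path : {ideal:.0f} packets")
print(f"ECMP busiest path   : {max(ecmp_load)} ({max(ecmp_load) / ideal:.1f}x ideal)")
print(f"spray busiest path  : {max(spray_load)} ({max(spray_load) / ideal:.2f}x ideal)")
```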

You have to be able to poll not at, you know, millisecond or minute intervals, but really at nanosecond and microsecond granularity, to get all of the logging and visibility and counters and characteristics, because things are moving so fast and furiously. Arista has always been developing features like that for the cloud, and now we're extending that with our EOS for features like AI Analyzer. Network congestion is a key metric, and we always get trapped in this "Okay, Ethernet is not lossless and InfiniBand is," but none of that matters when you look at the aggregate of how many nodes you're trying to support. And, you know, a common incast congestion problem can occur at the last link of any AI receiver when multiple uncoordinated senders are just jamming traffic.

So a good example of that is the, you know, all-to-all AI operation across GPU clusters. And so having the right Ethernet-based congestion control mechanisms and algorithms is critical, so you can spread the load not only across multiple paths, but in conjunction with this multipath spraying have the right virtual output queuing for incast, and then the egress buffer memory and output have to be appropriately balanced. So VOQ fabric and a lot of the things we've done in the cloud will replay here for an AI network as well.
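The incast problem she describes can be sized with simple arithmetic; the sender count and burst size below are assumed purely for illustration.

```python
# Back-of-envelope sketch of incast at the last link of an AI receiver.
# Sender count, burst size, and link speed are assumed for illustration.

SENDERS = 32               # uncoordinated senders in an all-to-all step
BURST_BYTES = 1_000_000    # assumed 1 MB synchronized burst per sender
LINK_GBPS = 400            # the receiver's single ingress link

arriving_bytes = SENDERS * BURST_BYTES
drain_s = arriving_bytes * 8 / (LINK_GBPS * 1e9)

print(f"burst arriving at one port : {arriving_bytes / 1e6:.0f} MB")
print(f"time to drain at {LINK_GBPS}G      : {drain_s * 1e3:.2f} ms")
# In the fully synchronized worst case, nearly (SENDERS - 1) bursts must
# sit in buffers or be dropped -- this is what deep virtual output
# queues (VOQ) are sized to absorb.
print(f"worst-case buffering       : ~{(SENDERS - 1) * BURST_BYTES / 1e6:.0f} MB")
```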

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Got it. We spoke about back-end and front-end networks, and I have two questions. First, do you see a different opportunity for the training and inference portions of AI, or is it kind of the same thing?

Jayshree Ullal
President and CEO, Arista Networks

No, they're definitely different in the envelope of the number of parameters and algorithms you're pushing. But I would say it's not only that. So I talked about small networks being in the server. A good way to look at it is that medium applications like inference may not require as many GPUs and as much of a network, but would still be substantial scale. But the ultimate scale, training billions of parameters and the associated tokens, et cetera, is really in training, and this is where most of our forward-looking customers are focusing. Because if we can solve the training problem, we can naturally solve the inference in smaller networks.

So today, I would say, and I think I mentioned this in earlier calls, we're largely in trials and pilots, where we are really proving that the training algorithms can work across a lossless, congestion-free Ethernet network and map to the scale of their cluster. And the scale of their clusters we've seen anywhere from 1,000 GPUs built in a single tier, for example, with the 7800 AI Spine, to a two-layer cluster, where you can add more GPUs to the scale of 4,000 to 8,000. And of course, eventually, we're gonna be building clusters for training that are, you know, 32,000 to 100,000 GPUs. So the number of GPUs defines the size of the training clusters you want.
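A rough sketch of how those tier and GPU numbers can fall out of switch radix; the port counts below are assumptions for illustration, not a quoted Arista design.

```python
# Rough sizing sketch. In a nonblocking leaf-spine fabric, each leaf
# splits its radix half down (to GPUs) and half up (to spines), so the
# supported GPU count equals the aggregate spine port count.

def max_gpus(num_spines: int, spine_ports: int) -> int:
    return num_spines * spine_ports

SPINE_PORTS = 512  # assumed port count on a high-radix AI spine

print("single tier, 1 spine :", max_gpus(1, SPINE_PORTS), "GPUs")
print("two tiers,  8 spines :", max_gpus(8, SPINE_PORTS), "GPUs")
print("two tiers, 16 spines :", max_gpus(16, SPINE_PORTS), "GPUs")
# -> 512, 4096, 8192: one tier lands near the ~1,000-GPU pilots, and a
#    second layer of leaves reaches the 4,000-8,000 range she mentions.
```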

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Mm.

Jayshree Ullal
President and CEO, Arista Networks

Right now, most of them are in pilots and trials, but I fully expect production in 2025.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Got it.

Jayshree Ullal
President and CEO, Arista Networks

And those will get even larger over time.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

When you grow the cluster size from... I'm making up a number, from 1,000 GPUs to 4,000 GPUs, does it mean that the networking cost is 4x also? Is it linear-

Jayshree Ullal
President and CEO, Arista Networks

Tal, I hope not. But it depends on the design. So the beauty of our architecture is you can build a single-stage spine, an AI Spine, and connect 1,000 GPUs. As you wanna add more GPUs, and again, the numbers will vary, your mileage depending on whether you wanna connect with 400G or 800G, the idea wouldn't be that you just sort of keep adding linear costs. You would add a layer of leaves, AI leaves, to connect those GPUs and get sort of a two-tiered additional aggregation or port density. So it wouldn't be 4x, but certainly you would add more ports, and it can be at least 2x.
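A heavily hedged cost sketch of that answer: the relative port costs, leaf radix, and oversubscription ratio below are all assumptions chosen only to show the shape of the scaling, not actual pricing or an Arista design.

```python
# Cost-shape sketch for the 1,000 -> 4,000 GPU question. Every number
# here is assumed, just to show why spend grows less than 4x.

SPINE_PORT_COST = 3.0        # assumed relative cost, deep-buffer chassis port
LEAF_PORT_COST = 1.0         # assumed relative cost, fixed leaf port
LEAF_RADIX = 64
LEAF_DOWN, LEAF_UP = 48, 16  # assumed 3:1 oversubscription toward the spine

def single_tier_cost(gpus: int) -> float:
    return gpus * SPINE_PORT_COST                 # one spine port per GPU

def two_tier_cost(gpus: int) -> float:
    leaves = -(-gpus // LEAF_DOWN)                # ceiling division
    spine_ports = leaves * LEAF_UP
    return spine_ports * SPINE_PORT_COST + leaves * LEAF_RADIX * LEAF_PORT_COST

ratio = two_tier_cost(4_000) / single_tier_cost(1_000)
print(f"4x the GPUs -> about {ratio:.1f}x the fabric cost in this sketch")
# The added ports land mostly on cheaper fixed leaves, which is how the
# answer lands between "at least 2x" and "not 4x".
```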

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Got it. In today's world, until we migrate to Ethernet everywhere, when the back-end network is being built with InfiniBand, what's the impact on your front-end network?

Jayshree Ullal
President and CEO, Arista Networks

I think today the back-end networks, again, are largely built in silos. It's a cluster that doesn't connect to the front end. 'Cause think about it: how do I connect InfiniBand to Ethernet without substantial loss and latency? It's a gateway function; nobody does that. So I think the clusters in the back end are largely not talking to the front end, because we're still in this mode where there are two different islands. I do think this is why Ethernet will be heavily favored, because once we can solve the load balancing, the monitoring and visibility, and get the congestion control algorithms, you know, on the right track, whether it's, you know, PFC, priority flow control, and deal with all the system-level mechanisms, then it's seamless.

You have no translation, and you can move back and forth between them. You're no longer limited to a Layer 2-only subnet, and your high availability isn't constrained by the number of subnet managers you can support. So everything gets a lot better when you have Ethernet front and Ethernet back. But today, that's not how it's happening. It's mostly silos.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Got it. I wanna maybe speak about the support from the GPU community. You know, it's public information: Google has its own GPU, others are using NVIDIA, AMD is developing, others are developing, and there are even small startups. What is your view on the support for GPUs, and is there any preference, or is there any way that you look at it differently, for example?

Jayshree Ullal
President and CEO, Arista Networks

No, listen, I think NVIDIA has definitely won the first phase of AI systems and solutions with the GPU, and the maniacal focus, not just on one GPU but on the different types they have, built into systems like DGX and HGX, is very remarkable. And as you know, there's a shortage. There's no glut of GPUs here; there's an extreme shortage. So in the foreseeable future, you know, the next one or two years, I think there's pretty much only one major vendor supplying GPUs. However, if I look forward, much like I look forward on InfiniBand versus Ethernet, I think the industry always needs a diverse ecosystem. And as you rightly classified, I think there'll be three types of players.

There will be alternative vendors to NVIDIA, and this is where companies like AMD, and Intel with its Habana line, come in. Then there will be startups. It's difficult for startups to compete, but with the appropriate differentiation, perhaps. Don't rule out our own customers either, many of whom will develop, you know, GPU accelerators or GPUs that are customized for their environment and very committed to price performance for their applications. So I think those are the three categories of XPUs we will see. You know, Arista will be Switzerland on that and looks forward to connecting all of those scenarios with a fantastic AI fabric.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

The other way around: NVIDIA has a competing Ethernet offering in the Spectrum-X networking platform. Is this a concern for you?

Jayshree Ullal
President and CEO, Arista Networks

No, I don't think it's a concern for NVIDIA or Arista. We're gonna be partners on most occasions, but when it comes to the Ethernet switch, we'll have some overlap. But I think building, you know, an AI and general-purpose Ethernet switch isn't easy. We've been at it now for a decade or more, whether it's the hardware or the software stack. And so our applications are going to be both for AI and obviously for the multitude of the other platforms I talked about in the data center, campus, and WAN. And we are very comfortable that ultimately you cannot build an island of an Ethernet switch; you need to build multiple use cases that connect the back to the front, and that's where Arista will shine.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Right. How is 800 G and above connected to our discussion of AI? Is it the only driver, the main driver? How do you view the market?

Jayshree Ullal
President and CEO, Arista Networks

Yeah, that's a really good question, because historically, as you know, these speed transitions took time. I still remember when we were waiting for 10G to happen in the beginnings of Arista, and I think that was a very long tail. It took at least 10 or 15 years, because there wasn't the port density and server connectivity available back in 2008 when Arista was shipping products, and that changed over time. I think the acceleration to 100G, and I'm going to skip 40G because it was kind of neither here nor there, happened a lot sooner. It didn't take 10 or 20 years. It took, you know, more like five years for the cloud, and of course, there's now another five to 10 years with the enterprise.

So when I look at 400G and 800G, in theory, it should take time, but the reason it won't is because I think there are really three speed transitions happening. One is where, you know, the classical enterprises are moving to 100G. Then the cloud providers, building spines and front-end networks and distributing, you know, their centers across different geographies, are starting to do a 400G migration. And usually I would have told you, "Well, the 800G will take time," but now you have this killer AI application for the back-end network, and this is where we envision 800G.

Now, as we start to deploy 400G and 800G in the back end, by the way, that will affect the front end of the spine networks as well, and, you know, that'll have a 20% effect on the 400G network, which will then need higher performance. So I think these three use cases will have a virtuous cycle across them, and AI is definitely a killer application for faster deployment of 400G and 800G.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Got it. When I look at the market today, not AI, just the market data, you know, cloud titans or big data centers: some companies are using white-box switching, and some companies are using branded solutions, mainly from yourself. And we all know who is doing what in the market; there are only four big players. Do you think the appetite for using branded versus white boxes will change, or the way the architecture works will change, with AI versus regular networks?

Jayshree Ullal
President and CEO, Arista Networks

Not really. I think, you know, you can only talk about white boxes when things are well defined and mature in terms of the hardware and software stack. And as you know, you and I have talked about this many times: the discussion of white box comes up a lot, and it is something we actually embrace. We recognize it's there in some use cases, but it isn't there in most complex use cases. And I think AI would be a very difficult thing to white-box at a time when everything is in flux: the performance, the latency, the aggregate scale, the functionality, the UEC standards. So I think until we get to some stability, it's difficult to think of any of these things as white boxes; in fact, quite the opposite.

I think you will see a lot of AI focus on customizing for generative AI, for inference, for very high-performance training models, and I think it'll be at least a few years before we see any kind of white box in the AI world.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Got it. Is Cisco a bigger threat now versus before? We hardly saw Cisco before, especially with cloud titans. They're talking now about, you know, higher levels of orders and backlog with cloud titans. Do you see more of Cisco in the market now?

Jayshree Ullal
President and CEO, Arista Networks

Oh, I always view Cisco as a very respectable competitor. You know, I was there 15 years, and I'm now 15 years with Arista, so we never take Cisco lightly and respect their ability to do things as a very large, dominant company in a lot of market sectors. Specific to cloud titans, I think their strength in optics, especially from the acquisition of Acacia, shouldn't be underestimated. We were partnering with Acacia for a long time. So I definitely think they've always had a presence in cloud titans. And of course, Arista's presence is much stronger.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

That was actually one of my questions: the optical integration. The fact that you don't have in-house optical integration, is it a weakness that you need to strengthen, or can you work without it?

Jayshree Ullal
President and CEO, Arista Networks

I think we can work without it, because we choose to work with best-of-breed optical vendors. Optics, as you know, by its nature is changing all the time, and you have to have a specialized set of expertise for it. We've chosen to work with partners on that specialized expertise. And, you know, a good example of that was at OFC earlier this year, where Arista demonstrated that using our electrical SerDes, instead of just doing co-packaging, we could drive longer distances with reduced power on our switches for long-haul optics, or, you know, medium-haul, I should say, 'cause long-haul could imply hundreds of kilometers. And this was a pretty powerful demonstration of LD, Linear Drive, where you can sort of remove the DSP and push the envelope of capability.

Embracing all of these different optical options rather than locking ourselves into one particular solution, including our own, has always been our thesis.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Got it. Once Ethernet... I love talking to you, by the way, Jayshree, because I can shoot all my questions, and I know that you're gonna answer like an engineer, not like a hired CEO.

Jayshree Ullal
President and CEO, Arista Networks

Yeah, you know... I guess I'll work on answering like a CEO. What's your next question?

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

No, no, no, no. Once Ethernet is in the back end, is it-

Jayshree Ullal
President and CEO, Arista Networks

Yeah

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

... a different product? Are Ethernet switches for AI different than Ethernet switches for front end, or is it the same product?

Jayshree Ullal
President and CEO, Arista Networks

I think they can be either; we don't know exactly and precisely what shapes and forms these will take. There are definitely products like our AI Spine that can be the same, but they can have flavors of functionality and capability that make them more AI-friendly. And then, in other cases, if you're starting to build extremely large clusters, and you're very optimized for job completion time and end-to-end latency, they can also be different products. So I think it can be... we can have both options-

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Got it

Jayshree Ullal
President and CEO, Arista Networks

... depending on the use case.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Talking more about the applications and the networks: we always say AI hyperscalers. Is it just about hyperscalers, or are we seeing, or will we see, an appetite to build AI networks in other parts of the market as well?

Jayshree Ullal
President and CEO, Arista Networks

Well, I think if you're looking for large-size production trials, it definitely favors at least the cloud titans, and I would say maybe some tier-two cloud providers and even some extremely large enterprises. So I think it's still single-digit customers, but they may not just be cloud titans. They would be those who have an appetite to really invest to offer an AI service of some sort. So you gotta be thinking customers with large CapEx and deep pockets, whether it's in the enterprise or specialty cloud or cloud titans. And so we see that as single-digit customers right now that will be very large, and we see them as very meaningful. Now, that doesn't mean the enterprises aren't interested.

I think just about every enterprise will have some sort of a small cluster to prove the thesis on some of these AI applications. But I think the largest ones will be in these three categories: either cloud titans or specialty cloud or extremely large enterprises.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Got it. Will all AI networks look the same, or are we-

Jayshree Ullal
President and CEO, Arista Networks

No, not at all. Not at all. Going back to my small, medium, large: I think if you start with the premise that you want an extremely large, you know, LLM or DLRM with billions of parameters, then you're gonna build... One size doesn't fit all, but you're gonna build something that's AI at scale, AI Ethernet at scale. You're gonna worry a lot about the congestion control, the load balancing, the monitoring, the visibility, not just the hardware but the software. If you're building a small cluster within a rack, you can go to the other extreme and never even have it be a network. So I think the way to look at this is small, medium, large, based on the application and also based on the size of the network.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Got it. We spoke a lot about what I wanted to ask you; I'm just going through my list of questions to make sure I don't have anything left. My last question is about what drives AI. We kind of take it for granted that these networks are gonna be deployed, and we speak about the architectures and technologies, but as an expert in the space, can you take a step back and think about what drives AI? What drives it from an enterprise point of view, from an applications point of view, from a consumer's point of view? Why are these hyperclouds or cloud titans investing in AI so much?

Jayshree Ullal
President and CEO, Arista Networks

Well, I think if you take a step back and ask why customers are building AI clusters: there's an incredible amount of optimization work required to get high FLOPs utilization. If they're putting in all these GPUs, it's for applications that are real-time, streaming, gaming, you know, that need that high FLOPs utilization. If they didn't, they'd just run the traffic like they do today on a regular network. So these AI clusters are very specific to deal with the high bandwidth, the high scale, and predictable latency. Not always ultra-low latency, but predictable latency, right? There's a second part to this, which is not just the application performance, but also the lack of storage bandwidth.

As you increase the number of GPU cores, you know, a lot of these systems don't have enough memory and storage, and if the memory or the storage isn't large enough, you're gonna do frequent checkpointing to get the highest bandwidth. And every few hours, you'll need to dump a checkpoint and deal with different types of mechanisms to do that at 400G, 800G. So AI is not just about optimizing the application and job completion time, but also about the fast and reliable connection between GPUs and the associated memory and storage. There's another thing I think that people miss, which is, you know, in the traditional CPU world, you had well-defined compilers, and you had very good frameworks for that.
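The checkpointing arithmetic is easy to make concrete; the model size and bytes of state per parameter below are assumptions for illustration.

```python
# Back-of-envelope sketch of the checkpointing point above. The model
# size and per-parameter state are assumed for illustration.

PARAMS = 70e9          # hypothetical 70B-parameter model
BYTES_PER_PARAM = 16   # rough: bf16 weights + grads plus fp32 Adam state

checkpoint_bytes = PARAMS * BYTES_PER_PARAM
for gbps in (400, 800):
    seconds = checkpoint_bytes * 8 / (gbps * 1e9)
    print(f"{checkpoint_bytes / 1e12:.1f} TB checkpoint over one {gbps}G link: "
          f"{seconds:.0f} s")
# Dumping ~1 TB every few hours is tolerable at these speeds only if the
# GPU-to-storage path is fast and reliable -- her point exactly.
```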

Today, whether you look at NVIDIA's CUDA or, you know, open-source frameworks like PyTorch, this is an area where there's a significant amount of work that needs to be done, because, you know, relative to AI, you can think of PyTorch as something like a new abstraction over the C language. You can't just have a classic compiler and library; you've got to really map the software ecosystem, you know, to be able to do all of this. And obviously, TensorFlow and Google TPUs are doing similar things. And in these compiler-based systems, which is again why these applications are driving so much optimization, you have to be looking at constant operations, GPU trips, memory trips, et cetera.

So all of these applications have, you know, huge repercussions on the network being mission-critical and high performance, on the memory, on the storage, and on the ecosystem needed to run them. These are all things we did in the CPU world and have taken for granted these last couple of decades, and that's what we have to look forward to.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Got it. Last question, I always ask about the impact of power and physical constraints, so-

Jayshree Ullal
President and CEO, Arista Networks

Mm-hmm.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

and it's so important within AI. What does it mean for the future products of AI on the switching side? You know, we discuss these topics less, but those who design data centers know that this is probably the most important topic for them.

Jayshree Ullal
President and CEO, Arista Networks

Oh, yeah. It's probably one of those, you know, dull, boring topics, but the most critical, as you said, because it's real world. If I just step back and look at what's happening to power: as you start going into 400G, 800G territory, the optics is playing a bigger role in the power. It can be 30% of your switch power, whether it's AI or not, by the way. AI will just make that worse. And this is why Linear Drive and the things we're doing to improve the power of the optics are so critical, you know, to bring it down by half.

The second thing is you don't always need optics if you're within a data center, so there are a lot of different types of non-optical cables we can use to reduce that power in a network configuration. But remember, the network is probably only 10% or 15% of the power contribution. In an AI network, we have to worry about the other 85% to 90%, particularly as you're bringing all these clusters of GPUs together. And we're starting to work with customers that are looking at very, very advanced techniques of liquid cooling, not just ambient, where they have to really worry about the immersion and cooling systems required for these GPUs, which then, of course, has an impact on the network as well. And that is the 90% problem in AI today.
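Her percentages compose into a simple power budget; the absolute cluster size below is an assumption for illustration.

```python
# Power-budget sketch using the rough shares quoted above (optics ~30%
# of switch power, network ~10-15% of the facility). The absolute
# wattages are assumed for illustration.

CLUSTER_KW = 10_000                 # assumed 10 MW AI cluster
network_kw = CLUSTER_KW * 0.10      # low end of the 10-15% she quotes
optics_kw = network_kw * 0.30       # ~30% of switch power in optics
linear_drive_kw = optics_kw / 2     # "bring it down by half"

print(f"network                 : {network_kw:,.0f} kW of {CLUSTER_KW:,} kW")
print(f"optics today            : {optics_kw:,.0f} kW")
print(f"optics with Linear Drive: {linear_drive_kw:,.0f} kW")
print(f"everything else         : {CLUSTER_KW - network_kw:,.0f} kW")
# The GPUs, memory, and cooling dominate -- the "90% problem" she names.
```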

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Got it. One day someone will learn how to operate these GPUs inside an aquarium, submerged in water.

Jayshree Ullal
President and CEO, Arista Networks

You can see them along with the fishies, I guess. Yeah-

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Exactly.

Jayshree Ullal
President and CEO, Arista Networks

That'll be a new museum.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Jayshree, thank you so much. It was a very deep and thorough discussion. I managed to ask all the questions I got from investors and pretend these were my questions. So thanks for the time and the effort, and for the investors: if you have any other question, please don't hesitate to call me. If I don't know the answer, I will forward it to the IR team of Arista.

Jayshree Ullal
President and CEO, Arista Networks

Tal, always a pleasure, and thank you guys for having me, and look forward to connecting again soon.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Perfect. Thank you so much.

Jayshree Ullal
President and CEO, Arista Networks

Take care.

Tal Liani
Head of US Data Networking and Cyber Security Research, Bank of America

Bye-bye.

Jayshree Ullal
President and CEO, Arista Networks

Bye.
