Arista Networks, Inc. (ANET)

Status Update

Nov 9, 2023

Liz Stine
Director, Investor Relations, Arista Networks

Good afternoon, and thank you for joining us for Arista's 2023 Cloud and AI Innovators Analyst event. We are excited to spend time with you this afternoon to outline Arista's vision and strategy for long-term growth and innovation across the business. My name is Liz Stine, and I run the investor relations team here at Arista. Similar to prior Analyst Days, we will hear from our broader executive leadership team, providing an update on the Arista 2.0 evolution, momentum, and strategy. Let's take a quick look at the agenda for today's event. Our President and CEO, Jayshree Ullal, will kick off the event, diving into the continued evolution of Arista 2.0, and addressing our vision and strategy for the future. Anshul Sadana, our Chief Operating Officer, will then take us on a deep dive into our 2023 client to cloud platform innovations.

Our Founder and Chief Development Officer, Andy Bechtolsheim, will then do what Andy does best and talk about the future industry innovations with a specific emphasis on AI networking. Hugh Holbrook will then provide insights into Arista's initiatives on AI, including some examples of our customers' AI networking designs. We will then take a short break mid-event to allow us all to stretch our legs and check our inboxes, and then kicking off the second half of the event will be our Founder and Chief Technology Officer, Ken Duda, walking us through Arista's network as a service offering. And following Ken, our Chief Platform Officer, John McCool, will give us an update on Arista's operations and manufacturing footprint, including a discussion of our hardware platform innovations. We will then wrap up the presentations with our Chief Financial Officer, Ita Brennan, providing an update on Arista's financial outlook and business goals.

To end today's event, we will have the entire executive leadership team join to answer questions in a live Q&A. Now, before we get started, let me quickly go through the safe harbor. During the course of this investor event, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the fourth quarter of the 2023 fiscal year, our longer-term business model and financial targets for 2024 and beyond, including revenue targets for certain market segments by 2025.

These statements also cover our total addressable market and strategy for addressing these market opportunities, including the growth of the AI market and our product strategy for this market; our drivers for growth and diversification; our investment and capital allocation strategy; EOS's architectural advantages and future evolution; product innovation; customer demand trends; supply chain constraints, component costs, manufacturing output, inventory management, and inflationary pressures on our business; lead times; working capital optimizations; and the benefits of acquisitions. These statements are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, which could cause actual results to differ materially from those anticipated by these statements.

These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this event. With that, let's go ahead and kick things off with Jayshree Ullal, our President and Chief Executive Officer.

Jayshree Ullal
Chairperson and CEO, Arista Networks

Thanks, Liz, and welcome to another exciting edition of Analyst Day, filled with a packed agenda, as our leadership will be sharing our vision, strategy, innovation, and business goals. In today's agenda, I will share some of our views on the market and Arista's role in it, specifically diving into our AI and zero trust networking strategy, and I will also give you a preview of our 2024 business goals. To recap our Analyst Day in 2022, we shared our vision for Arista 2.0 to migrate from pure best-of-breed cloud networking products to PICs, or places in the cloud, and now, of course, a data-driven AI platform based on our flagship software stack and network data lake foundation.

This strategy has served us well and is underway, enabling our product success across the enterprise, both as a network as a service and in igniting our modern network operating model based on CI/CD, continuous integration and continuous delivery, principles. It has also helped us achieve market share and business success in a robust data center market environment. Since Arista started shipping products in 2008, the annual bandwidth shipped for the entire data center market has grown a staggering 350-fold. In the past two years alone, annual bandwidth shipped has more than doubled. Arista has now shipped a cumulative 75 million ports in this time frame. 2023 marks the beginning of Arista 2.0. This graph shows our annual revenue for the past decade. From 2008 through 2014, we focused on building our core best-of-breed data center products.

From 2015 to 2022, we superimposed category innovations in adjacent markets such as routing and campus. Arista 2.0, ahead of us, is the age of platform innovations with infused software for AI, WAN, security, observability, Arista-validated designs, and so much more to come. Our performance in the high-performance data center switch market has accelerated in recent quarters, with share percentages in the mid-20s in both ports and dollars. Our share gains have consistently been among the highest of all our cohorts since we entered the market. Now, if you narrow the data center switching market down further to the 100-gigabit-and-up slice, Arista continues to be the market leader with a whopping 40-plus percent of total ports. This is an important metric, as this segment represents over half the dollars and is the highest-growth segment in cloud networking....

Clearly, customers prefer our blue boxes as we coexist with white boxes. Arista remains focused on multi-domain software with architectural superiority based on our single EOS, single CloudVision stack, and this is truly a simple and elegant foundation that our customers have come to appreciate. The power of one distributed software stack across a breadth of LAN to WAN use cases is a compelling advantage. Our state-driven NetDL with AVA for AI ML assist has enabled us to build award-winning client-to-cloud platforms. Our recent enterprise momentum is based on this universal spine with many types of edge leaves across data center, campus, branch, and WAN architectures. As pioneers of cloud networking, three major principles differentiate us, bringing us into the enterprise. First, best-in-class, highly available proactive products with resilience and hitless upgrades at multiple levels.

Second, zero-touch automation for predictive client-to-cloud one-click operation that relies less on human staff and is therefore hands-free and software-driven. Third, prescriptive insights that bring AI, ML, and AVA algorithms for increased security and root cause analysis. This ability to distribute cloud networking across our customer sectors has earned us the top spot in the Forrester Wave as the clear leader in programmable switching, and customer validation in Gartner's Voice of the Customer for campus. Facts and figures speak for themselves, as we have the industry's lowest vulnerabilities in both wireless and switching across the decade. Our peers are 10x higher. Our Net Promoter Score, from a third-party conducted study, is 93, while our peers are significantly below. Our customers demand mission-critical networks and do not tolerate the cost of downtime, which can run to hundreds of millions of dollars.

Customers do not have the staff to deal with quality problems. There's a lot of fatigue in the installed base, and Arista is becoming the preferred gold standard. Generative AI has taken our industry by storm since OpenAI and Microsoft unveiled ChatGPT last November. While compute and storage have driven 10 gig, 100 gig, and 200 gig migrations, AI and data are the killer app for massive non-blocking network bandwidth of 400 gig today, migrating to 800 gig and even 1.6 terabit Ethernet ahead. AI is driven by frequent and intense computational communication, where the performance of AI training on those large language models, or LLMs, is dependent on the job completion time. A good network is pivotal for predictable AI communication.

The network can reduce the TCO of GPUs dramatically, because a 30% inefficiency, or idle state, of GPUs can waste millions of dollars in an AI cluster. There are three important network attributes to consider while building AI networks for these workloads. The first is network scale. AI workloads push collective operations, where all-reduce and all-to-all are the dominant collective types. Today's models are already moving from 1 billion toward 1 trillion parameters with GPT-4, and of course there are others as well, like Google Gemini, the open-source Llama, and xAI's Grok. During this very intense compute-exchange-reduce cycle, the volume of data exchanged is so significant that any slowdown from a poor network can critically impact AI application performance.
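To put rough numbers on the idle-GPU claim (the cluster size and per-GPU price below are purely illustrative assumptions, not figures from the presentation), the stranded-capital math is simple:

```python
def idle_gpu_waste(num_gpus: int, cost_per_gpu_usd: float, idle_fraction: float) -> float:
    """Capital effectively stranded while GPUs sit idle waiting on the network."""
    return num_gpus * cost_per_gpu_usd * idle_fraction

# Hypothetical 10,000-GPU cluster at an assumed $25,000 per GPU, 30% idle:
wasted = idle_gpu_waste(10_000, 25_000, 0.30)
print(f"${wasted:,.0f}")  # $75,000,000
```

Even under conservative assumptions, tens of millions of dollars of accelerator capital can sit idle, which is why network efficiency dominates cluster TCO discussions.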

I'm kind of reminded of the 1990s, when the internet required a simple networking topology, basically a spanning tree, to eliminate loops in the network at a scale of only multi-megabits. Then in the 2010s, multi-pathing with technologies like ECMP or MLAG delivered end-to-end cloud networking at multi-gigabit speeds for the cloud titans. Now, in the 2025 era, AI network topology will allow every flow to simultaneously access paths to destinations with dynamic load balancing at multi-terabit speeds. Think about this: AI will be supporting a radix starting maybe at 10K GPU clusters, going to 100K and even 1 million GPU nodes in the years ahead. And Arista plans to be smack in the middle of this with a non-blocking, scalable fabric with the cross-sectional bandwidth to support these AI workloads.
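As a sketch of the per-flow multi-pathing mentioned above (the hash function and field choices here are illustrative; real switches use vendor-specific hardware hashes), classic ECMP picks one of N equal-cost uplinks from a hash of the flow's 5-tuple. Two large flows can hash onto the same link, which is exactly why AI fabrics are moving toward dynamic load balancing:

```python
import zlib

def ecmp_pick(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              proto: str, num_paths: int) -> int:
    """Per-flow ECMP: hash the 5-tuple, pick one of num_paths uplinks."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % num_paths

# All packets of one flow take the same path, so ordering is preserved...
path = ecmp_pick("10.0.0.1", "10.0.1.9", 49152, 4791, "UDP", 16)
assert path == ecmp_pick("10.0.0.1", "10.0.1.9", 49152, 4791, "UDP", 16)
# ...but two distinct elephant flows may hash to the same uplink,
# congesting one link while others sit idle (a flow collision).
```

Dynamic load balancing and packet spraying, discussed later in the session, relax the one-flow-one-path constraint to avoid those collisions.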

Key to AI job completion is reliable and rapid bulk transfer from source to destination. The AI application is interested not in individual packets, but in when the last part of the message arrives at the destination. Therefore, message latency is far more important than packet latency. Flexible ordering mechanisms use all the Ethernet links to guarantee end-to-end predictable latency and communication. In AI networking, congestion commonly shows up as an incast problem. It can occur on the last link to the AI receiver when multiple uncoordinated senders simultaneously send traffic to it. To avoid hotspots and flow collisions across expensive GPUs, algorithms are being defined to throttle, notify, and evenly spread the load across multiple paths to improve utilization of these expensive GPU clusters. Here's a contrast of InfiniBand, Ethernet, and improved Ethernet as defined by the Ultra Ethernet Consortium, or UEC.
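The packet-versus-message distinction can be made concrete with a toy serialization model (all numbers below are assumed for illustration): the collective finishes only when the last byte lands, so spreading one large message across every available link cuts message latency even though per-packet latency is unchanged:

```python
def message_latency_us(message_bytes: float, link_gbps: float,
                       links_used: int, per_packet_us: float) -> float:
    """Time until the LAST byte arrives: per-packet latency plus serialization.
    link_gbps * 1e3 converts Gb/s to bits per microsecond."""
    serialization_us = message_bytes * 8 / (link_gbps * 1e3) / links_used
    return per_packet_us + serialization_us

# A hypothetical 100 MB collective shard over 400G links, 2 us packet latency:
single = message_latency_us(100e6, 400, 1, 2.0)    # one link:  2002.0 us
sprayed = message_latency_us(100e6, 400, 16, 2.0)  # 16 links:   127.0 us
```

In this toy model the per-packet latency term barely matters; nearly all of the message latency is serialization time, which is what multipath spraying attacks.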

Ethernet has been leveraging open standards and routing protocols over the last five decades, with a distributed control plane to avoid a single point of failure for the entire network. Ethernet, together with IP, scales really, really well to determine best paths and reconverge onto new paths. InfiniBand, defined by the InfiniBand Trade Association, is typically a vendor of one. Its routing is controlled by a single subnet manager, and if that goes down, no new connections can be made for the entire subnet. Now, this may be okay for smaller networks, but it is likely to be very brittle for large AI clusters. Ultra Ethernet promises to be efficient, utilizing packet spraying techniques. Packets can be evenly distributed across the links without worrying about maintaining strict ordering on the network. Furthermore, flow-based congestion control throttles the sender when there's network congestion.

It can also improve dynamic flow control by operating across multiple paths. In sensitive deployments, Ethernet and Ultra Ethernet have standards-based encryption capabilities and a variety of segmentation approaches. Ethernet is simply getting better and better for AI networks. To quote Bob Metcalfe, the co-inventor of Ethernet 50 years ago, "I will not bet on Ethernot technologies." Let's take a look at the anatomy of an AI compute node to better understand Arista's role in networking. First, there's typically a two-, four-, or multi-socket CPU with an OS. Not only is the CPU the basis for front-end cloud networks for queries, checkpointing, et cetera, but it's also the back-end control plane, effectively a coordinator. The centerpiece of the back end is a fleet of GPUs for data movement and crunching.

The collective operations are generated and coordinated by the GPUs to handle all these model parameters for training. The NIC and DPU are the transmit and receive paths from the GPU to the network via PCIe or CXL. Finally, there's the top piece, the AI spine, which is the cornerstone of the leaf-spine network for large-scale collectives across thousands of GPUs. Shown at the bottom is a native bus to connect tens of GPUs and make them look like one for scale-up tensor-parallel communication. The most common connection today is NVLink from NVIDIA, though Intel Gaudi uses Ethernet, and AMD is defining an open Infinity Fabric. Arista's AI spine on the top enables the data-parallel scale-out dimension to improve training times for large LLMs, supporting massive ingestion of training data.

Arista today is involved in multiple AI trials in 2023, with early adopters leading to pilots in 2024 and, hopefully, production in 2025. Let us review some of the most common customer use cases. A popular one is the 7800 AI spine as a back-end cluster, up to 576 ports, or GPUs, in a single chassis. With a pair of 7800s, you can directly connect more than 1,000 GPUs. You can also expand to a two-tier AI leaf and spine with 16-way ECMP to support close to 10,000 GPUs in a two-tier topology. In 2024, we'll be showcasing a very promising technology, the new single-stage Distributed EtherLink Spine. By harnessing the power of Ethernet and BGP for massive AI clusters, you can build clusters of 10K to 32K nodes or even more.
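The port arithmetic behind those scale points is straightforward (idealized, fully non-blocking math; real designs reserve ports and vary by configuration):

```python
CHASSIS_PORTS = 576  # 7800 AI spine ports, assuming one GPU per port

# Single-stage: a pair of chassis directly attaches over 1,000 GPUs.
pair_capacity = 2 * CHASSIS_PORTS        # 1152 GPUs

# Two-tier leaf-spine with 16-way ECMP: 16 spine chassis worth of leaf
# uplinks, matched 1:1 by GPU-facing leaf ports when non-blocking.
two_tier_capacity = 16 * CHASSIS_PORTS   # 9216 GPUs, "close to 10,000"
```

The same counting shows why radix matters: every extra spine port or ECMP way multiplies the GPU count the fabric can host without oversubscription.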

As you can see, AI designs come in all shapes and sizes. A choreographed back-end communication cluster can operate and move AI data across multiple GPUs. For small GPU counts, a single small network may be based on simple CXL, PCIe, or NVLink. For medium GPU clusters, a rack-level AI leaf-spine does the job. Then, for enormous AI GPU clusters, a single-stage Distributed EtherLink spine can be the answer. Multi-tier AI topologies of 100,000 to 1,000,000 GPUs can also be constructed, and this will be discussed more by Hugh and Andy later today. There are many estimates in AI networking. We expect this, though, to be approximately a $5 billion TAM in 2027 for AI Ethernet, and we hope to earn our fair share of it.

Switching gears: as we build a client-to-cloud platform, our customers are overwhelmed by the number of vendors and tools that sprawl across the network, costing millions of dollars. These high-touch operations are manual, costly, and incapable of handling high-speed access. Multiple consoles and layers of security are creating a widespread disease of consolitis and securitis, and yet we wake up to breaches every day. Why? The increasing focus on zero trust drives the need for granular trust zones to limit lateral movement and reduce the attack surface. The dynamic and distributed nature of modern IT networks makes it difficult to implement ironclad policies with traditional firewalls. The advent of virulent threats, the collapse of the perimeter, and the lack of east-west protection or visibility are problems we must overcome. We must challenge the status quo and these silos.

Arista plans to address and disrupt security by building it into the network over the next several years. To appreciate how, we must know what happened in the past. There was a clear definition and separation between the inside and the outside, and a strong set of perimeter controls like firewalls and VPNs meant to keep attackers on the outside. Well, two things have changed. First, enterprises have moved to a highly distributed model, including IoT and cloud services. Second, most attacks tend to leverage inside credentials and legitimate applications that are already deployed. Zero trust architectures of the future are mandatory to defend this kind of environment. As defined by CISA, the Cybersecurity and Infrastructure Security Agency, microperimeters are important. Until now, firewalls may have been fine as perimeter-based compliance tools, but they do not scale to address the granularity of defense in depth.

Arista has mapped the CISA reference model to each building block, as shown here. Network segmentation maps to MSS. Threat mitigation maps to NDR. Encryption maps to TunnelSec or MACsec. Visibility maps to our CloudVision with the DANZ monitoring fabric. Automation maps to our CI/CD principles, and finally, compliance maps to our latest CloudVision AGNI for identity controls. At Arista, we believe the network infrastructure is the ideal and often practical place to build and enforce these microperimeters for zero trust networking.... The network is uniquely positioned to deliver zero trust. We aim to seamlessly integrate our MSS with network admission control via CloudVision AGNI for continuous threat detection and response based on sensors in the switches. This unifies the microperimeters with holistic feedback for real-time risk assessments.

ZTN can also extend client to cloud across the WAN with real-time data-in-transit encryption, called TunnelSec, and built-in observability with our NetDL and AVA insights. This morning, we announced that our ZTN underlays can work within and between clouds, together with best-of-breed security overlays; in particular, a key partnership with Zscaler to provide holistic insights for client-to-cloud security. Arista's ZTN strategy and value is simple and elegant: the switch is the key enabler of identity, firewall, observability, data in transit, as well as threat mitigation. We view this as an incremental $5 billion TAM in 2027, with layered options upon our EOS campus platforms to reduce attack surface and minimize breach impacts. Okay, so you've heard a lot of views on my strategy, and there's much more to come from the experts ahead. But what does this all mean to our business?

First, we are committed to our enterprise campus goal of $750 million by 2025 that we set back in 2021. We believe the post-pandemic cognitive campus validates Arista's differentiation versus the legacy vendors, and we are pleased with our progress here. We expect customers to deploy production AI networks in 2025. Therefore, we're signing up for a new goal of at least $750 million in AI networking revenue in 2025. Note, these AI goals are not multi-year numbers, nor orders, and do not include front-end cloud networks or long-haul optics. After three years of outsized growth (27% in 2021, over 48% in 2022, and 33% in 2023), we expect 2024 to moderate to double-digit growth of 10% to 12%. We're now modeling approximately $6.5 billion in revenue for 2024.

More details are forthcoming in Ita's CFO presentation. Our TAM in 2027 has now increased to $60 billion, from $50 billion-plus last year. We expect our five-year (2022 to 2027) revenue CAGR to moderate from historical 20% levels to a respectable mid-teens percentage, given our larger revenue baseline. It took Arista more than 12 years to exceed $5 billion in revenue in 2023, overcoming many obstacles and challenges, from the recession as a startup, to litigation, to the pandemic, to the supply chain crisis, and now macro uncertainty with unnecessary wars. We aspire to achieve $10 billion in revenue in approximately half the time it took us to achieve our first $5 billion. This, of course, is not a firm forecast, but more an aspiration to continue the Arista 2.0 journey toward both consistent profitability and growth. Thank you.
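As a sanity check on the aspiration (our own arithmetic, not company guidance): doubling $5 billion to $10 billion in roughly half of 12 years, about six years, implies a growth rate consistent with the mid-teens CAGR mentioned above:

```python
# CAGR needed to double revenue in ~6 years: solve (1 + r)**6 = 2 for r.
implied_cagr = 2 ** (1 / 6) - 1
print(f"{implied_cagr:.1%}")  # 12.2%
```

A sustained rate anywhere in the low-to-mid teens gets there within the stated window, so the doubling target and the CAGR outlook are mutually consistent.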

I'm indeed proud of the Arista team for our accomplishments, unique culture, and trajectory. We are a talented and cohesive leadership team, constantly adapting and innovating for the evolving landscape in the typical Arista way. It is an exciting time for us, and we couldn't be more appreciative of your enthusiastic support. We are entering the golden age of Arista 2.0, bringing innovations in our software stack, AI, security, and observability across all our client-to-cloud platforms. To describe this more, Anshul Sadana, our Chief Operating Officer, will dive into our product portfolio and platforms.

Anshul Sadana
COO, Arista Networks

Thank you, Jayshree. Hello, everyone. I'm here to talk about Arista platforms. Our client-to-cloud platforms have come a long way. We started with the data center, with one operating system, Arista EOS, with a state-based architecture, and now a network data lake around it to stream data and analyze it in real time. The same architecture now applies to our campus platforms, our routing platforms, many of our software offerings, and even cloud connectivity. We have one EOS and one CloudVision for all of our use cases. This simplicity and elegance stand apart from the rest of the offerings. We started with our universal cloud network designs: layer two for small scale, layer three for large scale, or VXLAN and EVPN for very large scale, with segmentation for VMs, containers, and hosts.

The same leaf-spine designs are also used by our customers for a cluster, for front-end or back-end, within a data center to interconnect multiple clusters together, or within a region to connect multiple data centers together. These can be as large as gigawatt regions now. We've taken the same design approach, the leaf-spine, and applied it to campus. The only difference here is the leaf switches or the leaves are different products. These are leaf switches with power over Ethernet or our Wi-Fi access points. They all connect into a campus spine that can be layer two, or layer three, or layer three with VXLAN and EVPN for segmentation. These industry standard and non-proprietary network designs have enabled many of our customers to quickly scale, automate, and interoperate our products in their campus deployments...

Along with AGNI and CloudVision, we have managed to integrate into the rest of the infrastructure within the campus environment, which may be a bit different compared to a data center. Our customers operate the network, especially in the enterprise, through CloudVision. The Arista CloudVision experience is unique. It's a single platform that gives you visibility for provisioning, monitoring, compliance, and security, all from one place. Now, let me talk about our portfolio expansion in the last year. 2023 has been a busy year. We've doubled our spine offerings, our AI products, and our routing portfolio. We've introduced over 30 new fixed and modular products in the 7280 and 7800 series. We've added support for TunnelSec, which provides tunnels that are fully encrypted end-to-end, independent of what's in the middle. We've added offerings for dense 400 gig DWDM products.

And the newer products also bring a 50% reduction in power on a per-100-gigabit basis. Cloud-grade routing now scales from 1 terabit to 460 terabits per second on a single product, with a single EOS and all of the same consistent features across the board. In addition, we've expanded our data center leaf products as well, with five new silicon platforms, including the latest Trident 4 and Tomahawk 4 products. These are necessary leaf switches for AI and HPC applications, helping our customers transition from 100 to 200 to 400, and now 800 gig. We now offer seamless 10 gig, 25 gig, 50 gig, and even 100 gig SFP. That's single-lane, single-channel 100 gig SFP on some of our products. That's the fourth generation of these leaf switches for different compute connectivity.

It's a very comprehensive portfolio, as you can see, going from 1 gig to 800 gig, again, with the same single EOS. Quite often, we get asked about build versus buy. This is in the context of white boxes. As you know, companies that often delve into these projects are also our largest customers. We've had years of co-development partnerships with them. As an example, we partnered with Meta on some of the co-development of hardware, like the Arista 7368, which was introduced in 2018 with 128 by 100 gig, and Meta introduced their version, Minipack, with 128 by 100 gig. Both products interoperate, both products can run either EOS or FBOSS, and our customers get the 2-by-2 matrix they were looking for. This has continued.

We introduced the next generation of products a few years ago with the Arista 7388, 128 by 200G, and Meta did the Minipack 2. We've continued to partner with Microsoft on SONiC. As a result of this, and as a result of our execution, the Arista value proposition competes fairly well with white boxes. Designing next-generation products is actually getting harder, not easier, and our customers very much understand that. As an example, in the last decade, we've gone from 64 ports of 10G, a 640 gig switch, to now 51 terabits in one switch. The port speed has gone from 10G to 800G. The system power has gone from around 100 to 120 watts to now about 1.6 kilowatts per switch. In addition, the complexity keeps increasing.

As an example, take the PCB layer count. We used to have only 20 layers in the 2012 timeframe, and now we are over 40 layers, touching about 48 layers in the one PCB for our 51-terabit switch. We do well in delivering on these advanced use cases, whether it's cutting-edge technology, time to market, more routing features, and so on. This year, we also introduced the Arista WAN routing system. This is a self-healing, over-the-top, traffic-engineered WAN network. It runs using Arista EOS and CloudVision, helping our customers connect campus to campus, campus to data center, site to site, and site to cloud, all with just one architecture, all operated with one CloudVision. 2023 has been a busy year for optics development as well: for 800 gig, for AI, for DCI, and for lower-power deployments.

As an example, we added 400G ZR+ optics. These optics enable long-haul connectivity of over 1,000 kilometers for our customers, using simple pluggable optics on both ends. We also introduced our linear pluggable optics, which reduce power by 50% on a per-module basis for 800G. 2023 has been a great year for the 7130 portfolio as well. We've introduced 25G in our ultra-low-latency offering. These switches come in different flavors. Some are FPGA-based, with Layer 2 and Layer 3 features at down to 130 nanoseconds of latency, significantly faster than the previous generation, or Layer 1 switching at 5 nanoseconds at 25G. These new products will allow both exchanges and our trading customers to upgrade their infrastructure from 10 to 25, all powered by EOS. We've also expanded our campus portfolio...

We introduced 15 new fixed and modular products this year. The portfolio is quite complete. We have desktop switches, entry-level PoE switches, enterprise-class PoE switches, and modular switches for very large enterprises as well. We've increased the PoE power delivery from 60 watts to 90 watts for some demanding applications. We're designing for our customers' very dense Wi-Fi deployments, both Wi-Fi 6E and now Wi-Fi 7. Our campus deployments are rich and cognitive, running a single EOS and CloudVision. The Arista cognitive enterprise vision continues from a best-in-class reliable architecture, to zero-touch operations, to now zero trust networking. This is driven by our Network Data Lake and AVA-driven insights, providing visibility and security to our customers' operations.

Talking about AVA: the Arista PoE switches are the only ones we know of in the entire market that integrate NDR sensors directly onto the leaf switches in hardware and stream that data in real time to the AVA nucleus engine for back-end analytics, using our AI/ML algorithms to provide threat detection within our customers' campus networks. We've also increased observability for our customers. With our universal cloud network designs, you can run a great network. In addition, right from there, you can mirror into the DANZ monitoring fabric all of the flows and all of the traffic that needs to be analyzed. We have several smart nodes available, such as a packet recorder or an analytics node, that can provide visibility to the customer or steer that traffic to different tools, be it SecOps or NetOps, for end-to-end monitoring of the environment.

In summary, 2023 has indeed been a busy year. We've introduced over 50 new platforms and lots of innovation, be it for the AI spine, for cloud-grade routing, for cloud and enterprise data centers going from 100 to 400 to 800 gig, for the cognitive campus going from 10 meg to 100 meg to 100 gig, or for ultra-low-latency and HPC applications, with a lot more automation and software on top to make this infrastructure easier to operate, more secure, and more observable, all at the same time. With all this innovation, we are well positioned in multiple growth areas. Thank you. And with that, I would like to welcome and invite Andy Bechtolsheim, our Chairman and Chief Development Officer, to come and talk to us about the future of AI.

Andy Bechtolsheim
Chief Development Officer, Arista Networks

Thank you, Anshul. There has been unprecedented growth in the size of large language models, with the largest models today exceeding 1 trillion parameters, a 1,000-fold increase from the 1 billion parameter models of just 3 years ago. The trend toward even larger and more capable models shows no sign of abating, but just as importantly, it takes far too long to train these large models today. Reducing training times by a factor of 100 would be highly desirable, which would require a 100-fold increase in the performance of future AI clusters. Getting to this 100-fold increase in performance requires increasing both the speed per GPU and the number of GPUs in a cluster. AI cluster sizes have historically quadrupled every generation, and we expect that trend to continue.

GPU chip performance has historically increased by a factor of 2x to 3x every generation, so that is roughly 10x over the next two generations. Together, the combination of higher chip performance and larger cluster size is expected to deliver a 100-fold increase in compute performance over the next two generations. Now, turning to networking: the fabric bandwidth for these future AI clusters also needs to grow by the same amount to keep up with the increase in accelerated compute performance. In addition, delivering the best performance for AI applications requires the most efficient network architecture, one that avoids congestion, allocates bandwidth fairly to minimize tail latencies, and offers the ultimate in reliability and availability. These AI networks are larger and more capable than any network previously built.
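The compounding behind the 100x claim can be checked directly (taking 2.5x as an assumed midpoint of the stated 2x to 3x per-generation chip gain):

```python
chip_gain_per_gen = 2.5     # assumed midpoint of the stated 2x-3x chip gain
cluster_gain_per_gen = 4.0  # cluster sizes quadruple each generation

per_generation = chip_gain_per_gen * cluster_gain_per_gen  # 10.0x
over_two_generations = per_generation ** 2                 # 100.0x
```

Since fabric bandwidth must scale with cluster compute, the same two factors set the growth target for the network.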

To address these requirements, we are developing a portfolio of new products that will support hundreds of thousands of AI chips in the most efficient way. We strongly believe that Ethernet provides not only the most scalable and most reliable, but also the best performing network fabric for large AI clusters in delivering the lowest end-to-end latency at the application level. These AI fabrics differ from traditional networks in three key areas. First, they're much flatter than traditional networks to minimize congestion and latency. Second, they need to provide the utmost in reliability, since any network outage that impacts the AI application is very expensive. Finally, these networks require the highest speed switch silicon, combined with the most efficient optics, to offer the lowest cost and lowest power per bit.

Between the products that we are already shipping today and the new products we are developing, we expect to have the widest range of AI-optimized switching solutions in the market, addressing AI applications and customers of all sizes, ranging from AI clusters with hundreds of GPUs to thousands, tens of thousands, and hundreds of thousands of GPUs. All of our new products will support LPO, or linear pluggable optics, which use about half the power of traditional DSP-based pluggable optics. This will significantly reduce the power and cost of large AI networks. We also expect rapid adoption of 400 gig Ethernet and, beyond that, 1,600 gig Ethernet in the AI networking market. This slide has a forecast from the market research firm Dell'Oro Group that projects that the vast majority of AI networks in 2025 will be based on 400 gig Ethernet, and by 2027, on 1,600 gig.

This represents the most rapid pace of adoption of any new port speed in the history of Ethernet. There are, of course, multiple choices for AI back-end network fabrics. Besides Ethernet, this includes single-vendor solutions such as NVLink and InfiniBand. We expect proprietary and standard solutions to coexist in the market, but standard solutions to prevail at scale. In particular, we see great momentum around the Ultra Ethernet Consortium, which is defining a best-in-class network adapter for AI applications. While we are not in the network adapter business ourselves, I wanted to highlight the importance of this effort, which will enable Ethernet to deliver best-in-class performance for AI applications. In summary, the accelerated computing market is growing at levels that represent a tectonic shift in the industry, a once-in-a-generation inflection point in computing and in the broader cloud infrastructure market.

Enormous investment is going into this new AI infrastructure, with large new data centers being built purposely for AI, that will house hundreds of thousands of GPU chips. AI chips and cluster sizes are scaling rapidly with an expected rate of 10x for each generation or 100x for the next two generations. Our goal is to deliver the broadest portfolio of solutions for this market that will provide the highest scalability, the highest reliability, and the best performance for AI clusters and applications. And with this, I would like to introduce Hugh Holbrook.

Hugh Holbrook
Chief Development Officer, Arista Networks

Thank you, Andy. That was fantastic. I'm excited to be here to talk to you again. I want to talk about what's happening in networking related to AI, how we're using networking for AI, and the features that we're building around AI in the coming year. It's very exciting. Looking back from a year ago to today, it feels like it's been a sea change. As you all know, I'm sure, ChatGPT and GPT-3 entered the public consciousness on November 30 of last year. There has been a tremendous amount of interest in AI in the media, in business, and among our customers. Everybody wants to know about it. It's having an impact on networking. It's very exciting. Today, I want to talk to you about how that is playing out at Arista, what the role is for Ethernet in AI, and why it's compelling.

I want to talk a little bit about the Ultra Ethernet Consortium, some advantages and what we're doing to make EOS compelling for AI, the platforms that are coming next year, as well as the AI network designs that are enabled by those platforms. Lastly, I want to talk about the Distributed EtherLink Switch and give you a sneak preview of some exciting technology that's coming next year. So starting out with what's happening with Ethernet and AI. We're continuing to see Ethernet show tremendous advantages for artificial intelligence, for a number of reasons that are really quite compelling to our customers and to the market. Ethernet for AI, as in all use cases, has a very deep multi-vendor ecosystem of NICs, switches, PHYs, optics, tools, and testers. It's widely known and widely understood, with training materials and books, and it's operable, manageable, debuggable, and monitorable.

People who know Ethernet know how to operate it and know how to run IP. Ethernet has compelling scale in conjunction with IP and IP routing; it has effectively unlimited ability to scale to large data center networks. And then there's a very strong, proven track record of innovation in Ethernet on all of the above: switches, PHYs, transceivers, optics, features, and functionality, at the Ethernet link layer, in routing, and in manageability. All of these things combine to make it a very compelling offering for AI, and these advantages are even more important for the large-scale networks that people are trying to build out this year and in 2024. Looking at the Ethernet AI ecosystem and where it's at right now, in 2023, there's a ton of accelerators from lots of different parties... startups, hyperscalers, vendors producing them, and there's no sign of a slowdown.

We see AI moving toward Ethernet, and we think this will continue going forward. One development that's happened since the last time we spoke is the arrival and announcement of the Ultra Ethernet Consortium. The Ultra Ethernet Consortium is an open standards organization founded by silicon and system vendors and cloud service providers. You can see the list of steering committee members here on the slide; its mission is to publish open standards to enhance Ethernet for AI and HPC. In my opinion, the key deliverable of the Ultra Ethernet Consortium is a new transport protocol that provides a modernized version of RDMA for the workloads of the future. Today, what we see in our customers is that load balancing is a, if not the, key problem, for which multipath delivery is a critical solution.

The Ultra Ethernet Consortium is developing a transport protocol that provides this, as well as out-of-order packet delivery. So packets show up, they're delivered directly into memory without waiting for the previous packets that may or may not have been received. It uses modern congestion control, building on the lessons that have been learned over the history of IP and TCP over the last 25 years. It's self-configuring, efficiently handles retransmits, ramps quickly to high speed, avoids incast, and lastly, it's designed for scale to very large systems that people want to deploy for AI clusters. The timeline of the UEC, it was announced in July. New members are joining now as we speak. The first public specs will be probably early 2024, with products to follow. The UEC is really here to try to make Ethernet even better. We have wide deployments of Ethernet for AI.
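As a toy illustration of the out-of-order direct placement idea just described, a sketch of the concept rather than the UEC wire format, each packet can carry its target buffer offset so the receiver writes it into memory immediately, without waiting for earlier packets:

```python
# Toy model of out-of-order direct placement (concept sketch only, not
# the UEC specification): each packet carries its target offset, so the
# receiver places data into the buffer the moment it arrives.

def place_packets(packets, total_len):
    """Write (offset, data) packets into a buffer in arrival order."""
    buf = bytearray(total_len)
    for offset, data in packets:      # packets may arrive in any order
        buf[offset:offset + len(data)] = data
    return bytes(buf)

# arrival order is scrambled, yet the reassembled buffer is correct
pkts = [(7, b"world"), (0, b"hello"), (5, b", ")]
print(place_packets(pkts, 12))  # b'hello, world'
```

Because no packet waits on a predecessor, a lost or delayed packet only stalls its own retransmit, not the whole stream.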

The UEC is a developing technology that's coming that will improve Ethernet for the AI workloads of tomorrow. So now I want to talk a little bit about EOS and AI, the relationship between the two, and the enhancements we've made in EOS and in the software in our products to make them better for AI and to improve the experience that our customers see. Looking at a very high level at the workloads that we see for AI, nothing has changed since I spoke to you around a year ago. GPUs are still the fastest traffic sources, bar none. They send a ton of traffic really fast. They send traffic over a small number of flows, and GPU workloads tend to be synchronized.

So all the GPUs in some job, a training job, training ChatGPT, or Llama 2, or DLRM, all start sending traffic simultaneously in a burst, which puts a lot of demand on the network. Moreover, the applications are structured such that the performance of the whole application is determined by the performance of the slowest flow. None of this has changed. It continues to produce very, very demanding traffic patterns on the network and demands a lot of functionality out of the switching. So some of the features that we are focused on for AI networking, first and foremost, due to the high-performance workloads, the high rates of traffic that AI training and inference generate, we're focused on going faster, providing you the best silicon as soon as we can. Load balancing is a big problem, as are support for RDMA, visibility and telemetry, and then hitless upgrades.

I'll talk about each of these. Looking forward from 2023 into 2024, we expect J3 AI and Tomahawk 5 silicon to be the cornerstone of our products next year. 400 gig GPUs and NICs are mainstream today; in 2023, we see wide deployments of them for AI clusters. In 2024, we think we're gonna have 800 gig products, and we'll see our customers moving into 800 gig technology. We expect these deployments to go quickly. With anything AI, once something is available, people want it, and they want to deploy quickly. There's a lot of advantage to moving to the new technology.

Because of this, stability, quality, and the feature set that our customers need, having all of that ready at day one is really critical, and our EOS software team is structured around providing that: quality first and foremost, but also stability and the AI feature set along with it, so that customers have the features they need to deploy the networks that let them run the applications that they have. For AI, I think load balancing is really the key problem, and the issue, again, is a side effect of the structure of AI jobs, where one hotspot in the network can slow down the whole job.

And so if you see here on this picture, there are two flows sending to the GPU on the right, or the system on the right, and if they collide on the wrong node, we end up with congestion; the goal is to spread those across both of the spines. AI, again, is uniquely challenging because of the structure of the workloads. We have developed a suite of features in EOS for load balancing to improve the RDMA and RoCE workloads that we see today. They're listed here on the slide. These are technologies that avoid congestion in AI traffic and meaningfully improve the performance of AI applications. Because this is such an important problem, we're focused a lot on this. We've learned a lot from the AI deployments that we have so far, and we're continuing to work in this area.
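To see why a handful of large AI flows can collide like this, here is a toy model of classic per-flow ECMP hashing. This is an illustration of the general technique, not EOS's actual hashing algorithm, and the addresses and port numbers are invented:

```python
# Toy per-flow ECMP: hash the flow's 5-tuple fields and pick an uplink
# by modulo. With only a few elephant flows, the hash can easily land
# two of them on the same uplink, congesting it while others sit idle.
# (Illustrative only; not EOS's actual load balancing implementation.)
import zlib

def pick_uplink(src, dst, sport, dport, n_uplinks):
    """Deterministically map a flow to one uplink, like hardware ECMP."""
    key = f"{src}|{dst}|{sport}|{dport}".encode()
    return zlib.crc32(key) % n_uplinks

# four GPU flows, two spine uplinks: count flows landing on each uplink
flows = [("10.0.0.1", "10.0.1.1", 4791, 4791),
         ("10.0.0.2", "10.0.1.2", 4791, 4791),
         ("10.0.0.3", "10.0.1.3", 4791, 4791),
         ("10.0.0.4", "10.0.1.4", 4791, 4791)]
load = [0, 0]
for f in flows:
    load[pick_uplink(*f, n_uplinks=2)] += 1
print(load)  # an uneven split here means one hot uplink and congestion
```

With many small flows the hash averages out, but AI jobs send a few huge flows, so an uneven split is likely and costly, which is why the EOS load balancing features described above matter.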

Nearly all of the AI applications that we see are based on RDMA, and this is specifically RDMA over Ethernet, or RoCE, as it's known. In RDMA traffic, the flows are short, so they need to ramp to wire rate quickly, and they have to throttle quickly under congestion. There are two specific technologies in Ethernet and IP that are critical to allowing RoCE to do this, namely explicit congestion notification and priority flow control, ECN and PFC. EOS has a number of proven, deployed optimizations for both PFC and ECN that enable RDMA to perform well. We're shipping those now, and we're continuing to enhance them and support them on the new higher speed 800 gig technologies that are coming next year. Visibility is probably the next most important problem for AI workloads, because of the speed at which traffic happens and how quickly it changes.

Problems occur on very short time frames, and network problems don't necessarily manifest themselves as packet drops. In order to debug problems that are happening due to application performance, link failures, GPU issues, or host issues, we need to provide visibility on small timescales, collated from the whole set of servers that are collaborating on an AI job. This means going beyond drops to see application and protocol behavior, including collectives, buffers, congestion marks, and PFC. EOS has an extensive suite of monitoring, mirroring, and timestamping features for AI, and I think you'll see more of that coming this year. We're excited about our ability to deploy that to improve our customers' experience with their AI applications. Lastly, one thing I wanna mention is that upgrades in an AI network are not something to be taken for granted. Specifically, AI clusters are very expensive.

They tend to be extremely highly utilized, and the downtime of any network device doesn't tend to be particularly well-tolerated, especially a top-of-rack switch, where there's no alternate path into the network from the GPUs. GPUs tend to be singly connected because they're such high-speed devices. It's too expensive to connect them into multiple top-of-rack switches, so the outage of a top-of-rack switch takes down all of the GPU servers below it. With the larger top-of-rack switches that we'll have in the coming year, that's an increasingly large blast radius for an upgrade. So for upgrades, whether for security, fixes, or enhancements, having the ability to do those in real time without affecting the workloads is an increasingly important requirement for our customers. We call this feature SSU. This feature is supported today. It's an important feature.

We expect to continue to support this in the AI top-of-rack switches, the AI leafs, that will be shipping in the coming year. Now, I wanna talk a little bit about the key products that we'll see in AI deployments in the next year, and really, I'm gonna talk about two of them. There's an AI leaf, the 7760, based on the Tomahawk 5 chip from Broadcom. This is a 51.2 terabit switch. It is the highest speed single-chip system that anybody is shipping today, with large buffers for AI. It supports linear pluggable optics, which is compelling from a cost standpoint. There are a lot of optics in these networks, and the cost of optics is going up from generation to generation. This reduces the cost and power usage of the optics needed for an AI network. It supports all the AI features that I've talked about.

We're excited about this switch. It's gonna enable a lot of designs, and I'll show you those going forward. Paired with the AI leaf is the AI spine. This is a 7800, or really a family of 7800s, based on the J3 AI chip from Broadcom, going all the way up to 576 ports of 800 gig. These monsters on the right of the slide are completely non-blocking and internally congestion-free. They enable very high radix switching architectures, flatter networks, and more locality in the traffic. So it's really compelling to pair this with the AI leaf that I showed you before, and it enables a suite of possible network designs for different trade-offs. I'll show you what some of those are, but first I wanna talk just a little bit about the structure of an AI network.

So an AI network is centered around the servers in the middle, which typically have GPUs in them. Common GPU servers today have 8 GPUs inside them, so those are the boxes in the middle of the slide here that you see. Above that is a front-end network, which is a typical data center network that carries queries, things that someone might type into GPT-3, with the results going back. Job management is performed there: starting up new jobs, spinning down jobs, extracting the results, checkpointing, monitoring. All of this happens on a data center network, so it's a normal network like Arista has been selling for a long time, typically a leaf-spine network. The demands on it don't tend to be quite as high from a bandwidth standpoint as on the back-end network, which is the network that is largely all RDMA traffic.

It's where the training happens, the inference, the distributed computations, what are called the collectives, the communication patterns that power the AI applications, and that's the network on the bottom. A front-end network plus a back-end network is a pretty common design we see. Some customers are converging the two, but today, this is probably the most common design. I wanna talk next about the back-end network designs, which are what's really pushing the high speeds; that's where the GPUs are talking to the GPUs, or the accelerators are talking to the accelerators, to really drive the training applications. There are some considerations in network design for these back-end networks. As the leaf switches get larger and larger, there's more and more power involved in terms of just the number of GPUs.

A GPU server today might be 10,000 watts. Our next-generation designs might be able to support 7 or 8 of these underneath one top-of-rack switch, and that tends to be too much power for a normal data center rack. So this is forcing the GPU servers to be distributed across multiple racks, with some kind of optics or active electrical or optical cables to connect those GPU servers to the leaf switches. To make network designs larger, you can add more tiers. But when it's possible, especially for AI networks, staying within 2 tiers is preferable. It helps to address the load balancing problem, means you need fewer optics, and uses less power. So we see our customers trying to stay within 2 tiers when they can.
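The power arithmetic behind that rack-spreading argument is simple enough to write down. The rack power budget below is my own assumption for illustration; only the 10 kW per server and 7-8 servers per leaf figures come from the talk:

```python
# Rough power arithmetic from the passage: ~8 GPU servers at ~10 kW each
# under one top-of-rack switch far exceeds a typical rack power budget,
# which is why the servers spread across several racks and need cables
# or optics back to the leaf. The 20 kW rack budget is an assumption.
SERVER_KW = 10
SERVERS_PER_LEAF = 8
TYPICAL_RACK_BUDGET_KW = 20   # assumed rack budget for illustration

total_kw = SERVER_KW * SERVERS_PER_LEAF
racks_needed = -(-total_kw // TYPICAL_RACK_BUDGET_KW)  # ceiling division
print(total_kw, racks_needed)  # 80 4 -> 80 kW spread over at least 4 racks
```

Under these assumptions, one leaf's worth of GPU servers occupies at least four racks, which is exactly the cabling problem the talk describes.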

Sometimes they need to go to three tiers, or they have reasons to do it. Generally speaking, our customers are building fully non-blocking networks, at least up to the scale of the hundreds or thousands of GPUs that they are interconnecting. They may interconnect non-blocking pods of hundreds or thousands of GPUs together with some oversubscription, but the AI workloads and traffic patterns are so demanding that they tend to build full bisection bandwidth networks for AI training and inference. The workloads that we see have some influence on the back-end network designs. Different computer vision, DLRM, and LLM models produce different workloads. Some of them are easier to load balance, some are bigger, some are smaller, some have more locality. Job placement algorithms can influence the kind of network demand: how often you're starting jobs, what size of jobs. Do you have small jobs? Do you have big jobs?

Do you defragment periodically by letting some parts of the cluster go idle, or are you running the cluster at 100%? One thing that is certain is that nothing is stable. All of this has changed and will change. Our customers are sensitive to that. They would really like networks that work for any workload they have today, as well as the workloads that they don't know about that they might have in the future. That's a common theme. At the same time, they're trying to optimize and make reasonable trade-offs. So now let me talk about just a handful of the different topologies that we see our customers deploying, building out, and testing today. A simple topology that actually serves a fairly large set of use cases is even just a single spine switch.

So if you take a 7816, our largest AI spine, that can support up to 36 racks of GPUs. This would be 576 GPUs at 400 gig with today's technology; with the next generation, it could be twice as many. This is a completely non-blocking network. This works great for inference, and it's good for training up to this scale and for fine-tuning models. This is a network that we see deployed today. Another network topology that we see commonly deployed is a combination of 7760 AI leafs and 7800 AI spines. This picture shows one network that's scaled out up to 64,000 GPUs in two layers, with 7800s at the spine and 7760s at the leaf. This is two tiers of switching. It simplifies the load balancing problem and reduces the opportunities for congestion from traffic. This is a very large network, 4,000 racks.
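A generic two-tier sizing formula shows how these GPU counts fall out of switch radix. This is a sketch of standard leaf-spine arithmetic with assumed radixes, not the exact port configurations of the Arista products named above:

```python
# Generic non-blocking two-tier (leaf-spine) sizing sketch. Assumed
# radixes for illustration, not exact Arista product configurations:
# each leaf splits its ports evenly between hosts and uplinks, and with
# one uplink from every leaf to every spine, the spine radix caps the
# number of leaves.

def two_tier_max_hosts(leaf_ports, spine_ports):
    """Max hosts in a non-blocking two-tier fabric."""
    down = leaf_ports // 2       # host-facing ports per leaf (other half up)
    max_leaves = spine_ports     # one uplink from each leaf to each spine
    return down * max_leaves

# e.g. a 128-port leaf (one 51.2T chip at 400 gig) and a 576-port spine
print(two_tier_max_hosts(128, 576))  # 36864
```

Swapping in bigger assumed radixes, or using chassis-based leaves, pushes the same formula toward the 64,000-GPU scale described on the slide.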

This makes a lot of money for the GPU vendors, but we see people wanting to deploy networks that are this scale, absolutely. Here's an example of another network that scales out even larger. This is 2 tiers of the 7800, what I call the AI Spine, but taking one of the AI Spines and using it as a leaf. This is a 2-tier topology shown here with J3 AI at the leaf and at the spine. The high radix of the Tier 0 leaf enables very large two-tier networks. This could scale up to 165,000 GPUs. There's tremendous locality in this network. We could have 288 GPUs talking directly through a single switch with no possibility of congestion and no load balancing required in that part of the network. This is a pretty compelling design in some cases.

Another example of a network that we actually have customers looking at and deploying today is a three-tier network based on 7760s, the AI Leaf. This can scale out to be very, very large. The limit of this is an exceedingly large 512,000 GPUs with next-generation technology. But here's an example showing how this would scale out up to 64,000 GPUs, which is a target that we do have customers wanting to hit and go beyond. Three tiers of technology: pods of 4,000 GPUs that are connected with two tiers of load balancing, and then interconnected with a super spine, all with 7760s. There are many variations of the networks that I showed you. There could be one, two, or three layers.

The GPUs could be running, and the links could be running, at 400 or 800 gig. I can oversubscribe or overprovision the uplinks out of the leaf switches, depending on the workload and what I know about my traffic characteristics. Rail-based and densely connected networks are another design point that I didn't talk about. All of these choices are enabled by having a broad switch portfolio. Another advantage of the Arista EOS portfolio is that many designs are possible, all within the same operating system, the same switch families, and the same set of AI Leaf and AI Spine silicon that I've been talking about. Now I want to talk about the Distributed EtherLink Spine, which is some new technology. This is a sneak preview of something that's coming in 2024. It's the first of a suite of technologies in the EtherLink family.

It is comprised of a pair of systems, a fixed-config leaf and a fixed-config spine, and these two are put together into a two-layer fabric, which we call the Distributed EtherLink Spine. Today, this can scale out to 4,800 gig ports. In the future, this will scale out to 32,000 ports. But the important property of this is that there's an entirely lossless fabric between the leaf and the spine, with regular Ethernet connectivity to the hosts. This is all managed like a BGP network, but the lossless, packet-spraying fabric between the leaf and the spine enables some very unique properties that are great for AI. The Distributed EtherLink Spine really nails the load balancing problem perfectly and elegantly.

Because we're spraying the traffic across all of the spine switches from any leaf, this ensures that there's no congestion between the leaf and the spine or between the spine and the leaf. There's egress credit scheduling that prevents incast on the last hop. The result is that we get lossless delivery. This will work for any endpoint today, any RDMA NIC that's shipping today. This performs really well for the most demanding AI workloads, including all-to-all, and jobs with poor locality due to cluster fragmentation. We think this will have fantastic performance in the face of link failures, slow nodes, slow endpoints, and congestion on the PCI bus. We're excited about this technology. Like I said, this is a sneak preview. We'll see this coming next year. In summary, looking forward, Ethernet is a clear winner for AI networks, just as it is in data center networks.
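The contrast between per-flow hashing and the packet spraying just described can be shown with a toy comparison. This is purely illustrative, not the Distributed EtherLink Spine's actual scheduling logic:

```python
# Toy comparison (illustrative only, not the actual fabric scheduler):
# per-flow hashing pins every packet of a flow to one link, so two
# elephant flows that hash alike create one hot link; per-packet
# spraying keeps every link equally loaded no matter how few flows exist.
from itertools import cycle

N_LINKS = 4

def hash_per_flow(packets):
    """All packets of a flow stick to one link (hash = flow id % links)."""
    load = [0] * N_LINKS
    for flow_id, size in packets:
        load[flow_id % N_LINKS] += size
    return load

def spray_per_packet(packets):
    """Packets round-robin across all links regardless of flow."""
    load = [0] * N_LINKS
    links = cycle(range(N_LINKS))
    for _, size in packets:
        load[next(links)] += size
    return load

# two elephant flows whose ids collide under the hash (0 % 4 == 4 % 4)
pkts = [(0, 1)] * 8 + [(4, 1)] * 8
print(hash_per_flow(pkts))    # [16, 0, 0, 0] -> one hot link
print(spray_per_packet(pkts)) # [4, 4, 4, 4]  -> perfectly balanced
```

Spraying reorders packets across links, which is why it pairs with credit scheduling and reassembly inside the fabric, so that hosts still see ordinary, in-order Ethernet.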

The silicon and product roadmap is very clear in 2024. The next generation is likely coming in 2026, and we expect that to double the speed again. EOS has many optimizations for AI that I talked about. It enables network architectures that will scale up and down for 400 gig, with 800 gig coming next year. A broad portfolio of products and features allows our customers to optimize their network designs for the AI clusters that meet their needs, for the training and inference patterns that they have coming. We're really excited about what we have coming forward this year and how it's going to be applied to AI, which has been truly transformational and very exciting over the last year. Thank you for coming today to Analyst Day. I'm really excited about the products that we have and where we're going.

Now I'd like to hand it back to Liz. Thank you all.

Liz Stine
Director, Investor Relations, Arista Networks

Thanks, Hugh. Before we move to the second half of our Cloud and AI Innovators event, we are going to be taking a quick break. We will see you back here shortly to kick off the second half with Ken Duda, diving into Arista's Network as a Service strategy, including new manifestations of NetDL.

Bryan Holmes
VP of Digital & Technology Solutions, Andelyn Biosciences

Hi, my name is Bryan Holmes. I am the Vice President of Digital and Technology Solutions at Andelyn Biosciences. Andelyn Biosciences is a gene therapy CDMO, or contract development and manufacturing organization. One of the core strategies for Andelyn Biosciences revolves around this concept of a connected plant, and a connected plant architecture is designed around Industry 4.0 principles. This means that any data that is acquired at a device level, whether from a sensor, a laboratory instrument, or manufacturing equipment, we are looking at bringing together into one centralized repository for analysis. This also included building out our new corporate headquarters, which is a 200,000-square-foot state-of-the-art facility.

One of the key challenges that we faced in embarking on this journey of building out the new facilities was obviously the supply chain challenges that we had with the pandemic, but also knowing that we needed to build things in advance of our facility being ready. So when we started evaluating the different technologies, it really came down to Arista and their ability to allow us to build a cloud-based solution, as opposed to waiting for the facility to be ready, and then six months later, delivering a network. Upon discussions with multiple vendors in the space, it became clear that Arista was at the forefront from the technology that allowed us to meet one of our guiding principles of being born in the cloud and also designing with security in mind. So this starts in the data center.

This goes through with our switching and routing and our wireless capabilities. We even have Arista configured along with our visitor management system for guest wireless access, which works seamlessly. Arista is really our one-stop shop for all of our networking equipment at our facilities. I can say without a doubt that Arista is 100% a decision that we stand behind. We are enthusiastic about where both the product and the company are going and how we utilize Arista. I highly recommend Arista to anybody undertaking a network project, because it's really a top-notch solution and company.

Speaker 22

EPB was founded as an independent board in Chattanooga, Tennessee, in 1935. Today, we serve about 200,000 homes and businesses. In 2009, we also started a fiber optics company, which provides video, voice, and data services for both homes and businesses within the Chattanooga area.

One of the things we leverage with the Arista access points is on-and-off marketing events. The city of Chattanooga, it's a pretty happening place. There's folks that are in need of good Wi-Fi, and we use the Arista gear to help us deliver that quality product for those events.

The mayor of Chattanooga came to our CEO and expressed a concern that the Chattanooga Convention Center was getting complaints about the Wi-Fi experience. We are actually known as the Gig City here in Chattanooga, and the brand was not marrying up with that experience. So we approached Arista, and Arista was wonderful. We were able to deploy not only the APs to the convention center, but also Arista switches. It also made the convention center the first 25 gig customer and the first convention center to actually be 25 gig.

Arista really had gear that could receive our 25 gig network. We ended up deploying a switch in what we would label our MDF. From there, we actually built out our infrastructure within the facility by use of fiber. The beauty of that is we were able to transport 25 gig to each of those IDF closets. From there, we were able to pull our Category 6 Ethernet cable out to each of the access points.

During COVID, we needed to be able to help Hamilton County Schools for all those kids that were going home and needed to have good quality internet service. We called Arista and we said, "How can you help us deploy hotspots so the kids could go to different spots within the community and be able to complete their homework?" We were able to work together to deploy that, and the community response was overwhelmingly positive.

Arista, in being the partner that they are, are really helping us think ahead and think outside the box and come up with solutions that may have not even really been on our radar.

We have developed a wonderful new relationship with Arista, and we expect this partnership to continue to evolve, because Wi-Fi is becoming more and more necessary for each and every business to run their business and be successful. And Arista is there for us.

We look forward to a long partnership with EPB and Arista.

Liz Stine
Director, Investor Relations, Arista Networks

Welcome back to the second half of our presentations, leading off with Ken Duda, our founder and Chief Technology Officer.

Ken Duda
CTO and SVP of Software Engineering, Arista Networks

All right, thank you, Liz, and thanks everybody for watching our presentation for Analyst Day. I'd like to talk to you about a really exciting topic to me, which is Network as a Service. Now, I've talked about Network as a Service at some of our previous events, talking about the concept and then talking about our architectural foundation. Today, I'm really excited to tell you about some of our progress in this area. We've been doing a lot of work, and we have three use cases I'd like to share with you. But first, let me go back and remind you, what is the big idea behind Network as a Service, and then how are we building it? What's our architectural approach? Network as a Service, first and foremost, is a change in philosophy.

Instead of focusing on each of the boxes and each of the ports and each of the links, we're going to change the focus and focus on the service being provided to the end user. What is the end user expecting? How do we define that service? How do we provision it? How do we monitor it? How do we make sure it's doing what it's supposed to be doing? How do we make sure the user is getting the quality of service they expect? All of that is Network as a Service.

In this model, what we provide as a vendor is CloudVision, a Network as a Service management system, along with validated service models: the types of services that we've defined, which our operators can then select from, filling in the service template with parameters and policies, specifying the tenants and how resources are to be allocated, which resources go to which tenant. CloudVision then makes it all happen by providing these models and a CI/CD framework, an integration pipeline where the operator can make changes, test those changes, and then push them to production.
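The template-plus-parameters idea described above can be sketched as follows. Every name in this sketch is invented for illustration; it is not CloudVision's actual schema or API, just the general pattern of expanding a validated service model into per-switch settings:

```python
# Hypothetical illustration of a service-model workflow (all names and
# fields invented, not CloudVision's actual schema): the operator picks
# a validated model, fills in parameters and tenants, and the management
# layer derives each switch's configuration from that single source.
service_request = {
    "service_model": "l3-campus-fabric",      # chosen from validated models
    "parameters": {"mtu": 9214, "asn_base": 65000},
    "tenants": [
        {"name": "engineering", "vlans": [110, 111], "vrf": "ENG"},
        {"name": "finance", "vlans": [210], "vrf": "FIN"},
    ],
}

def derive_switch_config(request, switch):
    """Expand the service request into settings for one switch."""
    cfg = {"hostname": switch, "mtu": request["parameters"]["mtu"], "vrfs": []}
    for tenant in request["tenants"]:
        cfg["vrfs"].append({"name": tenant["vrf"], "vlans": tenant["vlans"]})
    return cfg

print(derive_switch_config(service_request, "leaf1"))
```

The point of the pattern is that operators edit the request, not the switches: changing a tenant's VLAN list regenerates every affected switch configuration consistently, which is what makes testing and CI/CD-style rollout possible.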

CloudVision provisions the switches in accordance with the service model, performs testing to make sure that the service definition is actually going to do what you expect. That testing happens both before provisioning as well as after provisioning and during ongoing operations.

Before you roll out changes to your service, you can test it in Arista Cloud Test and make sure that it's actually going to behave the way you expect. Then when we roll it out, we perform testing as well at the service level to make sure that the service that we're providing is the service that you defined. Network as a Service in CloudVision also generates documentation for your deployment, what each switch is doing, how it's configured, how it relates to the other switches.

CloudVision monitors the deployment, provides alerts of any service problems or disruptions, in some cases, performs automatic repair actions to restore service around common failure modes. CloudVision also enables level one operators in the context of your service to make certain kinds of changes in the field on the ground that are required in day two operations.

CloudVision automates upgrades, service expansion, and service model changes, so when you change the parameters of your service, CloudVision automatically rolls out the change to the affected switches in a way that's hitless and non-disruptive and does not affect any of the existing service deployments. And finally, CloudVision provides visibility and troubleshooting tools, so the operator can always see what's going on with their service and its tenants, as well as all the devices and links that comprise the service.

So why are we doing all this? Well, first and foremost, it's way better for our customers, okay? Without the Network as a Service approach, customers have the burden of figuring out exactly how to configure every switch and to keep track of every service instance and how all these things are supposed to work. Network as a Service puts pre-validated designs into our customers' hands.

These are complex designs that encompass all the different configuration and operational aspects of operating these services. This enables our customers to get the most out of their investments in their infrastructure, to save time and energy by not sort of reinventing the wheel and figuring out how to configure all this from scratch, plus avoiding certain kinds of design flaws and mistakes, turning on all the right features, enabling the right security hooks, making sure that the service is deployed consistent with best practices. Our pre-validated designs do all those things right out of the box. Next, the automation around Network as a Service saves the customer a ton of effort, going box by box, typing in configurations.

But the effort is actually the less important half of automation, because more important than saving effort, automation reduces mistakes: the computer gets the configuration consistent every time, whereas humans always make some number of mistakes when touching a large number of boxes, and those mistakes can lead to very difficult-to-debug network issues. Network as a Service also provides a level one operator interface. This is a simplified interface that allows less skilled operators to make certain kinds of configuration changes during day two operations. It's easier for them, and it's less error-prone. They're constrained to doing only the things they're supposed to be doing: changing port configurations, enabling or disabling interfaces, replacing transceivers, operational tasks like that, without the risk of accidentally touching and breaking something they weren't supposed to be interacting with in the first place.
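The constrained level-one interface can be pictured as an allow-list of permitted operations. The sketch below is invented (the operation names are hypothetical, not Arista's); the point is simply that anything outside the list is refused before it can do damage.

```python
# Invented sketch of a level-one operator allow-list. The operation names
# are hypothetical; anything outside the list is refused.

L1_ALLOWED = {"set_port_profile", "enable_interface", "disable_interface"}

def l1_execute(operation, **kwargs):
    """Queue an operation only if the level-one allow-list permits it."""
    if operation not in L1_ALLOWED:
        raise PermissionError(f"'{operation}' is outside the level-one scope")
    return {"operation": operation, "args": kwargs, "status": "queued"}

result = l1_execute("set_port_profile", port="Ethernet12", profile="access-point")
```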

And finally, by having a Network as a Service model, we can improve our visibility and troubleshooting tools, because no more are we limited to simply showing you what's going on at the level of bits and bytes, packets and headers, links and interfaces. Now, we can show you what's happening in terms of your service: in terms of tenants and services and applications and users, and the application-level flows between those things. We enable you to see what's happening in your network in context... So how do we do this? The core of Network as a Service is the EOS stack. The EOS stack starts with best-in-breed hardware. At the bottom are our switches, all running the same network operating system, EOS. The same binaries run across our entire fleet, from the campus to the data center, the cloud, the service provider backbone, and the wide area network.

By having the same operating system, we get that consistency of operations that enables CloudVision to automate everything in a uniform way. All of those switches continually stream the state of the network into NetDL, our network data lake, which is a data storage facility, a query facility, an API surface that enables automations, visibility, threat hunting, AI and ML, and then CloudVision runs on top of all that. The EOS stack enables consistent operations across every domain of your network, your public cloud deployments, in the internet and core, in your data centers, in the WAN and branch, at the edge, and in the campus. This consistent approach means uniformity. It means one stack to learn. It means one release to qualify.

It means consistent operations, which not only saves cost, saves expense, reduces complexity, but also improves the overall quality, reduces failures, because there are just fewer ways for things to go wrong when everything behaves the same. Now, diving more into NetDL. NetDL is the central component of the EOS stack. NetDL brings in state from switches, like I already mentioned, but also from non-switch components of your infrastructure, identity and access management systems, vulnerability management systems, network source of truth, like NetBox, threat intelligence systems, internet or application performance monitoring systems, digital experience monitoring. And it brings all this data in and sort of joins it together so that you get all the data of what's happening on your switches in the context of what's happening in your larger infrastructure.

This enables better capacity planning, application-aware path computation, security event management, threat hunting and forensics, workflow automation, and then business intelligence, gathering intelligence from your whole infrastructure, not just from your switches. NetDL provides a single API surface that makes data about everything happening in the network available to applications, from the devices themselves through servers, infrastructure, applications, DNS, users, access, and authentication: the whole shooting match. On top of NetDL, we have AVA, our AI-driven autonomous virtual assist, which is continually learning what's happening in the network, monitoring quality of experience, looking for common sources of network problems, threat hunting, looking for suspicious behavior, and keeping logs of all of this for forensics, for analysis after the fact. We also have other NetDL-enabled applications from third-party ecosystem partners, as well as custom integrations for specific customers. All right, so enough generalities.
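The "joining" of switch data with non-switch sources can be illustrated with a toy example: enriching flow records with identity data so device events carry user context. All of the records, field names, and values below are made up for illustration; they are not NetDL's actual data model.

```python
# Toy illustration of the "join" idea behind a network data lake: enrich
# switch flow records with identity data from a separate system. All of
# the records and field names here are made up.

flows = [
    {"src_mac": "aa:bb:cc:00:00:01", "app": "imaging", "bytes": 10_000_000},
    {"src_mac": "aa:bb:cc:00:00:02", "app": "dns", "bytes": 4_096},
]
identity = {"aa:bb:cc:00:00:01": {"user": "r.jones", "role": "radiologist"}}

def enrich(flows, identity):
    """Attach user identity to each flow record when the MAC is known."""
    return [
        {**flow, **identity.get(flow["src_mac"], {"user": "unknown"})}
        for flow in flows
    ]

enriched = enrich(flows, identity)
```

The same pattern generalizes to any of the sources mentioned above: each extra feed becomes another dictionary to join against the device state.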

Let's look at the specifics, the things that we've built over the last year that we're excited to share with you. These are three proof points of how Network as a Service is already impacting people's operations. Proof point one, Network as a Service in the data center. What I'm gonna show you here is provisioning a new tenant, a new service instance, using Arista validated designs to make configuration really easy, really simple, and then, I'll show you how by using the validated design, we can provide service-oriented, meaning tenant-focused, visibility of what's happening in the data center. So here's some actual screenshots of our product. This is a topology of a data center with spine switches. We have here three pairs of leaves and three server racks. And in this context, we're gonna add a new tenant.

So we go to the tenant provisioning page, click on Add Tenant, enter the name of the tenant. Now we're going to tag or label infrastructure components that are specific to this tenant. So the tenant's gonna have some servers in rack one and some servers in rack three. So we're gonna tag the switches at the top of racks one and three with the tenant identifier, so that the service delivery system understands that virtual topologies for this particular tenant will need to include those switches. With the tagging done, we're now ready to go ahead and create a service instance for our new tenant. So we're gonna create a new EVPN, VXLAN-based layer two virtualization system, virtual topologies that carry tenant layer two traffic across a shared layer three infrastructure.

In the context of that service instance, we're gonna create a couple of VRFs, a couple of VLANs, and then when we go to deploy all of this, before deployment, the deployment system shows us what configuration changes it's gonna make to each device in order to realize this new service instance. You can see that it's creating VLANs. It's creating VLAN interfaces. It's configuring things like, you know, the source interface of your IP DHCP helper. Like, that's a thing that you might easily forget if you weren't using a validated design, which never forgets the details. Once you've had a chance to review the configuration changes, you can approve and execute the change request, and then the change request runs in parallel across each of the leaf pairs.
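To make the rendering step concrete, here is a hedged sketch of how a validated design might expand a service instance into EOS-style configuration lines, including an easy-to-forget detail like the DHCP helper source interface. The rendering rules, VNI numbering scheme, and helper address are all hypothetical, not the actual product's output.

```python
# Illustrative only: expand part of a tenant service instance into
# EOS-style configuration lines for one leaf pair. The rendering rules,
# VNI numbering, and helper address are all hypothetical.

def render_leaf_config(vlans, vni_base=10000, helper="10.0.0.5"):
    """Generate per-VLAN config lines, including easy-to-forget details
    such as the DHCP helper source interface."""
    lines = []
    for vlan in vlans:
        lines.append(f"vlan {vlan}")
        lines.append(f"interface Vlan{vlan}")
        lines.append(f"   ip helper-address {helper} source-interface Loopback0")
        lines.append("interface Vxlan1")
        lines.append(f"   vxlan vlan {vlan} vni {vni_base + vlan}")
    return lines

config = render_leaf_config([13, 74])
```

Because the renderer is deterministic, every leaf in the tenant gets the same details every time, which is the consistency argument made above.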

You can see here it's only affecting leaves one and three. Leaf two is not affected because leaf two was not included in that tenant's infrastructure. And so, after the change control executes and pushes these configurations to each of the affected switches, we can then use a CloudVision feature. CloudVision is, as I already mentioned, receiving information about each switch from the network, and it saves all that information in a time series. This lets us do really cool things, like answering the question: how is my network now different from my network at some previous point in time? We can actually compare any two points in time. And so here, after deployment, you can see that we're comparing one of our leaves, Leaf three A there, from the time before the deployment to the time after the deployment.

What you can see is there are a bunch of added VNI bindings to the new VLANs. Our new VLANs, 13 and 74, mapped to the corresponding VNIs for that particular customer. And there's some new MAC addresses that have shown up for that customer as well. In addition, we can see the routing tables have changed. So when we compare just the regular IP routing tables, you can see new routes for the customer overlays for those new virtual topologies. This ability to understand how your network is changing over time can be super useful to understand why something is behaving the way it is, or why the behavior changed from one time to another. It's a core feature of CloudVision that works with or without Network as a Service, actually.
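The point-in-time comparison just described can be pictured as a diff between two snapshots of switch state. The sketch below is a toy version of that idea; the route prefixes and VNI bindings are invented, and real state snapshots would of course carry far more detail.

```python
# Sketch of comparing two point-in-time snapshots of switch state, in the
# spirit of the time-comparison view described above. Route and VNI data
# are invented.

before = {
    "routes": {"10.1.0.0/24", "10.2.0.0/24"},
    "vni_bindings": {(13, 10013)},
}
after = {
    "routes": {"10.1.0.0/24", "10.2.0.0/24", "10.9.13.0/24"},
    "vni_bindings": {(13, 10013), (74, 10074)},
}

def snapshot_diff(before, after):
    """Report what was added and removed between two snapshots."""
    return {
        key: {
            "added": sorted(after[key] - before[key]),
            "removed": sorted(before[key] - after[key]),
        }
        for key in before
    }

diff = snapshot_diff(before, after)
```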

But it's obviously very useful in the Network as a Service context because you can see exactly how your validated design affected your network operationally. And here we have the tenant-specific dashboard. This is a dashboard specific to XYZ Corp. You can see that it shows only the switches that are involved in XYZ Corp's deployment, and only the virtual topologies connecting those switches. The dashed lines in the topology view are VXLAN tunnels. You can click on the links to get more information about them, and also more information about the devices that are related to this tenant, including CVE information, bug information, software lifecycle information. All of that is clickable for more information about each of those aspects of operations, and then more information about the resource consumption of this tenant.

How many routes that they're injecting, the number of MAC addresses, and things like that. This ability to see what's going on in terms of tenancy is a key aspect of Network as a Service. My second example here is NaaS in the campus. And what I'm gonna show you is that Network as a Service is useful in a campus context as well, because you get centralized campus-wide management, you get campus-specific troubleshooting tools, and then simplified level one operator workflows for making changes in the campus. And to demonstrate this, we'll first show you the campus dashboard. This is a high-level view of all of the campuses of the enterprise, and there's a bunch of different information you can see here about what devices are connected, how those connection requests are going, whether there's authentication or connection problems. Also, connectivity monitoring.

Is each campus able to reach each of the key services that the enterprise relies on? There's other campus-specific information here, alerts and events, traffic information, flow information, all of which can be expanded. But what we're gonna do here is imagine that there's been a trouble ticket opened. A user has reported some poor performance, and we're gonna try to understand why. So we're gonna drill down into a particular campus, see all the entities connected into that campus. Entities are broken down by types. We've got infrastructure entities, we've got client entities, and also application entities. It turns out, more than you might think, people run application infrastructure directly in the campus, especially in healthcare and manufacturing sorts of use cases.

So we go ahead and search for the user who is reporting the problem and find their device here. This is a particular iPad that had troubles connecting, and you can see the retry rate there. Clearly, there was a spike around 2:30, which is around the time the user reported the problem. You can also see in the event log that there was a connectivity change there, and then also events on the IDF switch connecting to the user's access point. So what we have here is all the connectivity information, the performance information, and the topology information all integrated into a single view, all tracked over time, so you can see not just what's happening now, but also what happened back in time.

It's clear that there's some issue in the access point connected to the IDF switch that gave rise to the customer's problem. We can then look at the switch and see that there are link errors on one of the links between the access point and the switch, so we need to change the port. Remote hands is gonna need to go and change the cable, but also the port configuration. This is where the L1 operator workflow is so important. There's a very simple GUI for selecting the port that you want to change and selecting the new profile you want to assign to that port. We're gonna switch the profile from a device profile to an access point profile.

When you confirm that assignment, the same provisioning system that I already demonstrated rolling out the EVPN configuration changes will go ahead and roll out the interface configuration changes the same way. So you can see how CloudVision enables the operator to provide a connectivity service with unparalleled visibility, troubleshooting, and ease of configuration. My third proof point is in the WAN, in the wide area network, where I'm gonna demonstrate per-application, wide-area routing policies. I'm gonna demonstrate the troubleshooting process for slow network performance, and also demonstrate our ability to automatically recover from congestion issues. So we start off by looking at a view of our WAN. This is a highly collapsed view that shows the backbone and the five regions of our enterprise. We can then search this topology for a particular flow.

So if there's been a trouble ticket opened because, say, a radiologist in one of our clinics was having a hard time accessing certain images stored on a certain server, we can find the flow associated with that work. And then when we find the flow and click on it, we can see the path the flow is taking. And you can see at the bottom of the screen, in the bottom right, the time slider that indicates we're at the current time. At the current time, there are no issues, but the customer reported the problem a little earlier, and you can actually see some labels on the timeline where there were some issues. So we then drag the timeline slider back and highlight the range of time where the issues were.

What you can see is that there is more than one path involved. At the left-hand end of the range, the highlighted path is being used. That wasn't the same path as at the current time. So let's go forward in time and see what happened. When we go forward in time up to the event, we can see now that there's a problem on one of the WAN links that was involved in the flow. If we click on that segment, we can see the details of what was happening on that link. In particular, how much of the link's bandwidth was being consumed by which application.

What we can see is there was a big spike in prod imaging traffic, so the blue on the left shows that there was a significant increase in that traffic, leading to the red and yellow above it, where we're seeing high latency and ultimately packet drops. And then if we go to the next event in the timeline, we can see there was a topology change. CloudVision Pathfinder automatically detected that there was a problem with that application topology and rerouted just the highest priority application onto a backup link. This point in time is when an automatic repair took place that rerouted the traffic. So just this particular application is now taking an alternate path.
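The repair behavior just described can be sketched as a simple threshold check that moves only the highest-priority application onto the backup path. Everything in this sketch is invented: the application names, priorities, and drop threshold are illustrative, not Pathfinder's actual policy.

```python
# Invented sketch of congestion-triggered repair: when drops on the
# primary path cross a threshold, move only the highest-priority
# application to the backup path. Apps, priorities, and the threshold
# are illustrative, not Pathfinder's actual policy.

def reroute_on_congestion(apps, link_drop_pct, threshold=1.0):
    """Return a per-application path assignment after the repair decision."""
    if link_drop_pct <= threshold:
        return {app["name"]: "primary" for app in apps}
    top = max(apps, key=lambda app: app["priority"])
    return {
        app["name"]: "backup" if app is top else "primary" for app in apps
    }

apps = [
    {"name": "prod-imaging", "priority": 9},
    {"name": "backup-sync", "priority": 2},
]
paths = reroute_on_congestion(apps, link_drop_pct=4.2)
```

Moving only the top-priority application keeps the backup link from simply inheriting the congestion, which matches the selective reroute described above.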

By clicking on that path, we can see the characteristics of that service: what the service utilization is and how much bandwidth is available. Now we can be comfortable that the repair operation was successful, and we shouldn't expect any problems with the service moving forward. Arista is uniquely positioned to deliver on the vision for Network as a Service. First, we have a comprehensive vision of how this is actually going to work, with the validated designs, with the separation of the NetDL and CloudVision portion from the control plane and data plane of the physical switches. We have the right architectural foundation, a single EOS. I cannot emphasize enough how important this is.

It is so difficult to create architectures that span from the campus through the WAN, into the data centers, into the cloud, when every component you're dealing with is running a different operating system, with a different state model, with different configuration details. I really have no idea how you could ever achieve this otherwise. And then we have a single NetDL, a single data lake, where we bring all the state into one place, and this is also so important, so that you're recognizing users and authenticating them. You're recognizing applications and performing application-aware segmentation, security, quality of service, and WAN path selection in a consistent manner from your data center through the WAN into the campus. Again, without a single NetDL integrating all these different sources of information, I don't know how you do this.

And finally, a single CloudVision, an application and automation framework built on top of NetDL. It can run in the cloud for operators who want the simplicity of cloud management, or the same CloudVision runs on-prem as well, so that customers who are more sensitive to security issues or to availability issues have the option to operate CloudVision themselves. This unified platform that fits so many different deployment models is key to how Arista can succeed here, where others are not likely to succeed, in my view. Finally, we're resting on our strength in data center, in campus, in WAN, service provider, and cloud. We have products and offerings and high-profile customers, marquee-name customers, in all of these segments, and with that, direct contact with the complete problem space.

We are the only company in the industry who has both the unified architecture and also the breadth of reach required to deliver the Network as a Service vision. So, I'm really looking forward to continuing to deliver good news on our Network as a Service implementation, but I wanted to leave you with a reminder of our development strategy. And number one, most important, is that we never compromise on quality. Quality is the most important thing. Networks have got to work. The network is central to the lives and operations of so many people, so many organizations, companies, governments, NGOs. The world relies on networks, and we have built the highest quality network in the business, and we are not going to let that slide. So we're not gonna do anything that puts quality at risk.

If you look at our Network as a Service architecture, you'll probably notice it involves essentially no changes to the switches themselves, because if you don't touch it, you don't break it. So, we believe we've got the right quality foundation for Network as a Service. We're always going to maintain that solid foundation, the architectural foundation of single EOS, single NetDL, single CloudVision. And then also, as we execute, we're building feature by feature in a way that adds value today, things that we can directly sell, monetize, and add value for our customers with, while simultaneously building towards a larger vision. This is where our competitors often make mistakes: they distract themselves with a shiny prize and start climbing the mountain to try to get it, but they can't make it all the way there because there's no way to show business value along that path.

I think we've done a very good job of identifying where we can build features that we can sell today, which also contribute towards the Network as a Service vision moving forward. So thank you very much for your attention. Now I'd like to hand things off to John McCool to give you a manufacturing update. Thank you.

John McCool
SVP and Chief Platform Officer, Arista Networks

Thank you, Ken, and thank you all for tuning in. I have responsibility for development of hardware platforms, manufacturing, and supply chain, but today I'd like to focus on manufacturing at Arista. First, let's start with some background. To deliver on the company's growth, we made substantial investments in our manufacturing and supply chain capability. We have four regional manufacturing hubs, two in North America and two in Asia, to service worldwide demand. Our regional hubs are staffed with Arista manufacturing and supply chain professionals, who work closely with our partners. These teams assure that the uniform processes we've developed for test, product assembly, and material control are applied at all locations, so we can provide quality of product and security of our supply chain to our customers. Arista's manufacturing sites are complemented by four direct fulfillment centers that deliver orders to our partners and customers.

Our worldwide reach is supported by over 200 service depots in 98 countries that offer rapid response for customer replacement gear so they can maintain critical network uptime. Now, we've seen a significant improvement in the state of the supply chain over the course of the year. Increased predictability in component delivery has allowed us to restore on-time shipment confidence with our customers. We'll exit 2023 with average lead times cut in half from where they were at the beginning of the year. With that, though, we are seeing some changes in the post-pandemic era, and 2024 brings a new set of challenges. Geopolitical tension has replaced COVID as a key concern for our customers. This keeps supply chain resilience as a continued focus going forward. The increased demand from AI is putting pressure on supply of advanced process nodes and packaging technologies.

Lead times for advanced components remain stubbornly high at twice their pre-pandemic levels. An increase in regional regulations drives more focus on emerging compliance requirements as we support our customers' globally distributed operations. With all these considerations, customers continue to engage with us on sourcing requirements. Interest in greenhouse gas emissions has increased as they look to understand their own impact on climate. Product complexity is dramatically increasing. This brings new challenges in system design that require new technologies, materials, and associated manufacturing techniques. Let's take a closer look at some of these design challenges. Bandwidth is increasing. The doubling of capacity and port speed increases the size of chip packages, which in turn increases the complexity of interconnect. At the same time, the doubling of port speeds requires an increase in the precision and the quality of that interconnect.

With speeds moving from 56 Gbps to 112 Gbps and on to 224 Gbps, we have the dual challenge of increasing interconnect density while minimizing the reflections, crosstalk, and other signal integrity considerations. These higher data rates increase power consumption, challenging our current cooling techniques. All of this complexity provides our team an opportunity to innovate across the electrical, mechanical, software, and manufacturing domains. So let's look at our focus for 2024. To address these considerations, we're focused on four major areas. First, supply chain resilience. The attention we set on supply chain resilience during the pandemic will continue. These efforts include a focus on regional manufacturing and associated sourcing in both North America and in Asia. This approach gives us the flexibility on where we source our products and adapt to shifts in demand.

We'll continue with our efforts on multi-sourcing and design alternates to assure continuity of supply. Then finally, we'll continue to strengthen our agreements and engagement with a broad set of key strategic suppliers. These close partnerships have been important to our leadership in advanced networking and will continue to be important for our next generation. The second area is on operational efficiencies. We're improving tools and processes for demand and production planning. We're driving improvements in logistics and inventory management, taking advantage of the more predictable supply environment. Now, let me talk about what I mean by enterprise reach. We're increasing the number of customers and the networks we support on a global basis. We're supporting a broader range of products, which reflects Arista's increased use cases of networking equipment in the enterprise. At the same time, regional regulatory requirements are evolving.

Our compliance teams are engaged with emerging compliance requirements so we can continue to fulfill and service global demand. With our focus on large cloud, enterprise, and service providers, we support customer-specific engagements around supply chain. These conversations include supply chain security, capacity planning, new product introduction, and hardware quality processes. An area of growing interest for our customers in 2023 was around greenhouse gas emissions. We were able to release our 2023 baseline for scope 1, 2, and 3 emissions to the relevant reporting agencies and achieve third-party verification of those results. The last area is next-generation technology. Leadership in networking performance has always been fundamental to our success at Arista. We're planning ahead for new generations of technology to execute on our roadmap. Now, these new technologies are going to require a high level of collaboration between engineering and manufacturing... And we have an advantage.

We have an integrated vertical approach. Our hardware designers can quickly validate their first designs with an integrated diagnostic stack that we've developed solely for the purpose of finding and isolating hardware faults. These platform diagnostics continue through the product lifecycle and are used in manufacturing for production and test of each unit. Test information is logged and retained so that we have a quality repository for each serial number that we produce. Our EOS stack allows us to validate pre-production builds in real-world environments in our system test group, along with EOS regression testing. And then finally, our field feedback through customer support provides a unique lens into what's happening in our customers' environments on the reliability of those products, and we can use that to enable immediate improvements as well as enhancements for the next generation. Let's talk about what I mean by advanced manufacturing technology.

We had some significant hurdles to overcome in our 400G generation of products. These involved process technologies to assemble boards with high-density IC packages. We developed a high-speed custom interconnect so we could reduce the system power consumption, and we mastered the state-of-the-art in high-speed, large form factor build, design, and construction. Moving forward to the next generation and beyond will require that we master a new set of technologies and manufacturing techniques. So we're taking a proactive approach to explore how we design and build these products. We're engaging in design for experiments, so we can validate the manufacturing techniques that will be required and iterate on those improvements before we go into production. This early engagement will allow for quicker time to volume for all our new products. Finally, I want to thank you.

I want to thank the Arista manufacturing team, along with the Arista customers. We've been able to deliver increased growth and a broader product portfolio during one of the most challenging times we've had in supply chain in our industry. Our ability to focus on execution and adapt to new challenges gives us confidence that we'll continue to lead in the next generation of advanced networking products. Now I'd like to hand it off to our CFO, Ita Brennan.

Ita Brennan
Senior VP & CFO, Arista Networks

Thanks, John. Thanks, everybody, for joining us for our 2023 Analyst Day. I hope you enjoyed all the previous presentations as much as I did. Even being inside the company, it's always super interesting to see everything pulled together like we did today, with Hugh and Andy talking about AI, and then Ken and his passion around the enterprise and the enterprise network operator, making their lives better and making their network performance improved with high quality, automation, visibility, et cetera. But of course, none of this is interesting unless we can distill it all back down to the financial model, and that's what we'll talk about for the next 15 to 20 minutes. So we're going to talk about growth and diversification, investment, and capital allocation: where we're making investments, and how we're thinking about capital allocation.

And then finally, we'll take a walk through the business model and look at some updates from what we showed you last year. One of the fundamental tenets of Arista, right from the very beginning, has been the concept of profitable revenue growth. Even early on, it wasn't sufficient to have products that managed to achieve revenue. We wanted to have products that achieved sustainable business and allowed us to grow not just revenue, but also grow earnings and grow the return on the investments that we were making. This is something that stayed with us all the way through the company's growth and expansion over time. We remain very focused on where do we make investments, how do we know that those investments are the right investments, and making sure that we're achieving a return on the investments that we make.

When we got together this time last year, we talked about a revenue growth rate for 2023 of approximately 25%. As we told you last week on the earnings call, we believe we're well ahead of that now and that we'll achieve a 33% growth rate. But in addition to that, we're not just growing the top line, we're also growing earnings per share and the operating margin as we went through the year. And we think now we'll grow the EPS in excess of 40% for the year as well. So that matching of revenue growth and earnings growth is still fundamental to how we think about the business.

What allows us to do that, what makes this model capable of having that revenue growth and profitability at the same time, largely still comes back to the markets that we're addressing. This is our TAM, looking across all of the different areas of the business: core, adjacencies, and software and services. But underpinning all of this is that we're still operating within that networking domain. This whole TAM chart is anchored on EOS, on that single operating system that Ken talked about, that Jayshree talked about, and that's what enables us to be differentiated in and across all these markets. So we don't have to go out and acquire entities or expand into areas that are totally unrelated to what we're doing, because the markets that we're playing in are large and they're growing.

We showed this TAM chart last year at a $51 billion TAM. Today, it's a $60 billion TAM, and the beauty of it is that that expansion has come from organic growth in parts of our markets and from us adding capabilities inside some of those large markets where we're beginning to take a greater presence. So AI, for example, in cloud, is accelerating cloud spending, and the addition of products like AGNI and some of the other products that Jayshree talked about earlier, is allowing us to expand our enterprise and campus TAM. That's what allows us to have the scale that we have, but with a lot of leverage of R&D across all the various pieces of the business.

This goes back to our building blocks of growth, a concept we've talked about now for some time, where we're really expanding across the various sectors of the business, and we're expanding across the various product categories of the business. We're continuing to add capabilities, add visibility, add routing, add campus capabilities to our product set in order to enable us to expand into these markets. Of course, as new opportunities come along, such as AI, we're also focusing on those and driving investments for those. If we go through the different areas of the business, we start with enterprise. Enterprise is now becoming a very important part of the Arista business. There was a time when people wondered if we could actually be successful in the enterprise, but we are being successful in the enterprise.

We have a 28% CAGR over the last 5 years. We believe enterprise will grow faster than the corporate average in 2023. So we are clearly on the map and have the proof points that we need to show that we can consistently, over time, continue to grow in this part of the business and continue to take share. If we look at our cloud business, this is obviously our heritage, this is where we started, and we continue to make the right investments to continue to support these customers. And these customers are great customers. They've grown significantly over time. They now have huge footprints, and they're still able to grow their businesses and grow the investments that they're making in their infrastructure, and therefore, in what they're doing with us.

AI is obviously a major focus right now for these customers, but it's just another example of these customers driving technology leadership and technology expansion, and us participating in that and helping to contribute to their success as they move forward with their next opportunity. Again, we're seeing a very strong CAGR here, and we think even after growing triple digits in 2022, the cloud will represent greater than 40% of our business in 2023. Then we come to the providers. The providers, if you remember, have two groupings inside of them. There's the specialty cloud part of the business, and then there's service provider, the traditional service provider. Again, the specialty cloud customers are really focused on things that are very similar to what the cloud customers are focused on.

They're driving for the same technology changes, they're driving for the same performance. And on the service provider side, we're taking all of those cloud principles that we've developed, and we want to drop them into applications inside the service provider networks where that can bring real differentiation to those customers. So it's a part of the business that is growing, but there are continued opportunities for us to do more, with new customers and new opportunities. I wanted to take a few minutes to talk about large numbers. Everybody talks about large numbers, but it's somewhat flippant sometimes. I think in this particular case, it's pretty striking. We have doubled the size of the business in two years, from 2021 to 2023.

That sounds simple, but what it means is that when you take the absolute dollar growth that represented 27% growth in 2021 and apply it to our forward-looking 2024 view, it represents 10%-12% growth, whereas back in 2021, it was 27%, right? So this is not to say that we are not going to grow and that we don't believe we have the building blocks for us to continue to grow, but we do think the growth rates will moderate somewhat just because the size and scale of the business has expanded. Again, turning to this chart, we had talked about this last year, where, you know, we need to look at the business over some longer period of time just because some of the spend elements are cyclical, right?
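The law-of-large-numbers point above can be sketched with a quick back-of-the-envelope calculation. The figures below are illustrative placeholders, not company guidance: they simply assume a revenue base of roughly $3 billion in 2021 that has since a bit more than doubled heading into the 2024 view.

```python
# Illustrative only: the same absolute dollar growth is a much smaller
# percentage once the revenue base has roughly doubled.
base_2021 = 3.0                     # hypothetical 2021 revenue, in $B
dollar_growth = base_2021 * 0.27    # the dollars behind 27% growth in 2021

base_2024 = base_2021 * 2.3         # base roughly doubled by 2023, plus 2024 growth
implied_rate = dollar_growth / base_2024
print(f"{implied_rate:.1%}")        # ~11.7%, inside the 10%-12% range cited
```

The exact percentage depends on the assumed base, but any base a bit more than twice the 2021 figure lands the same dollar growth in the 10%-12% range.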

So we have been growing at a greater-than-20% five-year CAGR for many five-year periods now, and when we looked at this last year, we had talked about, could we grow greater than 20% in the 2020 to 2025 period? And again, I think with the performance that we've had this year, we're obviously well on track to exceed that metric. So what's our next milestone? What's our next target? As we sit here today, in 2023, we're looking at a five-year CAGR for 2022 to 2027, and we believe we can achieve mid-teens growth in that time period.

Now, that does include the 33% growth that we expect to see in 2023, so that leaves us with this double-digit growth rate for the outer years, and we'll talk exactly about how that plays out a little bit more, when we look at the business model chart. Let's turn to gross margin for a second. I know you heard from John earlier about some of the complexity that we've been dealing with on the supply chain, operations, manufacturing side of things, and obviously, that all drives complexity of gross margin. There's a great tendency, when you think about it from a model perspective, to want to hold all things constant and address one of these drivers at a point in time. But the reality is they're all moving together.

So it does drive complexity in calculating and forecasting gross margin and understanding what the drivers are. But we're making progress. We've seen improvements in supply chain. We've seen improvement in lead times. We are seeing reduced broker fees and broker costs. We're stabilizing the manufacturing footprint, and we are finally rationalizing our supply commitments. So all of this is good. We have taken some incremental inventory-related charges as well, just because of the size and scale of the purchase commitments and inventory balances, and the changing tides between AI and classic investments on the part of some of our larger customers have caused some shifting in the forecast. But we are taking that into account as we go, and I think we've taken a reasonable view, and incurred some incremental charges there.

And then, of course, we believe over time, we'll return to some normal pricing negotiations. Longer term, what drives gross margin? Customer mix is still the key driver. In periods where we have large, high-volume, accelerated cloud growth, that will pressure gross margin. Periods where we mix toward enterprise, that will help gross margin. But again, both are very rational at the operating margin line; it's just this divergence at the gross margin line... Now turning to investments. Where are we making investments? What types of investments should we be making, right? We've talked about R&D and why it's so important. It's particularly important on the leading-edge product investments that we're making with our larger customers as they transition into new capabilities. Once we create those capabilities, then we can leverage them across the rest of the business.

Sales and marketing, obviously, as we look to expand our enterprise presence, we need to grow our sales and marketing investment. We're focused on technical sales capabilities. We're focused on tailoring the investments to the different parts of the business. We're focused on targeting larger customers, the Fortune 500, the Global 2,000. We're targeting particular channels and partnerships where we think we can build a very viable business for both us and for the partner. On the G&A side, we just need to be thoughtful and efficient. We'll do what's right and make sure that we keep up with everything that's happening in the business. Sales and marketing is one that gets a lot of commentary and a lot of discussion, both internally and externally. How fast should we go? Are we doing enough? If we went faster, would we garner more revenue growth?

Would we garner bigger parts of the market? And I think you have to step back and think about this in the context that the majority of our sales and marketing investment is actually being applied to the enterprise, right? And if you do that math, you get roughly a mid-teens investment per revenue dollar for the enterprise, right? That still may be a little bit light, and maybe we could do some incremental things, but it's still very much within the bounds of what you'd expect from a large, at-scale enterprise company. Turning to capital allocation.

I think as we step back from the capital allocation question, one thing that we do believe now, having gone head to head, toe to toe, if you like, with larger competitors in a number of different situations where we needed to step into the market and have the same credibility as our large competitors, is that we will always carry significantly more cash on our balance sheet than perhaps a strictly financial model would indicate or mandate, right? We saw this most recently with the supply chain. It was very important for us to be able to step into the supply chain, and we did, and make commitments and have people have confidence in those commitments, even though we were obviously a much smaller footprint than some of our competitors.

The other thing that's changed recently is that now you're getting paid to hold cash, right? Interest rates have increased significantly, and we'll exit 2023 earning roughly $50 million of interest income on our cash balance. So this makes the balance between when we hold cash and when we use cash and return it to shareholders a little bit more interesting, and we are trying to optimize it and make sure that we're doing what is most accretive for the company in the near term. So you'll see us continue to balance those. We will hold the cash, and we'll take the return while the market sustains these types of interest rates.

And at the same time, we know the stock is volatile, even for reasons beyond our control, and we will step into volatile situations and be aggressive in terms of using cash and returning cash to shareholders when those opportunities present themselves. We have $145 million of cash remaining on our current authorization, and the board is again ready to review further authorizations into next year if and when we need them. Purchase commitments and inventory, just a quick touch point on this, because I know it's important to your models from a cash perspective. As you can see, we're making progress. We look at these two together, even though really the inventory is what drives your cash number, but because of the way they're linked, we need to drive them together. I think we'll see sustained improvements in the purchase commitments right through 2024.

It won't necessarily go back to where it was before, because we'll have to keep a balance of purchase commitments for key components that still have 52-week lead times. And then on the inventory side, I think it's flattening out, and we can probably drive some improvements there as we go through 2024, as well. So what does that all mean from a business model perspective? On the left side, you can see what we talked about this time last year. Like I said, I think we're well on our way to providing improvements to that. Then you can see our outlook for 2023, which is really our guidance, midpoints of our guidance from last week. And then look at 2024.

We've talked about this double-digit growth and wanting to be able to achieve double-digit growth, even in a period where spending with some of the different parts of the business is moderating. That gets us to a 10%-12% growth rate for 2024. We do believe that we'll see some expansion in operating margin just because we've had such accelerated growth at the top line, and even though we're making incremental investments, we still believe that some of that will flow through to the bottom line, that we'll be at a roughly 42% operating margin. Then turning to our longer-term outlook of 2022-2027 and our targeted growth expectations in that period. This includes, obviously, our 2023 guidance and outlook that we gave you last week.

Looking forward, it's encompassing our internal goal of achieving at least double-digit growth in those outer years. So yes, this means that the growth rate is moderating, as we've talked about previously, but we're still adding significant incremental revenue to the business. This actually lines up pretty well with Jayshree's journey that she described, where we would hope to achieve the next $5 billion, going from $5 billion to $10 billion, in half of the time that it took us to achieve the first $5 billion. Looking at the rest of the model, we think we can improve gross margin slightly, but still anchoring around the 63%, and we're reserving the right to make some further investments around R&D and sales and marketing to give us this 40% operating margin target for this extended period.
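As a rough consistency check on that "half the time" framing, going from $5 billion to $10 billion in about five years is a doubling, and the implied compound annual growth rate of a five-year doubling can be sketched as:

```python
# Sketch: what CAGR does doubling revenue in ~5 years imply?
doubling_years = 5
implied_cagr = 2 ** (1 / doubling_years) - 1
print(f"{implied_cagr:.1%}")  # ~14.9%, i.e. the mid-teens growth targeted for 2022-2027
```

The five-year horizon here is an assumption read off the "half the time of the first $5 billion" comment, not a stated company timeline.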

That brings us to the conclusion of the finance piece of the presentation, and now I'll hand it off to Liz, and she'll tell us how we get set up for Q&A. Thank you very much for joining us.

Liz Stine
Director of Investor Relations, Arista Networks

... Thank you, Ita. As our executive team sets up for the Q&A panel, please enjoy this short video.

Speaker 23

Traditional network designs had a clear inside and outside, with strong perimeter controls like firewalls and VPNs, meant to keep attackers on the outside. But two things have changed. Enterprise networks have moved to a distributed model that spans everything from traditional campuses and data centers to remote users, IoT, and cloud services. And most attacks now tend to be malware-free, leveraging insider credentials, access, and legitimate applications that are already deployed in the environment. Zero Trust architectures are necessary to defend this kind of environment. And in a Zero Trust world, the age-old model of deploying firewalls between security zones is no longer feasible, since each individual asset needs its own security zone or microperimeter. While firewalls are great as a perimeter-based compliance tool, deploying them at such a granular level is a non-starter, given the significant cost and operational overheads involved.

At Arista, we believe that an organization's existing network infrastructure is the best and often only practical place to build and enforce these microperimeters. Moreover, the network can eliminate security silos by being the unified fabric that underpins key security pillars that sit above the network stack. For instance, within and between cloud workloads, solutions like Zscaler provide tremendous insight and control. Likewise, EDR technologies such as CrowdStrike can observe and act on threats that manifest on the endpoint. Arista's Zero Trust Networking and observability architecture, powered by our network data lake, NetDL, integrates with these and other tools, while also natively delivering seamless capabilities such as network authentication and authorization, data protection in transit, segmentation to build microperimeters, and continuous and real-time risk assessments via threat detection and response.

Each of these capabilities uses Arista's Autonomous Virtual Assist, AVA, to proactively deliver insights by leveraging a variety of artificial intelligence techniques, such as deep neural networks and generative AI, that anticipate network and security operator questions and extract answers from NetDL.

Liz Stine
Director of Investor Relations, Arista Networks

All right, well, welcome to the live Q&A panel. Joining the stage is the entire executive leadership team here at Arista, and we will now be opening up the lines for questions and answers. Out of respect for everyone, please limit yourself to a single question. And with that, we'll kick off our first question with Aaron Rakers from Wells Fargo. Aaron, are you there? Hi, Aaron. Aaron, I think you'll have to unmute yourself. Is he unmuted? Question, Aaron. Give me the question. Can you speak closer to the mic?

Aaron Rakers
Managing Director, Senior Equity Research Analyst, Wells Fargo

The UEC and how that's evolving. I guess there's different architectural approaches to things like congestion control, moving out of the network. I guess what I'm trying to simply ask is: how do you see yourself competitively positioned relative to other Ethernet alternatives, particularly from the largest GPU provider in the market, for these large-scale AI fabrics? How do you delineate the differentiation for Arista in that context?

Jayshree Ullal
CEO, Arista Networks

Yeah. First of all, Aaron, I think there's two approaches to solving AI. One is a vertical stack, where you get the GPU, the NIC, the network, everything from one supplier. And then the other is best of breed, where customers, especially the large-scale customers, will look for the absolutely best network, best NIC, and best GPU. So I think where Arista will succeed is in that second, horizontal, best-of-breed case. And we believe we'll be very differentiated, as you heard from Andy and Hugh's presentations, on dynamic load balancing, availability, visibility, hitless upgrades, as well as some deep congestion control features. UEC will make Ethernet new and improved and better, but with or without UEC, we're already in a lot of pilots where we can significantly improve the congestion control, the end-to-end latency, and the bandwidth scale.

We believe we'll get our fair share. As you know, we described a quantitative view of our fair share, which is we're aiming for at least $750 million in revenue in 2025.

Aaron Rakers
Managing Director, Senior Equity Research Analyst, Wells Fargo

Yeah. Thank you.

Liz Stine
Director of Investor Relations, Arista Networks

Thank you, Aaron.

Jayshree Ullal
CEO, Arista Networks

Can you all hear me, guys?

Liz Stine
Director of Investor Relations, Arista Networks

... We can all hear you now.

Andy Bechtolsheim
Chief Development Officer, Arista Networks

Okay.

Liz Stine
Director of Investor Relations, Arista Networks

Thank you so much, Aaron, and, thanks for your patience as we were working on getting the audio piped into the executives. We'll go to our next question. Our next question comes from Alex Henderson at Needham. Hi, Alex.

Alex Henderson
Senior Analyst, Networking Technology & Optical Equipment, Needham & Company

Great. Thank you so much. I hope you can hear me. I gotta say, I had a hard time hearing the management's response to Aaron's question. So maybe we could turn the sound up from the panel. The question I wanted to ask is, you know, right up my alley on the security side, and I guess Ken would be the best person to target it to. Seems like Zscaler is an outstanding company to tie up with, very nice integration there. I was wondering whether there are additional integrations coming down the pike, you know, with other vendors that you can talk to or whether you're going to ride with the initial horses there.

That's a market growing at a 30%-50% clip, so obviously, a huge opportunity, and I totally agree with your vision there. I'm wondering how much budget you can get out of the firewall market, and, for that matter, the micro-segmentation market, which seems completely irrelevant with your technology. Can you talk to those areas and buckets that you can serve to fund that growth?

Ken Duda
CTO and SVP of Software Engineering, Arista Networks

Thank you. There's several questions hiding in there, so I'll do my best to give a comprehensive view of this. Look, security is a huge opportunity for Arista because we need to move away from security as a point product that you bolt onto the side of the network, to security that's woven through the network. And this is something that we're in an outstanding position to provide, along with partners like Zscaler. I don't have any others to announce for you at this time, but I think our track record speaks for itself. Arista has always supported open solutions, open protocols, standards, and integrating with anyone who is willing and able. We're very much a believer in best of breed when it comes to networking and network architecture and technology.

I think along those lines, we have integrations with VMware already that relate to micro-segmentation. Micro-segmentation works just fine on an Arista fabric. To your point, maybe it's less relevant also in a world where, with macro segmentation, you can create segmentation boundaries for both virtual and physical workloads using the same fabric. So I think customers have multiple options here, and we're in a very good position to be an essential ingredient in the overall solution. I hope that answers some of the question.

Alex Henderson
Senior Analyst, Networking Technology & Optical Equipment, Needham & Company

Yeah. Great. Thank you very much, and you sound much better. Thanks.

Liz Stine
Director of Investor Relations, Arista Networks

Thanks so much. Thanks so much, Alex. Appreciate the question. Our next question will come from Amit Daryanani from Evercore ISI. Hi, Amit. How are you?

Amit Daryanani
Senior Managing Director, IT Hardware & Communications Equipment, Evercore ISI

I'm good. Thanks a lot for doing this. Hopefully, y'all can hear me fine. You know, I guess maybe the question I had, just going back to the AI opportunity. Today, it appears that most of the hyperscalers are using InfiniBand rather than Ethernet. I'm curious, like, what are the two or three or four things you think Arista or the Ethernet consortium broadly needs to solve for, to start to see an uptick in Ethernet deployments on the back end of the network? And maybe related to that, when you talk about $750 million revenue opportunity by 2025, what does that imply? What percent of the back end do you think, back-end network, do you think goes to Ethernet?

Andy Bechtolsheim
Chief Development Officer, Arista Networks

Is this working? Hi. Yeah. So I'll let Jayshree comment on the number question. But on the network question, you know, broadly speaking, if you look at the history of networking dating back, you know, 40, 50 years, there were always some specialty networks like FDDI and ATM and Token Ring, things that aren't even around anymore. Because at the time, there was a perception that they were somehow better or faster, or they did something special, whereas in reality, Ethernet was always the right answer, you know, for all these use cases. Now, today, you know, InfiniBand is tuned for RDMA-type applications, but you can do exactly the same applications on Ethernet, so there's no fundamental technology difference between those two networking technologies, and the overarching advantage of Ethernet is scalability.

In other words, InfiniBand, technically speaking, is a layer two network limited to, you know, 10,000s of nodes, whereas Ethernet can scale to 100,000s and more. So as customers want to build much larger networks, there isn't actually another alternative except Ethernet.

Liz Stine
Director of Investor Relations, Arista Networks

I'd like Anshul to answer the $750 million question because he and I are on the hook for that number.

Anshul Sadana
COO, Arista Networks

Absolutely. And just to add to Andy's comment as well, when you look at, you know, how customers run networks, the operational side is extremely important because you're deploying data centers throughout the world, and you have very few people to go manage, touch the hardware, change things, and so on. These customers have been using our products or similar Ethernet IP-based products for a decade or longer now, very successfully. So for them, there's a lot of interest to leverage that tooling investment and bring it on for AI as well. To the numbers question, you know, we already gave a target, and Jayshree talked about $750 million as a target in 2025 or by 2025.

Now, when you look at the AI market and where we are with Ethernet, there's a lot of work going on with lab trials, design, experiments, building new products. You've heard about 800 gig coming in a year or more. There's linear drive optics, which save us a significant amount of power. So when you put all of that together, these systems are coming along really well. We'll go from small lab trials to pilots next year, and as the 800 gig takes off, to high volume production deployments in 2025.

Liz Stine
Director of Investor Relations, Arista Networks

... Great. Thank you so much for your question, Amit. Our next question comes from Antoine Chkaiban from New Street. Hi, Antoine. How are you doing?

Antoine Chkaiban
Technology Infrastructure Analyst, New Street Research

Hi. Thank you so much for taking my question and for all the color today. So, yeah, I'd like to go back to the $5 billion AI Ethernet TAM in 2027. So, I think NVIDIA compute revenues are set to nearly triple this year and maybe double next year, so that's exceeding $60 billion in 2024, and that would probably correspond to about $100 billion in AI server spend, and say that corresponds maybe to $15 billion-$20 billion in AI networking, and that's just on the 2024 horizon. So I'm just wondering, if $5 billion is Ethernet, that is about a third of the overall AI networking market. Is that the right way to think about it?

Then how should we think about the $5 billion? How does that split between maybe front-end and back-end? Is that just back-end? And maybe even if we could dig further, how does that split between the different network topologies that we discussed today, the different AI use cases and maybe products that you announced today as well?

Jayshree Ullal
CEO, Arista Networks

Yeah. So Antoine, I think the way to think of that $10 billion-$15 billion in 2025 overall is, obviously, today it's predominantly InfiniBand, and Ethernet's coming up very nicely. So there'll still be some InfiniBand in that 2025 number. And then there'll be a lot of other things too, like optical switching, et cetera. So even though we're predominantly looking at that number as the back end, connecting to GPU clusters, not the classic cloud networking front end, it's probably gonna be a third, a third, a third. Ethernet's an important piece, InfiniBand's an important piece, and then there's another piece that's important as well.

Liz Stine
Director of Investor Relations, Arista Networks

Can you guys hear me?

Alex Henderson
Senior Analyst, Networking Technology & Optical Equipment, Needham & Company

Yeah, let's try that.

Jayshree Ullal
CEO, Arista Networks

Okay. Can you? Did you hear the answer, or do I need to repeat it?

Antoine Chkaiban
Technology Infrastructure Analyst, New Street Research

I did hear you, but it was a bit muffled, yes, but I could hear you.

Jayshree Ullal
CEO, Arista Networks

Okay.

Liz Stine
Director of Investor Relations, Arista Networks

Great. Thanks so much.

Antoine Chkaiban
Technology Infrastructure Analyst, New Street Research

Thank you very much.

Liz Stine
Director of Investor Relations, Arista Networks

Apologies for the trouble, but we really thank you for your patience. We're gonna work on getting everybody clear answers. Our next question is gonna come from Samik Chatterjee of JP Morgan. Hi, Samik. Are you there, Samik? Looks like we've unmuted you. We cannot hear you, so we will come back to you. Let's try a question from Ben Reitzes of Melius Research. Hi, Ben. Can we hear you?

Ben Reitzes
Managing Director, Head of Technology Research, Melius Research

Hey, how you doing? Can you... Hope you can hear me. All right, great. I wanted to know, I guess, with regard to the, well, clarification on this AI revenue, you know, I appreciate, by the way, that you're doing a revenue goal and not some order number that we can't figure out or something else, so I really appreciate that. What is the revenue number now, just so we know how fast it's gonna scale? And is that... Are you expecting in this AI revenue for it to be mostly the clouds, at first, and then enterprise coming later? How do you see that mix versus cloud titans versus others, as it plays out through your timeline? Thank you.

Jayshree Ullal
CEO, Arista Networks

So Ben, we're not giving out explicit AI revenue numbers at the moment, but it's very small. We look at it as a very low single-digit percent of our total revenue. As 400 and especially 800 gig accelerate, it's gonna be a larger number, and that's where we're looking to jump in the next two years to that $750 million.

Liz Stine
Director of Investor Relations, Arista Networks

Great.

Ben Reitzes
Managing Director, Head of Technology Research, Melius Research

Okay.

Liz Stine
Director of Investor Relations, Arista Networks

Thank you so much. Thanks so much for the question, Ben. All right, Samik, let's try that again. We've unmuted your line, Samik. You can go ahead and ask your question.

Joe Cardoso
VP, Equity Research, IT Hardware & Networking Equipment, J.P. Morgan

Hey, guys, can you hear me? This is actually Joe Cardoso on for Samik. I saw Ita give me the thumbs up. So yeah, my question: when you talk about your aspirations to reach $10 billion in revenue, can you maybe just talk at a high level about how you're thinking about the composition of that revenue goal and how it compares to the revenue composition today at $5 billion plus? And obviously, specifically, trying to touch on how you're thinking about driving diversification from either a customer perspective and/or a product solution perspective, particularly in the backdrop of the expanding portfolio that you talked about today. Thanks.

Jayshree Ullal
CEO, Arista Networks

I can start, Ita, and certainly you can add. You can look at it from a sector perspective or a product perspective. If you look at it from a sector perspective, particularly since we combined the cloud and the AI titans into one, we think that'll be the most significant sector, likely to be, even when we get to that $10 billion, in the 40% range, because these are big spenders, either for classical front-end cloud networking or for back-end AI. So that combination, I think, will be important. At the same time, one of the fastest growing will be the enterprise, and we expect that to be in the 30s. And that'll be a really, really important one.

It's a collection of many more smaller customers, but we are so underserved and underpenetrated in the Fortune 500 or 1,000 or even Global 2,000, that we've got a lot of headroom in the next 5 years for that. And then the third category, the providers, the service providers and the tier two cloud, well, you know, they tend to be cyclical, but they're big spenders too, and they particularly will be very interested in some of our, not just cloud networking products, but routing adjacencies, software value add, the WAN products we introduced earlier this year. So we think it'll be roughly a 40, 30, 30 kind of split. And on the product side, I still think data centers, as I've often said, are a massive, not just growth opportunity, but large TAM for us.

But more and more, we're gonna be moving not just to data centers but to centers of data. And the centers of data may be in the campus, in the branch, in an AI location, in a WAN, or in a traditional data center. So I think our product mix will vary quite a bit and will look very different than it does today. Did you want to add something?

Ken Duda
CTO and SVP of Software Engineering, Arista Networks

No.

Jayshree Ullal
CEO, Arista Networks

Okay.

Liz Stine
Director of Investor Relations, Arista Networks

Great. Thank you so much. Appreciate the question. Our next question will come from Karl Ackerman at BNP Paribas. Hi, Karl.

Karl Ackerman
Managing Director, Equity Research, Semiconductors & IT Hardware, BNP Paribas

Hi. Can you hear me okay?

Liz Stine
Director of Investor Relations, Arista Networks

I can hear you.

Karl Ackerman
Managing Director, Equity Research, Semiconductors & IT Hardware, BNP Paribas

Great. The AI horse, if you will, has been kicked several times, so I'll try to pivot to something a bit different. Ken, you spoke at length about network as a service. And Jayshree, you spoke about how enterprise is expected to grow as a portion of your overall sales through 2027. I know in the past you have argued that the proof point of service revenue is really just hardware sales. But at the same time, how do we think about the growth of your services and software offering as you seek to expand your enterprise business over the next several years? Thank you.

Ken Duda
CTO and SVP of Software Engineering, Arista Networks

Let me try to tackle that from a technology perspective; I think there's a financial dimension there that I'm not prepared to tackle. From a technology perspective, I think the thing to realize is that this is really all about the management plane: how do customers operate these networks? What we've already done with our CloudVision service is create a service revenue stream for our company, where the customer uses our tools, our online service, to manage their own internal network. And we are broadening and deepening that offering to encompass security, campus, and WAN, to grow beyond data center provisioning, automation, and telemetry into a complete infrastructure-wide management system, including application visibility, network performance monitoring, security, threat hunting, and forensics. The list of things that CloudVision is already capable of is really quite significant.

And I think the opportunity here is to offer enterprises a fundamentally better value proposition: no longer are they cobbling together piece-part solutions and then having to figure out how to manage and automate them all themselves. Instead, they can use a unified EOS stack, our operating system that runs across every switch we make, along with CloudVision managing the entire estate. It's just a fundamentally different approach that will lead to increasing stickiness and an increasing fraction of revenue being service-based. I don't know if Ita or Jayshree wants to comment more on the revenue dimension.

Jayshree Ullal
CEO, Arista Networks

Go ahead, Anshul.

Anshul Sadana
COO, Arista Networks

Sure. Thank you, Jayshree. I'm so glad Ken and the team are so passionate about building technology and fulfilling our customers' needs. When you look at NaaS, or Network as a Service, it can be viewed in two ways. While financially it can be represented to look like subscription or service, there are many companies that are just leasing hardware. That's not what we do. CloudVision as a service, as an example, is truly helping our customers manage global infrastructure from one dashboard, and it's a service they rely on for day-to-day operations. I'll give you an example of that automation: a customer went from over 300 on-site engineers to about 50 automation engineers to run the same footprint. That's the value add we bring to the table. You can't measure this simply as a separate line item.

There's a bigger impact in the pull-through and the stickiness of the product, which is what's making our value prop so much more relevant to the customer. As Ita has mentioned in the past, we won't break it out separately just yet; it has to be truly material. As you know from the last earnings call, our software and services combined are about 16% or so of revenue, and growing. So we believe this will bring a lot of future growth for us as well, at good margins.

Karl Ackerman
Managing Director, Equity Research, Semiconductors & IT Hardware, BNP Paribas

Thank you.

Liz Stine
Director of Investor Relations, Arista Networks

Thanks so much for the question. Your next question is gonna come from Michael Ng of Goldman Sachs. Hi, Michael.

Michael Ng
Managing Director, Global Investment Research, Goldman Sachs

Hey, Liz. Thank you so much for the question and for the presentation. I was wondering if I could ask Andy or Anshul: would you talk about what Arista is doing in linear direct drive, the potential cost benefits and competitive differentiation it might offer, and whether Arista is doing something that other vendors can't easily do? Does it help maintain your competitive advantage? Thank you.

Jayshree Ullal
CEO, Arista Networks

So, um...

Andy Bechtolsheim
Chief Development Officer, Arista Networks

Yeah. Linear pluggable optics save roughly half the power per optic. Optics are typically half the power of the total system, so you're saving a quarter of the power of the system solution. That is a very significant power saving that everybody appreciates. The challenge in implementing linear pluggable optics is that the system design is an end-to-end channel; it has to be absolutely perfect to get the right bit error rate and the right performance. So all of our new next-generation systems are being designed to support linear pluggable optics, and it will work as far as we know today. We can't say what our competitors are doing, but it's not easy to make that work.
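Andy's back-of-the-envelope arithmetic can be sketched in a few lines. This is purely illustrative: the one-half figures below are the rough proportions he cites on the call, not measured data, and the function name is ours.

```python
# Illustrative sketch of the LPO power arithmetic described above.
# The 0.5 defaults are the rough proportions cited on the call, not measurements.

def system_power_savings(optics_share: float = 0.5,
                         lpo_optics_savings: float = 0.5) -> float:
    """Fraction of total system power saved by linear pluggable optics.

    optics_share: fraction of total system power drawn by the optics (~1/2).
    lpo_optics_savings: fraction of optics power that LPO saves (~1/2).
    """
    return optics_share * lpo_optics_savings

if __name__ == "__main__":
    # Optics are ~half of system power; LPO saves ~half of that -> ~25% overall.
    print(f"{system_power_savings():.0%} of total system power saved")
```

Multiplying the two fractions reproduces the "quarter of the power of the system solution" figure Andy quotes.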

Anshul Sadana
COO, Arista Networks

Andy, if I can add one more thing. All of you have seen the headline news: the biggest constraint for AI deployments, and data center deployments in general, an even more significant constraint than the availability of GPUs, is the availability of power.

Andy Bechtolsheim
Chief Development Officer, Arista Networks

So if we can provide a 25% saving on network power to the customer, it's a very attractive value prop, and customers are very anxious to get this into production as quickly as possible.

Michael Ng
Managing Director, Global Investment Research, Goldman Sachs

Great. Thanks, Andy. Thanks, Anshul.

Liz Stine
Director of Investor Relations, Arista Networks

Thanks, Michael. Our next question is going to come from Meta Marshall at Morgan Stanley. Hi, Meta.

Meta Marshall
Executive Director, Senior Equity Analyst, Telecom & Networking Equipment, Morgan Stanley

Great. Thank you so much. Maybe a question from me on the Ultra Ethernet Consortium, and just a couple of pieces there. What are the timelines and milestones for release? Is reaching some of those milestones critical to achieving your $750 million target? And then, how should we think about the technical gaps that Ultra Ethernet solves versus what EOS still needs to bring? I realize that EOS is still the vast majority of that value, but I'm just trying to figure out what pieces Ultra Ethernet might bring. Thanks.

Andy Bechtolsheim
Chief Development Officer, Arista Networks

So, EOS and the Ultra Ethernet Consortium have nothing to do with each other. In other words, our switches with EOS run perfectly fine with current adapters, and they will work even better with future UEC adapters, but there's no interaction per se between the switch product and the NIC. What the future Ultra Ethernet NIC will do is hardware-based retransmission, packet spraying, receiver-based scheduling, all kinds of good things that even InfiniBand does not support today. So it will deliver a best-in-class solution from a performance perspective for HPC and AI applications, which is what it's targeted for. But all of our current efforts are around our standard products, which are not dependent on UEC, and that's the pipeline we have been talking to you about.

Jayshree Ullal
CEO, Arista Networks

Yeah, just to add to that, Meta: our $750 million is based on the fact that we are already improving Ethernet with a beautiful EOS stack that Ken and Hugh have developed and that we have been trialing with our customers. Ultra Ethernet makes that even better, but as Andy said, it makes it better more on the NIC side. And we are not in the NIC business, so it'll complement what we're doing beautifully and hopefully make our $750 million even better in future years.

Meta Marshall
Executive Director, Senior Equity Analyst, Telecom & Networking Equipment, Morgan Stanley

Great. Thanks so much.

Liz Stine
Director of Investor Relations, Arista Networks

Thank you so much, Meta. I think we have time for one last question, and our last question is going to come from Tal Liani at Bank of America. Hi, Tal. Tal, can you unmute yourself? Is he on?

Tal Liani
Managing Director, Equity Research, Data Networking & Networking Security, Bank of America

Here we go. Can you hear me now?

Liz Stine
Director of Investor Relations, Arista Networks

Now we can hear you. Great.

Tal Liani
Managing Director, Equity Research, Data Networking & Networking Security, Bank of America

I was trying to find the right window. I have a follow-up on Meta's question, and I have a clarification on the $750 million. How do you count the $750 million in AI revenues? Meaning, what products are included? Because we are all trying to compare it to what Cisco is saying, and it's not apples to apples at all, and I think you hinted at it. How do you count it?

Jayshree Ullal
CEO, Arista Networks

Yeah.

Tal Liani
Managing Director, Equity Research, Data Networking & Networking Security, Bank of America

The second, on the protocols that you just answered: once Ethernet gets to the next level and the protocol is decided on and implemented, does it make InfiniBand unnecessary, in the sense that it replaces InfiniBand? Or are there going to be use cases for InfiniBand and use cases for Ethernet? How do we think about the comparison between the two?

Jayshree Ullal
CEO, Arista Networks

I'll take the first question, and maybe Andy or Anshul can take the second. So on the first question, our definition of AI Ethernet networking includes products that will be used for AI Ethernet networking use cases strictly in the back-end cluster. We're not counting the front-end ports because it's very difficult to tell whether they're used for AI, cloud networking, DCI use cases, et cetera. So this is a pure definition of all the products that will natively connect to GPUs, largely from NVIDIA, but in the future it could be AMD or Intel as well. That's a very important definition. Secondly, it's not orders, it's not backlog, it's not bookings; it's a commitment to revenue in that 2025 timeframe. Thirdly, we're not counting, you know, all the long-haul optics or anything of that sort.

If it's connected to the GPU and Arista is involved in the purchase, then it would be counted there. But if it's related to the AI cluster and not connected to it, it won't be. So it's a very strict, narrow definition of an AI use case, but it does include multiple products. I expect the 7800 spine to be a workhorse there. We talked about the two-tier leaf designs, and I expect a lot of 7050 or 7060 future leaf products. I expect some 800-gig products that we haven't yet shipped, coming in the 2024 timeframe, including the Distributed EtherLink Spine. So it's a suite of products that all apply to connecting GPUs to build some of the world's largest clusters. Andy, over to you.

Andy Bechtolsheim
Chief Development Officer, Arista Networks

Yeah, and if I could comment on the Ethernet versus InfiniBand question here. I think it's important to understand that InfiniBand and also NVLink are, in fact, single-vendor solutions from the leading GPU company. So yes, they have a very large share where they can bundle and push these solutions, but as far as we can tell, there's no other chip that will connect to InfiniBand or NVLink, right? So it's partially NVIDIA's large market share today that drives the success in this market. However, we do believe there are other very good chips coming down the road, and naturally, they would use Ethernet.

Liz Stine
Director of Investor Relations, Arista Networks

All right. Well, I think that concludes our 2023 Cloud and AI Innovators Analyst Day event. We are going to post a recording of today's event, as well as some supplemental slides on the investor section of our website. Thank you so much for joining us today. Thank you so much for your interest in Arista.