Great. Thank you. Good morning, everyone. I'm Samik Chatterjee, and I cover the hardware and networking companies at JPMorgan. This is a big one. I do have the pleasure of hosting Arista and particularly Jayshree Ullal, the CEO, and Chantelle Breithaupt, who's the CFO for the company. Thank you both for coming out to the conference, and thank you to the audience. I can tell you I'm already seeing questions populate here, so I know this is going to be a busy session, but maybe I'll start you off with AI, understandably. You know, when we think about the incremental opportunity for the company relative to AI, you've done well with the cloud customers for a longer period, even before that. How do you think of AI either being a continuation of that opportunity, or does it layer on top of what you've already done with the cloud companies?
Yeah. First of all, thanks for having me. I just stepped off a red eye, and it's a pleasure for Chantelle and me to be here. I think by now you know Arista is all in on AI, no question about that. It's gone well beyond experimental to pilots and production this year in many of our customers. I would classify our AI endeavors in two categories: networking for AI, where we're building these really high-speed, high-scale, low-latency Etherlink products, as we call them, to support our largest cloud customers and some of our enterprise and smaller cloud customers as well. And then AI for networking, where we're using AI/ML as an assist to do better observability, security, and root cause analysis using some of the technologies we've developed called AVA, Autonomous Virtual Assist.
These are kind of two sides of the same coin. One is really building that robust scale-out network foundation for all that's going on with AI accelerators and GPUs, et cetera. The other is much more of a high-fidelity capability to improve some of the network characteristics.
Got it. You've invested significant resources in your portfolio specifically for AI products.
Yep.
One of the questions we get often from investors is, how do you get confidence around cloud companies continuing to spend on this front?
Yeah.
What has driven you to be willing to spend that much resources on it? I'm sure you have a different perspective.
Yeah. No, we've now been on it for at least three years, perhaps longer. There was a turning point in our efforts, which I would best classify as the aha moment for many of us, in November of 2022, when ChatGPT really got announced and the relationship between OpenAI and Microsoft cemented. That was a very interesting moment for us because what we saw was the advent of what we would call, in networking language, the backend network. Historically, with the cloud, we'd been participating in the frontend, connecting to compute and storage and data center interconnects, et cetera. We'd never really participated in the backend, which had been, you know, much more isolated, kind of like an HPC cluster, or was built off of PCIe or CXL or, in a lot of cases, InfiniBand.
We actually, in 2022, felt like we were outside looking in, where we had the frontend network, but the backend was a hodgepodge of many things. First, we set out to do what we do best, which is look to migrate these kind of isolated technologies into Ethernet. Through our founding efforts on Ultra Ethernet Consortium, we moved to show how Ethernet can really be highly elastic. It can scale. It can deal with a lot of extreme congestion. The traffic of AI is very different than cloud in terms of diversity of flows, in terms of, you know, long-lived versus short-lived, in terms of multiple senders trying to send to one receiver and collapsing of the receiver. Having that Ultra Ethernet or UEC spec come out was very important to us. In parallel with that standard, we started developing products.
I would say the efforts really started in the last three years and are obviously picking up steam. Part of it was having that important scale-out Ethernet-based network to be able to achieve this. Because, you know, if you throw in a lot of GPUs and you do not get the utilization of those GPUs, or the GPUs are inefficient by 30%-50%, you are wasting millions of dollars, all of which you can get back, and more, if you put in a good scale-out network.
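The arithmetic behind that utilization claim can be sketched roughly. All figures below are hypothetical illustrations for scale, not Arista or customer numbers:

```python
# Back-of-envelope sketch of stranded GPU capital when the network stalls training.
# Every figure here is a hypothetical illustration, not Arista or customer data.

def stranded_capital(num_gpus: int, cost_per_gpu: float, idle_fraction: float) -> float:
    """Dollars of GPU spend effectively idle while GPUs wait on the network."""
    return num_gpus * cost_per_gpu * idle_fraction

# A hypothetical 10,000-GPU cluster at $30k per GPU, with the 30%-50%
# inefficiency range mentioned in the conversation:
for idle in (0.30, 0.50):
    wasted = stranded_capital(10_000, 30_000.0, idle)
    print(f"{idle:.0%} idle -> ${wasted:,.0f} of stranded GPU capital")
```

Under these made-up numbers, even the low end of the range strands tens of millions of dollars of compute, which is the money a better scale-out network claws back.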
We can talk about AI a lot more, but before we go further, just help us think about—we do not want to ignore the non-AI part as well.
Sure.
How do you think today about the addressable market for the company in relation to AI versus the non-AI? What are the drivers for the non-AI part, that you see?
It's so interesting that we now suddenly talk about non-AI when that is our biggest market, right? I actually would classify our AI market, our data center and cloud market, and then our enterprise and campus market as three huge pillars. Every one of them is a $20 billion-$25 billion opportunity, which gives us the entirety of a $70 billion TAM. While there's tremendous opportunity in AI, we shouldn't forget the other $50 billion or $60 billion of TAM we have right now, which is where we've been excelling for the last 10 years. The nice thing is there are a lot of synergies between all three because as you build these AI clusters, you're going to have to carry that traffic over the frontend on your cloud, putting pressure on the performance, capabilities, and refresh cycles you will need there.
I think there's a symbiotic relationship between the AI backend and the cloud frontend. The enterprise is something we're super excited about as well. It's more of an AI assist over there, but there's such a large legacy incumbent type of install base with a lot of fatigue, and customers are looking for alternatives there that are different. We used to say nobody gets fired for buying Cisco or IBM, but today, if you don't look at alternatives, you could get fired. We're seeing tremendous opportunity there as well.
Got it. In the traditional cloud, non-AI, whatever you want to call it, you had—
Yeah, I call it classic cloud, like the Coke.
The differentiation was already quite well defined for Arista switches. When you now move to AI products, just help us think about whether the differentiation drivers for Arista's products are the same. Are they even more specific in terms of what AI products need? How should we think about the differentiation you had with the hyperscalers in the classic cloud carrying over to AI?
Right. So just to step back, one of Arista's huge differentiations has been our software stack. You know, we were built to do a much better, modern networking stack for cloud, for enterprise, for the data center, which we call EOS, the Extensible Operating System. With some 15 years of invention in it, close to 20 now, and three generations of enhancements, we still think it's one of the best networking stacks ever built. It applies as much to the cloud as it does to AI. Now, it's based on open networking principles. It's based on Linux, but the foundation we've added, with a publish-subscribe network data lake model and all the features we do for routing, switching, and access list security, has been a tremendous effort. It's a high mountain for anyone to climb. That is a differentiator for the cloud.
That's also a differentiator for AI. Now, there are some unique differentiators in AI above and beyond that, where you have to pay much more attention to the tail latency, not just the packet latency but the message latency. You have to pay much more attention to the diversity of workflows: the fact that there may be some small flows and some large ones, but also extreme traffic patterns, because AI workflows are much, much more compute intensive. You have a vicious cycle of data that you have to deal with. There are some unique characteristics on traffic patterns, on flow management, on latency management, on congestion control that are unique to AI.
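The tail-latency point can be shown in toy form: a collective step across GPUs finishes only when its slowest flow does, so one congested path stalls the whole group. A minimal sketch with purely made-up numbers:

```python
# Toy illustration of tail (message) latency gating an AI collective step:
# the step completes only when the SLOWEST flow finishes, so a single
# congested path stalls every GPU in the group. All numbers are made up.
import random

random.seed(0)

flows = 512                                                   # flows in one collective step
latencies = [random.gauss(100.0, 5.0) for _ in range(flows)]  # microseconds
latencies[7] = 400.0                                          # one flow caught in congestion

mean_latency = sum(latencies) / flows
step_time = max(latencies)        # completion is gated by the tail, not the mean
print(f"mean flow latency ~{mean_latency:.0f} us; step takes {step_time:.0f} us")
```

The mean barely moves, but the step takes four times longer than a typical flow, which is why congestion control and flow management matter so much more here than average packet latency.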
The beauty is we're able to build on the software stack we already have. In fact, not only are we able to build on it, we're bringing that all the way down to the host. You guys might have seen our announcement with NVIDIA, where we can bring some of these AI agentic capabilities right onto the NIC or host so that not only are we doing it at a network-wide level, but we can do holistic visibility down to the host as well.
Jayshree, for the—I do not want to put words in your mouth, but overall, if there was a software differentiation in the classic cloud, it sounds like the software differentiation or the moat is even higher for Arista in AI.
Yeah, it's cloud plus, right? The cloud plus being you can do much more latency analysis, data analysis, and sampling, et cetera. It's required even more because, you know, if you thought the cloud was moving fast, the AI traffic is moving 10x faster. Everything from visibility to automation has to be done that much better.
Got it. If I then switch gears, you've done really well with your key hyperscale customers, Meta and Microsoft, but beyond that, we've seen limited willingness from, say, a Google or Amazon to use Arista, and they've stuck to white box. Maybe just help us think about how AI changes that. Do you see potentially a change because of AI, given the amount of complexity associated with it?
Yeah. I think just going back to the Google and Amazon question, one of the things that inspired Arista and how we began and started shipping products in 2008 is we saw that Google and Amazon had to build their own internal data center network because nobody could give them that non-blocking leaf spine active-active software experience. We said, wait a minute, why is that? You know, why is it these guys are having to put thousands of engineers to build it? That is how Arista was born. It should be no surprise that, you know, they are still hanging on to their network and love it as much as we love EOS. However, to your point on AI, as they build these large intra-data center clusters, it does create opportunity outside for inter-AI and inter-data center capabilities.
I can't speak to specific customers, but the more AI traffic they put inside the data center, the more opportunity we can have in the periphery for data center interconnects and other use cases that can include routing as well. We're not completely out of those customers, but obviously the intra-data center network got built over the last 10 years-15 years, and they love it as much as we love our implementation.
Got it. There's a lot of conversation around white box, but I think when investors talk about white box, they tend to miss that there's really no white box outside the hyperscalers. And when you think about the tier two cloud companies really emerging as big spenders, how do you think about the opportunity there now, particularly compared to the ramp in the classic cloud? How do you see the opportunity differently?
It's funny. I was telling somebody just this morning that if you go look at the very first messages that came out after we went public in 2014, it was these two things. It was, there's the threat of the white box for Arista, and there's the concentration of two customers, Microsoft and Meta. And 11 years later, we're still saying the same thing. You know, it must show that while this is a recurring trend, Arista is doing two things extremely well: continuing to maintain relevance with these important customers and at the same time continuing to coexist with white box. We've never said they weren't there, but where it makes sense, we work with them. Arista is still the uncontested leader in the spine, and we work on a lot of leaf cases that are not white box.
It is important to understand we embrace the white box as part of our offering too, in terms of working with the SONiC operating system or the FBOSS operating system. You are absolutely right to say these are a class of customers that can invest 1,000 development engineers and therefore do different things. Not every customer can do that, so it is limited to a minority, albeit a large one, of customers willing to take the do-it-yourself approach, if you will, and at the same time augment it with Arista-type approaches.
I don't think that'll change, but I think it'll be a heavy lift to go much beyond that because then every enterprise customer or even tier two cloud provider would need hundreds, if not thousands, of engineers, and, you know, at the end of the day, nobody can afford that kind of thing. They look for TCO. TCO comes from good CapEx, but also good OpEx. The beauty of Arista is, in fact, that we provide the combination of both and reduce their automation time. I was just with a customer recently who told me that, with our platform, 88,000 commands and configuration changes that would otherwise have taken them days, if not months, came down to 37 minutes. That kind of advantage doesn't come if you don't really apply the discipline of the right automation and visibility.
I'll just follow up on the second part of that. NeoClouds or the tier two clouds, as they're called, like, how are you evaluating the opportunity there? What are the engagements there like at this point?
Yeah, quite good. We're doing well with both NeoClouds and enterprise customers. They're much smaller configurations. A lot of times they're also non-NVIDIA GPU configurations, where these NeoClouds are looking to create some differentiation. I'd say it's early days, and they're much smaller than the large cloud or AI titan deployments.
Got it. Moving to another topic that comes up a lot: how should we think about the differentiation Arista brings in the spine layer of a data center, and particularly how critical are the Jericho chips to that differentiation? I'm sure you've seen these questions as well, but do we envision a change in the landscape where the importance of having a Jericho-enabled switch moderates over time?
Yeah. No, that's a very good question. We've always had two suites of products: the Jericho, which is our value premium product with a virtual output queuing architecture, highly, you know, differentiated buffering that can deal with massive congestion management. And you can imagine AI is extremely sensitive to that congestion management. If you're going to take chances and not use the Jericho, then we have the Trident and Tomahawk family with smaller buffers, in which case the customer or we have to work together to optimize those buffers very carefully. Once again, you're trading off CapEx for OpEx because if you don't use the high buffer, then you have to spend more time thinking about where are all the congestion managements and put a lot of design and thought into that, right? How differentiated is the spine? Very differentiated. Arista is known for this value-added capability.
We're probably one of Broadcom's largest customers for the Jericho family. And, you know, 80% of the spines we deploy for large clusters and cloud and AI deployments are based on value-added buffering and Jericho. There are customers we have that go the other route and take an approach where they'll try and do the optimizations themselves. And they have noticed that they have to do a lot more work than letting the products do the work for them. So it's kind of, you know, 80% of one and 20% of the other, but either way, there's work to do. Either the products have to do the work or the people have to do the work.
Yeah. Just to clarify, when you say there are some customers that choose the alternate, is that using Tomahawk in the spine?
Yeah, there are customers who can, in smaller configurations, use Tomahawk in the spine. One trade-off there is that you have to have a lot more cables and optics, because instead of a two-tier architecture you'll end up building a three-tier one; you trade off a higher-end spine for a lot more optics and cables. That's another important thing: cable management and optics management. We just finished installing tens of thousands of GPUs with one of our customers. They had to bring in 1,000 people just to do the cable and optic management. It's all about trade-offs. You can do it on the switches, or you can do it on the cables, or you can do it with people.
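A rough sketch of that cables-and-optics trade-off, assuming a 1:1 oversubscribed fat-tree where every link needs a cable plus a transceiver at each end. This is purely illustrative topology arithmetic, not a specific Arista design:

```python
# Rough topology arithmetic: adding a network tier multiplies links, and every
# link needs a cable plus an optic at each end. Purely illustrative; assumes a
# 1:1 oversubscribed fat-tree, not any specific Arista or customer design.

def optics_two_tier(hosts: int) -> int:
    host_links = hosts  # one link per host up to its leaf
    uplinks = hosts     # 1:1 oversubscription: leaf->spine capacity matches host links
    return 2 * (host_links + uplinks)   # one optic per link end

def optics_three_tier(hosts: int) -> int:
    # a spine -> super-spine tier adds another hosts-worth of links
    return optics_two_tier(hosts) + 2 * hosts

for hosts in (4_096, 8_192):
    print(f"{hosts} hosts: two-tier {optics_two_tier(hosts)} optics, "
          f"three-tier {optics_three_tier(hosts)} optics")
```

Under these assumptions the third tier adds 50% more optics and cables to install and manage, which is the trade-off against using a bigger-buffered, higher-end spine that keeps the fabric at two tiers.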
Okay. Okay. Fair. Optics is a good segue. Can you talk about, one, the need to support technologies like LPO, LRO that the industry has already talked about? Does it really change anything materially in terms of economics for Arista when you go and support those? Are those just more adjacent technologies that you just need to enable at some point?
Right. Arista's maniacal focus is networking, but networking has to connect to other things as well. One of the most important connections is pluggable optics. We've been big fans of those, and we've supported them all along. That does not mean we build the lasers and the optics, but we make sure they work. That is an important system-level advantage we bring, everything from the security to the reliability to the troubleshooting of those optics. We're huge fans of pluggable optics because in that configuration, not only can it work, but it can work long distances and short distances, 10 kilometers-100 kilometers. We can do levels of encryption. You mentioned LPO.
Two years ago at the Optical Fiber Conference, Andy Bechtolsheim and my team showcased long-haul pluggable optics, where we were able to drive long distances using the electrical SerDes on our switches, without a DSP. Yes, this is a huge advantage. That was a proof of concept. Today, in the AI and cloud world, we're ready for LRO, LPO, all of the different pluggable optics, because they give you the best density, the best reliability, and the best distance and power management as well. You could save a third of the power by using LPOs. It's pretty significant.
Yeah. Maybe taking that discussion forward to co-packaged optics. Obviously, one of your, I guess you can call it competitors, as well as enablers of the AI ecosystem, has discussed co-packaged optics. I mean, maybe from an investor standpoint, there's a question more about when will Arista be ready with a co-packaged optics solution. Secondly, how do you view the opportunity around the market? Do you see it as a sort of eventuality that everyone has to go to co-packaged optics, or do you see multiple other technology solutions before?
No, I definitely see multiple. I think for troubleshooting and reliability and manufacturability, there's no beating pluggable optics. Now, if co-packaged optics has better density advantages and cost advantages, then some will naturally experiment with it. You know, CPO has been around for 10 years-20 years. This is not a new concept, but I think it's getting another life; you know, it's like a cat with nine lives. And definitely, if there are advantages there that pluggable optics doesn't give in terms of density or power, we will surely study it. We will always, though, also look at the reliability and troubleshooting mechanisms because in any kind of production environment, you need all three. You need reliability, you need power, and you need extreme cost efficiency with high density. We're big fans of all forms and shapes of optics.
We've generally seen pluggable to be more preferred, but if co-packaged copper or co-packaged optics, particularly in rack levels, as you start to build high-density AI racks, starts to become interesting, we'll definitely embrace it as well.
Okay. One more technology question. A lot of conversation recently about optical circuit switches.
Yeah.
Google uses them in the spine. What does that imply if more hyperscalers move to adopting OCS?
Yeah. Maybe that's an answer we'd have to ask Google for. I would say there's only one customer I've seen adopt optical switches, and they've made it work. As a mainstream technology, even with AI, I'm not seeing that. I have seen many more deployments of the AI spine connecting to different kinds of leaves. I think, once again, when you try and optimize for one type of problem down at the physical layer, nothing says it can't work, but then again, you give up a lot by not having a full-fledged network that can give you the right security, routing capabilities, VXLAN, access list segmentation, et cetera. I'm not knocking it. I'm sure it can work, but I don't see it as a mainstream way of doing things.
Okay. Yeah. Let me bring in Chantelle here; it probably wouldn't be right if I didn't ask a question on the second half guide, and you can take a shot at it as well.
Second half of which year are we talking about now?
Which year you're willing to give us?
Last year. I can tell you we did well.
That's what I got. The question is, really, you had a strong Q1. Your 2Q guide is materially above the Street, and now you're implying almost plateauing at this 2Q revenue level for the rest of the year to get to your full year guide. We get the macro uncertainty, but in terms of really seeing any tangible slowdown in any customer vertical, what is giving you that concern, or what are you really watching for if we had to say, okay, this is what makes you feel better when you come back next quarter for the guide? What does that have to be?
Yeah. So thank you for the question. If you heard the last earnings call, thank you. We did have a great Q1 and, I think, a pretty robust guide for Q2. We see momentum on the top line, and we do not see that changing as we look through this year. We want to distinguish that from the choices going into this earnings print, which were to hold the guide, change the guide, or pull the guide, given the tariff situation. I want to distinguish the top line momentum from a tariff conversation. The full year guide was held until we get the answers on the reciprocal tariffs, July 9th, I think it is, so we can have one cohesive P&L update to the guide. This is not momentum dropping, a shift changing, or a re-rating of anything that we are thinking.
It's really, let's say, we'll give you the enthusiasm we're seeing on the top line, wait for the reciprocal tariff answer, and then come back with a full year guide that's connected. That's what I think we would leave it at until we get to the next quarter of conversation. We're very excited. You know, you heard us on the call talk about we had a 60%-62% gross margin guide this year, thinking it was going to be a cloud AI-heavy year. As we went through Q1 and looking into Q2, we're pleased that we're seeing performance demonstrated across all of our customer segments. Very happy to see that. We'll continue to have that conversation. It's part of why we guided Q2 at 63%, raised the operating margin to 46% at that time in our guide for Q2.
We will see what the year can be once we get an answer on the tariffs because our supply chain does touch Mexico, Malaysia, Vietnam. We do have exemption under the USMCA, so that's great. A little bit from China for us, so the news today does not really change our world on that conversation. We are really waiting to see what happens with Malaysia and Vietnam. That guide staying at 60%-62% in the call was to say if we took the top end and did no mitigation, did no price pass through to our customers, that would be the so-called worst-case top-end scenario. We will be very happy to come back and see what the landscape is at the next call.
Okay. Got it. Let me just open it up to see if anyone in the audience has a question; otherwise we can continue. Anything in the audience? Okay. Maybe, Jayshree, I'll ask you, just breaking the order here a bit. One question that I've seen from investors, and which has come up even in discussions at the conference, is in relation to your management team and the changes there. You did have a bunch of announcements on the last earnings call, and for most investors, I know they've been in the Arista stock for a long time, this is the first time they've seen that much management change happen at the same time. There is some level of hesitation in terms of what's going on.
So, what's the broader, holistic view that you're taking on the management team, and how do you reassure investors that you'll reach a stable state with the management team?
Yeah. So first of all, I believe we've already reached a stable state with the management team. You know, change is inevitable. I think we were fortunate and blessed to have a management team work together for 8 years-15 years like we did. That is unusual in our industry. Usually, change happens a lot faster. I think financial success and aging created some choices for people. You know, some of them chose to retire, and some of them chose to do something different. You know, anybody who has a 15-year tenure, or even an eight-year tenure, in Silicon Valley should be commended because that's not the average tenure, right? That all came together. The changes came over a period of two to three years, and we actually have them more often.
We just do not announce them, but we announced these because they were officers, and we had to, and we wanted to. I am actually very pleased with the changes we have made in the management team, and I believe they really set us up for the next 10 years because they are generational. They are all much younger. At the same time, they all have tremendous experience at Arista and outside, an average of 10 years-15 years at least, which is rare in networking. You know, it really demonstrates our next-generation bench strength. At the same time, we do not rule out bringing in additional new leaders as well. We want to do both.
Our culture is so unique that bringing in an outside leader and, as Chantelle would say, keeping Arista weird is not an easy thing to do, because from the outside, we look like a big corporation with a $100 billion-plus market cap. Inside, we're a nimble, agile startup that's highly engineering-driven, highly customer-driven, highly quality-driven, and we do not have a lot of middle management, and therefore, it requires much more from our executive management and our individual contributors. Every executive leader is not just managing. They are contributing. They have their own individual contributions. That style is very different. I do not think I have seen it in any other company I have been in, and it is really unique to Arista. We want to preserve that uniqueness.
As much as we miss some of the leaders who left us or retired, I feel very good that we have improved and made the team stronger and better for the long run.
Yeah. Are there more management changes or additions coming?
Yeah, I would expect there are.
Okay. Okay. Just changing gears here. We were talking recently to one of the leaders in the distribution and reseller channel, and one of the comments he made about Arista really stood out to us, which is why we wanted to ask you this question: compared to other companies, Cisco, for example, or HPE, they've seen everyone addressing or attacking the AI landscape with a lot more partnerships, whereas Arista has been at its execution best.
Solo.
At the same time, solo.
Solo.
I really wanted to get that across as a question: do you see something that's very well done on a standalone, solo basis that cannot be done in a partnership, or do you even see the need for the partnerships that the rest of your peers are pursuing at this point?
I think when it comes to outstanding engineering, we're very pleased with being solo because we just do it better than anyone else. The bench strength there, the breadth and depth, is stronger than ever. When it comes to solving customer problems, we don't feel the need to be solo. We'll absolutely partner with resellers, partners, and technology partners because you've got to make it work. You know, even with our competitors: a few weeks ago, we were on a call at a large bank where you can't be solo because the network is connecting to everything. An application may be responding to the network, or something may have changed in the DNS or naming system or configuration. It's not our nature to be solo. It's absolutely important to be multi-vendor and interoperable. I think depending on who you talk to, you'll see two sides to us.
You'll always see us build the best innovation and best technology, and you'll always see us delighting customers. If a partner helps us in that process, we'll very much work together. If the customer says, "No, I want you to do it," we'll be right there with them. You're probably hearing from the resellers more now because, in the past, it was much more of a direct connection in the data center. In the campus and enterprise, we are getting more and more committed to the reseller partner network we are cultivating, internationally and in the US, because we can't be everywhere. I think solo isn't our only mode of working, but it has been when it came to engineering and innovation.
Okay. Got it. There's one question here, which is on Nexthop, but I'll rephrase it for you and make it broader, which is Nexthop is a new company trying to enter the space. When you think about other similar companies that might look to enter the space, given the attractive TAM, I mean, Arista was once a new company in this area. How do you think about the sort of barrier to entry or how tough is it for new suppliers to really come into the space?
I'll go back to, again, when Arista entered. You know, it was very hard to battle an incumbent and become a switch vendor unless you had some differentiation. As you know, Arista worked five years to build a software stack and then worked with merchant silicon vendors to really have differentiation. That's very different from the white box. The white box operates at a 10% margin. It's more about systems integration of general-purpose hardware and open software, such as SONiC. There's a market for that, which has been proven by many vendors, whether it's Celestica or Accton or you name it. I think of that more as a systems integration, low-margin business. I think of what Arista does as a value-added premium, you know, 60%+ margin business. Different strokes for different folks.
Yep. Got it. Last question from my end. Talk about the enterprise opportunity, and I'll sort of bifurcate into one, the opportunity on the data center side. How much of this is still about going and battling the incumbent versus on the campus side where you've sort of significantly enhanced your efforts, but there's probably more to come and it's a much bigger revenue TAM as well.
Yeah. Look, everybody starts at the bottom rung of a ladder. In the data center, when we first started, we were looking at a $20 billion market. And you know, in our first years, we were tens of millions, right? So you've got to start somewhere. The campus is a massive market, but it's been a more dormant market, particularly during the pandemic, when people were still not in their offices. I see a big change now in the post-pandemic campus, where it's gone from that dormant state to a high-expectation state, if you will. Just like the data center morphed to the cloud and people needed cloud characteristics, the campus in the post-pandemic world is moving to a much more active state where they need wireless and wired to be equal citizens.
They need the same leaf spine topology so they're not building parallel networks for wired and wireless. They need a level of automation and security and branch networking so you can connect not just large campuses but smaller. I think we've got a lot more opportunity ahead of us now that the campus market's gone from dormant to active state, just like the data center market.
Yeah. Just one last follow-up there, something we've been wondering about for a while: with all this AI opportunity you're pursuing in the cloud, has the amount of time you can focus on enterprise and campus at a management level come down, or is it pretty much going at the same rate under the covers in terms of resources?
Do you want to talk about that?
Yeah, absolutely.
Yeah, I would say full throttle on all of the topics. You know, I think AI just gets the attention given the questions you've been asking today, the great questions. There is no doubt I spend a good part of my time on enterprise and campus along with the sales team and the engineering teams. It's bifurcated. It's full focus. There isn't a favorite child when we're diversifying revenue. So we're happy to serve all those markets, and we're working very effectively on each of them.
Yeah, and I would agree with that. That's why the management depth is so important. We've got our enterprise leaders with, you know, Ashwin, Chris, and now another Chris. We've got our cloud leaders.
When we segment the market, our leaders can spend more time in the individual areas, but I end up spending 50-50 time between the cloud AI on one side and enterprise on the other.
The great thing is the product portfolio can serve all those use cases. It is a great thing.
They are interconnected, for sure. We are still doing networking. We are not building an entirely different business, right?
I will wrap it up there, but thank you, everyone. Thank you for coming to the panel.
Thank you, everybody.
Thank you to the audience.
Thank you. Thank you.