Arm Holdings plc (ARM)

Arm Everywhere: Investor Session

Mar 24, 2026

Ami Badani
CMO, Arm

Hi, everyone. My name is Ami Badani. I think I've met some of you, but probably not all of you. I'm Arm's Chief Marketing Officer. As you saw in the keynote, huge, huge moment for the company, so thank you all for being here. A little bit about myself. I've been at the company for two years. Prior to joining Arm, I was at NVIDIA running the BlueField DPU software business. Then in my early years, I was actually an investment banker at both Goldman Sachs and JP Morgan. Kind of had a bunch of different roles throughout. We're gonna spend the next several hours together, and I'll kinda walk you through what that looks like.

We're gonna have each of the general managers of the businesses come up and talk about the product strategy as it relates to our three businesses. Chris Bergey will talk about the Edge AI business unit, Drew Henry will talk about the Physical AI business unit, and then Mohamed Awad will talk about the Cloud AI business unit. They will all talk about the product strategy. We will have Jason Child, who's our Chief Financial Officer, come up and talk about the financials at a company level. At the end, we will do a 45-minute Q&A where we will have all of the general managers on stage, including Rene and Jason. Please hold your questions till the very end.

Each of the business units, again, will go through their strategy, and then you can ask questions to each of them at the very end. With that, I'll hand it off, and sorry, I also wanted to introduce Sharbani, who will go into the software section after all of the business units, 'cause that's a very important piece of our business that we wanted everyone to understand. With that, I'll hand it off to Chris Bergey, who will talk about the Edge AI business.

Operator

Please welcome Arm's Executive Vice President, Edge AI, Chris Bergey.

Chris Bergey
EVP of Edge AI, Arm

All right. Thank you all for being here. Obviously an exciting day for Arm. I was asked to just give a little bit of my background. I've got some old faces I see here in the audience. For those who don't know me, I am going to actually celebrate my 30th year in the semiconductor industry this summer, having started out of engineering school at AMD. I spent almost a decade at Broadcom, and joining Arm a little over six years ago was the first time I actually worked for an IP company versus a silicon company, and I did some startups in between there.

For me, it's exciting to see Arm get into silicon, but I'm super excited about what I represent and what we're doing inside of the Edge business unit. As you're probably aware, the Edge business unit actually represents a lot of the legacy Arm business. Even with as strong a footprint as we have, many of you may be surprised to see the amount of growth that we believe we're going to see over the next five years from a TAM perspective, and that's about a 40% increase in TAM. Now, why is this happening? Not a big surprise: AI. Essentially, what is happening is AI workloads are coming to all kinds of devices.

Obviously, it's been pretty exciting to see the OpenClaw moment that Rene mentioned earlier, and this has been really fun for us because we've been talking about agentic AI for over a year now, and it was really hard for people to connect and have that aha moment with, "Well, why would I want a personal agent, and why would I do these things?" Now you're starting to see these technologies come to market, and you start seeing the concepts, which is great. But I think the other thing you see with many of these early agentic use cases, or even some of the prototyped agentic OSs, if you've seen any of that work being done on the development side, is that the concepts are great, but the performance is not there.

I can tell you that this has come through in spades for me. I actually just spent the last several weeks traveling between Mobile World Congress in Barcelona and Embedded World in Germany, and the number of customers that came to me and said, "Chris, you know what? We need to up our compute. We need to think about going to more advanced nodes than we were thinking about. I need to think about a more powerful CPU complex. I need to think about more memory bandwidth." This is really what's gonna drive much of this growth that we're expecting.

When you look at where the Edge AI opportunity for Arm is inside of my business unit, obviously mobile continues to be an incredible driver for us, and these devices continue to be essential to us, and we believe that many of the agentic services that are gonna start running on top of those devices are gonna drive an incredible amount of additional silicon content. One of the questions I get there is, "Well, wait a minute, Chris, you know, what about the dynamics around pricing?" We're obviously seeing some of the tensions that are happening already today around memory and storage.

When you actually look at it economically and you look at the cost of many of those tokens in the cloud versus doing them locally, there are quite a few economic drivers, there are quite a few performance drivers around latency and real-time response, and then there are also a lot of requirements around privacy: where does that data stay, how do you protect some of those things, and where do the agents have access to that data. We see a huge amount of growth opportunity in mobile, and I'm gonna talk more about that as we get into our CSS strategy. The second thing is the intelligent edge.

Obviously this is a huge category of devices, but what you see across these devices, whether they're a consumer device, an enterprise device, or an industrial device, is AI coming into those devices and people looking at new use cases and new capabilities. One of the analogies I like to use is touchscreens. You remember, you know, the iPhone is almost 20 years old, the iPad is about 15 years old, and that was when we really started using touch. Now, if you give a child a screen and it doesn't have touch, they basically give it back to you and say, "I don't like this. This is not an experience I know." What you're really gonna see in all these devices is that's how essential AI is going to be.

The AI experience, the device is intelligent, it's able to sense things you wanna do, you're able to have conversations with it, all those kinds of activities. We see this across the board, both around existing categories of devices and then new devices that you can see around, you know, XR platforms and the like. We see a lot of exciting things going on around personal AI computing and how does AI change the way we have these compute paradigms in our homes, in our lives. We'll talk more about that. The first thing to touch upon, Rene obviously did a great job this morning talking about this, has been the journey from IP to CSS. This has been quite transformative for us.

The conversation that I would have with so many of our customers was, "Chris, we're not having the best performance in the market." Right? "If you could just help me get the best Geekbench score, if you could just help me get the best SPEC score, I could get more for my product. I could get more market share. I could do better, and I'd be willing to share some of that with you, Arm, because you're helping me be successful." This was really the birth of CSS and what we did for CSS inside of the Edge AI business unit. What we did is we looked at what it really takes to make a world-class product.

Really what you need to do is optimize all the way from the transistor through the libraries, up through the RTL, up through the physical implementation, and then as well work very closely on the software stack. This is what we have done with CSS. I think you heard YH, the president of Samsung Semi, mention in his video the amount of work that we do with leading companies like Samsung, like TSMC, and the like, to make sure we are optimizing that full stack. What we do is we give customers not just the RTL and the IP (the GPU IP, leading CPU IP, system IP), but we actually are able to stitch that together, and then we also give them the recipe.

We say, "Here you go. This product in the latest node, at 4 GHz, gets you this score." For some customers, that's amazing because they say, "You know what? I'm new at this. I'm just an OEM. I wanna get into silicon, so give me that recipe and I'm gonna cut and paste that right into my chip." Now, we have more legacy customers that say, "Hey, we think we're really good at that too," and we say, "Great." What we give them is a benchmark, because many times their engineers say, "We don't think 4 GHz is possible." We say, "Here it is." They get a chance to try to beat it and do 4.1 GHz, 4.2 GHz.

This has become a very virtuous cycle for us in really making sure that our partners are able to do very well in the markets that they participate in. You can see this in our results. As of the quarter that's ending this month, we have now reached the point where 25% of our mobile royalties will be coming from CSS. We're not stopping there. We actually have a tremendous amount of growth, as you can see from the slides, that we believe we're gonna be able to pick up as we enable even more partners to do even better in their markets. Obviously, this agentic drive for more performance across all price bands becomes very, very interesting to us. The opportunity goes beyond mobile.

A couple of segments that I like to talk about: one is kind of the emerging XR-type segment. I don't know about you guys, but obviously, I like Meta on the data center side. It was great to see Santosh this morning. I think their glasses are awesome, and I think I own three pairs at this point in time. I definitely am a full-time user. It gives you an idea of next-generation platforms. All of those glasses are Arm-powered, and in fact, the wristband shown there is actually a neural wristband; now, when they have the display glasses, you can basically start turning the volume up and down, doing these kinds of activities. That also is Arm-powered, based on our Ethos NPU and MCUs.

Again, we're really able to help enable these next-generation platforms that are starting to use AI-type features in all kinds of different power envelopes, because that is actually one of the most difficult computing platforms right now: because of the battery life, the sensitivity to weight on your face, the temperature concerns, and all those kinds of things. Beyond that type of platform, which to me is also driving a whole rebirth in wearables, we're also seeing a tremendous opportunity around personal AI computing. One of the products I like to point out is another CSS product, the GB10 coming from NVIDIA. What you're seeing here now is this new category of devices where people are saying, "Hey, I wanna run agentic AI." You can see what people are talking about from an OpenClaw point of view.

You see great YouTube videos and examples as people are playing with these things. We very much believe that there's going to be a tremendous amount of computing platform opportunity here. In fact, some of the products in this category, I think you know, are sold out; you can't get 'em. Well, what are the attributes of these products? One, they have a tremendous number of Armv9-powered Arm CPUs. Two, they've got great GPU subsystems, whether that comes from Arm or from other partners' GPUs. And third, they have an amazing UMA memory system that's able to provide the bandwidth that those AI platforms need. We believe this is gonna drive a huge growth platform. As I just mentioned, I came back from China.

It is OpenClaw crazy there right now, in terms of the number of platforms people are trying to play with and the future types of products they wanna build. What I think is also exciting is how this is translating into what you see happening in the PC industry. What I really see there is a really exciting transition happening. Obviously, Arm has made a big push around Windows on Arm. We've made some nice gains over the last several years. But what we're seeing now is something totally different. What you see now, really at that premium level, is people thinking about AI workloads in the personal computing space. It could be running agents, but it's also about the next-generation creator.

You know, when I think about PC creating, it always was about publishing, websites, those kinds of activities. Now that's not a creator. A creator is creating videos, AI-generated videos. They're using Stable Diffusion to create new graphics capabilities. Those kinds of platforms, again, are a tremendous strength for Arm's partners, and we believe that Arm will be able to participate in a very meaningful way at that premium part of this personal AI computing market. What's also happening, interestingly, is at the opposite end of the market, which is really focused on the more traditional client use cases. You're starting to see all of these efficient computing platforms come out, whether it's Google thinking about how they're gonna merge Android and Chrome together for Aluminium, or potentially Microsoft and how they're doing 365 Link.

Then last year you have Apple coming in with their neo platforms. With the tightness around memory and storage, I really believe you're also gonna see this more efficient category really start to eat up a significant amount of the PC market share, and that's a great story for Arm and our partners because they participate in many of those markets today. We have amazing power efficiency. We can get amazing memory efficiency. Not to mention our GPU footprint. Many of you know, but maybe it's a secret for some: Arm is the highest-volume GPU shipper in the world. In fact, we've shipped over 12 billion chips that utilize our GPUs. We're also making a significant amount of AI investment in this space around our GPU platform.

Expect to see more from us and our partners in that area. When you think about the CSS opportunity in this space, it's very different than mobile. This is not a space in which we have a significant amount of market share today. This is really a growth opportunity for us. You're seeing the projections that we believe in, and this will be a significant driver for us as we bring those CSS platforms into many of these other computing areas. This is a key strategic area for us, and we're getting tremendous leverage from what we're doing already on the mobile and GPU side. Lastly, I wanna touch upon just a little bit about AI software, and Sharbani is gonna talk more about the importance of software. Rene obviously touched upon it earlier today.

I wanna just nail this point. Armv9 is absolutely the most secure and advanced AI CPU architecture out there. Richard Grisenthwaite was up on the screen there. He is our Chief Architect. What he and the team have put together with our partners is the most advanced platform. SME2, our Scalable Matrix Extensions, is now shipping in all the leading handsets, both from iOS as well as Android. What you're going to see is those platforms proliferate. At this point in time, we're approaching about 50% value share on Armv9 penetration. That's gonna actually go up to 85% in just the next two years. You're seeing Armv9 go everywhere. AI is gonna enable us to take that into other markets as well because of the AI footprint that we have here.

Again, you can see it's the leading OSs, the leading applications and tools, and then lastly, the leading AI frameworks. As many of you know, many of the really tough challenges around AI are around software frameworks: how do you make it easy for users to migrate their workloads, how do you reduce the quantization challenges, and how do you make it just easy to run AI. A year and a half ago, or almost two years ago, we introduced KleidiAI at Computex, and it has been transformational for us, because KleidiAI is a library that allows developers to integrate into their platform such that when their AI workload lands on our Arm CPU, it is able to use the advanced instructions available on that platform. This is what has been integrated now in the frameworks that you see below.

I'll let Sharbani tell more about that to all of you later on today. With that, I'm gonna hand it over to Drew Henry.

Operator

Please welcome Arm's Executive Vice President, Physical AI, Drew Henry.

Drew Henry
EVP of Physical AI, Arm

What an amazing day. You know, days like this don't happen often in a company's history, and for me personally, it's just an exciting moment to be participating in it. I have had an opportunity to participate in a lot of new technologies being brought into various different markets over my career. This one ranks at the top for me personally.

As many of you may know, I spent quite a bit of time in the early days at Silicon Graphics, actually building out compute platforms where we had to build our own silicon on these little tiny wafers that were like this big, and then moving on from there to NVIDIA, taping out and launching a whole bunch of 500 mm² die platforms when I was doing compute GPUs for Jensen. I've been doing stuff in this industry for a very long time, and it's fun to participate in all those other kinds of programs.

This one in particular is a big one for us. Rene walked through that history of kinda where things started when the company went private. I joined Arm shortly after that, and I joined specifically to help start the infrastructure business that's now this Cloud AI business. As a matter of fact, I remember Mohamed and I one day standing in front of a whiteboard trying to figure out how we help customers build stuff faster. That's when we first came up with the early ideas of what we called at the time the virtual SoC, or what's now become CSS.

That journey from when we were starting that to now was not just about figuring out how to get new products into the marketplace or starting new businesses and the like; it was also about transforming the company. For the last number of years, I've been involved in taking our physical design group, a group that would do very low-level silicon optimizations for the world's top foundries. We would work with them to help them build and make sure that they had the most optimized silicon in the wafers that they were putting through the foundries.

Then we'd work very closely with our partners to make sure that they could get up to the clock rates and speeds that Chris was talking about. That group eventually transformed into a group that Steve Holter now runs, which is our silicon group. At the same time, I've been working on how we take our entire software ecosystem and get it optimized, because if Arm is gonna be participating in doing CSSs and doing silicon and stuff like that, you also have to participate and help people be able to optimize their silicon platforms. I'd been helping with that for quite some time.

Rene asked me just recently to start this Physical AI group for us, and I told him I was delighted to do so, because I think that Physical AI is gonna be the most interesting computing platform in the history of computing. I'll get into a bit of that in a second. Physical AI, just so that you understand what I mean by that, is where AI is embodied in a machine. That machine has to sense, decide, and act safely in the physical world to be able to take action.

I like to think of it this way: the key metric of this particular industry is latency, the time between a photon hitting a sensor and an actuator actually firing. If you're traveling, you know, 65 miles an hour down the road and something is detected by that car that's operating autonomously, that moment from which that sensor detects something to when the braking system or steering system fires is the key attribute. It's an incredibly complicated computing problem. It is, I think, going to become one of the most profound computing markets in our industry and in its history.

To understand a bit about the TAM here, compared to what Chris showed and compared to what Mo has talked about, today we think about the Physical AI TAM as being relatively small compared to some of these others, of course: about $25 billion a year or so, moving up to about $50 billion a year. At $25 billion a year, it's really not hard to do the math on that. The Physical AI space today is principally automotive platforms beginning to move to more autonomous platforms, with 100 million-ish transportation and associated vehicles sold each year. So you can kinda get a sense for that market.

Industrial robotics, which is the robotics space that's really generating value today in the world, is only about 500,000 units a year. It's not a big unit opportunity today. That'll grow over the next five years or so, and it's growing as a result of compute content growing inside all these devices: compute content going from the kind of traditional automotive platforms that you see today to these much more autonomous platforms that are starting now to show up with robotaxis and the like. The units aren't increasing dramatically, but the compute content is.

You're seeing a large increase in total compute content, which is driving this doubling of the TAM over this particular time. I had to beg and plead with the team to say, "Please let me show something a little bit past 2031." This is where I think the hockey stick happens for this particular market. I think that this market will grow past 2031 to a TAM of $200 billion, and don't ask me, "What year, Drew, is this thing gonna hit?" because I can't tell you exactly when that's gonna happen, except I absolutely, to my core, believe it is gonna happen. It's gonna happen; this hockey stick is gonna go up high. You've heard this from others.

This is really a result of Physical AI being embodied in robotics platforms, and humanoid robotics platforms in particular. I was in China last week with all the humanoid robotics companies, working on some of the new computing platforms for them. It's really quite amazing how fast this is going to go. I believe this is what's gonna happen, and I think, frankly, the $200 billion undercalls it. I think it'll be a $1 trillion TAM sometime in that timeframe. You gotta get ready for it. Of course, these are platforms that span across a whole bunch of different types of form factors in the transportation space.

It's, you know, everything from the traditional cars that are becoming more autonomous now, and then eventually, of course, autonomous robotaxis and the like. We're seeing that, you know, if you're just cruising around here in San Francisco, you see them now all over the place. Of course it extends into autonomous trucking and autonomous heavy machinery and the like. I'm working on programs right now with customers where we're talking literally about how you deploy an entire fleet of heavy equipment into a construction project, and that heavy equipment actually goes in and does all the work that it needs to do autonomously, which will be quite remarkable.

Of course it's moving into robotics, and those robotics platforms will be everything from robotics in humanoid form factors for doing a whole bunch of other types of projects, to surgical robotics and things that are used in industrial applications and in warehousing and logistics and the like. Of course there are security platforms, and then medical delivery by drones and food delivery and all this kind of stuff. All these things are happening. The key thing across all of it is that compute is key to it. This is what we've been getting ready for.

Similar to what Chris was talking about for his businesses, for the Physical AI business we've been preparing for this by getting our platforms ready to help people become more vertically oriented. For the longest time at Arm, of course, we just provided people the individual components of IP, and we've, you know, generation to generation, improved the capabilities of those platforms, even now to Armv9. But it's not sufficient to just offer all this stuff as, you know, bags of capabilities. You need to be able to help people build stuff faster, and that was the key thing.

We saw this early on in the infrastructure business: the vertical OEMs wanted to go off and also build Arm platforms similar to what AWS was doing, and they wanted to get to market fast. They wanted to get their own silicon in the market. How do you do that? You do that by actually enabling them to be able to do that themselves. So instead of saying, "Here's all this stuff. Try to figure out how to build it yourself," we come in and say, "Hey, listen, we know it's hard to do this stuff, so we're going to curate it for you. We're gonna design it into a compute subsystem.

We're going to make it something that you're gonna be able to build much more quickly into your platforms because you're going vertical." I think that we're in a decade or more of more and more companies going vertical in the platforms that they're building, for a whole bunch of different reasons that we can talk about sometime. This move from individual bits of IP to these curated designs, where we've designed the subsystem itself, has been a big move for us, and one that's worked out quite successfully. What it means for our business is that we've moved from the Armv8-type architectures into these Armv9-type architectures, which Chris described the benefit of quite eloquently.

Because we put a ton of investment into that, there's a lot more value brought into it. That itself resulted in a doubling of the royalty rates that we collect just on those particular types of platforms. Then when you move into CSS, it's a doubling of the doubling. We are building on top of Armv9, and we then again increase the amount of value that we bring to our customers. Of course, in exchange for the value that we bring to them, we get value back, and that's increased royalty rates. To understand what it means to get faster time to market or to reduce the amount of labor necessary to build stuff: this is hundreds of millions of dollars of value to our customers, hundreds of millions of dollars of value.

We've done the analysis on it. You can't show up to your customer and say that I'm increasing the royalty rates without also showing up and saying, "Here is my financial analysis that shows you the value that we're bringing to you when you get a product in the market faster, when you actually reduce the amount of time it takes for you to build it." Things like being able to provide a compute subsystem so that they can actually tape out a chip and take that to market without making any changes to that chip, that's a huge economic value to them. That's the success that we've had with doing these platforms like our CSS platforms. These CSS platforms are now designed and optimized in the physical design space.

They are designed and optimized now to be able to support this world of Physical AI. This is the work that we've been doing. As you move on and understand kind of what's happening as the world transitions over time, you've got to understand where we're coming from. This is where there are significant investments that we're making. The vast majority of vehicles that are on the planet today, of course, are traditional automotive platforms. Arm has been in this space for a very, very long time. We created the Physical AI business unit just a few months ago, but we've been involved in these business and market areas for quite some time, decades.

For instance, you might be surprised to know that in the last 12 months, just into the Physical AI space, we've shipped over two billion Arm devices through our ecosystem to support this space. It's because there is an awful lot of compute that goes into these platforms. In the traditional automotive space, it goes everywhere from the actuation systems that manage the braking, manage the acceleration, and monitor a bunch of different devices that are inside the vehicle, all the way through to the central compute that now does things like, for many companies, keeping you in lane or helping monitor speeds and stuff like that, to the more sophisticated platforms that you see in the systems that are becoming more autonomous. That's all moving now to new levels of compute inside these autonomous vehicles.

The big thing that's changing is the architectures in these platforms. These architectures are evolving to become much more complicated. Again, with that complication comes more value that we bring into it. An autonomous vehicle has substantially more Arm value brought into it than you see even in traditional automotive platforms. Because we invest so heavily in those traditional automotive platforms and help them advance, they're putting a lot of technology into those platforms, and then all of that is moving into the autonomy platforms as well. What's happening in these autonomous car platforms is you're seeing, again, an increasing amount of compute being put into them, an increasing amount of compute built on top of Arm. Our CSS platforms are very enabling of this, including our latest platform, which is called the Zena CSS.

The big thing now, as you move through this progression from automotive platforms with ADAS capabilities into these autonomous vehicle platforms, is that the vehicle is moving itself. When the vehicle is moving itself, it's a big investment not just into the platform that moves the vehicle around, that manages the autonomy, where decisions have to be made in milliseconds or faster, but you also have to improve the information that's displayed inside the vehicle, because if you're not driving, what are you doing, right? You want to know where you're going. You want to see mapping systems. You want big displays that are going to be in these vehicles, in addition to entertainment that's going to be provided to you.

There's going to be an awful lot of change that's going to happen inside these vehicles that's perfectly set up for the way that our business models are set. Then, of course, all of this moves into robotics. The same platforms now that are going into the autonomous vehicles are the same platforms that are going into robotics, empowering robotics. The reason the world and industry are so ready to move from autonomous vehicles into autonomous robotics, and why so many autonomous vehicle companies are also making autonomous robotics investments and building those kinds of products, is because the computer is the same. It's a much more complicated actuation system. As a matter of fact, the humanoid robotics platform is hands down the most complicated computing system ever. Incredibly complicated.

Controlling all the actuation systems, being able to ensure that you can move a pinky appropriately at the same time that you're walking, at the same time that you might be getting instructions as something talks to the robotics platform, and the like. It's an incredibly complicated platform. The computing is substantially increased, and as a result, our royalties could substantially increase. I think it's going to be a fast-growth product for us. What's interesting, when you actually take a step back and look at it all, is that you begin to recognize there are about four planes of computing in these autonomy platforms. The first is the perception-driven intelligence, which we are incredibly good at designing. These things have to run in robotics platforms and in automotive platforms.

This is the platform responsible for how something moves around, how it makes autonomous decisions about how a car is going to drive around a city, or how a humanoid robotics platform or some other type of robotics platform makes its way around a factory floor or a home, whatever. That's a very complicated computing platform, one of the most difficult you can build because of the real-time aspects. You've got to make decisions in milliseconds, not nanoseconds. Then there's a completely separate compute domain, the interaction-driven compute domain. That's where you're conversing, or you're sitting in an autonomous car and say, hey, wait, stop. I want to get a coffee right over there. The autonomous vehicle has to recognize exactly the context of that command you just made.

It's got to be communicating to the actual drive system that's managing the drive. All of this is incredibly complicated. It moves into how you manage the actuation system. It moves into how the entire system is connected through the cloud. It's an incredibly complicated platform and one in which we are really, really well situated. Examples of that span the industry. We've been working with the absolute pioneers in this space, Tesla of course. I drive a Tesla, and I drive 95% of my time now fully autonomously in my Tesla. We've been working with Tesla for a very long time. You've also seen announcements from us with Nuro, which is doing an autonomous platform to help companies that want to have autonomous vehicles go off and do that.

They're building platforms for that using our technology. Rivian just announced last December that they now have an AI platform that they've built for themselves. Again, companies are going vertical, and our ability to enable them to build that vertical platform themselves, to build the silicon platform they want using our technologies, Armv9 technologies and our CSS technology, is what enables all of this. Then there are robotics platforms like Agility Robotics, which also builds on top of the Arm platform today. All of these companies are examples of real-life applications of people moving from Armv8 to Armv9, or moving from Armv9 into CSS platforms, and being able to use these systems that we provide.

The interesting thing, as Chris was saying, is how this adoption happens. In our space, the Physical AI space, adoption happens a little bit slower, because it takes a little bit longer to put these platforms in the marketplace given the safety requirements imposed on them. You wanna make sure these platforms are safe. You have to do a certification program to enable that. We move along just a little bit slower than some other industries in the adoption of these types of technologies, but the adoption is absolutely happening. We're moving into the Armv9 world, where Armv9 is now actually becoming part of our royalty collections.

CSS builds on top of that doubling I referred to before, to again increase the growth of this particular industry for us. Like I said, this all builds through the rest of this decade, before the big growth happens as we move into the beginning of the next decade, which we are incredibly well positioned for. That's why I tell people that we are right at the center of this incredibly large market opportunity sitting right on the horizon for us, a $200 billion per year TAM, I think probably even larger, you know, when this really begins to hit its stride.

Like I said, I'm spending a lot of time in China looking at actuation systems and how those will come down in cost. That's how involved we are in the ecosystem. This is gonna be an incredibly fast-growing market for us when it hits that portion of the hockey stick, and we are really well positioned for it. We wouldn't be well positioned for it if we weren't making investments in software. That's my final point. We invest substantially in software ecosystems. As a matter of fact, we work very hard to migrate any platform on any legacy architecture and make sure it's optimized and moved into the Arm ecosystem.

As described earlier, you know, that's becoming incredibly easy for us to enable now, for a whole bunch of different reasons. We have a very strong, incredibly mature software ecosystem that we're able to rely on, running on all the platforms we support, which means we are very well situated for this growth into the Physical AI space. It builds on top of the investments we've made over a number of years in building out these kinds of platforms, moving from Armv8 to Armv9 to CSS platforms, and on the intimacy we have with customers going vertical, wanting to build these kinds of platforms.

We're bringing kind of all the capabilities of our ecosystem together to enable this, but it does all require this very rich software ecosystem. All right, I'm gonna turn it over now to Mohamed, who's gonna come back up and close on the opportunities now in cloud.

Operator

Please welcome to the stage Arm's Executive Vice President, Cloud AI, Mohamed Awad.

Mohamed Awad
EVP of Cloud AI, Arm

This feels like déjà vu a little bit. I'm hoping the slides work well for me this time. Let's go through it and see where we're at. Obviously, our opportunity in cloud is huge. You know, we're seeing a tremendous amount of traction in lots of different ways, and I'm gonna spend a lot of time kinda talking you through how I think about the business, how I think about the market opportunity, and also how I think about, you know, our go-to-market strategy: how we engage the customer through a few different channels now. One is IP, another is CSS, and then we've got our AGI CPU as well. I'm gonna kinda talk you through that. Let's just level set on the size of the market.

You know, like I said, the numbers are big, right? Accelerators obviously make up a huge chunk of the potential. You know, I'm not gonna talk too much about accelerators today, but I am gonna just highlight, and I think it's important to understand, that the massive growth we're seeing in the accelerator space is actually driving a tremendous amount of growth in the other areas as well. I don't think that can be overstated. You've seen that, you know, with sort of the explosion of data center CPU demand recently and kinda moving forward, and we're actually seeing that in other markets as well. As these things become more and more agentic, the demand around CPUs continues to grow.

We think the data center CPU market will be well over $100 billion by 2030, our fiscal year ending 2031. You know, Jason is gonna go into some of this in a little more detail later.

I wanna show you a different view, because when I think about the product view or the go-to-market view, I actually think about it slightly differently than that. I first think about it in terms of, first and foremost, the cloud more broadly. These are your big hyperscalers traditionally, and then the folks that are driving big cloud infrastructure deployments. Think AWS and your Googles and your Metas of the world. It's also your Cloudflare and those types of players who are driving these big infrastructure deployments. There are sorta multiple places that we play with partners like that. I mean, first off, we play obviously in the high-performance server space. Just think the general-purpose compute server.

That is absolutely a place where we have a bunch of content, and we continue to work with them through things like CSS and IP in some cases. But beyond that, we also engage these customers in other aspects of their infrastructure. Think DPUs or networking equipment. All of their storage devices have Arm. There's actually Arm all over the data center, in a lot of places that don't get talked about very often. Those are other avenues where we're engaging these customers, sometimes not directly. In fact, oftentimes we're engaging these customers through ASIC partners who take our IP, build silicon, and then feed into those devices. The next segment is really enterprise.

Enterprise is, you know, an area where historically we haven't done a whole lot in terms of general-purpose compute, and that's because of the dynamics around a channel and having an actual product that we could sell into that area. These customers are gonna be much less likely to, you know, build silicon. In fact, none of them build silicon, right? What they do is look for silicon off the shelf, and they look to buy silicon in, right? Our presence within these enterprise players has historically been limited to wherever our silicon partners go off and engage. Think, you know, networking, storage, security, again, all areas with those appliance-type devices, and we're all over them.

We partner with companies like Broadcom and Marvell and others to kinda go service this chunk of the market. Now, obviously with AGI CPU, there is an opportunity to go after some of the higher-performance-type devices, and I can talk about that a little bit in a second here. Excuse me. Finally, we've got what we call wireless and edge. You know, this is primarily dominated by a lot of the big telco guys, so think the Nokias and the Ericssons of the world, et cetera. It's a similar sort of story with these guys, though it's actually more of a hybrid. In the cloud world, we engage almost always directly with those customers, sometimes through semiconductor partners, but very, very directly with them.

In the enterprise, it's almost always through semiconductor partners. In the wireless and edge space, it's kind of a hybrid. There are opportunities to work directly with them. They're oftentimes licensees of ours and build their own silicon, but they also go and partner with our semiconductor ecosystem partners. Again, in each of these, there's a little bit of a different go-to-market motion going on, and AGI is driving up demand across all three of these segments. You know, when we started the business, I think Drew mentioned we originally called it the infrastructure business, it was really about developing IP and providing that IP to the customers.

In some ways, even the idea of developing IP specifically for cloud is a relatively new concept. I mean, we really just launched Arm Neoverse, you heard me talk about that a little bit earlier today, back in 2019. Prior to that, the story was pretty straightforward. We would hand customers a document, a big document, and a phone number, and say go for it. Go build a processor. You can imagine the sort of barrier to entry associated with that. We were helping them by building software or trying to get the software ported, but all of the aspects of actually building the CPU and the system IP and so on were not something that we spent a whole lot of time on.

That shifted when we launched Neoverse, and the impact of that shift was a dramatic reduction in the barrier to entry, a dramatic improvement in customers' ability to go off and build products, right? That's why in 2019, 2020, you saw AWS kind of emerge. Since then, we've continued to lower the barrier to entry, and that's what CSS was really about. It was like, hey, how do we lower the barrier to entry even further so we can, A, capture more value, yes, but B, also accelerate time to market for the customers? We saw that happen with CSS.

The first customer we provided CSS to gave us a great quote, which some of you may have heard me say before: because they had CSS as opposed to the discrete IP, it saved them about 80 person-years' worth of engineering work. We had another customer where, from the time we handed them the CSS to the time they had silicon back running Linux, and this includes time through the fab, it was less than 18 months. That's kind of unheard of in the semiconductor industry.

It was really about accelerating time to market and helping those customers get there, and then obviously accelerating our own value capture. The result of all this, since 2019, has been pretty clear, whether that be with the traditional hyperscalers or with others. It's really been this kind of, you know, hockey stick-like ramp, which I was showing earlier today. You know, at the end of the day, Arm Neoverse is really about, hey, how do we deliver more growth, more cores, higher ASP, and greater volume? You know, this chart is super impressive. I think one of the most impressive parts of this chart, which I don't think people quite grasp, right, is that from 2019 to 2025, that's already tremendous volume, and we're moving forward.

Things are really starting to kick up, and people kinda ask why, and this is the answer. We're just getting going. I mean, the reality of even the CSS story is that some of these hyperscalers are literally just starting to deploy their products in earnest now. They've launched them, and they're really ramping them now, and we have visibility into what their plans are, and that's what drives our sort of confidence level just in the base CSS business. This is before I talk about AGI CPU. This is before I talk about things like, you know, 6G base station refresh cycles, or what the impact is on networking, or how the enterprise is gonna react to agentics. That's before I even talk about that.

I've got customers, hyperscalers, the largest consumers of compute, that are just now taking their products to market, and that's driving a tremendous amount of upside for us. We see that kinda happening moving forward. It's super exciting. By the way, as I said earlier, the whole goal of CSS and the whole goal of the IP was to accelerate that and allow these customers to get to market faster, to build more products, et cetera, et cetera. You're seeing them now start to get to market, and it turns out that if you kinda look at this a different way, it's actually really interesting, because what you see is exactly what I just described. You had initial IP. You had one or two customers that were able to take that IP and effectively productize it and drive real volume.

Now, since we've launched CSS, you see that ramp continue to move forward at a pretty amazing rate. Now, what's really interesting about this slide, you know, both Drew and Chris talked about this idea of a doubling of a doubling, and we've got a kinda similar dynamic going on here. I hadn't thought of it in that way, so I won't frame it in that way, but structurally, it's the same thing. You've got multiple tailwinds happening here. First is the share tailwind, which I just described. You've got these hyperscalers which have now deployed products or are now deploying products, and so that's accelerating and moving forward. Great. We get more ASP per core because we've moved to CSS. That's also good. So, capturing share and higher ASP. That's a good story.

Then you've got a growing TAM on top of all that. Not only are you capturing percentage share at each of these customers and capturing more ASP value at all these customers, but you've got increased units as well. That's what's driving all of this revenue upside. Now, for me, that's exciting. I hope you guys are excited too. I gotta tell you, what's even more exciting in my mind is kind of the story of the day, because, you know, you've got all of these growth drivers in our base IP and CSS business, and then, if we kinda keep going, we've got this massive market expansion that's now opened up to us, right? That market expansion means what?

It means that, you know, we have this opportunity to capture so much more value from those existing customers while also addressing a bunch of customers in new ways that we historically couldn't. Remember that whole enterprise piece, or all of those Tier 2 hyperscalers which, even with CSS, really can't go off and build their own silicon, or companies like Meta. These are all places that we weren't able to capture with the legacy IP and CSS business. Now we have SoCs to go after them, which I think is creating a tremendous opportunity for us. The only thing I would say about this slide, and we talked about it a little bit earlier today, I think Rene mentioned it, and I mentioned it as well, and I really wanna drill this point home 'cause it's so important.

Candidly, one of our biggest strategic advantages, in my opinion, is that we'll give you any one of these. If you're a hyperscaler and you're looking to build a DPU, sure, I'll give you IP. If you're a hyperscaler and you wanna build a chiplet or a piece of silicon that's very tightly coupled to your accelerator and you need a CSS, I've got that for you too. You wanna buy silicon off the shelf? Sure, I can give you that too. Now you can use any one of these things across your infrastructure. The reality is none of these players are buying only one piece of silicon. They're building complete systems. Now we have an offering and an option for them across that entire range.

That is very uniquely Arm and something that really only we can do, which gives us a tremendous amount of leverage going forward, because it gives them a tremendous amount of leverage in terms of their software story. I don't think most people quite grasp the significance of that when they think about their overall infrastructure plans. This isn't just about market expansion. It's about making sure that we can go off and address and meet those customers wherever they are. Let's talk about some of the specific customers we have for Arm AGI CPU and the diverse customer demand that we have, because this actually feeds back into that same point. Let's talk about the AI data center for a minute.

What you see up there are a bunch of customers who are looking at using it for the agentic use case, the one that we talked about earlier. Remember when Rene said agents never sleep? I love that line 'cause they don't. They keep going all the time. AGI CPU is great for that, but it's also great for that head node capability. A lot of these guys are building accelerators. Guess where they're getting some of the IP for their accelerators? Not the core IP, but some of that connectivity IP. They're getting it from us. What can we do? We can provide AGI CPU, and then we can provide some of the interconnect technology or some of the other technology on their side to make that link, that connection that much better.

That's part of the story that we think about with one of these guys. In fact, almost all of these guys are licensees as well, which is an important part of the story. On the cloud side of the house, what you see is actually a couple of different specific interesting use cases sort of emerge. If you look at SAP, that's a really interesting customer, because what SAP has done is they've actually moved their entire HANA database over to Arm in the cloud. They're loud and proud about the fact that they're an AWS Graviton customer, and they're gonna continue being a great AWS Graviton customer. We helped them do that porting work as part of our IP and CSS business. The reason why they're interested in AGI CPU is very simple. They're seeing all these benefits from Graviton in the cloud.

They run a hybrid environment, and then they look at what they're getting in their own data centers, because they've got some sovereign data requirements in places like Europe, and they can't necessarily get the same benefits that they're getting in the cloud. All of a sudden they're like, "Hey, how do I get the same benefits that I'm getting with Graviton, but in my own data center?" AGI CPU allows them to do that, and guess what? You heard me say earlier, there are, you know, well over 10,000 customers using Arm in the cloud today. Those are all potentials for AGI CPU. Cloudflare is another great example. They kinda fall into a little bit of a different camp. I don't wanna be derogatory and call them a Tier 2 hyperscaler, 'cause that's not really what I mean.

What I mean is they're not at sufficient scale to go off and build their own silicon. They'd love the advantages that you get with moving to Arm, but they haven't had a product. They've been underserved. How do they get those TCO benefits? Arm AGI CPU allows them to. That's why they're interested in Arm. You look at F5 Networks, and I like these guys for a completely different reason, which is they build what I would lovingly call an appliance, right? They build a bunch of these kind of appliance boxes, and typically the folks building these appliance boxes, whether it's networking, storage, or security, have a bunch of SKUs in their range. They've got high-end SKUs and low-end SKUs.

The low-end SKUs, you know, usually have to be pretty power efficient, and they all use Arm for that. All is maybe too strong of a word, but many of them use Arm. We have a tremendous presence in that area. Okay? When it came to the higher-performance SKUs, there was no off-the-shelf offering that would work. What did they have to do? At the low end of the range, they had to use one code base for efficiency, and at the high end of the range, they had to use a different code base for performance. The reason why these guys love Arm AGI CPU is that it allows them to streamline their entire code base. It becomes a very straightforward story, so they can go off and adopt the technology.

Tremendous opportunities here, and that's the way we think about the market when we go off and attack it. When we look forward and talk about our roadmap, the things that we're gonna continue to lean into are those same things that have made Arm AGI CPU so exciting to customers already: best-in-class performance, best-in-class scalability, and best-in-class efficiency. You heard me say it over and over again earlier today, and those really are the things that we're gonna lean into. We're gonna make sure that we're leading in terms of interfaces. We're gonna make sure that we're leading in terms of memory and IO, you know, and that includes looking at things like NVLink. That includes looking at things like next-generation PCIe.

That includes looking at all different memory technologies to make sure we optimize for those three key attributes over and over and over again because we are mission-focused on kinda solving this problem for the industry. Of course, there's a tremendous opportunity here, and, you know, Jason's gonna talk you through the numbers in quite a bit of detail. You saw the sort of hockey stick. You know where the numbers are in terms of our royalty revenue. The opportunity for AGI CPU we think is enormous. We are incredibly excited about it, and I'll let Jason provide some more details in his session in a minute here.

Underlying all of this, and the thing that I just wanna emphasize, like both Chris and Drew did, is that our software ecosystem sitting on top of that Arm Neoverse compute platform, regardless of how you digest that platform, whether via IP, compute subsystem, or AGI CPU, is the foundation of what has put us in such a position of growth and why this business is so exciting right now. Thank you.

Operator

Please welcome Arm's Vice President, AI Services, Sharbani Roy.

Sharbani Roy
VP of AI Services, Arm

Hi. I'm Sharbani, and I'm one of the over 2,100 teammates we have here at Arm obsessing every day about software. I feel like Drew and Chris and Mohamed kind of did my job for me already. It's probably a good time to underscore that you need software to unlock the power of incredible hardware. You can't have great hardware without great software.

That's why the breadth, depth, and value that we provide to our partners through our software ecosystem is our advantage. Our ecosystem is massive. Rene already called out that we have 22+ million developers building on Arm. 22 million. I'm pretty sure that makes this the biggest software ecosystem there is. Now, when I think about this breadth, it covers our full platform. I wanted to pull together all of these logos that you've seen. You know, we covered each of these kind of individually, and this is just a representation of the various companies, the various partners that we work with, but it goes from cloud to edge.

It goes from the phones that are in your pockets to the laptops that some of you are working on right now to write amazing reports talking about all of the awesomeness that we are bringing forth here today. It's helping to power the agents that you're gonna use to help you write your next reports, to help you figure out what your next strategies are gonna be. It's in the cars, it's in the robots, it's everywhere. This ecosystem is absolutely massive, and it's not just about it being the most ubiquitous compute platform. We also have incredible depth. We go up and down the stack. If you look, you can see we've got the best operating systems all the way up to your favorite applications, like Santosh talked about earlier today.

Now, when it comes to the depth, the depth of our software ecosystem is our durable advantage. We've been at this for a while. We've been doing this for 15 years. We have over 1,300 open source and upstream projects that we work on, and we have over 50,000 partners, 50,000 companies that we collaborate with. 22 million developers building on Arm. 50,000 companies. That's a lot of elbow grease. We've been working really hard. Right, Drew? Yeah.

Yeah. Like I said, we work up and down the stack. We start at firmware, doing kernel-level optimizations, and work up through open source, through all the operating systems, all the way up to the applications, and we're incredibly strategic about how we think about where we need to optimize software. That's a lot of software, a lot of territory to cover, so we have to think about how we're gonna get the absolute best leverage and the best ROI when we're investing in each of these projects. That means a lot of times we're thinking about optimizations at the operating system level or, like Chris was talking about, with our KleidiAI optimizations.

We wanna be thinking about how we're not just working with partners but also putting things out into the ecosystem, helping our partners with their proprietary stacks, so the software optimizations that we provide run seamlessly from cloud to edge and just work behind the scenes most of the time, like magic. When I think about what we wanna obsess over when we're trying to drive for the depth of these optimizations, I'm gonna tell y'all the same thing I tell my team every day. We obsess over three KPIs: perf per watt, making it as easy as possible for developers to build on Arm, and making it as fast as possible for them to deliver innovation into the market through those applications that each of you love to use every day.
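As a rough illustration of that first KPI (all numbers here are hypothetical, not Arm figures), perf per watt is simply useful work delivered per unit of power, so a chip that is slower in absolute terms can still win on efficiency:

```python
def perf_per_watt(throughput: float, power_watts: float) -> float:
    """Efficiency metric: units of work per second, per watt consumed."""
    return throughput / power_watts

# Hypothetical chips: A does more absolute work,
# but B delivers more work per watt.
chip_a = perf_per_watt(throughput=500.0, power_watts=100.0)  # 5.0
chip_b = perf_per_watt(throughput=400.0, power_watts=60.0)   # ~6.67
print(f"A: {chip_a:.2f} work/s/W, B: {chip_b:.2f} work/s/W")
```

The metric names and numbers are illustrative only; the point is that efficiency, not raw throughput, is what is being optimized.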

These advantages only compound with the AI economy, with our agentic future. How many of y'all are using agents today? I'm gonna hope a lot more after this session. I'm gonna let you in on a hot tip. I like to think about agents like apprentices. Each of you is a great artist in what you're trying to do, and an apprentice can really help you amplify and, you know, provide you with all of the additional support you need to have your next great innovation. To do that, they have to do a lot of experimentation. They're running all the time. They gotta be running as fast as possible, and that means they're gonna have to swap in and out a lot of different types of software.

When you set your agent to go run off and solve something for you, you want it to have good judgment and be able to quickly swap out different models, or even swap out different cloud providers, so that you're not blowing the bank on, you know, your latest experiment, or, you know, if it's something that's absolutely mission critical. The good news is the agents are gonna deliver the work to your fingertips, to your phone, to your laptop, because when you think about our ubiquitous platform, that's all running on Arm. Agents are gonna be delivering the value where Arm is actually serving you the best.

The software ecosystem comes into play because our software has been completely optimized across the board from cloud to edge, so that these agents can really easily go and pick and choose software that's gonna work on those devices. They don't have to think about it. It's just gonna work. It's gonna run great on Arm. These advantages only compound. That's breadth and depth, and I wanna talk a little bit more about the value. We provide our partners with incredible value by helping them gain horizontal leverage across the Arm platform. I wanna jump into an example with Meta. Seems appropriate. We heard a lot from Santosh and from Paul today. Meta was facing a challenge, and we've been working with Meta for years.

It's not just on the hardware side but also on the software side, and this is something that's near and dear to my heart as well. You know, Meta is famous for having a single stack, a monorepo, where you have a developer who is pushing something new, building one of your favorite applications, a new feature, and whenever they push something out into the internal code base, everybody's able to go and pick that up, right? It's gotta meet an incredibly high bar, but you have to have horizontal leverage within the company so anybody can pick up the best in class, the latest and greatest, 'cause they're always at the cutting edge.

This means that when we're partnering with Meta on software, we have to bring our absolute A-game. Now, they had a challenge, because they wanted to unlock the value from some of our new architecture, from our CPUs. Chris talked a little bit about KleidiAI and our SME2 capabilities. That's incredible performance, incredible perf per watt that we're driving for. But think about how models are actually made. You're training massive models in data centers, right? You gotta use a ton of compute, you can have billions of parameters, and you gotta squish that down and put it onto a device.

Again, how am I gonna fit that in my pocket so I can go check my favorite update on Instagram or, you know, send my family great videos through WhatsApp? What we knew we needed to do was jump in and help them with optimizations through their AI frameworks. Many of you may be familiar with PyTorch. It has been the fastest-growing framework. I have a little bit of bias, I used to work on frameworks at another company, so to me all frameworks are great, but I can see the power of needing to take those AI frameworks. They come quite large, right? Models are large, the frameworks themselves are large, and we needed to help Meta optimize once and deploy many times to fit these models onto the devices that we're all obsessing over.
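To make that "squish it down" point concrete, here's a back-of-the-envelope sketch (the 8-billion-parameter size is hypothetical, not a figure from the talk) of why compression matters: weight storage scales linearly with bits per parameter, so dropping from 32-bit floats to 4-bit integers shrinks a model roughly 8x.

```python
def weights_footprint_gb(num_params: float, bits_per_param: int) -> float:
    """Approximate storage for model weights alone at a given precision."""
    total_bytes = num_params * bits_per_param / 8  # 8 bits per byte
    return total_bytes / (1024 ** 3)

PARAMS = 8e9  # hypothetical 8-billion-parameter model
for bits, label in [(32, "fp32"), (16, "fp16"), (4, "int4")]:
    print(f"{label}: ~{weights_footprint_gb(PARAMS, bits):.1f} GB")
# fp32 weights alone (~30 GB) won't fit on a phone; int4 (~3.7 GB) might.
```

This counts only the weights, not activations or framework overhead, which is exactly why the on-device frameworks themselves also need to be slimmed down.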

What we ended up doing is we worked with them through PyTorch. Arm actually has an active seat on the board, we help out with the technical advisory board, and we're helping direct PyTorch both in open source and with Meta. We've also made significant contributions to ExecuTorch, which some of you may have seen went 1.0 last October, which means, again, it's not only benefiting Meta and all of the developers inside of Meta, but is also benefiting open source altogether. Now, when you're going out and some of you might be playing around with PyTorch models, you don't have to go through a whole separate path to deploy on LiteRT. You can actually go through ExecuTorch. You can go faster. It can be easier for you.

You can still get the best perf per watt on the Arm Ubiquitous Compute platform. That's a repeatable performance and efficiency gain for Meta, and it's also something where we've been able to take those lessons and extend them more broadly into our ecosystem. Again, we have breadth, a massive ecosystem. We have the depth, and we're strategic about how we help up and down the stack. We work directly with our partners, both on their proprietary stacks and in open source, so that we can have these repeatable gains that compound and will compound for years to come. All right. I think next Jason's gonna come up and give us a little bit of an overview.

Operator

Please welcome Arm's Executive Vice President and Chief Financial Officer, Jason Child.

Jason Child
EVP and CFO, Arm

Hello, everyone. My name is Jason Child. I'm the CFO. I think I've probably met all of you, probably multiple times, so it's great to see you again. Very exciting day. Exciting to walk you through some of the numbers. I imagine many of you have been waiting to see the meat of it, what it all adds up to. What I'm gonna do today is make sure that the business case and the financial model behind the Arm AGI CPU are very, very clear. There are really three points to this. First, demand from new and existing customers is allowing us to materially expand our opportunity through selling chips.

Second, our existing IP business continues to have strong underlying growth drivers, and the chip business is compounding on top of, not displacing, the IP business. Third, when you put those together, the combined model has much larger revenue, profit, and EPS potential by FY 2031. All right, introducing the first phase of Arm's market expansion. Let me start with the framework. Customer demand for Arm-created chips and the size of the opportunity led us to explore the chip market over the past three years. After exploring the market and our own capabilities, we're introducing, as Rene and the team have talked about, the Arm AGI CPU. Furthermore, the Arm IP business is going from strength to strength. Royalty revenue growth and license revenue growth both have multiple long-term structural drivers, which we expect to continue for years to come.

Finally, the financial consequence: by FY 2031 the combined model is materially accretive to revenue, gross profit, operating profit, and EPS. Importantly, much of the investment is already in the business, so the additional chip gross profit has meaningful drop-through to earnings. Arm's opportunity is huge, and we're growing into the largest market in history, one that is just getting bigger and bigger. It's over $500 billion today, and we think this grows to more than $1.5 trillion in FY 2031. Just to be clear, this is just semiconductor logic, CPUs and XPUs. No memory, no optical, just chips where you might find Arm technology either today or in the future. Breaking that down a little bit more: Cloud AI includes cloud compute, both CPU and XPU, enterprise compute and networking, supercomputers and so on.

This is a $330 billion market today, growing at over 30% a year to around $1.2 trillion in FY 2031. Edge AI includes the chips that go into smartphones, consumer electronics and IoT devices. This is a $180 billion market which we expect to grow at around a 7% CAGR over the next five years to $250 billion. Physical AI, which includes automotive applications and robotics, is a $25 billion chips market today, doubling in size over the same period. You've heard Drew talk about how the autonomous vehicle and robotics opportunity could be larger than both Cloud AI and Edge AI. However, our current view is that the inflection point likely happens after fiscal year 2031. It could be earlier. This adds up to over $1.5 trillion.

Double-clicking on the Cloud AI market, we have greater than $100 billion of cloud AI and enterprise data center silicon and $55 billion of wired and wireless networking. The remaining $1 trillion includes data center accelerator chips. We will come back to that opportunity another day. Today, we are just focused on CPUs, and there is $455 billion of non-accelerator CPU TAM that is in Arm's sweet spot. I want to direct particular attention to the $100 billion-plus data center CPU TAM, as that is the market we are addressing today with the Arm AGI CPU. These numbers may be higher than you've seen before. However, we have visibility into our customers' roadmaps.

We are confident that the demands inference and agentic AI will put on CPU performance will drive healthy volume growth and ASP increases. We're expanding our opportunity in three ways. Firstly, with our IP and CSS offering, the only customers we can address are the large hyperscalers who want to build their own chips, and our maximum revenue is limited to a fraction of the chip value. Assuming 100% market share at a 10% CSS royalty rate, this adds up to $2.4 billion. Supplying a complete chip allows us to address the full value of this market at $24 billion. Secondly, as only the largest cloud service providers were building their own chips, we are expanding the opportunity by offering products that all data center companies can use, from the largest hyperscalers to neoclouds to telcos and enterprises.

Thirdly, over the next five years, we expect the size of the market to increase significantly, driven by the expansion of CPU use for inference and agentic AI. Together, this takes our total opportunity from $2.4 billion of possible royalty revenue this year to more than $100 billion of revenue in FY 2031. Not all of this will be captured by chips. We will still offer our customers CPU IP and compute subsystems, and this will help us maximize our total revenue from the data center by allowing our customers to choose the right solution for them. This slide makes the economics, and thus our rationale for entering the market, more tangible with a simplified hypothetical example using an illustrative $1,000 chip price.

CPU IP has about a 5% royalty rate, so for every $1,000 of chip sales, Arm receives about $50 in royalty revenue. This is at essentially 100% gross margin, so we get $50 of gross profit dollars. Compute subsystems have about twice the royalty rate, so they generate about $100 in royalty revenue and gross profit. For a chip, $1,000 of chip revenue delivers about $500 of gross profit dollars. While IP and CSS are extremely attractive on gross profit margin, the chip model produces more gross profit dollars per chip. This can be an order of magnitude higher than the IP model. That is why we are pursuing all three models.
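The per-chip economics described here reduce to simple arithmetic. A minimal sketch, using only the illustrative figures from the talk (a $1,000 chip price, ~5% IP royalty, ~10% CSS royalty, ~50% chip gross margin):

```python
# Illustrative per-chip economics per $1,000 of chip price.
# All rates are the hypothetical figures cited in the presentation.
chip_price = 1_000

# IP model: ~5% royalty rate, royalties carry ~100% gross margin.
ip_gross_profit = chip_price * 0.05            # $50

# CSS model: roughly twice the IP royalty rate, same ~100% margin.
css_gross_profit = chip_price * 0.10           # $100

# Chip model: Arm books the full chip price at ~50% gross margin.
chip_gross_profit = chip_price * 0.50          # $500

print(ip_gross_profit, css_gross_profit, chip_gross_profit)  # 50.0 100.0 500.0
```

The comparison makes the "order of magnitude" claim concrete: $500 of chip gross profit is 10x the $50 from the IP model on the same $1,000 of chip value, even though the chip model's gross margin percentage is far lower.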

It expands our market to include customers that were not interested in an IP model, gives our current customers choice, and for Arm it creates a much larger profit opportunity. We're not going to force any of our existing customers to migrate to this new model. We welcome customers to stay on their current IP or CSS model should they choose. Should they decide to embrace the silicon model, however, this chart illustrates how that decision would be significantly positive for our gross profit dollars. All right, how are we landing the new chip business? This business is based on customer demand from multiple companies, across hyperscalers and large enterprises. These are companies who prefer to buy a chip from us over building one from our IP. We believe that Arm is uniquely positioned to build a CPU for the data center.

If you look at all the Arm-based CPU chips, most of the technology already comes from Arm. Because we have such strong initial demand, we have been able to quickly turn customer interest into actual business. As you've seen, we already have multiple customers lined up, and we have line of sight to more than $1 billion in chip demand over the next two years. The vast majority will fall into FY 2028. Our biggest challenge is not finding customers who want our chips. It's actually memory shortages limiting our customers' ability to deploy our chips. We expect material revenue from the Arm AGI CPU starting in FY 2028 with an exponential ramp to around $15 billion in FY 2031. The first driver is increasing demand from new customers, naturally increasing chip volumes.

We also expect increasing chip volumes as well as rising complexity to lift ASPs significantly by FY 2031. Turning now to our IP business, starting with royalties. Arm's royalty revenue has multiple secular growth drivers. The end markets into which our technology is being deployed are growing. We are gaining share as our customers deploy more Arm-based chips. Increasing complexity is driving up core counts, especially in data center and high-end automotive chips, which leads to higher royalty per chip. And our most advanced technology commands a higher royalty rate. Over the past five years, royalty revenue has grown at about a 14% CAGR. This has accelerated to over 20% in the past two years as Armv9 and CSS have started to ramp. Looking forward, we expect a royalty revenue CAGR of 20% over the next five years.
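As a rough sanity check on what those CAGRs imply (my arithmetic, not figures from the slides), compounding the stated rates over five years gives the cumulative growth multiples:

```python
# Cumulative growth implied by the stated royalty CAGRs over five years.
historic_cagr = 0.14   # ~14% CAGR over the past five years
forward_cagr = 0.20    # expected ~20% CAGR over the next five years
years = 5

historic_multiple = (1 + historic_cagr) ** years   # about 1.93x
forward_multiple = (1 + forward_cagr) ** years     # about 2.49x

print(round(historic_multiple, 2), round(forward_multiple, 2))  # 1.93 2.49
```

In other words, a sustained 20% CAGR would roughly 2.5x royalty revenue by FY 2031, versus the roughly 1.9x delivered over the prior five years.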

It might not reach 20% every year, as there will still be the occasional market downtick or inventory correction. Our licensing revenue outperformance in the past few years lends confidence to these future royalty growth rates. We have discussed for a long time our efforts to add more value for our customers and to be compensated for that value. AI is a significant tailwind on that journey, as customers are pressed for both more compute power and faster time to market. You all know we've essentially doubled royalty rates from Armv8 to Armv9 and again with CSS. We also charge a slightly higher royalty rate for each Armv9 generation, and a higher royalty rate for each CSS generation.

The increasing complexity of the chips of today and tomorrow is also contributing to our royalty revenue growth. In the data center, for example, AI, including agentic AI, is driving our customers to design an increasing number of cores into their chips. Tracking both our customers' increases in core count over recent years and their planned increases in future core count, we can see the number of Arm cores per chip increasing by about 20% per year. The significant annual increases in cores and rising price per core as customers take more Arm technology are a big part of our confidence in continued robust growth in the data center royalty business. One of the questions we often get is around our visibility into the future revenue trajectory. I think this tells the story.

Many of the contracts that underpin our royalty revenue forecast are already signed, and the royalty rates are already agreed in contract. We have delivered the technology, our customers are building the chips, and in many cases they are already shipping those chips in high volume. Looking out over FY 2027-2031, 70% of the revenues we're forecasting to collect are already covered, with royalty rates set in contract. Even by FY 2031, the contracted base is still around 60%. The remaining 40% is almost all with existing customers who we are confident will want access to the next generation of Arm technology, typically at higher royalty rates, higher core counts, and higher volumes than today. We expect continued Armv9 adoption across all edge devices, from smartphones to smart glasses to smartwatches, pretty much every device with a screen.

We expect a further boost from CSS adoption, not just in premium smartphones, but across all personal AI computing platforms, including in the PC space. The combination of higher royalty rates from next-generation Armv9 and next-generation CSS will deliver outsized royalty revenue growth. You can also see that we have a very sizable 65% of our forecasted royalties based on rates that are already under contract through FY 2030 or FY 2031. Royalty revenue under contract tends to be lower in edge devices than in cloud and Physical AI devices due to the very fast design cycles in consumer electronics. Turning now to royalty revenue in the Cloud AI business. I've already touched on the drivers of our expectation of rapid cloud growth.

Rapid growth in the market, ongoing share gains, rising numbers of cores per chip, and delivering greater value per core create a powerful compounding story. Our confidence is bolstered by the contracts that cover 85% of our expected royalties over the next five years. We expect our healthy royalty growth in Physical AI to continue as cars, particularly autonomous and EVs, continue to adopt more sophisticated silicon for the digital cockpit and driver assistance. We also anticipate continued share gains in this sector. Our confidence is very high. Given the long lead times in automotive, 95% of our royalties are under contract through FY 2031. As Drew explained, we are very excited about our opportunity in robotics.

Much of this opportunity lies beyond FY 2031, and thus is not captured in these figures. Over the next five years, we expect that Cloud AI will be our fastest-growing revenue driver, even without the contribution from AGI CPU chips. When you add in chip revenue, it will surpass Edge AI in FY 2030 and become by far the majority of our revenue in five years' time. Finally, to licensing. As you know, this has been growing well ahead of our expectations. At the IPO, we said that we would grow low single digits; we quickly moved that up to mid to high single digits, and it's recently been growing over 20% per year.

This has been driven by a combination of the AI cycle, more customers getting access to Arm technology through subscription licenses and compute subsystem agreements, and the expansion of our license and design service agreements with SoftBank. This license growth is the basis for the royalty commitment that you saw in the prior slide. We think all these drivers will continue with the SoftBank licensing growing around high single digits and with the AI cycle continuing to provide the majority of growth through more demand for next-generation CPU IP and compute subsystems at higher royalty rates. Of course, the strong license revenue growth should lead to higher royalty revenue growth in the years and decades to come. As I mentioned right at the start, the Arm AGI CPU business, the royalty streams, the licensing revenue all compound on top of each other.

The chip business is targeting customers who either don't have the internal resources or don't have the desire to develop their own chips. We do not expect the chip business to displace the IP business, though if some customers do ultimately choose to switch, it is accretive to earnings power, as we previously discussed. With around $15 billion of revenue from the chip business expected in FY 2031 and another $10 billion of IP revenue, we are forecasting very strong revenue growth from the combined business over the next few years. The good news is that we have already done a significant part of the heavy lifting when it comes to hiring the engineers needed to hit our plan.

If you've been following our financials for the past few years, you will know that we've already ramped R&D to support our product roadmaps. Increasing R&D combined with good execution creates a virtuous cycle of new products driving revenue growth. From here, we are forecasting mid-teens OpEx growth through FY 2031. Most of the incremental spending is R&D investment in new technologies. We expect our revenue by FY 2031 to have grown more than 2.5 times faster than our non-GAAP total costs. As revenue and gross profit scale, particularly in chips, much of that incremental gross profit can drop through. That is the operating leverage in our model. Our focus here is on the long-term earnings power of the company.

Before we get there, yes, we recognize you have interest in the near term as well, and so we are affirming the Q4 guidance that we issued in February. Back to FY 2031. We see two meaningful profit engines. First, we expect the IP business to reach about $10 billion of revenue, achieve a 99% gross profit margin, and deliver over 65% non-GAAP operating margin. We are today increasing our operating margin target by 500 basis points from our previous long-term target of 60%. Second, we expect the Arm AGI CPU business to reach about $15 billion of revenue with a gross margin of at least 50% and a non-GAAP operating margin of over 30%.

Putting those together, we have a consolidated business with $25 billion of revenue, industry-leading blended gross profit and operating margins, and more than $9 of non-GAAP EPS in FY 2031. This is not a story of choosing between IP and chips. It's a story of combining a very high margin IP model with a large, fast-growing and accretive chip business. Let me close on the three points that I started with. First, customer demand is allowing us to materially expand our opportunity through selling chips. We already have line of sight to over $1 billion of demand from some of the companies that you've met today and are forecasting $15 billion of incremental revenues. Second, our existing IP business continues to have strong underlying growth drivers with the chip business compounding and not cannibalizing the IP business.
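A back-of-envelope blend of those two profit engines, using the stated FY 2031 targets as inputs (treating the "at least 50%" and "over 30%" chip margins as floors, so the blended results are a lower-bound sketch, not company guidance):

```python
# Blending the two FY 2031 profit engines as stated in the presentation.
# Chip margins are floors ("at least 50%", "over 30%"), so this is a lower bound.
ip_rev,   ip_gm,   ip_opm   = 10e9, 0.99, 0.65   # IP business targets
chip_rev, chip_gm, chip_opm = 15e9, 0.50, 0.30   # AGI CPU business floors

total_rev   = ip_rev + chip_rev                                  # $25B
blended_gm  = (ip_rev * ip_gm  + chip_rev * chip_gm)  / total_rev  # ~69.6%
blended_opm = (ip_rev * ip_opm + chip_rev * chip_opm) / total_rev  # ~44%

print(total_rev / 1e9, round(blended_gm, 3), round(blended_opm, 3))
```

On these inputs the combined business would carry roughly $17.4 billion of gross profit and $11 billion of operating profit on $25 billion of revenue, which is consistent with the "industry-leading blended margins" framing.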

We expect this to deliver around $10 billion of revenue in FY 2031. Third, by FY 2031, the combined model is significantly accretive to revenue, gross profit dollars, and operating profit dollars, with more than $9 of EPS power. Because much of that investment is already in the base, the incremental economics are very attractive. All right. With that, I'm gonna conclude my presentation, and we're gonna make a quick transition to start the Q&A session.

Ian Thornton
VP of Investor Relations, Arm

I know, I was too excited.

Jason Child
EVP and CFO, Arm

No, I think you're good.

Ian Thornton
VP of Investor Relations, Arm

Thank you. We know that we've given you a lot of information today, so you have a lot to absorb. The next half hour or so, maybe 45 minutes, is for you to ask your questions. If you'd like to stick your hand up, then when the microphone comes around to you, say who you are and who you work for, and then ask your question.

Andrew Gardiner
Head of European Technology Equity Research, Citi

Thank you, Ian. Good afternoon, everybody. Andrew Gardiner from Citi. I had a question on some of the numbers you just presented, Jason, particularly the trajectory of the CPU revenue over the next five years. You gave sort of, I think, the sampling dates, or rather Mohamed gave us the sampling dates of sort of this year for the first gen, next year for the second gen. In terms of the revenue you're talking about, that $1 billion, I presume that's all first gen, largely in fiscal year 2028?

Rene Haas
CEO, Arm

Mm-hmm.

Andrew Gardiner
Head of European Technology Equity Research, Citi

Okay.

Rene Haas
CEO, Arm

That's right.

Andrew Gardiner
Head of European Technology Equity Research, Citi

Then how do we, in terms of the second and then third generation, what's your visibility into that in terms of the customer engagements? We've seen SoftBank NRE in particular flowing through the licensing line. What's the level of commitment from customers that's giving you the confidence in that ramp to $15 billion in fiscal year 2031?

Rene Haas
CEO, Arm

Mohamed, do you wanna talk about the customer side and I'll talk-

Mohamed Awad
EVP of Cloud AI, Arm

Sure.

Rene Haas
CEO, Arm

Some of the numbers.

Mohamed Awad
EVP of Cloud AI, Arm

Sure. Bottom line is that we've got committed customers through the first two generations, and we're currently in definition phase for the third generation with customers. You know, you heard Santosh earlier today talk about a multi-generation partnership, so there's that, with Meta. But there's also, you know, other customers who are committed to both generations at this point. Just to put a fine point on your earlier question, sampling now, production by the end of this year, so for gen one.

Jason Child
EVP and CFO, Arm

Yeah, on the numbers, expect it to be, you know, as we said, somewhere in the line of sight to $1 billion-ish in 2028. We expect it to roughly double again the next year and then double again, and, you know, maybe slightly more than double even the year after that, as we get more customers deploying and we're past some of the memory shortages, or at least further down the road on them.

Also, as I mentioned a bunch of times, the later generations will be at much higher ASPs as well, and that's consistent with what we're seeing across all of our partners and their plans for core counts and for just more performant, larger, more expensive chips. The second thing I would add is you mentioned SoftBank. The SoftBank work doesn't have anything to do with this. The work we've been doing with SoftBank, as I think we've mentioned on past calls, is really to address potential demand for Stargate on the accelerated compute side. As I said in the presentation, we don't have any updates on that project yet, but any NRE or whatever, all that's separate from the silicon that we announced today.

Ross Seymore
Managing Director, Deutsche Bank

Perfect. Ross Seymore from Deutsche Bank. Thanks, guys, for the great presentation. Perhaps for Mohamed, but maybe even for Rene or Jason as well. Mohamed, you talked about the competitive advantages of the AGI CPU versus x86. What about versus other Arm offerings? NVIDIA has its own obviously, and they've talked about going merchant. And somewhat related, how are you so confident that there's no cannibalization? You know, you ship the chip, NVIDIA doesn't ship the chip. Doesn't that have some impact, accretive as it might be, on the business model?

Mohamed Awad
EVP of Cloud AI, Arm

Yeah. What we see, when you think about the product space, and frankly, customers that are gonna go off and build their own silicon at that scale: if they're doing it, they're doing it because they're trying to generate some specific type of system-level advantage. Meaning, you know, the investment required to go off and build these things is typically tied to a broader system design that they're targeting, and that's true of all of our customers. You know, if they were just looking for a product that was applicable to a broader sort of cloud space or, you know, a broader use case, then they may not go off and do that.

Frankly, that's why you've heard us say over and over again that the market was underserved: that sort of highly optimized offering, focused on efficiency, focused on performance, focused on scale, and not compromised by adding additional things that tie it back to a particular system or a particular accelerator, et cetera. That's really what we're focused on.

Rene Haas
CEO, Arm

I would also add that we announced the Arm AGI CPU today, and last week Jensen said Vera is gonna be a multi-billion-dollar business, when before that he wasn't planning on selling CPUs. To have a discussion about channel conflict when a month ago neither product existed tells you just how big this market is. This market is going to be very, very large. As I mentioned earlier, with the 4x increase in CPUs around agentic AI, we may be undercalling that number. I think there is a very, very large market here where multiple players can play, and right now I think the demand is higher than we think it is. Just again, look at the fact that Jensen announced a product when six months ago he essentially wasn't thinking he had a CPU business.

Harlan Sur
Executive Director of Equity Research, JPMorgan

Yeah. Good afternoon. Thanks for hosting this event. Harlan Sur, JP Morgan. Following up on that question, I totally agree with you. You know, the Arm IP architecture within the data center is set up to serve so many different applications, right? You have merchant CPU, Qualcomm, NVIDIA, MediaTek bringing merchant solutions to the market. At the same time, you've got some of your cloud customers, Google, Amazon, Microsoft, Apple, designing their own Arm-based architectures, right? And all of these guys are focused on accelerated compute and AI. I would argue with one, two, three, four, seven players in the market, not including Arm, that this basically covers the entire 120 million CPU per gigawatt TAM opportunity that you talked about.

Again, kind of help us, where does Arm AGI sort of fit into the demand profile for data center GPUs sort of going forward? Is this more of a, like you said, maybe an enterprise play, or some of the Tier 2 guys that don't have the capability to design their own silicon?

Rene Haas
CEO, Arm

Yeah, I think I'd just go back to what we talked about earlier today. Two large demand drivers. One is just the need for compute and the need for power efficient compute. I think that starts to move the landscape, which has been accelerating already from x86 towards Arm. There are large parts of the market today that we don't serve around enterprise simply because of the legacy software there. Could that be an opportunity in the next number of years? Absolutely. Then when you look at some of the applications that Mohamed showed, whether it is a head node working with a Cerebras or somewhere in the data plane where you might be working with a networking customer, like an F5 or a Cloudflare, I think this just gets a lot bigger relative to the total opportunity.

I think it starts with the need for compute and the need for power efficient compute. We would not have embarked on this business, Harlan, if we didn't think it had long-term legs, and we would not have showed a roadmap today that had multi-generations if we didn't have commitments for it. Lastly, as we talked about earlier, we had been pulled into this business. We have had customers, once they saw what CSS could do and the need for power efficiency, they pulled us towards this milestone, and that's even before the agentic stuff took off.

Mohamed Awad
EVP of Cloud AI, Arm

The only thing I would add to that is that in the cloud specifically, you've got well over 10,000 customers using Arm Neoverse who are looking to leverage that software investment on-prem. This is really the only offering that sits in that category.

Kevin Cassidy
Managing Director and Senior Research Analyst, Rosenblatt Securities

Hi, Kevin Cassidy from Rosenblatt Securities. Maybe just to expand on that, maybe Mohamed, you might have answered my question already. But if making your own silicon in the cloud is a good idea, why isn't it a good idea to move into the client and edge space?

Rene Haas
CEO, Arm

Could be a good idea. We're not talking about anything but what we talked about today, though.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Thank you. It's Mehdi Hosseini, Susquehanna International Group. Two questions. One for Jason: what is your underlying assumption for blended chip prices? I'm under the impression that you benefit from higher prices, especially given the fact that your royalty rate is fixed, but economic value is going up and chip prices are a factor in it. So what is your underlying assumption given the inflationary trend in the semiconductor industry, excluding memory? And then for Mohamed, just going back to the prior question, how should I compare and contrast your AGI CPU to an LPU? Just qualitatively, big picture. You don't have to get into details.

Mohamed Awad
EVP of Cloud AI, Arm

Yeah, I mean, an LPU is a completely different beast that's really focused on one aspect of accelerated compute, specifically inference, and decode specifically. So it's not really the same category. That's really about accelerated compute, whereas for a CPU, really think of it as the control plane, the management engine, the thing that coordinates all those accelerators. Accelerators can't exist without CPUs. An LPU is an accelerator.

Rene Haas
CEO, Arm

LPUs generate tokens. CPUs distribute the tokens. That's a very different workload.

Jason Child
EVP and CFO, Arm

On the pricing, you know, we of course don't disclose exactly what the pricing is. It's gonna be different by customer based on volumes and, you know, based on who it is and what their situation is. You should assume that if you look at the market for what someone charges for a chip equivalent to the AGI CPU, you know, like the x86 alternatives, they're somewhere around, I think, just shy of $2,000, somewhere in that range. Think that we're probably not too far from that. Then when you look at next generations, I think most of the assumptions are that those prices go up by 50%, maybe 75%, maybe even double, just depending on what the core count is. You know, our assumptions will probably be somewhere in line with what you see the rest of the market doing.

Ananda Baruah
Research Analyst and Managing Director, Loop Capital

Thanks. Ananda Baruah with Loop Capital. Thanks for doing this. This is great. It's going back to use case. This is a clarification question. Sounds like you guys are talking about TAM expansion to the space from what you're providing with your chips today. If that's accurate, is part of what's going on is there's an appetite for compute that's not present in the marketplace? You guys can really help fill in that compute need, and some of your existing customers can pivot some of their resources to some of the other computes, like right compute tool for right compute job. You know, maybe you fill in more of the inferencing side. Some of the compute they're using for inferencing, they can pivot back to training, all while you're also amplifying the compute TAM.

That's sort of the first clarification question. I guess the second one is: it sounds like you feel solid on supply chain. Maybe memory's tight, but there's been no talk of wafers yet. Just on the wafer side, is there anything? It sounds like you have that ironed out, but I just wanted to get clarification on that as well. Thanks.

Rene Haas
CEO, Arm

I'm gonna take that. To make sure I understand your comment, the way I would break down the compute needs, and maybe it's back to the LPU question, is that there's one category of compute, generating the tokens, and then there is the orchestration, distribution, and scheduling of the tokens. We think that latter category is expanding in a very, very large way, and that is the market the AGI CPU addresses. The generation-of-tokens market is not directly addressed by this, but somewhat indirectly, in the sense that the better you are at orchestrating and distributing tokens, the more tokens you can distribute, and the more tokens get generated. It's a flywheel. That drives more demand for CPU resources, which is why we think we may be conservative on the numbers.

We're not gonna change them, but we think we may be conservative because they work in tandem. To be very clear, we're sitting on the side of the workload that is managing and distributing the tokens. Do you wanna take the supply chain part?

Eric Hayes
EVP of Operations, Arm

Yep. My name's Eric Hayes, I'm Executive Vice President of Operations. On the supply side, we have the supply locked up very well for our ramps into production. You know, with our supply chain partners, it's not just wafers, it's packaging, substrates, assembly and test, and our supply chain right now isn't just about what we're buying from them. They're partners. They've been partners of Arm for a long time, and we're expanding our business with them. They've gone through and actually spoken with our customers and have confidence in our ability to deliver to our customers, and they see us as a new channel for where this is gonna be deployed, and it's a small investment.

The numbers that you've seen up here represent small numbers to our supply chain, so it's very easy for them to supply into this and to help us develop this channel for them.

Rene Haas
CEO, Arm

We've worked very closely with TSMC for decades. We have relationships up and down the board, including at C.C. Wei's level. We talked to C.C. very early on this development, making him aware of our strategy, our intent to get into this market, both in terms of a strategic partner, as well as making sure we had security of supply. Most of the people on this panel have all worked in this business at one point in our careers, so it's something that we know requires a lot of focus and attention.

Speaker 22

Thank you for hosting the day. Rene, I have a question for you. In the keynote you mentioned that there is potentially up to $10 billion of CapEx savings related to, you know, better performance of the CPU. Can you help us figure out whether it's more focused on head-node performance or the data prep side? Like, how do you guys conceptualize that between, you know, what's improving the GPU performance and what's kind of straight CPU?

Rene Haas
CEO, Arm

Yeah. Without giving you the entire formula, the way we thought about it was, at the highest level, you're talking about roughly $50 billion of spend on a gigawatt data center. With CPUs growing from 30 million cores to 120 million cores, we calculate out what the power consumption looks like, and we save about half versus x86. That gets us roughly to that math of $20 billion. I think it was $10 billion.
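Since Rene explicitly declined to give the full formula, the sketch below is only one plausible reading of the back-of-envelope math: a power-limited build costing ~$50B per gigawatt, with CPU power cut roughly in half versus x86. The CPU share of the power budget is a made-up parameter, not a disclosed figure.

```python
# Back-of-envelope sketch of the CapEx-savings logic above. Every input
# here is an illustrative assumption, not Arm's actual formula.

def capex_savings_billion(dc_cost_b=50.0, cpu_power_share=0.4,
                          arm_power_ratio=0.5):
    """Savings, in $B, if CPU power is cut by (1 - arm_power_ratio)
    in a power-limited data center build costing dc_cost_b."""
    return dc_cost_b * cpu_power_share * (1.0 - arm_power_ratio)

# If CPUs (at 120M cores) were 40% of the power budget, halving their
# power frees capacity worth about $10B of a $50B build
print(capex_savings_billion())  # 10.0

# A higher assumed CPU power share gets you toward the $20B figure
print(capex_savings_billion(cpu_power_share=0.8))  # 20.0
```

Varying the assumed CPU power share is one way to reconcile the $10 billion figure from the keynote with the $20 billion mentioned here.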

Mark Lipacis
Senior Managing Director, Evercore ISI

Hi. I'm Mark Lipacis from Evercore ISI. Thank you very much for the presentations. Two questions, if I may. When you look out, you talk about a TAM or an opportunity going out five years. Can you talk about the diversity of your customers as you go out? Is it a 60/40 where you have, you know, five big customers and everybody else, or does it look different? I don't know if this is for Rene, but, you know, on the explosion in demand for tokens and CPUs, I have to say I haven't seen a market ramp like this in covering tech for 25 years.

It seems like a lot of companies have been caught off guard on this, and, you know, there was a question about memory, so what do you think gates the ramp for you guys? Is it going to be limitations in memory? Is it gonna be limitations in the industry's ability to ramp data centers? Like, what is the gating factor for you guys? Thank you.

Rene Haas
CEO, Arm

Yeah. No, thank you for your question. You brought up a lot of important points there. You know, first off, every customer logo that we showed today is a customer that we are going to have backlog from. These weren't evaluators, these weren't maybes, these are all customers. To be able to stand up on the day of the launch with two very large companies in this space, Meta and OpenAI, and then some great partners that we showed on the video, SAP, Cloudflare, that's a pretty significant milestone. We already have a number of customers locked in, and we intend to expand that list. I think it's gonna be a very diverse customer base.

I think in general, whether it's turbines, whether it's access to energy, whether it's HBM, whether it's standard DRAM, we are probably going to see a supply crunch for this type of acceleration. That went into our logic when we went through the numbers here. We didn't assume an unconstrained supply situation. We tried to be sensible about it. I do think personally we are still in such early stages of how AI is used in large enterprises that we've got more surprises coming, just relative to the amount of pure compute that these AI models can subsume. Just look at the agents, right? These agents that run 24/7 and swarm a cloud. It's a little bit of a horde mentality.

The more compute you give it, the more agents that get spawned, and where does it end? I think we are in very, very early days, and as you typically see in these kinds of situations, innovation starts to take over and people get very creative in terms of how to solve the bottlenecks. Good news for us: CPUs are a great tool to solve bottlenecks in all kinds of areas of the data center architecture, which gives us a high degree of confidence when we think about growing the base IP business in addition to a chip business. We think it's a really broad opportunity, and we don't think the future looks like the past, right?

You can look backwards and say, "Okay, you've got these guys doing chips and these guys doing IP, and I add up the sum of the parts and I can't get the math to work." The math never works that way when you look forward, because things change relative to how architectures are developed and solutions are driven. We have some core elements that give us very high confidence: we're very power efficient, we have a lot of software, and a CPU is a really good tool for a lot of different jobs. That all factored into why we went out as far as we did with the level of conviction that Jason talked about. We have a very good line of sight to where we think the future's going.

Victor Chiu
Equity Research Associate of Data Infrastructure and AI Semiconductors, Raymond James

Thank you. This is Victor Chiu from Raymond James. Could you provide some color around the genesis of this initiative and how the discussions with your launch customers took form? I'm just kind of curious if the hyperscalers approached some of your current customers first to develop a solution, which they weren't able to do, and then came to you to develop the chip afterwards, or if it was vice versa? Just wondering how they came to the conclusion that partnering with you would be a better alternative than doing it with someone that already has the infrastructure and the scale in place to produce a chip that meets their efficiency needs?

Rene Haas
CEO, Arm

Yeah, I can start and then Mohamed can fill in the dots. One of the things that makes Arm incredibly unique relative to where we sit in the ecosystem is because the Arm compute platform is where all the software starts. We have a lot of conversations about our roadmap with customers who aren't our direct customers. What do I mean by that? Take Microsoft Windows for a moment. We spend probably more time than anyone in the Arm ecosystem talking to the Microsoft Windows group about where Windows is going, so we can understand exactly what are the things that we need to put on our architecture so people can provide solutions to drive for that ecosystem.

Against that backdrop, we're having conversations all the time with people like a Meta or an SAP who are going to be users of our product somewhere in the value chain. It's in the course of those discussions, when customers are looking at the art of the possible, that the conversations turn creative: "Gosh, instead of going down path X, could I go down path Y?" Mohamed, you can maybe fill in the detail on the Meta deal, but that's a lot of how those things go.

Mohamed Awad
EVP of Cloud AI, Arm

Yeah, I mean, I guess what I would say is that, you know, obviously the Cloud business is on a great upward trajectory, but it wasn't too long ago that it was not. There wasn't a whole lot of traction for us. When I think about where we were focused, we were focused on the greatest consumers of compute, and you can count the greatest consumers of compute on one hand, right? If you look back, all of the other hyperscalers were already on Arm. They already had programs. They were already going. You know, we were in there trying to figure out how to move the next one over, and we were positioning our IP and our CSS, et cetera.

Obviously they were like, "Oh, you're developing CSS. That's taking this a little bit further." Of course, things continued from there. You know, their point to us was, "Hey, we really like the offering. We wish we could get this thing in-house, but we are not in a position to go off and build a CPU. The market is underserved. Where do we get this?" Right? It was really about focusing on the biggest consumers of compute and then helping them address the fact that the market was underserved.

Rene Haas
CEO, Arm

Yeah, I mean, everybody can't do everything. There are lots of customers that we have who are building Arm-based silicon today, and they have incredibly capable design teams, yet they buy off-the-shelf Arm parts from people who are building chips as part of their business.

Mohamed Awad
EVP of Cloud AI, Arm

You know, the only thing I would add to that is that if you think about how we think about our go-to-market going forward, it is the next-largest group of consumers of compute, right? We're not going to every mom-and-pop shop because, you know, there are support requirements around that, et cetera. We're really focused on those big hits.

Rene Haas
CEO, Arm

Yeah.

Louis Socha
Senior Equity Research Analyst, Daiwa Capital Markets

Okay, thank you. Louis Socha, Daiwa Capital Markets. My question has to do more with China. Maybe I'll break it down into two parts. Rene, on the comment that the logos you had are real customers, I'm just wondering what the opportunity is in China. Is it possibly as big as the U.S., given everything they're doing over there? Are there any restrictions, or have you just not been able to go in there? The second part, I think, is for Drew Henry. You had said that you were just in China. Just curious as to what you see they're doing with robotics and auto, and are they materially further advanced than we are here in the U.S.?

Rene Haas
CEO, Arm

Yeah. What I would say with China: as far as the Arm AGI CPU, we have no customers to announce today, but there's no reason why we should not see market adoption in China. At the highest level, there's very good product fit; the very reasons that a non-China customer would take the Arm AGI CPU are the same reasons a customer in China might. We just don't have anybody to talk about today. The big question is always around export control and how that would kick in. From everything we understand about the export control rules, there's nothing about this product that would trip a wire there. Obviously, should things change, we'd observe that. Right now, we don't see that. No customers today, but no reason we can't have any.

Drew Henry
EVP of Physical AI, Arm

Then to your question about what's going on in robotics: I was in China last week, and it was one of those trips where it's a series of meetings all day, then the next morning you get on a plane and fly somewhere. It's one of those kinds of trips. I saw just about as many companies as you could possibly see in the amount of time that I was there, and it was incredibly impressive. The thing that I've taken away is that, of course, China continues to move at China speed. They're famous for the speed at which they move things.

There's a lot happening there; it's rumored to be in the area of 200 robotics companies in China alone. The more interesting thing that I've noticed is the global development in robotics. Europe, for instance, has for quite some time missed a lot of these technology waves. But right around ETH Zurich there's a bunch of robotics startups. One of the top robotics companies is in Germany. Across the U.S., we've got a number of robotics companies as well. There's quality and there's quantity, and those things are very different.

The thing that I'm recognizing is that in the areas where there's quantity, where there's lots of manufacturing capacity that might be made available at very inexpensive prices, people are filling that with quantity. The quality is where the real value of the market is going to be, and that, to me, is happening globally. I'll spend as much time in Europe and in the U.S. as I will in China as we work more in this particular space. The last thing I'll say is that quality is about where the computing investments are being made. That's the key thing about robotics: it's about where the investments in computing are, and those are being made globally.

On the lower-level infrastructure side of things, the actuators and motors and controllers and things like that, manufacturing is cheap. There's an awful lot happening in the low-cost manufacturing world, which had been largely in China, but it's now being diversified as people figured out, listen, we gotta diversify how manufacturing happens. It's as much happening in other places today. The last point I'll make is that what's interesting in robotics right now is there's an awful lot of cost in that low-level actuator infrastructure, because people are still trying to figure out what kinds of materials you wanna build stuff with. There are lots of machines getting built.

The machines are expensive to build, and then you're not putting a lot of volume through them, so each component is expensive today. As that market begins to consolidate around more and more common parts, we're gonna see that cost collapse. The thing that's interesting to me is that in much of the robotics space today, a significant portion of the cost is actually in the mechanicals and actuation systems, and that is gonna go through a dramatic cost decrease as scale hits. The thing that's common across all of those is the compute platforms, and the compute platforms are gonna be as high performance and as high capability as you can possibly provide for the next decade or more in those compute areas that I talked about. I find that interest is global.

Charles Shi
Managing Director and Senior Analyst, Needham & Company

Hi. Charles Shi from Needham & Company. Maybe I wanna ask about all the discussion around the AGI CPU and everything you guys are doing on the silicon strategy, and to put that into the perspective of where things may be going a little bit longer term. The industry has gone down the path from the merchant model, companies buying off the shelf, to more of an ASIC model, and then more recently to customer-owned tooling, basically insourcing chip design. I mean the hyperscaler customers of yours.

Now, with this whole silicon launch, and by the way, you guys have been a big enabler of that path I just described, it looks like you're going back to offering off-the-shelf chips, going a little bit opposite to the trend that has been happening. My questions are maybe twofold. One, why is this happening now? Because the trend has been one way, but it seems to be going back.

Two, for your customers, especially hyperscaler customers, who elect to buy the off-the-shelf chips from you guys: how do you make sure they stay with you on the silicon business, instead of maybe going down the same path many of the other guys have gone through, from merchant silicon to ASIC to maybe even COT, customer-owned tooling? That's a long question, but hopefully we can get some color.

Rene Haas
CEO, Arm

Yeah, it was a long question, Charles, but I think I got the gist of what you're saying. Let me try to play it back a little bit. I think what we see for the Arm architecture is an opportunity to have a significant share growth that isn't in conflict with what you are saying. What do I mean by that? Will hyperscalers still continue to do their own chips in-house to get the kind of efficiencies you're talking about? Yes. Do we think the TAM for Arm expands because more and more compute is needed in the data center? We do, because we don't think the hyperscalers will be able to do every single chip for every single application. Maybe they can, maybe they can't.

Also, more broadly, we think the constraint on power in general really brings our power efficiency advantage to the table, where we think we can now materially take market share from x86. I think you've got an expansion of the TAM into other applications. I don't think the hyperscalers, as I said, need to stop for us to be successful. I think it's a large market for both, and, as I said, the demand for Arm will grow because, A, power efficiency really, really matters, and B, all of the work we've done around software is really paying dividends. Just look at the comments that Paul made from Meta this morning about that porting exercise.

It went from a "Can we do this?" to "Yes, let's do this," because the power efficiency, I think his words were, "It's too large to ignore." I think you are going to start to see that in areas that had been x86.

David Gibson
Senior Research Analyst, MST Financial

Thanks. David Gibson from MST Financial. Two questions. On the commitments from your customers: are they minimums? Are volume and price set? Can you give us a sense of how flexible those commitments are, and hence the risk to your revenues going forward? That's the first one. The second one, this is gonna be a tougher one perhaps. I know you haven't even produced the chip yet, but I'm wondering how you think about the life of the product itself. Like, is it a five-year replacement cycle, where Meta in five years replaces it with generation six, seven, whatever? Wondering how you conceptualize the life of these chips and how long they'll last. Thank you.

Rene Haas
CEO, Arm

Yeah. I'll answer the first part; maybe Mohamed can comment on the second part, and I'll give you my view. To your question of how solid the commitments from the customers are and how we have baked that into our forecast: Jason would not allow us to show any numbers that he didn't feel confident in. So the commitments and our confidence levels are high enough to support the numbers that we showed. It's not like, oh my gosh, we think we've got a lot of work to go off and do. Is there upside opportunity? There always is, but we took a very conservative view as far as the numbers go.

I would imagine, and Mohamed, you can chime in here too. I think this probably looks like a five-year replacement cycle, even though we're gonna be bringing out products faster than every five years.

Mohamed Awad
EVP of Cloud AI, Arm

Yeah. I mean, I think five years is kind of what we typically see, though those numbers are changing now, right? Because, listen, those numbers are driven by amortization and CapEx costs and how you depreciate the asset. When you're talking about numbers this big, and frankly, this goes to whether or not you even build your own silicon, the calculus for what it means to replace silicon or build silicon changes pretty dramatically. Five years is a good marker for where we are today, but I think that's dynamic. We'll see how it goes. I mean, we're certainly introducing new products. They're gonna be more efficient, more performant, and we expect customers to pick those up as they happen.
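To make the depreciation point above concrete, here's a toy straight-line calculation. All figures are hypothetical, purely to illustrate why the replacement-cycle calculus shifts as fleet CapEx grows; neither the fleet cost nor the cycle lengths come from Arm.

```python
# Toy straight-line depreciation; all figures are hypothetical and
# serve only to illustrate the replacement-cycle trade-off discussed.

def annual_depreciation(capex_b: float, useful_life_years: int) -> float:
    """Annual straight-line charge, in $B, for a fleet costing capex_b."""
    return capex_b / useful_life_years

# A hypothetical $10B fleet: shortening the cycle from 5 to 3 years
# raises the annual charge from $2.0B to roughly $3.3B
print(annual_depreciation(10.0, 5))            # 2.0
print(round(annual_depreciation(10.0, 3), 1))  # 3.3
```

The bigger the fleet, the more a shorter replacement cycle costs in annual charges, which is why the decision stays "dynamic" as CapEx scales.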

Rene Haas
CEO, Arm

Are there any more questions? Do we wanna wrap up? Okay, if there are no further questions: wonderful set of questions. Obviously you guys do your homework. As Jason so eloquently put it, we have extreme confidence in two businesses that compound on top of each other and produce a very game-changing result for us in five years. We didn't take the announcement lightly relative to what it meant to the ecosystem and what it meant to the partnership, which is why you saw some of the videos that you saw. At the same time, we don't take lightly providing you numbers that look quite different from how you've thought about the company before. We have been very meticulous in terms of looking at where we think the trajectory of the business is going.

I think the last Analyst Day we did was when we went public in September 2023, so it'd be like August of 2023, and we showed numbers that we had high confidence in, backed by a number of royalty agreements under contract. We beat all of that as we stand here today, two and a half years later. That should give you the confidence that as a management team, when we sign up to a projection and share it with an audience like this, we're serious about it. Thank you very much.
