Such a nice warm welcome. Thank you. Welcome to our live stream audience watching the Arm Everywhere event. I don't think we've ever done a live stream event like this, and to the folks here in the audience, thank you so much for coming to the historic Fort Mason. You may not know that Fort Mason here in California was actually an official defense site during the Civil War, and this is where a very famous battle between Alabama, Georgia, and California took place. I know you're thinking to yourself, "I don't remember that battle." That's why this area looks so pristine. There actually was not a battle, but it was a fort. I thought that was kind of neat. I didn't actually know that. Thank you again for attending a big day for us.
We have a lot to share with you, so I'm gonna jump right into it. When we thought about, you know, how to name this event and how to talk about our company, we thought Arm Everywhere was really appropriate, because one of the things that we're very proud of, that we don't always think about in our daily lives at Arm but is really quite impactful, is just the scale and magnitude of the company. When we start looking at numbers, 117 billion. What is that number? That's the total number of humans ever to live on Earth. By all of our calculations, that's how many people have lived on the planet since the beginning: about 117 billion. 350 billion plus is the number of Arm chips ever shipped.
That is three times the total number of humans who have ever existed on the planet. It's not just one for every human, it's three for every human who has ever lived. It's 7x the total number of non-Arm-based CPUs ever shipped, combined. Just think about that number. 160 Arm chips for every global household. Mine is probably more than 160, but 160 is about the average. That just gives you a sense of the scale of what we've done, and it's really important because it feeds into everything that makes us what we are today, and, of course, it could not be done without our ecosystem partners. The company's DNA was really born running off batteries. The company started in the early 1990s.
It was a spin-out of a British computer company named Acorn Computers, and that company had a mandate to build a chip, and that chip had a couple of requirements. One was it had to run in a plastic package, which back then was really important, and number two, it had to be really low power. The first part was important because of heat. The second part was important because battery life meant everything, because this was going into the world's first PDA. We nailed that objective so solidly that, and this is a true story, when the first Arm development board with the first ARM1 processor was powered up, these boards were plugged into the wall.
You had a development board, lots of logic chips, all plugged into an AC outlet. When the plug was removed from the outlet, the chip kept running on the leakage current coming off all the other chips on the board. The folks came in the next night and they saw the oscilloscope was still driving a signal, and that low-power DNA is really what, for us, launched the revolution of smartphones. We were designed into the very first GSM phone, for those who remember that Nokia brick on the far edge. Then the BlackBerry, which many of us loved and still love. Wish it came back. All the way to the modern smartphones, Android and iPhones. That is where we started in terms of battery life. It launched a generation of smartphones.
Now, one of the breaks we got about 10 years ago was when SoftBank bought Arm. Yeah, it was about 10 years ago. It was 2016. When SoftBank bought Arm, Masa gave us an opportunity, now that we were a private company, to invest in areas that we were not able to invest in before. That gave us the opportunity to expand the platform to a number of other verticals. We took everything that we knew about smartphones and then expanded that out into the cloud. We launched Neoverse. We got our first design wins in the data center, and then we were also able to invest in autonomous, automotive, physical AI. We could not have done that without that 2016 moment, and this is my thank you to Masa for allowing us to do that.
We could not have made all that happen, and it's paid significant benefits for the company. However, as good as our products are, as competitive as the platform is for physical AI, for autonomous, for the cloud, it is really what I like to call the ecosystem of ecosystems that differentiates us. This is where the partnership really comes to life, because that mobile platform that we built cannot happen without the software. The software layer in the case of mobile is iOS, it's Windows, it's Android, it's macOS. Then there's the litany of applications that not only run on the Arm compute platform but are highly optimized and highly tuned, which allows the partners in the ecosystem to build great products. That formula applies to every vertical that we participate in.
It applies to what takes place in the cloud, whether it's Linux or OpenAI or Anthropic, and then the platform that runs with it. This is why we like to call this the ecosystem of ecosystems, because it's not just one vertical. You can see, when we look at the physical AI platform with automotive, it's the same formula. There are 22 million-plus software developers, many very unique to a vertical, but they leverage a lot across the ecosystem, which allows people to get started in other areas. This is the magic, and this is what is uniquely Arm. This is what's very, very unique about our compute platform. There's no one on the planet who can serve the edge to the cloud in the way our ecosystem does.
Now, over the past few years, we've been evolving our strategy, largely because of the demands we see in the marketplace: the chips are more complex, and the cycle times to build these chips are getting longer. 5 nm, 3 nm, 2 nm means longer fab times and longer packaging times. There's a need to do more and to do it faster. We've traditionally provided IP in a standalone form, the CPU, the GPU, system IP, and that has served us well for the first 30-plus years of the company. As I said, we were starting to see huge demand for the need to go faster, make products better, and get time to market sooner. We introduced something called Compute Subsystems. We did this about three, four years ago.
We invested very heavily in terms of the engineering requirements to do this, and what this does is it takes all the blocks of IP and puts them together in a finished way, verified, performant, tested, so that the end customer can then take it to market, and in some cases, it shaves a year, in some cases 18 months, off the time from starting design to getting to production. It was a very significant investment for us. We put a lot of effort and engineering into it, but we've already seen massive benefits in terms of the customer base. We introduced this three or four years ago. Our business model is license plus royalty. Royalty is the laggard, so royalties start to show up two, three years after we license a product. Already, CSS represents almost 20% of our royalties and growing. Now that's our evolution.
Of course, we're now in an era where everything is different than we knew it before. When I think about artificial intelligence, I get a lot of questions when I talk to analysts or media about whether AI just came up on us by surprise. I think back to a time I was at Bletchley Park about a year and a half ago. Bletchley Park is where the original codebreaking work was done by Alan Turing to help the Allies against the Germans in World War II. There's an area there, in the museum, where you can see papers from Alan Turing about whether machines can think. I think those papers were written in the 1940s. The idea of AI is obviously not new, and if you're a sci-fi aficionado or fan, I certainly was growing up.
Arthur C. Clarke was one of my favorite authors. 2001: A Space Odyssey. Now we have people here who weren't even born in 2001. I always looked at this and said, "Of course, this is going to happen." I just didn't think in my lifetime I would see it at the pace that we've seen it. For anyone who says, you know, this is a bubble and it's going to pass: it may be a financial bubble in the sense that investment may slow down, and it may be an investment bubble in the sense that valuations may not be tomorrow what they are today. But if anyone thinks that this is something that is going to go away, that's a little bit of an ostrich syndrome. This is here to stay with us. It's really changed how people think about computing.
However, somewhere along the way, people kind of thought CPUs were dead. There was a thought that the only way you handle AI is through accelerated computing, that the CPU's role in the AI world is no longer relevant. Now, if we think about the role of the CPU and what happens in the cloud, this is the cloud before AI, so I'm gonna say it's before that last slide that I showed. Huge growth in cloud compute. We saw growth from AWS, Microsoft, GCP. The conventional use of the cloud was you type in a question, you do a search. Any seats left for the Warriors game? I think there are a lot of seats left for tomorrow's game, or tonight's game, by the way, from what I have seen. You got the answer back. This is the cloud. Very simple.
You do a search, and the CPU does the heavy lifting. When we look at the growth of SaaS 10-plus years ago, 10, 15 years ago, and the growth around cloud, the CPUs were doing literally all the work. Now, when you add the AI cloud, if you will, and you're a human putting a prompt into your device, whether it's your phone or your PC, well, of course there are still CPUs involved. The cloud is servicing that request, and that request gets sent on for tokens, which the accelerator generates, and a CPU in that data center orchestrates and sends the tokens back, a token being a word or a piece of the answer to the query. This is all the work that's being done by the AI data center.
CPUs are involved both in the cloud, and obviously they're involved in the AI data center. We estimate that in this data center, there are probably 30 million CPU cores per gigawatt. That's a lot. Data center here means a combination of what sits right in the AI cluster, whether it's the head node to your accelerator, and what sits next to it in a dedicated rack. The math is basically about 30 million CPU cores per gigawatt. Okay? That is the world that we've seen up to about the last year or so, or maybe even less. What has changed in the last number of months has been this explosion of agents. Agents are essentially tools that act on a request and come back with a full flow of answers. It's not just a query for an answer, it's actually work.
It's run a payroll task, run a scheduler, go off and write a number of analyses relative to a tool flow and provide me an answer. We heard so much about OpenAI here in the last few weeks as an example, and it's not the only example. Now, why is this important? Why am I talking about this? Because as we move to agentic queries, the number of tokens per human goes up by 15x, if not greater. If you think about the why of that, it's pretty straightforward. Agents can generate requests far faster than humans, and they don't sleep. They're at it 24/7. The agents are now pushing these requests into the cloud, into the data center. What's happening? The data center is choking.
These accelerators, which are very expensive and generate the tokens, now need to send those tokens back through the cloud. If we think about what an agent is, an agent is a workflow. As I said, it's a payroll task. It's a scheduler task. It's asynchronous. It is a lot of work relative to scheduling. That's what CPUs do. That is not work that can be done by an accelerator. The way to think about this is that the accelerator generates the tokens, but it's almost like a dump truck pulling up, and someone's gotta move all that dirt. The CPUs are the pieces of equipment that move that dirt, and agentic AI only increases that. What you see now is a huge bottleneck in terms of flow. What does that mean? You need more and more CPUs. Lots of them.
CPUs near the head node, CPUs next to the accelerator rack, more CPU racks inside the data center. You just need more. By our calculations, and we think this may be a little bit light, it goes up about 4x, to 120 million CPU cores for that same gigawatt. Okay? In that same profile, we now need 120 million CPU cores. Now, we're trying to put 4x the number of CPU cores in that same power envelope. Power is precious, obviously. The capital required for it is precious. Trying to put all those extra CPUs into a data center that is already stuffed to the brim with accelerators and CPUs doing the core work, that is a problem.
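To make the scaling concrete, here is a minimal back-of-envelope sketch in Python using the figures quoted above (30 million CPU cores per gigawatt today, roughly 4x for agentic workloads). The per-core power numbers it derives are illustrative only, since the gigawatt is shared with accelerators, networking, and cooling, not just CPUs.

```python
# Back-of-envelope sketch of the CPU-core scaling described above.
# The 30M and 120M figures come from the talk; everything derived here
# is illustrative, since the gigawatt also powers accelerators, networking,
# and cooling, not just CPUs.

GIGAWATT = 1_000_000_000  # watts of total data center capacity

cores_today = 30_000_000           # CPU cores per gigawatt, pre-agentic estimate
agentic_multiplier = 4             # "goes up about 4x" for agentic workloads
cores_agentic = cores_today * agentic_multiplier   # 120,000,000 cores

# Facility watts available per CPU core (everything else held constant):
watts_per_core_today = GIGAWATT / cores_today       # ~33 W per core
watts_per_core_agentic = GIGAWATT / cores_agentic   # ~8 W per core

print(f"Cores per GW today:    {cores_today:,} (~{watts_per_core_today:.1f} W of facility power per core)")
print(f"Cores per GW, agentic: {cores_agentic:,} (~{watts_per_core_agentic:.1f} W of facility power per core)")
```

The point of the arithmetic is simply that if the power envelope stays fixed while the core count quadruples, the power budget available per core has to fall by the same factor.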
Now, every tough problem needs a good solution, and we're announcing our first silicon chip that we are selling to customers for revenue. The Arm AGI CPU. Now, this is a big, big deal, and I would love to tell you all the feeds and speeds of the product right now, but Mohamed will kill me if I do that. We'll go into a lot of detail about the product and how we conceived it and the why. Let me be clear, we are now in a new business for Arm, and we are supplying CPUs as chips. The biggest reason we're doing this is that our partners have asked for it. We're also really doing this to solve the problem I just described. As agentic AI becomes mainstream, all of the work required to make that happen is CPU-bound, and you need a CPU that has the DNA of being born to run off a battery.
As I said, reason zero is our partners have asked us for it, and one of the partners we worked closest with on this is Meta. I'm super pleased to have Santosh Janardhan with me today, who's gonna do a better job than I can of telling you why Meta made that choice. Santosh.
Please welcome to the stage Meta's Head of Infrastructure, Santosh Janardhan.
Can I get a clicker? Hey, folks. Welcome. It's funny, every year I try to run the San Francisco Half Marathon, and they distribute the bibs the day before you run right here. I can tell you, it looked very, very different compared to what you're seeing now. Hi, my name is Santosh Janardhan. I lead infrastructure at Meta. What does that mean? Well, it means that we traditionally go and custom-design and build our data centers, and we custom-build our hardware, our GPUs, our CPUs, and we'll get into that quite a bit, the network that connects them, and obviously the software that sort of binds it all together.
It's a fancy way to say that if your Instagram is not working, if your WhatsApp is not working, your message is not arriving, I am the person to blame. Okay? Now, if you think through our family of apps, that amounts to about 3.5 billion users who use our products daily. Every single day, about half of humanity logs into one of our apps and hammers away at it. As you can imagine, that creates a decent amount of scale. We run a decent amount of the internet. We're probably the only hyperscaler that's not a cloud, right? If you think about gigawatts of capacity, tens of millions of servers, and increasingly, more and more, you're seeing bigger and bigger CPU and GPU AI clusters.
Rene sort of went through that quite a bit. I think it's interesting to go and look at how this has grown over the last few years. AI clusters are a fairly new thing, really starting post-COVID, 2020 to 2023, just after ChatGPT came along. Our initial clusters were pretty small. In fact, when I looked back for this, in 2023 our initial clusters had about 128 GPUs. That's it. As you can see, even in 2023, we started scaling quite a bit. As you fast-forward, it really started growing. The demand for this has far surpassed what any one of us could have imagined. We are in the tens of thousands of GPUs stitched together in a single cluster now.
If I project it forward, and this is the thing I really want to set context on, there is absolutely no sign of this slowing down. In fact, it's almost exponential. I only see it accelerating, right? The demand is exponential, and as Rene was saying, power is constrained. I want to talk a little bit about some of our clusters. That is Prometheus. Prometheus is one of our bigger clusters. It'll be well over a gigawatt by the end of this year. That's a lot of GPUs, I can tell you. We've stitched together a bunch of data centers, a bunch of tents. That thing you see, the blue-colored thing, is actually a tent. It's a fancy tent, but still a tent. Right? It's weatherproof. It can survive about a Category 2 hurricane.
We're putting together all of this, stitching it together with a network. To our developers, to our researchers, what they end up getting is about a gigawatt's worth of AI cluster as a single combined entity, which is pretty powerful, as you can imagine. Like I was saying, the demand is exponential, to put it mildly. That is Hyperion. It's going to go up to 5 GW in a few years. Most people can't fathom what a gigawatt is. A gigawatt is about 10 Palo Altos. Ten times what the town of Palo Alto consumes is one gigawatt. This will be five. That's 50 Palo Altos, right? That's what we are building out. It's going to get really, really big. Why do we do this?
At Meta, we have this vision of delivering personal superintelligence for every single one of our users. This means creating models that can go and figure out the most relevant experience, the most engaging experience, for every one of you on our platforms. It means creating a personal assistant for every one of you, right? Now, if you have to deliver personal superintelligence to billions of people, what kind of systems would that take? We're talking about billions of people, each demanding a significant amount of compute over and over. Like I said, over three billion users a day, right? What does it take?
It takes power, it takes land, it takes a decent amount of hardware, software obviously, and most of all, it takes silicon. A lot of silicon, right? This is why I think Arm is such a natural partner for us. What we want is a partner who can match our ambition, who can match our cadence and velocity of innovation. What we realized when we were sitting down with Arm is that they could develop it, they were as hungry as we were, and most importantly for us, they were as power-conscious and as efficient as we wanted them to be. This is why Arm is now the primary collaborator and the primary partner. The CPU that we've ended up developing is pretty foundational.
It's not just a Meta CPU. It's not just an Arm CPU. This is something that I think will end up being a foundational CPU for the whole ecosystem. I think we are at the threshold of something pretty sweet here, because you're going to hear more and more about the constraints that data centers are facing. You're going to hear more and more about how, while the demand for compute is growing, the power is not growing on the same curve. This marriage, I personally think about it as a win-win situation, right? It's extremely heartening to see Arm moving on from being just an IP license provider to actually getting into the game of building something that is production-scale and production-ready. Exciting times.
Yeah, you should, yeah. Two years, three years in the making. I think about it this way: the sweetest things take some time, but we're getting there. Now, like I said, we are obsessed with efficiency. If you think about one of the biggest appeals that Arm has had over the years, it's its power profile. Rene had this fascinating example he was talking about: taking 30 million cores and, instead of 30, now making it 120 million and fitting it in the same power envelope. But there's one thing: you don't want to compromise on performance, right? This is the thing that I really want to make sure we drive home here.
The biggest reason why we sat down with Arm and had this conversation was that we want to put in a lot more cores per watt, but we do not want to compromise on the performance piece. That marriage is why I really think it's a win-win situation here. In fact, about two and a half years ago, we sat down with Arm. We actually first surveyed the market to see whether there was a CPU that could meet the specs that we wanted. If we got the performance, we couldn't get the power. If we got the power, we couldn't get the performance. This is why Arm ended up being such a great partner: the ability to scale that Arm gives us when you push a lot more cores in.
If you think about personal superintelligence, if you think about the orchestration that Rene showed, you don't want to starve your CPUs, nor do you want to starve your GPUs. That marriage is something I think most people are going to realize the value of pretty soon. Now, the design point that we chose for this was something to minimize risk for this iteration. We wanted to make sure we get our first CPU right, get it working out of the box. This is a multi-generational partnership. I just want to emphasize this. When we look at subsequent iterations, things that are already in the hopper of what we're going to build out, I truly believe this chip is going to expand performance on multiple axes.
In fact, this ecosystem is actually going to be awesome. When you challenge the incumbents, you see innovation across the board. That, I think, is what all of us will end up achieving. Now, I want to talk about why. I want to take us back, I guess, to why we do this work. Like I was saying, about 3.5 billion people use our products every single day. That means your friends messaging each other on WhatsApp. It could be, you know, a small or medium business messaging its users on the platform. It could be somebody doing an AI interaction with Meta AI. None of this is possible without infrastructure. Infrastructure has gone from being on the backside of technology innovation to being the enabler of technology innovation, right?
AI is built on the backbone of infrastructure. Every interaction, every post, every feed, every call is done on the basis of what we build out on the back end. At least for us, we're custom-building data centers, we're custom-building hardware, and custom-building silicon. That's why Arm, I think, is such a big partner for us, because we want to squeeze every bit of performance out of what we build out. We think about optimizing things like, you know, performance per watt, performance per gigawatt, and Arm allows us to do that. It allows us to go and increase the efficacy of everything we build out. Why? So that we can serve more users, so that we can hopefully improve every one of your lives in some way, shape, or form.
That's why I think Arm has been an awesome partner. Thank you, Rene and team. It has, it has been absolutely a pleasure to work with you, and hopefully we'll do this for years together. Thank you.
Wow. Amazing. Santosh, thank you. That was terrific. I have someone else I'd like to ask to join us to also talk about how they plan to use our Arm AGI CPU, and that's Kevin Weil from OpenAI. Kevin.
Thank you, sir.
Kevin, thanks for joining us.
Thank you for having me.
Welcome to Fort Mason. Have you been here before?
I have, for a few conferences in the past.
Yeah. Well, welcome. First off, just tell us and tell me-
Yeah
Why does this launch today matter to OpenAI?
Well, I thought you did a good job painting this picture.
Well, thank you.
AI performance these days is system performance. GPUs kinda get top billing wherever they go, but really the CPU is playing an incredibly important role as an orchestrator. Also, I think, as AI becomes more agentic, when you look at a rollout that an agent is doing, it's using tools inside containers. That's CPUs. It's running Python scripts as it performs skills. Those are CPUs. The CPU plays an incredibly important role, and it's really the whole system together that makes this all possible.
Now, your role at OpenAI is a pretty cool one, right? You're doing math and science and the stuff that's-
Mm-hmm
... super compute heavy. When you think about compute constraints, and I know when I talk to you or Sam or Mark or anyone at your company, it's, "I need more compute."
Yes.
Tell us about that.
That is one of the most common things I hear inside OpenAI, "I need more compute." It's kind of the coin of the realm. I mean, the root of it is we have more demand from customers, we have more ideas internally that we want to experiment with. We have more things that we want to do than frankly the industry can keep up with. When you get to the bottom of all this, it's certainly about silicon, but it's also about power. If you have a CPU that draws less power but is just as performant, it means you have more power left over for everything else that you wanna do. That means more inference and more compute. That means more intelligence.
If there's one thing that I've learned in my couple of years now at OpenAI, it's that more intelligence leads us to be able to build better products for all of you. The thing that I keep coming back to, that I try and remind myself of at all times, is that as amazing as the models are today, and every year I'm blown away by the amount of progress we make, the model that you use today is the worst AI model that you will ever use for the rest of your life. It's the worst AI model you're gonna use for the rest of your life, and a year from now, you couldn't imagine coming back to the AI models of today because they're getting better at such a rapid pace, which just means there's basically infinite demand for intelligence. We are not stopping from here.
In your world and in your new role where you're looking at verticals that are somewhat untapped today-
Yeah
... math and science and things of that nature, when you think about the Arm AGI CPU or, more broadly, what does more compute do for you in that space?
Well, I mean, the more compute you have, the more inference you're able to do, the longer the rollouts you're able to do. As we go, you know, as we're sort of progressing from this world of AI as chat to AI solving harder and harder problems. Just like you or me, when you solve harder and harder problems, you're gonna need to think a little bit longer. As we solve more important problems, as we start to think about things like enterprise AGI and science, you're gonna need more compute, which means if you can draw the power that you have, which will always be finite, if you can draw that more efficiently, you can do more, we can solve more problems.
For you personally, what are you most excited about broadly in terms of everything we see going on with AI?
I mean, I kinda think I have the coolest job in the world. I get to work on accelerating science with AI. You've seen sort of a revolution in the past even just three months with GPT-5.2, 5.4, Codex. I mean, it used to be that people said, "Oh, well, these are just stochastic parrots. You know, they're sampling from a distribution of data that they were trained on, but they can't do novel things." Now we're seeing every day AI solve open problems in science, in mathematics, in physics, in biology. We're seeing AI help us understand the nature of the universe. We're seeing AI work for weeks on end using a robotic lab to run 36,000 different experiments to optimize the synthesis of a new protein faster and better than any human could. It's an exciting world.
I think science is gonna move faster than ever, and it's all built on the kind of infrastructure that you're providing here.
We are grateful for your support. Kevin, thank you.
Hey. Thank you so much.
Thank you. I love the idea that the model that we're using today is about as bad as it's gonna get. That's crazy. I wanna repeat, in case I wasn't crystal clear on the first go-around: we are now delivering IP, CSS, and chips. Contact your local sales representative. Will is here. He can be reached afterwards. Now, seriously, I talked earlier about the ecosystem of ecosystems, and none of this could be done without the ecosystem that we have, particularly around Neoverse. We have many partners that we work with on the supply side, whether it's around memory or connectivity. We've also got great customers who use our IP today, and they are so supportive of what we're doing. You know, Santosh talked about the demand.
The market is so large, the demand is so significant that no one company can serve it. What I'd like to do is, rather than me going on and on and talking about it, is have you hear from some of our partners and friends who I think you'll probably recognize a few.
Rene, congratulations on launching Arm's first data center chip.
Congratulations to Arm on the launch of the AGI CPU.
Congratulations on the launch of the Arm AGI CPU.
Today's announcement of the Arm AGI CPU is a significant milestone in AI-optimized compute and for the ecosystem.
Congratulations to the Arm team on the launch of an incredible milestone for the ecosystem built on innovation, scale, and openness.
The continued growth of the Arm ecosystem with its AGI CPU is a significant milestone in continuing to bring customers the flexibility to optimize for their specific workloads and ensuring the accessibility of a new generation of purpose-built compute.
AI systems are evolving rapidly. They are becoming more autonomous and more data intensive, and that means performance is no longer defined by compute alone, but by how efficiently compute and memory work together. Arm's latest platform opens new opportunities for system-level innovation across compute, memory, and storage.
We look forward to continuing our partnership with Arm to advance next-generation AI platforms and ecosystem.
We are proud to partner with Arm in building this open, scalable, power-efficient AI future.
Accelerated computing didn't make CPUs irrelevant, it made them essential partners. Arm architecture has become foundational across all of our platforms: from Jetson, our robotics system, to Drive, our autonomous vehicle system, to BlueField, our data processing units, to Grace, our CPU. Without the ability to mold and shape and modify the Arm ecosystem and the Arm platform, it's impossible for us to build these systems that we build. Arm's adaptability, modifiability if you will, really has made it possible for us to integrate Arm across all of our platforms.
This collaboration with Arm has been great for both companies, and Graviton continues to provide better price performance for AWS customers. We see the AWS-Arm partnership continuing to deliver big for customers.
We are excited about the opportunity this creates to expand the Arm AI and data center ecosystem. Our Azure Cobalt 100 CPU, built on Neoverse Compute Subsystem, is an important part of how we optimize and accelerate every layer of our stack, deeply integrating cloud-native capabilities and delivering the best price performance and efficiency across our fleet. Partnership with Arm is a key part of that vision.
We've been a long-time and early adopter of Arm-based systems and a strategic partner in advancing the ecosystem. Having a diverse portfolio of Arm silicon and software gives OCI greater flexibility and differentiation.
Through deeper comprehensive collaboration in memory, foundry, SoC design, and advanced packaging, I believe Arm and Samsung Semiconductor can deliver exceptional Arm-based AI CPUs worldwide.
Google is proud to partner with Arm to support organizations' most demanding cloud-native and AI workloads.
This builds on a long and productive relationship between our two companies, working together across standards, technology, and products to help deliver the infrastructure for the world's leading data centers. We're proud to partner with Arm, and we look forward to building the next-generation silicon for AI infrastructure.
We look forward to continuing our partnership together and to helping customers and users everywhere build AI systems that are smarter, faster, and more scalable. That's why our long-standing partnership with Arm is so important.
One seamless platform from cloud to edge to AI factories. We look forward to building this future with you. Congratulations on bringing Arm's first data center chip to market.
Charlie and Matt and Sanjay, and even my old boss, did better than I could in terms of talking about this. This could not have happened without a fantastic partnership and support from the ecosystem. Now, I know you are dying to hear about this product, as am I. I'm now gonna turn it over to Mohamed Awad, who's gonna tell you all about the Arm AGI CPU and why it is absolutely amazing. Mohamed.
Please welcome to the stage Arm's Executive Vice President, Cloud AI, Mohamed Awad.
Kevin.
Thank you. Wow. Thank you, Rene. Thank you, Santosh. Thank you, Kevin. Thanks to all of you. Thanks to the entire Arm team that made today possible. We have been looking forward to this, and it is so exciting to be here. It's so exciting to talk to you guys. Thank you, thank you, thank you. Rene talked about how the world is transitioning from sort of legacy data centers to AGI data centers, to agentic data centers, heading down this path, and how the CPU is at the heart of it. We've designed our AGI CPU around three simple principles. We believe that's the heart of what we're doing, it's the heart of what we focused on, it's the heart of how we think about it. First, performance. Performance, performance.
With this many threads going on, with this much work to do, with this much orchestration to happen, you can't slow down. 24 hours a day, as Rene said, these agents are gonna be running, and if they're not performing fast enough, then the rest of that infrastructure that's relying on it grinds to a halt. We focused on performance. Second, we focused on scale. The scale of what we're talking about here is just incredible. You heard Santosh talk about gigawatts. Gigawatts. Scale at the CPU level, scale at the board level, scale at the rack level, scale at the warehouse level, all the way up. We focused on that. Finally, we focused on efficiency. Maybe most importantly.
Because at the end of the day, with this much at stake, with this much compute we're trying to deploy, we're not gonna get there unless we provide that performance, we provide that scale, and we do it in an efficient package. Those are the principles that have guided us. Wait for it. Those are the principles that have guided us, and we refuse to compromise. We've designed for all three. Play the video now. I gotta tell you, we are so proud. Our team has done a fantastic job on this, and it's really been designed from the ground up for this. Let me tell you a little bit more about what you just saw, 'cause I know there was a lot packed into that video. The Arm AGI CPU starts off with our standard Neoverse V3 Compute Subsystem.
That's the same compute subsystem we make available to the entire ecosystem, and we have other partners building on it. Incredibly proud of that. We pack in 136 of those cores, which are designed to be very high performance. Our V-series is our most performant line, and you've seen it set records across lots of different hyperscaler implementations and those of other system providers. We add to that a dedicated 2 MB L2 cache, and we support up to 3.7 GHz in frequency. It's not just the CPU core. We thought about the entire system. As part of the design, we went with 96 lanes of PCIe Gen 6, which supports CXL 3.0, which means you can attach it to any accelerator you like. It also means that you can support things like memory expansion.
On the memory side, DDR5 with up to 6 GB per second of memory bandwidth per core, which can be sustained to each core. That is unique. That level of performance to every single core, on both the IO and the memory, is unique to us in this type of package, at this type of performance point, at this efficiency level. It's not just about the bandwidth, it's not just about the IO, it's about the overall design. You see, we designed the whole thing to be low latency so that you can get to less than 100 nanoseconds of latency from memory. We did so by sticking with a dual-chiplet design, each chiplet having all of the memory and the IO directly on it, rather than having to worry about complicated NUMA domains and multiple hops across the silicon.
The result, and it wasn't a typo in the slide: 300-watt TDP. 300 watts. That is amazing. It's built on a 3 nm TSMC process and allows for that maximum compute density. This is what purpose-built design looks like. This is what we're so proud of. The AGI CPU is breaking records all over the place for performance, for scale, and for efficiency. You saw some of that in the video. This is a standard OCP air-cooled rack. Nothing unique about it, nothing especially exotic about it, just an OCP rack. Standards, right? That's our head of OCP right there clapping, just so everyone's aware. In 36 kW, we pack in over 8,000 of these performance CPU cores. We do so by going to a two-node 1U server, 30 of them. You can't do that in other systems because the power consumption's just too high.
This is setting records for air-cooled. You know what? If you want liquid-cooled, we can do that too. Over 45,000 CPU cores in a 200 kW rack, again, a standard rack from OCP. Over a petabyte of memory in this thing. Oh, by the way, fun fact on this one: it's a 200 kW rack, but we actually only consume about half that much power. We ran out of space; that's why we couldn't put more cores in there. Yeah, it's pretty wild. The scale of this stuff is crazy. It's just really inspiring. These are standard racks, but there's nothing else like them. To get to this level of efficiency, you know, we really had to design the Arm AGI CPU from the ground up, and that's what I'm so proud of, and what I'll tell you about in a minute.
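Those rack figures hang together on simple arithmetic. The sketch below is a hedged back-of-envelope check using only numbers quoted in the talk (136 cores and 300 W TDP per CPU, 30 two-node 1U servers in 36 kW, 45,000-plus cores in a 200 kW rack); the assumption of one CPU per node and the split between CPU power and everything else are mine, not stated on stage.

```python
# Back-of-envelope check of the rack figures quoted above.
# Assumption (not stated explicitly in the talk): one 136-core, 300 W CPU per node;
# memory, IO, and fans consume the rest of the rack budget.

CORES_PER_CPU = 136
CPU_TDP_W = 300

# Air-cooled OCP rack: 30 two-node 1U servers in a 36 kW budget.
servers, nodes_per_server = 30, 2
sockets_air = servers * nodes_per_server                 # 60 CPUs
cores_air = sockets_air * CORES_PER_CPU                  # 8,160 cores ("over 8,000")
cpu_power_air_kw = sockets_air * CPU_TDP_W / 1000        # 18 kW of the 36 kW budget
print(f"Air-cooled: {cores_air:,} cores, ~{cpu_power_air_kw:.0f} kW of CPU TDP in a 36 kW rack")

# Liquid-cooled OCP rack: "over 45,000 cores" in a 200 kW rack,
# reportedly drawing only about half that power.
cores_liquid = 45_000
sockets_liquid = cores_liquid / CORES_PER_CPU            # ~331 CPUs
cpu_power_liquid_kw = sockets_liquid * CPU_TDP_W / 1000  # ~99 kW, consistent with "about half" of 200 kW
print(f"Liquid-cooled: ~{sockets_liquid:.0f} CPUs, ~{cpu_power_liquid_kw:.0f} kW of CPU TDP in a 200 kW rack")
```

Under those assumptions, the air-cooled rack lands at roughly 8,160 cores and the liquid-cooled rack's CPU power comes out near 100 kW, which lines up with the "about half of 200 kW" remark.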
I wanna just talk about the fact that these are standard racks, because it's not only that we're taking from OCP and leveraging some of their platforms, we're also giving back. We're in the process of making a bunch of contributions to OCP, things like Arm ServerReady, authenticated access control, and diagnostic tools, and those contributions won't just be for the Arm AGI CPU; they will apply to the entire ecosystem. We will make them available so that they benefit all Arm-based platforms, because it really is an ecosystem that we're building here. You know, Arm has always been about nurturing and partnering with the ecosystem. That's always been core to our identity, and those relationships are paying great dividends now.
You saw the video that Rene played, and we're so grateful for all those partnerships. You know, it's actually those partnerships which have allowed us to build the Arm AGI CPU. Some of them are very long-standing. Partners like TSMC and Samsung and Micron and SK Hynix, these are partners that we've been working with literally for decades. We've also got some new partnerships, which is why we're so proud to say that the Arm AGI CPU is available now. Oh, I went a little bit far there. Can you go back, please? It says it there. Yeah, so the Arm AGI CPU is available now, and we're so proud of that. It's actually in customers' hands. Customers are actually evaluating it as we speak.
We are ready to go, and we're so grateful for our partners, on the ODM side, on the memory side, on the CPU side, on the manufacturing side, who have helped us get to this point. We'll be in production by the end of the year, and we are excited to share that with you. Today, we've got firmware ready to go. We've got specifications ready to go. I talked to you about platforms. I talked to you about supply. The one thing I haven't talked to you about yet is software. Let's talk about software. Now the next slide. Okay. You know, the reality is that Arm has been investing in the data center software ecosystem for well over 15 years.
I don't know if everyone understands how long we've been investing in the software ecosystem. At the beginning of that time, in the early days, it was just Arm investing in the software ecosystem. Then something happened in 2019. We launched Arm Neoverse. When we launched that compute platform, it allowed our customers to begin to launch products with a much lower barrier to entry. It allowed them to build their own silicon and start to coalesce around a common platform. That started that software flywheel turning. You see, when tech leaders started adopting Neoverse, they started to optimize software around it. The more of those tech leaders that adopted Neoverse, the faster that flywheel started to spin.
Today, we've got AWS, and Google, and Meta, and Microsoft, and Oracle, and Nvidia all investing alongside us in the software ecosystem, and that really is what, you know, allowed us to make some great traction in software. Together, we've made Arm a first-class citizen in most modern software packages. For our AI software ecosystem specifically, not only are we a first-class citizen, not only does software run well on Arm, software actually runs best on Arm, and the reason for that is very simple. For AI, the Arm architecture is the primary CPU architecture in support of AI today.
In fact, the work we've done together with technology leaders means that tens of thousands of companies today run their software on Arm in the cloud on over 1.25 billion Arm Neoverse cores, which we've already shipped into data centers around the world. That growth is only accelerating. That's actually the curve. You see, Arm in the data center just works. This is a key point, and I don't know if I'm making it well enough. I'm gonna bring somebody on stage who's got a little experience with software. Paul Saab has worked on Meta's infrastructure for over 18 years. He's one of the longest-tenured employees at the company. There's a laundry list of things that he's been responsible for, including the adoption of flash storage all the way through to the implementation of IPv6.
Today, he's specifically focused on making AI more efficient in their infrastructure, and that's how we got to know each other. Please welcome Paul Saab. Great seeing you, man.
Thank you.
Thank you. Thanks for being here.
Thank you for having me.
You've told me the story before, but, you know, I really wanna hear, you know, you guys have had a long history with Arm. It goes back longer than just a couple of years ago. Can you maybe give everybody a little bit of a history lesson as to kinda how things started?
Yeah. You know, I think it was like 2014, 2015, we were looking at Arm. You know, we were really excited about the efficiency wins that we were seeing. We were really back then just targeting our Hack/PHP platform called HHVM. You know, it was working great. Like, we made it work, it was performant, and then the market kinda went away for us. We didn't really have a platform anymore, and so we just sort of tabled it, and we ripped all that code out. Everything in the code base was removed.
Oh, geez.
Yeah.
Well, okay, that was 2014 and 2015. Obviously, something must have changed or you wouldn't be standing here today, right? Kinda where did we go from there?
Well, the story's kinda funny. It was like post-COVID bubble, and we had a bunch of people over at the house, you know, sitting around, socializing and whatever, and I turned to one of my colleagues and I said, "Hey, I wanna port to Arm again." I kinda had this gut feeling that the ecosystem and the world had changed, and, you know, if we didn't start then, we would be kinda playing catch up when it actually happened. I didn't even ask my boss here for permission to buy these machines or even to start the project.
It's a good thing he approves now.
I don't really ask him permission for much to do, so.
All right. Well.
We found some machines out there. I went to another colleague and said, "Hey, I wanna port to Arm," and he actually responded, "I was wondering when you were gonna ask me." We got the machines in, started porting, you know, making great progress, but it was super slow. We only had 8 machines. We had this vast x86 ecosystem, and I went to the guys and I was like, "Hey, can we cross-compile?" That's what we ended up doing. We ended up, like, you know, working round the clock. It took us about 90 days, five engineers, and we had a full, complete port, full system ready, but then we ran into another problem. We had no silicon to buy.
This is, you know, and Santosh referenced this, when we looked at every partner, and I think this is about the time you and I started talking.
You'd say the market was a little bit underserved maybe for what you guys were looking for.
I think underserved is an understatement.
Let's go back to the 90 days, 5 people.
Yeah.
I mean, really, you know... Okay, I'm gonna take your word for it. It was 90 days, five people, but that's just getting the source code working. Like, now you've gotta operationalize it and get it performant. Like, how's that going?
It's still a small team. I mean, it's, you know, a lot of, you know, very devoted people bringing the systems up. You know, from the time we finished that initial port in 2022, it took us about two and a half years to actually get some sort of production-worthy, you know, systems in that were, you know, TCO-effective, you know, performance per watt. You know, it was still a very small team, and even today it's really a small team that's focused on, you know, hyper-optimizing. You know, it started off with, you know, once those performance systems landed, it was really just one engineer until, you know, a few more came in.
That engineer had never written a single line of NEON, never written a single line of SVE, and single-handedly, you know, took some of our most precious workloads and made them work on Arm.
How is it performing now, generally? Like, on typical workloads? Like, how should we think about the performance in general?
We're seeing performance that, you know, is basically equal to anything you can buy on the market today at massive performance per watt improvements.
That's great. Okay, my light's gonna start blinking in a minute here, so I'm not gonna keep you on stage too long. I just, first of all, wanna say thank you, but before I let you go, I guess one question for you: you know, if somebody's out there thinking about it, 'cause there are, you know, tens of thousands of companies that are using Arm already, but there's still a few that aren't, you know, what sort of advice or guidance would you give them? What would be your kind of recommendation to them?
I think, you know, small focused teams doing the port, but, you know, like, if I were starting the port today, I would be using an LLM. I mean, what I'm seeing is some of the engineers that are now optimizing, you know, even existing Arm-accelerated code, they're using LLMs to, you know, boost those by 10% or 20%. So the barrier to entry today for porting to Arm is, I would say, close to zero, 'cause, like, the LLM's just gonna do it for you. I don't even write any handwritten code anymore myself. It's just all LLM, all test cases, all across the board. So, like, there's no excuse not to port to Arm today.
Excellent. Thanks, Paul.
Yeah. Thank you.
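To make the kind of port Paul describes a little more concrete, here is a minimal, hypothetical sketch of the very first decision in such a port: detecting the host architecture and choosing a SIMD backend. The function and backend names are illustrative assumptions, not Meta's actual code; real ports typically handle this with build-time flags or NEON/SVE intrinsics in the C/C++ layer.

```python
# Minimal, hypothetical sketch of the first step in a port like the one
# described above: detect the host architecture and pick a SIMD backend.
# Backend names here are illustrative; real ports usually handle this with
# build-time flags or runtime dispatch in C/C++ (NEON/SVE intrinsics).
import platform

def pick_simd_backend() -> str:
    machine = platform.machine().lower()
    if machine in ("aarch64", "arm64"):
        return "neon"   # or "sve" where the CPU and toolchain support it
    if machine in ("x86_64", "amd64"):
        return "avx2"
    return "scalar"     # portable fallback: get the port correct first, fast later

if __name__ == "__main__":
    print(f"Running on {platform.machine()}, selected backend: {pick_simd_backend()}")
```

The design choice Paul alludes to, getting a correct scalar port running before hand-tuning (or LLM-tuning) the NEON/SVE paths, is what keeps the initial team small.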
Thank you. Well, that was inspiring. I mean, you know, Paul and I have obviously known each other for a little while, and, you know, the tenacity, what I hear around is that once Paul gets something in his mind, it just kinda happens. I appreciate all the support, Paul. Thank you. We're still part of the partnership that we've had with you and with Meta more broadly, so thank you very much. You know, what I love about that story is that they had a need, you know, the market was underserved, and we worked together to go address it. The reality is the opportunity for the AGI CPU is broad. The software's ready, and we have a great product, and that's why we're seeing such great customer traction.
We're seeing it in multiple areas. If you think about companies like Cerebras and Positron and Rebellions, they're joining Meta and OpenAI by using the Arm AGI CPU for things like head nodes managing the accelerators they're building, so a head-node type use case, and also for agentic orchestration and fan-out. These are specific use cases that they're looking at. Then in the cloud, you know, we see companies like SAP and SK Telecom and Cloudflare who are actively using or planning on deploying Arm as part of, you know, their infrastructure. These are just a few of the customers that are planning on using the Arm AGI CPU, but rather than me tell you about what they're doing, let's listen to them.
Arm's been building IP for hardware for generations. Over the last decade, we built processors for heavyweight compute.
Cloudflare is one of the largest networks in the world, and Arm is a really important part of our ability to keep innovating, not just at the speed we've always innovated at, but a speed that keeps up with what's going on.
Arm has consistently delivered predictable scalability and excellent power efficiency.
How can we drive more energy efficiency at the end of the day and more scale to what we are running and deploy? Also, having an eye on, from a price-performance perspective, on cost reduction overall.
One of the key criteria for our customers is the best use of the limited energy available for them. Getting the technology that gives them the best outcome per watt is critical.
Data centers have an obligation in the AI ecosystem. We use a lot of power. This is where Arm has been a leader historically, and Arm AGI really separates itself.
AI has fundamentally turned our business upside down. It grabbed all the available data center capacity, to the point where many new data centers are being built and coming online, which then puts pressure on power delivery into those data centers. Arm technology gives them the most outcome per watt.
The AI industry isn't the largest industry in tech. It might be the largest industry in the history of tech.
There is a big excitement about the Arm AGI CPU. This is helping us to accelerate the innovation as we are moving into the future of AI-driven enterprises.
AI is redefining the entire infrastructure from the user, the consumer, the business, the use cases, the models, the applications, the infrastructure, all the way down to the silicon.
Many of the solutions out there are getting power hungry. Not all customers' data centers today can handle that. With the Arm silicon coupled with a brand-new line of AI accelerators, we now have a power-efficient solution that we can offer our customers.
This performance per watt that we're gonna be able to get out of this CPU is really gonna help us not only save money, but be able to get to places that have been harder for us to get to.
What makes this partnership compelling is their system-level combination. SK Telecom is pairing the Arm AGI CPU with the Rebellions AI chip. This CPU is the perfect fit as we evolve into an AI data center developer.
The Arm AGI CPU strengthens the orchestration layer of the system, enabling greater efficiency in the head nodes that support next-generation frontier AI systems. We are excited to work with the Arm team and continue building the infrastructure that powers the next wave of AI.
There's a lot of change happening right now, and it's interesting to see how that drives innovations on all levels. It opens up a whole new world of possibilities to drive innovation. That's pretty exciting.
Our mission is smarter technology for all, and so to be able to be at the front end driving AI platforms and solutions, who better to partner with than Arm?
This partnership is not about what's coming next. This partnership is about the decade to come.
I just wanna say thanks again to all of our customers and to partners that are supporting us here today. The support we've gotten has really just been incredible. We built Arm AGI CPU for you, and we're so pleased with the response. You see, Arm AGI CPU has been designed from the ground up to make sure that performance scales and power stays predictable. That's the superpower, performance, scale, and efficiency, and it's resonating with our partners. You see, that's a very different approach than is taken by x86. They are burdened with execution overhead and legacy feature support. They chose to focus on things like modularity, support for lots of different markets and esoteric use cases. We are ruthlessly focused on improving efficiency and reducing latency. Ultimately, this is about architectural philosophy. We're not strapped to the past.
We are not strapped to the past. Listen, we don't support Lotus Notes, okay? We just don't do it. We're focused on exactly and only what the AGI data center needs: performance, scale, and efficiency. Let me take you through that in a little more detail. It starts with performance, and performance for us is all about doing more work for every clock cycle. Great IPC has always been an area where Arm has shined. How much work do you get done every single cycle? Our AGI CPU absolutely shines here. Now, what we see is that legacy CPUs sometimes try to compete on this vector by doing things like increasing the frequency, going to boost modes. Here's the reality. When you increase the frequency, what else do you increase? Power. That's a problem.
These boost modes are not sustainable across long periods of time. They're not sustainable across a chip. With the Arm AGI CPU, what we give you is full performance, sustainably, all the time, and ultimately, that means scale. We scale linearly across cores, and our memory and I/O subsystem is specifically designed to be matched to those cores so that we can continue to feed them 6 GB/s of memory bandwidth to every single core. In order to scale, what we see some of these legacy architectures do is multithreading, right? What happens when you do multithreading? You throw two jobs at the same core. That's how they get to a high thread count, or try to. The reality is that your I/O and your bandwidth don't double. You've just moved the bottleneck elsewhere.
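A simple way to see that argument is per-thread bandwidth arithmetic. The sketch below uses the 6 GB/s-per-core figure quoted above for the Arm AGI CPU and assumes every core sustains it simultaneously; the "legacy socket" numbers in the comparison are purely illustrative assumptions, not measurements of any specific product.

```python
# Illustrative arithmetic for the SMT argument above: doubling threads on a core
# does not double the memory bandwidth behind it, so bandwidth per thread drops.
# The 6 GB/s-per-core figure is from the talk; the legacy-socket numbers are
# hypothetical, for illustration only.

def bandwidth_per_thread(total_bw_gbs: float, cores: int, threads_per_core: int) -> float:
    """Evenly divide a socket's memory bandwidth across all hardware threads."""
    return total_bw_gbs / (cores * threads_per_core)

# Arm AGI CPU as described: single-threaded cores, 6 GB/s sustained per core.
agi_cores = 136
agi_bw = agi_cores * 6.0   # aggregate, assuming every core sustains 6 GB/s at once
print(f"AGI CPU: {bandwidth_per_thread(agi_bw, agi_cores, 1):.1f} GB/s per thread")

# Hypothetical legacy socket: same aggregate bandwidth with SMT off vs. on.
legacy_cores, legacy_bw = 64, 400.0
for smt in (1, 2):
    bw = bandwidth_per_thread(legacy_bw, legacy_cores, smt)
    print(f"Legacy socket, {smt} thread(s)/core: {bw:.1f} GB/s per thread")
```

Turning SMT on doubles the nominal thread count but halves the bandwidth each thread can count on, which is the bottleneck being described here.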
Oh, by the way, the CPU is burdened with managing that back and forth, and so your performance degrades. You end up starving your processes. What we see over and over again is that data center operators have to overprovision their data centers by 30% or more to deal with this lack of linear scaling. This is an actual thing that happens. We take pride in not having to do that. There's actually a great demo of this out on the show floor. I encourage you all to check it out after the keynote. Then finally, we have this maniacal focus on efficiency. Obviously, that's always been Arm's hallmark. It's always been something that we've been great at.
We're leveraging all those techniques and methods and experience that we've built up over the decades around building incredibly efficient processors, incredibly efficient technology, and we're packaging that all up in a custom design specifically for this use case. AGI CPU is purpose-built without that legacy overhead because it all comes back to performance, scale, and efficiency. That's my efficiency bullet. At the end of the day, no wasted cycles, no stranded compute, no wasted power or silicon, and we're super proud of that. Let's look at what it means in practice. I'm gonna show you the results, and they kinda speak for themselves. First, let's talk about sustained performance. What you see here is the performance that you can expect to achieve consistently. So this is consistent performance. No performance throttling because you're over power budget, no memory or IO contention.
This is the sort of performance you're gonna see. You can see, with the AGI CPU, it's world-class. You've got world-class performance you can take to the bank. Next, let's talk about scale. How many threads or agents can you run in each rack? How much compute do you actually support with a fixed power budget? With a fixed physical footprint? Remember those racks I showed you earlier? There you go. That's where we land. Of course, there's efficiency. Performance per watt. What's going on with my screens? They're flipping all over the place. Can you go back, please? Go back one more. What you're seeing here, all of these charts are with SMT disabled, so these are single-threaded cores for us, single-threaded cores for them, no multithreading whatsoever. Okay?
I told you what I thought about multithreading, which is why we elected to show it to you this way, but oftentimes what we hear is that multithreading is gonna improve that middle chart. It's gonna allow for more scalability. Multithreading is going to improve the performance per watt. Let's take a look at what happens if we turn multithreading on. Okay. First of all, your performance goes down. That's the chart on the left. The reason the performance goes down is because you can't just add more work and expect performance to be the same. That's pretty self-explanatory. In this particular case, again, based on the memory and I/O bandwidth available, that's kinda where you land.
That second one, the sustained threads per rack: the reality is that because of the limitations on the device and all of the bottlenecks, you end up in a scenario where you can't actually use all of those threads. Many are left idle. Finally, performance per watt. Yes, there is a small improvement there, but not enough to change the calculus. At the end of the day, the results are clear. This is a killer product, and Arm is in a class of its own. Performance, scale, and efficiency. I'll say it one more time. This is what the Arm AGI CPU is built for, and the impact on the AI data center is gonna be profound. Let me turn it back to Rene. Thank you.
Thank you, Mohammed, and thank you, Paul, and your LLM agent that's gonna do all the conversions for us. We've shared a lot with you today, and I am grateful for your patience and time. If there were just a few things to take away from this morning, I think it starts here. Performance per watt, which translates to performance per rack. When you look at an x86 equivalent structure with the same power delivery, 36 kW, you get 2x the performance in the same power envelope. That's what you need to remember. For those of you who are paying for that power, there's another number you need to remember. If you think about 1 GW of capacity, and you think about the CapEx associated with that extra power you're spending for the sake of performance, it's up to $10 billion of CapEx.
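One rough way to see how a figure like that could be sized (a back-of-envelope sketch, not Arm's model): if a less efficient fleet needs roughly twice the power for the same work, a 1 GW buildout implicitly pays for an extra gigawatt of capacity. The 1 GW and the ~2x performance-per-watt claim are from the talk; the buildout cost per watt below is an assumed, commonly cited order of magnitude.

```python
# Back-of-envelope sketch. Only the 1 GW capacity and ~2x performance-per-watt
# figures come from the keynote; the $/W buildout cost is an assumption.
target_capacity_gw     = 1.0      # planned compute capacity, from the talk
perf_per_watt_ratio    = 2.0      # Arm AGI CPU vs. x86, from the talk
capex_per_watt_dollars = 10.0     # assumed data-center buildout cost ($/W)

# Extra power a less efficient fleet would need to match 1 GW of Arm compute.
extra_power_gw = target_capacity_gw * (perf_per_watt_ratio - 1.0)

extra_capex = extra_power_gw * 1e9 * capex_per_watt_dollars
print(f"Extra CapEx: ${extra_capex / 1e9:.0f}B")   # ~$10B at these assumptions
```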
Obviously, these are serious numbers. Again, the takeaway from the Arm AGI CPU is 2x performance per watt, probably more than 2x-ish. Now, you heard a number of comments in the videos, including from Santhosh, that when you embark on the kind of engagement and partnership we're talking about, while a day like this, an event like this, is wonderful and amazing, and we're talking about a great product, it's really not about the day; it's about the future and the commitment to a roadmap. We are committing to future generations of this product. Arm AGI CPU 2 is coming out soon, as is Arm AGI CPU 3. As you heard in the videos again, these are multi-generational engagements. We are investing a lot. Our customers are investing a lot. The ecosystem is investing a lot.
We are absolutely committed to a roadmap and a future around this product line. In addition, we will continue the CSSes around these products. As Mohammed mentioned, you know, one of the big benefits of the CSSes is the speed they allow our customers to get to market. It also enables a lot of benefit for us as well. The CSS roadmap will continue. I wanna close a little bit around what we think the financial opportunity is for Arm. Before this day, our business has been IP and IP compute subsystems, and we have been doing extremely well in that business, far better than what we had talked to investors about two and a half years ago when we did our roadshow for the IPO. We're actually ahead of that.
When we look at the AI data center business, that represents today about a $3 billion TAM, and now I'm just talking about roughly the royalties. I mentioned on one of the earnings calls that the cloud AI business will probably be our largest business in a few years, and this is really driven by all of the growth that Mohammed talked about, the deployment of 1.25 billion Neoverse cores and going forward. When we think about our business going forward with the Arm AGI CPU, as Mohammed mentioned, we have committed customers: Meta, OpenAI, Cloudflare, SAP, F5, customers you saw in the video. When we think forward about what the market opportunity for this business is, it is a dramatic sea change for the opportunity.
When we look at what's going on with agentic AI, the growth of CPUs, the benefit that power-efficient CPUs bring to the data center, we think this represents about $100 billion TAM for us in the future. Today is all about the Arm AGI CPU. There will be some tomorrows, and don't ask me about tomorrow today, but there will be some tomorrows. We think this opportunity to take the work we've done across all of the markets, as you've heard in the videos from edge to cloud, from milliwatts to gigawatts, we think we have an opportunity to address greater than a $1 trillion TAM by the end of the decade.
We've got some work to do, but I couldn't be more proud of what our company's achieved, grateful to the ecosystem that helps us achieve it, and the customers that are now committed to buy our product. I wanna close by saying that we stand on the shoulders of our ecosystem. None of this is possible without the ecosystem that we have nurtured for 35+ years, many of you who are here today and watching on video. Thank you for attending today. Arm is everywhere, and we appreciate your support. Thank you.
Over 350 billion chips have been shipped with Arm. We're the most pervasive computer architecture ever invented.
We touch 100% of the connected population. Anyone with any digital device is likely using Arm.
Arm is one of the great secrets of the technology space. People use Arm technology every day multiple times.
When you look at the Arm ecosystem, it is the de facto platform that everybody is building on top of.
We deal with an immensely diverse range of different companies. In some cases, they take our Compute Subsystem and put their technology alongside that. An increasing number of partners are saying, "Actually, we're not experts in building silicon. We would like to have an Arm-based solution."
We chose for many years not to build silicon because really Arm is all about what our partners want. That gave us the IP business that we've got. That will be continuing an important part of what we do going forward.
The IP and the CSS roadmaps remain unchanged. We get this tight coupling of lessons that we learn from doing silicon development that feeds back into the current generations of IP development, which the whole ecosystem, however they engage with Arm, will benefit from.
It's all about choice, and it depends on where the ecosystem is at and where customers want to engage with us around that.
The Arm silicon will combine all of the goodness of all of our technologies, and it's got enhancements to scalable vector extensions that allow you to provide higher quality computing for AI as well.
AI is moving incredibly fast. I think every day you wake up and there's a new technology around AI.
Think about where we're gonna be in AI 20 years from now, and then think about the fact that everything's growing faster today than it was 20 years ago.
The world is evolving to adopt AI, and so more and more what we're seeing are these optimized systems, these optimized platforms. That is the perfect home for a product like the Arm AGI CPU.
In the era of agentic AI, the Arm AGI CPU was really purpose-built for these agentic AI workloads and specifically for AGI in the future. That's really how this brand will stand up.
As we've learned with AI, we're starting to learn more and more about customers' workloads, their applications, what they need. What we're finding is that the AI solutions today are very, very power hungry.
Power reduction is a constant battle, and Arm has an architectural advantage for getting more performance at lower power.
The product was designed to run on batteries, which is why the product is so pervasive. Any CPU is only as pervasive as the software that runs on it, and that's really the ecosystem.
Compared to the other processors that are out there today, Arm has power efficiency. It gives customers options that they don't currently have today.
Arm AGI enables a true dual-socket server that we can fit into any existing air-cooled data center, and it represents a focus on efficiency that I think we've lost, and it also is a focus on performance.
I'm excited about these new Arm silicon-based systems because it allows customers to have more choices, and we do believe that this is gonna be a game changer.
It's amazing to be a part of bringing this technology and the value and the benefits of this technology to as many people as possible.
For Arm to build its own silicon, the first thing we had to understand is what our customers go through to build their own versions. There's a bunch of things that you have to do that we kind of knew about, but now we understand much better.
We really focus on robust, disciplined design practices, and the team has really stepped up; the amount of dedication, the all-in buy-in for this project, has been just outstanding.
You know, for me to be a part of this team, to work alongside many of the people that I see, you know, working hard in the labs at night or putting in their all to kind of solve that late-night issue, that's an incredibly rewarding experience.
When the silicon came back and it worked and it's up and running and the software's running and the team's excited and we ship to our customer and our customer has it up and running, the feeling's absolutely amazing.
I remember the email that I got saying it's alive. That was a wow moment. It meant that I could ping the CEO and say, "It's alive." The wow moments won't be just one. It's just gonna be a series of them every other week.
To build a world-class product like the Arm AGI CPU, it takes a lot of people, a lot of resources, and we didn't do it overnight, but we did it pretty quick. That's the way you have to do things in this market today. Speed matters, and I couldn't be more proud of the team to make it happen.
I look at this as a chapter in a longer novel, and we're just getting started. It's gonna be groundbreaking.
I feel immensely proud to have worked for Arm and to work with some amazing people to achieve that transformation to the company, but the transformation to the industry and to the world.