SoftBank Group Corp. (TYO:9984)

Investor Update

Mar 17, 2025

Moderator

[Foreign language]

We are going to start the Arm briefing session. I would like to introduce today's presenter, Ian Thornton, Vice President in charge of IR. Simultaneous interpretation is provided for sell-side analysts; please tune in to the appropriate channel. The channels are displayed here. For those online, the default setting is the original audio; if you wish to listen in English or Japanese, you may select Interpretation from the menu bar in the Zoom application. The Japanese slides are currently shown on the screen; you can switch to the English slides by selecting View Options in the Zoom application and then selecting the English screen share. Now we would like to proceed with the meeting. Ian, please go ahead with your presentation.

Ian Thornton
VP of IR, Arm

Good morning. Thank you very much indeed for coming to this presentation. It is wonderful to be back here again in Tokyo. It's only been a year since I was last here, but many important events have taken place. Arm has reported record revenues for three of the last four quarters. Demand for Arm technology has never been stronger. Driven by excitement around AI, Arm has introduced many new products to meet this demand. We have also accelerated investment in R&D to speed up the development of the next generation of Arm technology. Recently, Arm, SoftBank, and OpenAI announced two major new projects, Project Stargate and Cristal intelligence. Project Stargate will increase demand for Arm technology and Cristal intelligence will enable us to develop new technologies faster by using OpenAI's tools in our R&D and business processes.

Firstly, I will talk about how computer chips are designed today and how Arm simplifies the design of the computer chip, reducing both time and cost. Then I will talk about Arm, our products, our ecosystem, and our primary end markets. Next, I will talk about where Arm is investing in new technologies and new ecosystems to take AI everywhere from the data center and to the edge. I will then talk about the impact of this investment on Arm's P&L. I will show how Arm's R&D investments are aligned with SoftBank's vision for AI and the future of computing. We hope that this will provide a platform for artificial superintelligence, which is a key focus for Masayoshi Son, and then we'll have plenty of time for your questions. Consumer electronics have become more capable because the computer chips inside of them have become smarter and more complex.

On the left is the first ever silicon chip. The manufacturing process was revolutionary, but the circuit design was very simple: it contained just four transistors and could store a single bit of data. Over the 64 years since then, chips have become extremely complex, like the one shown on the right. This example, from a modern smartphone, has over 100 billion transistors and contains the latest technology for computing, graphics, AI, and radio communications. Only a few companies have the expertise and technology to design and manufacture such complex chips. Looking closely at the chip, you can see it is made up of a number of distinct blocks. One of the most important components of the chip is the main processor, or CPU. This particular chip has six CPUs: two high performance CPUs, which are the larger blocks shown here, and four smaller power efficient CPUs.

The operating system will switch between the CPUs depending on the performance requirements of the application being run. When high performance is needed, it will use the large CPUs. When less performance is needed, it will switch to the smaller, more energy efficient CPUs. The graphics processor is typically very large. In this case, the graphics processor is used for gaming. This isn't a data center GPU. The size of the GPU is determined by how many pixels need to be updated on the screen. As you move to larger screen sizes, higher resolutions and faster frames per second, the graphics processors tend to get bigger, needing more transistors. There can be other accelerators on the chip for offloading compute intensive tasks such as video encoding and decoding, and also encryption. There can be radios on a chip that connect the smartphone to the outside world.

This can include 5G modems, Wi-Fi, Bluetooth and so on. There can be special blocks to communicate with other devices in the smartphone, such as the display, memory, camera, power controllers and so on. All of these functional blocks need to be connected together. As chips have become larger and more complex, the interconnect has become more sophisticated, automatically moving data around the chip so that the functional blocks are never stalled waiting for the next piece of data to turn up. Finally, we have the input and output pins that turn the internal digital signals within the chip into analog signals that communicate between different chips. If you take apart any modern device, from a smartphone to a self-driving car, what's at its core? It's an Arm CPU design.

Arm also has designs for many of the other components shown here, including the GPU, the accelerators, and the interconnect. Now we have seen how these chips are designed, let's explore Arm's role in the overall ecosystem. Arm provides the design of the CPU, the brain of the chip. We then license the design of the CPU to companies who design chips. These chips are either sold on to a device manufacturer or used in the company's own products. Our customers include both semiconductor companies such as Qualcomm, NVIDIA, and Intel, and also systems companies such as Apple, Amazon, and Tesla. In the past, the systems companies such as Amazon would buy chips from semiconductor suppliers, but now they are choosing to develop their own chips. They are therefore coming directly to Arm to get their CPU designs. These companies do not manufacture the chips themselves.

Rather, they take their chip designs to a foundry, which will physically build the chips out of transistors and bit cells. These foundries include TSMC, Samsung, SMIC and so on, some of the largest leading fabs in the world. Arm works closely with all of these foundries to ensure that Arm CPUs can be efficiently built in every foundry. Once the chips are designed by NVIDIA or Qualcomm and built by TSMC or SMIC, they get packaged and then go into an end product such as a smartphone, a laptop, a smartwatch, a car, a data center server, a drone, and so on. All of these products can only operate when they have software to control them. Your smartphone cannot function without software, and all the software runs on a CPU. All of these products are powered by Arm's CPU technology.

Arm is now the most widely used CPU architecture in history, and we can see this in the numbers. Our customers have shipped over 310 billion Arm-based chips to date. They shipped around 30 billion in the last fiscal year alone, and they're currently shipping around 8 billion chips per quarter, which is one chip for every person on the planet every 90 days. These numbers are so gigantic because everything today is a computer, and consequently our revenues are growing strongly. We are forecasting more than 20% year-on-year revenue growth for the current fiscal year, and we are highly profitable, with around a 45% non-GAAP operating margin. We generate strong cash flows and have no debt. I mentioned earlier that the purpose of a CPU is to run software.
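As a quick sanity check, the per-person figure follows directly from the shipment rate. This is a rough illustration only; the world population figure is an assumption, not from the presentation:

```python
# Rough check of the shipment arithmetic quoted above.
chips_per_quarter = 8e9      # ~8 billion Arm-based chips shipped per quarter
world_population = 8e9       # approximate world population (assumption)
days_per_quarter = 90        # one quarter is roughly 90 days

chips_per_person = chips_per_quarter / world_population
print(f"~{chips_per_person:.0f} chip per person every {days_per_quarter} days")
```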

Software is tied to the CPU that it is written for and the success of a CPU is dependent on the broad availability of software for that CPU. Arm has by far the largest software ecosystem on the planet. We've been able to achieve this because over 50% of all the chips with CPUs are Arm-based chips. We estimate that over 20 million software developers are currently creating new software programs and apps for Arm-based devices and that to date they have invested over 1.5 billion hours in creating Arm software. Arm also invests in software. We invested over 10 million hours in the software development of Armv8 and we are planning to invest three times that amount for Armv9. At today's salary rate, 10 million hours is around $1 billion. For Armv9, we are planning to invest around $3 billion just in software development.
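The hour and dollar figures quoted above imply a simple hourly rate, which can be checked with back-of-the-envelope arithmetic. This is illustrative only; the implied rate is derived from the quoted figures, not a disclosed number:

```python
# Check the engineering cost rate implied by the figures above.
armv8_hours = 10_000_000       # ~10 million hours invested in Armv8 software
armv8_cost = 1_000_000_000     # ~$1 billion at "today's salary rate"
implied_rate = armv8_cost / armv8_hours   # dollars per engineering hour

armv9_hours = 3 * armv8_hours  # planned: three times the Armv8 investment
armv9_cost = armv9_hours * implied_rate   # ~$3 billion, consistent with the talk

print(f"Implied rate: ${implied_rate:.0f}/hour")
print(f"Armv9 software investment: ${armv9_cost / 1e9:.0f}B")
```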

Arm's top customers are some of the largest companies in the world, such as Microsoft, Alphabet, Amazon, Apple, NVIDIA, Samsung, Intel and Qualcomm. We work closely with all these industry leaders to understand their product plans and what software they will be running in the future. We can therefore develop the CPUs that are ready to run this software and so enable their future products. Also, as our largest customers are all well financed, Arm has no trouble in collecting monies owed. Although all chips can contain Arm technology, we put most of our focus on six broad market segments, starting with smartphones. Arm has 100% market share of the main chip going into all smartphones and tablets, including all the Apple iPhones and Android phones. We are gaining share in the PC market.

Already all Apple Macs use Arm, and we are beginning to see some Windows devices based on Arm. NVIDIA's recently announced AI PC is also Arm-based. We are also gaining share in the data center, with over a 50% market share within Amazon's AWS, and we see a growing share at Alibaba, Google, Microsoft, Baidu, Tencent, Oracle and others. We have a high share of wireless comms equipment, especially within base stations; the 5G rollout has been very good for us, and we are gaining share in wired comms. In the automotive space we have a very high share of in-vehicle infotainment and driver assistance systems. Tesla and many Chinese car OEMs are now very aggressively deploying self-driving features, and many of these use Arm-based chips.

Finally, there is IoT, where billions and billions of tiny chips are sold every year into a wide range of applications, from tiny sensors and actuators, to robotics and logistics applications, to industrial and citywide monitoring and control systems. All of these products will soon be utilizing AI to improve their capability and usability, eventually evolving into entirely new product categories. Arm intends for our CPU technology to be the brain in every one of these applications, and this requires increased investment today, which we expect will yield benefits for years and decades to come. Most of the news stories around AI have focused on AI in the data center, such as OpenAI's ChatGPT running on NVIDIA GPUs or Google's Gemini running on their own custom ASICs.

However, every GPU chip and every accelerator chip needs a CPU running alongside it. The GPU is great at mathematics and doing the complex matrix multiplications that are needed for training, but the CPU is needed to control the GPU. The CPU runs the operating system and the applications, while the GPU speeds up the mathematical tasks. Increasingly, the CPU chip being used is based on Arm technology. Within the data center, most of the focus has been on training new models, but increasingly we are seeing inference taking over as enterprises and consumers start to use AI in their everyday work. Unlike training, inference requires more work from the CPU. Whereas the GPU is great for mathematics, the CPU is great for decision making. This makes the CPU more important in inference applications.

At the edge, AI workloads rely mainly on inference, and so here the CPU plays the most important role. We think that AI will transform many of the devices that we use every day. Flagship smartphones are already coming with AI applications built in. Eventually, all smartphones will be AI enabled, and in the future, smartphones may no longer need screens. Instead, they could be controlled only by gestures and by voice. All cars will be AI enabled and eventually the car may become fully autonomous. All manufacturing and logistics equipment will use AI vision to navigate the production line and the warehouse. Industrial robotics combined with household control systems might one day lead to a humanoid robot in every home.

As I mentioned previously, the only purpose of a CPU is to run software, and I also said that Arm works with leading technology companies to determine the future software programs that will be run. This enables us to understand what new capabilities will be needed in our future CPUs. Along the top of this slide, I have listed when each new version of the Arm architecture was announced. As you can see, we have a major new update every two years. Along the bottom of the slide I have listed the Arm CPU products that are based on these architecture versions. As it takes two to three years to develop a new CPU, there is a two to three year gap between the year the architecture was announced and the year the first CPUs are available.

This period of time is also very important for the software companies in our ecosystem as it gives them the time to add in extra functionality into their software products that will be running on a new architecture. Over the past decade, Arm has been investing in three main areas which are security, accelerating AI and delivering higher performance for our GPUs. Security is becoming increasingly important as more of our daily lives depend on the smart computers we use all the time. Your smartphone can now make credit card payments and access your bank account. Your home security system can unlock your front door and control your security cameras. Your car will transport you and your family to work and school and then home again. We need to make it difficult for criminals or enemy governments to breach these computer controlled devices.

With AI going everywhere, of course we need to add in acceleration for the most common type of AI and machine learning algorithms. Higher performance is good for whatever software you want to run. It is these new features and capabilities that allow Arm to charge a higher royalty rate as we go from generation to generation of the architecture. We are not just investing in AI, we are also investing in our compute platforms. These compute platforms are the fastest way for companies to develop a new Arm based chip using the latest manufacturing technology and getting the best performance and power outcome. To achieve this, we have preassembled our latest CPUs and system technologies into a single reference design. This halves the amount of work needed to be done for a new chip design.

Customers can take our reference design, add in the high speed interfaces that they want, and manufacture a general purpose chip. Or they can add in their own custom accelerators to create a chip for a specific application. Our Neoverse Compute Subsystems are designed for data center server applications and AI accelerators. They can also be used for high-end networking and communications equipment. Our Compute Subsystems can be used in different configurations. They can be used standalone: by itself, a subsystem is a high performance computing machine suitable for cloud computing. Indeed, this is what Microsoft has done with their first data center chip, Cobalt, as has the French supercomputer company SiPearl. However, you can also add in an accelerator for AI applications, or for networking or security applications.

Reusing the same Compute Subsystems across multiple end markets means that we can license the same subsystem to many different companies. Our first Neoverse Compute Subsystem has now been licensed to six companies for applications including a supercomputer, cloud servers and high-end networking equipment. As I mentioned earlier, Arm also invests in the ecosystem around our products. In the case of our Compute Subsystems, we have created a partnership program which now has more than 30 companies whose own products and services have been optimized to work alongside our Compute Subsystems, making it even easier for companies to build their own chips. Another new technology area that Arm is investing in is chiplets. As systems on chips have become larger and more complex, it is sometimes easier to break them down into smaller chiplets that can then be integrated together into a single package.

This is what NVIDIA have done with their Arm-based Grace Blackwell solution. We have worked with some of our Arm Total Design partners to create multiple different chiplet test implementations for different end markets. For example, Arm has worked with ADTechnology and Samsung to create an AI accelerator chiplet suitable for the data center. Also, we have worked with Socionext and TSMC to create a chiplet for a supercomputer. In addition, Arm and about 60 of our ecosystem partners and customers have created a new standard called the Chiplet System Architecture, which defines how these chiplets should communicate with each other and will help make chiplets reusable across multiple different chip designs. Now let's examine the financial impact of these investments.

As you saw earlier, even when a new architecture is announced, it can take two to three years to develop a new CPU based on that architecture. We can start signing licenses even while the CPU is being developed, but we cannot recognize revenues until we can deliver that technology to the customer. It then takes our customers another two to three years to develop a chip, and maybe another year for that chip to ramp into meaningful royalty revenue. Once ramping, it can deliver royalty revenue for many years. Investing more today, in more technology and in more advanced technology, can yield more license revenues in a few years' time and much more royalty revenue further into the future.

Looking back over the past few years, you can see that along with strong revenue growth, we have already been growing our investments in R&D. Both revenue and headcount have grown at a 17% CAGR over the past two years, and now more than 80% of our people are engineers developing the next generation of technology. Going forward, we expect headcount to continue to grow at this pace and the proportion of engineers to continue to increase. Rene has decided that non-engineering headcount will be flat going forwards; instead, we need to use agentic AI to make SG&A functions more productive and more efficient. Regarding profitability, at any time we can slow hiring and allow the revenue growth to drop through to profit growth, which would bring short-term benefits for all shareholders. However, now is not the time to be focusing on profitability.

Now is the time to be building the technology for the future. We believe this is what SoftBank and our other major shareholders want us to do, and this is where our greatest opportunity currently lies. Arm's revenue will grow as chips become more intelligent, which drives complexity, and as the increasing demand for more chips in more devices results in more Arm-based chips. As more AI gets deployed into more devices, this will drive more demand for Armv9, which can accelerate these workloads. As chips become more complex, their development costs increase; this will lead more companies to use Arm's Compute Subsystems, and this will result in higher royalty rates. The demand for semiconductors is growing steadily. Analysts estimate that the semiconductor industry will grow at an 8% CAGR from 2024 onwards.

Within the industry, Arm is gaining share in long term growth markets such as automotive, cloud, and IoT. Arm's customers shipped double the number of chips in FYE 2024 as in FYE 2020. Combining together more chips, more valuable chips, higher royalty rates, and share gains will help to grow Arm's revenues for years and decades to come. With that, we can turn to your questions.

Moderator

[Foreign language]

Thank you very much. The floor is now open for questions, in Japanese or in English, until 11:30 Japan time. We would like to entertain as many questions as possible, so please limit yourself to two questions, and identify yourself and your affiliation first. Certified analysts will ask questions first; after that, we will take questions from participants online. Online participants, please push the raise-hand button, which you can find in the reaction box. In the room, a microphone will be brought to you. If you have a question, please raise your hand. Thank you. The gentleman in the first row; the microphone will be brought to you.

[Foreign language]

May I ask a question in Japanese? I have two questions. Thank you, Ian.

First question: going forward, how do you see license revenue on a quarterly basis? I understand that there are a lot of fluctuations. In the longer run, different kinds of architectures are expected to come out, and that will link into royalty revenue growth in two or three years. So not only on a quarterly basis, but over the next two or three years, how do you see license revenue growing? That is the first question.

Second question: the royalty revenue guidance is for mid-20% growth going forward. Within the cloud business, the CPU share might grow, but in the AI data center, what is the split between GPU and CPU? I think the CPU percentage may be smaller. How are you going to drive growth in this area? In the latter half of this year, NVIDIA's Blackwell computers are going to come into the market. What would be your expectation for royalty and license revenue growth in this area next year? Those are my two questions.

Ian Thornton
VP of IR, Arm

Thank you for your questions. As we are still in Q4 of FYE 2025, only a couple of weeks away from the end of the quarter, and with our results coming up in May, I will have to defer guidance for the coming year to the information we provide in May. I'm happy to answer your question in more general terms, in terms of the drivers of our licensing and royalty revenues. You make a very good point about our license revenues being quite variable quarter to quarter. The revenue recognition, particularly around some of the large subscription deals that we sign, means that we can end up recognizing 50%-60% of the revenue associated with a multi-year deal all on signature. That can mean that the license revenues can be lumpy.

To help with that, we do provide an annualized contract value number, which I would always recommend you look at first, because it basically takes the revenue recognition and spreads it out as if the revenue recognition were ratable. Imagine a five-year, $20 million a year license: we recognize $50 million on signature, and then the remaining $50 million is spread over the following 20 quarters, so $2.5 million per quarter. A big spike, and then very little for the remainder. Whereas with ACV, annualized contract value, we spread that out evenly: $20 million, $20 million, each year. In any quarter where you see the license revenue has gone up strongly, please go and take a look at ACV; you'll probably find it has ticked up only a little bit.
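The lumpy-versus-ratable pattern described here can be sketched numerically. This is a hypothetical deal using the same figures as the example above; real contracts will differ:

```python
# Hypothetical five-year, $20M-per-year subscription license, with ~50% of
# total contract value recognized on signature and the rest spread ratably
# over the following 20 quarters, as in the example above.
total_contract_value = 5 * 20_000_000          # $100M over five years
upfront = 0.5 * total_contract_value           # $50M recognized on signature
quarters = 20
ratable = (total_contract_value - upfront) / quarters   # $2.5M per quarter

# Recognized revenue per quarter: a big spike, then very little.
recognized = [upfront] + [ratable] * quarters

# ACV smooths the same deal to $20M per year ($5M per quarter),
# regardless of when the revenue is recognized.
acv_per_quarter = total_contract_value / quarters

print(f"Signature quarter recognized: ${recognized[0] / 1e6:.1f}M "
      f"(ACV: ${acv_per_quarter / 1e6:.1f}M)")
print(f"Later quarters recognized: ${recognized[1] / 1e6:.1f}M "
      f"(ACV: ${acv_per_quarter / 1e6:.1f}M)")
```

Either way, the same $100 million flows through; ACV just removes the timing spike so quarter-on-quarter trends are easier to read.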

Equally, in any quarter where the license revenue is down strongly, I would suggest you look at ACV; you'll probably find it's still up a little bit. I thoroughly recommend you take a look at ACV, in terms of what the drivers of our license revenue have been in recent times and what this will mean going forward. In the past, and particularly when I think back to the IPO, we have seen much, much stronger licensing than we had anticipated. In fact, if you go back to the IPO, it looks like we were guiding the market to about $1.3 billion, and it looks like we will now end up closer to $1.8 billion. Much, much higher. Almost all of that is due to excitement around AI.

If you are starting a new chip design today, regardless of whether that chip design is going into a smartphone, a data center, a smart TV, a car, even a washing machine, you know that that chip is going to have to run some form of AI algorithm. These are quite large, quite computationally heavy algorithms, but they're also changing very rapidly. It takes two to three years to build a chip, but the algorithms are changing quite fundamentally every 12 months. In that two to three year period, the model is likely to change once or twice. Anticipating what the performance requirements of that chip are going to be is very, very hard.

What we have seen in the past year is that companies have been buying our most advanced technology, and buying higher performance technology than we had anticipated, so that they can future-proof themselves against whatever the future design is going to be. Really, they are over-speccing, at least relative to what we had anticipated, so that they can run that algorithm when their chip comes to market in two to three years' time. That has been driving our license revenues in the near term. Looking forwards, do I expect that to continue? I cannot see it slowing down at the moment. I cannot see the demand side slowing down, because all the time we see demand for more high performance compute to run the next models.

I think, at least for a while, we will see the demand continue. How that translates into revenue will depend on when the lumps occur. As I say, I still recommend that you look at ACV every quarter. On the royalty side of things, one of the downsides, if you like, of now having a 50% market share is that we are exposed to all of the winds of change within the semiconductor industry. There are few parts of the market that can have an inventory correction that will not also impact Arm. We do well in the parts of the industry that are booming, but if there is any part of the industry that is slowing down, then we get impacted by that as well. There is nowhere to hide anymore.

In the past year we saw a slowdown in the sale of chips going into networking equipment, particularly in things like wireless communications, as there was a pause in the rollout of some 5G networks. Those were all Arm-based chips, so there was an impact there. We've also seen an inventory correction in industrial IoT. During the pandemic, many companies struggled to buy chips, and so their supply chains were very tight. Post-pandemic, they moved from being just-in-time to being just-in-case, and they allowed their inventory levels to become much higher.

Now many companies believe that they have more resilient supply chains, with more diversity in their supply chains, and so they're moving back to a just-in-time approach to inventory management. They are therefore winding down the amount of inventory that they have, and therefore buying fewer chips. Again, it's not a demand issue; it's just an inventory management issue that we hope will come to an end next year. In normal times we would therefore expect next year to be a year of strong royalty revenue growth: we had two weak parts of the market this year, and hopefully there will be no weak parts of the market next year. Now, maybe we're looking at an uncertain future due to geopolitics and increased tariffs, and could that mean some form of recession?

If that means that there are fewer cars being sold and fewer smartphones being sold, then we would not be able to dodge that. But I think there are a lot of long term growth drivers within the semiconductor industry that we'll benefit from. As I mentioned earlier, the analysts think that over the next five years this is a market that will grow just under 10% per year. We're gaining share, and we have higher royalty rates. If there is a recession or depression, then maybe the growth is less strong. We shall have to find out when we get to next year.

Moderator

[Foreign language]

The next question, the gentleman in the front row over here.

David Gibson
Senior Research Analyst, MST

Thank you. David Gibson, MST. Apologies, two questions.

First, how important is Stargate to Arm? I would have thought you'd get the CPU design anyway in the cloud data centers. That's the first question.

Second, could you talk about the designs and how competitive your CPU and GPU are, what your customers say needs to improve the most looking across those three primary products, and hence where you are investing for the future to improve their performance. Thank you.

Ian Thornton
VP of IR, Arm

Thank you, David. On Stargate: first of all, we are not contributing to the funding of Stargate. We are just going to be a beneficiary of the chips being deployed within Stargate. There are many decisions that have not yet been taken in terms of exactly what is going to go into Stargate, and I have no visibility at the moment as to the exact mix of technologies that is going to be included. Arm is the CPU of choice, and we do expect there to be a lot of Arm CPUs in there. NVIDIA are the technology partner of choice.

My default assumption is therefore that there'll be a lot of Grace Blackwell chips with the Grace part being Arm based. Exactly how many Grace Blackwells and exactly what the mix of those is going to be, it's hard to tell, but it is a $100 billion investment in the first data center that does imply a lot of chips. Therefore, hopefully a lot of Arm based chips, but exactly how many is not yet clear. OpenAI is going to be responsible for the management of the data center. It will be down to them as to what the mix of technologies are used. We'll have to wait for them to make that determination. Not yet sure, but it certainly sounds very exciting. Sounds like it should be good.

Exactly how many dollars that results in in terms of royalty revenues, I'm not sure yet. In terms of the technology in the data center, where we're gaining share predominantly is with the cloud service providers developing their own CPU chips for use in their own data centers: companies like Amazon developing their own Graviton chip, Microsoft developing their own Cobalt chip, and Google developing their own Axion chip. The benefit for them in developing their own chip, rather than buying an off-the-shelf chip, is that they can optimize for the software that needs to be run, the workloads that they or their customers are using in the data center, and they can optimize it with everything else in the data center.

For many years now, the big cloud companies have been building their own data centers and customizing the equipment that goes into those data centers. By designing their own chips, they can also design their own blades and their own server systems. Now they can optimize the software to run on their chips, to run in their racks, to run in their servers, to run in their data centers. The whole thing is optimized top to bottom, left to right. All of our major customers in this space are saying that by doing this they can achieve somewhere between a 40%-60% reduction in power by designing their own Arm-based chips rather than using a traditional chip from, say, Intel or AMD. This is not any magic due to the Arm processor.

Obviously we are a low power design, so we do contribute to that. Really, it's what you get when you optimize a chip to run a particular set of workloads. By matching the chip to the workloads, you end up with something that runs much, much more efficiently. That is, I think, one of the key areas in terms of how we've gained share so far. Amazon announced at their re:Invent conference in November last year that for the past two years, more than 50% of the new chips that they have deployed within AWS have been their own Arm-based Graviton design. Microsoft and Google both hit general availability in October last year, so they are just beginning to ramp now. We see no reason why in five years' time they could not also be at 50% or higher penetration.

The reason for that is that when Amazon first started deploying Graviton in about 2020, not all of the software needed to run in the cloud was available on Arm-based chips. Amazon initially made Graviton available for free to software developers. That's no longer needed: all the software is now ported, so there's no reason for software not to run on Arm. Also, Amazon was targeting Graviton at their third-party customers, their AWS customers, and so they could only move as fast as their customers wanted to move. Their customers had to port and test their software. Amazon made the decision to charge their customers 40% less for running on an Arm-based chip than they did for an x86-based chip.

That is because Amazon was getting the benefit of the lower power consumption, so they were able to pass that on to their customers. That was very encouraging. Our understanding is that Microsoft and Google are also planning to run some internal workloads on these chips, as well as using them for Azure and GCP. The next time you have a conference call using Microsoft Teams, it might be an Arm CPU that is running Teams for you. The next time you watch a YouTube video, it may well be running on an Arm-based chip, because those are the applications that Google and Microsoft are targeting first.

More broadly, if I look across the top 10 largest hyperscalers in the world, eight of them are now deploying Arm-based chips, and the other two have their first Arm-based chips in development. Maybe towards the end of this calendar year or early next calendar year, we would expect that to be 10 out of 10. Hopefully next time I'm here, it will be 10 out of 10. In terms of what they're asking from us, as I mentioned in my presentation, the key purpose of a CPU is to run software. The conversations that we're mainly having around next CPU designs are very much focused on what software they need to run.

We are, I think, very fortunate in the relationships that we have with many of the companies that are writing the base algorithms for AI. Companies like OpenAI, companies like Meta: we are working very closely with them on the evolution of the software they are creating. The dream is that you have a new CPU going into a new chip that becomes available just at the same time as the new software algorithms become available. That has not been happening in recent years because the models are changing so fast. The better a job we can do of talking to the companies developing these algorithms, the better a job we can hopefully do of making sure that the hardware technology comes to market just at the time the software does.

That is definitely one of the key benefits, I think, of things like Project Stargate, because it enables us to get more of an internal view of OpenAI's plans than we otherwise would have if they were just another third-party software company.

David Gibson
Senior Research Analyst, MST

Just to follow up, are there any requirements from your customers, on the GPU and NPU side of things in particular, where you think you're behind versus what the customers in the market think?

Ian Thornton
VP of IR, Arm

At the moment we only have CPUs to offer, so, for the data center, we don't have a product there and that is a bit of a moot question. Certainly the focus is on CPU. Yes, they would love us to be able to build our CPUs faster and have them come to market faster. That's something that we are obviously always trying to do, but at the same time we need to do it in conjunction with the software companies, so we can't move faster than they can. On the NPU side, maybe it's just worth touching on: we do have NPUs, neural net accelerators, for embedded devices.

These are very much targeting not the data center, but rather robotics, security cameras, and other devices. In fact, one of the applications that we've been working on with a company is how to put not a large language model, but a small language model, even into a simple device like a washing machine. You may think, why do I want a small language model in a washing machine? The intention is to make the washing machine a subject-matter expert on itself. Basically, the manual for the washing machine gets embedded within the device itself.

Then, rather than just putting your clothes in and fiddling with the dial, you can actually tell it what the clothes are and what you need doing, and it will have enough of an understanding to be able to program itself for that particular wash. Now, it will not be able to tell you what the weather is going to be. It will not write you a poem. It is just going to be a subject-matter expert on itself. But we are certainly looking at how we can take some of these AI technologies and create new product categories, things that did not exist before. That is something that we are trying to improve and enable with our embedded NPU.

Moderator

[Foreign language]

The gentleman in the second row from the front.

Kirk Boodry
Analyst, Astris Advisory

Hi. Kirk Boodry from Astris Advisory. I have a question, almost a follow-up to David's question. When we look at all the research and development that Arm has done since SoftBank first invested, one area that stands out is not having a chip design for AI accelerators in the data center. I was wondering if you could talk a little bit about that: why you didn't do it, and whether there's a possibility you do it going forward?

Ian Thornton
VP of IR, Arm

Okay. On the chip part, Arm is an IP provider, so you wouldn't expect us to be selling chips. If I can re-ask the question in a slightly different way: why haven't we developed an AI accelerator as an IP deliverable to license to NVIDIA and others? It's not a technology problem, it's a market demand problem. If you look across companies such as Google and what they're doing with their TPU, what Amazon is doing with Trainium and Inferentia, what NVIDIA is doing, everyone is doing something slightly different. They are optimizing; they have strong views about how they want their algorithms to work and run. They are each creating something that is highly differentiated from the others. Arm is an IP company.

We do best when we are licensing the same design to everybody, or at least to multiple companies. That way we can benefit from designing something once and licensing that same design to three, four, five companies. The goal is always, or as often as possible anyway, to cover the cost of the development in the license fees. That means that the royalties, when they turn up, are profit. The problem is that if everyone is trying to build something different, then we do not have one thing that we can license to everybody. What I might anticipate, though, and certainly we have seen this elsewhere in the history of the semiconductor industry, is that right now everyone is focusing on frontier models.

They're trying to push the boundary of what can be done and they're doing this by throwing lots of technology at the wall to see what makes progress, what doesn't. That is leaving a lot of space behind for efficiencies. We've seen this a little bit with DeepSeek as an algorithm, which focused less on being the frontier model, but more on making it an efficient version of an existing model. That was able to demonstrate, if you believe the claims, a 10x improvement by optimizing what was one of those frontier models.

What we typically find is that once the problem we are trying to solve is figured out and the algorithms start to settle down, we can create more optimized solutions that can be licensed to many companies. If I may give an example to illustrate: imagine that we have solved how to do natural language processing. We have figured out how to have a computer understand language and create a natural-language response. That could be to understand one person, my voice, meaning we may have a digital assistant in the smartphone; or it could be anyone's voice, meaning we can maybe have a call center in the cloud.

Now, once we've figured out what algorithms are needed, the R&D investment will move on. It will go into self-driving cars, it will go into AGI. There can be more focus on how to make that algorithm more efficient so that it can be scaled out and lots of companies can have a cloud-based autonomous call center. With the algorithms becoming more stable, you could then start to build chips, CPUs or GPUs or accelerators, that are optimized for that particular algorithm. In the same way we saw DeepSeek making a 10x improvement through software optimizations, you can get similar levels of improvement through hardware optimizations as well.

That is the time when you might see an IP company come along saying: rather than you all building your own accelerator for your own chips, why don't we design it once and license it to everybody? Because you are all trying to solve the same problem; you are all trying to do the same thing. If I can give an example where this has happened quite recently in the space of AI: five or seven years ago, a lot of the focus of AI was on image classification. If you remember, there were lots of photographs of Chihuahuas and blueberry muffins. Can you tell the difference between the Chihuahua's face and the blueberry muffin? That was state of the art within AI, being run on big data centers in the cloud.

You cannot buy a security camera for the home today that does not have image classification, object recognition, face detection, and friendly-face recognition. A lot of the technologies that were being run in $5 billion data centers are now being run in $50 cameras. Similarly, five years ago there was a lot of focus on voice recognition, and now you can have text-to-speech and live translation running as algorithms in your smartphone: a $5 billion data center going into a $500 smartphone. I think that much of what we see as frontier models today will, in a few years' time, end up in much, much lower-cost goods, including, as I mentioned earlier, maybe even in your washing machine.

I think the answer to your question, Kirk, is not so much "why don't you?" as "when does that make sense?" I think it will make sense at some point. I can't necessarily anticipate when that will be, but we're already starting to see some of the algorithms that were developed for the data center coming into consumer electronic devices. I think that's going to be a continuous flow. Meanwhile, the models move on to the next problem and the next problem.

Kirk Boodry
Analyst, Astris Advisory

Thanks.

Moderator

[Foreign language]

Thank you. Right now we have no one with their hands up at the online participants. Anyone participating online, please use the raise hand function, which is in the reaction button group. We will continue with receiving questions from people in the venue. First, in the second row from the front.

Kenji Yasui
Co-Head of Research, UBS Securities

[Foreign language]

Thank you. My name is Yasui from UBS Securities. I have two questions. The first question is about CCS and the custom ASIC companies. I would assume that you will be competing against them, competing for added value. Are they partners? Or will Arm CCS try to capture the custom ASIC market from them? I wanted to understand the dynamics of that part. That is my first question.

Ian Thornton
VP of IR, Arm

They are customers, and still partners. The reason for developing the Compute Subsystems is that our customer base has been changing. Historically, Arm would license its technology to semiconductor companies, to companies whose primary products are computer chips. Over time, though, that has been changing. We have been finding that their customers now want to build their own chips and are coming directly to Arm to license CPUs from us. In some cases, those companies will still engage an ASIC company, but they want to have the relationship directly with Arm. The reason is that software is increasingly becoming the product that you are selling. If you're selling a smartphone, you're really selling a box that runs software. The smartphone without software is a black box; with software, you actually have something that does something.

Even cars are increasingly being sold on software, either the software in the cockpit or the software in the self-driving capability. If software is becoming the product that I am selling, then it is very important for me, the OEM, to control how that software runs. As that software runs on the chip, I therefore need to control that chip. You may choose to acquire or develop the semiconductor design capability in-house, or you might decide to use an ASIC company. Even if you choose an ASIC company, you will still want that relationship directly with Arm, because it is the Arm CPU running your software that is so important to your product sales. Over time we have started to see more of our customers' customers become our customers: more OEMs than we saw previously.

Those companies want a different deliverable; they want a more advanced deliverable than a semiconductor company does. Some companies are licensing our Compute Subsystems because that is an assembly of Arm's CPUs and interconnect: a better starting point for their chip design. Again, they can still go and work with an ASIC company to do the rest of the chip design. Many companies like Broadcom and Marvell have particular expertise around high-speed interfaces and the back-end part of the manufacturing process. Those are very hard things to do. The design of the chip itself, increasingly, the OEMs want to own themselves. It's not that we are competing with the ASIC companies; we're not competing with Marvell. Maybe we are providing more technology to the OEMs than had historically been the case.

This is a demand pull from their customers. This is because of the importance of software going into an OEM's products. We're responding to their request.

Kenji Yasui
Co-Head of Research, UBS Securities

[Foreign language]

My second question is that AI infrastructure is now starting to split into a front end and a back end. The first part is the CPU and the in-rack network linking to it. The second is the GPU, the back-end network, high-speed Ethernet, and the software there, where NVIDIA has essentially created a monopoly software stack. On the CPU side, I understand Arm is very strong. But on the back-end side, can anyone, including Arm, actually compete against NVIDIA and break into that monopoly? In one sense it may be very difficult right now, but from your perspective as Arm, has anyone tried to break into that back-end space, or has everyone given up? Which do you think is the case?

Ian Thornton
VP of IR, Arm

Yes, thank you. We are indeed very strong on the CPU side. On the alternatives to NVIDIA, I think the most credible alternatives right now are what the cloud service providers are developing for themselves: Google's TPU, Amazon's Inferentia and Trainium chips. These are being developed by the cloud service providers because they have a better understanding of what problems need to be solved within their data centers compared to a general-purpose accelerator such as the NVIDIA GPU.

I recall seeing an article by Meta claiming a fivefold improvement, running the algorithms they wanted to run, compared to an NVIDIA GPU. Whereas the GPU is a general-purpose accelerator, they built something to run a specific algorithm, and by building something for a very narrow use case, you can optimize the chip to run that particular piece of software. In terms of anyone being a third-party chip provider competing with NVIDIA, I think there is space for that, and probably demand for that, but I don't currently see any company that is managing to achieve it. To your point, NVIDIA is becoming not just a chip company but a solutions company, providing a lot of the software as well.

Many companies that have tried to compete with NVIDIA have, I think, lost because of CUDA and the ecosystem of software developers who have written software to support CUDA. Any competitor to NVIDIA would need an effective competitor to CUDA as well as to the GPU.

Kenji Yasui
Co-Head of Research, UBS Securities

[Foreign language]

On the back-end side, the AI accelerator and the network: can Arm win market share there in the future? In an area that isn't CPU, is there a possibility for you to win market share?

Ian Thornton
VP of IR, Arm

There is space, but right now we don't have anything to offer.

Kenji Yasui
Co-Head of Research, UBS Securities

[Foreign language]

Thank you.

Moderator

[Foreign language]

In the third row, the gentleman in the back. We are also expecting questions from the online participants; if you have a question online, please push the raise-hand button.

[Foreign language]

May I? My name is [Navero] from BofA Securities. I would like to ask a question on the royalty ratio. In the short term, for the past three quarters, the Armv9 royalty share has remained at a level of 25%, if I remember correctly, and I think it should rise to 60%-70% in the medium term. Looking at the most recent situation, what is the reason behind this performance, and what are your countermeasures to improve the royalty share?

Ian Thornton
VP of IR, Arm

The proportion of chips that are Armv9 versus Armv8, Armv7, and Armv6 is entirely determined by consumers going into shops and buying things. It is not something that we have control over; we certainly cannot influence it other than by all going out and buying more smart TVs and smartphones. It is just the mathematical outcome of people going into shops and buying phones and other devices. What we have seen most recently is that Armv9 has more than doubled on a year-on-year basis, but has grown in line with Armv8 on a sequential basis. To a certain extent we have been slightly surprised by the growth of Armv8; had Armv8 not grown, Armv9 would have a higher proportion. Armv8 has been growing partly because we have seen a recovery in industrial IoT chip sales.

I mentioned earlier that there had been a decline in industrial IoT earlier in the year; we had a bit of a pop back up in Q3, and most of that was Armv8. Armv6 is very old technology, and its decline also helps to balance things out. You also have to bear in mind that Armv9 is at the moment only in high-end smartphones and in data center chips; everything else is still Armv8 and older, so there is plenty of room for Armv9 to grow into. We feel very confident that in time 60%-70% of our royalty revenues will come from Armv9. Actually, in the last quarter we have been very pleased to see that Armv9 has grown, but so has everything else. It all pays royalties. I don't really mind that much where the royalty comes from.

Moderator

[Foreign language]

Thank you. Next question is also from the venue. The person in the second row from the front and towards the window.

Satoru Kikuchi
Senior Analyst, SMBC Nikko Securities

[Foreign language]

My name is Kikuchi from SMBC Nikko Securities. I have two questions. Thank you, Ian, for your presentation. I think it is on page 13 or page 18 of the presentation material: over the last two years, revenue growth and employee growth have both been 17%, the same rate. Will this balance change over the next two years? Based on the plan at the IPO, I think revenue growth was supposed to accelerate more. When will we start to see the timing where growth in revenue exceeds growth in headcount?

Ian Thornton
VP of IR, Arm

Actually, our revenue growth is higher than we had said at the IPO; we are exceeding our IPO targets. At the IPO, we also indicated that we thought our non-GAAP operating margin would improve by 1-2 percentage points per year, and it did. We did not mention headcount growth, but the intention was always to continue to grow investment in R&D and therefore to continue to grow our engineering headcount. I think we are very much on track. I think it is page 18. That is 19. Next page, previous page. There you go. Thank you. Looking forward, as I mentioned, I am not going to give revenue forecasts today, but we certainly intend to keep investing by hiring new engineers. I think right now there is a lot of opportunity.

As we see more algorithms coming from the data center to the edge, we need to make sure they run on Arm technology. We also now have our partnership with OpenAI, through Stargate and through Cristal intelligence. We absolutely must take this opportunity to make sure that we are developing the right technology, CPU or accelerator, so that we can maximize the opportunity in the data center as well as outside it. We have every intention of continuing to grow our headcount on the same sort of trajectory: we added 1,000 engineers last year, and we will add another 1,000 engineers next year at a minimum. The one difference you might see is that we may start doing some of that hiring through acquisitions. To date, we have done all of our hiring organically.

Historically we have a history of buying small semiconductor companies. We do not want their technology, so we are not buying them for their products; we are buying them for their engineering teams. These can be 200-300-person companies. It is great, because that is like one whole CPU team. It comes with graduates, senior managers, project managers, program managers. Often it comes with a building. We can just go in there, take away the stationery, give them all Arm business cards, and away they go as an Arm design team. That is much, much faster than hiring people one at a time. That is something I think you can expect to see. Right now we have no plans to slow down hiring; this is not the time to be focusing on profitability.

Even if revenue ended up being lower than we originally anticipated, I think we would continue to hire through that, because the long-term opportunity is so great.

Satoru Kikuchi
Senior Analyst, SMBC Nikko Securities

[Foreign language]

Thank you.

The second question is on dividend policy. There has been no change, I believe. What is your dividend payout policy, or dividend policy, going forward?

Ian Thornton
VP of IR, Arm

Yes. To date we do not pay a dividend, but we do have plenty of cash: about $6.5 billion of cash and cash equivalents, and we are probably going to be increasing that by approximately $1 billion per year. Right now we have nothing to spend it on. We have only recently done an IPO, so doing a big buyback is probably not a good idea; we already get many complaints about the lack of liquidity, the lack of float, and reducing it further is not a desired outcome. Other than some small acquisitions, which will not absorb our cash, that leaves the dividend. Today we do not pay one, and I think SoftBank, as our much larger shareholder, would have to desire a dividend.

We are very interested in getting feedback from other investors and also SoftBank's investors as well to see whether a dividend is required. We are very much open to a dividend if there was sufficient demand for one.

Moderator

[Foreign language]

Thank you. Yes, the person in the third row towards the wall and after that the person at the front row on this side and in that order.

[Foreign language]

From [Adair Securities], I have two questions. The first question is a follow-up to Yasui-san's question just now. Which business layer are you going to focus on going forward? Not just chip design itself, but the layer next to chip design. You could be a partner or a competitor there. Which direction will this relationship go in the future: will partnership become more prevalent, or, because of your increased added value, could it become more of a competitive relationship? What is your forecast for your relationship with customers, or for the business layer, based on your strategy going forward? That is my first question.

Ian Thornton
VP of IR, Arm

Thank you. Yes. I would like to think that the relationship is going to remain very much one of partnership. However, as I mentioned previously, our customer base is expanding, and we are seeing more companies who traditionally bought chips wanting to take control of the chips that go into their products. They want Arm to provide the technology directly to them, and they want Arm to provide more technology than we have historically licensed to semiconductor companies. Now, maybe some of our customers would rather we did not provide technology directly to the OEMs. I think it is up to them to demonstrate that they can add more value, that they can build a chip for Amazon better than Amazon can build its own chip; if they cannot, then it is only appropriate that Amazon have the choice of developing the technology themselves.

I would not say that we are competing with the ASIC companies; rather, we are enabling their customers to become more independent if they choose to do so. I think we are providing more choice rather than competing with our customers.

[Foreign language]

The second question is about Cristal intelligence. The relationship between Arm and OpenAI is going to be developed through the Cristal intelligence project, and Arm-based chips are going to be included; is that the way Arm is going to be involved in Cristal intelligence? SoftBank has said it is going to make use of it for its own business activities. When Cristal is established, is it going to be used by Arm itself? To what extent is Arm going to get involved, and what is your expectation for this Cristal intelligence project?

Ian Thornton
VP of IR, Arm

I think there are two very exciting parts of Cristal intelligence. At one level, I see Cristal intelligence being a little bit like Project Stargate: it's an opportunity for Arm-based chips to be deployed within a large, high-value data center. Hopefully lots of opportunity, lots of chips, lots of opportunity for Arm-based chips. In addition, as part of this we will get access to the OpenAI agents, and we will be able to use those internally to improve our own business processes and potentially our own product development processes as well. As I mentioned earlier, we have been told by Rene that the headcount for SG&A is effectively flat, so as the engineering side expands, we need to support a larger engineering team with no more people.

We need to focus on automation, AI, and agentic AI in order to be able to scale up to support a much, much larger engineering team over time. On the engineering side, we have experiments going on to see how AI could be useful in our product development. We have been using Microsoft's GitHub Copilot as part of our software engineering activity for some time. The reports I've heard are that it has given roughly a 10% productivity improvement, although it is quite patchy between different people; some people get more, some people get less. But that is over 2,000 people, so 10% is another 200 people's worth of work we've been able to achieve through that.

As it is quite patchy, I suspect that as we get used to using the technology, there is a lot more room for productivity gains. To date we have not rolled out AI on the CPU design side of things, but I am sure that is something we will be looking to do in the future, and hopefully that will have a similar benefit in improving CPU design going forward.

Moderator

[Foreign language]

Last question from the venue, and we have someone online with their hand up as well. That person online will be the last person after we take questions from this venue.

Oliver Matthew
Head of Institutional Equities, CLSA

Hello, Oliver Matthew from CLSA. Thank you for your presentation. I actually only have one question, around the longer term. If we are on the journey to AGI and ASI, that suggests an exponential pickup. What do you see your segment mix looking like when we get to those two groundbreaking moments?

Ian Thornton
VP of IR, Arm

If we get to ASI, who knows, maybe we'll all be communicating by telepathy and all have chips embedded in our brains. I have no idea. I always find it very interesting to hear Masa speak about the opportunity for AGI or ASI in terms of all vehicles being autonomous and there never being an accident again. The amount of compute that would require in each vehicle will be much, much greater than anything we've seen so far. You need to have multiple cameras, LiDAR, radar, vehicle-to-vehicle communications, vehicle-to-infrastructure communications, and communications to the cloud, as well as a smart brain in the center. We are working at the moment with many of the car companies to make that happen.

We are working with about 120 of the world's largest car companies in a consortium to set up the software requirements and software APIs to enable a software-defined vehicle. That work is ongoing, but it is probably many years away from becoming a reality. And if you start thinking about what that level of automation could do in factories, in the home, in restaurants, for preparing food: all of that may be powered by a super brain that sits in the cloud, but to get things done physically, you are going to have to have a lot of local sensors, cameras, controllers, and actuators to move the robot's limbs around and its fingers and joints and things like that.

I think all of these markets are going to become much, much larger if we get to a world of AGI and ASI, and Arm already has a very high share of the controller chips, the brain chips, and the camera chips that go into all of these devices. I think this looks very, very positive for us if we end up in a world like that.

Moderator

[Foreign language]

Thank you. The last question is from a participant online: Citi Securities, Mr. Tsuruo. This is the very last question. Mr. Tsuruo, please go ahead after unmuting.

Mitsunobu Tsuruo
Director of Equity Analyst, Citi Securities

[Foreign language]

Thank you very much. There are many questions I could ask, so I will limit myself to two. First, is Oracle in Stargate an investor or a business partner, given Oracle's AI data center business? What is Arm's relationship with Oracle? You mentioned 50% for AWS; what about Oracle? Is Oracle going to become an important partner, and how is the share going to change? Also, regarding the initial Stargate investment in Texas, what is the status of that investment?

Ian Thornton
VP of IR, Arm

Thank you for your question. Unfortunately, regarding Project Stargate and investments, we are not involved at all in investing in Stargate, so I have no answer for you; the SoftBank IR team might be better placed to answer that question. Regarding our relationship with Oracle: they are an investor in Project Stargate, so they are helping to fund many of the Arm-based chips. Thank you, Oracle; thank you, Larry. For us, I think the main relationship is through Ampere, a semiconductor company that Oracle and SoftBank are both investors in. Significantly, Ampere is providing a lot of the Arm-based chips that Oracle is deploying. Our relationship is with Ampere, and Ampere's customer is Oracle. That is how that relationship is established.

Mitsunobu Tsuruo
Director of Equity Analyst, Citi Securities

[Foreign language]

Thank you.

My second question is about DeepSeek. When it comes to inference, the cost of inference is coming down because of DeepSeek. Is that positive for your demand because of elasticity, with more room for demand to increase, or is it the other way around? Please share your thoughts.

Ian Thornton
VP of IR, Arm

Yes, as I mentioned earlier, I think DeepSeek should not, to a certain extent, have surprised anyone. The frontier models focus very much on performance and on trying to do new things. If you instead focus a little more on efficiency, taking those frontier models and trying to create a more efficient version, you can make a large amount of progress very quickly, which I think is probably what happened with DeepSeek. So there should have been less surprise about DeepSeek. Some of the surprise was more to do with the fact that it was a Chinese company. Had it been a U.S. startup, I think everyone would have been less startled by it.

To a certain extent, I think there are a lot more efficiency opportunities to come, because that was an efficiency effort focused only on software. Once the algorithms stop moving around, there is the opportunity to focus on the hardware side as well. You can probably see another 10x improvement when you start developing hardware to accelerate the software. As I indicated earlier, we can see this happening already: the image-classification models that five years ago were distinguishing Chihuahuas from muffins are now running in security cameras going into the home. When you stop focusing on the frontier and start focusing on optimization, you can make substantial efficiency improvements, which in turn enables edge devices.

Yes, we absolutely expect to see large language model functionality going into edge devices, and at the edge it will be running on Arm processors. So yes, all very positive for us.

Mitsunobu Tsuruo
Director of Equity Analyst, Citi Securities

[Foreign language]

Thank you.

Moderator

[Foreign language]

Thank you. This is the end of the briefing session. A recording of this session will be uploaded to our IR site. Once again, thank you very much.
