We are going to start the Arm briefing session. I would like to introduce today's presenter, the Vice President in charge of IR, Ian Thornton. Simultaneous interpretation is provided for sell-side analysts; please tune in to your channel. The channels are displayed here. For those joining online, the default setting is the original audio. If you wish to listen in English or Japanese, you may select the interpretation from the menu bar in the Zoom application. For those online, the Japanese slides are currently shown on the screen. You can switch to the English slides by selecting View Options in the Zoom application and then selecting the screen shared in English. Now, we would like to proceed with the meeting. Ian, please go ahead with your presentation.
Good morning. Thank you very much indeed for coming to this presentation. It is wonderful to be back here again in Tokyo. It's only been a year since I was last here, but many important events have taken place. Arm has reported record revenues for three of the last four quarters. Demand for Arm technology has never been stronger, driven by excitement around AI. Arm has introduced many new products to meet this demand. We have also accelerated investment in R&D to speed up the development of the next generation of Arm technology. Recently, Arm, SoftBank, and OpenAI announced two major new projects: Project Stargate and Cristal intelligence. Project Stargate will increase demand for Arm technology, and Cristal intelligence will enable us to develop new technology faster by using OpenAI's tools in our R&D and business processes.
Firstly, I will talk about how computer chips are designed today and how Arm simplifies the design of the computer chip, reducing both time and cost. Then I will talk about Arm, our products, our ecosystem, and our primary end markets. Next, I will talk about where Arm is investing in new technologies and new ecosystems to take AI everywhere, from the data center to the edge. I will then talk about the impact of this investment on Arm's P&L, and I will show how Arm's R&D investments are aligned with SoftBank's vision for AI and the future of computing. We hope that this will provide a platform for artificial superintelligence (ASI), which is a key focus for SoftBank. We will have plenty of time for your questions. Consumer electronics have become more capable because the computer chips inside them have become smarter and more complex.
On the left is the first-ever silicon chip. The manufacturing process was revolutionary, but the circuit design was very simple: it contained just four transistors and could store a single bit of data. Over the 64 years since then, chips have become extremely complex, like the one shown on the right. This example, from a modern smartphone, has over 100 billion transistors and contains the latest technology for computing, graphics, AI, and radio communications. Only a few companies have the expertise and technology to design and manufacture such complex chips. Looking closely at the chip, you can see it is made up of a number of distinct blocks. One of the most important components in a chip is the main processor, or CPU. This particular chip has six CPUs: two high-performance CPUs, which are the larger blocks shown here, and four smaller, power-efficient CPUs.
The operating system will switch between the CPUs depending on the performance requirements of the application being run. When high performance is needed, it will use the large CPUs. When less performance is needed, it will switch to the smaller, more energy-efficient CPUs. The graphics processor is typically very large. In this case, it is used for gaming — this is not a data center GPU — and its size is determined by how many pixels need to be updated on the screen. As you move to larger screen sizes, higher resolutions, and higher frame rates, graphics processors tend to get bigger, needing more transistors. There can be other accelerators on the chip for offloading compute-intensive tasks such as video encoding and decoding, and also encryption. There can be radios on a chip that connect the smartphone to the outside world.
This can include 5G modems, Wi-Fi, Bluetooth, and so on. There can be special blocks to communicate with other devices in the smartphone, such as the display, memory, camera, power controllers, and so on. All of these functional blocks need to be connected together. As chips have become larger and more complex, the interconnect has become more sophisticated, automatically moving data around the chip so that the functional blocks are never stalled waiting for the next piece of data to turn up. Finally, we have the input and output pins that turn the internal digital signals within the chip into analog signals that communicate between different chips. If you take apart any modern device, from a smartphone to a self-driving car, what's at its core? It's an Arm CPU design.
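As an aside, the CPU switching described a moment ago — big.LITTLE-style task placement — can be sketched in a few lines. This is a minimal, hypothetical illustration of the idea only; real schedulers (for example, Linux energy-aware scheduling on Arm big.LITTLE/DynamIQ systems) are far more sophisticated, and the core names and threshold here are invented:

```python
# Hypothetical sketch: demanding tasks go to the big cores,
# light tasks to the power-efficient little cores.

BIG_CORES = ["big0", "big1"]                                 # two high-performance CPUs
LITTLE_CORES = ["little0", "little1", "little2", "little3"]  # four efficient CPUs

def place_task(estimated_load: float) -> str:
    """Pick a core for a task, given its estimated load in [0.0, 1.0]."""
    if estimated_load > 0.6:      # threshold chosen purely for illustration
        return BIG_CORES[0]       # high demand -> high-performance core
    return LITTLE_CORES[0]        # low demand  -> energy-efficient core

print(place_task(0.9))   # e.g. a game render thread -> "big0"
print(place_task(0.1))   # e.g. a background sync    -> "little0"
```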
As well as the CPU designs, Arm also has designs for many of the other components shown here, including the GPU, the accelerators, and the interconnect. Now that we have seen how these chips are designed, let's explore Arm's role in the overall ecosystem. Arm provides the design of the CPU, the brain of the chip. We then license the design of the CPU to companies who design chips. These chips are either sold on to a device manufacturer or used in the company's own products. Our customers include both semiconductor companies such as Qualcomm, NVIDIA, and Intel, and also systems companies such as Apple, Amazon, and Tesla. In the past, systems companies such as Amazon would buy chips from semiconductor suppliers, but now they are choosing to develop their own chips. They are therefore coming directly to Arm to get their CPU designs.
These companies do not manufacture the chips themselves. Rather, they take their chip designs to a foundry, which will physically build the chips out of transistors and bit cells. These foundries include TSMC, Samsung, SMIC, and so on — some of the leading fabs in the world — and Arm works closely with all of them to ensure that Arm CPUs can be built efficiently in every foundry. Once the chips are designed by NVIDIA or Qualcomm and built by TSMC or SMIC, they get packaged and then go into an end product such as a smartphone, a laptop, a smartwatch, a car, a data center, a drone, and so on. All of these products can only operate when they have software to control them. Your smartphone cannot function without software, and all the software runs on a CPU.
All of these products are powered by Arm's CPU technology. Arm is now the most widely used CPU architecture in history, and we can see this in the numbers. Our customers have shipped over 310 billion Arm-based chips to date. They shipped around 30 billion in the last fiscal year alone, and they are currently shipping around 8 billion chips per quarter — one chip for every person on the planet every 90 days. These numbers are so gigantic because everything today is a computer. Consequently, our revenues are growing strongly. We are forecasting more than 20% year-on-year revenue growth for the current fiscal year, and we are highly profitable, with around a 45% non-GAAP operating margin. We generate strong cash flows and have no debt. I mentioned earlier that the purpose of a CPU is to run software.
Software is tied to the CPU that it is written for, and the success of the CPU is dependent on the broad availability of software for that CPU. Arm has by far the largest software ecosystem on the planet. We've been able to achieve this because over 50% of all the chips with CPUs are Arm-based chips. We estimate that over 20 million software developers are currently creating new software programs and apps for Arm-based devices, and that to date, they have invested over 1.5 billion hours in creating Arm software. Arm also invests in software. We invested over 10 million hours in the software development of Armv8, and we are planning to invest three times that amount for Armv9. At today's salary rates, 10 million hours is around $1 billion. For Armv9, we are planning to invest around $3 billion just in software development.
Arm's top customers are some of the largest companies in the world, such as Microsoft, Alphabet, Amazon, Apple, NVIDIA, Samsung, Intel, and Qualcomm. We work closely with all these industry leaders to understand their product plans and what software they will be running in the future. We can therefore develop the CPUs that are ready to run this software and so enable their future products. Also, as our largest customers are all well-financed, Arm has no trouble collecting monies owed. Although any chip can contain Arm technology, we put most of our focus on six broad market segments. Starting with smartphones: Arm has a 100% market share of the main chip going into all smartphones and tablets, including all the Apple iPhones and Android phones. We are gaining share in the PC market.
Already, all Apple Macs use Arm, and we are beginning to see some Windows devices based on Arm. NVIDIA's recently announced AI PC is also Arm-based. We are also gaining share in the data center, with over a 50% market share within Amazon's AWS, and we see a growing share at Alibaba, Google, Microsoft, Baidu, Tencent, Oracle, and others. We have a high share of wireless comms equipment, especially within base stations — the 5G rollout has been very good for us — and we are gaining share in wired comms. In the automotive space, we have a very high share of in-vehicle infotainment and driver assistance systems. Tesla and many Chinese car OEMs are now very aggressively deploying self-driving features, and many of these use Arm-based chips.
Finally, there is IoT, where billions and billions of tiny chips are sold every year into a wide range of applications, from tiny sensors and actuators, to robotics and logistics, to industrial and citywide monitoring and control systems. All of these products will soon be utilizing AI to improve their capability and usability, eventually evolving into entirely new product categories. Arm intends for our CPU technology to be the brain in every one of these applications, and this requires increased investment today, which we expect will yield benefits for years and decades to come. Most of the news stories around AI have focused on AI in the data center, such as OpenAI's ChatGPT running on NVIDIA GPUs or Google's Gemini running on their own custom ASICs. However, every GPU chip and every accelerator chip needs a CPU to run alongside it.
The GPU is great at mathematics and doing the complex matrix multiplications that are needed for training, but the CPU is needed to control the GPU. The CPU runs the operating system and the applications, while the GPU speeds up the mathematical tasks. Increasingly, the CPU chip being used is based on Arm technology. Within the data center, most of the focus has been on training new models, but increasingly, we are seeing inference taking over as enterprises and consumers start to use AI in their everyday work. Unlike training, inference requires more work from the CPU. Whereas the GPU is great for mathematics, the CPU is great at decision-making. This makes the CPU more important in inference applications. At the edge, AI workloads rely mainly on inference, and so here the CPU plays the most important role.
We think that AI will transform many of the devices that we use every day. Flagship smartphones are already coming with AI applications built in. Eventually, all smartphones will be AI-enabled, and in the future, smartphones may no longer need screens. Instead, they could be controlled only by gestures and by voice. All cars will be AI-enabled, and eventually, the car may become fully autonomous. All manufacturing and logistics equipment will use AI vision to navigate the production line and the warehouse. Industrial robotics, combined with household control systems, might one day lead to a humanoid robot in every home. As I mentioned previously, the only purpose of a CPU is to run software. I also said that Arm works with leading technology companies to determine the future software programs that will be run.
This enables us to understand what new capabilities will be needed in our future CPUs. Along the top of this slide, I have listed when each new version of the Arm architecture was announced. As you can see, we have a major new update every two years. Along the bottom of the slide, I have listed the Arm CPU products that are based on these architecture versions. As it takes two to three years to develop a new CPU, there is a two to three-year gap between the year an architecture is announced and the year the first CPUs are available. This period is also very important for the software companies in our ecosystem, as it gives them the time to add extra functionality into the software products that will be running on the new architecture.
Over the past decade, Arm has been investing in three main areas: security, accelerating AI, and delivering higher performance in our CPUs. Security is becoming increasingly important as more of our daily lives depend on the smart computers we use all the time. Your smartphone can now make credit card payments and access your bank account. Your home security system can unlock your front door and control your security cameras. Your car will transport you and your family to work and school and then home again. We need to make it difficult for criminals or hostile governments to breach these computer-controlled devices. With AI going everywhere, we of course need to add acceleration for the most common types of AI and machine learning algorithms, and higher performance is good for whatever software you want to run.
It is these new features and capabilities that allow Arm to charge a higher royalty rate as we go from generation to generation of the architecture. We are not just investing in AI; we are also investing in our compute platforms. These compute platforms are the fastest way for companies to develop a new Arm-based chip using the latest manufacturing technology while getting the best performance and power outcome. To achieve this, we have pre-assembled our latest CPUs and system technologies into a single reference design. This halves the work needed for a new chip design. Customers can take our reference design, add the high-speed interfaces that they want, and manufacture a general-purpose chip, or they can add their own custom accelerators to create a chip for a specific application.
Our Neoverse compute subsystems are designed for data center server applications and AI accelerators. They can also be used for high-end networking and communications equipment. Our compute subsystems can be used in different configurations. Used standalone, even by itself, a subsystem is a high-performance computing machine suitable for cloud computing. Indeed, this is what Microsoft has done with their first data center chip, Cobalt, as has the French supercomputer company SiPearl. However, you can also add an accelerator for AI applications, or for networking or security applications. Reusing the same compute subsystem across multiple end markets means that we can license the same subsystem to many different companies. Our first Neoverse compute subsystem has now been licensed to six companies for applications including a supercomputer, cloud servers, and high-end networking equipment. As I mentioned earlier, Arm also invests in the ecosystem around our products.
In the case of our compute subsystems, we have created a partnership program, which now has more than 30 companies whose own products and services have been optimized to work alongside our compute subsystems, making it even easier for companies to build their own chips. Another new technology area that Arm is investing in is chiplets. As systems-on-chip have become larger and more complex, it is sometimes easier to break them down into smaller chiplets that can then be integrated together into a single package. This is what NVIDIA has done with their Arm-based Grace Blackwell solution. We have worked with some of our Total Design partners to create multiple different chiplet test implementations for different end markets. For example, Arm has worked with ADTechnology and Samsung to create an AI accelerator chiplet suitable for the data center.
Also, we have worked with Socionext and TSMC to create a chiplet for a supercomputer. In addition, Arm and about 60 of our ecosystem partners and customers have created a new standard, called the Chiplet System Architecture, that defines how these chiplets should communicate with each other and will help make chiplets reusable across multiple different chip designs. Now let's examine the financial impact of these investments. As we saw earlier, even when a new architecture is announced, it can take two to three years to develop a new CPU based on that architecture. We can start signing licenses even while the CPU is being developed, but we cannot recognize revenues until we can deliver that technology to the customer. It then takes our customers another two to three years to develop a chip, and maybe another year for that chip to ramp into meaningful royalties.
Once ramping, it can deliver royalty revenue for many years. Investing more today in more technology and in more advanced technology can yield more license revenues in a few years' time and much more royalty revenue further into the future. Looking back over the past few years, you can see that along with strong revenue growth, we have already been growing our investments in R&D. Both revenue and headcount have grown at a 17% CAGR over the past two years, and now more than 80% of our people are engineers developing the next generation of technology. Going forwards, we expect headcount to continue to grow at this pace and the proportion of engineers to also continue to increase. Rene has decided that non-engineering headcount will be flat going forwards. Instead, we need to use agentic AI to make SG&A functions more productive and more efficient.
Regarding profitability: at any time, we can slow hiring and so allow the revenue growth to drop through to profit growth, which would bring short-term benefits for all shareholders. However, now is not the time to be focusing on profitability. Now is the time to be building the technology for the future. We believe this is what SoftBank and our other major shareholders want us to do, and this is where our greatest opportunity currently lies. Arm's revenue will grow as chips become more intelligent, which drives complexity, and as the increasing demand for more chips in more devices results in more Arm-based chips. As more AI gets deployed into more devices, this will drive demand for even more Armv9, which can accelerate these workloads.
As chips become more complex, their development costs increase, and this will lead more companies to use Arm's compute subsystems, which will result in higher royalty rates. The demand for semiconductors is growing steadily. Analysts estimate that the semiconductor industry will grow at an 8% CAGR between 2024 and 2030. Within the industry, Arm is gaining share in long-term growth markets such as automotive, cloud, and IoT. Arm's customers shipped twice as many chips in FYE24 as in FYE16. Combining more chips, more valuable chips, higher royalty rates, and share gains will help to grow Arm's revenues for years and decades to come. With that, we can turn to your questions.
Thank you very much. Now, until 11:30 Japan time, the floor is open for questions, in either Japanese or English.
We would like to entertain as many questions as possible, so please limit yourself to two questions, and first identify yourself and your affiliation. We will first take questions from the sell-side analysts here, and then from the participants online. Online participants, please press the raise-hand button, which you can find under Reactions. For those in the venue, a microphone will be brought to you; if you have a question, please raise your hand. Thank you. The gentleman in the first row — the microphone will be brought to you.
May I ask my questions in Japanese? I have two. Thank you, Ian. First question: license revenue going forward — how do you see it?
On a quarterly basis, I understand there are a lot of fluctuations, but in the longer run a different picture should emerge, one that links into royalty revenue growth in two or three years. So not on a quarterly basis, but over two and three years, how do you see license revenue growing? That is the first question. Second question, on royalty revenue: the guidance is for growth in the mid-20s percent. Going forward, within the cloud business, the CPU share might grow, but in the AI data center, what is the split between GPU and CPU? I think the CPU percentage may be smaller, so how are you going to drive growth in this area?
Also, in the latter half of this year, NVIDIA's Blackwell computer is coming to market. What would be your expectation for royalty and license revenue growth in this area next year? Those are my two questions.
Thank you for your questions. As we are still in Q4 of FYE2025, only a couple of weeks away from the end of the quarter, and with our results coming up in May, I will have to defer guidance for the coming year to the information we provide in May. I'm happy to answer your question in more general terms regarding the drivers of our licensing and royalty revenues. You make a very good point about our license revenues being quite variable quarter to quarter.
The revenue recognition, particularly around some of the large subscription deals that we sign, means that we can end up recognizing 50-60% of the revenue associated with a multi-year deal all on signature. That can make license revenues lumpy. To help with that, we provide an annualized contract value number, which I would always recommend you look at first, because it spreads the revenue out as if recognition were ratable. Imagine a five-year license at $20 million a year — $100 million in total. We might recognize $50 million on signature, with the remaining $50 million spread over the following 20 quarters, so $2.5 million per quarter. A big spike, and then very little for the remaining period.
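To make that arithmetic concrete, here is a minimal sketch of the hypothetical deal above — illustrative only, not Arm's actual accounting policy:

```python
# Hypothetical five-year, $20M-per-year license (from the example above),
# with 50% of total contract value recognized on signature and the rest
# spread ratably over the following 20 quarters.

years = 5
annual_fee_m = 20.0                        # $M per year
total_value_m = years * annual_fee_m       # $100M total contract value

upfront_m = 0.5 * total_value_m            # $50M recognized on signature
tail_quarters = years * 4                  # 20 quarters
tail_m = (total_value_m - upfront_m) / tail_quarters   # $2.5M per quarter

recognized = [upfront_m] + [tail_m] * tail_quarters
print(recognized[:4])      # [50.0, 2.5, 2.5, 2.5] -- big spike, then a trickle

# ACV smooths the same deal to $20M per year, i.e. $5M every quarter.
acv_quarterly_m = annual_fee_m / 4
print(acv_quarterly_m)     # 5.0
```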
Whereas with ACV, annualized contract value, we spread that out as 20, 20, 20, 20, 20 — $20 million per year. In any quarter where you see license revenue has gone up strongly, go and take a look at ACV; you'll probably find it's only up a little bit. In any quarter where revenue is down strongly, look at ACV; you'll probably find it's still up a little bit. Either way, I'd recommend you take a look at ACV. In terms of what the drivers of our license revenue have been in recent times and what this will mean going forward: in the past year, and particularly when I think back to the IPO, we have seen much, much stronger licensing than we had anticipated.
In fact, if you go back to the IPO and compare to where we are guiding the market now: we were guiding to about $1.3 billion, and it looks like we will end up closer to $1.8 billion. Much, much higher. Almost all of that is due to excitement around AI. If you are starting a new chip design today, regardless of whether that chip is going into a smartphone, a data center, a smart TV, a car, even a washing machine, you know that that chip is going to have to run some form of AI algorithm. These tend to be quite large, computationally heavy algorithms, but they are also changing very rapidly. It takes two to three years to build a chip, but the algorithms are changing quite fundamentally every 6 to 12 months.
In that two to three-year period, the model is likely to change once or twice, so anticipating the performance requirements of that chip is very, very hard. What we have seen in the past year is that companies have been buying our most advanced technology — higher-performance technology than we had anticipated — so that they can future-proof themselves against whatever the future design is going to be. They are really overspeccing, relative to what we had anticipated, so that they can run that algorithm when the chip comes to market in two to three years' time. That has been driving our license revenues in the near term. If I look forwards, do I expect that to continue? I can't see it slowing down at the moment.
I can't see the demand side slowing down, because all the time we see demand for higher-performance compute to run the next models. I think at least for a while, we will see the demand continue. How that translates into revenue will depend on when the lumps occur. As I say, I still recommend that you look at ACV every quarter. On the royalty side of things, one of the downsides, if you like, of now having a 50% market share is that we are exposed to all of the winds of change within the semiconductor industry. There are few parts of the market that can have an inventory correction without also impacting Arm.
We do well in the parts of the industry that are booming, but if any part of the industry is slowing down, then we get impacted by that as well. There's nowhere to hide anymore. In the past year, we saw a slowdown in the sale of chips going into networking equipment, particularly in things like wireless communications, as there was a pause in the rollout of some 5G networks. Those were all Arm-based chips, so there was an impact there. We've also seen an inventory correction in industrial IoT. This was because, during the pandemic, many companies struggled to buy chips, and their supply chains were very tight. After the pandemic, they moved from being just-in-time to being just-in-case and allowed their inventory levels to become much higher.
Now many companies believe that they have more resilient, more diverse supply chains, so they are moving back to a just-in-time approach to inventory management, winding down the inventory that they hold and therefore buying fewer chips. Again, it's not a demand issue; it's just an inventory management issue, and one we hope will come to an end next year. In normal times, we would therefore be expecting next year to be a year of strong royalty revenue growth, because we had two weak parts of the market this year and hopefully will have no weak parts of the market next year. Now, though, we may be looking at an uncertain future due to geopolitics and potentially increased tariffs. Could that mean some form of recession?
If that means fewer cars being sold and fewer smartphones being sold, then that is something we will not be able to dodge. Still, there are a lot of long-term growth drivers within the semiconductor industry that we will benefit from, as I mentioned earlier. The analysts think that over the next five years, the market will grow just under 10% per year. We're gaining share. We have higher royalty rates. If there is a recession or depression, then maybe the growth is less strong. We shall have to find out when we get to next year.
Thank you.
The next question, the gentleman in the front row over here.
Thank you. David Gibson, MSP — apologies. Two questions here. How important is Stargate to Arm? I would have thought you'd get the CPU designs anyway in the network centers or the cloud centers. That's the first question. The second: could you talk about the designs and how competitive your CPUs, GPUs, and NPUs are? What do your customers say needs to improve the most across those three primary products, and hence where are you investing for the future to improve their performance? Thank you.
Thank you, David. On Stargate: first of all, we are not contributing to the funding of Stargate; we are just going to be a beneficiary of the chips deployed within it. Many decisions have not yet been taken about exactly what is going to go into Stargate, and I have no visibility at the moment into the exact mix of technologies that will be included. Arm is the CPU of choice, so we do expect there to be a lot of Arm CPUs in there. NVIDIA are the technology partner of choice, so my default assumption is that there will be a lot of Grace Blackwell chips, with the Grace part being Arm-based. Exactly how many Grace Blackwells, and exactly what the mix will be, is hard to tell.
It is a $100 billion investment in the first data center. That does imply a lot of chips, and therefore, hopefully, a lot of Arm-based chips. Exactly how many is not yet clear. OpenAI is going to be responsible for the management of the data center, so it will be down to them what mix of technologies is used. We will have to wait for them to make that determination. Not yet sure, but it certainly sounds very exciting. Sounds like it should be good. Exactly how many dollars that translates into in royalty revenues, I'm not sure yet. In terms of our technology in the data center: where we're gaining share is predominantly with the cloud service providers developing their own CPU chips for use in their own data centers.
Companies like Amazon developing their own Graviton chip, Microsoft developing their own Cobalt chip, and Google developing their own Axion chip. The benefit for them in developing their own chip rather than buying an off-the-shelf chip is that they can optimize the software that needs to be run, the workloads that they are using or that their customers are using in the data center. They can optimize it with everything else in the data center. For many years now, the big cloud companies have been building their own data centers and customizing the equipment that goes into those data centers. By designing their own chips, they can also design their own blades and their own server systems.
Now they can optimize the software to run on their chip, in their racks, in their servers, in their data centers. The whole thing is optimized top to bottom, left to right. All of our major customers in this space are saying that by doing this, they can achieve somewhere between a 40% and 60% reduction in power by designing their own Arm-based chips rather than using a traditional chip from, say, Intel or AMD. This is not any magic due to the Arm processor — obviously, we are a low-power design, so we do contribute to that. Really, it's what you get when you optimize a chip to run a particular set of workloads. By matching the chip to the workloads, you end up with something that runs much, much more efficiently.
That's, I think, one of the key areas that we are seeing in terms of how we've been gaining share so far. Amazon announced in November last year at their re:Invent conference that for the past two years, more than 50% of the new chips that they have deployed within AWS have been their own Arm-based Graviton design. Microsoft and Google both hit general availability in October last year. They're just beginning to ramp now. We see no reason why in five years' time they could not also be at 50% or higher penetration. The reason for that is that when Amazon first started to deploy Graviton in about 2020, not all of the software that was needed to run in the cloud was available to run on Arm-based chips. Amazon initially made Graviton available for free to software developers. Now that's no longer needed.
All the software is now ported; there is no reason for software not to run on Arm. Also, Amazon was targeting Graviton at their third-party AWS customers, so they could only move as fast as their customers wanted to move — their customers had to port and test their software. Amazon made the decision to charge their customers 40% less for running on an Arm-based chip than on an x86-based chip, because Amazon were getting the benefits of the lower power consumption and were able to pass that on to their customers. That was very encouraging. Our understanding is that Microsoft and Google are also planning to run some internal workloads on their chips as well as using them for Azure and GCP.
The next time you have a conference call using Microsoft Teams, it might be an Arm CPU that is running Teams for you. The next time you watch a YouTube video, it may well be running on an Arm-based chip, because those are the applications that Google and Microsoft are targeting first. More broadly, of the top 10 largest hyperscalers in the world, 8 are now deploying Arm-based chips, and the other 2 have their first Arm-based chips in development. Maybe towards the end of this calendar year or early next calendar year, we would expect that to be 10 out of 10. Hopefully, next time I'm here, it will be 10 out of 10.
In terms of what they're asking from us: as I mentioned in my presentation, the key purpose of a CPU is to run software, so the conversations we're having around the next CPU designs are very much focused on the software they will need to run. We are very fortunate in the relationships we have with many of the companies writing the base algorithms for AI — companies like OpenAI and Meta — and we work very closely with them on the evolution of the software they are creating. The dream is that a new CPU, in a new chip, becomes available at just the same time as the new software algorithms become available. That hasn't been happening in recent years because the models are changing so fast.
The better a job we can do of talking to the companies developing these algorithms, the better a job we can hopefully do of making sure the hardware technology comes to market just as the software does. That is definitely one of the key benefits of things like Project Stargate: it gives us more of an internal view of OpenAI's plans than we would otherwise have if they were just another third-party software company.
Just to follow up: are there any requirements from your customers, on the GPU and NPU side in particular, where you think you're behind what the customers and the market need?
At the moment, we only have CPUs to offer; I will touch on the NPU in a moment. For the data center, we do not have a GPU or NPU product to offer, so that one is a bit of a moot question. Certainly, the focus is on the CPU. Yes, customers would love us to build our CPUs faster and have them come to market faster, and that is something we are obviously always trying to do. At the same time, we need to do it in conjunction with the software companies; we cannot move faster than they can. On the NPU side, it is worth touching on the fact that we do have NPUs — neural network accelerators — for embedded devices.
These are very much targeting not the data center, but rather going into robotics, going into security cameras, going into other devices. In fact, one of the applications that we've been working with a company on is how to put not large language models, but a small language model even into a simple device like a washing machine. Now, you may think, "Why do I want a small language model in a washing machine?" The intention is to make the washing machine a subject matter expert in itself. Basically, the manual for the washing machine gets embedded within the device itself. Then you can, rather than just putting your clothes in and fiddling with the dial, actually tell it what the clothes are, what you need doing.
It will have enough of an understanding to be able to program itself for that particular wash. It will not be able to tell you what the weather is going to be. It will not write you a poem. It is just going to be a subject matter expert in itself. We are certainly looking at how we can take some of these AI technologies and create new product categories — things that did not exist before. That is something we are trying to enable with our embedded NPU.
The next gentleman in the second row from the front.
Hi.
Chris Dudley from BuzzFeed Advisory. I have a question — almost a follow-up to David's question. When we look at all the research and development that Arm has done since SoftBank first invested, one area that stands out is not having a chip designed for AI acceleration in the data center. I was wondering if you could talk a little about why you didn't do it, and whether there's a possibility you do it going forward.
Okay. On the chip part: Arm is an IP provider, so you wouldn't expect us to be selling chips. If I can re-ask the question in a slightly different way: why haven't we developed an AI accelerator as an IP deliverable to license to NVIDIA and others? It's not a technology problem; it's a market demand problem. If you look across companies such as Google with their TPU, Amazon with Trainium and Inferentia, and NVIDIA, everyone is doing something slightly different. They are optimizing. They have strong views about how they want their algorithms to work and run. They are each creating something that is highly differentiated from the others.
As an IP company, Arm does best when we are licensing the same design to everybody, or at least to multiple companies. That way, we can benefit from designing something once and licensing that same design to three, four, five companies. The goal is always — or as often as possible, anyway — to cover the cost of the development with the license fees. That means that the royalties, when they turn up, are real profit. The problem is that if everyone is trying to build something different, then we do not have the same thing that we can license to everybody. What I might anticipate, though — and certainly we have seen this elsewhere in the history of the semiconductor industry — is this: right now, everyone is focusing on frontier models. They are trying to push the boundary of what can be done.
They're doing this by throwing lots of technology at the wall to see what makes progress and what does not. That is leaving a lot of space behind for efficiencies. We've seen this a little with DeepSeek, an algorithm which focused less on being the frontier model and more on making an efficient version of an existing model. That was able to demonstrate, if you believe the claims, a 10x improvement by optimizing one of those frontier models. What we typically find is that as we figure out the solution to parts of the problem we are trying to solve, and as the algorithms start to settle down, we can create more optimized solutions, which we can then license to many companies. If I may give an example to illustrate.
Imagine that we have solved how to do natural language processing: we have figured out how to have a computer understand language and create a natural language response. That could be to understand one person — my voice — so we may have a digital assistant in the smartphone. Or it could be anyone's voice, so we can maybe have a call center in the cloud. Once we have figured out what algorithms are needed, the R&D investment will move on. It will go into self-driving cars. It will go into AGI. Then more focus can go onto making that algorithm more efficient, so that it can be scaled out and lots of companies can have a cloud-based autonomous call center.
Now, with the algorithms becoming more stable, you can start to build chips — CPUs or GPUs or accelerators — that are optimized for that particular algorithm. In the same way we saw DeepSeek make a 10x improvement through software optimizations, you can get similar levels of improvement through hardware optimizations as well. That is the time when you might see an IP company come along and say: rather than you all building your own accelerator for your own chips, why don't we design it once and license it to everybody, because you're all trying to solve the same problem.
You're all trying to do the same thing. If I can give an example where this has happened quite recently in AI: five or seven years ago, a lot of the focus of AI was on image classification. If you remember, there were lots of photographs of chihuahuas and blueberry muffins — can you tell the difference between the chihuahua's face and the blueberry muffin? That was state of the art, AI being run on big data centers in the cloud. Today, you cannot buy a security camera for the home that does not have image classification, object recognition, face detection, and friendly-face recognition. A lot of the technologies that were being run in $5 billion data centers are now being run in $50 cameras.
Similarly, five years ago, there was a lot of focus on voice recognition. Now you can have text-to-speech and live translation as algorithms running in your smartphone — again, from a $5 billion data center into a $500 smartphone. I think many of the frontier models we are seeing today will, in a few years' time, end up in much, much lower-cost goods — including, as I mentioned earlier, maybe even your washing machine. So the answer to your question, Chris, is not so much "Well, why don't you?" as "When does that make sense?" I think it will make sense at some point; I can't necessarily anticipate when that will be. We're already starting to see some of the algorithms that were developed in the data center coming into consumer electronic devices.
I think that's going to be a continuous flow. Meanwhile, the models move on to the next problem and the next problem.
Thanks, Ian.
Thank you. Right now, no online participants have their hands up. Anyone participating online, please use the raise-hand function under the Reactions button. We will continue with questions from people in the venue. First, in the second row from the front.
Thank you. My name is Yasir from UBS Securities. I have two questions. The first question is about CSS — your compute subsystems — and the custom ASIC companies. I would assume that you will be competing against them, competing for added value. Are they partners, or will you try to capture the custom ASIC market with CSS? I wanted to understand the dynamics of that part. That is my first question.
They are customers, and still partners. Sorry — the reason for developing the compute subsystem is that our customer base has been changing. Historically, Arm would license its technology to semiconductor companies, companies whose primary products are computer chips. Over time, though, that has been changing. We have been finding that their customers now want to build their own chips and are coming directly to Arm to license CPUs from us. In some cases, those companies will still engage an ASIC company, but they want the relationship directly with Arm. The reason is that software is increasingly the product being sold: if you're selling a smartphone, you're really selling a box that runs software. The smartphone without software is just a black box; with software, you actually have something that does something.
Even cars are increasingly being sold on software, either the software in the cockpit or the software in the self-driving capability. If software is becoming the product that I am selling, then it is very important for me, the OEM, to control how that software runs. As that software runs on the chip, I therefore need to control that chip. You may choose to acquire or develop the semiconductor design capability yourself in-house, or you might decide to use an ASIC company. Even if you choose an ASIC company, you will still want the relationship directly with Arm, because it is the Arm CPU running your software that is so important to your product sale. Over time, we have started to see more of our customers' customers — more OEMs than we saw previously — become our customers.
Those companies want a different deliverable — a more advanced deliverable than a semiconductor company wants. Some companies are licensing our compute subsystems because they are an assembly of Arm's CPUs and interconnect, a better starting point for their chip design. Again, they can still go and work with an ASIC company on the rest of the chip design. Many companies like Broadcom and Marvell have particular expertise around high-speed interfaces and the back-end part of the manufacturing process, which is a very hard thing to do. But the design of the chip itself, increasingly, the OEMs want to own. It is not that we are competing with the ASIC companies — we are not competing with Marvell — but maybe we are providing more technology to the OEMs than has historically been the case.
This is a demand pull from their customers. This is because of the importance of software going into an OEM's products. We're responding to their request.
My second question: AI systems are now starting to split into a front-end and a back-end. The front-end is the CPU and the network directly linked to the software; the back-end is the GPU side, linked by high-speed networking such as Ethernet. For the back-end software, NVIDIA has essentially created a kind of monopoly software stack. On the CPU side, I understand Arm is very strong, but when it comes to the back-end, can anyone, including Arm, actually compete against NVIDIA — break into their monopoly, in a sense? It may be very difficult right now, but from your perspective as Arm, for this back-end networking and software, is anyone trying to break into that space, or has everyone given up? Which do you think is the case?
Yes, thank you. We are indeed very strong on the CPU side. As for alternatives to NVIDIA, I think the most credible right now are what the cloud service providers are developing for themselves: Google's TPU, and Amazon's Inferentia and Trainium chips. These are being developed by the cloud service providers because they have a better understanding of the problems being solved within their data centers than a general-purpose accelerator such as the NVIDIA GPU can address. I recall seeing an article by Meta claiming, for the algorithms they wanted to run, a five-fold improvement compared to an NVIDIA GPU — because whereas the GPU is a general-purpose accelerator, they had built something to run a specific algorithm.
Again, by building something to run a very narrow use case, you can optimize the chip to run that particular piece of software. In terms of a third-party chip provider competing with NVIDIA, I think there is space for that, and probably demand for that, but I don't currently see any company managing to achieve it. To your point, NVIDIA is becoming not just a chip company but a solutions company, providing a lot of the software as well as a lot of the other systems. Many companies that have tried to compete with NVIDIA have, I think, lost because of CUDA and the ecosystem of developers who have written software to support CUDA.
Any competitor to NVIDIA would need an effective competitor to CUDA as well as to the GPU.
On the back-end side — the AI accelerator and the network — is there a possibility for Arm to win market share there in the future, in an area that isn't the CPU?
I would just say there is space, but right now, we don't have anything to offer.
Thank you.
The gentleman in the back of the third row, who raised his hand earlier. We are also expecting questions from the online participants; if you have a question online, please press the raise-hand button.
May I? My name is Nadao, from DOB Securities. I would like to ask about the Armv9 royalty mix, which has remained at around 25% for the past three quarters. If I remember correctly, it is supposed to rise to 60-70% in the medium term. Looking at the most recent situation, what is the reason behind this performance, and what countermeasures will you take to improve the Armv9 share?
Yeah. The proportion of chips that are Armv9 versus Armv8, Armv7, and Armv6 is entirely determined by consumers going into shops and buying things. It is not something that we control, and we certainly cannot influence it, other than by all going out and buying more smart TVs and smartphones. It is just a mathematical outcome of people going into shops and buying phones and other devices. What we have seen most recently is that Armv9 has more than doubled on a year-on-year basis, but has grown in line with Armv8 on a sequential basis. To a certain extent, we have been slightly surprised by the growth of Armv8: had Armv8 not grown, Armv9 would have a higher proportion.
Armv8 has been growing partly because we have seen a recovery in industrial IoT chip sales. I mentioned earlier that there had been a decline in industrial IoT earlier in the year; we had a bit of a pop back in Q3. Most of that is actually Armv6 — very old technology — and that helped to balance things out. You have to bear in mind that Armv9 is at the moment only in high-end smartphones and in data center chips; everything else is still Armv8 and older, so there is plenty of room for Armv9 to grow into. We are still very confident that, in time, 60-70% of our royalty revenues will come from Armv9.
Actually, in the last quarter, I think we've been just very pleased to see that Armv9 has grown, but also so has everything else. It all pays royalties. I don't really mind that much where the royalty comes from.
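As a toy illustration of the mix effect described here — the volumes below are made up, not Arm's actual figures — if Armv9 royalties double year on year while royalties from the older architectures also double, the Armv9 share of the mix stays flat:

```python
# Hypothetical royalty figures in $M (illustrative only, not Arm's actuals).
v9_last_year, older_last_year = 200.0, 600.0      # Armv9 share = 25%

v9_now = v9_last_year * 2.0          # Armv9 doubles year on year...
older_now = older_last_year * 2.0    # ...but Armv8 and older also grow

share_last = v9_last_year / (v9_last_year + older_last_year)
share_now = v9_now / (v9_now + older_now)
print(f"{share_last:.0%} -> {share_now:.0%}")     # 25% -> 25%: flat mix

# Had the older architectures stayed flat, the same Armv9 doubling
# would have lifted its share of total royalties substantially.
share_if_older_flat = v9_now / (v9_now + older_last_year)
print(f"{share_if_older_flat:.0%}")               # 40%
```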
Thank you. The next question is also from the venue: the person in the second row from the front, towards the window.
My name is Keith Gucci from SMBC Nikko Securities. I have two questions. Thank you, Ian, for your presentation. On page 13 or page 18 of the presentation material: over the last two years, revenue and headcount have both grown at the same 17% rate. Will we see this balance change over the next two years? Based on the plan at the IPO, I think revenue growth was supposed to accelerate more. When will we start to see growth in revenue exceed growth in headcount?
Actually, our revenue growth is higher than we had said at the IPO; we are exceeding our IPO targets. At the IPO, we also indicated that we thought our non-GAAP operating margin would improve by one to two percentage points per year. We did not mention headcount growth, but the intention was always to continue to grow investments in R&D and therefore to continue to grow our engineering headcount. I think we are very much on track. I think it is page 18 — no, that is 19; the previous page. There you go. Thank you. Looking forwards, as I mentioned, I am not going to give revenue forecasts today, but we certainly intend to keep investing by hiring new engineers. Right now, there is a lot of opportunity as we see more algorithms coming from the data center to the edge.
We need to make sure that those run on Arm technology. Also, we now have a partnership with OpenAI through Stargate and through Cristal intelligence. We absolutely must take this opportunity to make sure that we are developing the right technology — CPU or accelerator — so that we can maximize the opportunity for us inside the data center as well as outside it. We have every intention of continuing to grow our headcount on the same sort of trajectory. We added 1,000 engineers last year, and we will add another 1,000 engineers next year at a minimum. The one difference you might see is that we may start doing some of that hiring through acquisitions. To date, we have done all of our hiring organically, but we have a history of buying small semiconductor companies.
We don't want their technology — we're not buying them for their products — we're buying them for their engineering teams. These can be companies of 200 or 300 people. That's great, because that's like one whole CPU team. It comes with graduates, senior managers, project managers, program managers. Often, it comes with a building. We can just go in there, take away the stationery, give them all Arm business cards, and away they go as an Arm design team. That's much, much faster than hiring people one at a time. That's something I think you can expect to see. Right now, we have no plans to slow down hiring. This is not the time to be focusing on profitability.
Even if revenue ended up lower than we had originally anticipated, I think we would continue to hire through that, because the long-term opportunity is so great.
Thank you. The second question is on dividend policy — I believe there is no change going forward. What is your dividend payout policy, or dividend policy, for the future?
Yes. To date, we do not pay a dividend, but we do have plenty of cash — about $2.5 billion of cash and cash equivalents — and we are probably going to be increasing that by approximately $1 billion per year. Right now, we have nothing to spend it on. We have only recently done an IPO, so a big buyback is probably not a good idea: we already get many complaints about the lack of liquidity, the lack of float, and reducing that further is not a desired outcome. There may be some small acquisitions, but they will not absorb our cash. That leaves the dividend. Today, we do not pay one, and I think SoftBank, as our largest shareholder, would have to want a dividend.
We are very interested in getting feedback from our other investors, and also SoftBank's investors, to see whether a dividend is wanted. We are very much open to a dividend if there were sufficient demand for one.
Thank you.
Yes. The person in the third row towards the wall. After that, the person at the front row on this side, in that order.
Tokunaga from Adaro Securities. I have two questions. The first is a follow-up to the earlier question. Which business layer are you going to focus on going forward — not just chip design itself, but the layer next to chip design? You could be a partner there, or, I think, a competitor. Partner or competitor — which direction will that relationship go in the future? Will partnership become more prevalent, or, because of the increased added value you provide, could you become more of a competitor? How do you see the relationship with customers, and the business layers, developing under your strategy going forward? That's my first question.
Thank you. Yes. I would like to think that the relationship will remain very much one of partnership. However, as I mentioned previously, our customer base is expanding. We are seeing more companies who traditionally bought chips wanting to take control of the chips that go into their products. They want Arm to provide the technology directly to them, and they want Arm to provide more technology than we have historically licensed to semiconductor companies. Now, maybe some of our customers would rather we did not provide technology directly to the OEMs. I think it is up to them to demonstrate that they can add more value — that they can build a chip for Amazon better than Amazon can build its own chip. If they cannot, then it is only appropriate that Amazon has the choice of developing the technology itself.
I would not say that we are competing with ASIC companies. Rather, we are enabling their customers to become more independent, if they so choose. I think we are providing more choice rather than competing with our customers.
Thank you very much. My second question is about Cristal intelligence and its relationship with Arm. OpenAI is going to develop Cristal intelligence, and Arm-based chips are going to be part of it. Is that the way Arm is going to be involved? SoftBank has said that, once it is achieved, it will make use of Cristal intelligence in its own business activities. When Cristal intelligence is established, will Arm use it as well? To what extent will Arm be involved, and what are your expectations for this project?
There are two very exciting parts to Cristal intelligence. At one level, I see Cristal intelligence being a little bit like Project Stargate: it is an opportunity for Arm-based chips to be deployed within large, high-value data centers. Hopefully, lots of chips, lots of opportunity for Arm-based chips. In addition, as part of this, we will get access to the OpenAI agents. We will be able to use those internally to improve our own business processes and potentially our own product development processes as well. As I mentioned earlier, we have been told by René that the headcount for SG&A is effectively flat and that, as the engineering side expands, we need to support a larger engineering team with no more people.
We need to focus on automation, AI, and agentic AI in order to scale up to support a much, much larger engineering team over time. On the engineering side, we have experiments going on to see how AI could be useful in our product development. We have been using Microsoft's GitHub Copilot as part of our software engineering activity for some time. The reports I've heard are that it has delivered roughly a 10% productivity improvement, although it is quite patchy between different people; some get more, some get less. That is across over 2,000 people, so at 10%, that is another 200 people's worth of work we have been able to achieve.
As it is quite patchy, I suspect that, as we get used to using the technology, there is a lot more room for further productivity gains. To date, we have not rolled out AI on the CPU design side of things, but I am sure that is something we will be looking to do in the future. Hopefully, that would have a similar benefit in improving CPU design going forwards.
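[Editor's note: a minimal back-of-the-envelope sketch of the productivity arithmetic quoted above. The 10% average gain and the 2,000-person team are from the remarks; treating the gain as uniform across engineers is a simplification, since Ian notes it is "quite patchy".]

```python
# Sketch of the GitHub Copilot productivity figures quoted in the remarks.
# Assumes the ~10% gain applies uniformly, which the speaker says it does not.

engineers = 2_000              # software engineers using Copilot (from remarks)
avg_productivity_gain = 0.10   # ~10% average improvement (from remarks)

extra_capacity = engineers * avg_productivity_gain
print(f"Equivalent additional headcount: {extra_capacity:.0f} people")
# Prints 200, matching the "another 200 people's worth of work" figure.
```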
Thank you very much. Now, the last person from the venue.
Last question from the venue. We have someone online with the hand up as well. That person online will be the last person after we take questions from the venue.
Hello. Oliver Matthew from CLSA. Thank you for your presentation. I actually only have one question, around the longer term. If we are on the journey to AGI and ASI, which suggests an exponential pickup, what do you see your segment mix looking like when we get to those two groundbreaking moments?
If we get to ASI, who knows? Maybe we will all be communicating by telepathy and all have chips embedded in our brains. I have no idea. I always find it very interesting to hear Masa speak about the opportunity for AGI or ASI in terms of having all vehicles be autonomous and there never being an accident again. The amount of compute that would require in each vehicle will be much, much greater than anything we have seen so far. You need multiple cameras, lidar, radar, vehicle-to-vehicle communications, vehicle-to-infrastructure communications, and communications to the cloud, as well as a smart brain in the center. We are working at the moment with many of the car companies to make that happen.
We are working with about 120 of the world's largest car companies in a consortium to define the software requirements and software APIs that will enable a software-defined vehicle. That work is ongoing, but it is probably many years away from becoming a reality. If you start thinking about what that level of automation could do in factories, in the home, in restaurants for preparing food, all of that will be powered by, yes, maybe a super brain that sits in the cloud. But to get things done physically, you are going to have to have a lot of local sensors, cameras, controllers, and actuators to move a robot's limbs, fingers, and joints.
I think all of these markets are going to become much, much larger if we get to a world of AGI and ASI. Arm already has a very high share of the controller chips, the brain chips, and the camera chips that go into all of these devices. I think this looks very, very positive for us if we do end up in a world like that.
Thank you very much. Now, for the final question.
Thank you. The last question is from a participant online: Mr. Tsuro of Citi Securities. This is the very last question. Mr. Tsuro, please unmute and go ahead.
Thank you very much. I have two questions. The first is about Oracle. In Stargate, is Oracle an investor or a business partner, and what is your relationship with Oracle's AI data center business? You mentioned a roughly 50% share at AWS. What about Oracle? If Oracle becomes an important partner, how will that share change? Also, regarding the initial Stargate investment in Texas, what is the status of that investment?
Thank you for your questions. Unfortunately, regarding Project Stargate and the investments, we are not involved at all with investing in Stargate, so I have no answer for you; the SoftBank IR team might be a better place for that question. Regarding our relationship with Oracle: they are an investor in Project Stargate, and they are helping to fund many of the Arm-based chips. Thank you, Oracle. Thank you, Larry. For us, the main relationship is through Ampere. Ampere is a semiconductor company in which both Oracle and Arm are investors, and significantly, Ampere is providing a lot of the Arm-based chips that Oracle is deploying. The relationship we have is with Ampere, and Ampere's customer is Oracle. That is how that relationship is established.
Yes. Um, my second question. Because of DeepSeek, inference is...
The cost of inference is coming down because of DeepSeek. Is that positive for your demand? Given the elasticity, is there more room for demand to increase, or is it the other way around? Please share your thoughts.
Yes. As I mentioned earlier, I think DeepSeek, to a certain extent, should not have surprised anyone. When you focus on the frontier models, you are very much focused on performance and on trying to do new things. If you then focus a little more on efficiency, taking those frontier models and trying to create more efficient versions, you can make a large amount of progress very quickly, which I think is probably what happened with DeepSeek. There should have been less surprise about it; I think some of the surprise was more to do with the fact that it was a Chinese company. Had it been a US startup, everyone would have been less startled. To a certain extent, there were a lot more efficiency opportunities still to come.
I mean, that was an efficiency gain achieved purely in software. Once the algorithms settle down, there is the opportunity to focus on the hardware side as well; you can probably see another 10x improvement when you start developing hardware to accelerate the software. As I indicated earlier, we can see this happening with image classification models: the models we were looking at five years ago, classifying pictures of chihuahuas and muffins, are now running in security cameras going into the home. When you start at the frontier and then focus on optimization, you can make substantial efficiency improvements, which enables edge devices. Yes, we absolutely expect to see large language model functionality going into edge devices, and at the edge, it will be running on Arm processors.
Yes, all very good for us.
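[Editor's note: to make the edge-inference point concrete, here is a minimal sketch of running a small, quantized large language model entirely on an Arm CPU using the open-source llama-cpp-python bindings. The model file name and parameter values are illustrative assumptions, not any specific deployment discussed above.]

```python
# Minimal sketch: local LLM inference on an Arm CPU via llama.cpp bindings.
# llama.cpp ships Arm NEON-optimized kernels, which is what makes 4-bit
# quantized models practical on edge-class devices. All settings below are
# hypothetical examples, not figures from the briefing.
from llama_cpp import Llama

llm = Llama(
    model_path="tinyllama-1.1b-chat.Q4_K_M.gguf",  # hypothetical 4-bit GGUF model
    n_ctx=2048,    # context window
    n_threads=4,   # e.g. four efficiency cores on an Arm SoC
)

out = llm("Q: Why can quantized models run on edge devices? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

The design point matches the answer above: quantization and optimized kernels, not a bigger data center, are what move the frontier-model capability down to the edge.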
Excellent.
Thank you very much. Well then, that concludes today's session.
Thank you. This is the end of the briefing session. A recording of this session will be uploaded to our website. Once again, thank you very much.