SoftBank Group Corp. (TYO:9984)

Investor Update

Mar 21, 2024

Operator

Thank you very much for waiting, ladies and gentlemen. Let's begin the Arm business briefing. I'd now like to introduce today's speaker, Mr. Ian Thornton, Head of Investor Relations at Arm. We will be using a simultaneous interpretation service. Analysts attending in person, please set your receiver to the channel displayed on the board. For those attending online, the default audio setting is the original audio, which does not include interpretation. If you wish to listen in English or Japanese, select Interpretation from the menu bar in the viewer, or find it under More, and then choose your preferred language. Also, the slides on the display are in Japanese; if you would like to view the English version, select View Options and then select English. Mr. Thornton, please have the floor.

Ian Thornton
Head of Investor Relations, Arm Holdings

Thank you. Good morning, everyone. Thank you very much indeed for coming to this presentation. It's wonderful to be back here in Tokyo once again and to see so many familiar faces. The last time I gave a presentation in Tokyo was in about 2019. Although many things have changed for us since then, including our markets, our products, our business model, and the IPO, in our hearts we are still very much the same company with the same ambition: to build the future of computing on Arm. Firstly, I will talk about how computer chips are designed today and how companies like Arm simplify the design of a computer chip, reducing both time and cost. Then I will talk about Arm: our products, our ecosystem, and our markets. Then the importance of artificial intelligence and how Arm is enabling AI to go everywhere.

Then we'll touch on Arm's licensing and business model. We'll finish with a quick look at Arm's financials for the most recent quarter. Then we'll leave plenty of time for your questions. Consumer electronics have become more capable because the computer chips inside them have become smarter and more complex. On the left is the first-ever silicon chip. The manufacturing process was revolutionary, but the circuit design was very simple. It contained just four transistors and could store a single bit of data. Fast forward 63 years, and the chips have evolved into extremely complex machines like the one shown on the right. The example shown here contains 100 billion transistors. The chips which are used in modern smartphones contain cutting-edge technology for computing, graphics, AI, and radio communications. Few companies possess all of the skills required to design such complex chips.

Looking closely at the chip, you can see that it's made up of a number of distinct blocks. One of the most important components in a chip is the main processor or CPU. This particular chip has 6 CPUs, 2 high-performance CPUs which are the larger blocks shown here, and 4 smaller power-efficient CPUs. The operating system will switch between the CPUs depending on the performance requirements of the application being run. When high performance is needed, it will use the large CPUs. When less performance is needed, it will switch to the smaller energy-efficient CPUs. The graphics processor is typically very large. In this case, the graphics processor is being used for gaming. This is not a data center GPU. And so the size of the GPU is determined by how many pixels need to be updated on the screen.
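The switching between large and small CPUs described here can be sketched as a simple placement policy. This is an illustrative toy, not Arm's or any operating system's actual scheduler; the core counts match the example chip, but the load threshold is a made-up parameter.

```python
# Illustrative sketch of big/little-style task placement: the OS routes
# demanding work to high-performance cores and light work to power-efficient
# cores. Core counts mirror the example chip; the threshold is hypothetical.

BIG_CORES = 2          # high-performance CPUs
LITTLE_CORES = 4       # power-efficient CPUs
LOAD_THRESHOLD = 0.6   # fraction of peak demand that triggers a big core

def place_task(load: float) -> str:
    """Return which core class a task with the given load (0..1) runs on."""
    return "big" if load >= LOAD_THRESHOLD else "little"

# A game frame needs lots of compute; a background sync needs very little.
print(place_task(0.9))   # heavy workload -> "big"
print(place_task(0.1))   # light workload -> "little"
```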

And as we move to larger screen sizes, higher resolutions, and faster frames per second, so the graphics processors tend to get bigger and bigger, needing more transistors. There can be other accelerators on the chip for offloading compute-intensive tasks such as video encoding and decoding, and also encryption. There can be radios on a chip that can connect the smartphone to the outside world. And this can include 5G modems, Wi-Fi, Bluetooth, and so on. There can be special blocks to communicate to other devices in the smartphone, such as display, memory, cameras, power controllers, and so on. And all of these functional blocks need to be connected together. As chips have become larger and more complex, so the interconnect has become more sophisticated, automatically moving data around the chip so that functional blocks are never stalled, waiting for the next piece of data to turn up.

And finally, we have the input and output pins that turn the internal digital signals into the analog signals that travel between chips. Arm is a provider of some of these blocks. Although we are best known for providing CPU designs, we also have graphics processors, accelerators, and interconnect designs. I will now place Arm in the context of the semiconductor industry. Arm provides the design of the CPU, the brain of the computer chip. We then license the CPU design to companies who design the actual chip, which can either be sold on to an OEM or used within their own products. Our customers include companies such as NVIDIA, Samsung Semiconductor, Qualcomm, MediaTek, Tesla, Amazon, Apple, Renesas, NXP, and Socionext. We deliver the CPU designs to them to help accelerate their chip designs.

Now, these companies do not manufacture the chips themselves. They will take their chip designs to a foundry which will physically build out the transistors and the bit cells. These foundries include TSMC, Intel, Samsung, SMIC, and so on. These are some of the largest, most capable fabs in the world. Once these chips are designed by NVIDIA or by Qualcomm and then built by TSMC or SMIC, they then get packaged and go into an end product such as a smartphone, tablet, smartwatch, car, data center, drone, and so on. All of these products have Arm technology inside them. All of these products can only operate when they have software to control them. Your smartphone is just an empty box without software. All of that software runs on a CPU. Arm is by far the most pervasive CPU company in history.

We can see this in the numbers. Our customers have shipped over 280 billion Arm-based chips to date. They have shipped more than 30 billion in the last financial year alone. They are currently shipping around 8 billion chips per quarter, which is the equivalent of one chip for every person on the planet every 90 days. Today, there are also over 15 million software developers currently writing software for Arm-based products. These numbers are so gigantic because everything today is a computer. All of those computers need a CPU, which is the brain of the computer. Around 50% of those CPUs are Arm-based, and that number is only growing. Consequently, our revenues are growing strongly. We are highly profitable, highly cash-generative, and carry zero debt. Arm has four main product categories. Arm's primary products are CPUs. These are the brains of the chip.
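The one-chip-per-person claim is simple division; the world-population figure below is my approximation, not from the talk.

```python
# Rough check of the shipment arithmetic: ~8 billion Arm-based chips per
# quarter against a world population of roughly 8 billion (approximate
# figure, not from the presentation) works out to about one chip per
# person every 90 days.

chips_per_quarter = 8e9
world_population = 8e9

chips_per_person = chips_per_quarter / world_population
print(chips_per_person)  # -> 1.0
```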

Around 90% of Arm's revenues come from these CPU designs. We have four families of CPU; I'll talk more about these on the next slide. Arm also develops other types of processors that can be used in a system-on-chip design. We have a family of graphics processors used in smartphones, smart TVs, and automotive displays. We have an AI accelerator, a neural processing unit, or NPU, mainly targeting AI in edge devices. We also provide many of the complex components such as the on-chip interconnect, as I spoke about earlier. Nearly 10% of our revenues come from these components. Recently, we have introduced a new family of products: our compute subsystems. Historically, we would provide semiconductor companies with components like the CPUs, the GPUs, and the interconnect, and they would have to integrate these components into their chip by themselves.

However, we now have a new type of customer, companies such as Tesla and Microsoft. These companies have only recently started designing chips. So they want more help in their chip design. Consequently, we have now integrated our CPUs and system technologies into a larger subsystem. This further reduces the time and effort to build a chip. We think this is a better starting point for many companies and could become a large revenue growth driver in the future. Currently, we have subsystems for infrastructure products like servers and smartphone application processors. Last week, we announced that we are developing a subsystem for chips in the automotive market. Today, we have only one public customer for subsystems, which is Microsoft. Their first chip will start shipping this year.

Finally, we also provide software engineers with the tools and libraries they need to efficiently develop their programs and apps for Arm-based chips. We largely give this technology away for free as it helps the Arm software ecosystem to grow. As I mentioned, we have four main families of CPU. Cortex-A CPUs are used in products that have an operating system and apps such as smartphones, smart TVs, smartwatches, etc. Cortex-R CPUs are used in products that need to control a mechanical system such as the timing of pistons in a car engine or the spinning of a hard disk drive. Cortex-M CPUs are used in the tiny chips that are used to control and connect many of our digital electronics that we use every day.

The controller chip in a washing machine or the Bluetooth chip in a wireless mouse or the security chip in a credit card or passport. And finally, Neoverse CPUs target infrastructure products such as data center servers and networking equipment. I won't go into each product in detail. But you will notice that I've split these products into Armv8 and Armv9. Every Arm CPU is underpinned by an instruction set architecture. This is the dictionary of software instructions that the Arm CPU can execute. And this is what links the CPU to software. Arm has recently introduced CPUs based on the 9th version of its instruction set, Armv9. This is important because Arm receives a much higher royalty rate for CPUs based on Armv9 than for Armv8. I mentioned earlier that the purpose of the CPU is to run software.

Software is tied to the CPU that it is written for. The success of the CPU is dependent upon the broad availability of software for that CPU. Arm has by far the largest software ecosystem on the planet. We estimate that over 15 million software developers are currently creating new software programs and apps for Arm-based devices. To date, they have invested over 1.5 billion hours in creating Arm software. Arm also invests in software development. We invested around 10 million hours in the software development for Armv8. We are planning to invest 3 times that amount for Armv9. At today's salary rates, 10 million hours is around $1 billion. So for Armv9, we are planning to invest around $3 billion in software development. Although all chips can contain Arm technology, we put most of our focus on four broad market segments.
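The hours-to-dollars arithmetic here can be checked directly; the blended hourly rate is implied by the figures in the talk rather than stated.

```python
# Implied software-investment arithmetic: 10 million engineering hours
# valued at ~$1 billion gives a blended rate of ~$100/hour, and tripling
# the hours for Armv9 gives ~$3 billion.

v8_hours = 10_000_000
v8_cost_usd = 1_000_000_000

blended_rate = v8_cost_usd / v8_hours       # implied dollars per hour
v9_cost_usd = 3 * v8_hours * blended_rate   # 3x the Armv8 effort

print(blended_rate)   # -> 100.0
print(v9_cost_usd)    # -> 3000000000.0
```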

In mobile, for us, this is mainly smartphones, but also tablets and laptops. We have cloud compute. This includes high-performance chips for data centers. There is the automotive market. This market will become very important to Arm as cars start to become self-driving. We have the IoT market where there are billions and billions of tiny chips sold every year. I will now address the latest status in each of these markets. Arm has 100% share of the main application processor for smartphones. On the screen, we say over 99%. But it has been many years since we've been able to find a smartphone without an Arm processor in the main chip. Until recently, all smartphone chips were based on our Armv8 architecture. This typically generates a 2% to 3% royalty rate per smartphone chip.
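To make the royalty model concrete, here is a sketch of how a percentage royalty translates into money per chip. Only the 2% to 3% Armv8 range comes from the talk; the chip price and the Armv9 rate are hypothetical examples.

```python
# Hypothetical royalty-per-chip calculation, using integer cents to avoid
# floating-point noise. Only the 2-3% Armv8 range is from the presentation;
# the $40 chip price and the 5% Armv9 rate are illustrative assumptions.

chip_price_cents = 4000   # hypothetical $40.00 smartphone chip

royalty_v8_low = chip_price_cents * 2 // 100    # 2% -> 80 cents
royalty_v8_high = chip_price_cents * 3 // 100   # 3% -> 120 cents
royalty_v9 = chip_price_cents * 5 // 100        # assumed 5% -> 200 cents

print(royalty_v8_low, royalty_v8_high, royalty_v9)  # -> 80 120 200
```

The same percentage applied to a higher assumed rate is why an Armv8-to-Armv9 mix shift lifts value share faster than volume share.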

However, in the past year, Armv9-based chips have started to gain share. These typically have a higher royalty rate, and as you can see, value share is increasing faster than volume share. Over the next 2-3 years, we expect that Armv9 will largely replace Armv8, which will help to boost our royalty revenues. In addition, we expect that AI algorithms will continue to be deployed within smartphones. AI tends to need more compute than other programs, and whether that comes in the form of more CPUs or higher-performance CPUs, it will further increase our royalty revenue. For the full year ending March 31st, 2023, Arm had around a 10% market share of the cloud data center market. We have a few days left before the end of this fiscal year, but I expect that we'll probably be reporting a 15% share for this year.

Most of our share is from Amazon's Graviton chips. These started to ship in 2020. Now, around half of Amazon's new server deployments are based on Graviton. The other half are chips based on x86 from Intel and AMD. Amazon's latest server chip is Graviton4, which uses 96 Arm CPUs. Amazon has announced that its top 100 customers are using Graviton for some of their workloads. Last year, NVIDIA announced Grace Hopper. This is their latest product for accelerating AI in the data center and combines both CPU and GPU into a single product. Grace is a CPU chiplet that uses 72 Arm CPUs and is combined with an NVIDIA H200 chiplet. Earlier this week, NVIDIA announced Grace Blackwell, an even more powerful combination, which also uses the Arm-based Grace chiplet.

And finally, in November, Microsoft announced their first-ever server chip, the Cobalt 100. This is based on 128 Arm CPUs. The increased number of CPUs allows larger and more complex workloads to be undertaken. We believe that Microsoft will initially be using Cobalt for running Microsoft Office applications like Teams and OneDrive. Arm already has a high share of some of the chips going into automotive applications, and as cars become more advanced, we see that opportunity only increasing. The transition from internal combustion engines to electric and hybrid vehicles is happening now. In Norway, nearly 90% of new car sales are electric, although globally the number is closer to 15%. In an electric vehicle, the amount of charge in the battery equates to mileage.

All electronics need to be energy efficient to leave as much electricity as possible to extend the mileage. Modern cars are increasingly replacing the analog dials for speed and fuel with digital displays. These are being combined with navigation information and entertainment. Having a digital display means that you can provide the driver with the right information at the right time. In a semi-autonomous vehicle, this can include driver information when the human is driving the car and maybe a movie or a game when the car is driving itself. And finally, as the car becomes more autonomous, so the amount of electronics needed increases exponentially. A fully autonomous car may need 10-20 cameras, Lidar and radar, vehicle-to-vehicle communications, vehicle-to-infrastructure communications, and then a massive brain to process all of this information and make decisions in real time.

We are well positioned for much of this technology to be Arm-based. The Internet of Things is a highly diversified market, with everything from smart cameras, disk drives, robotics and manufacturing, and home automation to white goods such as washing machines and smart ovens. Arm has around a 65% share of this embedded market, and that share has grown steadily over time as Arm gradually displaces older proprietary processors developed by semiconductor companies many years ago. When a product becomes a smart product, that is typically when it starts to use an Arm processor. Most of the news stories around AI have been about AI in the data center, such as OpenAI's ChatGPT running on NVIDIA's GPUs. However, every GPU chip needs a CPU to run alongside it. The GPU is great at mathematics and the complex matrix multiplies needed for training.

The CPU is needed to control the GPU. The CPU runs the operating system and the apps that the GPU can then accelerate. And increasingly, the CPU chip is based on Arm technology. In addition, we're seeing AI coming out of the data center and being run on edge devices such as smartphones and consumer electronics, in automotive applications, smart cameras, and even thermostats. And all of these applications outside of the data center are being run on Arm-based CPUs. In smartphones, AI is enabling new use cases such as live translation. In automotive, AI is being used in cameras to identify where the road is, where other road users are, and if there are any hazards around.

In consumer electronics, we are seeing new applications like smart door locks that use AI to identify the friendly face of a family member to unlock, or if it is a stranger, to remain closed. I mentioned NVIDIA's Grace Hopper chip earlier. Looking at this in a little more detail, this chip combines Hopper, an H200 GPU chiplet, with Grace, a CPU chiplet with 72 Arm cores, all based on the Armv9 architecture. NVIDIA claims that Grace Hopper runs some AI applications 10 times faster than when combining the H200 with an x86-based CPU. This is in part due to the chiplet design, which allows for tightly coupled memory between the GPU and CPU chiplets. Jensen Huang, NVIDIA's CEO, has described Grace Hopper as a home-run product based on initial customer demand. AI is also running in your smartphone today.

If you take a photograph using your smartphone, the image will be improved by AI-enabled computational photography. If you apply a filter to that photograph, that will also be implemented by AI. Recent new applications include Circle to Search, where the AI needs to interpret an image on the screen to identify and isolate the object to search for. Live translation is another useful application for AI. And all of this is just run on the smartphone. There is no need for cloud connectivity. We can also run chatbots on a smartphone. This will run entirely on the CPU with no need for accelerators or GPU. This particular chatbot runs at about 10 tokens per second, which is about the reading speed for a human. Arm has three main revenue growth drivers. Firstly, we expect the overall semiconductor industry to grow.

You can see at the bottom of this slide that we expect the overall industry to grow at about a 7% CAGR from 2022 to 2025, with stronger growth in areas such as cloud compute and automotive. Secondly, we expect Arm to gain share. Our overall market share was 49% for calendar 2022, and we estimate 51% for calendar 2023. We have plenty of room to grow in most markets. Thirdly, we expect the royalty per chip that we receive will also grow. Arm's latest technology is being deployed in these new chips, so we can earn more revenue per chip. Let's look at the market share in a little more detail. This looks back over 10 years. The blue bars show the value of Arm-based chips. The orange bars are x86 chips from Intel and AMD.

And then the yellow bars are other chips, a highly fragmented group with no single architecture having a high share. Even in 2014, the value of Arm-based chips slightly exceeded the value of x86 chips, making Arm the largest architecture on the planet by chip value. Now we exceed the value of all other architectures combined. Based on the design wins we are achieving now, we expect Arm's market share to continue to increase. Arm's business model is optimized to maximize our future royalty revenue streams. In recent years, Arm has been moving to a subscription-based business model, because Arm now has many products, and many of our customers use those products in many of their chips; licensing products one by one was taking too much time.

We've moved to a Netflix-like subscription model, where our customers pay an annual fee to get access to a portfolio of Arm products. This starts with a customer selecting which portfolio of products they want access to and how many simultaneous Arm chip designs they expect to need. They pay more if they want to access a larger portfolio or if they want to have multiple simultaneous chip designs. The customer then pays their first subscription payment. Arm delivers the portfolio of IP that the customer has selected. For each chip design, the customer can experiment with different Arm CPUs and different configurations until they have decided which is optimal for their next chip. The customer must then inform Arm of their choice. This is an important step because it allows Arm to then track the design and provide any assistance that the customer may need.

Arm only gets a royalty when the customer is successful in selling their chip. So we want to reduce their chip design time and so accelerate the time to royalty. Once a chip is manufactured and is being sold, Arm will then collect a royalty on every single chip containing Arm technology. The customer's engineering team can then start to get to work on their next chip. The design and manufacturing process can take anywhere from 1 to 3 years. Our customers often have multiple design teams working in parallel so they can have a new chip coming out every year. As a consequence, Arm is constantly adding new processors to the portfolio that the customer has subscribed to, just like Netflix bringing out a new series every year.

Arm's top 20 customers are some of the largest companies in the world, such as Microsoft, Alphabet, Amazon, Apple, NVIDIA, Samsung, Intel, and Qualcomm. As our largest customers are all well financed, Arm has no trouble collecting any monies owed. In addition, for smaller companies, we typically ask for license fees to be paid upfront. To date, over 280 billion Arm-based chips have been manufactured and sold. At the current growth rate, we expect to exceed 300 billion by the middle of next year and 500 billion sometime in 2030. By then, it will have taken us 40 years to reach half a trillion, but it may take fewer than 10 more years to reach 1 trillion chips. It will be an exciting time to watch. Arm technology can ship for many years or even decades.
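The near-term projection can be sanity-checked with the shipment run rate quoted earlier; holding the rate flat is a simplification, since shipments are growing.

```python
# Sanity check of the cumulative-shipment projection: from ~280 billion
# chips already shipped, at the current ~8 billion per quarter, 300 billion
# is about 2.5 quarters away, consistent with "the middle of next year".
# A flat run rate is assumed here for simplicity.

shipped = 280e9
run_rate_per_quarter = 8e9

quarters_to_300b = (300e9 - shipped) / run_rate_per_quarter
print(quarters_to_300b)  # -> 2.5
```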

As you can see here, around 46% of Arm's royalty revenue in 2023 came from technology that we had developed more than 10 years ago. Each new technology family adds another layer of royalty revenue over the top. Arm's royalty revenue has grown at around a 13% compound annual growth rate over the past 12 years. We expect that growth rate to accelerate over the next few years as our latest Armv9 technology comes to market, plus as more AI-enabled chips start to appear. Now to our most recent financials. For the last quarter, we had $824 million in revenue. This was split roughly 60% and 40% between royalty and licensing. Cost of sales are very small, leading to a 97% gross profit. Our non-GAAP operating costs are mainly R&D related, with about 80% of our headcount being engineers.
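The compounding behind that royalty history is worth making explicit: a 13% CAGR held for 12 years multiplies revenue roughly 4.3x. The 13% and 12-year figures are from the talk; the multiple is just arithmetic.

```python
# A 13% compound annual growth rate sustained for 12 years multiplies
# the starting revenue by (1.13)^12, about 4.3x.

cagr = 0.13
years = 12

growth_multiple = (1 + cagr) ** years
print(round(growth_multiple, 1))  # -> 4.3
```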

This gives a non-GAAP operating margin of over 40%. We expect that we can grow our revenue faster than costs, so operating margin should increase by 1-2 percentage points per year. We have a 10-year target of 60% non-GAAP operating margin. And around 30% of our revenue, or around 75% of our non-GAAP operating profit, drops through to cash. So we believe that Arm has a great future. There are more opportunities for Arm in more markets. In many of those markets, more and more Arm chips are required, driving volume. And more of those chips are becoming more advanced, driving royalty revenue. And so with that, we can turn to your questions. Thank you very much.

Operator

Thank you very much. We would now like to start the Q&A session, which will run until 11:30 A.M. Japan time. You can ask questions in either Japanese or English. We will try to accommodate as many questions as possible, so please limit yourself to two questions at a time. Before asking your question, please give your name and affiliation. First, we would like to start with the sell-side analysts here in this room, followed by the online participants. Online participants, please use the raise-hand button, which you will find among the reaction buttons, and then wait. If you wish to withdraw your question, please lower your hand. So first, we would like to take questions from the analysts in this room. Please make sure to use the microphones. Please raise your hand if you have any questions. Okay. Yes.

The person wearing the blue tie, please.

Mm-hmm.

Daisaku Masuno
Managing Director and the Head of Information & Telecommunications Research, Nomura Securities

Thank you. Masuno from Nomura Securities. I have two questions: one about the 10-year timeframe and one about the recent quarter. First, about 10 years from now. Your target is a non-GAAP operating margin of 60%. Over those 10 years, what top-line CAGR are you expecting? And breaking that down by segment, which businesses will drive that top-line growth, and by how much?

Ian Thornton
Head of Investor Relations, Arm Holdings

Yeah. Thank you for your question. Yes. So we have obviously only recently IPOed, and during the IPO process you are not allowed to give too much guidance. We are still very close to the IPO, so we are not giving detailed guidance at the moment. If you give guidance during the IPO and you fail to meet your targets, then you could be in trouble. And if you give guidance shortly after the IPO, the SEC will say, "You should have provided that information during the IPO." So we are not currently giving more detailed guidance than what I just mentioned. The only thing I would say about the general revenue trajectory is that we've indicated we think royalty revenue can grow at high teens to 20%, and that license revenue should grow at mid-single digits.

However, in our most recent period, license revenue has grown more strongly than we had anticipated, growing in the mid-teens percent. So at the moment, we are actually seeing stronger growth than we had expected. But we are not updating our longer-term guidance, because we are not certain how sustainable the near-term higher growth in license revenue will be.

Daisaku Masuno
Managing Director and the Head of Information & Telecommunications Research, Nomura Securities

Let me confirm one point. Royalty revenue grows mid-teens to 20%, and license revenue at mid-single digits. You are talking about 10 years, right? A 10-year span?

Ian Thornton
Head of Investor Relations, Arm Holdings

Yes.

Daisaku Masuno
Managing Director and the Head of Information & Telecommunications Research, Nomura Securities

I have a second question, about the guidance you've already announced. For the fourth quarter, year-on-year, it's about a 40% increase, I think. Why do you think it is so strong in the fourth quarter, and how sustainable is that number? Would you care to comment?

Ian Thornton
Head of Investor Relations, Arm Holdings

Yes. So when looking at a year-on-year growth number, you have to look both at the quarter we are looking ahead to and back at the quarter one year ago. The semiconductor industry is cyclical and goes through periods of rapid growth, very often followed by a period of contraction, because the companies that buy chips buy too many, leave them on the shelf, and then burn down their inventory. This period is known as an inventory correction, and in normal times it happens every 18 months to two years. During the pandemic, there was a shortage of many chips. You may recall that, particularly in the automotive market, some cars could not be sold because some basic chips were simply unavailable.

So after the pandemic, we saw many companies, many OEMs, significantly increase their inventory levels in order to protect themselves against a future pandemic or a return of the pandemic. The phrase that was used was that companies had moved from just-in-time purchasing to just-in-case, and so they had more chips than they normally would have had. But during 2022, we saw that inventory start to be consumed, so fewer chips were bought, and semiconductor industry revenues declined. That decline continued until the industry reached its lowest point in February 2023, the weakest quarter of the semiconductor cycle. Because Arm has approximately 50% penetration of the chips with CPUs sold into the semiconductor industry, we could not dodge that cycle. And so our royalty revenues in particular were impacted by weaker sales during that period.

At the same time, if semiconductor companies are seeing reduced revenues from their chip sales, then they may decide to reduce their R&D budgets, perhaps starting fewer chip designs or postponing a chip design. That can also impact our licensing revenues. We didn't see much reduction in license revenues a year ago, but you don't know what you don't know: the deal that wasn't signed, you don't know whether it was there or not. During 2023, we have seen a strong recovery in the semiconductor industry. Month-on-month sales have been up every month since February 2023. So when we look at Q4 of fiscal 2024, we are comparing a strong quarter with the weakest quarter in the cycle. The year-on-year comparison is therefore an easy one, and the growth looks very strong.
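The base effect described here is easy to illustrate with numbers. All revenue figures below are hypothetical, chosen only to show how the same current quarter looks against different comparison bases.

```python
# Illustration of the base effect: identical current revenue looks far
# stronger year-on-year when the comparison quarter was the trough of the
# cycle. All revenue figures are hypothetical.

trough_quarter = 100.0    # hypothetical revenue at the cycle trough
typical_quarter = 125.0   # hypothetical mid-cycle revenue
current_quarter = 140.0   # hypothetical revenue now

growth_vs_trough = round((current_quarter / trough_quarter - 1) * 100)
growth_vs_typical = round((current_quarter / typical_quarter - 1) * 100)

print(growth_vs_trough)   # -> 40 (growth measured against the weakest quarter)
print(growth_vs_typical)  # -> 12 (growth measured against a typical quarter)
```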

In addition, we are now seeing much stronger license revenue than we had anticipated. Much of this extra license revenue is related to companies being very excited about AI. And so although much of the talk around AI has been in the data center, we are seeing companies that are building chips for edge devices, so smartphones, for smart TVs, for washing machines, also now wanting to build chips that are AI-capable. But they have a problem. It can take 2 to 3 years to build a chip. And so companies are looking at what the market is going to be like in 2 years' time. What is the AI capability that is going to be needed by my customers in 2 years' time? But the AI models are changing so rapidly. Every 6 months, the models are slightly different. They're more capable.

So if it takes me 2 years, the models will have changed multiple times during that period. So they are trying to hit a moving target. And so we have seen companies license our more advanced technology and maybe some of our higher-performance technology to try and future-proof the chips that they are starting today so that when they come out in 2 years' time, they will be able to run the AI algorithms that will be required by that point. And because they're licensing our most advanced technology, that has a higher price tag associated with it. And so licensing revenue in the last few quarters has been higher than we had anticipated.

It also means that royalties in two to three years' time will be a bit higher, because if you license our most advanced technology today, you not only pay a higher license fee, you also sign up for a higher royalty rate in the future. In terms of sustainability: from a comparison point of view, we are currently comparing against the weakest quarter in the cycle. The next few quarters will be compared against progressively strengthening quarters, but given the shape of the downturn and the upcycle, the first few of those comparisons will still be against relatively weak quarters. As to the sustainability of the strength in licensing, we don't know yet. We will be reporting full-year results on May the 8th, and at that time we will take a view on guidance for the following periods.

We'll have to see what our visibility looks like right now. Personally, from a personal view, I don't think the excitement around AI is going to be over next week. I think it's going to continue for some time. So hopefully, therefore, that means that we see continued demand from companies wanting to use Arm in their AI applications for many quarters to come.

Daisaku Masuno
Managing Director and the Head of Information & Telecommunications Research, Nomura Securities

So, about the sustainability of the fourth quarter revenue. For next year, the Wall Street consensus is Q4 revenue annualized plus 15%. If that is the case, then value-wise, Q4 times 4 is the starting point. Do you think that number is sufficient for us to count on?

Ian Thornton
Head of Investor Relations, Arm Holdings

We will be reporting Q4 results on May the 8th. Please give us a little time; as we get closer to May the 8th, we may have a slightly better view. We'll guide future quarters when we get to May 8th.

Daisaku Masuno
Managing Director and the Head of Information & Telecommunications Research, Nomura Securities

Yes. Thank you.

Operator

Thank you. Are there any other questions from the sell-side analysts in this room? If you are participating online, please use the raise-hand button and wait. Now, the person wearing the blue jacket, please.

Speaker 8

Congratulations. Very good presentation. More importantly, it's great to see Arm delivering, in fact, I think, more than what you suggested in 2019. If people had listened carefully, I think it would have been a very good result. I have a question on the new AI product line. You explained to us before that one of the great things about Arm was the ability to work with industry partners to see the future of technology. My impression is that we're in uncharted territory in terms of the number of companies who now want to build chips, but I don't know if that's really true. So one thing is: are we seeing a real broadening of the customer base?

Related to that, based on the work you're doing now, when would we expect to see real AI products coming out from those customers? As you said, it seems that in the last 12 months people have been getting excited, and CEOs and CTOs all have to join the boom now; if you don't join, you're kind of dead. Should we be thinking 2 or 3 years is when you really see that kind of output? The final point on that: is the right way of thinking about this that the biggest impact would be more complexity per chip? Anyway, if you could comment on those areas, thank you.

Ian Thornton
Head of Investor Relations, Arm Holdings

Yeah. In terms of expansion in the customer base, I think I agree with your analysis that we are seeing more companies wanting to build computer chips. I think it is probably surprising to many that companies like Amazon and Meta and Tesla want to build their own chips. For years, I thought that building a chip was getting harder and harder and requiring more and more resources and therefore would become increasingly consolidated around larger and larger companies. But it seems that there are more companies wanting to build chips today, although, as you know, many of those companies are extremely well-resourced and so therefore can afford the very high cost of chip development.

I think what we're seeing here is that software is becoming more of the product that we as consumers are purchasing. When we used to buy a car, the decision used to be based on the quality of the car: how fast it went, whether the leather seats were very nice or not. But now, increasingly, what's selling the car is the user interface. How does it look and feel in terms of the screen, and how do I interact with the data? What software functionality does it provide in terms of self-driving capability or lane-warning signs and things like that? Those are all software-controlled functions.

Because all of that software runs in a CPU, runs in a chip somewhere in the car, then more companies are wanting to take control of the chip because ultimately, that is what differentiates how their software works versus the competitor. If they can make that software be smarter, faster, more capable than their competitors, then maybe they'll sell more cars or more services. So I think that's a trend we have seen across many markets, including now things like cloud computing. So I think we can expect that to continue, that more non-traditional semiconductor companies will want to build chips. One of the reasons for developing our compute subsystems, as I mentioned them earlier, is because these companies haven't been building chips for 30 years. They may want a better starting point than just the individual components.

Having those components pre-assembled into a subsystem gives them a better starting point. One of the first licensees of our compute subsystem said that they had gone from delivery of the subsystem to tape-out of their first chip (tape-out basically means sending the design off to be manufactured) in just 9 months. This was for a complex server chip that would normally take 18 months to 2 years, so the design time was more than halved by using the subsystem. Another customer said that they'd saved $20-$30 million worth of engineering effort, basically the design effort they'd otherwise have to pay their own engineers for, by using the subsystem. So we're definitely seeing that this is a product of real use to these non-traditional semiconductor companies.

Regarding the sort of, "Are we going to start to see real AI technology in the next 2-3 years?" I think one of the most asked questions when I go to an investor conference has been, "What is the killer app going to be for AI at the edge devices? What is going to make me want to go out and buy a new smartphone because it's an AI smartphone? What is that going to mean to me?" And the analogy that I've been using is that this feels a bit like the early days of 4G when the 4G networks were being rolled out. And I was being asked, "What is the killer app for 4G?" 4G had some basic functionality. You could stream video. You could download an attachment to an email a bit faster. Great. But what's the killer app?

Looking back, I would suggest that the killer app for 4G was Uber or ride services more generally because Uber can't work without a 4G smartphone as part of their infrastructure. And equally, you as a consumer, you can't access Uber without a 4G smartphone. So the two are needed together. But Uber is not a 4G app. Uber includes 4G, sure. It includes a smartphone. But it also has lots of infrastructure. It's got cars with people to drive them. But I don't know how many meetings like this I could have sat in before someone said, "I've got it. I now know what the killer app for 4G is. It's taxi services." So sitting here now looking at the smartphone and saying, "What's the killer app for AI?" Yes, I can do live translate. Yes, I can do Circle to Search. But so what?

I think what we're really seeing right now is AI PCs, AI smartphones, AI-enabled cameras, and AI going into cars, with the AI capability being provided to developers as an empty box: "Go build your application. Go build your product. Go build your service," of which the smartphone or the AI PC may be just a small component of a much, much bigger infrastructure. And I'm afraid if I could invent that, I wouldn't be here; I would be with the investment bankers, taking on large amounts of debt and building a big business. So I don't know. But I think maybe in 2-3 years' time, we'll start to see some of those new businesses appear. And maybe then it'll seem obvious to us.

Speaker 8

Just a final point. For those new chip customers, you mentioned that the number of CPUs is just massively higher. Should we expect the complexity per chip from that new customer base to be very positive for your revenues?

Ian Thornton
Head of Investor Relations, Arm Holdings

Yeah. Yes. One of the things I would definitely point out is that if we look at AI through the lens of its software, it's just another way of writing software, using statistical analysis rather than traditional programming techniques. AI algorithms are very computation-heavy; they need a large CPU, or lots of CPUs, to run. So you can expect that digital electronics that become AI-enabled will need more powerful CPUs, at least initially. It'll be interesting to see how the models evolve. GPT-3 had about 175 billion parameters. GPT-4 is reported to have around 1 trillion parameters, so roughly a 6-fold increase. AI in a smartphone maybe won't increase 6-fold, but as new capability is added, it will need more performance. Then we're also seeing models become simplified.

As models start to become a bit more fixed, work is done to optimize them, reducing the number of parameters so they fit into smaller memory and need less performance. I showed earlier a chatbot running quite happily in a smartphone. We're working with a company on a text-to-image generator that runs in a smartphone. It's a bit slow today, but once we've done some more optimization, maybe that can run quite happily as well. So I think we will see both expansion and complexity in compute, and then, once the models start to settle down, more optimization.
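To make the memory arithmetic behind that kind of optimization concrete, here is a minimal sketch. The 7-billion-parameter model size and the numeric formats are illustrative assumptions, not figures quoted in this briefing:

```python
# Rough memory needed to hold a model's weights at different numeric
# precisions. Parameter count and formats are illustrative assumptions.
def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Gigabytes (10^9 bytes) needed to store the weights."""
    return num_params * bits_per_param / 8 / 1e9

params = 7e9  # a hypothetical 7-billion-parameter edge model
for fmt, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{fmt}: {weight_memory_gb(params, bits):.1f} GB")
```

Quantizing from fp16 to int4, for example, cuts the weight footprint by 4x, which is the kind of reduction that moves a model from data-center hardware into a smartphone's memory budget.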

Operator

Any other questions?

Okay. The second from the right, yes.

Satoru Kikuchi
Senior Analyst, SMBC Nikko Securities

I'm Satoru Kikuchi from SMBC Nikko Securities. Nice to meet you. I have 2 questions. Number 1 is about the relationship with SoftBank Group. SoftBank Group holds about 90% of Arm's equity, and I think there are many different investment strategies possible with a 90% stake. From Arm's perspective, before the IPO, you could devote yourselves to development without really having to produce profit, and that was a huge benefit as a growth driver. But now that you are listed and SoftBank remains the majority shareholder, is that still good for Arm? What do you expect from SoftBank Group? That's my first question.

Ian Thornton
Head of Investor Relations, Arm Holdings

So just looking back over the history: when Arm was acquired in September 2016, I remember the meeting when Masayoshi Son addressed the Arm workforce, and his primary message was, "Go, go, go." And so we went. To your point, we significantly increased investment in R&D. That was very necessary: I think the Arm portfolio that we had in 2015 was stretched, and we needed to change things. Those first three years post-acquisition were spent changing the product portfolio from being just one CPU family into CPUs designed for our four main markets: CPUs for mobile, CPUs for infrastructure, CPUs for automotive, and CPUs for IoT and embedded devices. That was something we would have struggled to do as a listed company. During that period, we took our operating margin down from about 50% to about 20%.

I think had we done that as a listed company, the CEO and the CFO would have been fired by the investors, so we needed the support of SoftBank. Since then, we have been very much focused on increasing profitability, selling our new products and collecting the royalties as they start to appear, while also developing our new Armv9 family of processors. At that point, we could start looking ahead to an IPO. Then we had the pandemic, then NVIDIA trying to acquire Arm, and only then could we actually get on with the IPO itself. So the IPO was probably a little delayed. But now, our focus is very much on getting the balance right between investing in new technology and allowing the profitability to come through.

We see lots of opportunity right now for new technology investment, from AI and from our compute subsystems helping companies build chips more quickly, so we have plenty to spend our money on. We hired 1,000 engineers in the last year, we intend to hire another 1,000 this year, and probably another 1,000 next year as well. But nevertheless, we still think that revenues can grow faster than our costs because of what we've developed over the last few years. Thank you.

Operator

Thank you.

Satoru Kikuchi
Senior Analyst, SMBC Nikko Securities

I have one more question, about the dividend. Maybe it is a little too early to talk about the dividend, and I don't think SoftBank wants a dividend at this moment. But with this momentum, you are going to produce a lot of profit. So as for the dividend, are you going to change your philosophy? What would be the driver for such a change? In other words, my second question is: when are you planning to change your thinking on dividend policy, and why?

Ian Thornton
Head of Investor Relations, Arm Holdings

Yes. Currently, Arm has around $2.5 billion in the bank. We are generating... well, last quarter, we generated $250 million of cash, so if we keep that up, that's $1 billion every year. You are quite correct that we have very little to spend our money on. We could do a buyback, but we've only just IPOed, and buying back shares just after issuing some doesn't seem very sensible. Also, I think our investors want more liquidity, not less, so a buyback doesn't really make sense. SoftBank could have taken a dividend before the IPO but chose not to. So right now, we are not being asked by our major shareholder for a dividend, and I think we'd need to wait for their request before we would do one.

We see no need to do one ourselves without the agreement of SoftBank; they own 90% of our shares, so we need to take their lead. We will probably do some M&A. Arm has historically acquired companies from time to time, usually as part of our recruitment strategy. If we have to hire 1,000 engineers, maybe we could acquire a company with 100 engineers already working together: senior managers, line managers, and junior engineers already in place. That can be quite an efficient way of hiring. And then there are some interesting technology areas. With AI being a big focus and system design being a big focus, there may be companies, or teams within a company, with technology that we could turn into IP and then license to our customers.

Not many companies have the sort of technology that turns into generally available IP. We have to be very careful and very selective. There are sometimes opportunities to acquire companies. We will save our cash for that sort of opportunity.

Satoru Kikuchi
Senior Analyst, SMBC Nikko Securities

Thank you very much.

Operator

Thank you.

ございます。現在。

Thank you. As of now, there is no one from the online participants to raise hands. We'd like to continue having questions from the people in this room. If you have questions for online participants, please use the reaction button to raise hands. Okay. The person wearing glasses.

Kirk Boodry
Lead Analyst, Astris Advisory

Hi. Kirk Boodry from Astris Advisory. Hi, Ian. I have two questions, one pretty top-down and the other bottom-up. The first one is about the total addressable market slide in your presentation. It seems to me that the cloud compute expectations, or forecasts, are well behind what your downstream customers are doing. Is that going to change soon? I know you said around the IPO that you have to be careful about how you guide, but I mean, that slide probably needs an update.

Ian Thornton
Head of Investor Relations, Arm Holdings

You're absolutely spot on there, Kirk. We developed this as part of the IPO process in March to May last year, so really ahead of a lot of the excitement, ahead of AMD forecasting what I think was $400 billion worth of AI infrastructure chip sales. Now, maybe not all of that $400 billion is CPU-based; much of it will be GPU-based. But every GPU needs a CPU. So I think it is fair to say that the $28 billion we're showing here for the TAM in cloud compute would be a larger number had we done that analysis knowing what we know now. But we're not going to update it today. We have the numbers that we used for the IPO, and we need to stick with them.

But for May 8th, which is our full-year results, we are looking to see whether to update our sort of longer-term forecasts. And actually, I'm looking at putting in a number for 2030 to give some long-term targets to chase after. But that conversation is still happening internally. But that's my plan.

Kirk Boodry
Lead Analyst, Astris Advisory

Okay. Sorry, I don't know if this mic was on before or not. My other question is related to your customer channels, and it's more of a confirmation, I suppose, because I don't think you can give us a lot of detail. When you look at companies like NVIDIA versus companies like Apple or Microsoft, are the latter economically more rewarding for Arm because of the sort of software and integration package that you sell them, like Neoverse and things like that?

Ian Thornton
Head of Investor Relations, Arm Holdings

I mean, different customers obviously license different amounts of technology. Someone like NVIDIA uses a lot of our technology in a lot of their chips, and has done for many, many years. Their chips for automotive are Arm-based. Their latest AI chips are a combination of an Arm CPU and their own GPU. So we are in a lot of their products, which obviously helps to drive revenues. I think we had a slide with our main customers; you can see that pretty much all the companies you just spoke about are in our top 20. So they all pay us a fair bit of money. Maybe I'm not answering your question right.

Kirk Boodry
Lead Analyst, Astris Advisory

Well, no. If it's not a clear sort of relationship, then probably the answer is no, right? But if you look at Microsoft making chips on their own, and what you sell them in terms of licenses, the royalty units you get for the chips they make, and the support they get for integrating everything on the chip... maybe let me make it more simple: is the royalty, or the revenue you get for each chip, higher with a customer like Microsoft versus NVIDIA?

Ian Thornton
Head of Investor Relations, Arm Holdings

Well, let's take a step back from individual names. If you use our latest Armv9 technology, you pay a higher royalty rate than for Armv8. If you use our subsystems, you pay a higher royalty rate than if you just use v9 on its own. So a chip like Cobalt 100, which uses our compute subsystem, will deliver a higher royalty rate per chip than one that was v9-based but not using our compute subsystem. And that's not because they're Microsoft versus NVIDIA; it's just that if you use more of our technology in your chip, you pay more.
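A minimal sketch of that tiering logic follows. All rates and the chip price are hypothetical, invented purely to illustrate the structure; Arm's actual royalty rates are not public:

```python
# Hypothetical tiered royalty schedule: newer architecture and
# subsystem use carry higher rates. All numbers are invented for
# illustration; Arm's real rates are not disclosed.
ROYALTY_RATE_PCT = {
    "Armv8": 1.5,
    "Armv9": 2.5,
    "Armv9 + compute subsystem": 4.0,
}

def royalty_per_chip(chip_price_usd: float, tier: str) -> float:
    """Royalty owed on one chip, as a percentage of its selling price."""
    return chip_price_usd * ROYALTY_RATE_PCT[tier] / 100

for tier in ROYALTY_RATE_PCT:
    print(f"{tier}: ${royalty_per_chip(100.0, tier):.2f} on a $100 chip")
```

The point is the ordering, not the numbers: each additional layer of Arm technology in the chip raises the per-chip royalty.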

Kirk Boodry
Lead Analyst, Astris Advisory

Okay. Thanks.

Operator

Okay. We still have some questions from the floor, but I'd like to take a question from the online participants. New Street Research, Rolf Bulk. Would you please unmute yourself and start speaking?

Rolf Bulk
Senior Analyst and Co-founder of the Technology vertical, New Street Research

Yes. Ian, thank you for this presentation. This is more or less a follow-up to the question just asked. With regard to your Compute Subsystem solutions, you currently deploy those for server CPUs, smartphones, and automotive. My question is: in which of these segments do you expect penetration of the subsystem-based solutions to be highest, say, five years from now? And is there an upper limit to the penetration you can achieve in smartphones? Your customers in smartphones are very experienced in developing their own SoCs, so do you see a limit to how high your penetration can go with your subsystem solution in that particular market? Thank you.

Ian Thornton
Head of Investor Relations, Arm Holdings

Yeah. There is an opportunity for subsystems to gain share even within smartphones, even within companies that are very experienced at building chips. That's because of a characteristic of the smartphone market, which is maybe not unique, actually; maybe it's becoming more common. Every single year, a smartphone OEM needs a new flagship smartphone to come out, and every single year, they need a new, increasingly complex, high-end chip to go into that flagship. That chip must be significantly more advanced than the one they had a year ago. The semiconductor companies are required to deliver a new and better smartphone chip every single year, and they've done that for many, many years. Except that now, the time it takes to manufacture an advanced chip is getting longer and longer.

At TSMC, at 5 nanometers, it took 16 weeks for a chip to be manufactured. At 3 nanometers, it's taking 20 weeks. So the amount of time that you have to design a more complicated chip has been reduced by a month: you used to have 9 months to design the chip and 3 months to manufacture it; you now have 8 months to design the chip, and it's a more complex chip. And we think that when we go to 2 nanometers, it's going to take another bite out of your remaining design time. So even for smartphone companies, having a better starting point than the individual components may bring a significant benefit.
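The squeeze described above is simple calendar arithmetic. In this sketch, the 52-week cadence and the 2 nm fab time are assumptions; the 5 nm and 3 nm figures are the ones quoted here:

```python
# With a fixed annual flagship cadence, weeks spent in the fab come
# straight out of the design window. The 5 nm and 3 nm fab times
# follow the talk; the 2 nm figure is a hypothetical extrapolation.
CADENCE_WEEKS = 52  # one flagship chip per year (assumed cadence)

FAB_WEEKS = {"5 nm": 16, "3 nm": 20, "2 nm (assumed)": 24}

for node, fab in FAB_WEEKS.items():
    design = CADENCE_WEEKS - fab
    print(f"{node}: {fab} weeks in fab leaves {design} weeks to design")
```

Each process generation shaves roughly a month off the design window while the chip itself gets more complex, which is the opening for pre-assembled subsystems.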

So we're working very closely with some of our large smartphone customers to make sure that our compute subsystems for mobile will enable them to keep hitting that annual beat they have to hit every single year. And the more time taken in the fab, effectively, the more valuable and the more useful our subsystems become. So there is potentially a big opportunity there. Outside of the smartphone market, we have multiple design wins now with our Neoverse Compute Subsystem for cloud compute. We have 4 licensees already, Microsoft being the only one that is public, so the only one I can talk about. But there are others.

Earlier this week, we announced our automotive compute subsystem, targeting in-vehicle infotainment and ADAS chips going into cars. It's not available yet; it will be delivered to our customers next year. So hopefully, that will start to appear in cars in 3-4 years' time. And again, that's targeting both traditional semiconductor companies, who just need to build chips faster, and non-traditional semiconductor companies like car OEMs.

Rolf Bulk
Senior Analyst and Co-founder of the Technology vertical, New Street Research

Thanks, Ian. That's great context. As an unrelated follow-up: given the proliferation of AI on the smartphone and at the edge in general, do you see a possibility for v9 to be adopted at a faster pace than v8?

Ian Thornton
Head of Investor Relations, Arm Holdings

So going back to when we introduced v8 about 10 years ago: the v7 to v8 transition took about 4 years in the smartphone market. Now, v8 brought a very important innovation, which was the ability to run PC applications in your smartphone. Things like Excel and PowerPoint could not easily run within a smartphone before, but with the introduction of v8, PC applications could migrate across into the smartphone market. v9 brings additional big benefits, such as accelerating AI. So they both brought big benefits, and I think it's less to do with the attractiveness of the technology than with the ability of semiconductor companies to roll a new technology out across their whole product portfolio, and then for OEMs to do the same as well.

So even with AI as a driver, I don't think we will see v9 deployed significantly faster than v8 was, simply because I don't think they could go much faster, if that makes sense.

Rolf Bulk
Senior Analyst and Co-founder of the Technology vertical, New Street Research

Thank you.

Ian Thornton
Head of Investor Relations, Arm Holdings

In terms of where we are today: if we assume that it takes about 4 years, which is what happened last time, we're about 1 year in right now. So we still have about 3 more years to go before v9 is in the vast majority of smartphones.

Operator

Okay. I think we have time for one last question, from Richard Kaye online. Please unmute yourself and ask your question.

Richard Kaye
Portfolio Manager and Analyst, Comgest

Thank you, everyone. I appreciate the opportunity.

Operator

Thank you very much.

Richard Kaye
Portfolio Manager and Analyst, Comgest

Could you tell me what is the biggest risk in the five-year outlook that you've laid out? It's a fascinating presentation, and your position is clearly established. But is your biggest risk, in a way, yourself: your own ability to develop at the speed you want, or to hire at the speed you want? People talk about the RISC-V architecture as being a possible alternative to Arm. Do you perceive that as a threat at any point, or do you think your installed-base advantage is so strong that it's not really a threat to your position? Could you tell us what concerns you and the rest of management most in the outlook you've given? You give many positives, which sound very persuasive, but what are the negatives that you're most scared of, besides the end customer disappointing?

Ian Thornton
Head of Investor Relations, Arm Holdings

Yes. Well, I guess there were two parts there, weren't there? There's what we find to be the biggest competitive or technological risk, and then you asked a very specific question about the extent to which RISC-V is a competitive threat. Those are slightly different things. In terms of technological risk, we are in a period of rapid change, and clearly, that brings both opportunities and threats. We know that AI algorithms are going to be introduced into a wide range of embedded markets, as we discussed, from smartphones and PCs onward. Smart TVs will be using AI. Smart cameras are already using AI. Maybe even your washing machine will use AI to wash your clothes cleaner using less detergent, or something. So there is an opportunity for this new technology to be deployed across a very wide range of electronic devices.

As I indicated earlier, the AI algorithms themselves are changing very rapidly. We're only just starting to see the first deployments of AI into edge devices. It's highly likely that in five years' time, the software that will need to be executed will be different. In 10 years' time, it'll be different again. We need to identify how that software is changing and therefore making sure that we are building the right combination of technologies so that Arm is able to provide as much of the solution as possible. What we don't wish to happen is for Arm to repeatedly build the wrong thing and therefore create an opening for a competitor to come along and build the right thing and therefore to partially displace Arm or at least have the value that this new opportunity brings accruing to them rather than accruing to us.

Now, compared to anybody else on the planet, I think we are very well placed to identify how that software ecosystem is going to evolve. We have the largest software ecosystem today. We have deeply embedded relationships with Microsoft, with Google on the Android side, and with Apple, of course, with iOS. We develop a lot of technology with the Linux and open-source software community. So we have, if you like, the best sensing organization for any technology on the planet, pretty much. But that doesn't mean there isn't a lot of hard work required and a lot of decisions to be made about how we go to market and with which technologies. Therefore, there is always a risk of being wrong and somebody else being right.

So that is probably the biggest technology threat and the thing that worries us most of the time. Then, specifically on RISC-V: although RISC-V is an alternative to Arm in some markets, RISC-V is actually a different technology trying to solve a different problem. RISC-V is a processor architecture similar to Arm, but the RISC-V architecture is modular and extensible, allowing people to make lots of changes to the instruction set. And if you remember, the instruction set is the interface between the processor and software. With RISC-V, for each design, you can change the instruction set, which means that if you know what software you want to run, you can create an instruction set that is highly optimized for that particular software algorithm.

But the problem is that that processor will then not run another piece of software, which would need a different design. At Arm, we define the architecture and fix it, so software runs the same across all Arm processors, whereas with RISC-V, each implementation can be unique, and software developers need to optimize their software for each individual RISC-V implementation. So we tend not to find RISC-V in markets where large amounts of third-party software are needed, because supporting multiple different RISC-V implementations is just a lot of cost. Arm is a better choice for something like a smartphone, which needs to run Android and apps across multiple different companies' chips. RISC-V tends to be used in more deeply embedded applications, like a Bluetooth protocol stack chip or some kind of wireless connectivity chip.

That is where we see more RISC-V. So there is some overlap, but it is in 10% of our business, not 90% of our business.

Operator

Thank you very much. It is now time to conclude the briefing.
