Okay, welcome to the Arteris session. Thank you very much for attending. So, we're involved with the semiconductor market. Semiconductor chips are made out of IP, and these are basically blocks that are either pre-made before the project or made for the project, but either way they are assembled into SoCs. So there's an architectural view, there's a view when the SoC is actually being implemented, and there is a physical-awareness view that reflects the physical constraints of a chip. And then it goes on to physical layout to actually be implemented in physical silicon. What we make are these colored parts: the on-chip communication subsystems of the SoC.
So we are the pioneers, and some would consider us the inventors, of using networking techniques inside semiconductor chips. Instead of having dedicated wires on the chip, you're converting everything to a packet-switching format, with a packet header, which is an address, and a data payload, and you're shipping that around the chip. What that does is make it easier to configure the various data transport policies. You get smaller area, higher performance, and lower power, and the interconnect is about 10% to 13% of the silicon area.
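The packet idea described above can be sketched in a few lines. This is a toy illustration only: the packet fields, the route table, and the port names are all invented for this sketch, not Arteris's actual packet format or routing scheme. The point is simply that a header carrying an address lets generic routers forward any traffic, instead of dedicating wires to each source/destination pair.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    address: int    # header: where the payload should go
    payload: bytes  # the data being transported

# A router maps address ranges to output ports (hypothetical map).
ROUTE_TABLE = [
    (0x0000_0000, 0x3FFF_FFFF, "port_dram"),
    (0x4000_0000, 0x4FFF_FFFF, "port_sram"),
    (0x5000_0000, 0xFFFF_FFFF, "port_periph"),
]

def route(pkt: Packet) -> str:
    """Pick the output port whose address range contains the header address."""
    for lo, hi, port in ROUTE_TABLE:
        if lo <= pkt.address <= hi:
            return port
    raise ValueError(f"no route for address {pkt.address:#x}")

print(route(Packet(address=0x4000_1000, payload=b"\x01\x02")))  # port_sram
```

Changing a transport policy then becomes a matter of reconfiguring the table rather than rewiring the chip, which is the configurability benefit mentioned above.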
And then what we also do is provide software and IP for creating integration models of the rest of the IP blocks, the blocks that we don't make, where you essentially establish high-level connectivity, configure the registers, which are the exit ports, and generate the RTL for those exit ports. That makes it easy to integrate the other IPs in a chip into the data transport backbone, which is the network-on-chip. So, we went public in October of 2021, so we've been public for almost three years now. And we have essentially been on target for the financials that we've projected as we go along.
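The register-configuration and RTL-generation flow described above can be illustrated with a minimal sketch: a high-level register description drives a generator that emits HDL stubs. The register map, names, and generated Verilog here are entirely hypothetical; real SoC integration tooling is far more elaborate, but the generate-from-description pattern is the same.

```python
# Hypothetical register map: name -> (byte offset, width in bits)
REG_MAP = {
    "CTRL":   (0x00, 32),
    "STATUS": (0x04, 32),
}

def gen_verilog(block: str, regs: dict) -> str:
    """Emit a (toy) Verilog register declaration per entry in the map."""
    lines = [f"// auto-generated register block for {block}"]
    for name, (offset, width) in regs.items():
        lines.append(f"reg [{width - 1}:0] {name.lower()}_q;  // @ {offset:#04x}")
    return "\n".join(lines)

print(gen_verilog("dma0", REG_MAP))
```

The value of this approach is that the high-level description stays the single source of truth, and the RTL for every IP's interface to the interconnect is regenerated consistently whenever the description changes.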
So there have been approximately 800 designs done with our IP, and about 3.5 billion systems shipped. Some of the daily items that you use probably have some Arteris interconnect in them, right? And hopefully we're not causing you any problems. The customer base is blue chip, some of the largest companies in the world: Intel, Samsung, NXP, STMicroelectronics, plus medium-sized companies and startups. It's about 230-some active customers at the moment, across all the various geographies: APAC, EMEA, and North America. Since we sit in the middle of the chip and we are the data backbone, if you will, we have to work with other companies. So we spend a lot of effort building an ecosystem.
We have access to the latest foundry information from Samsung, TSMC, Intel Foundry, and GlobalFoundries. We work with EDA companies such as Synopsys and Cadence. We work with processor IP, which is primarily Arm, but we also work with RISC-V as the customer chooses. We work with other design IPs, such as memory controllers and physical-layer IPs. We have a network of design service houses that work with our system IP to build silicon for their customers. So we're trying to be a Switzerland of IP, a neutral infrastructure to which the industry can connect. We're operating in five verticals: automotive, communications, consumer electronics, enterprise computing, and industrial.
These are kind of their respective sizes in terms of license revenue. But we're also heavily focused on AI/ML, which is not a vertical. Artificial intelligence is a horizontal, because we believe you will have artificial intelligence features in basically all electronic systems eventually. Your smartphones, your PCs, your refrigerators: eventually you will have machine learning everywhere. And one of the big things is generative AI, which is a major innovation in the man-machine interface, so there's a lot of activity in generative AI. To the point that in the last two quarters, half of the design starts with our system IP have been for machine learning and AI applications.
So, ultimately, this is probably going to be the biggest market, as machine learning gets deployed across many electronic systems to make them much more usable and to improve productivity. Particularly generative AI, which essentially for the first time allows you to ask a machine a question and get an answer that you could not think of yourself. Right? I can give you lots of examples, but we don't have enough time. But generative AI requires the movement of incredible amounts of data, billions of weights, and you have to move data at very high speeds between the processing elements and the HBM3 memories, so this requires a very high-performance interconnect, such as the one we provide.
So we have many AI/ML customers, but most of them want to keep their use of Arteris confidential. One company that allows us to say who they are is a company called Rebellions. It's a mid-sized Korean company that has developed a generative AI accelerator. And this is in silicon, in data centers, among the many other designs that have been done with Arteris for artificial intelligence. Another area we're focused on, and unlike AI/ML, which is a horizontal, this one is a true vertical, is automotive. There is a revolution going on, or at least a disruption, in the automotive industry, because the car is moving from a mechanical device to a computer on wheels.
So we started working on automotive in 2012, before automotive was popular, and today about 70% to 80% of all ADAS automated-driving designs use Arteris interconnect. Companies such as Mobileye, STMicroelectronics, NXP, and Bosch, and many companies that don't allow us to say who they are, are using us for their automated driving SoC projects. We also have a strong position in vision, in radar, in cameras, and also in car modems, because the car is just an endpoint. It is going to be connected to the Internet, it's gonna be connected to the data center, it's gonna be connected to the roadside electronic infrastructure, and cars are gonna be connected to each other.
So in 2022, there were approximately three SoCs per car on average, and about one-million-plus Level 1 driving cars. The projections, which are kind of validated by McKinsey and IHS Markit, are that by 2026 you're gonna have 20 to 25 SoCs per car and about 60 million Level 2+ cars. So that's about 1.4 to 1.5 billion SoCs just for the car alone, and that doesn't count trucks, drones, logistics delivery vehicles, or any of the chips going into the transportation infrastructure to essentially create what I call the Internet of Cars, which is gonna be an automated transportation network. Now, is this going to happen overnight? Absolutely not. This is going to take many, many years, but the hardware decisions are being made now, not in the future.
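The volume math above can be sanity-checked in a couple of lines. The inputs are the figures quoted in the talk; note that the 60 million Level 2+ cars at 20 to 25 SoCs each give 1.2 to 1.5 billion SoCs, roughly consistent with the 1.4 to 1.5 billion figure cited once the rest of the car fleet is included.

```python
# Back-of-the-envelope check on the projected 2026 SoC volumes.
cars_2026 = 60_000_000        # projected Level 2+ cars by 2026
socs_low, socs_high = 20, 25  # projected SoCs per car

low = cars_2026 * socs_low    # 1.2 billion
high = cars_2026 * socs_high  # 1.5 billion
print(f"{low / 1e9:.1f}B to {high / 1e9:.1f}B SoCs")  # 1.2B to 1.5B SoCs
```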
One of the things about our business is that it has very deep moats around it, and those moats have piranhas and crocodiles swimming in them. It takes you 2 to 5 years to build a mature product, and that's being charitable. We know, because we've done it several times. You also have to spend a number of years getting designed in. You have to convince the semiconductor companies or the system houses, get designed into their systems, and have those systems shipped to and validated by their customers, and it takes about 5 to 8 years to generate a royalty stream.
You also have to spend a lot of time building the ecosystem, because you have to be working with partners that are going to be interoperable with your system IP, and you need a very knowledgeable, experienced engineering team that understands the problem, and there are also patent barriers. So it would take another company maybe 10 years to come to a mature market position. That's why this business has deep moats, and that's why there isn't that much commercial competition in the system IP market. Our business is driven by a number of global macro trends.
So you basically have chips that are becoming continuously more complex, which means they use more and more system IP, and they need continuous delivery of new system IP products in order to meet the objectives of the SoC customer. There are also the issues of regionalization: in China, in Europe, and in the US, and to a lesser extent in Japan and Korea, everyone wants to have the semiconductor industry, both design and manufacturing, within their own geographies, which is increasing investment. So in the future, people are gonna be using more system IP, not less, and we want to make sure that it's our system IP and not anybody else's.
And so our growth strategy is this: we want to deliver one new product every year. The product we delivered last year was something called FlexNoC 5, with second-generation physical awareness. That product has been very successful, and we're looking forward to shipping another product this year with more Arteris innovation. We're also committing to our customers that we're gonna ship at least two enhancement releases of our existing products, so they can get the enhancements they want and need for their SoCs. We're focused on the highest-growth markets, automotive and generative AI, and we just announced a relationship with another RISC-V company, Andes. Our goal is to announce a relationship with at least one RISC-V company per quarter for the foreseeable future.
We're trying to run a business that is growing 20% to 25% and that is very balanced across customer size, geographic presence, and application mix, so that we're spread across the world. We also want a balance between the license and royalty models. Our business model is: when you start a project, you pay us a license; when you go into production, you pay us a royalty. And then on top of that, we have done two acquisitions so far, and we would be interested, though nothing's imminent, in additional tuck-in acquisitions to complement our organic growth with technology that makes the network-on-chip and our SoC integration software stronger. So with that, I'd like to ask Matt to take over and maybe ask some questions.
Thank you very much for everybody's attention, and Charlie, for the presentation. I wanted to ask first about business trends. The elephant in the room in generative AI is obviously NVIDIA, and they've taken their product cadence and doubled it, from once every two years to once every year. Not only are large companies like AMD and others trying to react to that, but I would imagine that has spurred some activity on the licensing front. And your two big markets are very different: automotive, which is going through a renaissance but moves relatively slowly, versus the AI/ML world, where, with the big guy in that market moving really quickly, everyone else by necessity has to move really quickly themselves. How has that been reflected so far in the business activity you're seeing in that market?
So, we've announced publicly that in the last two quarters, half the design starts that were given to us through design notices were machine learning designs, so there's a lot of activity. The GPU is a very interesting technology for generative AI because it's a general-purpose architecture that can run virtually any large language model or algorithm. But the problem with it is that it uses a lot of power, and the query cost each time is high. Each time you ask ChatGPT a question, it costs $0.02 to $0.08 for that query, because you're essentially running a couple billion weights of machine learning, and you have to move huge amounts of data between the HBM3 memories and the processing elements.
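The per-query economics quoted above compound quickly at scale, which is the motivation for the dedicated accelerators discussed next. A rough sketch, using the $0.02 to $0.08 per-query figure from the talk; the daily query volume is a made-up assumption for illustration, not a real usage number.

```python
# Rough illustration of why per-query cost matters at scale.
cost_low, cost_high = 0.02, 0.08  # USD per query, as quoted above
queries_per_day = 100_000_000     # hypothetical daily volume (assumption)

daily_low = cost_low * queries_per_day    # ~$2M per day
daily_high = cost_high * queries_per_day  # ~$8M per day
print(f"${daily_low / 1e6:.0f}M to ${daily_high / 1e6:.0f}M per day")
```

At that kind of spend, even modest efficiency gains from purpose-built silicon and higher-bandwidth interconnect translate into large absolute savings.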
So there is a lot of activity in building generative AI ASICs. Right now, some people are reluctant to build ASICs because the software technology is moving very quickly, but as it slows down, you can expect some of the dedicated-purpose accelerators for generative AI to gain more prominence and more market share. Of course, NVIDIA can design their own ASICs; they're a very capable company. But to your other point about speed, this is gonna be the fastest market.
And so one of the ways we need to respond is by offering higher levels of automation to our customers, so that they can speed up their design cycles for generative AI ASICs. In automotive, you're getting design cycles of 2 to 3 years, with the median being 2.5. In generative AI, it's gonna be 9 to 12 months, so you need to develop technologies that allow people to be that agile, and part of our job is to deliver those technologies.
That makes a lot of sense. Nick, I wanted to bring you into the conversation on the financials. It was an interesting time for the company to IPO, right? You were going through this amazing transformation from a technology standpoint, but you also had a good deal of revenue at the time with Huawei that, through no fault of your own, politically got taken away. And so you've been sort of filling in that hole while building long-term momentum for the business.
And I think as you emerge and sort of cross back through, you had the first positive free cash flow quarter, and you're on pace to do that for the year. Maybe you could talk us through the financial guideposts: growth targets on the top line, and also how you graduate from free-cash-flow positive to non-GAAP earnings positive to GAAP earnings positive, and what that whole graduation program looks like.
Sure. I don't know whether I'm wired up or not, but I'll take them. So yeah, these are the three most commonly asked questions, Matt. The way we've set out our stall is that goal number one for us, job one, is free cash flow positive, which we've committed to, certainly for the last couple of years, as a 2024 result. We guided it for 2024, we hit it for Q1, slightly positive. We guided slightly positive for Q2, and we've guided positive for the year, so you can put a check in that box. The second financial milestone for us is non-GAAP profitability, which we have not formally guided, but we've sort of informally indicated is an exiting-2025, entering-2026 result.
And then there's GAAP profitability, which still matters to us, and that is roughly 18 months following the non-GAAP. The second question we get asked most frequently, apart from when do you guys actually start making cash, is: why is your non-GAAP profitability trailing your cash flow? I don't understand it. How can that possibly be right? You must be round the back of the shed printing money. And the truth is that it all sits in the RPO metric. Anybody who follows RPO, which is basically our backlog, will see that in round numbers, in the last year, published quarter one to published quarter one of the previous year, RPO grew around $18 million.
Think of that as cash into the company that is now sitting on the balance sheet as deferred revenue. At another company, one that recognizes revenue at a point in time, that would all have flowed through the income statement. That is roughly the shortfall on our non-GAAP operating loss. So without that, we'd already have been break-even, and everybody should take some comfort in that: the non-GAAP operating profit is absolutely within sight, and I would argue we're already there. It's just that it's sitting on the balance sheet in deferred revenue. And the third thing is obviously when do you hit GAAP profitability, and that is more a feature of when SBC starts to evaporate.
In common with many, in fact almost all, freshly minted public companies, we have to grant a lot of stock, especially around the Bay Area and in a lot of places in the US, to attract top talent, and that generates a lot of SBC; you can't avoid it. Last year, for example, we had about 8% additional dilution, most of it coming out of the SBC. This year, the projection is more like 5%; next year, more like 4%. So we're bringing that in, really by virtue of the fact that we're growing. It's not that we're issuing less stock; we're just not growing headcount quite so rapidly as we once were. The fourth point, rather, sorry, to answer the question you haven't asked, the question that we get most frequently is: how do royalties play into all this?
Yeah.
And it probably was where you were going, so I don't mean to be doing your job.
Somebody's got to do it.
But the answer on royalties is: think of our business as a very long-term-gestation business. We're selling into a market that has a cycle time of 10 to 15 years from design to mass production and peak-level sales in the automotive area. So it's a very slow burn, and there's an inflection point. We started selling licenses in significant numbers into automotive around 2013, 2014, 2015, and those are only just now coming into play, into mass production, into generating royalties.
And if you look at these public numbers that are on our website, there's a slide with the forward look on royalties over the next five years. I can't really call it a projection, but that forward look has an inflection point, which is around 2027, 2028. And that's just by virtue of when those royalties start to kick in.
So, if I summarize that, it sounds like you're moving towards the inflection, but the inflection has a high degree of certainty relative to the business you've been winning over the last five or six years, which will come into the P&L in a pretty major way.
And the nice thing, Matt, is that these are all contracts that we have won. They're already existing contracts. They're legally binding. These are big customers. The royalty rates are known. The only thing that is unknown, though we have to make a good projection on it, is the sales-out volumes.
I guess, last thing, and I think we have a minute or two here. Charlie, new areas of investment. You mentioned introducing new products every year. You guys have done cache coherency and a number of other things that are... And I imagine as the timelines speed up, particularly in AI/ML, none of these folks are gonna have any type of internal NoC capabilities; they're gonna have to license IP. So, any kind of preview as to things that might be coming, areas you're focused on, topics like that?
Yeah, definitely we're focused on automation. We need to speed up the time to results for our customers, particularly for generative AI. We want people to have silicon before the algorithms change. So that's one area of investment. The other area of investment: chiplets are coming, so these big chips are being split up into different pieces of silicon. It's not gonna happen for every project, but for some applications it makes sense to essentially aggregate pieces of silicon with different functionality and put them in a single package.
And so we're focused on being able to deliver a multi-die solution to our customers, so that the chiplet project looks like a single piece of silicon to the programmer. The software people are not being trained for hardware awareness, so we need to make multi-die chiplet systems look like a single programming space to the software programmers. That's the other major area we're focused on.
All right. Well, thank you very much for everybody's attention. Nick, Charlie, thank you guys-
Okay.
So much for the partnership over the years. And I have to say, and I say this to investors a lot: you guys run an amazing business, run by legitimate technology people, and I continue to hope that it gets recognized more.