Welcome back, everybody. I'm Joe Moore, and happy to be here today with Fermi Wang, the CEO of Ambarella, who's been here several times this conference. But great to have you back. So maybe we could just start out with big picture. This is a year where there's been a lot of enthusiasm for edge AI. You've talked about your revenue has really transformed. You've had some bumps along the road, but the majority of your revenue is edge AI at this point. Can you just talk about the vision, where you're going with computer vision, and where you've been taking the company the last couple of years?
Yeah, so maybe a little history. We started the company in 2004, focused on perception processing technology for video, and in those first years we helped a lot of customers build video perception into cameras across all kinds of camera markets. Right now, the two biggest markets for us are IoT as well as automotive. But starting, I would say, in 2013, 2014, our customers demanded that, in addition to just doing perception, we add video analytics, basically doing neural network processing on the video data in real time. That's where we started our edge AI journey, and we spent, I would say, close to $1 billion of R&D dollars over multiple years. Where we are right now is that our second generation of AI chips, the CV2 family, is generating 70% of total revenue today.
We have sampled multiple chips in our third-generation family, and we'll talk about that. It's definitely focused on more advanced AI processing: for example, transformer networks for autonomous driving, as well as generative AI and vision language models for both the automotive and IoT markets. That's where we are in developing edge AI technology. It's a huge amount of revenue already, and the majority of our R&D dollars are going there. Moving forward, I think Ambarella's position is very clear: we think our customers will demand higher and higher edge AI performance without increasing power consumption too much. Power consumption, in my opinion, is the biggest hurdle to scaling AI, both on the edge and in the data center. We focus on the edge because that's where our market is.
And we think that improving AI performance is important, but controlling power consumption is just as important. So that's going to be our focus: continuing to help our customers implement more advanced neural network models on more advanced chips, with better AI performance and controlled power consumption.
I mean, the technology you guys have is so cool. When you look at the changes since you started doing computer vision chips a few years ago, at that time these problems were solved through heuristics and algorithms. We've since pivoted to where convolutional neural network training and the AI side have led to what you do essentially being low-power, low-latency inference at the edge. How have you made that pivot? A lot of your competitors, Mobileye among them, are still in a heuristic state; I think you guys are now an AI solution. How has your philosophy led to that transition?
You know, I was trained to do video processing; I got a PhD on the video side. In fact, I spent a lot of time looking at machine learning even while I was a PhD student. But when you develop a heuristic or traditional machine learning model, the algorithms to detect a human face and a dog's face are two totally different algorithms. That convinced me it's not a scalable approach. That didn't change until we saw the neural network approach start to become really mainstream. In 2012, GPU performance got to a point where you could start training deep neural networks efficiently, and that's when we became convinced that you can build a lot of different computer vision models with a similar approach. That's where we believed we needed to invest. We invested in 2012, and it has been 12 years now.
And then, like I said, it's a billion-dollar investment for a company like us; our total OpEx this year is $180 million. We definitely spent a lot of money to focus on this area because we believed it was going to transform the company, and it did. And we're going to continue on this route.
And the opportunity that most investors have focused on for several years is automotive. But the good and the bad of automotive is that it takes a long time; once you're there, you tend to stay there for a long time, but it takes a while to develop. Along the way, you've talked about the IoT market, and in particular professional surveillance, as a driver. There have been some bumps along the road there, but can you talk about that market, and maybe the context of how you've turned the corner and are back in growth mode in that space now?
So I think what we suffered in the last two years was really an inventory correction. It's not the technology or the trend of the technology. In fact, last year when we were here, we talked about the inventory correction when none of our peer semiconductor companies were talking about it. We were the first ones to talk about it. But that's not because we have a crystal ball; it's really because we don't have much of a distribution channel between us and our customers, so we could see the inventory build-up in the last few years faster than other people. Past that, the reason we continue to invest heavily in this AI inference engine is that we've started seeing that it's not just security camera applications that require AI performance. We're starting to see cameras going into many different applications.
If you look at Bosch's presentation about their security camera market, they are still talking about smart cities; they talk about all kinds of different new applications that can leverage their investment in security cameras. I use this example a lot: in any city, at a heavy-traffic intersection, you can see cameras set up to look at traffic in all directions and identify all the cars in terms of license plate, color, model, and speed, and pass all of those attributes for every car going through that intersection back to the cloud. It's not only for security reasons; it's to build a data-driven traffic model of the city. It's a huge market also.
So we're starting to see that, because you can add edge AI to the camera, security becomes just one application. More and more different types of applications can leverage this infrastructure.
If you talk to professional surveillance camera companies, it's very clear that this is the path and that your chips are central to the innovation they're seeing. Can you talk about what that means in terms of compute? You've migrated from CV2 to CV5 to CV7. What does that do to compute per dollar for Ambarella?
So first of all, if you look at our current company-average ASP, we're probably around $12-$13; six or seven years ago, it was $6. If you look at our CV2 family, the average ASP is probably around $18-$20. Our CV7 family will be $25-$50, and our CV3 family will be $100-$400. It's because of the AI performance requirements. Even for security cameras, why do people need more AI performance? I'll give you a quick example. Twelve months ago, when large language models popped up, people said, oh, that's OpenAI. But we quickly found out that's not the only case. In fact, even for security cameras, we have this kind of large language model called a vision language model. It takes video as an input and translates it to text.
The text describes what the model sees. You can prompt it; you can ask questions like with ChatGPT. Suddenly, when our customers see this, they say, OK, now I can replace all the people we hire to monitor the security cameras. Basically, you have an automated process to watch thousands of cameras in real time without any human. That speeds up the process. That's just one very simple example of how large language models will play into security cameras, which we didn't see a year ago. Now it's become very clear, and it's not just professional security cameras. If you watch Nest, Ring, those home security camera companies, they're starting to offer vision language models as a service to their home users at around $10 a month. Of course, that vision language model is running in the cloud today.
But we definitely have the capability in our CV7 series of chips to move those 3 billion-7 billion parameter vision language models to run on the edge. That becomes an ASP adder, and also another proof point that our customers will demand higher and higher AI performance in their devices.
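The monitoring workflow he describes (video in, text description out, scanned automatically instead of by a human) can be sketched in a few lines. This is a purely illustrative toy: `describe_frame`, the alert keywords, and the frame format are hypothetical stand-ins, not Ambarella's or any vendor's actual API; a real deployment would call an on-device VLM here.

```python
# Toy sketch of VLM-based camera monitoring: each frame is turned into a
# text description, which is then scanned for conditions worth escalating.
# `describe_frame` stands in for a real vision language model call; it is
# stubbed out here so the flow is runnable.

def describe_frame(frame):
    # Placeholder for an on-device VLM inference call
    # (e.g., a 3B-7B parameter model producing a caption).
    return frame["caption"]

def monitor(frames, alert_keywords=("intruder", "smoke", "fallen person")):
    """Return (camera_id, description) pairs that a human should review."""
    alerts = []
    for frame in frames:
        description = describe_frame(frame)
        if any(kw in description.lower() for kw in alert_keywords):
            alerts.append((frame["camera_id"], description))
    return alerts

feed = [
    {"camera_id": 1, "caption": "Empty parking lot at night."},
    {"camera_id": 2, "caption": "Smoke rising near the loading dock."},
]
print(monitor(feed))  # [(2, 'Smoke rising near the loading dock.')]
```

The point of the sketch is the shift in labor: thousands of feeds reduce to the handful of descriptions that match an alert condition, which is what lets one operator (or none) replace a room of people watching screens.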
I mean, you mentioned the consumer applications. A low-teens ASP doesn't seem that daunting, but when you start thinking about the multipliers as you take that into retail, those end up being expensive doorbells, expensive products. What is the role for Ambarella? Do you need to accept lower margins to participate in consumer? And what does that imply for you guys?
I think, definitely, in our financial model, gross margin is always important for us. We still believe that 59%-60% is the gross margin range we are targeting. But the most important metric for us is operating margin. We need to continue to generate operating margin leverage. We started showing that in Q3, after the inventory correction; our last quarter really showed huge leverage on the operating margin just from revenue growth. We expect that to be the direction we continue to work toward.
But do those gross margins limit your opportunity somewhat? 59% is healthy for most markets, but for consumer it's very high. What's the philosophy there? I understand not wanting to put too much resource into a margin that starts falling, but how do you think about the dynamics?
It's not like all our products are in that gross margin range. We definitely have an appetite to go after lower-gross-margin, higher-volume business, and we're definitely doing that. It's just a question of what range of ASP and gross margin we want to accept. Our philosophy on low-gross-margin business is the following: if a customer who wants to buy our high-end chips also wants a low-end chip at a lower gross margin, we're happy to supply it. But if a customer comes to us and says, OK, I want to buy a $2 chip from you at a much lower gross margin, but I don't have a need for your high-end chips, then that's not a customer I want to spend time with.
Because I really think that for a company our size, with limited R&D dollars and support structure, we need to make sure we support the customers with the biggest potential. And in fact, all our big customers, like Motorola, also have low-end product requirements. We're happy to work with them at a lower gross margin.
Great. And then you talked on the call about strength in other IoT. Can you talk about what applications you're most excited about there?
Right, so in fact, there are multiple things happening right now. One is, for example, wearable cameras, which we've talked about for a long time. We're finally starting to see them ramp up, both in the U.S. and in Europe, as people pay more attention to security. That's one thing. Then all of the in-room video conferencing systems want to add AI features, and they're starting to use our high-end chips like CV5 and CV72. And even our traditional portable cameras: in the past, you just captured video, but if you look at one of our customers, Insta360, they've added a huge amount of AI applications on top of their video capture devices.
We're happy to support those areas, not just because they use video, but because they have a lot of demand for video plus AI performance. That's where we're focusing right now.
Great. And then before I move to autos, just last on IoT there. The growth, it's been hard to triangulate from the outside. I feel like you've had periods of enormous growth, some of which happened while China was going from a third of your revenue to zero because of export controls. You still grew through that really nicely. Then you went through a severe inventory correction. To be fair, most of your peers saw the same thing. I mean, Silicon Labs, a lot of the guys that sell into adjacent applications saw a drawdown that was as severe or worse than what you saw. But just how do you think about those inventory dynamics going forward? Is that just part of the concentration inherent in supplying these IoT markets, or do you think it'll be smoother?
I think we are well past this inventory correction problem. We were one of the first few going through that cycle, and one of the few already coming out of it. We are convinced we're out of the cycle for two reasons. One, we're seeing that all of our big customers have healthy inventory, because they have to disclose it. For example, if you look at Motorola, their inventory is very healthy, and it was much, much worse even two quarters ago. So by talking to our customers, we understand their inventory levels. Two, we're seeing very steady POs that match our lead time, which is anywhere between six and seven months. We're seeing POs placed at that lead time, not urgent POs.
Also, talking to customers regularly throughout the last two years gives us confidence that the inventory correction is behind us. I think that's the reason.
Great. So moving over to auto, big picture; we'll get into some of the changes. You still have a $2.2 billion, six-year funnel for a company that's done less than $100 million of trailing 12-month revenue. That implies still fairly steep growth in automotive. Can you talk about the strategy there and the inflections that are embedded in that funnel?
Right. So first of all, our strategy is focused on the Level 2+ market, which I think is the next big market. Today, if you look at the AD market, Mobileye has a dominant position in Level 1 and Level 2, what's called front ADAS, basically a single front-facing camera. That market has roughly a 60% penetration rate. The next in line to become a big market is Level 2+, which requires multiple cameras, multiple radars, and in some cases LiDAR or other sensors, integrated into a single chip we call a domain controller. That's our CV3 family, so that is the market we're focusing on. Unfortunately, that market has been delayed: if you had asked everybody two years ago what the penetration rate for Level 2+ would be this year, a lot of people would have guessed maybe the teens.
Right now, Level 2+ is still at 5% penetration, based on our own estimation. Half of that is Tesla, half of that is China, and very little is everybody else. So what happened? I think there are two reasons. One is that the price gap between current front ADAS and Level 2+ is still too high, and that price gap really limits growth. Even Tesla needed to reduce their FSD price from $15,000 to $8,000, and the monthly fee to $99, to start seeing more adoption. So that price gap needs to come down quickly to enable penetration. The other reason is software. I think that's probably the biggest cause of the delays, because Tesla and the Chinese OEMs definitely have the best software for Level 2+, which is complicated, AI-based software.
I think they are the market leaders because of the software. Everybody else is trying to figure out how to create software that can compete with Tesla and the Chinese OEMs. That's the current situation.
I mean, you look at the technology that not just you, but your Tier 1 customers, demoed two years ago: these 12-camera systems that are really neatly aligned to the data, in a very low-power footprint. It's frustrating as a consumer that we haven't seen that in cars yet. So is that just EVs taking a lot of the innovation attention away from L2+, or is it regulatory? What has to happen to get that capability into production vehicles?
I think regulation plays a small role in the Level 2+ market; it plays a much bigger role in Level 3 and Level 4. The reason for the delay is really, as we discussed, price, but let me emphasize the software side. This software has to be AI-based. In fact, a lot of people are approaching Level 2+ using a combination of AI and heuristic models, and I think that's going to fail. Not because they can't deliver products, but because they can't compete with Tesla or the Chinese approach of using a pure AI-based software stack and delivering that performance. That's where we are focusing right now. If you look at the software stack we're demoing, which we've also sampled to our Tier 1 and OEM customers, it is 100% AI-based.
I won't say it's end-to-end, because that term has been misused, but our current software stack has three major neural networks inside, and we are on the road to going end-to-end with one large model doing the driving. I also want to say one more thing: I'm convinced that in the long run, large language models will play a role in Level 2+ driving, and we're seeing that coming. So AI performance is going to drive the improvement of software quality, and only the people spending time on AI models, and maybe even large language models, will survive in this market.
So over the last two years, your six-year automotive funnel has come down just a tick. And as I was saying coming in, I think that has actually given you some credibility, because you're assessing it honestly. We know some of these programs have been delayed or canceled. There's a probability weighting, so you could easily tweak the probabilities to grow the funnel just to please Wall Street, but you're not doing that. You're giving us an honest assessment, and one that still projects a lot of growth: $2.2 billion, as I said, is a big number for a company doing $100 million a year. So what's your confidence in where that funnel stands now?
And I assume there's still a lot of low-probability business that could turn into high-probability business as you move forward.
So if you look at the funnel, there are two portions. The first portion is the business we've already won: it's confirmed with the OEM that they've kicked off the project, and we have a letter from them confirming the design win as well as the projected value. Even with that, over the last two years, when the OEMs updated their numbers, the numbers continued to come down; of course, we also continued to add new design wins. The second portion is the pipeline, which is the RFQs and RFIs we are bidding on. The model we use there is: say the OEM's total business size is $100 million; we apply a probability factor to the number we put into our funnel.
If we think we only have a 30% chance to win it, we multiply the total $100 million by 0.3, so we only put $30 million into the pipeline. We think that's the best way to give our investors a way to understand how Ambarella looks at the potential business in front of us for the next six years. The won business is basically confirmed, though the OEMs may change the volume up and down based on their requirements. For the pipeline, we are bidding on several important projects that will definitely change the direction of the company when we win them. So the focus of the company is to win some of that Western OEM business in the next few months.
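The probability weighting he describes is simple expected-value arithmetic, and can be sketched as follows. The figures and opportunity names are illustrative only, not Ambarella's actual pipeline.

```python
# Sketch of the probability-weighted funnel model described above:
# each bid contributes (total opportunity value) x (estimated win probability).

def weighted_funnel(opportunities):
    """Sum each opportunity's value times its estimated win probability."""
    return sum(value * win_prob for value, win_prob in opportunities)

# Example: a $100M RFQ with a 30% estimated chance of winning
# contributes $30M to the funnel.
pipeline = [
    (100_000_000, 0.3),   # hypothetical RFQ A
    (250_000_000, 0.1),   # hypothetical RFQ B
]
print(weighted_funnel(pipeline))  # 55000000.0
```

Won business, by contrast, goes in at its confirmed value (probability 1.0), which is why the two portions of the funnel behave differently as programs get updated.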
And maybe we can talk about China a little bit. China is an area that is going to see early adoption of L2+ relative to the Western part of the market. You have some big wins there: Leapmotor, which you announced earlier this year and which, for context, is multiples of the size of Rivian, so a pretty big company. But China has its own tensions. I know you guys have very deep relationships in China, and you sized it as a surprisingly small portion of the funnel, I thought. How do you think about your prospects in the Chinese market?
Right. So with the geopolitical situation, it's very clear that Chinese OEMs will pick Chinese silicon if they have the choice. Horizon is really the state-owned, state-supported Chinese semiconductor company in automotive in China, so they have a home-court advantage versus us. Our strategy there is very clear. We have been competing in China for 20 years, and we know that if a Chinese vendor can deliver a chip at a similar performance level, we don't have a chance to compete there. So the only strategy for us in China is to continue to deliver differentiation; there has to be a reason for people to pick us. What's the reason people in China would pick us? Scalable software, much lower power consumption, much better perception. Those are the areas where we can compete.
We are focusing on, I would say, the mid to high end of the Level 2+ market in China. The reason we think the China market is important for us is that it's really the most competitive market. They are the technology leader at this point, whether we like it or not; they are leading in automotive ADAS software development. If we can continue to compete in that market, it pushes us to develop technology faster. If we can survive in China, I think we can survive anywhere in the world.
Yeah. So if it weren't for geopolitics, you'd be a much larger company in both surveillance and automotive. But you're right, that's where a lot of the innovation seems to be happening.
Right. In fact, on geopolitics, I just want to highlight that five years ago, when the geopolitical restrictions were first implemented, we lost Hikvision, Dahua, and DJI within a span of 12 months. At that time, those three companies represented 40% of total revenue, and we lost that 40%. We had to regroup and regrow, and that was really painful. But today, our China exposure is roughly 15%; that's the total revenue from domestic consumption in the China market. So I think we've minimized the risk. Still, although we know it's a competitive market, I believe that's the best place to test out our technology. If we can be competitive there, we can really survive anywhere else.
Great. Let me pause and see if we have questions from the audience. I have a lot more, but.
Just on the operating leverage, is that R&D going to stay at this kind of level? And then we'll see the revenues.
Well, I won't say it stays level, but our revenue growth definitely has to outpace our R&D growth. However, I want to highlight, since we've talked about 2nm: 2nm is already plugged into our current OpEx run rate, so it's not like we're going to increase a lot for 2nm. You should just expect a continued uptick on the OpEx side, with revenue growing faster than that.
It seems like a significant portion of your R&D over the last three years hasn't really been monetized at all.
No, not at all.
The revenues are still coming, so.
In fact, when we look at roughly how much R&D we've invested in CV3 and the related costs, probably 50% of our R&D is in the CV3 family, which generates very little revenue today. So that's really where we should continue to focus to get more business.
Any questions?
Yes.
Hi. When you say that China has end-to-end autonomous software, how does that compare to where Tesla is now? And is that led by the actual auto companies, or is it the suppliers?
You're asking our software stack? Right.
No, about autonomous driving software.
Yeah. So we acquired a company called VisLab in 2015. At that time, they had been doing autonomous driving for 17 years; by now they have 20-plus years of work on the software stack. We acquired the company to get expertise on the software stack. Today, we are demoing a complete AI-based Level 4 software stack running in our car. We have been giving demos in Las Vegas and in Parma, where their office is. The biggest difference is that while that software stack is, I think, there functionally and performance-wise, we haven't gone through final production with any customer yet. We're working on it. Aurora is the first customer that is going to leverage our chip, and parts of that software, in their Level 4 commercial trucks.
I mean, you have that software capability. You also have radar through the acquisition of Oculii. You bring a lot to this market. We just need somebody to implement these capabilities.
I agree. In fact, when we designed CV3, we looked at the silicon requirements, but we also looked at all the infrastructure we could integrate to make this easier for our customers. The software stack is definitely one thing. The other is sensor input: we are a camera company, but we also believe we need another sensor modality to support autonomous driving, and we think radar is the best one. So we acquired a radar company, I would say, three years ago. You can see that we're not looking at just a silicon solution; we look at the total solution for our customers. And the way we sell it is very important: we don't bundle it, and we don't try to sell a black box.
In fact, we are ready to tell our customers: if you like a portion of our software, we are ready to license it and help you integrate it into your own software stack. I think that's another very big difference versus our competitors.
Great. Well, I think that brings us up to the end of our time. But Fermi, very excited to see where this goes from here. Thank you so much.
Thank you. Thank you, Joe. Thank you.