Welcome back, everyone. I'm Joe Moore from Morgan Stanley. Very happy to have with us today the CEO of Ambarella, Fermi Wang. Fermi, thanks for joining us.
Thank you.
Just need to read this disclosure real quickly. For important disclosures, please see the Morgan Stanley Research Disclosure website at morganstanley.com/researchdisclosures. If you have any questions, please reach out to your Morgan Stanley sales representative. Fermi, thanks for joining us today. Maybe we could just start out with the transition that you guys have been on: 45% of your revenue last year coming from computer vision, continuing to move up to 60% this year. More recently, you've talked about this computer vision company transitioning to more of an Edge AI company. Can you just talk about where you are in these transitions?
Right. The reason we talk about Edge AI instead of computer vision is that in the past we focused purely on video processing, and computer vision is AI technology for vision. When we started focusing on domain controllers, not only for automotive but for other applications as well, we added radar to our product roadmap at the same time. Suddenly, computer vision doesn't cover everything we do. We felt we needed a more accurate representation of what we're focusing on, and Edge AI is what we came up with. The end point of the transition is really not only about technology and the market; it's about focusing on creating the total solution that we think is important for our customers going forward. In automotive, you need a domain controller and radar.
In fact, I can say that radar is going to come to our IoT space as well. In the future, I think Edge AI across different sensor modalities, plus high-level processing, is going to be the area the company focuses on.
You know, you've really led this computer vision analytics market's transition to deep learning and AI. That seems to me to be a real advantage when you talk about deploying this stuff: some of your competitors are either using programmable silicon or custom silicon that's designed for more of a heuristic approach to solving these problems, while you guys have embraced the underlying software transition from day one. Can you maybe talk a little bit about that software-first mentality you have?
Right. From day one of the company, which was started in 2004, we knew we had to optimize the silicon architecture for the application we're focused on. We call this an algorithm-first, or software-first, approach. We look at the algorithms and software we need to implement for that particular application. After we optimize the software, we come back and look at how to implement that software in silicon. When you do this iteration, not only can you achieve the best performance, because you focused on your algorithms and software, but at the same time, when you come back to optimize the silicon, you can optimize for die size and for power consumption.
That's the reason we can show up today with the best AI performance at the lowest power consumption.
I guess, you know, deep learning's obviously been in the headlines a lot this year outside of the vision space because of what's happening in natural language, large language models.
Mm-hmm.
ChatGPT, those kinds of things. I know you guys have always talked about transformers being an important element of your architecture. Can you just talk to how you apply that same methodology to your markets?
Right. In fact, the transformer is a neural network architecture proposed by Google in 2017 that became popular in 2018. We started paying attention to it because it was a brand-new architecture, and we found out that Tesla was going to use it in the FSD software. So we started looking early on at how to implement transformers. Today, we believe we are one of the very few silicon suppliers that can provide a very efficient transformer network implementation in silicon and demo it in real time. In fact, at CES, we demoed a customer's transformer software running on our chip, and we also demoed our own transformer software running on our chip.
That really shows that our application-specific hardware has enough programmability to adapt to different types of network architectures.
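For context on what "running a transformer efficiently" means computationally: the core operation is scaled dot-product self-attention, in which every token attends to every other token. The sketch below is a generic NumPy illustration with made-up dimensions, not Ambarella's implementation; it just shows the matmul-and-softmax pattern that transformer-capable silicon has to execute well.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence.

    x: (seq_len, d_model) input embeddings
    Wq, Wk, Wv: (d_model, d_head) projection matrices
    Returns: (seq_len, d_head) attended representations.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # The all-to-all score matrix is what makes transformer inference
    # compute- and bandwidth-hungry compared with pure CNN workloads.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ v

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 8, 16, 4
x = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (8, 4)
```

The same pattern scales up to LLMs and vision transformers; the hardware question is how efficiently the score matmul and softmax map onto the chip.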
This is a bit above my knowledge base, but when you talk about transformer inference, you know, you guys are really an inference company.
Yes.
Transformers add material complexity, multiple passes, to that inference. How widely applicable is that? I mean, can you apply this to other workloads outside of vision?
I think our current silicon can run any transformer-based application; what matters is how efficient it is. All the large language models today, and also some computer vision software like Stable Diffusion, use transformer networks, although they are not really for autonomous driving or the IoT space. To know whether we can do well in terms of performance efficiency, we need to look at two things. One, we need to verify the performance we can get running those LLMs on our silicon.
Second, we need to look at whether there is a business opportunity for us in the near future. We are definitely doing a study on that.
I mean, it's really wild to me because, you know, when we did one of these in December and talked about some of these things, it was this kind of esoteric semiconductor question. Now suddenly I'm getting Google and Microsoft press releases every other day about lowering the cost of inference.
Right.
Lowering the cost per query and these things. It seems like there's a real opportunity there to do this. You have technology that might be.
I agree with that because, in the past, the majority of people doing neural networks focused on training, because that's a difficult problem and you need a very powerful platform. Inference wasn't the focus, because the amount of inference on the server side was pretty small compared to training. Suddenly, because ChatGPT has 100 million people using it on a regular basis, the pain point of providing performance is not on the training side anymore, it's on the inference side. Inference now dominates the server performance requirement. From that point of view, it really raises the question of whether there is a requirement for optimized, dedicated inference hardware on the server side, and that's something we need to figure out.
You have leadership technology in a really important new area, but you have struggled a bit with the numbers recently, so maybe we could talk through some of that. You've had a couple of supply chain issues in the last couple of quarters, and you've had inventory dynamics, particularly in the IoT markets. Can you just talk about where you are in terms of resolving some of those issues?
Right. We believe the current weakness we're seeing in our Q1 guidance is because of inventory correction on our customers' side. We gave an example in our script: one of our customers is seeing their revenue from products based on ours grow 10%-15%, but at the same time, their POs and forecasts to us are down 20%. There's a gap there, which we think is due to inventory correction. We are seeing similar patterns, at different scales, among all of our IoT customers, and that's the problem we're dealing with.
That's why we say our Q1 guidance is weak, but we also believe that's the low point from the inventory correction point of view. We hope we'll start recovering from here; we just don't know how fast the recovery is going to be.
If you look at these surveillance markets, you know, you have a very obvious growth story there because you're moving from your traditional video processing socket to a computer vision socket that's more than 2x the value. You know, it's surprising, I guess, to me, to see the magnitude of the drawdown that you've seen. Are you sure that you're not, like, losing market share in some of those legacy areas? You know, how do I think about framing that longer term opportunity?
Well, there definitely is some of that. Our CV products are replacing some of our video processing products; that's definitely one factor we're considering. There's another consideration: three years ago, our Chinese security camera business was 30% of total revenue.
Mm-hmm.
This year, it's going to be zero. That transition is also something we're experiencing. On top of that, there's the inventory situation we talked about. We think our design win pipeline is still very healthy, and we're confident that what we are dealing with today is an inventory problem, not a design win problem.
Okay, great. You talked about CV being 60% of your revenues this year, up from 45%. Now, there's no revenue guidance associated with that, but on my numbers, that would be something like 10%-15% growth in computer vision, which would be a pretty big deceleration from, I guess, 70%+ last year. How do I think about that? I'm not asking for guidance, but I would think that's relatively conservative relative to the opportunities we know you have.
In fact, when we give guidance, the biggest worry for me is, have we considered all the potential inventory issues? I won't call it conservative, but when we give guidance, we definitely want to make sure we've considered all the potential problems we may deal with on the inventory side. Computer vision, I continue to see not only giving us twice the ASP, but also taking us into new applications, right? We talk about access control in the near term; we talk about potentially going into robotics. And I think auto is our biggest opportunity in the next few years.
I am confident that computer vision, or I should say AI applications, will continue to drive our growth after we finish this inventory correction.
Great. Maybe we can pivot and talk a little bit about your automotive opportunities. I want to get to the new stuff, to CV3, but if we look at where you are today, you have a very strong position in areas like surround view, video, digital rearview mirrors, and driver monitoring. Some of those categories seem kind of slow to take off, driver monitoring in particular, where we know there's regulatory support and we know it's part of the European luxury car standards. I know from talking to your customers how strong your position is. When do you start to see those markets contributing more to revenues?
We definitely see those markets continuing to grow. In fact, on top of that, even on the regulation side: in China, they finally approved using electronic mirrors without an optical mirror alongside them. Suddenly, there is regulation supporting more electronic mirror design wins. From that point of view, we believe the growth of DMS and e-mirrors will be important for us. However, at the same time, the ASP is on the low end compared to our ADAS or CV3 revenue, so the impact on our total revenue growth is limited by the ASP, not by unit growth.
Mm-hmm. It still seems like a pretty relevant number as those things start to ramp. I should mention your automotive funnel for the next six years: you've talked about a roughly $2.3 billion opportunity for a business doing about $100 million a year, so obviously a lot of growth potential in those numbers. Maybe before we get to CV3, can you talk about the opportunities in ADAS before you get to that level of domain controller? What are the prospects for forward-facing, single-camera types of wins for you guys?
Right. For ADAS, we have been winning. We have been announcing design wins, but, as our investors point out, with smaller OEMs, not the major OEMs. One major thing we think can help us change that is our announcements with Continental and Bosch. They will not only help us with the CV3 design win pipeline, but also give us credibility on the CV2 side with OEMs who considered us in the past but worried about whether we had scale. We don't have the scale problem anymore, because we are working with the major Tier 1s on these things.
I really think ADAS will continue to be an area where we focus and try to get design wins with the big OEMs. We have announced a lot of ADAS designs; in fact, we announced quite a few just this time as well. But like I said, we need bigger OEM design wins to drive bigger growth on the ADAS side.
Yeah, it seems like the barriers to entry there are coming down faster than I might have thought in terms of doing those types of L2 functions.
Right. I think it's because there are more and more software partners out there who are ready to take things into production. In ADAS, we don't bundle Ambarella software with the chip. In CV3 for Level 2+, we have our own software that we bundle with it for demos, and we also announced that Conti and we are going to do joint software development, which will be available to OEMs who want to take that software into production. Things have changed quite dramatically in the last few years, both in people understanding our benefits and in the maturity of software solutions.
You should still have a pretty strong vector for growth in automotive even before you get to CV3.
I believe so.
CV3. Maybe, for people who aren't aware of it, you could talk about it. Obviously, that's been a lot of your investment over the last several years, and it's a lot of your focus now from the design win pipeline perspective.
Right.
Can you just talk to that product?
Right. CV3 is what we call a domain controller. The idea is that it's not just processing video. It takes in all the sensor modalities required for a Level 2 or Level 2+, even Level 3 or Level 4, car. We do all the signal processing, for both camera and radar, maybe even ultrasonic, fuse them together with very low-level sensor fusion, and try to provide the best perception solution. On top of that, we're going to provide solutions based on neural networks or other algorithms for path planning, driver policy, and safety modules. All of that software stack will be available on CV3.
CV3 is a complete domain controller on both the hardware and the software side. We expect a single CV3 chip to provide all the functions needed for the driving solution.
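To make "low-level sensor fusion" concrete for readers: one common form is projecting raw radar detections into the camera image plane, so range information can be attached to pixels before object detection, rather than fusing per-sensor object lists late. The sketch below is a generic illustration with a hypothetical pinhole camera and made-up radar points; it is not CV3's actual fusion pipeline.

```python
import numpy as np

# Hypothetical pinhole camera intrinsics (focal lengths and principal
# point in pixels) -- made-up values for illustration only.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project_radar_to_image(points_xyz, K):
    """Project radar detections (camera-frame x, y, z in metres) to pixels.

    Attaching per-pixel range from radar to the camera feature map before
    detection is one common "low-level" fusion scheme, as opposed to late
    fusion of per-sensor object lists.
    """
    pts = np.asarray(points_xyz, dtype=float)
    pts = pts[pts[:, 2] > 0]        # keep points in front of the camera
    uvw = pts @ K.T                 # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]   # perspective divide
    ranges = np.linalg.norm(pts, axis=1)
    return uv, ranges

# Three hypothetical radar returns in the camera frame (metres);
# the third is behind the camera and gets dropped.
radar_pts = [[2.0, 0.0, 20.0], [-1.0, 0.5, 10.0], [0.0, 0.0, -5.0]]
uv, ranges = project_radar_to_image(radar_pts, K)
print(uv)      # pixel locations of the two valid returns
print(ranges)  # their ranges in metres
```

A real pipeline would also handle the extrinsic transform between radar and camera frames and fuse features, not just geometry, but the projection step above is the geometric core.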
You've had two really important wins here with both Continental and Bosch. You know, I think there was some press today where Bosch was highlighting.
Mm-hmm.
The importance of the Ambarella relationship. Can you talk about the importance of those wins? Obviously, those aren't things that drive imminent revenue, because they have to turn around and sell it to OEMs. The demos, by the way, were, I thought, really impressive, both the ones you guys did at CES and also the ones that Continental did in its.
Right.
In its demo. Can you just talk to the traction you might be getting with OEMs?
Right. First of all, we definitely talk to OEMs together with our Tier 1 partners today. More importantly, I think what we announced with Conti and Bosch really gives us credibility of size, right? People have been complaining that Ambarella is too small compared to NVIDIA and Qualcomm. When we partner with Conti, that kind of solves the problem, because now we are not standing there alone. We're standing with the big guys OEMs are familiar with, and that adds credibility. On top of that, I want to emphasize the software announcement with Conti again. Conti is no longer just providing a box, a hardware platform, to the OEM.
The software that Conti and Ambarella are going to jointly develop will be either a backup solution for OEMs doing their own software development, or the solution for OEMs who don't have software of their own. With the Conti collaboration, we think we will have a complete hardware and software solution to offer OEMs who need it.
You know, I want to ask about scale, since you mentioned it. You're competing with NVIDIA primarily in L4, L5. You're competing with Mobileye, which is part of Intel, on L2. You're competing with Qualcomm mostly on L2. Those are three of the biggest semiconductor companies on Earth. And this is an environment where everyone's very focused on supply chain, and the automotive customers are thinking about making sure they can get parts. Obviously, you've proven yourself in terms of technology capability, and I would argue that versus any of those competitors you have real leadership, particularly for EV, battery-powered, power-sensitive applications. You have really good technology.
Do you feel like your scale is something that holds you guys back, and what can you do to bridge that gap?
Yeah. I think, absolutely, scale was a problem when we dealt with the OEMs. However, like I said, we're working on that. In fact, there are a few arguments. First of all, for our OEM customers who really want to compete with Tesla, technology matters. I really want to show them how our solution compares to our competitors', and make sure they understand we can offer them better solutions. That's first. Second, and that's another reason we keep emphasizing the importance of Conti and Bosch: the scale they add, and their relationships with OEMs, have been a major factor for us in the last few months since we announced those partnerships.
Those are the two things, definitely. However, I would always agree that if we were on a much bigger platform, we could do even better with our current technology. I'd never say no to that. However, we are not going to put this company on the block and say, "Hey, please buy us." That's not our intention. Our intention is to continue to run this company as an independent company. If an opportunity presents itself, we'll definitely consider whether it's the right thing for us to do.
Great. These wins, obviously, Continental and Bosch are both leaders in these markets, and they've made a large investment to work with you guys. I guess when you think about your funnel, how are you thinking about those things translating to revenue? Should we think of that as upside potential to the funnel over time as you start to convert some of those?
Right. Last year, we talked about a $2 billion-$3 billion total funnel size. With Continental and Bosch, we definitely expect this year to be better. In fact, just working with them will improve the probability of winning new designs. I think that will positively impact the funnel numbers we'll talk about in November this year.
Okay, great. Can you talk about a couple of developments, first radar? I wanna ask about software as well. You know, you acquired Oculii. You have some really groundbreaking technology in the radar space. How do you look at that being integrated into your solutions?
You know, Oculii radar is 4D imaging radar; that's the fundamental technology. On top of that, we announced a brand-new architecture we call centralized radar. It's enabled by CV3 as a domain controller, plus Oculii's unique radar algorithms; the combination of the two enables centralized radar. I think very few people can do this, because of the problem we solved: most 4D imaging radar companies try to use a huge antenna array, 48 by 32 or 24 by 16, to get the accuracy. We don't. We use a 4 by 3 or an 8 by 6 antenna array, but we pass that raw data into CV3 as the centralized domain controller.
We use Oculii's algorithms to extrapolate the point cloud we need, so we can achieve similar performance in terms of accuracy and distance as any other 4D imaging radar. The benefits: first, you can do sensor fusion between the video data and the radar, low-level sensor fusion, which nobody else can do. Two, because you use a much smaller antenna array, the cost and power of your radar module are much, much lower. So we offer not only better performance and accuracy, but also lower cost, because of our approach. This centralized radar is being well received by customers, and we definitely think it will be another way to help us win OEM design wins.
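For context on those antenna-array numbers: if we read the A-by-B figures as transmit-by-receive counts, a MIMO radar with M transmitters and N receivers synthesizes roughly M times N virtual receive elements, and a uniform linear array with half-wavelength spacing has an azimuth resolution of roughly 2/N radians. The sketch below only works through that standard rule-of-thumb arithmetic for the array sizes mentioned in the conversation; Oculii's actual extrapolation algorithm is proprietary and not shown here.

```python
import numpy as np

def virtual_elements(n_tx, n_rx):
    # A MIMO radar with n_tx transmitters and n_rx receivers
    # synthesizes n_tx * n_rx virtual receive elements.
    return n_tx * n_rx

def angular_resolution_deg(n_virtual):
    """Rule-of-thumb azimuth resolution of a uniform linear array with
    half-wavelength element spacing: about 2 / N radians."""
    return np.degrees(2.0 / n_virtual)

# Array sizes mentioned in the conversation: small Oculii-style arrays
# versus the large arrays used by other 4D imaging radar approaches.
for n_tx, n_rx in [(4, 3), (8, 6), (48, 32)]:
    n = virtual_elements(n_tx, n_rx)
    print(f"{n_tx}x{n_rx}: {n:4d} virtual elements, "
          f"~{angular_resolution_deg(n):5.2f} deg resolution")
```

The gap between the small-array and big-array rows is what the software has to close: resolving the angular detail of a large aperture without paying for its antennas, power, and cost.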
Yeah, it's a really unique capability. You know, what's the path to revenue generation? Like, how long does that take?
Well, because it's associated with CV3, it's definitely tied to CV3 design wins, and that will take a while; we're probably talking about 2026 right now. However, in between, we continue to sell the Oculii radar at the edge, like we announced with Lotus and Geely. Basically, that solution runs Oculii software on a TI chip, a TI DSP, at the edge, and passes the object list to the central processor. For each car with a front radar and a back radar, our total content is probably in the $30-$40 range. From that point of view, we already have a roadmap for edge radar, 4D edge radar, and now we have centralized radar.
In the next three years, we're going to monetize the edge radar side while also focusing on design wins for the centralized radar.
You had automotive OEMs that were mezzanine investors in Oculii before the acquisition.
Mm-hmm.
I know there's a lot of traction there, essentially. Can you talk about software? You announced the Continental hardware win, and then a few months later you talked about incremental software wins. Can you talk about the monetization of that? When you look at, say, the NVIDIA-Mercedes relationship, which is really good economics for NVIDIA.
Right.
They're attributing a lot of that to the DRIVE software platform. How big do you think that software can be for you over time?
If you look at the Goldman Sachs report. Sorry to mention that name.
It's okay. I'm still a shareholder.
The software stack is worth roughly $700+ per car in 2025. We believe that's a realistic number for the software content in a Level 2+ autonomous driving car. Of course, that value is going to continue to come down over time. The way Conti and we are working together, we're going to split whatever price we get on the software side. That's in addition to the ASP we get on the CV3 side. That's the first thing. Also, because all the software can be upgraded in the future, we believe that any time you upgrade the software, you probably also get your fair share of the software content of the upgrade. Those are the two ways we're going to monetize our software.
Great. I have one more question, and then we can open it up to the audience. You've put this focus on CV3, obviously a pretty large opportunity. Does that take away focus, though, from some of the other potential wins you have, in CV2 and some other areas? It seems like in 2019 there was a major investment in CV that was three years away from monetization. You're now at the monetization phase and investing in another big wave. I'm very excited about the investment potential you have, but how do you make sure you monetize the investments you've already made?
In fact, you're asking about the trade-offs in our investment. CV3 is a huge investment for us, and we haven't monetized it yet; we haven't seen revenue from it yet. Because of the focus of that investment, we definitely had to give up something. What we really gave up is investing in video processors, particularly on the low-end side.
Mm.
You know, a lot of video processing today, for example for high-definition TV, is done with the $3, $4 video processors we sell. We've decided that's not our business anymore; we don't want to spend another silicon design on it. We still sell into the low-end business with our current chips, at that low gross margin, to try to maintain the business, but we have not taped out a video processor chip targeting a $5-or-below ASP. That's the trade-off we decided to make, because we truly believe that in the next five years the video processor market will quickly transition to computer vision, and in five years, I don't think there will be a standalone video processor market anymore.
We'll probably lose some short-term opportunity, because there's still a lot of low-ASP opportunity out there, but I think our focus will pay off over the longer term.
Great. Do we have questions from the audience?
I would like to know your talent management strategy. Do you think right now is a good time to do some cost saving and lay off employees? Or maybe this is a good time to hire more software engineers?
First of all, we need to watch our costs and expenses, but not at the cost of losing our long-term strategy, right? We need to stay focused so that our CV3 investment and the schedule we've committed to customers will be met. However, we are also watching expenses; if there's any fat we can trim, we're going to trim it. We haven't laid off anybody, because we think our engineering resources are very efficient. At the same time, we are taking other opportunities to cut expenses internally, in ways that are not visible outside, but we definitely want to watch our expenses without hurting our long-term investment.
Thanks, Joe. Hi, it's Charlie from Taiwan; I cover the foundry sector, TSMC. My question is about your foundry strategy, because you just announced that you will use Samsung's 5nm foundry for your CV3. You should be aware that two years ago, some of your peers, Qualcomm and NVIDIA, used the Samsung foundry and suffered a little bit, right? Can you kind of go through why you have this very big concentration with the Samsung foundry?
Right. Because we're a small company, we cannot afford to dual-source each process node, right? When we commit to one foundry for 5nm, for example, we cannot afford to do another. However, whenever we start a new process node, we keep our options open and look at all the possibilities; for example, we're going to evaluate 4nm and 3nm very soon. As for why we've used Samsung extensively for a long time: first of all, as a small company, our priority at TSMC would probably be very low. Also, I think we get better service and pricing, particularly pricing, from Samsung, which really helps us get to a 60%+ gross margin.
That's definitely one of the reasons we want to continue to use Samsung. However, I truly understand that moving to the next process node is risky, because technology maturity is very important, so we need to evaluate properly when we go to 4nm, and we haven't made any decision yet.
Thank you.
Fermi, in the last minute we have, can you talk to this: there have been a number of OEM announcements from your competitors where, when we've dug around, we know there's still pretty material opportunity for Ambarella. I don't want to put you on the spot by mentioning any individual names, but when you look at these high-profile wins, does that still leave opportunity for Ambarella at those customers, and what does that opportunity look like? Is it.
Right.
2026 CV3 stuff, or is there stuff that you can do along the way?
For any domain controller opportunity, we're talking about 2026, because that's how long it takes. I also believe a lot of things are still open to us, because if you look at the first-generation domain controller solutions people selected, there are obvious problems, right? The biggest problem is power consumption so high that you need water cooling, and that is a fundamental problem that I don't think NVIDIA can address yet. Without addressing that problem, I think it's natural for customers to start looking at other possible options. Evaluations of other solutions are ongoing, and we definitely think we have a chance to bid on those projects. For ADAS, it's probably a totally different dynamic, right?
There are a lot of people using a black-box solution who want more flexibility, which I think we can offer.
Great. Well, we'll wrap it up there. Thank you so much for your time. Thanks.
Thank you.