All right. Excellent. Good afternoon, everyone. Welcome to this session. I'm Vivek Arya from Bank of America's Semiconductor Equipment Research Team. I'm really delighted to have the team from Ambarella join us this afternoon: Fermi Wang, CEO, with me; John Young, CFO; and Louis Gerhardy from the IR team as well. I'll go with the usual fireside Q&A format, but if you would like to bring anything up, please feel free to raise your hands. Fermi, warm welcome. Really glad that you could join us. You just reported earnings, and I'm sure we'll get into the nitty-gritty of the quarter.
Fermi, I think it'll really help the audience for you to set the stage on what Ambarella used to be, what it has become, and how your strategic vision has evolved in the process.
Right. Thank you, Vivek. First of all, thank you for coming to meet with us today. Ambarella is 21 years old, and we started with a pure focus on one idea: how to enable personal video content. That was in 2004. There was no iPhone. There was no YouTube. Enabling personal video content is a difficult task. We built really proprietary video processing technology so that people could record high-quality video content on a very cheap video device. That was the foundation of the company. We basically got to profitability just with that technology. Very soon after the iPhone came out, it became very clear that capturing video had become a commodity, and we needed to start adding value by going to different markets.
That is when we realized that in addition to the consumer market, we needed to go to security cameras, drone cameras, and other cameras like automotive cameras to differentiate the product portfolio. Which we did. That's the reason we went public in 2012. The most important thing that happened that day, in addition to raising the money, was identifying that we needed to add a brand new technology to our video processing, which at that time we called video analytics or computer vision. Basically, you can analyze the video content in real time as you capture the video. Based on that idea, we started developing our computer vision technology, or AI for video. It took us a few years to get to the first production.
We started ramping up our second-generation computer vision technology in 2018. In the last six years, our AI revenue went from zero to 75% of revenue last quarter. If you look at that, it is roughly a 60% CAGR over five years in AI revenue growth. From there, we identified more opportunities beyond CNN-type AI applications. We identified autonomous driving, using video perception and radar perception together. In addition, in the last two years, when Gen AI popped up, we started looking at how to apply our CV4 architecture to those more advanced AI workloads. Today, 70% of our revenue is from the IoT side and 30% from the auto side, but the majority of our revenue, close to 80%, comes from our AI processors.
You can see that we transitioned from a video-processing-only company to AI for video. Going forward, the technology we focus on is going to be how to continue improving our AI performance, for video data at the beginning, maybe moving to other data types later. Purely focusing on the video data type gives us a lot of growth in edge AI, both for endpoints like cameras and for edge infrastructure, the boxes that aggregate multiple edge endpoints and feed multiple video streams into one box where a more powerful chip can analyze the video. AI is going to be our most important technology driver and also our most important revenue driver.
Got it. Now, edge AI. You have a very interesting presentation on your website where you lay out the core, the network edge, and the application edge. What does edge AI mean for you, Fermi? The reason I ask is that, yes, there is an understanding that you can't just take products designed for the core and put them on the edge. But you also have a number of companies in the smartphone industry and in the conventional IoT industry who can participate in that edge. What does edge AI mean to you? How is the competitive landscape? Who is your true competitor in edge AI?
It's a great question, because there are a lot of different definitions of edge AI. For me, edge AI means that for a given application, the majority of the AI processing happens at the edge. There are other devices, like your cell phone, where you collect data but pass the majority of it to the cloud, and the majority of the AI happens on the cloud side. Although the data is collected at the edge, I won't call that an edge AI device, because it is really just data collection; the AI happens on the cloud side. The way I define edge AI is that the majority of the AI processing happens on the edge device.
Got it. Okay. And then competition? Who do you see as competitors in edge AI?
Today, in edge AI, NVIDIA of course has some edge AI devices like Tegra. Qualcomm has a lot. Then you can talk about another 50 startup companies that have been funded by VCs. It is a crowded space. We have made it clear that since 2018 we have shipped more than 32 million units of AI processors into edge AI devices. I think that one data point alone puts us in a unique position competing with NVIDIA and Qualcomm.
I see. A few months ago, everyone in the investment community heard of DeepSeek, in a really loud way. I'm sure we could debate the pros and cons, but what has that announcement meant for Ambarella? What has it done, positive or negative, for you?
Right. I think it has had a major positive impact on us. Before DeepSeek, when you talked about reasoning models, everybody just assumed, including myself, that it had to happen in the cloud. There was no chance it would happen on the edge, because of the performance requirement. What DeepSeek really showed me is that they have multiple models, starting from 1.5 billion parameters at the smallest, up to the 400-something billion parameters. At ISCOS a few weeks ago, we showed that our CV75, which is a two-watt chip, is capable of running the DeepSeek 1.5-billion-parameter model without any problem, at really good performance. The CV72, our four-to-five-watt chip, can run the eight-billion-parameter model.
Just those two things show you that a very cost-effective and power-efficient solution can run reasoning models in a way that was not possible just three months ago. I think that creates opportunity. I still do not know what the best application is for a DeepSeek model running at the edge. We will figure that out; it takes time. But just being able to show people that such a powerful model, which previously could only run in the cloud, is now available at the edge creates opportunity for us in the future.
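To put rough numbers on the claim that a 1.5-billion-parameter reasoning model can fit on a two-watt chip, the key lever is weight quantization: shrinking each parameter from 16-bit floating point down to 8 or 4 bits cuts the weight memory footprint and bandwidth proportionally. The sketch below is generic back-of-the-envelope arithmetic, not Ambarella's toolchain; the bits-per-weight choices are illustrative assumptions, and real deployments also need memory for activations and the KV cache.

```python
# Back-of-the-envelope weight-memory estimate for edge LLM deployment.
# Illustrative only: ignores activations, KV cache, and runtime overhead.

def weight_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """Gigabytes needed to store the model weights alone."""
    return num_params * bits_per_weight / 8 / 1e9

# Model sizes mentioned in the discussion: 1.5B and 8B parameters.
for params, label in [(1.5e9, "1.5B model"), (8e9, "8B model")]:
    for bits in (16, 8, 4):
        gb = weight_memory_gb(params, bits)
        print(f"{label} at {bits}-bit weights: {gb:.2f} GB")
```

At 4-bit weights, an 8-billion-parameter model needs roughly 4 GB for weights alone, which is why small distilled models become plausible on low-power SoCs while 400B-class models remain cloud-bound.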
Got it. Who does the optimization for this large number of large, medium, and small models? Is that work Ambarella has to do? Is it work the customer does after getting your silicon? Whose job is it to do all that work?
Right. Take a public model that has been generated by one of the big companies. People want to retrain it. That retraining usually happens with our customer. We can do retraining, of course; we are capable of it. However, if we tried to bundle a retrained model and sell it to a customer, we would basically be taking away the differentiation our customer wants. The way we position this, our customer should do all the retraining.
Right.
Our job is to help them port that retrained model onto our chip and run it very efficiently. We need to provide compiler tools that can compile the model, whether it is a CNN or any kind of Gen AI model, and efficiently convert it to a binary that runs on our chip. That is what we should do. After that porting, it is our job to work with the customer to optimize the model. That, I think, is our job.
I see. One other industry question, Fermi. Some of the edge AI companies we speak to say, "We can do the processor, but by the way, we also have a way to bundle the sensor that is getting the information." Some will say, "I have a great connectivity portfolio, because ultimately this thing has to go back and forth to some other place." How do you address that bundling argument? Is that something Ambarella will need to do, develop a connectivity portfolio or a sensor portfolio? Or do you think staying best of breed is the right approach for you?
My gut feeling is that people wanted to bundle everything together in the past because they wanted to integrate those functions into a single chip. I do not think that is possible. You cannot integrate a sensor or connectivity into a five-nanometer chip cost-effectively anymore. What they are really saying is that they have a sensor, a processing chip, and a connectivity chip, and they bundle them as a package sold at a discount to a customer. It is basically a business deal. I do agree there are people trying to use a business deal to sell the whole package. From that point of view, you will not get the best technology. Or, if they already have the best technology in all three, good for them; they will win the business no matter what.
What we run into is that we always provide the best technology on the processing side: video processing in particular, power efficiency for AI, performance for AI, and different bandwidths. All of that is our strength. We are really competing with people trying to bundle. Most customers today, at least, are trying to get the best technology. Cost is always an issue, but technology has to be their first priority. From that point of view, although we run into all kinds of bundled solutions, I think we still have no problem selling our AI processors to our customers.
Okay. A few near-term questions, and then we'll come to the longer-term dynamics. You reported earnings last week. Q1 was very good results, Q2 was good guidance, and I think you raised the guidance for fiscal 2026. The stock had a little bit of a mixed reaction, let's call it that. What was your impression of your earnings, and how do you think about the second half of the year? And if you want to comment on why the stock had this kind of mixed...
I won't call a 10% drop a mixed response. First of all, our Q1 was 3% better than guidance. We guided Q2 up 5% or 6%. Then we increased our annual guidance by another 5% at the midpoint. From the financial performance point of view, I don't see anything wrong with it. If you look at our script, we talked about new markets and new chips, and we continue to deliver on time. Overall, from the script point of view, I don't think we showed any weakness. Now, what's the theory behind this 10% drop? I heard a few of them, but there is one I want to refute, to really make sure people understand.
There's a thesis that says that because I didn't talk enough about CV3 or give autonomous driving updates in my script, people think we are defocusing our autonomous driving investment and focusing more on edge infrastructure. I just want to make clear that is not the case. In fact, in the last three months, we continued to invest heavily in development, customer engagement, and design and RFQ activity. We didn't give an update because in the last three months there was no major development from the customer point of view for us to report. At the same time, the new announcement on the edge infrastructure side should not be a surprise, because ever since edge AI came out, we have talked about using the N1 and N1-655 to address the edge infrastructure opportunity.
I do not think it is a brand new thing that people should call a defocusing. I just want to make clear that CV3 for autonomous driving continues to be a very important direction for the company, while we also try to leverage the investment we put into our third-generation CV3 architecture and identify new applications that can take advantage of that technology, so that we do not need to add much more OpEx and can still enable brand new applications. I think that is really the best approach we can take.
Got it. On the second half of the year, I think you're applying your typical conservatism. Is it typical conservatism, or is there something in the macro environment that causes you to be more conservative than usual?
I think it's conservatism that really reflects the reality. There is a huge tariff discussion. On July 2nd, people are going to receive the letters and find out their tariff rates. If everything goes smoothly, that would be great. If not, it's going to be an ugly situation immediately. What I am trying to say is that because we don't know, we baked some conservatism into our second-half guidance. What that means is there is more upside than downside for us. If it turns out that tariffs are not an issue after July 2nd, I think we can deliver a better number than our current guidance. That's what we are trying to say.
Got it. Did you observe any pull-ins in the first half at all?
To tell the truth, when you see the kind of strong financial performance we have, you have to suspect there are some pull-ins. We engaged with our customers aggressively to understand their positions. None of them said they were doing pull-ins; most of them are saying, "Hey, you know what, we're sitting here waiting to see what happens with the tariffs." From that point of view, I suspect there are some, but not at a major scale like what we saw three years ago.
I see. One thing, Fermi, you brought up is that there have not been as many updates on the automotive side. We spoke with Continental recently, and they're still very engaged with the platform. To your point, things are happening; it's just that end-customer progress has been a little slow. Where do you see it? When do you think the automotive pipeline will start to get re-energized?
First of all, our investment continues. Our engagements with Continental on the current design wins, for example Aurora, Cadillac, and the other design wins we have already announced, are all in progress. I expect that we're going to deliver revenue in 2027, as we announced in the past. The key right now is to continue to focus on the design wins and RFQs that we have. The only thing we're trying to say is, look, we lost a major design last quarter, and we had very high hopes before that. After we lost it, that became such a negative for us. We are starting to rethink how to communicate our design activities to investors without setting expectations and failing to deliver.
I just don't think that's the right way to do it anymore. We need to figure out the right way to communicate to investors in the future.
Got it. Okay. Let's say for the next one to two years edge AI and IoT stays the growth driver. What are the top three or four applications driving it? And within that, could you give us a sense of how many of them are accretive to your average selling price right now?
First of all, a lot of new edge AI applications are starting to come into our revenue pipeline. Security used to be our biggest one. Now we are also talking about video conferencing, portable video, wearable cameras, and now edge infrastructure. All of them can be meaningful revenue for us. All of them take advantage of our third-generation AI architecture. The ASPs for all of them are going up. Just to give you an idea, for video conferencing, the first chip we sold there was a video-only, human-viewing processing chip at $9. Today, the CV5 that people use for video conferencing sells for between $25 and $45, depending on volume. The ASP growth is significant, because of the AI performance you are adding.
The CV5 is still second-generation CVflow. When you go to third-generation CVflow, the CV72 and CV75, we are going to add the advanced models we just talked about, vision language models and reasoning models, and maybe more will move onto that platform in the future. I expect to see more applications jump up. The ASP is really about how much AI performance we continue to offer our customers, and I expect our ASP to continue to grow. Today, our average ASP across the whole company is $13 to $14. Our CV5 sells anywhere between $25 and $45 for second-generation CVflow. The CV72, on the third-generation architecture, is also in a similar range. For CV3, we talk about $100 to $400. For the N1-655 for edge infrastructure, we talk about a price in the low hundreds of dollars.
You can see that our ASP is going to continue to grow, because AI performance demand is going to keep driving up our performance requirements and therefore our ASPs.
Got it. You still have some legacy video processor business. How should we model the decline of that business over the next few years?
Last quarter, only about 25% of our revenue was the human-viewing, video-processor business. We expect it to have a very long tail and gradually dwindle. You should maybe assume another three or four years of decline. At a certain point, 99% of our revenue will come from AI-based products.
Okay. Can you give us a sense of your exposure to China, on both a build-to and a ship-to basis, so we get a true measure of what that exposure is?
Right now, 15% of our revenue is consumed domestically in China. I also believe that for products not consumed in China, the majority of our customers already manufacture outside China. Our exposure there is limited.
I saw in your company presentation this fiscal 2031, calendar 2030 view, almost, I think, about $13 billion in the sum you have laid out. Is that something, Fermi, that you can do organically? What will it take for you to go from a few hundred million to a billion-dollar company?
The most important thing is that we need to penetrate with CV3, because if you look at the next two years, the growth will come from the IoT side; we talked about that. If you talk about three or four years out, the biggest opportunity is securing a major design win with CV3. We talked about how the design we lost last quarter was close to a billion-dollar opportunity for us. A win like that really solves a big problem for us. Moving forward, it is really about expanding edge AI from the edge devices and endpoints to the edge infrastructure. If we stay independent and want to grow to a billion dollars, acquisition is probably unavoidable.
At the same time, to your point, being part of a much bigger, scaled platform would definitely help us get there even quicker.
Right. The opportunity you had to forgo that you just referred to, what was that due to? Was it just company scale? Was it resources? What do you think drove it? And more importantly, is that a persistent issue, or was it a one-off?
One thing I want to point out is that our ASP continues to grow. One reason is that we really focus only on the mainstream and high end. You could also point to the fact that if we had enough resources, we should be able to win the low end too; there is no reason to leave the low end. But at our scale, with our R&D investment, we believe the best way to invest is to focus on gross margin generation, and therefore operating margin generation. From that point of view, we did walk away from revenue opportunities we could have had, because we just do not focus on lower-margin business. That is definitely something that could be solved with a bigger scale of company.
I see. From a supply chain perspective, what issues could impact your cost structure if the tariff situation turns out differently than it is right now? Are you sufficiently diversified?
Yes. In fact, we are at five nanometers and two nanometers. We cannot diversify from the foundry point of view, because only the largest companies can dual-source at two or five nanometers. We cannot; we have to pick one. On the geopolitical situation, yes, we can protect ourselves and protect our customers: we are using Samsung, which has foundries in Korea and also a foundry in Texas. From a diversification and geopolitical point of view, our supply chain has been proven with a lot of our customers in terms of robustness. The true sense of diversification is having a dual source on every silicon node, which I do not think we can do without scale.
I see. The gross margin point you brought up is interesting, because your business has been consistently above the 58%-62% target. Is that because you're walking away from business? Is it a product gap? What is helping you stay above the target? Or should the target be revised, given how persistently you've been above it?
Right. Throughout our history, we have seldom focused on the low end, because trying to compete on price with companies that only pay attention to price is a bad deal. Throughout the company, we have downplayed low-gross-margin business. That said, some customers, for example some of the larger security camera customers, want everything from the low end to the high end. We are happy to supply them across their platform. We definitely take a lower gross margin on the low-end side just because we want that business, and we still have a mix from low end to high end. But if a customer says, "I do not care about the mainstream, I just want a better price on the low end," we tend to walk away from that.
Right. When I look at where a lot of our peers are in terms of modeling, people always model your gross margins coming back into the target range. But from what you're describing, if ASPs continue to do quite well, is there a reason why gross margins would get back to your target range? Or do you think they can consistently stay above it?
In fact, we are in the target range of 59% to 62%, but we are guiding gross margin a little lower as a trend. The reason is that our competition, particularly on the auto side, is Qualcomm and NVIDIA. I do not expect that they will be nice to us in terms of price competition, so we have baked in that potential competition from them. On the IoT endpoint device side, I think we have a track record and a product portfolio that can protect us.
I see. On R&D intensity: Ambarella has always been a company with a very strong focus on R&D. That also means R&D spend is out of balance with the kind of sales growth that you're seeing. Do you think it's just a matter of time? At what point do you think Ambarella can be a company that is growing earnings on a consistent basis?
First of all, if you look at the last 10 years, we invested in CVflow for CNN-type neural networks, and then we invested in CV3 for autonomous driving. Those two generations of CVflow definitely took a huge amount of investment. Moving forward, in the near future, our job is to leverage that investment by focusing on applications that can take advantage of it. We are not looking for another market that would require a huge amount of R&D expense. Instead, I believe our AI architecture will allow us to tap into new markets using the current architecture and current software. From that point of view, I hope, and we will continue, to show more operating leverage on our bottom line.
Okay. Outside of CV3, what are you seeing in the automotive market right now? There's a lot of concern about cyclical issues. What are you hearing from your automotive customers?
We see the same thing. The market has different problems: financial problems and inventory problems. Because of that, we do see people slowing down their investment. Although everybody is still committed to level 2+, the investment cycles and decision cycles have definitely pushed out. That's one thing we see. The other thing is that instead of going to really high-end level 2+ or level 3 cars, they focus more on highway-level level 2+. Basically, they have changed their business model to focus on more value-based engineering and try to get to market faster so they can get to profit. I do see that change, particularly on the Western side.
I see. Does your opportunity in the car change depending on the modality? For example, if people are using just cameras versus cameras plus LiDAR plus other sensors?
I do not think it changes. It is hard for me to believe that when you go to a higher level of autonomy, you can use cameras only. A domain controller that can integrate multiple sensor modalities continues to be our thesis, and we believe we can continue to benefit from there.
Okay. And then finally, Fermi, as you look over the next year, what do you think are possible upside drivers to the guidance you have given? I understand macro is what it is. Is there a certain market, customer, or application that you think can drive upside to how you think about your fiscal 2026 right now?
A couple of things. First of all, we talked about a lot of green-shoot opportunities at the edge endpoints. Those can be drivers if the volumes go up, for example in wearable cameras. We are starting to see a lot of opportunity in wearables, because it's not just police officers wearing them anymore: security guards, service people, 7-Eleven clerks put on wearables so they can document all the events that happen while they provide services. You can imagine that if that kind of volume goes up, it can be a driver. For us, it's all about volume. The other thing is edge infrastructure. We believe that revenue is going to start in the second half of next year.
If we hit the market with the right customers, it can be a revenue driver.
The server-like product.
Yes, that's a server-type infrastructure product.
Do you plan to sell the whole box, or do you plan to sell just the CPU in the box?
Right. Just like with a camera, we provide a complete reference design. We show a camera to a customer with our chip in it. The customer looks at the camera and says, "Great, that's a good example," and then builds their own camera. For edge infrastructure, it's the same thing: we are going to build a complete box, including the application running on top, and give that reference design to our customers. They can find a manufacturer, or manufacture the box themselves. On top of that, they will remove the layers of software we provide and replace them with their own models for their own applications, so they control all of the value added. That is the business model we are looking at.
Makes sense. With that, Fermi, thank you so much for your time.
Thank you.
Really appreciate it. Really enjoyed the discussion.
Thank you very much.
Thank you.