Hey, everyone. Well, welcome back to our now third session of the seventh annual AV and Auto Tech Forum. I am Doug Dutton, VP on the autos team, here with Chris McNally, Senior MD, Head of Global Auto, and Louis Gerhardy, VP of Corporate Development at Ambarella. Louis brings over 30 years of experience as a former Wall Street analyst and in the semiconductor industry. Before joining Ambarella, he played a significant role at Freescale-
Mm-hmm.
As a senior semiconductor industry analyst at Morgan Stanley, Louis, thank you for joining us today.
Yeah. Thank you, Doug and Chris. Always enjoy the discussions, and thank you for your help over the last year. As Doug said, Louis Gerhardy, VP of Corporate Development. I report to Fermi Wang, who is our CEO and co-founder of the company, and about two-thirds of my career has been in the industry in strategic finance roles. I started out in business development and sales and marketing for a Taiwanese firm in the late eighties; my industry career also includes Freescale, and I've been at Ambarella for about seven or eight years. I was also a sell-side analyst covering the semiconductor industry for 11 or 12 years. The one takeaway you should leave with from this presentation, and I'll say it up front to make sure the message is clear, is that Ambarella has been going through a transformation.
In the last five years, we've invested in AI, and at this point in time, 70% of our revenue is coming from AI inference at the edge. We're not data center. We're not hyperscaler. And currently, about 30% of that is coming from auto, and 70% is coming from the IoT markets. Just covering some facts on Ambarella first. Company was founded in 2004. Our IPO on the Nasdaq was 2012. We have about 1,000 employees around the world, and about 80% of them are engineers, and 80% of the 80%, 64% of the total company, are software engineers.
And they're working on not only the firmware and algorithms embedded in the chips, but also the compiler, the tool that's used to program our chips, the tool that our customers use. And our story is about perception. We have a twenty-year heritage in perception, and that's why we're here today at an auto conference, even though two-thirds of our revenue is in IoT: perception is about collecting data. And we have a reputation, over twenty years, for collecting data no matter how difficult the environmental conditions are. For example, too much light. Is the sun shining into the sensor? Is it too dark? Are there shadows?
AI is only as good as the data that you collect, and our heritage in perception began with human viewing applications, on the bottom left side of this page. For the first fifteen years, Ambarella perception technology was applied to human viewing applications at the edge, frequently in the endpoint device. And keep in mind, to have a competitive technology there, we frequently had to operate off of a battery in a handheld device at the very edge of the network, so having that perception capability is critical. Then, starting five years ago, we began to generate AI revenue. What that means is that we took that perception engine, the data collection engine, and we integrated a proprietary deep learning AI inference processor into the chip.
You have a single chip that can do computer vision, or now it can do more advanced AI networks. You can see how we've leveraged our perception expertise into these new markets depicted on this page, where human viewing-only applications are becoming a smaller and smaller part of our business, and we're enabling machines to either partially autonomously or fully autonomously perceive the world and make decisions on their own. Again, the data is collected in the perception engine, and the AI inference processor is making the decision with increasingly sophisticated AI networks, which we'll talk about. In Q2, we reached a point where, again, 70% of our total revenue is coming from AI inference.
Cumulatively, we've shipped 25 million AI inference processors, and about a third of those have gone into the automotive market, and two-thirds have gone into IoT applications, in particular, enterprise class applications. And so to answer the question, how are we differentiated and why do we win? We're differentiated by our approach. Our approach is something we call algorithm first, and that means before we lay out transistors, we understand the algorithms, the software that we have to execute on the chip, and then we build an architecture around it. This is a purpose-built chip that is very difficult to do. You need a significant amount of know-how.
It might sound simple, and in fact, many of our competitors will reuse technology from one market and apply it to the markets we're serving, and there are some disadvantages when that occurs. So what does this algorithm-first approach yield, and why do we expect to win? Well, it yields superior efficiency, and we measure that in terms of performance per watt, or you could even say performance per dollar. Our chip is just very efficient relative to general purpose technology that was developed for one market and then applied to these edge markets that we're gonna talk about today. This is our portfolio of chips, and the top row is the latest products. The first observation would be, they're getting bigger. Why are they getting bigger?
The primary reason is that the AI inference processor represents a majority of the surface area of our chips. And why is it getting larger? It's getting larger because our customers are moving to higher resolution cameras, which need more AI inference processing. They're adding more sensors per vehicle. That needs more AI inference processing. We're taking on more function. We're not just doing perception now, but we're doing the fusion and planning and control layer as well. That needs more inference processing. And then, perhaps most importantly, the AI workloads that we're running on these chips are becoming more sophisticated. Today, all of our AI revenue is coming from computer vision that uses CNN networks.
But the top row, all of those products, in addition to doing CNNs, can also very efficiently execute transformer networks, which is the type of processing required for advanced AI networks, whether it's vision language models or CLIP or GenAI and other networks like that. And so to describe what's on this page, the bottom row would be our legacy. Those are human viewing-only chips. They're simply doing the perception for human viewing. That's about 30% of our revenue, and we expect that business to have a negative 20% five-year revenue taper. We haven't developed a new human viewing-only chip in three years. Every product we've done in the last three years has had AI in it, and going forward, every product's going to have AI in it and more advanced network capability.
The second row would be our first generation of AI inference processors. Again, it integrates the perception engine that's on the bottom row, but the chips are a lot bigger because you have the AI inference processor integrated into them. Those chips do what you'd call computer vision, running CNN-type networks, and 70% of our revenue is coming from that row. The top row, those products, are all five nanometer. None of them are generating revenue yet. But in addition to doing CNN networks, they can also do the advanced AI networks because there's transformer support on those. And we expect our first revenue from that top row to begin in three or four months with the products on the far left, the CV7 family.
And so to put this all in perspective, remember at the beginning I said you have to walk away understanding we're a rising ASP story. The blended ASP last quarter for all the products on this page was $12-$13. The second-gen row ranges from, let's say, $10-$75, and then in the top row, the ASP is gonna range from $25 to more than $400 for those chips. And so now you can understand, if we execute successfully, especially on the top row, why we're a rising ASP story. This is our SAM.
Won't spend too much time on it, but one of the reasons we're here is that, while auto was only 34% of our revenue as of last year, and it's about 30% as we speak, it becomes a majority of our opportunity five years from now. The number one reason for that is the demand for this more sophisticated AI inference processing in a vehicle. L2+ in particular, as we'll talk about later, is one of the markets that's driving that. The mega trends that we're addressing would be safety, security and automation, although maybe before automation we should say partial automation, 'cause that's where the market is today.
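To make the rising-ASP arithmetic concrete, here is a minimal mix-shift sketch. Every ASP and mix percentage below is invented for illustration, not Ambarella's actual data; the point is simply that a blended ASP climbs as higher-priced next-generation parts take unit share.

```python
# Hypothetical mix-shift sketch: all ASPs and unit-mix figures below are
# invented for illustration, not actual Ambarella data.
asp = {"human_viewing": 8, "cv_gen1": 15, "cv3_top_row": 150}  # $ per chip

mix_today = {"human_viewing": 0.55, "cv_gen1": 0.43, "cv3_top_row": 0.02}
mix_future = {"human_viewing": 0.20, "cv_gen1": 0.50, "cv3_top_row": 0.30}

def blended_asp(mix):
    """Unit-weighted average selling price across the portfolio."""
    return sum(asp[k] * mix[k] for k in asp)

print(round(blended_asp(mix_today), 2))   # low teens with these numbers
print(round(blended_asp(mix_future), 2))  # several times higher
```

Even a small unit share for the top-row parts moves the blend, because their price is an order of magnitude above the legacy parts.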
As I said earlier, every new product that we introduce will have our AI inference technology in it, and it is the foundation for our growth. Let me talk about our auto business. The majority of our revenue today is coming from the first three columns. Let's call that safety, or ADAS. Oftentimes, we'll have a single function running on one chip, sometimes multiple functions. When I say that, I mean, for example, recently we announced a win with Samsara, and one of the products we're in can do driver monitoring and the front-facing camera. That's a multifunction product that's running on one chip. That would be in these first three columns. Other applications there would be driver monitoring, electronic mirrors, black boxes. Front ADAS, we have some exposure to, but it's very small.
Our major customers today are depicted on the bottom of this page. I'd say Japan, as a region, in any given quarter, is our largest market. Again, in the first three columns, Toyota, Honda, and Nissan are using us in vehicles, and we sell to them, not directly, but through tier ones like Denso, Panasonic, or JVC. The big opportunity for Ambarella, the one that most investors are focused on, would be L2+. This is where our CV3 central domain controller comes into the story, and that's a product that sells for $50-$400 each. L2+, the way I look at it, is maybe 3% of all the cars produced this year-
Yep.
Something like that. It's still a very small number. Our major goal right now is to come into this market as L2+ begins to take off, and the way we would do that is with CV3. And if you look at the SAM numbers here, you can understand why: going from around a $500 million SAM opportunity now to more than $2.5 billion in the next four or five years. So that's what we'll spend more time talking about. That is our major focus. And where we are is that we've got some commercial vehicle wins. We've announced one through Continental. We can't say the OEM name, but I think Chris knows. Chris and Doug know.
Kodiak Robotics, and the first passenger vehicle customer to announce was Leapmotor in China, which is one of those Chinese OEMs that's moving so fast and looking at the export market. Also, I'll mention that our CV3 and our solution for L2+ and above is a platform, and so we also provide radar perception software that runs on CV3, and, starting at L2+, we provide a software stack for L2+ applications. Our first partner there is Continental, and they do have a customer that's gonna go to SOP calendar year 2027 with some of our software and some of Continental's software to serve this OEM customer. We did have an announcement this morning about our radar, so I'll briefly mention it, but maybe I'll back up to talk about the radar market.
We're not in the traditional radar market. That's TI, Infineon, NXP, companies like that. We're coming into the market with two very unique technologies. Number one, this press release is about Lotus using our 4D Imaging Radar, which basically gives you a point cloud that's twice the density of traditional six-transmit, eight-receive antenna arrays. It's a very efficient way to collect data with radar perception, using 4D Imaging Radar technology. It's the next generation. The other major thing we're bringing to the radar market, and this is new also, is centralized radar processing. Today, all the cameras in a car, say there's seven or eight, are processed centrally in that central domain controller. For radars today, the perception processing is done out on the edge, in the satellite radars.
And what we're proposing is to do that digital processing in our CV3 chip that's already in the vehicle, which is a much more efficient way to do it. Now, both of these are very new in the market: the 4D imaging radar that we bring, and also the centralized radar processing. Two new innovations. In this case, Lotus is just using the first. They're using the 4D imaging radar software from us. So we're just selling software in this case, and our software is running on someone else's chip. Lotus did say that they're evaluating our centralized radar processing, which would mean they'd run their software on our chip, and so that is what we're doing in the radar market. We do provide, once a year, an update on our automotive revenue funnel, and last time we updated it, it was $2.4 billion.
This is a funnel. It's not really comparable to what other companies put out there, for a couple of reasons. One is that this is a net figure. It's not a gross, "Here's our opportunity, we'll be some part of that." This is our estimate of what our auto revenue will be over the next six years only, nothing beyond that. The blue is the part that we've won already, where we've been notified we won, and so the blue would be based on the customer forecasts we get. The orange would be probability weighted: for the things we're bidding on, what is the chance we're gonna win? That net figure is what's put into this. This, by the way, is only safety, ADAS, autonomy. We are not in infotainment. We are not in 4G, 5G. We're not in Wi-Fi in the car.
We're not in Bluetooth or infotainment. Many of our competitors will put those numbers in there, and so they'll show a figure that's much bigger than this. This is just the active safety domain and we'll be updating this by the end of our fiscal year. Not gonna spend too much time on this, even though it's 70% of our revenue. This is our IoT business. Most of it is driven by, you know, enterprise CapEx, and our customers are listed on the bottom of this page. The major market for us is enterprise class security. You can see Motorola Solutions, i-PRO, which is formerly Panasonic in Japan, Hanwha in Korea, Axis and Canon in Sweden and Japan.
Those are some of the major customers for this enterprise industrial class product, and it's a camera. Imagine it sits outside in harsh, extreme weather and operates twenty-four seven. Historically, it was just doing human viewing, but now, in addition to the human viewing, it also has all of this AI inference capability in there. So we're seeing, and you can listen to our customers talk about it, this market proliferating from pure human viewing, a security guard looking at the screen, to applications that were never imagined before. As you apply these advanced AI networks, we're seeing the utility and the transformation of what used to be a very simple market into a very exciting one. We also sell into the home security market, but they don't require a sophisticated AI workload, so that's not as important of a market for us.
But we are, this year, seeing very nice growth in our other IoT category, and it's a lot of different products. For example, in there you'd have enterprise class video conference systems from HP Poly. You'd have access control. Of course, robotics is in there. That's a very nice market for us, in particular mobile robotics, because we use the same CV3 chip that we sell into the automotive market. And even the handheld portable camera market is adopting AI in some very clever ways, which we can talk about if you want. So really, in conclusion, we're experiencing a strong recovery from the inventory correction. I think this is happening because of our investment in AI. We're recovering much faster than our peers, sooner than our peers.
For example, at the midpoint of our guidance for Q3, which is an October quarter, our revenue is gonna grow about 24% sequentially, and about 56% on a year-over-year basis. And we have good visibility into our lead times, which are currently six to seven months. What is driving it would be new projects at our customers, in particular new projects that are using our newer, more expensive chips. And that's why we're able to confidently say we expect to grow our auto business this year, and we expect to grow it next year: because off of a base of $77 million in auto revenue last year, we're not really exposed to global SAAR and what's happening to production. Wins like we've talked about recently with Rivian or with Samsara or with Honda are enough to cause us to grow.
Now, to put it in perspective, consensus for this year has us growing 17%-18%. Next year, they have us growing about 18%. What we're saying is that our IoT business will be above that. Auto will grow, but not quite at those levels. So with that, I'll stop, and we can-
Excellent.
Yeah.
Louis, this is a great overview. To start off the question, maybe a little bit on the competitive landscape.
Mm-hmm.
You know, from our auto seat, we always see larger compute platforms as a result of internal R&D programs, the likes of Mercedes, Tesla, Rivian. And maybe you could talk a little bit about... You mentioned algorithm first, but what differentiates you from other players like the NVIDIAs and Qualcomms? What is the selling point in that competitive market?
Mm-hmm
... you know, for larger centralized compute?
Yeah. In both those cases, their technology originated for other purposes and has been applied to this market.
Yep.
And that's going to yield some differences in what's called the throughput-
Mm-hmm.
... of the chip, and how efficient it is and how much power it consumes. And that all does come back to the approach that we talked about. And so at CES the last couple of years, and we'll do it again this year, we'll show the same network with the same video stream for both chips, side by side, on the wall with a power meter attached. And we'll show the efficiency advantages that we get, performance per watt, for example, running on an apples-to-apples basis, common networks. It's that efficiency that looks good on a chip-to-chip basis, but when you look at the bill of material, it's even more profound, because it gives the OEM the flexibility to make some decisions and differentiate. For example, if we don't need liquid cooling -
Yep
... and someone else does, that's 70... I don't know how many pounds of weight.
Yep.
A lot of weight that can make a difference in range.
Which means everything in an EV, yeah.
Yeah. And the power savings can be applied to either extending range or using fewer batteries in a car. And what is that advantage? It's in the single digits, but still, if you can increase your range 3% by an innovation like this, that's a big deal. We also have certain advantages with DRAM and how we utilize it, which leads to other advantages. And so we're at that point in time where we weren't first to market, because remember, to develop that optimized silicon, it took more time to create such an efficient solution. Meanwhile, the more general purpose products were in the market, and so we need to go to an OEM and convince them of our advantages-
Mm-hmm.
and then they'll need to port, or we together will port the software onto our chip, and that's where we are in the market right now.
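The single-digit range advantage mentioned a moment ago can be sketched with a toy energy model. Every figure below is an invented assumption for illustration only, not a measurement of any real vehicle or chip.

```python
# Toy EV energy model; all figures are illustrative assumptions, not
# measurements of any real vehicle or compute platform.
BATTERY_KWH = 75.0    # usable pack energy
DRIVE_LOAD_KW = 15.0  # average propulsion + accessory draw at highway speed
SPEED_KMH = 100.0

def range_km(compute_load_kw):
    """Range if the battery also feeds the compute platform continuously."""
    hours = BATTERY_KWH / (DRIVE_LOAD_KW + compute_load_kw)
    return hours * SPEED_KMH

efficient = range_km(0.10)  # assume a ~100 W efficient inference SoC
general   = range_km(0.75)  # assume a ~750 W general-purpose platform + cooling
gain_pct = (efficient / general - 1) * 100
print(f"range gain: {gain_pct:.1f}%")  # single digits with these assumptions
```

With these made-up loads the gain comes out in the low single digits, which is the order of magnitude the discussion describes; weight savings from dropping liquid cooling would be additive to this.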
You know, this is not a traditional auto conference, but if you heard some of the introductory remarks, there's that idea of Level 2+ as a huge opportunity, and we're also seeing that China is where it's sort of being adopted-
Mm-hmm
... the fastest. Can you talk a little bit about how some of those hardware advantages, cost position, works to be applicable to the Chinese market as well?
Mm-hmm. Yeah, so you're right, China market is moving exceptionally fast, and you need to have the flexibility in your product to not only efficiently do computer vision, which of course, many are using today, but they're moving to, like, BEVFormer networks, and other advanced AI networks extremely fast. And so your silicon architecture, in particular, the inference processor embedded in our chip with the perception engine, it needs to be able to efficiently process transformer networks, which we can do. I think some other companies may have more challenges in supporting transformer networks, so that creates an opportunity for us. We have one product, CV3-AD685, on the slide. We started sampling that product a little more than a year ago, and 685 can run a complete end-to-end AI network that you know about.
and that's a very-
Yeah
... important capability relative to, say, other silicon on the market, to have that flexibility. I'd say one other really important thing for us in China is that these OEMs, for the most part, want to do their own software stack. They want to differentiate their vehicles with their own software, and again, they're pushing the technology hard, very AI intensive, moving to end-to-end AI quickly. And that plays to our strengths, because when we sell into the automotive market, we're happy to sell our SoC and allow our customer to develop the software stack, or we can also provide certain software IP modules to run on our own silicon, or we could even provide the full stack.
Now, in China, we're not currently marketing our stack because of the data collection challenges, but the willingness to open up your silicon architecture and allow the Chinese companies to program it with their own software, and to be able to support end-to-end AI on a product, a full stack end-to-end AI running on CV3-AD685, that's super interesting.
Yeah. One last one for me, and then I'm gonna hand it over to Doug. On the go-to-market and the timeline, love the slide about the $2.4 billion funnel. Can you talk a little bit about what is the timeline on that 75%? Number one, are those RFQs that are happening over the next 12 months, and if not, when is this... are these sort of production-
Mm-hmm
... three, four, five years down the line?
Yeah. The way the funnel works is there's two parts to it. There's the won part, and then there's the probability-weighted part.
Yep.
And for the won business, you can imagine that for next year, probably almost all of that business, let's say, is won.
Mm-hmm.
And there's very little probability weighted. It's just the way the auto-
Yep
works, is you get better visibility than in our other markets. And that won business kind of peaks out on a year-by-year basis in year four, and then it declines, and that's been the shape of it every time we've done it. We've done it four years in a row now. On the probability-weighted side, as I just said, there's practically zero in next year's numbers, but by the time you get to year six, it's almost all of it.
Yep.
And so you put those numbers combined, year by year, and if you look at it, it's definitely back-end loaded. Why? It's back-end loaded because, number one, L2+ is only 3% of the market today. Number two, our content is going up dramatically. The CV3 chips that would serve the L2+ market will sell for anywhere from $50 to more than $400, and today, our revenue in auto is from CV2 products that sell for maybe $20 to $30. And so as you go year by year through that funnel, the expectation is that our content will be going up at the same time as L2+ penetration is going up, and that's what-
Yeah
causes it to be so back-end loaded.
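The funnel mechanics just described, won programs entering at full customer-forecast value and bids scaled by estimated win probability, can be sketched as follows. Every revenue figure and probability below is invented for illustration; only the shape (near-zero probability-weighted contribution in year one, dominant by year six) mirrors the discussion.

```python
# Hypothetical design-win funnel; all revenue figures ($M) and win
# probabilities are invented for illustration.
won = {1: 120, 2: 180, 3: 260, 4: 300, 5: 240, 6: 150}  # year -> $M, peaks ~year 4

bids = [
    # (estimated win probability, year -> forecast $M if won)
    (0.4, {4: 100, 5: 250, 6: 400}),
    (0.2, {5: 150, 6: 350}),
]

funnel = dict(won)  # start from the won business at full forecast value
for prob, forecast in bids:
    for year, rev in forecast.items():
        funnel[year] += prob * rev  # net, probability-weighted contribution

total = sum(funnel.values())
print({year: round(val) for year, val in sorted(funnel.items())})
print(f"six-year net funnel: ${total:.0f}M")
```

Note how the combined curve is back-end loaded even though the won portion peaks in year four: the probability-weighted bids sit almost entirely in the later years.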
That makes sense.
Yeah.
Mm-hmm.
Yeah, so maybe we can ask a little more about the strategic partners that you talked about and the customers that you're selling through to. What does the ideal form of those relationships look like? Obviously, you want to be the one selling-
Mm-hmm
... the full stack, but, you know, with the opportunity here for $50-$400 for a CV3 chip-
... you know, selling software modules on top of that. What does the ideal form for you look like as a Tier One right now-
Mm-hmm.
You know, in some of your bigger relationships?
Yeah. So I'll divide it into two. We have two auto businesses. One would be the safety and ADAS functions, and in that case, we typically sell to Tier Ones, although in the commercial fleet market, like a Samsara, obviously, we'd sell directly to them, and they'd have someone build the product for them. And so that's very different. Our largest Tier Ones today for that type of business would be, again, like Denso and Panasonic and JVC. Of course, Samsara is a customer, Gentex is a customer for products they sell. But your question, understandably, was more oriented to the L2+ business and the CV3 chip, and in that case, our go-to-market's very different.
We have to take that chip and go to an OEM and ask them to port some of their software to it, to validate what we say about it, that it's so efficient, like everything I was talking about. Once they port that software, their own software, usually, then the chip can be validated, and you enter into RFIs, RFQs. So in that funnel, everything is something we're bidding on. There's nothing in that funnel that we're not bidding on, that we're not invited to bid on, is what I'm trying to say.
Yeah.
So it's all RFIs, RFQs, and the probabilities vary quite a bit. And so what do we have in place, in particular for the L2+ market? Well, Continental and Bosch were the first two Tier Ones to recognize the advantage of this architecture and anticipated that OEMs would have significant interest. And so what they've done is to begin the process independent of a nomination in anticipation. That's kind of where we are right now, is that we're bidding on product RFIs, RFQs, and they've primed the pump with the work that they've done. In the case of Continental, our relationship is not only CV3 SoC, but it's also the software stack-
Yeah, yeah.
In particular, perception software that we contribute there. And in the case of Bosch, it's an SoC CV3 relationship. But we do expect to add additional Tier Ones, but selling to an OEM is super important for that type of product, and so we do spend a lot of time on a global basis doing that.
It's important to recognize, 'cause we had Ismail on for the-
Mm-hmm
... the pre-record from Continental, and your name came up over and over again. If you think about Continental and Bosch, they're probably 60% of the current radar market today-
Mm-hmm
-that needs to go from traditional radar to more of a centralized, you know, imaging radar. So those are, in my opinion, fantastic partners because, you know, they have the business now that needs to transition to the next generation.
Yeah, you're exactly right about the radar, and they are the leaders, two leaders of the world, but I wanna be clear that we're still trying to convince them to move to our architecture.
Yep, yep.
But that's where we are with them on that front.
We've got about five minutes. We have more questions, but I wanna first kick it over to the audience for Q&A. Don't be shy. Anyone, anyone? Christian.
Hi, Christian Fernandez, Nuveen. You had a chart there on work won, and, I think, the future funnel of work. If I remember correctly, between, was it 2022 and 2024... there, there you go. The work won, the blue bit, didn't really change much.
Mm-hmm.
Can you clarify a little bit the composition of that blue portion and why it did not grow further?
Sure. Yeah, absolutely. So we provided that funnel 4.0, the one on the far right, $2.4 billion, on November 30th. And at that time, when we introduced it, we talked about seeing a lot of push-outs, a lot of cancellations, program delays, and so on. So there was a lot of churn between funnel 3.0 and funnel 4.0, especially in the orange part, which is more speculative because there's a probability weight. But why were there program delays? This won't surprise you, but just to connect the dots: when I was talking about what we provide, for example, with L2+ CV3, frequently SoC only is what we're providing. If the software from our customer is not ready, we don't sell the chip.
Yeah.
So things get pushed out, and I think that is the single biggest factor in what's changed here. Maybe another way to say it is, with some OEMs, the market isn't going to L2+ as fast as we and others were expecting.
Yeah, 'cause you still need the OEM, at the end of the day, to be ready with the software.
Yes. But I think even tier twos that provide the software-
Yep.
are also seeing delays on the chip, too.
Yeah.
Suji with Summit Ventures. Quick question regarding the third generation chips. Do you have any data points that show your customers would be willing to pay anywhere from $50-$400 per chip?
So for CV3?
Yeah.
Yeah. You know, customers are paying something similar to that today. And so the advantage that we bring would be the efficiency and the impact on the bill of material and the energy savings. So those aren't unusual price points. Now, for the front ADAS market today, it's well known that the ASP for the chip and the software for a single forward-facing camera doing emergency braking is around $50. So you might say, "Well, why would customers want to pay $50-$400 or more?" It's because of the incremental utility they're getting, which is all the L2+ features, where the car can navigate itself on the highway or in more complicated scenarios, like in a city.
That's why an OEM would pay more, and of course, they'd have a margin on top of that.
Do you think that upselling is difficult? How do you think about that strategy?
I think that question ultimately comes down to how quickly the market will adopt L2+ with the more expensive chips. Those numbers are baked into our funnel, but maybe I'll ask Chris and Doug if they'd share their view on L2+: if our 3% is right, what do you think it's gonna be in, like, five years?
Yeah, I mean, in the first slide we have, what we call Level 2+, which is obviously a broad definition. I think the market is sort of intending that to be a 20 million-plus unit type solution, and one that they can charge some version of roughly $3,000 for. In China, it's more competitive, so that number maybe is coming down. But yeah, that's sort of table stakes at this point. The one thing I'd also add is, we obviously cover Mobileye. If you look at their EyeQ6, it's a $50 chip in previous generations that is moving to at least a couple of hundred dollars, with a stated $200 and upward for the more performance items.
Mm-hmm.
Obviously, Orin, for NVIDIA, is a number in multiples of that. So I think it's, again, part of the ability of the feature set, and you're competing on the ability to allow the OEM to do so.
Mm-hmm.
Well, I know we could have more, but I think we've hit our time limit. Louis, thank you so much, and let's give a round of applause for Louis and Ambarella.
Yeah, thank you.
Thanks, guys.
Thanks so much. Doug, thank you.