Well, welcome back, everybody. I'm Joe Moore. Very happy to have with us today, for probably the fourth time that you've been here at the-
Yeah
Nasdaq conference, Fermi Wang, CEO of Ambarella. So, Fermi, maybe we could get started with a bigger-picture view of the company's focus on computer vision. Can you talk about what that means and give a broad overview of the strategy?
Right. First of all, computer vision is a very old technology that has continued to progress in many different ways. When I was a student, computer vision meant traditional, heuristic algorithms. But when Ambarella decided to take on computer vision, meaning analyzing video content in real time, we decided to start with neural networks, because we believed neural networks would change this technology in a totally different way. In traditional computer vision, if you try to do person detection or car detection, those are two totally different algorithms; you cannot scale the processing. With a neural network, any kind of object detection is trained on the same network, which solves that problem so easily. So in 2012, 2013, we started looking at how to implement computer vision based on a neural network approach.
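The unification Fermi describes, one trainable architecture replacing per-class heuristics, can be sketched with a deliberately tiny toy model. The nearest-centroid "detector" below, its made-up feature values, and its class names are hypothetical stand-ins for a real neural network, not Ambarella's actual pipeline:

```python
import numpy as np

class TinyDetector:
    """One shared architecture; the class list comes purely from training data."""

    def fit(self, features, labels):
        # Learn one centroid per class; adding a new object class needs
        # no new algorithm, only new labeled examples.
        self.classes = sorted(set(labels))
        self.centroids = {
            c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in self.classes
        }
        return self

    def predict(self, feature):
        # Classify by nearest centroid.
        return min(self.classes,
                   key=lambda c: np.linalg.norm(feature - self.centroids[c]))

# Made-up 2-D descriptors (say, aspect ratio and size) for two classes.
feats = np.array([[2.5, 1.0], [2.7, 1.1], [0.5, 3.0], [0.6, 3.2]])
labels = ["person", "person", "car", "car"]

det = TinyDetector().fit(feats, labels)
print(det.predict(np.array([2.6, 1.05])))  # -> person
print(det.predict(np.array([0.55, 3.1])))  # -> car
```

The design point is the same one he makes about neural networks: the code path never changes per class, only the data does.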
Our first really high-quality product is our CV2 family of chips. We started shipping in volume in 2018, and this year our AI-based, CV2-based revenue is close to 60% of total company revenue, which is significant progress. The target is really any video application on the edge that requires real-time video analytics: security cameras, IoT devices, autonomous driving cars. Those definitely need real-time video with very low latency and very tight power consumption. That has been our strength, and we enable neural-network-based computer vision with the CV2 family.
Now, we introduced the CV3 family just last year, and CV3 really uses computer vision technology to do AI inference for autonomous driving. The target is to build a domain controller on CV3 for Level 2+ and Level 3 autonomous driving. The idea is that all of the AI-based performance and functions will be implemented on CV3, and not just the perception level of video: radar processing, sensor fusion, path planning. Everything you need to drive the car will run on CV3.
We are halfway there, because we not only introduced our first silicon last year, we just sampled our CV3-AD685, which is our first production-ready autonomous driving solution, and we're demoing that chip to customers today, in fact this week in Europe. We're also demoing an AI-based software stack that integrates the VisLab software we acquired and the Oculii radar software we acquired. All the software has been integrated into one complete stack that runs on CV3, and that's the milestone we've been trying to present to investors. I think we're ready to do that; in fact, we're doing it already. If you have time, please come to our CES booth and we'll show you the progress.
That's great. I mean, we've seen AI have this kind of transformational effect on the way people are looking at computer vision, and I guess it takes a long time to productize that in automotive. But maybe you could talk about some of the markets that adopted you earlier, starting with professional surveillance. Where are you now with penetration? Obviously, you already had a presence with video processing, and you've moved that over to CV. Where are you in that transition now?
Right now, for what we call enterprise security cameras, I would say that transition hasn't even passed the halfway point, because most of our customers' products that use CV2 family chips are still high-end or maybe mainstream. On the low-end side, most products are still video-processor-only, very cost-sensitive solutions. So the transition is not yet midway, and it will take a few years to reach the mainstream. The limitation today is that the AI cameras our customers build still have a huge price gap versus their video-only solutions, which have been commoditized.
But it's not because of our chip. Our video processor ASP is roughly high single digits; our CV2 family ASP is anywhere between $15 and $20. So we're talking about a $10 ASP difference, but the gap on the camera side is still pretty huge. There's definitely room for people to continue to reduce the cost of AI cameras, and then we'll see even bigger penetration for our CV2 family chips.
Maybe if you could talk about the more consumer-centric side of the surveillance market, where maybe $10 is an issue-
Right
... sometimes. But the utility of those cameras can be a lot higher when you have your technology.
Right.
You know, and you've talked about some wins in some of the doorbell-
Right
- and some of the markets around that.
So-
Can you talk to consumer?
We call that the home security camera market, and that market really goes to two totally different extremes. On one side, people are spending money to build AI performance and AI processing into the camera. On the other extreme, people are building extremely cheap home security cameras; on Amazon you can find $20 to $30 cameras for home security. Of those two extremes, we focus purely on the AI side. In fact, our company strategy is that, having invested so heavily in AI over the last 8 to 10 years, we should focus on applications that will drive even higher AI performance in the future. That's where we want to be.
If an application doesn't really demand high AI performance, it's not going to be an area of focus. So the lower-end consumer IP cameras still using a $5 video processing chip are not a focus area for us anymore.
Great. And then looking at automotive, maybe before we get into what you're doing from a productization standpoint: where are we right now? You've had this view that autonomous is going to take longer. You're showing some kind of amazing technology: at the booth you had earlier this year at CES, Continental and Bosch systems with 12 cameras all seamlessly interacting with one another. And yet you look at model year 2024 cars, and we're mostly looking at single-camera, forward-facing adaptive cruise control. So when are we going to start to see this higher level of automation within cars?
I think autonomous driving changed quite dramatically in the last three or four years. Three or four years ago, before COVID, everybody said, "Okay, now we have a goal to get to Level 3, Level 4 immediately," so everybody jumped in trying to build Level 3, Level 4 cars. But everybody has come back to a stage where they realize that to make money in this market, and to reduce the complexity of those systems, they need a different approach. People started looking at what easier technology they could take into production immediately. The way I look at the market, there are three major Level 2+ applications. It's very rough, but that's my opinion.
One, on the very low end of Level 2+, is what we call ADAS plus smart parking: basically a one-megapixel forward-facing ADAS camera plus surround view, so you can do smart parking. This is really low-end Level 2+, and China is driving this technology quickly. The second, higher level we call highway NOA, navigation on autopilot, which you can think of as Tesla's Autopilot: technology that helps you navigate on the highway. You can do emergency braking, lane changes, some simple navigation, but when you come to local city routes, the driver needs to take over.
The third level is what I call city-level NOA, which is really what FSD tries to achieve: you can perform a lot of autonomous driving functions at the city level, in, say, London or San Francisco. I'm not talking about Level 3 or Level 4, I'm talking about Level 2+: if there's any emergency, the driver needs to take back control. Those three tiers are my definition of the Level 2+ car.
I'm starting to see people say, "Let's produce the low end quickly," particularly in China, and I'm seeing OEMs trying to do all three tiers in their product definitions. Because Level 2+ is well defined, I think it's going to be much easier to get into production than the Level 3, Level 4 cars people tried to build in the past. So, to your question of where we are: you're going to start seeing a lot more Level 2+ cars across those three tiers, and maybe even different combinations of the three, in the next few years.
I guess you've seen this significant focus on EVs from the OEM customer standpoint, and that's a very good trend for you because you have such power-efficient products in the ADAS market. But has that taken energy, in your view, from the development of ADAS, and has that changed the time frames at all?
Like you said, I think EVs are definitely a great trend for autonomous driving, because all EVs will have autonomous driving features. However, we do see some delays; in our script this quarter we talked about some projects getting delayed on the AV side. But I really think it's more because of the complexity of the software. You see a lot of people doing their own software development and getting delayed. Getting a complete software stack for each level, and putting it through enough testing to reach the quality they want, is the bottleneck, and a lot of projects got delayed because of that. I'm not sure it's because of EVs, or EVs taking up resources. It's really that AV is more difficult than most people thought-
Mm-hmm
... four or five years ago.
Okay. And then from a productization standpoint, when you talk about those L2+ capabilities: there's a lot of focus on CV3, which is a 2026-type of product. But before we get to that, can you talk to the opportunities that you have in surround view?
Yes.
The opportunities you have in driver monitoring and things like that.
So, for example, in this earnings script we talked about multiple ADAS products that are in production in China, and that's definitely an area where we're starting to see a trend: we're winning ADAS business in the most price-competitive market, and I think that trend will continue. So ADAS, e-mirrors, and in fact driver monitoring, integrated in different combinations with those products, is a market that will continue to drive our growth. We also talked about having $800 million worth of won business, which means OEMs have already awarded those projects to us; they're either already shipping, or we're in engineering development to finalize the products.
Within that $800 million, the things we just mentioned are definitely a driving force for the growth.
Great. And then, you know, you've talked a little bit about CV3. Can you talk about what that does for you from a capability standpoint and what the economics are to Ambarella?
Right. So first of all, the difference between CV3 and CV2 is that we improved AI performance per watt by 3x. Why does that matter? Because at the edge, power consumption is always a limitation for products. We also believe the AI performance demanded of each device is going to continue to grow, and you cannot scale power consumption accordingly. You have to find a way to scale AI performance without significant power consumption growth, and that's why CV3 being 3x better than CV2 in performance per watt is so significant and so important. Because of that, we can scale to much higher performance and still fit within the device's power budget. That enables many different applications.
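The performance-per-watt argument is simple arithmetic; here is a quick sketch with hypothetical numbers (the TOPS and watt figures below are illustrative assumptions, not Ambarella's published specs):

```python
# Assumed baseline edge chip: 5 TOPS of AI compute at 2.5 W, i.e. 2 TOPS/W.
cv2_tops, cv2_watts = 5.0, 2.5
cv2_tops_per_watt = cv2_tops / cv2_watts

# "3x better performance per watt" for the successor chip.
cv3_tops_per_watt = cv2_tops_per_watt * 3

# Same thermal/battery envelope as before: the device's power budget is fixed,
# so all of the efficiency gain shows up as extra AI performance.
power_budget_w = 2.5
ai_at_same_power = cv3_tops_per_watt * power_budget_w
print(ai_at_same_power)  # -> 15.0 (TOPS, versus 5.0 before)
```

This is the point being made: the power budget stays flat while deliverable AI performance triples.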
For example, any application that requires batteries: battery life is directly related to how much power the main chip in the system consumes. That's where we think we can enable systems that weren't possible in the past because of the power envelope limitation. For example, autonomous driving cars, or a lot of future robots, which definitely have power limitations; CV3 will be suitable for those.
Great. So moving to the path to revenue for this: you have these really important wins I talked about with Bosch and Continental at the Tier 1 level, and you've had a number of others; I don't want to fail to mention all of them, but really big wins at the Tier 1 level. My understanding is you're not counting that as a revenue pipeline until it transitions to OEM demand. So what does that endorsement from those big Tier 1s give you?
Right. So in fact, compared to our competitors, we are a very small company. Scale is always a problem for us, particularly in front of large OEMs. The best way for us to address this problem is to approach OEMs together with a large Tier 1. That has been our strategy for a long time. So winning Continental and Bosch, so that we can show up in front of OEMs together with a Tier 1, really helps address the scale problem we face with almost all OEMs.
You know, in the past, people said that buying an IBM computer, or a Cisco router, was a no-brainer, because you would not be fired for it. So when we have this scale problem, having Continental and Bosch to pitch together with helps us resolve it.
And, I mean, you mentioned scale. I would characterize your three biggest competitors as Mobileye, which is Intel; Qualcomm; as well as NVIDIA, which are the three largest companies I cover. So you're competing with very large companies, but you have technology. I mean, I can talk through each one, why you have technology that's superior-
Mm.
from a performance-per-watt perspective than all of them. So how are you finding, in the wake of all the supply chain stuff over the last couple of years, that process of selling to those guys?
Selling to the, those p-
To the OEMs, yeah.
Yeah. So I think all the OEMs understand that there are two major competitors: Tesla on the autonomous driving side, and China, producing products faster than they do. So our approach to them today is: although we are small, we can prove we have the technology to help you build a product to compete. That's why we need to show up with real silicon and real software, running in a car, to prove every performance point we present on our slides. That's how we win designs, and that's why it takes long. And we are there already. Like I said, the CV3 chip has been demoed to customers.
Our production chips are there, and our AI software is running; customers can see what that means. It's a long process, but because we are small, because we come to this market at our size, we have to do every bit of extra work to prove our technology is better for them to use. After several years of working hard, I think we're there. And again, if you're interested in our technology, CES is the best place to look at it.
And in addition to all the vision capability you guys bring, you also have unique capabilities in radar-
Mm-hmm.
through the Oculii acquisition and some of the subsequent development. I wonder if you could give us an update on that.
Yes. When we looked at the CV3 implementation, we came to the conclusion that we need sensor redundancy, and the best redundant sensor for us is radar. Then we started thinking about what technology to acquire. When we looked at all the radar companies out there, we believed, first of all, that we need 4D imaging radar, because traditional radar performance, when you apply it to Level 2+, is not good enough. So you need full 4D imaging radar, but it also has to be cost sensitive. Oculii definitely provides both: they found a way to do high-end 4D imaging radar with a very cheap analog front-end chip.
For example, we can use a small antenna array, say 4 by 3 or 6 by 8, to implement 4D imaging radar, which is very difficult to do; the way it works is using algorithms to create that density of resolution. In the two years since acquiring them, we've come to understand and implement all the algorithms Oculii provides, and we added those functions and features as hardware acceleration in CV3. In the future, CV3 will do not only video perception but also radar perception. More importantly, we don't do object detection on both independently: we merge the radar point cloud with the camera so that we have much higher density, and we run inference on that, which gives you much better predictions.
First of all, it has been shown in many research papers that this gives you much better accuracy. But more importantly, you need a very powerful chip to do this kind of processing, and CV3 is one that can do all of it.
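The point-cloud merge described above can be sketched as early fusion: project radar returns into the camera frame so a single network can see both modalities together. The camera intrinsics, point values, and per-point feature layout below are assumed for illustration; this is not the actual CV3 pipeline:

```python
import numpy as np

# Assumed pinhole camera intrinsics (fx = fy = 800, principal point 320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_to_image(points_cam):
    """Project Nx3 points (already in camera coordinates) to pixel coordinates."""
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

# Hypothetical radar returns: (x, y, z) in meters plus radial velocity in m/s.
radar_xyz = np.array([[1.0, 0.5, 10.0],
                      [-2.0, 0.0, 20.0]])
radar_vel = np.array([-3.2, 0.0])

pixels = project_to_image(radar_xyz)

# Fused per-point feature: pixel location + depth + radial velocity. In a real
# system this would be concatenated with image features so one network runs a
# single joint inference pass over both modalities.
fused = np.hstack([pixels, radar_xyz[:, 2:3], radar_vel[:, None]])
print(fused.shape)  # -> (2, 4)
```

Fusing before inference, rather than detecting objects separately per sensor and reconciling afterwards, is exactly the density argument made above.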
So you've built this together; you have really interesting capability. In terms of how you turn that into revenue, you've talked about now a $2.4 billion funnel, which is up only slightly from a year ago. But still, for a company doing $80 million in trailing revenue, to have a six-year revenue potential of, I believe, $2.4 billion... Can you talk a little to the changes that you're seeing in that funnel? It did come up, and again, it does reflect-
Right.
At least 5x growth in revenue, I think, in the next few years. But can you talk to the puts and takes around that funnel?
So I think the biggest change is that throughout the last year, the forecasts from OEMs and Tier 1s came down. That's a major impact on the won business and also the pipeline: when the volume projections are reduced, that impacts our funnel. That's the first factor. The second one we already talked about: a lot of projects we're bidding on got pushed out, and the decisions got delayed. Those two negatively impact our funnel. The third factor is projects won and lost, and the net effect of that is a positive impact on our funnel. The fourth is new projects we bid on, which is obviously a plus.
So net-net, although we didn't grow a lot, it's really about the current project projections: because of the current automotive business, our OEMs and Tier 1s are more conservative in their projections, and that caused a lot of the reduction in our pipeline.
Great. And you mentioned on the call that there's a larger component of CV3 in that funnel.
Mm-hmm.
Can you talk to that? You're starting to get visibility now into what those opportunities look like, which are still three years away from a revenue standpoint, but starting to show up in your funnel.
First of all, CV3 showed up in our won business for the first time. We talked about winning some projects-
Yeah
... and they show up. It's not a big dollar figure, but it's definitely a good sign. On the pipeline side, a much bigger chunk of the pipeline is CV3 now, because the new projects we're bidding on follow from the CV3 design wins. Particularly now that we're showing up with the 685, we're going to show the whole CV3 family of chips, and we've already presented the family to our customers, so they know what to expect. That definitely helps us get more RFQs to bid on new projects.
Great. I probably should have asked this earlier, but can you talk about China as a market for adoption of products like CV3? In terms of the pace of adoption, I feel like in the U.S. at least, and probably also in Europe, there are a lot of political and regulatory challenges to deploying these technologies.
Mm-hmm.
Whereas in China, there's a little more pragmatism: is it safer than a human? Then we can start to deploy it. Ambarella has a nice presence in China. Can you talk about your capabilities there?
Yeah. So first of all, because in the past we focused on security cameras and consumer products in China, we have a big China team. And because of that team, when we brought our automotive solution to China, we quickly found that in the last three years in particular, Chinese OEMs and Tier 1s have realized the best way to implement a project is to go low-end first and get to production early. For the three Level 2+ tiers we talked about, we're seeing projects in China across all of them, and they're not talking about 48-month development cycles; they're talking about 18 to 24 months.
Some low-end projects we're bidding on today have targeted production dates at the end of 2025 or early 2026. Because of that, we believe that when we win design wins there with our CV3 chip, that's where the first CV3 revenue will come from, and that's why we really need to focus on that market. It doesn't take our focus away from Continental and Bosch or other European and U.S. Tier 1s and OEMs. It's just that, for faster CV3 revenue, that's a much better bet.
Great. So I wonder if we could shift gears a little and talk about the shorter-term environment. A lot of our small-cap semiconductor coverage had a difficult 2023; you certainly did as well, but it seems from the earnings call that those trends are starting to stabilize. Can you talk to what drove the inventory correction and what gives you confidence that we're starting to come out of it?
We've been in this inventory correction for six quarters already, and throughout the process we've spent a lot of time talking to customers to understand their inventory levels. We built our own spreadsheets to forecast their inventory. Two things happened in the last three months. First, we started seeing meaningful reductions in our current customers' inventory. They're telling us the inventory reduction is happening, and they feel comfortable, at different timing for different customers: some are already in a better position, some will take maybe two to three quarters to get there, but the trend is clearly coming down.
Second, the last time we talked, I told you there's one metric I'm monitoring, which is the book-to-bill ratio. For the past few quarters, our book-to-bill ratio was problematic. In the last three months, we've seen it come back to a much more respectable level and show stable conditions. When your book-to-bill ratio increases in a significant way and starts stabilizing, that gives me confidence that our customers are coming back to placing orders in a regular way. It also tells me the inventory is probably in a better situation.
Both of those things give me confidence that Q4 will be a little growth over Q3, that Q1 will be growth over Q4, and that next year should be a growth year for us.
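For reference, book-to-bill is simply orders booked divided by revenue billed over a period; a ratio above 1.0 means the order book is growing faster than shipments. A minimal sketch, with quarterly dollar amounts that are entirely hypothetical and used only to illustrate the recovery signal described above:

```python
quarters = ["Q1", "Q2", "Q3", "Q4"]
bookings = [40.0, 45.0, 55.0, 58.0]  # $M of new orders booked (assumed)
billings = [60.0, 55.0, 52.0, 51.0]  # $M of revenue billed (assumed)

for q, bk, bl in zip(quarters, bookings, billings):
    ratio = bk / bl
    # Below 1.0, customers are consuming inventory faster than they reorder;
    # at or above 1.0, regular ordering has resumed.
    trend = "healthy" if ratio >= 1.0 else "correcting"
    print(f"{q}: book-to-bill = {ratio:.2f} ({trend})")
```

In this made-up series, the ratio climbs from well under 1.0 toward parity, which is the pattern being pointed to as evidence the correction is ending.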
Great. So I'll ask one more question and then see if we have questions in the audience. This has been a big year for AI as an investment theme, really focused more on language than on video and vision, but you obviously have really good edge AI capabilities. Can you talk a little bit about how your thinking has evolved about how to attack those types of opportunities?
Well, with that, I really want to invite all of you to come see us again. We're going to show you how to do image analytics with an LLM. There are already LLM models that do image analytics. We believe that if you treat an image as a two-dimensional signal and convert it into a one-dimensional sequence of tokens, you can use an LLM to process it, and a lot of the performance and functions that apply to speech can be applied to images. And it's not just me saying that.
In fact, there are a lot of research papers about using LLMs for image processing. So my belief is that LLMs will also become very popular in our current space, namely IoT and automotive. Particularly if you already believe that transformer networks have become so popular in the autonomous driving software stack: an LLM is just a much bigger transformer in a different configuration. We believe LLMs will apply both to our current markets and to adjacent video markets that can take advantage of an LLM solution. By the way, when we say LLM solution, we're not talking about a new chip. We are using the current CV3 silicon that we're sampling to customers to run LLMs.
We have demoed this to customers today, with real silicon and real software, and please come see us to look at it.
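The "image as a one-dimensional token sequence" idea mentioned above can be sketched as ViT-style patchification: split the 2-D image into patches and flatten each patch into one token vector that a transformer can attend over. The patch size and image below are assumed for illustration, not Ambarella's implementation:

```python
import numpy as np

def image_to_tokens(image, patch=8):
    """Turn an HxW image into a (num_tokens, patch*patch) token sequence."""
    h, w = image.shape
    assert h % patch == 0 and w % patch == 0
    # Split into non-overlapping patch x patch blocks, then flatten each block
    # row-major into a single token vector.
    tokens = (image.reshape(h // patch, patch, w // patch, patch)
                   .transpose(0, 2, 1, 3)
                   .reshape(-1, patch * patch))
    return tokens

img = np.arange(32 * 32, dtype=np.float32).reshape(32, 32)
tokens = image_to_tokens(img, patch=8)
print(tokens.shape)  # -> (16, 64): 16 tokens, each of dimension 64
```

Once the image is a 1-D sequence like this, the same attention machinery used for text or speech tokens can process it, which is the crossover being described.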
Great. Let me see if we have questions from the audience. Then maybe I'll just follow up on that. When I saw the prevalence of AI, I felt like this would be really good for Ambarella stock, because you're very good at doing edge AI. And I know when you started talking about it, there was some consternation about whether you needed to concentrate your investment. So how do you balance those things? There's a lot of growth opportunity here that can probably come much quicker than it does in the automotive space. But it seems like you've backed away from some of the more investment-oriented things like cloud AI. Can you talk about the trade-offs implicit in that?
That's absolutely true. For example, we believe our CV3 solution can go to many different markets, but the easiest market for us to address is the one we're close to. We know where those customers are, and there's minimal additional investment we need to put in to enable it. We could have gone to the cloud, but going to NVIDIA's home court and making a heavy investment there is not a good business exercise in the current situation. So we decided to leverage our CV3 solution and focus on edge devices, the edge LLM opportunity. I think that's our current thinking.
Great. Well, Fermi, thank you so much. Appreciate your time.
Thank you. Thank you very much.
Thank you. Thank you. Give it a couple minutes.
All right. Good morning, everyone. I'm Sanjit Singh, the U.S.-based infrastructure software analyst on the Morgan Stanley software research team. Our next presentation is with Akamai, and we're super thrilled to have the CEO and co-founder of Akamai, Tom Leighton. Tom, thank you for joining us today.
Nice to be here.
So Akamai has, you know, used its capabilities in building large-scale distributed systems to attack the security market, the compute market, and the delivery market. I wanted to start the conversation by spending some time on the security opportunity, because it's a business that's now crossed a $1.8 billion run rate, and it looks like you're on the cusp of security becoming almost 50%, or crossing 50%, of your business. Maybe just to set the stage: you've been in the security market for around a decade, and if we take ourselves back to that time, with Akamai starting in the CDN market, what were the initial thoughts about moving into security?
Like, what gave Akamai a license to become a security player coming from CDN?
Well, technically, it made a lot of sense. We built a massively distributed platform, with 4,000 PoPs in 750 cities, initially to distribute content. So wherever anybody would go to get content from one of our customers, all that communication would go through our edge servers. And that's the perfect place to apply security: to filter out the DoS attacks, to filter out the application-layer attacks. You're delivering content one way, but the attacks coming in are also passing through our servers. So why not put that first layer of defense there? It just made a ton of sense. Now, the actual buyer within the account would be different.
Ultimately, it goes up to the CIO and CISO, so that was a bit of a challenge, and it was hard to get started commercially, because before Akamai, everybody would buy a box for a web app firewall, put it in their data center, and hire somebody to manage it. And of course, the challenge was that as you got higher-volume attacks-