Good afternoon, everybody. So happy to introduce today Fermi Wang, the CEO of Ambarella, for maybe the sixth time at this conference.
Several times already, yes.
Yeah. So I always appreciate you being here. And a really interesting time to see you. You've had a year. This has obviously been a big year around the Edge AI theme. You guys have sort of pivoted and refocused a little bit. Edge AI is now 80% of your revenue. Can you just talk about where you are in the big picture, where you've come from, and where you're going?
Right. So first of all, I want to clarify one thing, which is that, in my opinion, we're only building one technology, which is really edge AI technology, hardware and software. And this platform, the hardware-software combination, can serve many different applications, including automotive; what we call the IoT space, which includes enterprise security; and a lot of new applications that we talk about: drones, enterprise, edge infrastructure. So in my mind, there are many market segment opportunities because of the edge AI technology we provide.
So for me, we're going to continue to invest to enable more and more edge AI applications; today that particularly means video plus AI plus low power consumption. That's the focus of the company, and that will be the core of our revenue growth. Of course, we are also talking about moving to edge infrastructure, and maybe non-video data will become a play in the future as well. But definitely today, we are focusing on any application that can take advantage of our hardware and software platform.
That's a really important point. I mean, we wrote, I think in 2019, our big debates report on who will win the battle for edge AI inference, and Ambarella featured prominently. So this is not something that's new; this has always been a focus. What's new is maybe the broadening out of the application set beyond cars into these other markets. Can you maybe talk a little bit about automotive? Obviously, this technology has really critical capabilities that you would use in the automotive market, but it's just been slow to see these kinds of features adopted. Can you just talk about where we are in that?
So I think you are talking about really autonomous driving, Level 2 to Level 4. Maybe before I answer that question, let me answer a slightly different one. Our automotive business is still 21% of our revenue and growing. And the area that's growing is really another interesting edge AI market that we started developing two years ago. We call it AI telematics. The biggest customer is Samsara. In that application, the first product Samsara used us for has a camera facing outside and a camera facing inside at the driver, providing more and more AI functions for ADAS and the driver monitoring system.
But now, if you look at their promotions, they are talking about more and more edge AI functions they want to integrate into the solution, not only including more cameras, but starting to put large language models into that space. So that is definitely another example. Two years ago, we didn't even know this application was part of our roadmap. But because Samsara penetrated the market using our solution, we realized that this market can use our technology. Back to your Level 2 to Level 4 question: this is definitely a tough year for autonomous driving.
It's not only us delivering this message, but a lot of people in this industry, including OEMs, Tier 1s, and also semiconductor companies. I think there are two reasons for that. One is that, with all the pressure from Chinese OEMs and also Tesla's FSD, most Western OEMs have come to the conclusion that their product lines need to be reshaped to become more competitive. The second is that the software stack, the autonomous driving software stack, has become the obvious weakness for the auto OEMs.
They are trying to figure out the best solution for that. For these two reasons, we saw fewer RFQs available for bidding. And even the existing RFQs have all been pushed out while OEMs try to understand the right spec and the right timing. Given that, Ambarella's approach is to try to solve the obvious problem. We are offering a software stack, a production-ready, working software stack. However, we are not locking it up as a black-box solution like our competitors.
We are trying to sell the software stack by enabling the features and functions our customers might find useful. They can license part of our software, maybe even the whole software stack, which we are opening up under a licensing model as a white-box solution. And we believe in providing a scalable software solution that can scale easily from Level 2 to Level 4; in fact, we already proved that scalability to some of the OEMs out there. With that, I hope we can speed up solving one of the difficult problems in autonomous driving. But nonetheless, this year is definitely a very difficult year for any autonomous driving supplier.
Thank you for that. And I know you've always kind of led with a software-first mentality on these types of products. I know a lot of your engineering workforce is software-based. Can you talk about the importance of that? And now, as you sort of take this stack of software plus hardware, and you can apply it to a lot of different markets, can you talk about the role of software?
Yeah. I think that's definitely important. One statistic matters here: although we are a semiconductor company, our hardware-to-software engineering ratio is one to six. That just shows you how important the software side is for us. But it's not only the software by itself. For any silicon, building a software SDK is important. And it's even more important for us, because every time we try to convince one of our customers to switch their software platform from an NVIDIA GPU engine to our platform, the biggest resistance is how we help them convert from CUDA to our Cooper SDK.
And that is not difficult. It's really the mindset: I've spent so much time on CUDA already, so why do I need to spend time converting to a different SDK? The only way we can solve that problem is, from the business side, to convince people there's a power advantage in moving. But in the end, a software structure that is not only mature but also flexible enough for people to move any CUDA software to our platform is fundamentally important for us to have a successful business, because almost every customer we have today used NVIDIA in the previous generation one way or another.
So from that point of view, mature software matters: not only an SDK, but a compiler that helps people port any trained model to run on our chip, and even application-level examples that show people how to run applications on our software platform. All of that software is important for us to win designs.
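To make the porting idea above concrete, here is a minimal, purely illustrative sketch of the core job such a compiler does: mapping each operator of a trained network onto the target chip's kernel set and flagging what needs a fallback. Every name here (the kernel table, `cvflow_*`, `port_graph`) is invented for illustration and is not the real Cooper SDK.

```python
# Hypothetical sketch of a model-porting pass: framework ops -> chip kernels.
# The kernel names below are made up; real SDKs expose their own op coverage.

SUPPORTED_KERNELS = {
    "conv2d": "cvflow_conv2d",    # runs on the vision accelerator
    "relu": "cvflow_relu",
    "matmul": "cvflow_matmul",
    "softmax": "cvflow_softmax",
}

def port_graph(ops):
    """Map a list of framework ops to target kernels.

    Returns (ported, fallbacks): ops with a native kernel, and ops
    that would need a CPU fallback or a manual rewrite."""
    ported, fallbacks = [], []
    for op in ops:
        kernel = SUPPORTED_KERNELS.get(op)
        if kernel:
            ported.append((op, kernel))
        else:
            fallbacks.append(op)
    return ported, fallbacks

# A toy 4-op network: three ops map cleanly, one needs a fallback.
ported, fallbacks = port_graph(["conv2d", "relu", "custom_attention", "softmax"])
print(len(ported), fallbacks)   # 3 ['custom_attention']
```

The point of the sketch is the coverage question Wang describes: the fewer ops land in `fallbacks`, the less manual work a customer faces when leaving CUDA.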
Great. Thank you. And so then, as you talk about those new verticals, can you talk about what they are? You've talked about edge infrastructure. Can you define kind of what that means and some of the examples?
Right. So in fact, just 12 months ago, our largest market was still enterprise security, and that's not true anymore. It's not that enterprise security slowed down; in fact, we still see very strong growth in enterprise security. It's really that other areas of edge AI have started growing, and growing faster. We talked about two new markets on the earnings call two quarters ago: one is drones, the other is edge infrastructure. These two are totally different applications, in my opinion. But the funny thing is they can use the same hardware and software structure that we are providing to our customers.
For drones, we are offering two types of solutions. One is if you only want a drone to capture video. In fact, in one of the products our customer introduced, they put a 360-degree camera under the drone, so when you fly, you are not only seeing one direction, you see everything surrounding you. That really helps people navigate the drone under manual control. That's one product. The other product treats the drone as just another type of robot, and everything we developed for autonomous driving cars applies to autonomous drones.
In fact, you can view most of the drones today as what we call Level 2+ drones, because you still need people to manually control them, but there are already a lot of autonomous functions in there. And drones will move to Level 3 and Level 4 probably faster than cars. From that point of view, you need a very powerful domain controller on the drone to perform those functions: to avoid objects, to navigate, to understand the environment. So I think our CV3 family of products, which we defined for autonomous driving, will eventually go into autonomous drones.
So that is a market that we think is important. The market is still relatively small, about 10 million consumer drone units today, mainly dominated by DJI. But a window of opportunity has opened up because DJI got banned by the United States government. So that 1.5-million-unit consumer drone market in the United States is open for a fight, and we are seeing multiple customers trying to fight for it. That's one new opportunity going from zero to meaningful for us very quickly. The other one you asked about is edge infrastructure, which is even more important to me.
Edge infrastructure means this: in the past, we sold our solution into what we call edge endpoints, cameras, or any form of camera in different devices. Edge infrastructure is really aggregating different types of cameras and performing higher-level functions in a box, which we never did before. In fact, we announced our first product two quarters ago. And the application for that particular product is very simple: it aggregates multiple camera feeds. For example, on this hotel floor, say there are 20 cameras. Most of them are probably not even AI-enabled, let alone running ChatGPT-type models.
So if you want to upgrade those cameras to run ChatGPT-type models, the easiest way is to plug an appliance box with one of our N1 chips into the engineering room on this floor and feed those 20 cameras into that box. You then run the large language model on that box, so all the old feeds are suddenly upgraded with ChatGPT-type capability. From that point of view, that becomes the easiest way to upgrade installed-base cameras. For security cameras alone, there's an installed base of 2 billion worldwide. So we are talking about a huge opportunity, not only for hotels, but for retail.
Any retail store probably has four to eight cameras; you can easily upgrade those the same way. So we are viewing that as an opportunity. But we're still talking about video-related edge infrastructure; there's definitely non-video-related edge infrastructure too. All the corporations are starting to talk about training their own LLMs, but they want to run the LLM on on-prem servers instead of at AWS or other cloud services. From that point of view, you need on-prem edge servers, or edge infrastructure boxes, that can provide the necessary performance.
And why do we have an advantage? Because in all the applications we talk about, power efficiency continues to be important. The engineering room here, I bet you, is not well air-conditioned. Power consumption is definitely a problem, and even the power supplied to the box is sometimes limited by the configuration. From that point of view, I think our power-efficient N1655 solution is suitable for that.
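The aggregation pattern described above (many non-AI cameras sharing one AI box) can be sketched as a simple round-robin scheduler that time-multiplexes a single shared model across camera feeds. This is an illustrative toy, not Ambarella software; the camera names and the `analyze()` stub are invented.

```python
# Hypothetical sketch: one appliance box pulls frames from many "dumb"
# cameras and time-multiplexes one shared model across them.

from collections import deque

def analyze(camera_id, frame):
    # Stand-in for running an LLM/VLM pipeline on a single frame.
    return f"{camera_id}:described({frame})"

def aggregate(feeds, steps):
    """Round-robin over camera feeds, sending one frame at a time
    through the shared model, so 20 cameras can share one AI box."""
    queue = deque(feeds.items())
    results = []
    for _ in range(steps):
        camera_id, frames = queue[0]
        queue.rotate(-1)            # fair scheduling across cameras
        if frames:
            results.append(analyze(camera_id, frames.pop(0)))
    return results

# Three toy feeds; frames are served fairly across cameras, so cam01's
# second frame waits until every camera has had a turn.
feeds = {"cam01": ["f0", "f1"], "cam02": ["f0"], "cam03": ["f0"]}
out = aggregate(feeds, steps=4)
print(out)
```

A real box would pull RTSP streams and batch frames for the accelerator, but the scheduling idea, one model serving many endpoints, is the same.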
Maybe we could talk a little bit about that. In the past, I feel like we've moved a lot of the intelligence onto the camera, with much of the edge AI resident in the camera. You guys have been pretty dominant in that business, and the value proposition of moving intelligence into the camera is pretty clear. When you talk about moving into an edge-based box, do you still get the same benefit of performance per watt? Or are you putting yourself more in competition with GPUs and things like that? What's the value proposition?
Right. So I still think performance per watt is important, particularly for the first application I talked about. In an engineering room like the one here, a lot of the boxes are in fact supplied by Power over Ethernet. So basically, the AI performance of the box is defined by how much power efficiency you can get out of the chip. So yes, power efficiency continues to be important. But there's another driver, which is really the box itself. Most GPU boxes require heavy air conditioning or a water-cooling system. I don't think that's widely available in a server room; even in my company, I don't have a water-cooling system.
So from that point of view, if you really want to have powerful on-prem servers, I think that power efficiency continues to be an important factor.
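The Power over Ethernet point above can be made concrete with back-of-envelope arithmetic: the PoE standard fixes the watts a box can draw, so performance per watt directly sets the AI compute ceiling. The device-side power budgets below come from the IEEE 802.3 PoE standards; the TOPS/W efficiencies and the 5 W overhead are hypothetical placeholders, not vendor specs.

```python
# Why performance-per-watt sets the AI ceiling in a PoE-fed box.
# PoE budgets are standard values; efficiencies below are made up.

POE_BUDGET_W = {           # max power available at the powered device
    "802.3af": 12.95,
    "802.3at": 25.5,
    "802.3bt": 71.3,
}

def ai_budget_tops(standard, tops_per_watt, overhead_w=5.0):
    """AI compute available after subtracting non-AI overhead
    (CPU, memory, I/O), given a chip's efficiency in TOPS/W."""
    usable = POE_BUDGET_W[standard] - overhead_w
    return max(usable, 0.0) * tops_per_watt

# Hypothetical comparison on a basic 802.3af port: an efficient edge SoC
# versus a GPU-class board at much lower TOPS/W.
print(ai_budget_tops("802.3af", 3.0))   # efficient SoC
print(ai_budget_tops("802.3af", 0.5))   # GPU-class efficiency
```

On the same wire, the chip with 6x the efficiency delivers 6x the usable AI compute; there is no way to buy it back with a bigger power supply.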
Great. Maybe we could talk a little bit about the surveillance market, more home surveillance and things like that. I know that used to be a bigger category for you. There's a lot of price sensitivity; the cameras have to hit really low price points. But the value proposition also seems really strong. As a consumer of video cameras, where all you can do is turn the sensitivity up and down, there are real limitations when you talk about doorbells and things like that. Is there going to be an application for you guys as the intelligence in those devices grows again? Or to what degree have you had to walk away from those opportunities?
Right. If you had asked me that question 12 months ago, I would have hesitated. Today, I am convinced there is definitely an opportunity. If you look at the home security suppliers, Ring, Nest, they are all enabling a new service by running a CLIP-type vision language model on the server side. The video streams from your home to the cloud; at the cloud, they store the video and apply this CLIP vision language model to it so they can provide more services. They are charging $9.99 per month for that service. We all know that Ring and Amazon and Google can do that because they control the cloud.
But for all the other major consumer security camera customers, when they try to use the cloud to provide the service, they are limited by the cost: the transmission bandwidth, the storage cost, and the processing cost on the cloud. In fact, when I talk to them, they are convinced that a similar service can be offered using an edge device, with the CLIP model running on the camera. And although you pay a somewhat higher price for the processor and the memory, that is easily compensated by the lower cloud and transmission costs.
From that point of view, I think a new service enabled by a vision language model is a clear way to upgrade that offering. And I believe our new chip can run a 2-billion-parameter ChatGPT-type model on a 2-watt chip. That will definitely enable this kind of service in the future.
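The cost trade-off described above (a pricier on-camera chip versus recurring cloud fees) reduces to a simple break-even calculation. Every number in this sketch is a made-up placeholder, not Ambarella or Ring/Nest pricing; the point is the shape of the arithmetic, not the values.

```python
# Rough sketch of the edge-vs-cloud cost trade-off: a higher bill of
# materials up front versus recurring per-camera cloud costs.
# All dollar figures are hypothetical placeholders.

def months_to_break_even(extra_bom_cost, cloud_cost_per_month,
                         edge_cloud_cost_per_month=0.0):
    """Months of service after which the pricier edge camera becomes
    cheaper than streaming everything to the cloud."""
    monthly_saving = cloud_cost_per_month - edge_cloud_cost_per_month
    if monthly_saving <= 0:
        raise ValueError("edge must reduce recurring cost to break even")
    return extra_bom_cost / monthly_saving

# Hypothetical: +$8 BOM for a bigger SoC and DRAM, versus $1.50/month
# saved in transmission, storage, and cloud inference per camera.
print(round(months_to_break_even(8.0, 1.50), 1))   # 5.3
```

Under these assumed numbers the extra silicon pays for itself in about half a year, which is the argument Wang says camera makers find convincing.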
Yeah. I mean, the value of these applications really seems to be growing. Can you talk about robotics a little bit? It seems like drones are on the path there. You go around Los Angeles and see little refrigerators driving around delivering stuff. Before we even get to the humanoid robot upstairs, there seem to be a lot of applications for vision in these robots. Can you talk about your view of that market?
Yes. It's become clear. In fact, I have been saying this for a while: I view the autonomous driving car as just one special type of robot, and that applies to drones too. So I think today, if you look at the biggest robotic applications, it's autonomous driving cars and drones, and new applications are popping up. When I look at these robotic applications, I focus on what we call mobile robots. Any robot that needs to move can take advantage of all our investment in our CV3 technology designed for autonomous driving. An AMR, any future humanoid robot, any drone needs to move; it needs to understand the environment, find a way to maneuver around different objects, plan the path it will take, and then finally decide what function it needs to perform.
That really sounds like an autonomous driving car to me. So from that point of view, we will continue to focus our robotics development on the mainstream revenue-generating models first, meaning cars and drones, and use that to continue funding our investment in this direction. That's why we definitely continue to invest in autonomous driving cars: everything we invest in that direction will be heavily reused in robotic applications. But for me, the biggest problem in all the new robotic applications is that the market is very fragmented.
There are a lot of developers, and they're all trying to demo and showcase their products in prototype form. How to enable those players is important for me, because we are not talking about one or two large customers anymore; we're talking about hundreds of different robotic applications, and we need to engage with them. So we do have a plan. In fact, at CES we're going to have a technology conference where we'll highlight our new products and new technology, and we will definitely highlight how we want to develop a new go-to-market system that addresses these robotic applications.
Yeah. It's interesting because you've historically had fairly concentrated customers in automotive, enterprise security, markets like that.
That's right.
Okay. Makes a lot of sense. One of the questions we get a lot, particularly when you start thinking about these more consumer-centric applications, is gross margin. You have a model of 59% to 62%, and you've had a tendency to walk away from markets where you don't quite see the value. Is that going to be the right margin structure as you think about your future business mix?
In fact, for the consumer applications you're talking about, look at our drone business: we're talking about a $25 chip, and the consumer drone our customer is selling is $1,000. So it's not cheap. There is definitely value there, and people want to buy high quality, particularly if you want to compete with DJI; quality has to be one of the major concerns. So from that point of view, price is important, but gross margin, I think, is important too.
The most important thing for me in the last few years is that we gradually came to realize that while we try to maintain the 59% to 62% gross margin target, we are willing to trade a somewhat lower gross margin for higher revenue, and therefore higher leverage on the operating margin side. That's the trade-off we're talking about, and I think we're only willing to do it with large customers. In the past, we talked about automotive customers being among them, but today our largest customers are on the consumer side. So it's not the consumer-side market driving us to lower prices.
It's really that they have the volume and the potential for higher revenue growth for us. And that's where we're willing to trade off our gross margin.
And drones in particular, I mean, DJI was once a big customer for you guys, and I know geopolitics was part of the issue there. But is it also that there's just a lot more value going into these drones now? Where you were once doing more image capture, now you're doing more image analytics.
Right. So if you look at how DJI drones are used, although they're consumer drones for consumer video capture, they have been reused in many different applications. I've seen people using DJI drones for inspection and for many other applications that aren't possible with any other technology. So drones surprised me: when we were working on drones 10 years ago with DJI, the whole market was about 1.5 million units, and people thought it would saturate at maybe 2 million. Today, we're talking about 10 million consumer drone units.
Out of that drone market, 9.2 million units are consumer or prosumer, and 800,000 are commercial. So I do believe this drone market will continue to grow, because people keep identifying more and more commercial applications. But I think the right approach for us is to focus on customers who have the ambition to be players on the consumer or prosumer side, so that they can drive the scale needed to compete in that 10-million-unit market. With that scale, I think they will have the capacity to develop solutions for commercial drones.
Commercial drones are a lot more profitable. However, if you don't have the scale, you won't be able to compete with a company like DJI, which is already in the market and dominating it. So the business model for approaching this drone market is very important. Technology matters, quality matters, but more importantly, there is already a dominant supplier, and you need to find a way to coexist with that.
Just to double-click on that: the military drone market seems like a very obvious application where you really need good computer vision, but it's also a specialized one, with players optimized around military applications. Could that be an application for you guys as well?
We don't design military-grade chips. However, I do believe some of our customers or design houses are building cameras with our commercial-grade chips and selling into that market. But we don't have any customers who are really at the military level.
Okay. Great. Maybe if we go back to the automotive opportunity: the technology that you've delivered is really a breakthrough, and we saw that years ago. You've gotten wins with some of the biggest Tier 1s that specialize in autonomy, and we just haven't seen adoption yet. Where do you think that stands if you look over the next three to five years? Can people look at the advances of Tesla's FSD and do nothing? Do you think there's a call to action to start implementing some of these features?
Absolutely. In fact, one of the things we talked about is that this is a really bad year for autonomous driving, but people are still trying to figure out how to compete with FSD. Now I'm starting to hear people in the Western world talking about end-to-end models, which is a good thing, because without that, I don't think you can compete with FSD. However, running an end-to-end model, on both the hardware side and the software side, is a huge commitment.
We know that because, if you look at the software we work with (this came from the company we acquired), it took us a few years to get to the point where our software stack is two large models. Combining those two large models into one end-to-end model takes effort, but we know how to do it, and we'll do it. It just takes years to get there. So, as we discussed a few minutes ago, I think one of the biggest bottlenecks for Western OEMs, and for us to penetrate that market, is that we need to start selling our software in a way that adds value to our customers.
We can get better perception with our processing module; we can do sensor fusion between cameras and 4D imaging radar; and we can run everything in a large end-to-end model on our 685. We can demo it. When we demo this and get that software ready for production, I think that's one of the ways we can help resolve the current situation, where people are looking for a software stack and haven't found one. But more importantly, we believe our approach is scalable.
When I say scalable, I mean our approach can scale from Level 2 to Level 4. Of course, you need to reduce the amount of hardware and the number of sensors. But if you train the model properly, you should be able to scale the performance down so that you can use a single end-to-end model to address Level 2+ through Level 4 applications.
Great. I want to follow up on that. Let me see first if we have any questions from the audience.
Just wondering what do you think the market is missing?
About Ambarella?
Yeah, yeah.
I think 99% of AI investment is still in the cloud. A lot of people are here listening to this presentation because you appreciate edge AI, but I think the majority of the industry still thinks edge AI is a niche. Maybe they don't think it can become big, and that's probably one of the reasons they don't pay attention to Ambarella. But personally, I really think that if you give it another 10 years, edge AI can be as big as the cloud, because there are so many applications you're looking at today that have to be implemented on the edge. Robots are an obvious one, and there are many other applications.
If latency matters, if privacy matters, if private data matters, it has to be on the edge. So from my point of view, I truly believe that when people realize there are new applications that require running AI on the edge, we should get our fair chance to compete in this space.
Questions? Maybe just to follow up on auto: how much of these advances are tied to EVs? Because it feels like with internal combustion, implementing a higher degree of autonomy poses a lot of technology challenges that need to be solved with physical actuators and things like that. It's just easier to start implementing these features if you're redesigning the whole vehicle around an EV, as Tesla has, as Rivian has. Do you agree with that? It seems like a really strong positioning for you guys, because a lot of what we're seeing in internal combustion isn't going to translate into an EV, where they just can't meet the power budget that you can.
If you had asked me this question 12 months ago, I would have agreed: EVs and autonomous driving really go hand in hand. Now, with people starting to delay EV rollouts and slowing down EV timelines, we're hearing a lot of OEM customers ask how they can implement autonomous driving on ICE cars. In fact, we're starting to see RFQ bidding on that, because those cars need autonomous driving to stay competitive. With EV schedules delayed, it really brings more attention to ICE cars and autonomous driving.
Although we're just starting to hear about it, I won't be surprised to start seeing autonomous driving functions enabled on ICE cars. I mean, it's incredible to me that we've had the breakthroughs we've had on reasoning models at the edge, and we've actually moved backwards in autonomy. It seems like we can only move forward at some point. I don't want to comment on the political environment, but that's a reality we need to deal with. But on reasoning models, let me give you another example: we can run a reasoning model on our 2-watt chip today.
We've talked about this: our CV75 is a 2-watt chip, and we can run a 2-billion-parameter DeepSeek model on it. But the problem is, what's the real application for a reasoning model on each device? I think whoever figures that out is going to be one of our biggest potential customers. We are not the ones to drive applications for AGI, but we are enabling functions that were not possible in the past. We are definitely thinking that with our silicon we enable something that was impossible, and hopefully our customers can take advantage of that.