Great. Welcome back, everybody. I'm Joe Moore. Very happy to be here with the CEO of Ambarella, Fermi Wang. So Fermi, maybe we could just start out. We're in a period where there's more enthusiasm for AI than we've ever seen, and you have 60% of revenues coming from Edge AI. So maybe put that into perspective. You didn't really call yourself an AI company in the beginning, although that's clearly what it was, even then. Maybe you could talk a little bit about what you do in the Edge AI world and how that fits into the overall AI ecosystem.
Thank you, Joe. Going back six or seven years, we started what we call our CV2 family of chips. At that time, AI was very simple. It was really about object detection and classification, traditional computer vision, with neural networks enabling some basic AI functions. But things have changed quite dramatically. Last year, we generated 60% of total revenue from AI silicon, mainly the CV2 family. Then we came out with our third generation of computer vision chip, called CV3, which is also AI-based, but this time targeting much higher AI performance for transformer-based networks, and targeting automotive. When we started building CV3, in our wildest dreams we never thought GenAI would happen. We really defined the CV3 family around transformers for autonomous driving: Level 2, Level 3, and Level 4.
Fortunately, transformers, thanks in part to Tesla, became a major requirement in automotive, and we can show that with CV3 we deliver much better AI performance than our competitors. In fact, if you look at the Level 3 software stack we demoed at CES, 95% of the total processing runs on our AI engine; only 5% runs on a traditional CPU like the Arm cores. That's how much of the autonomous driving workload we have moved to AI. Then, with the latest GenAI, which is also transformer-based, we found that our CV3 chips can deliver a lot of GenAI performance. So you can see that over the last few years we started with the CV2 family for very simple traditional computer vision functions, then moved to transformer-based processing for the autonomous driving domain controller.
Now we are addressing an even bigger opportunity in GenAI.
This isn't a trend-following thing. The incumbents doing image recognition in automotive are still mostly using heuristic algorithms. You guys very early on recognized that AI software was the way to solve this. Is that right? Can you talk a little bit about that software approach you have?
Exactly. So the software stack we developed is really focused on what people call end-to-end AI. That means almost all the modules, from perception to sensor fusion to path planning, even controlling the car, are AI-based. We also used heuristics at the beginning. When we acquired VisLab in 2015, which is really the company that built our initial software stack, their stack mainly ran on CPUs. Then we gradually moved it over. But why is moving to AI so important? I'll give you an example. In Europe, when we do road testing, one of the most difficult problems is the roundabout.
When cars approached a roundabout in the past, when we used heuristic algorithms, we had to specify exactly how the car should behave at every roundabout.
As a human, I have that problem too.
Then, after we switched to AI, the first time I saw the car approach one, I was shocked. When the car approached the roundabout, it pushed ahead. It didn't stop at the line. It pushed over the line, just like a human driver, because you get better visibility that way. I went back and asked why, and it turns out the system learned exactly that from human behavior. That's just one simple example of why learning matters. I don't think the car understands why it does it, but because it achieves better performance and accuracy, that's the behavior the model keeps using. So we believe a lot of heuristic code will be replaced by AI in the future, particularly for autonomous driving.
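The learning Fermi describes is essentially behavior cloning: fit a network to logged human driving so the policy reproduces human habits instead of hand-written rules. Below is a minimal sketch of that idea in Python; the observation and action dimensions, the network, and the data are all invented for illustration and are not Ambarella's stack.

```python
# Behavior cloning in miniature: learn roundabout behavior from logged
# human (observation, action) pairs instead of hand-coded heuristics.
import torch
import torch.nn as nn

obs_dim, act_dim = 64, 2  # e.g. fused perception features -> (throttle, steering)
policy = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, act_dim))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-ins for logged human driving data collected at roundabouts
obs = torch.randn(1024, obs_dim)
human_actions = torch.randn(1024, act_dim)

for step in range(200):
    loss = nn.functional.mse_loss(policy(obs), human_actions)  # imitate the human
    opt.zero_grad()
    loss.backward()
    opt.step()

# No explicit roundabout rule appears anywhere; the creep-over-the-line
# behavior emerges from the data, exactly as described above.
```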
So I think we define a lot of the AI opportunity today by training in the data center, which is not a market you have pursued. And it sounds like, after some thought, you're not pursuing inference in the data center either. So can you talk about the merits of what you are doing in the markets you are focused on?
Right. The reason we decided not to go to the data center is that we are still a much smaller company trying to address a very large market. It's in our best interest to focus on areas where we have differentiation as well as expertise. That's why we focus on Edge AI inference for the GenAI market, particularly markets we are familiar with: security cameras, automotive, robotics. Those are markets where we know the customer base, and that's why we want to focus on them for GenAI first. But that doesn't mean we're going to limit ourselves to those markets. What we focus on will continue to be Edge AI.
Any market that requires very low latency, very low power consumption, or data privacy for GenAI, any market satisfying one or several of those requirements, should be a target market for us.
Okay. We've talked a lot about CV2 and CV3 over the last two or three years. To put this into perspective, now that we're looking at transformer inference and some of its complexity, can you talk about the role of CV2, which is your existing revenue stream, and how CV3 evolves from there?
Right. In fact, it's easiest to use our ASP progression to frame the discussion. Before our AI chips, our corporate ASP was $6 to $7. After we introduced the CV2 family, our ASP today is $12 to $13, basically double. When we look at CV3, the ASP is going to run from $40 at the low end to north of $400 at the high end. So we're going to continue to see ASP improvement, and we believe that when we get to GenAI, our ASP will be even higher. That's why, a few years ago, we decided that instead of identifying ourselves as a video company, we want to identify as a company focused on AI performance at the Edge. That's how we define ourselves at this point.
So maybe you could educate me a little bit on this. For a long time, AI was focused on convolutional neural nets, and the inference task was not simple, but it was manageable. Even in the cloud, you didn't need $25,000 cards to do inference. You migrate to transformers, and the inference task becomes several orders of magnitude more complex, and suddenly we're using this very expensive hardware. I think the same thing is happening at the Edge. Can you explain that transition for us?
Okay. So, for example, the problems our CV2 family solves are road edge detection, open-space detection, license plate detection. There's an object, and that object is a car or a dog. It's really object classification and detection, maybe some open-space detection: low-level object categories. When you move up to the transformer level, you're talking about a different level of integration. For example, transformers are famous for building a bird's-eye view from multiple cameras. You integrate multiple camera inputs and build a bird's-eye view around the car so you know how to navigate through traffic. The performance required to integrate multiple cameras, compared to just identifying a few faces in one camera, is a different level of performance requirement.
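To make the multi-camera point concrete, here is a toy PyTorch sketch of the bird's-eye-view (BEV) query mechanism behind transformer fusion methods such as BEVFormer. Every shape, size, and name is an illustrative assumption, not Ambarella's implementation.

```python
# Toy BEV fusion: a grid of learned BEV queries cross-attends to features
# from all cameras at once, producing one top-down map of the scene.
import torch
import torch.nn as nn

N_CAMS, H, W, C = 6, 16, 28, 256  # cameras, feature-map size, channels
BEV_H, BEV_W = 50, 50             # 50x50 BEV grid around the vehicle

class ToyBEVFusion(nn.Module):
    def __init__(self):
        super().__init__()
        # One learnable query per BEV cell; each asks "what is at this spot?"
        self.bev_queries = nn.Parameter(torch.randn(BEV_H * BEV_W, C))
        self.attn = nn.MultiheadAttention(embed_dim=C, num_heads=8, batch_first=True)

    def forward(self, cam_feats):
        # cam_feats: (batch, N_CAMS, C, H, W) backbone features per camera
        b = cam_feats.shape[0]
        # Flatten every spatial position of every camera into one key/value set
        kv = cam_feats.permute(0, 1, 3, 4, 2).reshape(b, N_CAMS * H * W, C)
        q = self.bev_queries.unsqueeze(0).expand(b, -1, -1)
        bev, _ = self.attn(q, kv, kv)           # each cell attends to all cameras
        return bev.reshape(b, BEV_H, BEV_W, C)  # unified top-down feature map

feats = torch.randn(2, N_CAMS, C, H, W)
print(ToyBEVFusion()(feats).shape)  # torch.Size([2, 50, 50, 256])
```

The sketch also shows where the compute goes: all 2,500 BEV cells attend to every position in every camera, which is why transformer fusion demands far more AI performance than per-camera detection.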
And let's take it one notch up. At CES we demoed a GenAI network, an LLM called LLaVA, on our N1 chip. The idea is that the N1 takes a live video input and runs LLaVA, and LLaVA's output describes in text what it sees in the video. So basically you can think of it as video-to-text conversion. This is something that could not be done in the past, because the description is very detailed. It doesn't just say how many people are there. It says people are wearing black suits, wearing black pants, drinking coffee; it also identifies how many people are in the space and what kind of environment it is. The camera describes what it sees in detail.

So basically you can imagine it's as good as a security guard sitting in front of a monitor describing what he sees. Suddenly this becomes a very useful tool for security anywhere. Let's say an airport has 1,000 cameras. If you can feed those 1,000 camera feeds into the software, every second you can generate a text report about what's happening in the airport. And with the text search technology we have today, you can have a database of what happened in the airport every second. Think about how powerful that is and how it will change the security requirements in different environments. So that's one example: from very simple face detection all the way to software that can monitor all the security cameras in an airport.
That's the transition we've gone through over the last eight years.
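For readers who want to see what that video-to-text loop looks like in code, here is a minimal sketch built on the publicly released llava-hf/llava-1.5-7b-hf checkpoint and the Hugging Face transformers API. The video file name is hypothetical, and the N1 demo runs Ambarella's own port rather than this stock pipeline.

```python
# Caption one frame of a camera feed with LLaVA: video in, text out.
import cv2
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

cap = cv2.VideoCapture("airport_cam_042.mp4")  # hypothetical camera feed
ok, frame = cap.read()                         # grab one frame to describe
image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

# llava-1.5 prompt format from the model card; <image> marks the frame
prompt = "USER: <image>\nDescribe in detail what is happening. ASSISTANT:"
inputs = processor(text=prompt, images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=120)
print(processor.decode(out[0], skip_special_tokens=True))
```

Run on a sampled frame every second per camera, the printed descriptions become exactly the searchable text log of the airport that Fermi describes.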
Great. That's very helpful. So I guess the process of turning all of this into revenue has been a little lumpier along the way. Can you talk about the growth profile you still have? CV3 is a lot of exciting opportunity, but it's still pretty far from revenue. Can you look at the IoT market and the automotive market as, kind of, the CV2 opportunities?
Right. For CV2, in fiscal year 2024, which was last year, 60% of our total revenue came from the CV2 family. So 60% of revenue comes from AI, and we expect that to continue to grow this year, although we haven't disclosed the percentage. In fact, that share of revenue went from 20% to 45% to 60% in three years, which shows you how fast our CV2 AI revenue has grown, and we expect it to continue growing. For example, take CV5, a high-end chip in the CV2 family. It's our first 5 nm chip, the ASP is roughly $40, and last year we shipped 500,000 units. This year we're going to double that. And that is a high-end chip.
The volume is going to continue to grow. That shows you how fast AI is growing and how important it is for our revenue. CV3 will take this to a different level. Automotive is a much bigger market, and at the same time the ASPs are higher: our low-end CV3 chip is at $40, and our high-end CV3 chips will be $400 or above. That ASP jump by itself is going to provide our next step function of revenue growth. So it's not only that we add more value for our customers; we also collect more value from our products.
And I apologize if this seems like a negative-leaning question. I'm actually so enamored with the technology, and it's been hard to get to the revenue level I thought you'd be at; that's why I'm exploring it this way. If you look at surveillance, there was a very clear growth trajectory where you had video processing at a low ASP and you were replacing it with AI at a higher ASP. And you did do that; you did continue to ramp that. But you had more headwinds in the legacy business than I expected. So maybe you could talk a little bit about that transition within surveillance, and how you see that continuing to move forward.
Right.
You know, you've gotten past some tactical inventory correction, which we don't need to rehash. But can we just understand the bigger-picture opportunity evolving?
In the IoT space, I think the biggest issue is that when we identified AI, we decided that's where we needed to focus 100% of our attention, and we decided not to bring out any more pure video processor chips, which became the downside. So in the IoT space, you see the AI chip revenue continue to grow while our video processor revenue drops. That's probably the biggest issue. But really, the biggest problem for us last year was the inventory correction. I won't say 100% of the problem last year was inventory correction, but it played a very big role. And that's why I think our investment in AI will pay off in the next few years: our video processing revenue has become much smaller, and our AI revenue should continue to drive our growth.
I also want to make a point on CV3. Getting into the automotive business is definitely taking longer than I thought.
Yeah.
That's been a problem for us. The biggest pushback from the OEMs was not our technology; it was our scale. That's become very clear. I'll admit to our investors that had we been a much bigger company, we could have collected a lot more revenue. But the reality is we are a smaller company. So the way we deal with this problem is, first of all, to work closely with large Tier 1s. Hopefully, by showing up with a Continental, Bosch, or Quanta in front of our OEM customers, their size helps us address the scale concern the OEMs have. That's definitely one thing we've spent a lot of time on over the last 12 months.
I mean, if you do the work here, your technology is really good. Everybody you talk to talks about the breakthrough capability you have, particularly within the right power envelope. So how much of that automotive problem you describe is the evolution of semis in general being a little slower? It seems like the focus has been on battery-powered EVs, and a lot of the innovation got siphoned away. And even now, you have these wins you talked about with Continental and Bosch, which, for anybody who hasn't seen them, you should see at CES. In both of the last two years you could see these really intricate 12-camera demos. But the time frame for that being implemented in cars seems to have been pushed back.
So not all of that is in your control. But how do you think about those dynamics?
I think what's happened in the last three years has become very clear to everybody: the focus on Level 4 is dying.
Yeah.
Even Level 3 got pushed out because there are so many regulatory problems. We have been focusing purely on L2+. L2+ is not one product; it's a very wide range of performance. In fact, in China, our low-end L2+ solution sells for $40 and our high-end L2+ solution sells for $200. That shows you how wide the range is and how many products you can build in there. And I think after COVID, everybody reset their product planning from Level 3 and Level 4 to L2+, because everybody needs revenue. All the automotive players are focused on getting products out and getting revenue, and the fastest way is to focus on low-end L2+, or to try to compete with Tesla's FSD-type performance with higher-end L2+. Either way, that's the focus right now.
That's one of the reasons for the delay. But everybody is focused on L2+, and I expect it will become the mainstream product in the next few years. Also, CV3 silicon wasn't available until last year, and that's definitely one of the reasons we didn't see enough revenue or design wins early enough.
Yeah. You talked about China. When you say the focus has shifted to L2+, is that also a comment on the Chinese market? And just in general, it seems like the regulatory environment in China is much more pragmatic: the bar is whether it's safer than a human driver, versus in the U.S., where it needs to be completely safe.
I think what you said is correct. In China, they are moving even faster toward L2+. It's clear the majority of design win activity in China today is L2+. Again, in China most EV companies target Tesla performance and say, I want to be Tesla. Right? That's the goal. I would say the majority of the design win activity we are seeing in China is focused in that area: either very low-end, to get to market quickly and make money, or high-end L2+ competing with FSD so they can claim they are the performance leader. That's the range we are seeing in China.
In China, although the EV market has slowed down, the Chinese vendors are no longer looking only at the domestic market. They see export business as a market they can tap into, and that's one of the reasons they need to use foreign components. That's where we think we get some tailwind in the Chinese market.
It's really interesting. I went to the Qualcomm Automotive Day a year or so ago, and they talked about all this progress knocking Mobileye out of some key accounts, which they've done publicly. And they did that without really showing any technology breakthrough. I actually asked them, how are you knocking out an incumbent without a technology breakthrough? And the answer was, essentially, that people want seamless integration with the infotainment system, and they want it to look like an Android phone so people don't just turn everything over to CarPlay. Which was kind of a depressing answer to me, because it sort of says the market's evolution has really slowed down. So do you see that changing?
I mean, the promise of L3 and L4 is still really exciting to me. Is the regulatory environment really enough to keep that from ever happening?
Right. So I think L3 and L4 will happen, but they will be pushed out for different reasons. The technology is not ready just yet, and that plays a major role, particularly in the U.S., where regulation is tough. But let's talk about integration. I think everybody agrees that more integration drives down cost, which makes the product easier to sell. But for automotive, I think the right integration is not to combine the infotainment system with the safety domain. In fact, even within the safety domain, in autonomous driving, there are still plenty of things to integrate. For example, your camera and radar systems should be integrated. Today, most cars still keep the radar system and the camera system totally independent: you put them outside your domain controller and try to integrate the results later. I think that's wrong.
The first integration that needs to happen is in the domain controller for the safety domain. Everything needs to be integrated into the domain controller. That's the first step of integration. There are many arguments for integrating the infotainment system with the safety system, the main one being cost, but there are arguments the other way too. Take cybersecurity: when your infotainment system downloads software every day or every week, how do you protect your safety domain from viruses getting into it?
Yeah.
That's a real problem.
Yeah.
Right? The cell phone is not the best example of cybersecurity. So I think there are many reasons we should keep the domains separate, and within each domain you should integrate as much as you can. For example, I definitely think the infotainment system should integrate everything possible, like a cell phone does. But in the safety domain, just integrate enough sensors, enough processing, enough AI performance for Level 2, L2+, Level 3. We still have a long way to go.
Great. And you mentioned radar. Maybe you could talk a little bit about Oculii, the acquisition you did, which seems like a potential breakthrough in the implementation of radar. Can you talk about the traction you're seeing with that?
Yes. In fact, after we acquired Oculii, we quickly decided that the best way to apply this technology is in centralized radar: integrating all the radar processing into the domain controller, instead of the edge processing that has been used for the last 30 years. Today, all radar signal processing happens at the edge. When you have a radar module, the processing is done there; at the end, each radar head generates an object list and passes it to the central processor, which integrates those objects. That has worked for the last 30 years. But with current technology, the best way is to take all the raw radar data, together with the camera data, and integrate everything in a domain controller like CV3.
With that, you do low-level sensor fusion, and you can achieve the best performance. We proved that with our own software at CES. If you rode in our car at CES, you saw a very dense point cloud that we bring from the radar heads into the domain controller. We take in six radars plus all the cameras, and we have enough performance to process everything at the point cloud level to achieve the best object detection. That, I think, is the future. And we already see OEMs asking for these features: in Europe and in China, when we respond to RFQs, we already see OEMs specifying centralized radar as a requirement. I think that's the progress, and I hope we can keep building on it.
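As a concrete illustration of what "centralized" means, the sketch below gathers raw 2D returns from six radar heads into a single vehicle-frame point cloud for the domain controller to fuse. The mount poses, point counts, and NumPy implementation are illustrative assumptions; the Oculii algorithms that make this data volume manageable are proprietary.

```python
# Centralized radar in miniature: ship raw points (not object lists) from
# each radar head and merge them into one vehicle-frame point cloud.
import numpy as np

def pose(yaw_deg, tx, ty):
    """Homogeneous 2D transform for a radar head mounted on the car."""
    yaw = np.radians(yaw_deg)
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

# Six radar heads: (mount pose, raw returns in the head's local frame)
heads = [(pose(a, *xy), np.random.rand(500, 2) * 50)  # 500 raw returns each
         for a, xy in [(0, (3.8, 0)), (180, (-1.0, 0)),
                       (45, (3.5, 0.9)), (-45, (3.5, -0.9)),
                       (135, (-0.8, 0.9)), (-135, (-0.8, -0.9))]]

def to_vehicle_frame(T, pts):
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # (N, 3) homogeneous
    return (homo @ T.T)[:, :2]                       # rotate + translate

cloud = np.vstack([to_vehicle_frame(T, pts) for T, pts in heads])
print(cloud.shape)  # (3000, 2): one dense cloud for low-level fusion
```

The design choice is the point: the old architecture collapses each head's 500 returns into a handful of objects before transmission, losing exactly the density that makes low-level fusion with cameras work.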
I mean, it seems like a no-brainer to implement radar that way, when you describe it.
It is. However, the problem is that without the Oculii software, the amount of information you bring to the domain controller is huge. You really need a way to bring in less information but still process a huge number of data points. That's the beauty of the Oculii algorithm, and that IP is the reason we acquired the company. Because of that IP, we believe centralized radar can be done efficiently. Without it, it's a difficult problem.
Okay. So in that context, how do you think about LiDAR? Obviously, slower L3 and L4 means less LiDAR anyway. But is there still some sensor integration that has to take in LiDAR, or do you think radar plus optical is adequate?
I think people who can afford LiDAR should use LiDAR. Particularly for Level 4, you should use as many sensors as possible to improve safety. However, when you come to L2+, you might not have the luxury of expensive LiDAR. We believe centralized radar can be a good replacement for LiDAR signals, and hopefully the demo we showed our customers at CES, where a single CV3 chip takes in multiple cameras and multiple radars and does all the sensor fusion on chip, convinces people that's the right way to go.
Great. Maybe you could talk a little bit about the progress you're seeing in automotive today. We talked about L3 and L4, but you obviously have driver monitoring, digital rearview mirrors, things you've been talking about for a while. Where are you at monetizing those?
We continue to announce design wins and revenue in those areas. However, the penetration of those markets still continues to be small, and that's why you don't see a huge jump in our revenue. And the ASPs are pretty low: single digits to low teens for DMS, OMS, or even ADAS today. My belief is that when L2+ starts taking over the market, those functions will gradually be integrated. So that's another reason that, although we continue to win designs for OMS, DMS, and ADAS with CV2 family chips, we know that when CV3 comes in, it will replace those silicon solutions with a much better integrated solution. That's how we view the market right now.
Okay. And then, in terms of your traditional IoT markets, can you characterize where we are? We've gone through the inventory correction you talked about. It feels like we're through it; it feels like you saw it before others and have emerged from it before others. But you're still kind of along the bottom a little bit.
Right.
So can you characterize that for us? And do you think you have enough visibility to feel really good about the growth from here?
Right. So first of all, we definitely believe the worst of the inventory correction is behind us, and we say that based on some data points. First, throughout the last 12 months we continued to talk to our customers, and a big portion of them are already on their way out of this problem. Some companies are still managing down inventory, but a big portion are done. Second, I've been watching our bookings and ordering for the last few months. The recovery is significant and continues to stabilize our business. So from that point of view, we feel comfortable with our Q1 and Q2 bookings right now. Q3 and Q4 remain to be seen, but at least the momentum we are seeing gives us confidence that we have bottomed out already.
Yeah. I would think the fact that some of the peers have really struggled would actually help the sentiment for you guys a little bit.
I hope.
We saw companies like Silicon Laboratories run into a lot of the same issues you saw, only later. So okay, that makes sense. And then, where are you with consumer surveillance? That's a market where it seems like a lot of these AI features would really be relevant, but it's also really expensive for cameras at those price points. Can you talk about that?
So consumer surveillance is really designed for the home. Like I've said in the past, there are two segments right now. One segment is extremely cost-sensitive: on Amazon, you can buy a $20 to $30 home security camera today, and those vendors try to bundle services and make money on the service side. The other approach is putting a lot of AI into the camera so they can provide better service that way, but the service fees are higher. The trend we are seeing is more toward the cost-effective solution, because that's where it's easier to get market share in unit terms. That's the direction, and that's an area where we stopped investing a long time ago. We haven't invested in a video-only solution; we focus on the AI solution.
So the consumer IP cam is definitely an area where we are losing market share. We've talked about this before. But if any customer comes back wanting meaningful AI performance, I think we can still be a supplier to them.
When you say you've lost market share, you still have some of the premium designs, some of the top doorbells, things like that? You guys are still in that?
Absolutely. In fact, not only do we still have some critical design wins, we're still getting design wins. It's just that even the customers with high-end solutions also want to do low-end, and for those low-end markets we don't have a solution.
Okay. So you have consumer surveillance with some potential, professional surveillance here past the bottom of the inventory correction with a good ASP lift, and in cars you're seeing a lift from some of the CV2 designs. And then CV3 is more 2026. Is that the right time frame to think about?
Yeah. We have been saying that CV3 will start ramping up in China in 2026. Outside China will be a 2027 story.
Okay. And your visibility into that at this point is good? I know you have a funnel, and we can talk about where the funnel is derived from. But I know you have some of it probability-weighted.
Right.
What's your probability weighting on some of those? Do you have any that are moving into high probability at this point for 2026?
Well, we haven't updated the design win model since November. But we definitely see some of the designs moving in the right direction. Nothing has moved to 100% yet.
Yeah.
But they're definitely moving in the right direction. However, even just for the won business, we talk about $800 million, which is guaranteed.
Yeah.
That's a design win we've got. The only thing that will change is the unit volume, which can vary when they go to production. That's what we are looking at right now.
So you're talking about less than $100 million of trailing automotive revenue and a six-year funnel of $2.4 billion, which includes a fair amount of probability-weighted business in the back half of that six-year window.
Correct.
Right. Yeah.
So out of the $2.4 billion, $1.6 billion is probability-weighted.
Yeah.
And $800 million is won business.
$800 million is confirmed. Yeah, okay, great. And then, can we talk about other opportunities outside of vision? You're not going to participate in cloud-based inference, but you've talked about inference at the Edge. What does that mean? And what types of LLM opportunities might you see that Ambarella could participate in?
Well, like I said, initially we're going to focus on our existing customers, which means video-centric applications. But I believe a few things will help us go beyond video applications. One is the latency issue, the real-time response time. One is power consumption. But another very important thing is that more and more Edge customers, when they retrain their LLM models, want to keep the data private to the company. That means people are going to fine-tune on their own data at the Edge and also run inference at the Edge. So I expect that for people who are paying attention to latency, power consumption, and data privacy, we have a chance to penetrate. That's where we focus beyond just video applications.
Okay. Great. And then, what other video applications do you see? You've talked about access markets; you've talked about the robotics market. Which of those are you most excited about?
Well, first of all, are you talking about LLMs or just in general?
No, take it for vision as well.
For vision, I think robotics continues to be an important market for us. Particularly now, with the latest LLM developments, the people familiar with this market that I talk to all agree that eventually the LLM will be the model used in robotics: one single model controlling the whole robot. It doesn't matter whether it's an industrial robot or a home robot or whatever robot. Even when you get to Level 3 or Level 4 robots, I believe the LLM will eventually become the best solution for this kind of thing.
I think it cannot be cloud-based.
It cannot be cloud-based. Think about your car being controlled from the cloud.
Yeah.
Yeah. Well, Cruise already proved that doesn't work, right? So the key is making sure we continue to develop a platform with low enough power and enough performance to run a large language model. However, I want to point out that not all language models need a trillion parameters. In fact, if you focus on controlling a robot, you can remove anything related to cooking, the New York Times, movie theaters; you cut off all the unnecessary information and retrain purely for the robot. A small model, 30 billion or 70 billion parameters, can be very useful if you focus on one particular vertical.
Great. Do we have a question from the audience? Give them a mic.
Yeah. I wanted to get some clarification on your comments about inferencing at the Edge and going beyond existing video-centric applications. You mentioned power consumption, latency, and privacy of data. What exact use cases and deployment systems are you referring to? Are you putting these inside servers? What are they sitting next to? Can you give us a little detail on how?
Let me give you an example from inside my own company. Today, CV3 programming is pretty unique, and we try to use Copilot for C programming. But when you come to CV3 itself, we believe we need to fine-tune one of the LLMs for it. This training and inference data needs to stay inside our company. We don't want the CV3 programming data we put into training to end up in a public-domain LLM. So you have to fine-tune the LLM on our own data and run inference locally. That's one example.
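For a sense of what that private fine-tuning workflow can look like, here is a minimal sketch using LoRA adapters via the open-source peft, transformers, and datasets libraries, with model and data kept entirely on local hardware. The base model choice and the training file name are hypothetical; this is a generic recipe, not Ambarella's internal setup.

```python
# Fine-tune an open code LLM on proprietary text that never leaves the
# building; LoRA keeps the trainable parameter count small enough for
# a single local machine.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "codellama/CodeLlama-7b-hf"  # any open code model would do here
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"]))

# Proprietary corpus stays on-prem; the file name is a placeholder
ds = load_dataset("text", data_files="cv3_internal_code.txt")["train"]
ds = ds.map(lambda x: tok(x["text"], truncation=True, max_length=512),
            remove_columns=["text"])

Trainer(model=model,
        args=TrainingArguments("cv3-lora", per_device_train_batch_size=2,
                               num_train_epochs=1),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()
```

Inference then runs on the same box against the adapter-loaded model, so neither the training data nor the prompts ever transit a public API.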
So I guess two questions. When you say fine-tune, are you using RAG, retrieval-augmented generation, sorts of techniques? And what is the ultimate end product that CV3 will be in? Is it going to be in a server? Is it going to be deployed in an end device? What's the point of deployment?
The end device can be a PCIe card sitting in a server, or an AI inference machine you design locally for just one particular application. For example, the N1 we're trying to use in our company is really used to generate code and generate documents, particularly for our silicon.
Okay. So it's going to be in a server, but it will sit on something like a PCIe card. And does it need to be in a SmartNIC format to receive that data? Or is it offloading the data from the central processing unit?
Because we don't want this data communicated outside, you can assume this is really a box sitting alone by itself. All the data going in and out is our proprietary data, and it stays there, right?
Okay.
It's a PCIe box, but with our CV3 chip in there, running our own models.
Are you going to market with OEMs specifically, or are you going to deploy this through, say, Quanta or some of these ODMs? Or do you feel these can be deployed in general-purpose servers, like HP, Dell, et cetera?
We do not plan to sell the model we built for CV3. But we will definitely sell the PCIe card with Quanta. Quanta can sell the PCIe card to anybody who wants to develop a similar kind of model for their corporation, and they can run the local model on the PCIe card.
If you're going through that model, it's not going to be a standard product; it's going to be very customized. So the time to market has got to be really, really long, right? Or am I missing something? Could you make this a sort of standardized platform that's easier to deploy?
Well, you know.
Thank you.
Thank you. We used this as one example, but I don't think we're going to do a standard product. An Edge server has to be specified for certain applications. For the security camera example, some of our security camera customers will build their own box and sell it to airports, schools, and so on. It can't just be something general-purpose sitting in a cloud. Yeah.
I mean, it seems like a very focused effort on things that you can do, things you're going to be particularly good at, where you don't have to go invest hundreds of millions of dollars to compete.
That's the key, right? We try to stay focused and limit the OpEx needed to enable our first few customers.
That chip is essentially a CV3 with some of the automotive stuff stripped out of it.
I should say the N1 is, yes.
Okay.
Yeah.
Okay. Great. Any other questions from the audience? Can I actually ask a financial question with a non-financial motivation? Your gross margins have always been really high, and you always have legacy revenue; it seems like that's a little bit of a headwind. Have you thought about sacrificing some gross margin to preserve some of that? Is that a trade-off you could make?
We have been doing that for a long time. In fact, the trade-off is this: if you ask me to serve a customer who only wants our low-gross-margin business, that's not what we want to do. However, a large customer has a mix of high-gross-margin and low-gross-margin business, and we have to take the low-gross-margin business to keep them with us. That's where we're willing to sacrifice gross margin, to keep the customer 100% with us.
You're not hamstrung by a low-60s percent gross margin target, anything like that?
No. Well, a 60% gross margin is a target I think we can achieve, but it doesn't work the other way around; the target doesn't constrain us.
Yeah. Okay. Great. Well, with that, we'll wrap it up. Fermi Wang, thank you very much.
Thank you very much.