Well, here we are. It's wonderful to be able to resume Ambarella's CES tradition after a virtual event last year. Ambarella's first Capital Markets Day was in March 2018, more than three years ago, and we tried twice in the last year to hold this at our office in Parma, Italy, and were thwarted by the pandemic both times. We're really excited, as you can tell. We've taken a lot of precautions to protect your health and our employees' health, and we appreciate everything you've done to persevere and to be here. Now please allow me to read our forward-looking statements.
Today's presentation and discussions will contain forward-looking statements regarding our strategy, plans and objectives for future operations, technology trends, future product introductions, projected financial targets, and the size of markets addressed by our solutions and the growth rates of those markets, our ability to achieve design wins, our ability to retain and expand our customer and partner relationships, among other things. These statements are subject to a variety of risks, uncertainties and assumptions. Should any of these risks materialize or assumptions prove to be incorrect, actual results could differ materially from these forward-looking statements. We're under no obligation to update these forward-looking statements. These risks, uncertainties, and assumptions, as well as other information on potential risk factors that could affect our business, are more fully described in the documents we file with the SEC.
Again, thank you for persevering to be here for our second annual Capital Markets Day. I greatly appreciate it. There's been a significant global and cross-functional effort to make this a very rich, fundamental research experience for you, and we hope that you'll be pleased. We'll have six presentations from executives at the company. We have more than 30 demos, both indoors and outdoors. I hope this will be a positive experience for you, and a safe one. Let's go through the logistics of the day. The floor map here shows we're in the Tropicana Room right now, marked CMD and highlighted in yellow. As many of you have seen, we have demos in the Flamingo Room.
Please also note our Oculii demos are in the Stardust Room. Given the timing of the acquisition, we had already laid out our CES plan, and that's why Oculii is out in Stardust. Please make sure you ask your host to see them. Of course, outside we have several different vehicle demos, an ADAS car, two EVA cars with our latest L4 stack, and a car with Oculii's radar installed on it. The demos will be open today until 5:00 P.M., maybe 5:30 P.M. If you have a reservation, please meet out in the parking lot. You don't need to wait indoors for it. Please go out to the tents.
If you don't have a reservation, we might be filled up for today, but if you want to try later in the week, the front desk where you got your name badge will be able to schedule that for you on Wednesday, Thursday, or Friday. Turning to the agenda, again, six presenters today. We're going to run through them all together, no breaks. One of our engineers, Lars, told me it's like a Las Vegas timeshare: you can't leave the room until you buy. But once we do start Q&A, you're welcome to move around; some of you have appointments for the car, some of you might want to see demos, and some of you might want to stay for Q&A.
This whole event will be recorded, so if you do miss Q&A, you can catch it online later. To our online participants, thank you very much for joining us today. We hope this is a very fruitful experience for you, and over the next month we'll be adding videos online that you can access to see some of what you're missing today. I will be accepting questions from the online participants and reading them, mixing them in with the questions from the room. I won't go through the bios.
Again, these will be available in a document after our presentation is over today. However, I would like to introduce two executives who will be available during the Q&A following the management presentations, just so you know who they are. Chan Lee. Chan, if you could raise your hand. Chan has been very busy recently with a product you'll be learning about shortly. And John Young, our VP of Finance. With that, I'm going to turn it over to Dr. Fermi Wang. As you all know, Fermi is co-founder, President, and CEO of Ambarella. Fermi?
Thank you, Louis. Again, thank you very much for coming here today and attending our Capital Markets Day. When Les and I co-founded this company in 2004, the underlying premise was that digital video is such a unique data type that it requires a unique silicon architecture to address all of its difficult problems. In the first 12 years, we focused on human-viewing applications with our high-performance, low-power video processors, focusing on markets like consumer cameras as well as security cameras.
In the last five years, we augmented our SoC portfolio with a deep-learning-based AI processor, which enables machines to perceive the environment and make intelligent decisions. It also enables many levels of automation, which are used by many different industries. Today, I'm excited to announce that we're expanding our technology's processing performance and entering a new market with the introduction of CV3; the press release will be available soon. CV3 will be the first domain controller that Ambarella has introduced, and Les will spend a lot of time talking about its details.
This diagram shows what I just said: we started with video processors, moved to computer vision, and are now moving to domain controllers. With this progression, in addition to human vision applications, we can now start to address many different applications that require not just perception, but fusion, planning, and execution. All of these new markets are available to us. Before we talk about the size of those markets, I want to be clear about where in this AI processing hierarchy Ambarella plans to serve.
In this pyramid, the top two layers are where AI processing takes place in servers, in data centers or the cloud, or at the enterprise edge, in servers that are plugged into the wall, most of the time in air-conditioned rooms. Obviously, those two layers are not our target market. Instead, we focus on the foundation layer, which we call IoT endpoints, where executing AI processing requires a fundamentally different and optimized silicon architecture. This architecture needs to address issues like latency, privacy, security, bandwidth, and other real-time requirements within a very limited power envelope. This is where we're going to focus. As you can see, in this IoT endpoint market, there are many different applications.
One of the largest is AI security cameras, a market where Ambarella has been dominant. Another example is the automotive ADAS market, where Ambarella just started taking market share in the last year. There are many other emerging applications in this layer, which will become material for us in the near future. How large are those markets combined? In this diagram, on the right-hand side, we're showing you the serviceable market revenue for Ambarella over the next several years. The darker bars represent the automotive market, and the lighter bars the AIoT markets. Chris will cover all of those numbers in detail later.
If you compare this diagram with the one we generated five years ago, you will notice that the key drivers five years ago were consumer discretionary products like drones and sports cameras. Today, in addition, we are addressing mega trends like security, safety, and higher and higher levels of automation, eventually going to robotics. Financially, there are three major drivers behind this large and growing market opportunity, and all three are triggered by our computer vision technologies. First, computer vision gives us a brand-new product cycle in our existing IoT markets, for example security. Second, computer vision helps us reach into new markets that were not available to us before, for example level 2, level 3, and level 4 autonomous driving.
Third, as we have talked about, computer vision SoC ASPs are twice those of a video processor on a like-for-like basis. With that, we believe the quality and the diversification of our revenue have never been higher. Until recently, our SoC portfolio was mainly focused on video perception, shown at the bottom-left corner of the pyramid. We have said before that the performance requirement for video perception is much, much higher than for any other sensor or processing requirement in this pyramid, which gives us a great opportunity and a great position to continue to integrate more functions within the pyramid into one product, and therefore capture more value. Moving forward, our SoC strategy is to expand horizontally and vertically within this pyramid.
The horizontal expansion means that in addition to just video perception, we are going to provide the complete perception level to our customers by integrating other sensor modalities. The acquisition of Oculii is the first step: with the acquisition, we got radar technology in-house. Moving forward, we'll continue to integrate and interface with more sensor modalities at the perception level, for example ultrasonic. We also plan to integrate vertically, meaning that in addition to the perception layer, we want to address fusion, planning, and even control in the whole software stack. With the introduction of CV3, we will have enough hardware support and hardware performance to complete this vertical and horizontal integration. I also want to make clear that this SoC strategy is designed not only for automotive, but also for all the other IoT devices and IoT markets, including robotics.
On the software side, as you might know, the majority of our engineers are software and algorithm engineers. The software value that we have been providing has really focused on the foundational layers shown at the bottom of the box, and in the past we captured that software value by incorporating it into our SoC price. With the introduction of CV3, we need to, and we will, develop the complete software stack so that we can demonstrate the functional performance of CV3 to our customers. That complete software stack gives us new opportunities to capture more software value by offering modules at the upper levels of the stack. We want to make clear that we do not plan to offer a black-box total solution to our customers.
In fact, our customers have made clear that they don't need that. Instead, what we are trying to do is provide module-based software: we will allow our customers to pick and choose the modules they need, and we will help them integrate those modules into their software. For example, we have already licensed some of our perception modules, like blind spot detection, to some of our OEM customers. Moving forward, we plan to start licensing more software modules, like video processing, stereo processing, or radar processing technology, to our key customers in the automotive space. Combining this strategy with our open platform, we believe we will offer more software value to our customers. In order to deliver this new SoC and software strategy, we have been investing very heavily in R&D.
I hope to use this slide to give you an idea of how much we have invested over the last several years. In fiscal year 2016, Ambarella's management came to the conclusion that our traditional consumer product lines would not give us the revenue growth we could count on. Therefore, we needed to transition our target markets to industrial and automotive. At the same time, we realized that computer vision is one of the key technologies that would help us penetrate those markets. That's why, over the last five years, you see us investing heavily in R&D, particularly computer vision R&D, and you can see that it rose quickly, even while our overall revenue was decreasing because of the weak performance of our consumer product lines. In fiscal year 2021, our total R&D was 60% of our revenue.
Fortunately, that was also the year that our computer vision revenue became material. That's the inflection point we were hoping for, and we're glad we reached it in fiscal year 2021. This year, with our fiscal year ending at the end of this month, we have already provided a forecast for record-high revenue. With that revenue growth, our total R&D will be roughly 50% of revenue. Moving forward, we believe computer vision revenue will continue to rise, which will help us achieve positive operating leverage. A lot of people ask how we can compete with those large competitors, considering we have only about 800 people at Ambarella.
Of course, the $600 million of R&D expense I just showed you on the previous slide helps a little bit. But the most important reasons are our technology, our efficiency, our open platform, and our scalability and flexibility. I hope you have time today or later this week to visit our demos in the next room, which will hopefully give you proof that we do have the technology we need to compete. The reason we have that technology is really our people, our domain knowledge, and our algorithm-first approach. If you have followed our engineering execution over the last several years, you should have noticed that we have followed the most advanced process nodes very closely. For example, CV3 is our second 5 nm chip.
Even with this aggressive movement across process nodes, we continue to help our customers build new products efficiently and quickly. I think that's proof that we have one of the best engineering teams out there. In terms of domain knowledge, the members of our founding team all have more than 25 years of experience in video, radar, or autonomous driving. On top of that, Ambarella is probably the only pure-play digital video company of the last 17 years. Throughout that time, our domain knowledge has again and again produced the most efficient architectures for digital video applications. The last, but probably most important, factor is our algorithm-first approach, and because that's an engineering topic, I'll leave it to Les to discuss.
I think that's definitely the most important reason we can continue to develop advanced technology. To answer the question of whether we can compete, I would argue that not only can we compete with those large companies, we can also win the market, and that confidence is not wishful thinking but is supported by real data. Here, I want to show you two different sets of data. On the left-hand side, the first green bar is our CV Wave 1 revenue, which comes from enterprise security cameras. The second green bar is our Wave 2 CVflow revenue, which comes from home security cameras. The third green bar is our Wave 3 revenue, which comes from automotive.
The yellow part is our video processor revenue. You can see that in fiscal year 2022, our CV revenue will grow to 25% of the total, which we have been talking about for a while. Moving forward to fiscal year 2023, we believe this number will reach 45%. That shows you we continue to build our CV revenue momentum. On the right-hand side is a different set of data that will help you come to a similar conclusion. First of all, we have 275 unique CV customers. When we say unique CV customers, we mean customers who have paid to buy either an SDK, a reference design, or engineering samples.
Out of the 275 unique customers, more than 100 have already reached production with us, and among those customers, they have already built more than 150 unique CV products now in production. We showed a similar set of numbers six months ago: it was 240, 58, and 87 then. That shows you we continue to build momentum with this customer base and continue to help our customers build better products. With these two sets of data, I think there is very strong and growing evidence of market acceptance of our CV product line.
In terms of global operations, we have a total of 874 employees. More than 80% of them are engineers, and among the engineers, more than 70% work on software and algorithms. In terms of our global supply chain, I would say we are lucky to work with some of the best partners we could find: for example, Samsung for foundry, ASE for packaging, and SPIL for testing. As you know, in the last 12 months there has been a huge supply problem. Although long wafer lead times and material shortages persist, I'm glad to say that Ambarella was seldom the bottleneck in our customers' product delivery plans.
On the contrary, I think the biggest challenge we are facing is that most of our customers have a hard time getting enough of the other parts and components for the products they are delivering. That definitely creates some problems for us. For example, it makes our revenue forecasting more difficult. It might also create some pockets of inventory in our overall sales channel. We are definitely cautious, and we're going to continue to watch out for these two problems moving forward.
To close, I want to give a quick summary. I just talked about the momentum we have with our current CV products, and we are introducing a brand-new product line that will expand our SoC and software offerings to our customers. I think this is a good time for us to think about what we need to do to reach $1 billion in revenue for the company. The first thing is clear: we need to execute on the strategy I just talked about, both the SoC strategy and the software strategy.
More importantly, we need to continue to drive our innovation. In fact, I'm proud to say that over the past 17 years, we started out competing with TI and HiSilicon, and now we are competing with NVIDIA, Intel, and Qualcomm, all while keeping our gross margin around 60%. The reason we can do that is our innovative products. Les will give you many examples, but I want to highlight a few of them. We need to continue to extend our video processing leadership; we're going to talk about how to use neural networks to improve image and signal processing. We need to unlock the synergy of computer vision plus radar technology. And we need to extend our power-efficiency leadership with new silicon and algorithm technology.
The third thing we need to do is continue to scale the organization, both in engineering and in business development, through organic growth as well as acquisitions. The acquisition of Oculii is a great example of acquiring not only technology but also talent at the same time, and that is going to be one of the business models we continue to focus on. Lastly, reaching $1 billion in revenue cannot be the only financial goal we have for this company. When we reach that goal, we need to continue to drive positive operating margin, for our investors and for our company, and I think that's also a very important goal moving forward. With that, I would like to introduce Les to talk about our new technologies. Thank you.
All right. Thank you, Fermi. I'm very happy to be here to introduce you to our CV3 family, a project that is near and dear to me because we've been looking forward to this day since our VisLab acquisition in 2015. At that time, the VisLab self-driving car had a trunk full of server PCs in it, which heated up the interior and caused the rear of the car to sag, obviously not a production-level solution. We've been working towards a chip family that can bring true autonomous driving to a mass-production state in the most efficient and cost-effective way.
This new family is our most ambitious project to date, and it extends the performance we can achieve on a single chip up to 42 times beyond our previous high-end chip, the CV2 family. That's a huge step up in performance, and it will allow us to address everything from ADAS to L2+ to L4 autonomous vehicles with a single unified SDK. That's another very important thing we've seen from our customers: they have multiple different requirements and different price points they need to hit, but they want one common software architecture for all those vehicles. With the CV3 family, they'll be able to do that using a single SDK.
At the same time, CV3 will extend our leadership in power efficiency by offering four times more performance per watt than the CV2 family. Just to recap the evolution of our product line: we started off in human-viewing-centric applications, then introduced the CV2 family, our first production computer vision family. That has been used in a number of automotive applications, including front ADAS and in-cabin monitoring, and as the perception processing engine for L2+ and L4 vehicles. With the CV3 family, we will now extend this to run the full-stack processing all the way up through fusion and planning.
Our philosophical approach to new chip development has really been the same since the beginning of the company, when we started with image processing and compression and iterated on them over many generations. It's based on a core set of algorithms that we have developed, and we use those algorithms to optimize the architecture design to make sure it's efficient for them. We extended this approach into computer vision processing with our CVflow processor, which again has been optimized for running computer vision algorithms. In order to understand what these computer vision algorithms look like, we acquired VisLab in 2015, which had an autonomous driving stack, and we've been working with the VisLab team ever since to extend this approach all the way up to the full stack.
In October of this year, we acquired Oculii, which allows us to augment our vision-based processing with radar-based processing, a very unique, high-definition version of radar processing, which we believe, when combined with vision processing, is the foundation for the robust sensor suite you need for autonomous driving. Let's look at where CV2 stands relative to our competitors, because that's something we can benchmark today. Two years ago at CES, we showed a demo of CV2 versus the N company's GPU, a 30 ETOPs GPU that was their processor designed for automotive applications. We ran the exact same network on both chips, MobileNet SSD, same resolution, everything. When we benchmarked it, we found that our CV2 processor was actually slightly faster than the GPU.
This shows that even though we rate CV2 at 12 ETOPs, it performs at the level of a 30 ETOPs GPU. At the same time, its power consumption was one-fifth of the GPU chip's. At CES this year, we're showing a CV5 demo where we again compare against the same GPU chip. With the CV5 processor running the latest YOLOv5 network, we're seeing a 4x speedup on CV5 relative to the GPU. Now we're four times faster than the GPU and about 15 times more power efficient. Even with CV5, we've continued to widen the gap with GPU processing. How are we able to get this level of efficiency?
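As a back-of-the-envelope check, the ratios quoted above imply the relative power draw shown below; this sketch only rearranges the numbers from the talk and assumes nothing about absolute wattages:

```python
# Ratios quoted in the talk; absolute watt figures are not given.
cv2_rated_etops, gpu_rated_etops = 12, 30
# CV2 matched the GPU's MobileNet-SSD throughput at 1/5 the power, so its
# performance-per-watt advantage was at least 5x, and its utilization per
# rated ETOP was at least 30/12 = 2.5x better.
print(gpu_rated_etops / cv2_rated_etops)   # 2.5

cv5_speedup = 4.0          # CV5 vs. GPU on YOLOv5
cv5_perf_per_watt = 15.0   # CV5 vs. GPU efficiency ratio
# Implied relative power: (perf ratio) / (perf-per-watt ratio)
print(cv5_speedup / cv5_perf_per_watt)     # ~0.27x the GPU's power
```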
Well, if there's anything I've learned in over 40 years of working on computer architecture, it's that it's very easy to throw down a lot of multipliers and adders on a chip and claim a huge number of ETOPs, you know, 1,000 teraops. What's hard is making the architecture efficient. Can I really utilize all those multipliers most of the time? Or are they mostly sitting idle, not doing anything? The key to a high-efficiency architecture is eliminating bottlenecks in the processing, and those bottlenecks really come about from the requirement of fetching all the data you need to keep all those operations busy. We have spent a lot of time optimizing our CVflow architecture.
I won't go into all the details on the slide, but basically, those architecture improvements allow us to flow data through the data paths, and the data paths themselves are designed to require less data per operation than the traditional general-purpose processing you see in a GPU or CPU. As a result, we operate at a much higher efficiency. The trade-off we've made is to optimize for AI and sensor processing, and not for things like cryptocurrency or gaming. Now with the CV3 family, we built on all the experience we had with the first two generations of our CV processing.
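The data-fetch bottleneck can be made concrete with one division. In the sketch below, both numbers are illustrative assumptions rather than CV3 specifications, but they show why on-chip data reuse, not the multiplier count, decides sustained throughput:

```python
# Why idle multipliers are the norm without data reuse: at a 500 TOPS-class
# peak, external memory can feed only a tiny fraction of the operands.
peak_ops = 500e12   # 8-bit ops/s, the CV3-High headline rating (for scale)
dram_bw = 200e9     # assumed DRAM bandwidth in bytes/s (illustrative)

byte_budget = dram_bw / peak_ops          # bytes available per operation
print(f"{byte_budget:.4f} bytes/op")      # 0.0004
print(f"each fetched byte must feed ~{1 / byte_budget:,.0f} operations")
```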
By now, we have looked at hundreds of networks, coming from the open-source community, from our own internal network development, and from customer algorithms. We went through all these networks and looked for the bottlenecks that still remained in the CV2 generation of processing. By fixing all those bottlenecks and incorporating some new developments, we were able to get a three-to-four-times improvement in power, area, and DRAM bandwidth utilization versus CV2. This is the foundation of our whole next-generation family of products, which will continue to lead in terms of efficiency.
At the same time, we wanted to boost our CPU performance, so we went to the latest Arm Cortex-A78AE, Arm's highest-performance automotive core, so that we can run all the rest of the high-level functions the full AD stack requires. This family will have a single SDK that is upwards compatible from CV2, which means software developed on the CV2 family can be easily ported to the CV3 family. It will also be used for our security and robotics markets in the future. Why do you need such a powerful processor to run the full stack? Basically, it's because an autonomous driving car requires a lot of sensors. For example, on our EVA car, we have a total of 10 cameras and 5 radars, and several of those cameras operate at ultra-high-definition resolution.
We currently have 16 CV2 chips inside that car to do just the perception processing. You can imagine that when you combine perception, fusion, and planning together, you need a large performance upgrade over where CV2 was. That's what we're delivering with the CV3 family. The first chip that we've developed, and that we'll be introducing, is our CV3-High chip. This is the flagship of the family: the highest-performance chip, delivering 500 ETOPs for eight-bit, and it can run twice as fast for four-bit network layers, so up to 1,000 ETOPs. It's basically 42 times faster than our CV2 processor.
With the Arm A78 cores, and many more cores on the chip, it's going to be 30 times faster in Arm performance versus CV2. Even with that large performance increase, the chip still draws only 50 watts, which is where you see the roughly 4x improvement in power efficiency relative to CV2. The CV3-High chip can support up to 20 cameras. The other key capability it brings to the table is true lidar-class resolution, with the latest Oculii algorithms running on CVflow. Since the acquisition, we have been working very closely with the Oculii team on evaluating their algorithms, and even with this first chip, we will be able to take full advantage of their most advanced algorithms with 360-degree radar perception.
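A quick sanity check of how those headline numbers hang together; all figures below are the ones quoted above, and nothing else is assumed:

```python
cv3_etops_int8 = 500       # CV3-High 8-bit rating
cv2_etops = 12             # CV2 rating quoted earlier
print(cv3_etops_int8 / cv2_etops)    # ~41.7, the "42 times faster" claim

cv3_power_w = 50
print(cv3_etops_int8 * 2)            # 1,000 ETOPs peak for 4-bit layers
print(cv3_etops_int8 / cv3_power_w)  # 10 ETOPs per watt at 8-bit
```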
Going forward from here, we're going to further enhance radar performance and efficiency with specific hardware optimizations. This is the block diagram of the whole chip. You can see it's a full SoC with all the peripherals required both to interface with all the sensors in the car and to connect into the control subsystem for controlling the autonomous driving functions. It also has high-bandwidth IO that allows you to connect high-performance storage or other processor systems, like infotainment systems. It has 16 Arm Cortex cores, which support ASIL B applications as well as safety-island lockstep applications, and it has a GPU for running 3D visualization for surround view.
One of the functions we wanted to enhance for full-stack domain processing is the security capability of the chip. To do that, we've developed a new dedicated hardware security module, which performs a number of functions, including crypto acceleration and secure storage management. What I'm showing on this diagram is a new capability to support multiple domains running on the same chip. This allows you to isolate the different safety levels from each other, so you can guarantee that nothing in the ASIL B domain can corrupt the safety domain, and nothing in the QM domain can corrupt the ASIL domains. Another very important thing it allows us to do is protect software coming from third parties, or from Ambarella, from being spied on by customers.
Basically, it allows secure software deployment in a way that nothing that you can do on the Arm processor can break that security. Summarizing CV3, it's a full domain controller family that scales up from ADAS to L2+ to L4. Combined with the CV2 family, it allows us to offer a range of 1 ETOPs to 500 ETOPs, a 500-to-1 range in performance, which we think is the broadest in the industry. It offers industry-leading power efficiency. Given this new capability, what are some of the things that we can do on the software side to take advantage of that? As Fermi mentioned, one of the areas we've been working on is how to leverage CVflow to augment the image pipeline that we've been developing over many generations.
What I'm showing here is a side-by-side demo of our traditional image pipeline on the left, and on the right the augmented, AI-based version of that pipeline. You can see there's a huge difference in the level of detail and contrast between the two. Even though we spent many generations developing our classical pipeline, using AI is fundamentally a much more powerful way to augment image processing. With CV3's AI processing capabilities, we'll be able to apply this to more applications, at higher resolutions and higher frame rates than we can on the CV2 family. Another way you can use AI processing in image enhancement is for high-dynamic-range images.
Traditionally, when you have this kind of backlit indoor-outdoor scene, it's very difficult to get good detail in both the highlight areas and the shadow areas. On the left side, you can see the highlight area is okay, but the shadows are mostly blacked out; it's hard to see anything. On the right side, you have full detail in both highlight and shadow areas. That's much easier to do with AI-based processing. The other major area we've been focusing on in software development is our autonomous driving stack. Alberto will be telling you a lot more about it, but I just want to review what the stack is doing and some of the high-level improvements we're making.
The stack basically has to combine all these different cameras and radars together in a fusion layer and construct a 3D model of the world and all the objects in it. In this video, I'm showing how we approach a roundabout. There's a group of obstacles being tracked, bicycles in this case, and the car is deciding when it can drive into the roundabout. There's still a bicycle that it's keeping track of, monitoring whether it will be able to turn off here. Basically, it's predicting where the bicycle is going to be and deciding that it can safely take the exit and leave the roundabout. That's the kind of processing you have to do to run a full AD stack, and that's what we're planning to do on CV3.
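To give a feel for the prediction step described here, the sketch below uses a minimal constant-velocity model; the production stack uses far richer motion models and a real gap-acceptance policy, so every function name and threshold here is an illustrative assumption:

```python
import numpy as np

def predict_path(position, velocity, horizon_s=3.0, dt=0.1):
    """Roll a tracked object (e.g., the bicycle) forward in time,
    assuming constant velocity. Returns an array of future (x, y)."""
    steps = int(horizon_s / dt)
    t = (np.arange(1, steps + 1) * dt)[:, None]   # (steps, 1) time offsets
    return position + t * velocity                # broadcast to (steps, 2)

# Bicycle 8 m ahead, 2 m left, circling the roundabout at ~4 m/s:
future = predict_path(np.array([8.0, 2.0]), np.array([-3.0, 3.0]))
# Toy gap check: predicted path stays clear of a 5 m radius around us.
safe_to_exit = bool(np.all(np.linalg.norm(future, axis=1) > 5.0))
```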
People sometimes ask, "Why develop your own AD stack?" Well, the original motivation was that we needed to understand these core algorithms so that we could develop the right chip architecture for them. What we've found along the way is that, as we develop these algorithms, we naturally optimize them to run on our chip architecture. Because we know the chip better than any of our customers, we can optimize those algorithms to a level that might not be possible for them. We want to be able to help customers take advantage of some of those software modules, if they wish, to get the highest-performance processing they can out of CV3.
There's another very important thing enabled by the Oculii acquisition, which is deep fusion of vision and HD radar. So far, radar has been a kind of separate subsystem that runs on its own processors out at each radar module. With CV3, we're going to be able to centralize all the radar processing into a single processor, along with all the vision perception processing. Doing that enables you to fuse the radar data and the vision data at a lower level than is possible with separate subsystems.
We believe that will lead to a significantly more robust sensor suite than anything possible with current-generation sensors. Not only do you have lidar-class resolution, but when you combine it with deep fusion, we believe you'll have the most cost-effective and robust sensor suite for all conditions. With that, I would like to introduce Alberto Broggi, the General Manager of our VisLab team in Parma.
Thank you, Les. Good afternoon, everyone. My presentation today will cover two aspects. The first is a description of the announcements and the progress we have made on the autonomous driving stack in the last few years. The second is a description of the demos you will be able to see outside later today. I'll start with the first by showing you what happened two years ago. Two years ago, we were giving demonstrations with our EVA cars, showing autonomous driving and autonomous parking.
You could jump in the car, and the car would drive out of the parking lot, drive around the hotel here in Las Vegas, then get back to the parking lot, find an empty slot, and park itself. That was done two years ago. In order to do that, we were using HD maps: everything was pre-mapped with very high accuracy. You would map all the surroundings, and then you would drive thanks to these high-definition maps. Since I mentioned high-definition maps, I would like to show you the difference between a high-definition map and a standard map. This is a satellite picture showing one of the junctions just around the corner here in Las Vegas.
In orange, we have information coming from a standard navigation map. You can see the connections between the lanes and between the roads, so the topology of the road network. Typically, you also have other information attached to these lines; for example, the number of lanes, or the speed limit connected to that specific segment. This is what you have in your normal navigation software. But if you want to see the difference between a standard map and a high-definition map, this is what our HD map looks like. It's much denser than the previous one. If you really want to appreciate the differences, I can remove the background picture.
You can see that we have a lot of lines, a lot of information in the HD map. If you enlarge a portion of it, you can see the different kinds of information we have. For example, we have lane markings, curbs, and stop lines where you have to stop when somebody's crossing the road. We have pedestrian crossings and traffic lights. All of these features have their precise geolocation, their precise position in the world. This is our HD map, so it's pretty dense. The question is, how do we create this map? Two years ago, our process for creating the HD map required a bit of human intervention: you had to fine-tune the positions of the objects and the lane markings.
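To make the contrast with a standard navigation map concrete, a record in an HD map of the kind described here might look like the hypothetical schema below; the field names and types are illustrative, not VisLab's actual format:

```python
from dataclasses import dataclass
from enum import Enum, auto

class FeatureType(Enum):
    # The feature classes listed for the HD map
    LANE_MARKING = auto()
    CURB = auto()
    STOP_LINE = auto()
    PEDESTRIAN_CROSSING = auto()
    TRAFFIC_LIGHT = auto()

@dataclass
class HDMapFeature:
    """One HD-map record: a feature with precise world geometry.
    A standard navigation map, by contrast, stores only road topology
    plus attributes such as lane count or speed limit per segment."""
    kind: FeatureType
    polyline: list[tuple[float, float, float]]  # (lat, lon, altitude)
    lane_id: int | None = None                  # link into lane topology
```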
Today we have a tool that helps us, and I will show you a preview of what the tool looks like in this video clip. You just drive around in your car and take images from the cameras in the car. For example, we have a stereo camera in the front, and you take these kinds of images from it. You process these images, for example with the stereo engine in the camera itself, and you get this kind of information: a 3D point cloud where every single pixel has color information and a 3D position in the world. You also process these pixels and label them as road, lane marking, curb, traffic sign, and so on.
You do that for the whole sequence you acquired, and for multiple sequences, and if you put everything together, you can easily map a whole town. This is Parma, for example: we've been driving around and mapping the roads of the whole city in this way. Of course, if you drive on an HD map, your driving will be very efficient, because you already know how many lanes you have, where to expect the traffic lights, where to expect the lane markings or a junction. You know everything in advance. Driving on an HD map is very efficient, but you cannot assume you will have a map of the whole world.
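A minimal sketch of the per-pixel output and the accumulation step just described, under the assumption that each stereo pixel carries a color, a 3D world position, and a semantic label; the field layout and the filtering rule are illustrative:

```python
from dataclasses import dataclass

@dataclass
class LabeledPoint:
    """One stereo pixel after depth and semantic processing."""
    xyz: tuple[float, float, float]  # 3D position in world coordinates
    rgb: tuple[int, int, int]        # color from the camera image
    label: str                       # "road", "lane_marking", "curb", ...

def build_city_map(drive_sequences):
    """Merge labeled point clouds from many drives into one map,
    keeping only static, map-worthy classes (an assumed rule)."""
    keep = {"road", "lane_marking", "curb", "traffic_sign"}
    return [p for cloud in drive_sequences for p in cloud if p.label in keep]
```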
It will happen that you have to drive on roads that do not have any HD map. This is what we have introduced since then: a way to drive where you have no HD map, only standard maps. To do that, we simply perceive the environment and drive using only what the car can see, with real-time perception. This video clip shows that. In this case, we were driving in an area without HD maps, using only the standard maps, acquiring information and driving with what we see. As I mentioned, driving with an HD map is much more efficient, but you can also drive without one.
Plus, the system understands by itself when it needs to switch between HD-map driving and standard driving. If there is a mismatch, for example between what you see now and what you recorded on a road two months or a year before, the system will understand that and switch back to standard driving, without using the old information from the HD map. I've been talking about mapping, but mapping is just one part of our autonomous driving stack. We have perception, as already mentioned; data fusion, where we also fuse data coming from the Oculii radars; and object tracking.
We do prediction of the motion of other objects, localization, planning, trajectory planning, and maneuver selection. These are all the pieces in the autonomous driving stack. Just to show you something you will be able to see in the demo outside, these clips show some of the capabilities of the system: following a slower vehicle, merging into a road with heavier traffic, and handling pedestrians, traffic lights, and unprotected left turns. These are all capabilities you will see in the car. Let me summarize the improvements we have made since the first demonstration two years ago.
First of all, we added new behaviors in HD-map driving. Second, we extended to driving without HD maps, so we can drive in areas that have no HD map. Of course, we started with limited-complexity roads and parking lots; parking lots are pretty complicated, as you will see during the demo. We also added the possibility of exploring the parking lot: when you enter a parking lot, the car will start wandering around searching for an empty slot, and once it finds one, it will make the maneuver to get into the slot and stop. The same on the way back.
When you exit a parking slot, it will try to find the exit of the parking lot and then get back onto the road. That was it for the first part. The second part is the description of the demos you will be able to see outside. We have two types of cars. The first is the EVA car, the same car we showed two years ago, but with enhanced functionality; we will be driving around in autonomous mode with that car.
Plus, we have another car, a little more advanced, let's say EVA 2.0 if you want, which will be driven manually so that you can experience the fusion we do between stereo vision and radar, because that car has the Oculii radar installed in it. For the first demo, as I mentioned, we will be driving both with and without HD maps. We picked four hot points around Las Vegas, four parking lots. You will be able to select the route: I want to go to the museum, I want to go to the university, or whatever. The system will create the route and start.
You will see that everything has been mapped here, so all the roads are mapped, but not the parking lots. When you enter the parking lots, the system will recognize that there is no HD map and fall back to standard driving. Each parking lot has some peculiarities, like a simulated passenger drop-off or pickup, which you will see in each specific lot. This is the second car we have. It integrates the Oculii radar in the front, so you'll be able to see, as I mentioned, the results of the fusion between stereo vision and the Oculii radar. If you have a look at the trunk of the car, you will see that it has only one box for the processing.
We designed this car to be CV3-ready: once CV3 is available, we will just swap the boards, and that will be it. As we already mentioned, it's quite different from what we had a few years ago, when our trunk was full of PCs. We had 16 servers in there, and the power consumption was a little more than 3 kW just for the processing in the trunk. This is what you will see in the demo area outside. With that, I thank you for your attention, and we'll turn to Steven. Steven is our VP of Radar Technology and the former CEO of Oculii. Steven, the floor is yours.
All right. Well, it's my pleasure to be here today. As you've heard from all the fantastic presentations, there are a lot of synergies that we believe exist between Oculii and Ambarella, and we're excited to share some of the synergies we've already unlocked with you here today. In many ways, radar is a very natural complement to camera-based processing because of its physical characteristics. Number one, radar operates at a completely different physical wavelength than optical light. Environments and situations that might blind or degrade the performance of camera perception are actually scenarios where radar can enhance perception, such as low lighting, weather conditions like rain, fog, and snow, or even obstructions on the lens.
Radar can see through all these different environments because of its longer wavelength, so it naturally complements optical camera processing. The challenge is that traditional radars are very poor when it comes to spatial resolution. Although radar can see through the weather and through obstructions, what a traditional automotive radar sees is not very clear. This is what our fundamental AI software addresses: it preserves all of the advantages of radar but makes the resolution significantly higher, so that radar can be a peer to camera-based processing and allow for sensor fusion of two very high-resolution, efficient sensing modalities. Before I talk about how the Oculii software works, I think it's important to first frame why traditional radars suffer from poor resolution, particularly in the automotive domain.
Traditional radars are what we would call unintelligent sensors: they typically repeat the same signal over and over again, constantly and repetitively. As a result, a traditional radar requires many antennas, hundreds if not thousands, to achieve high resolution, because each antenna measures the environment, and each measurement provides information to create higher resolution. Unfortunately, more antennas also increase the cost, size, and power exponentially, while the performance only increases sublinearly. This creates an unattractive trade-off where the sensor's cost, size, and power increase but the performance does not keep pace. If you look at the upper end of radar performance, for example in the military, those sensors are extremely high performance, with very high resolution and very long range.
But those sensors cost tens, if not hundreds, of millions of dollars and are more expensive than the platforms themselves. That is obviously not a scalable way to deliver high-performance radar. This is exactly what Oculii has addressed, with a fundamentally different approach that uses software and intelligence rather than more antennas and more hardware. Oculii's software breaks the assumption that the radar should be dumb and just constantly repeat the same signal. We use an adaptive waveform that learns from the environment and embeds different information at different times. The information we adaptively embed and learn allows us to design the antenna array and the radar data cube in a very different way, leveraging sparsity and then computation to effectively fill in the missing information that the sparse array does not provide.
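Oculii's adaptive-waveform algorithms are proprietary, so the sketch below only frames the problem they attack: the textbook relationship between the number of (virtual) antenna channels in a uniform array and its angular resolution, with half-wavelength spacing assumed:

```python
import math

def beamwidth_deg(n_channels, spacing_wavelengths=0.5):
    """Approximate angular resolution of a uniform linear array:
    ~ lambda / aperture (radians). More channels -> finer resolution,
    but cost, size, and power grow much faster than this improves."""
    aperture = n_channels * spacing_wavelengths  # aperture in wavelengths
    return math.degrees(1.0 / aperture)

for n in (12, 192, 1200):  # typical MIMO radar vs. very large arrays
    print(f"{n:>5} channels -> ~{beamwidth_deg(n):5.2f} degrees")
```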
This is, in many ways, a software-based solution to a hardware-based problem. It complements Ambarella's strengths very well, because of the computing capabilities they've already demonstrated on CV2, and that we'll be able to demonstrate even more effectively on CV3 with multiple radars and multiple sensors. Just to give you a sense of what we've already achieved on existing, commodity, market-proven radars: the x-axis here shows the number of antennas in a radar system. In a traditional radar, as I mentioned, even as you increase the number of antennas, the resolution does not increase very much.
As a result, even going from 12 antennas up to 200 antennas, you're still not in the class of resolution you really need in order for radar to be a peer to an optical system. Oculii software, already running on existing embedded silicon, provides up to a 100x resolution increase across the three platforms we've built and showcased. The Falcon sensor is a corner radar designed to provide up to 20x performance, and the Raptor, on the far right-hand side, is a sensor with up to a 100x performance improvement that is already getting close to lidar-like performance, but at several orders of magnitude lower cost and much lower power: a market-proven, solid-state radar sensor that is very easily integrated into mass-market automotive systems.
With the new CV3 architecture, one of the things that fundamentally changes is not just how much processing capability we have for radar, but that all of the radar data is centralized and processed on the same SoC and the same fabric where the camera data is processed. This will enable OEMs and Tier 1s to build a completely different type of intelligent radar system, one that leverages this technology along with the capabilities of CV3 to deliver even better performance and can dynamically shift processing to where it actually needs to be as the vehicle performs different types of maneuvers. For example, if the vehicle is driving at high speed on the highway, you want to be able to see as far as you can, 400-500 meters down the road, with very high precision.
When you're taking an unprotected left turn, on the other hand, those maneuvers require you to look in different directions, with higher resolution and different ranges. Our software, combined with the capabilities of the central processor, will allow you to shift processing to increase resolution where you need it. Most importantly, though, this CV3 processor, as you've already heard from Les and Fermi, is an order of magnitude better than the embedded processors we've been running our existing software on. We expect this performance to unlock even higher resolution and even longer range for radar perception.
Most importantly, it will do all of this while preserving the cost, size, power, and solid-state characteristics that have already made radar such a widely deployed sensor in mass-market vehicles. Just to give an example of what the radar can showcase today: this is the radar's point cloud next to that of a 32-channel lidar, which is probably two, if not three, orders of magnitude more expensive. On the left-hand side, you see that the lidar data, although high resolution between zero and 50 meters, can't really see past 100 meters, even in good conditions like these. The radar, on the other hand, can already see almost 400 meters.
Most importantly, not only can it see, it can also measure the precise speed of every single point, which lets you very efficiently determine whether a target is approaching or leaving and is a threat, or whether it's something stationary that you actually want to use for mapping and localization, to build up the HD maps that Alberto was sharing earlier in the AD stack. Lastly, as I mentioned, this retains the traditional advantages of the radar sensing modality, namely that this capability remains achievable and reliable in all weather conditions. The sensor is fundamentally solid state, low cost, mass manufacturable, and ready for mass-market automotive deployment today. Here at CES, we're going to show three demos that I think will showcase the capabilities of this radar in a variety of use cases and scenarios.
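The per-point speed measurement is what makes the mover-versus-stationary split cheap. A minimal sketch of that test follows; the sign convention, tolerance, and function name are illustrative assumptions, not Oculii's implementation:

```python
import math

def is_stationary(doppler_mps, azimuth_deg, ego_speed_mps, tol_mps=0.5):
    """A stationary object's radial (Doppler) speed is just the projection
    of the car's own motion: v_r = -v_ego * cos(azimuth). Points matching
    this prediction can feed mapping/localization; outliers are movers."""
    predicted = -ego_speed_mps * math.cos(math.radians(azimuth_deg))
    return abs(doppler_mps - predicted) < tol_mps

# Driving at 20 m/s, a point straight ahead closing at 20 m/s is static:
print(is_stationary(-20.0, 0.0, 20.0))  # True  -> use for localization
print(is_stationary(-27.0, 0.0, 20.0))  # False -> oncoming vehicle, track it
```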
As Alberto mentioned, we're going to showcase the radar on the EVA 2.0 vehicle, so you'll be able to see what this looks like. This is another video showing what that point cloud looks like in full 3D. The previous video was just top-down, but this one shows all four dimensions: XYZ plus Doppler. What you can see here is that the radar achieves extremely high resolution, a lot of detail, and a lot of sensitivity, and all of this will improve significantly as we move this processing onto the CV3 processor and unlock the capabilities that its processing, along with the fusion, enables.
We're very excited to showcase this initial implementation on the EVA 2.0, but it's going to improve even more as we integrate further into CV3. We're also going to showcase the radar's use in a lot of other verticals and applications. As Fermi mentioned, one of the big markets and use cases for Ambarella silicon today is security. Security use cases face many of the same problems as automotive environments with respect to low lighting, weather, and rain. Again, the radar provides a natural complement to camera-based systems, very efficiently detecting motion and seeing through low lighting conditions and obstructions.
Most importantly, all of this fusion can be done inside the CV2 or CV3 processors, enabling a very cost-efficient way to add these capabilities while retaining the core advantages that radar delivers. Lastly, we'll also be showcasing the beginnings of using these radars on autonomous systems outside of automotive. As you can imagine, self-driving cars get a lot of the attention, but they're not the only autonomous systems benefiting from the advances in perception over the last several years. Many autonomous systems such as these, autonomous mobile robots for indoor and outdoor use, actually have an even smaller cost, size, and power envelope that the entire perception stack has to fit in, in order to meet the requirements of end users and service providers.
In many ways, we think camera-plus-radar processing is a very natural fit for a lot of these autonomous mobile robotic applications, whether it's outdoor delivery or indoor logistics. These systems have to perform with precision and safety, especially when interacting with humans. The technology we have from Oculii is really going to complement automotive, security, and robotics use cases across the spectrum.
We'll have a third demo showcasing robots: the one in the top left-hand corner is our own robot, and the others are some of our partners' robots that use this sensor. It showcases our sensor modules being used in robotic applications, so you can get a feel for the applications outside of automotive as well. Great. With that, I'll hand it over to Chris, who is going to talk a lot more about some of these use cases and markets; it will be very exciting to hear from him as well.
Thank you, Steven. Before I start talking, we have a little video to show you of CV3.
To create a safe and robust Level 2+ to Level 4 autonomous driving system, you need a purpose-built and power-efficient central domain controller that perceives your vehicle's surroundings by fusing data from multiple sensors and then plans a seamless driving path. That's precisely what you get with Ambarella's new CV3, a central domain controller that perceives the vehicle's surroundings using a single CV3 SoC to simultaneously process a combination of sensors: long-range cameras to sense objects farther from the vehicle; wide-angle cameras to sense objects and pedestrians near the vehicle; and radar, sonar, lidar, and other redundant sensors, while enabling future optimization of the sensor mix. A single CV3 can intelligently fuse the data from all of those sensors simultaneously while providing the highest levels of AI processing.
Once the vehicle's surroundings are perceived, the CV3 helps it estimate all likely paths for any nearby agents, such as pedestrians and other vehicles. Based on these estimates and sensor data about the AV's surroundings, the CV3 then plans the driving path. The CV3 also processes in-cabin RGBIR cameras that monitor drivers for distraction and fatigue in semi-autonomous vehicles, as well as during automated driving, to confirm they are able to take control if needed. Ambarella's scalable CV3 portfolio has multiple options to meet the product strategies of OEMs and Tier 1s alike, from our entry-level CV3s for multi-camera ADAS to our premium CV3 SoCs for Level 4 fully automated driving. The CV3 family leverages Ambarella's extensive experience in image processing, AI computing, computer vision processing, video capture, adaptive AI radar algorithms, functional safety, and cybersecurity, with headroom for future over-the-air software updates.
Together, these SoCs offer high performance, low power, and low latency for ADAS and automated driving applications. The CV3 SoC family has up to 500 ETOPs of CVflow AI engine processing for neural networks, as well as separate general vector processors for other algorithm types, including radar data processing and classic computer vision algorithms. Advanced image signal processing addresses challenging lighting, weather, and driving conditions. The on-chip Arm CPUs are divided into multiple processing clusters for ASIL B applications, plus a safety island for ASIL D applications.
The CV3 family also supports numerous camera inputs, as well as radar, sonar, and lidar connected via CAN, Ethernet, and MIPI CSI, along with H.265 encoding of all cameras for recording data. The integrated GPU enables surround view rendering, while the dense stereo and optical flow engines provide depth and motion perception. Ambarella's unified software ecosystem provides software drivers, tools, and an SDK for easy integration, neural network optimization, and application development. With the new CV3 central domain controller family, Ambarella is enabling you to create power-efficient, robust, and safe autonomous vehicles up to level four.
Okay, that concludes my presentation. Actually, you don't get away that easy, I'm afraid. I'm going to talk to you about the target markets and applications on the road to achieving $1 billion in revenue. I'm actually gonna start by looking at our IoT business; all of the presentations up to now have focused on automotive, particularly with the CV3 announcement and Oculii. Let's first look at the SAM for our IoT business. As Fermi already presented, our total SAM for fiscal year 2022 was $2.3 billion and is forecast to grow to $6.9 billion in fiscal year 2028. Now I'm gonna discuss the revenue SAM for IoT excluding automotive.
The bottom two bar graphs here show the SAM for enterprise and smart home security. Today, these markets represent about 50% of our total revenue, and together they grow to about $1.5 billion by FY 2028. Above that, you can see markets that represent new opportunities for Ambarella: ID authentication or access control, and robotics. The other category is mainly consumer products, which we still serve in terms of drones and 360-degree camera products such as those from DJI and Insta360, and sensing cameras of all kinds that I'll review shortly. Lastly, we have radar. Although we based the Oculii acquisition primarily on the automotive opportunity, we are excited about the radar opportunity in the IoT markets, and we will have to revisit these numbers, which are admittedly conservative at this point.
Our product portfolio of AI vision SoCs is the widest in the industry. We just introduced you to CV3, based on our new third-generation CVflow architecture, but we also have a full lineup of SoCs from the low-cost CV28M, through ASIL automotive chips, all the way up to our 8K CV5, which we introduced last year. Of these seven families, five are in production today, with CV5 expected to be in mass production by the end of this year. CVflow offers the best performance per watt and is combined with our strengths in image processing and quality. Our customers like the ability to have a common software SDK and development tools, so that they can develop multiple camera families with significant reuse between the different family members.
All of the family members share advanced image signal processing pipelines, or ISPs, that provide outstanding imaging under challenging lighting conditions, which is a critical differentiator in security and automotive applications. We will be improving upon these with our AI-based ISP developments in the future, as Les demonstrated earlier. We have introduced sensor fusion, for example with vision and radar on CV3.
It's important to note that we have also been doing sensor fusion in non-automotive markets for a while now, and I'll show you a few examples. In this slide, we have many sensor types listed on the right-hand side, and images of the fused results from those sources on the left. Actually, audio could also be added. In the examples shown here, we are combining vision data with radar in robotics applications, time of flight in robotics and sensing applications, thermal data in security cameras, and structured light in access control. Also, our AI performance combined with IR image processing makes us unique in many of these applications. Early last year, we introduced our CV5 family. This is state-of-the-art for video security and our first chip in 5 nm process technology. With CV5, customers no longer have to trade off between AI performance and video resolution.
The ability to process four 4K images on multi-imager cameras is unique and addresses the fastest-growing market segment. We also introduced a low-cost 4KP60 derivative, the CV52, which offers state-of-the-art imaging and AI in a single- or dual-sensor camera, performing AI and 4KP60 video in under three watts of power. We introduced an 8K version for consumer applications that can encode 8K video at less than two watts, making it suitable for small form factor action cameras. CV5 can also encode 14 separate streams for automotive recording applications, and we will demonstrate that to you at CES this week. We have multiple wins at key customers, and mass production is expected to begin by the end of this year. Let's look at the IoT security camera market. It has largely transitioned from analog, or 1G, to network cameras, or 2G.
CV represents two types of opportunity in the IoT camera market: one, new product cycles in existing markets, and two, acceleration of growth in the installed base, in particular for sensing applications. Now the transition is towards AI, or 3G here, which grows the value in the mid- to high-end segments where Ambarella is focused. In the bottom bar graph, you can see the split between the enterprise or public market segment and smart home security, with about 75% being on the enterprise side. The transformation of the market towards AI is accelerating, with existing cameras being replaced by new ones with intelligent AI features. Cameras now combine both human viewing and AI-based software, such as person detection, tracking, license plate recognition, et cetera. AI adoption in the smart home security market is just beginning, and is perhaps 18 months behind that of enterprise.
We are seeing customers demanding smart monitoring features, and companies like Ring shipping cameras with AI hardware acceleration. Features include person detection and notification, vehicle detection, pet detection, et cetera. The enterprise camera segment is currently serving smart cities and infrastructure applications, and the trend is towards making cameras intelligent. Some of the applications shown here include traffic management, accident detection, license plate recognition, finding missing persons, and reporting unusual behavior. The replacement cycle is accelerating as people demand these features. All of our customer discussions today and new designs are for AI cameras. The ASP for these AI SoCs is approximately twice that of existing non-AI SoCs. Additionally, there are new incremental opportunities to grow the installed base with sensing cameras, for example, applications in smart retail and occupancy monitoring. Here, the cameras are purely for data extraction for business analytics.
There may actually be no video streaming or storage at all. Applications here include product placement, warehouse product tracking, business intelligence, people counting, social distancing, and property management. The ability to do AI processing in the camera also provides the ability to preserve privacy by avoiding sending video to the cloud. This slide shows the logos of a few of the companies we are doing business with, both in the enterprise IoT arena and the IoT smart home market. We are doing business with basically every major enterprise camera supplier. Customers such as Motorola and Bosch have based their products on our solutions for over 12 years. For some of the new opportunities I discussed on the prior slide, large incumbent security customers are aggressively moving into these markets. For example, Motorola acquired Openpath, an access control customer that is using our CVflow SoCs in its applications.
On the smart home side, we do business with most of the major consumer brands, including Ring, Vivint, Alarm.com, and Comcast. Now let's look at the smart home security segment. More homes are installing security cameras with seamless integration with their other security systems. More cameras are being installed per home, driven by new form factors such as doorbells and outdoor cameras. The industry has been transitioning from pure video streaming and storage to analytics, and people are demanding features that allow home monitoring without false alarms and allow easier monitoring on their smartphones. People and package detection and Vivint's new swimming pool alert are examples. The trend now is for analytics to move to AI-based solutions and to use AI for preventive security.
For example, the Ring Doorbell Wired Pro and Floodlight Cam Wired Pro cameras use our CV25 for person and package detection today. Additionally, there are new form factors like Ring's Always Home Cam, where the AI processor will be required to do SLAM, object avoidance, and path planning in addition to traditional camera functions. If we consider how the growth of the IoT camera installed base can potentially accelerate, you can see on this exhibit that the sensing camera opportunity can be more than twice that of security alone. There are now multiple new IoT applications being driven by AI. These include sensing cameras, where cameras are used for data collection and decision-making and may not include human viewing at all. On the right-hand side, you can see a few of these new applications, all of which Ambarella is looking to address.
These include mobile payment, access control, robotics, new retail, healthcare, machine vision, smart occupancy, logistics, and smart security. In the smart building category, we have been working with our partners ON Semiconductor and Lumentum on a reference design for 3D access control, recognizing faces and detecting spoofing. This leverages Lumentum's structured light technology and sensor fusion running on CVflow. We also have a battery-powered reference design for smart locks based on the same partnership, and we'll be demonstrating that to you this week at CES. In the future, AI will enable building automation, including anti-tailgating and occupancy sensing. Occupancy sensing can be used for HVAC control and efficient meeting room allocation. Other new markets for us include robotics and machine vision. In home robotics, cameras are increasingly being used in robotic vacuums to enable them to become smarter.
Our CVflow SoCs are ideal for these applications because they can fuse data from multiple sensors. In industrial robotics, cameras are again being deployed to make autonomous robots smarter. In these applications, we can also leverage our CVflow SoCs and the technical developments from our automotive business, including radar. In machine vision and Industry 4.0 applications, cameras are increasingly performing AI at the edge, in the camera, making systems faster, more reliable, and more easily scalable. Here we can leverage our IP security camera experience and technical strengths in image and AI processing. During the show, we'll be running our bean counting demo here at CES, and I won't make any jokes about that being a favorite of our accounting department. Based on our success with our CVflow architecture, we've been building an extensive ecosystem of partners. These include the hardware platform partners shown here at the top.
We have been partnering with AWS to host our tools in their cloud and enable easy network development and deployment. We have a wide range of third-party software vendors covering various AI applications, including security and robotics. Many of our customers, especially the big security camera makers, also work on their own software, all supported by CVflow's open architectural model. I'll now talk about the automotive market. Let's first look at the automotive revenue SAM by application. The total SAM here is growing from just under $2.5 billion in fiscal year 2022 to close to $7 billion in fiscal year 2028. The bottom two bar graph segments are for OEM car recorders and dash cameras, both of which are markets that Ambarella has served for many years.
Above that are new electronic mirror and in-cabin monitoring segments, which include both driver monitoring and combination products such as driver monitoring plus interior monitoring cameras. Our automotive SAM looking forward is now dominated by sensing applications rather than viewing ones. The largest portions of the sensing SAM are for front ADAS, level 2+, and level 4. These are markets that Ambarella is serving with our functional safety CVflow SoCs, including our existing CV2 family and the new CV3. Lastly, we have radar, a market which Ambarella is now addressing with the acquisition of Oculii, providing both software and module solutions. For the first time, we are now adding the Oculii SAM to the Ambarella SAM, and by fiscal year 2028, that's about $600 million, mostly software with some module business.
This slide shows the trend in vehicle sensor suites, and specifically the increase in the number of cameras and radar in vehicles with high levels of autonomy, as highlighted by the industry example shown. In the past, the increase in number of cameras per vehicle was the opportunity for Ambarella. Now with Oculii and with CV3, we are not only providing camera perception, but radar perception and processing for fusion and planning. The average number of cameras per car across all vehicle types is increasing from approximately 1.5 today to about three by FY 2028. In the high-end passenger cars, it will be in the range of eight to 12 per vehicle, and potentially more than that in level four robotaxi applications. These numbers do not include cameras for electronic mirrors or driver monitoring or backup cameras.
This slide shows a quick snapshot of some of our automotive camera logos. Our automotive business began with dash cameras, which became an OEM car recorder option. Since then, we have expanded into forward-facing ADAS designs, starting with Chinese commercial vehicles and, more recently, fleet manufacturers. We have also won significant ADAS designs with new electric car makers and autonomous vehicle manufacturers. We have entered the electronic mirror market with partners such as Gentex, the world's largest mirror maker, and are shipping in-cabin and driver monitoring applications, including our wins at Great Wall Motor and Dongfeng in China. I'd like to highlight a few of the recent design wins showing our progress towards providing the central processing solution in these vehicles. In our recent earnings call at the beginning of December, we mentioned that the new Rivian R1T truck was using our solution.
We can now also tell you that the Rivian R1S SUV and the Rivian vans are using our solutions as well. Our CV2 SoCs are being used for the Driver+ autopilot and Gear Guard security features. The vans also use our CV2s for stereo forward-facing cameras. As we announced earlier in the year, we have been chosen by Motional, a joint venture of Hyundai and Aptiv, for its autonomous vehicles, and by Arrival for the vision systems in its buses and vans. Ambarella's open platform approach allows our OEM and Tier 1 customers to differentiate. This contrasts with Mobileye's fixed vision IP bundle, which provides limited room for customers to add their own software. Our front ADAS approach enables our customers, as well as third-party software vendors, to run their software on our open platform based on our CVflow SoCs.
Our SoCs' lower power consumption enables more functionality within the thermal limits of a single-box design. Our high-resolution processing provides longer detection distances and wider fields of view for cross-traffic, and we offer dedicated hardware support for stereo vision and optical flow. Furthermore, our platform enables the combination of different functions, for example, the addition of driver monitoring, recorder, and viewing functions, and the ability to combine best-in-class software from multiple software vendors. It's important to note that vehicles will increasingly be expected to add software features over time, for example, to meet changing Euro NCAP requirements, and our high-performance open platform fully supports these requirements. I'd like to show you an example of what we have planned for a reference design targeting the front-facing ADAS market.
This builds on the synergies between Ambarella and Oculii and was not previously on either company's roadmap. We are now one of only a few companies that have both advanced vision and radar technology under one roof. The REVL 2 is a one-camera, one-radar front ADAS reference design with very low power and embedded vision-radar sensor fusion. It includes AI vision perception, Oculii radar processing, and sensor fusion, all running on a single CV2 functional safety chip. The combination of vision plus radar fusion enables superior range and scenario coverage and higher performance versus a vision-only system. The REVL 2 will be available in the second half of this year. We are continuing to build an extensive ecosystem of software and algorithm partners for each market segment. These include level 2+, forward-facing ADAS, interior perception including driver monitoring, electronic mirrors, and AVM, among others.
In the driver monitoring category, we will be demonstrating our partnership with Seeing Machines later in the week here at CES. Of course, in addition to the third parties shown here, many of our automotive customers are also developing their own software in-house. I'll now talk a little bit about Ambarella's software business model. Ambarella currently captures software value by selling chips with different part numbers and price points, enabling software features via an electronic fuse lock in the chips; a sketch of that model follows below. We've actually done this for many years and across all of our different market segments. We also sell modular SDKs: a base layer of functions, middleware, and reference applications, with different operating systems, including Linux for security cameras and real-time operating systems for automotive applications.
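To make the fuse-lock model concrete, here is a minimal hypothetical sketch. The feature names, bit positions, and register layout are invented for illustration only; actual fuse maps are part-number specific and not public.

```python
# Hypothetical sketch of fuse-locked feature gating: the same silicon
# ships under different part numbers, and firmware enables features
# based on one-time-programmable fuse bits read at boot.
# All names and bit assignments below are invented for illustration.
FEATURE_BITS = {
    "ai_high_perf": 0,   # higher AI throughput tier
    "encode_8k": 1,      # 8K video encode
    "stereo_engine": 2,  # dense stereo processing
}

def feature_enabled(fuse_word: int, feature: str) -> bool:
    """Return True if the fuse bit for the given feature is set."""
    return bool((fuse_word >> FEATURE_BITS[feature]) & 1)

# Example: a mid-tier part number might enable only the AI tier.
mid_tier_fuses = 0b001
print(feature_enabled(mid_tier_fuses, "ai_high_perf"))  # True
print(feature_enabled(mid_tier_fuses, "encode_8k"))     # False
```

The point of the model is that one die can serve several price points, with the software feature set, and hence the ASP, keyed to the part number.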
For certain markets, we sell mature software applications to shorten time to market, for example, in-car recorders and electronic mirrors. We will be demonstrating our autonomous driving stack, as presented by Alberto, in our EVA vehicles here in Las Vegas this week, and hopefully you'll get a chance to ride in them. We will provide complete AD stack implementations as a reference for evaluation, benchmarking, and test, and the stack contains significant IP, which we will be discussing with our customers. Some modules, such as stereo calibration and the AI-based ISP, we expect to license for mass production. With the acquisition of Oculii, we will now be selling radar software. This will be our first move into a pure software play, selling software that runs not just on Ambarella chips, but on other companies' chips as well.
We will collect NREs and per-unit license fees, as well as annual maintenance fees. We will also offer Oculii radar modules for IoT and lower-volume vehicle markets, e.g., L4 and off-highway vehicles. Finally, I'd like to briefly address some of the other new automotive applications that Ambarella is serving. The first image shown here is a KeepTruckin AI dash camera for the fleet market using our CV22 SoC. The product integrates one camera for front ADAS and incident recording, and a second RGB-IR camera for the driver monitoring system with driver recording. We have this being demonstrated at CES this week. In the electronic mirror example, our CVflow SoCs are simultaneously processing rear, left, and right camera images with very low latency and high frame rates while we perform AI-based object recognition and blind spot detection on the left and right cameras.
The driver monitoring solution shown here is from Cipia, which leverages both our AI performance and RGB-IR processing. The truck image is from the Coros van, where we automate logistics using our CV2 stereo processing to automatically read the barcodes and labels on packages. Lastly, we show the new Rivian Gear Guard security system. One image I didn't have time to include is a new product announced today by Nextbase, the largest European dash camera supplier. They announced their Nextbase iQ product, which uses our CVflow SoCs, and it's one of the first new dash cameras to use intelligent AI processing.
All of these new applications leverage Ambarella's significant technical advantages: an optimal system solution built on our low-power CVflow SoCs and comprehensive software SDKs and applications; computer vision with industry-leading performance per watt and mature CV development tools; and best-in-class video processing, including excellent imaging in low-light conditions, highly efficient video encoding, and color RGB-IR processing. In summary, we see significant new opportunities in the automotive market and believe that we have a highly differentiated product portfolio of both SoCs and software to successfully address them. Thank you very much. I'd now like to hand over to Louis Gerhardy.
As a reminder, mine will be the last presentation, and then we'll go into Q&A immediately afterwards. The title of my presentation is Building a Sustainable Model of Return. You know, collectively, what you've heard from all the speakers today is a description of a new foundation for Ambarella. From a financial perspective, it's this new foundation that we expect is going to deliver more sustainable, more predictable financial returns than we've had in the past. Our technology, as you know, has always been differentiated, but what's changing is that there's just a lot more of it. Les described the third generation of CVflow. Alberto described the more complex scenarios for the L4 stack.
Steven gave an overview of Oculii and how the radar technology fits into our roadmap, all the way down to raw data fusion. There's simply more technology, more value per system that we can access in growing markets where, again, as Chris said, we have new product cycles in existing markets, but this CVflow sensing capability enables us to reach into many new markets and enable machines to perceive the world and make intelligent decisions. Again, this is the new foundation that we're talking about. As Fermi said, the gross profit dollars per unit on a like-for-like basis are about 2x. This is going to be really critical for our ability to continue to drive positive operating leverage, and we'll introduce a financial model that assumes that does occur.
Finally, Ambarella is in a position of financial strength. We have ample liquidity and a positive long-term operating cash flow outlook. I'm frequently asked, you know, after a 30- to 40-minute meeting, a portfolio manager will say, "Okay, that sounds great. What's the elevator pitch? What does this all come down to? What evidence can you give us that it's actually happening?" I always refer to the average selling price. As you all know, the semiconductor industry operates in a deflationary environment: every year, you do more, and you get less. Well, I'll point out that Ambarella's ASP is in the mid single digits today, and it's been moving higher over the last two years. We're introducing products now, whether it's the CV2 family or the new CV3 family, with double-digit ASPs, and CV3 reaches into the triple-digit ASP range.
Again, for a company with ASPs in the mid- to mid-high single-digit range today, but rising over the last two years because of CV, we'd suggest this rising ASP is indicative of the change and this new foundation, and we would expect that there's more to come. Turning to quality of revenue: the quality of revenue we put forth has never been higher. If we back up for a minute, our revenue this year will be at an all-time high for the company, surpassing the prior peak in fiscal 2016. That's a nice stat, but it did take five years. What's exciting, and what's really important, is the internals. The internals at Ambarella have changed dramatically. Let me give you a couple of examples.
Our largest customer year to date, for the first nine months of this year, is 7% of revenue. In fiscal 2016, the largest customer was 30% of revenue. From a different perspective, our top five customers for the first three quarters of this year were 23% of revenue. Back in fiscal 2016, they were almost 60% of revenue. There's another way to look at how this foundation is changing, this more sustainable foundation I've been talking about, and that would be the underlying economic driver of the revenue that we're realizing today. There's been a major shift in that as well: back in fiscal 2016, revenue was heavily driven by the short-product-cycle, discretionary consumer electronics market. You know the applications I'm talking about.
Nowadays, 90% of revenue is driven by three things: enterprise CapEx, public infrastructure spending, and consumer durable goods, a product that might sell into a smart home and get replaced every five or six years. This is the foundation that we're talking about; the quality has changed, and these internals are quite a bit different than they were before. Now let's cover the automotive funnel. I'll describe a few new things about this. First of all, why do we even provide an automotive funnel? We do this because if we get a design win in the automotive market tomorrow, it might take two to three years to begin production. Furthermore, we're often not allowed to talk about those wins.
We've created this analytical framework with a very disciplined and transparent model. We tell you exactly how we build it and what the discount factors are. This way, we can communicate to you how things are changing in Ambarella's automotive business. On November 30, when we announced our earnings, we announced the new funnel, which we'll update once a year. This funnel increased from $600 million a year ago to $1.8 billion. It tripled. Just to put that in context, what this means is that in the six-year period from fiscal 2023, which is calendar 2022, to fiscal 2028, calendar 2027, we currently expect our automotive revenue to be $1.8 billion. How did it triple?
Well, number one, a significant expansion in our global reach. We've talked about increasing our sales, marketing, and FAE support over the last 12 to 18 months, with an office in Munich in Europe and more people in Detroit. That's obviously had an impact, and when I get to some of the funnel stats, you'll see that. There's also more content per win; I mentioned double-digit, even reaching into triple-digit, ASPs now with CV3. It's a growing market, and most importantly, we think our share is going to be growing. If we just touch on share for a minute, we think our market share of the auto SAM that Chris presented is a little more than 3% this year.
With this funnel, this $1.8 billion, if you go back and look at Chris's automotive SAM chart and add up the SAM over the same period as the funnel, that's about $25 billion. Divide 1.8 by 25, and you can see we think our market share in that period is going to be about 7%. That's the way you can think about how we see our market share changing; a quick worked version of that math follows below.
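Using only the figures just quoted from the talk:

\[
\text{implied share} \approx \frac{\$1.8\ \text{billion funnel}}{\$25\ \text{billion cumulative SAM (FY2023--FY2028)}} \approx 7.2\%
\]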
What are some other important facts in this new funnel? Well, first of all, about 80% of this $1.8 billion is computer vision products, and about 20% is human viewing products, the market through which Ambarella initially penetrated automotive. Number two, the Americas and Europe now represent more than half of this new $1.8 billion funnel. Number three, in the back half of the funnel, the last three years, L2+ is beginning to make a material contribution. With this new foundation we've been talking about, what's the financial model gonna look like? First of all, in terms of revenue growth, the SAM that Chris presented for IoT and auto combined is growing at a high-teens revenue CAGR, and we expect to grow at those levels or above as we take market share. From a non-GAAP gross margin perspective, our non-GAAP gross margin guidance has been and remains 59%-62%. We wanna be a larger company, and we're gonna price accordingly.
Yes, our gross margins are above that now, but we're comfortable staying in the 59%-62% range. In terms of operating leverage, CV, while it's early in the ramp, is a major driver of the positive operating leverage we're talking about. We're projecting, at $500 million of annual revenue, non-GAAP operating margins of 21%-24%, and at $1 billion, 30% or higher. If you calculate the incremental operating margins implied by those targets, they are well above 30% as we scale to these higher levels of revenue, as the worked example below shows.
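As an illustration of that incremental-margin arithmetic, take the midpoint of the 21%-24% range at $500 million and the 30% floor at $1 billion; these are the model targets from the talk, not guidance for any specific year:

\[
\text{incremental operating margin} = \frac{0.30 \times \$1{,}000\text{M} - 0.225 \times \$500\text{M}}{\$1{,}000\text{M} - \$500\text{M}} = \frac{\$300\text{M} - \$112.5\text{M}}{\$500\text{M}} = 37.5\%
\]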
From a free cash flow perspective, you know, as a fabless company, the most important thing for us is the human capital. You heard that from Fermi, and you heard it from Les. CapEx, the purchase of equipment, is going to remain in that 1%-2% of revenue range, which has very positive implications for our free cash flow outlook. With the changes in our business, we are going to report revenue in a different way. We will not begin to do this until May or June, when we report Q1 of fiscal 2023. For Q4 and for all of fiscal 2022, we're gonna continue to report the three revenue segments on the left side of this page. Given that the other category has declined, along with some of the other changes in our business, we're gonna put security and other together and call it IoT, or non-auto IoT, and then we'll continue to report automotive as it always has been.
We will continue to report sub-segment information for non-auto IoT, for example, enterprise and public security versus smart home security. One of the reasons we're doing this is that, as Chris mentioned, there are a number of new emerging segments of the non-auto IoT market, and as they become more material in the coming years, we'll break those out as sub-segments as well. A couple of examples could be access control, which we've spoken about in the past, and mobile robotics, but there are many others. In terms of the ample liquidity I described earlier, after Oculii, Ambarella has about $150 million of net cash. The acquisition closed on November 5th. Most importantly, this company has an incredibly robust track record of delivering positive operating cash flow: 13 consecutive years.
That goes back to before the IPO. This includes the period Fermi described, the last five or six years, when around $600 million was invested in computer vision R&D, which really just started to become material in the last year and a half. That contribution from CV, we think, is early, and it bodes very well for our ability to generate cash. It's also important to point out that the computer vision investment has been completely funded internally, in other words, by the video processors. Since its IPO, Ambarella has not done any secondaries and has not issued debt. The gross profit dollars have financed this CV investment, and what's left over has been deployed this way.
The accumulated cash flow that Ambarella has generated has been deployed to two M&A transactions, VisLab and now Oculii; to stock repurchases, about 35% of it; and the rest to CapEx. All of these arrows remain in the quiver going forward, and we'll continue to update our capital deployment plans as we move closer to those milestones of $500 million and $1 billion of revenue that we talked about earlier. It hasn't always felt like it, but the stock has been doing well. Ambarella's IPO was in the fall of 2012, and it's been the best-performing semiconductor IPO since that time, as the chart on the left depicts.
Against the major indices on the right, the ones relevant to us, Ambarella's also performed very well. With that, we're gonna stop for Q&A. I think we're gonna do a slight reconfiguration of the room. Is that right? You can get your questions ready. Julie will have a microphone, and maybe just give us a minute. Come on up, and we'll start to reconfigure the setup here for questions. While we're doing that, let's talk about the rest of the event. Immediately following Q&A, we're gonna have a management reception right outside here, and you may choose to mingle with the management and ask additional questions, or you can go see the demos.
We'll have Ambarella hosts in front of the demo room, and they will take you through and explain what you're seeing. Some of you do have reservations for the EVA car as well, so please go out to the tents in the parking lot and wait there. Again, if you don't have an EVA reservation, talk to the front desk here, and we can arrange it, if not today, then later in the week if you're gonna be here longer.
You wanna take over here and.
No.
'Cause I have the computer. I have to get the computer.
Can we start the Q&A?
Yes. Yeah.
I think, you know, we'll start the Q&A. Yes, please.
Do you have a microphone?
Yes.
I've got one right here. Sorry. Sure.
Yes, go ahead, please.
All right. Yeah, first one from Google.
Thank you for putting this together. Great to see everyone in person again. I don't know if this question is for you, Fermi, or Les, probably both of you, but you both mentioned today that your larger competitors obviously have different business models, and the way that you do things is with an algorithms-first approach. I understand they have a different business model and they leverage their R&D differently, but we're getting to a time now with AI where you would think they'd start to catch on to the algorithms-first approach.
Les, you wanna give it a try? Come up.
I think a lot of the.
Les, you will need to come here.
I think a lot of it has to do with the DNA of the companies. NVIDIA's background is in the graphics world, and they've been building GPUs since the beginning of the company. Their mindset is always, how can we extend the GPU to cover these new markets? If you look at how much area, even in their latest Orin chip, they devote to GPU functions versus dedicated AI acceleration, the vast bulk of the area is dedicated to GPU functions. You know, having worked in large companies before, I know it's very difficult for a large company to make a fundamental shift in direction. I think that's the basic reason.
I would like to add another comment here. If you look at those large companies, they already have established businesses based on GPUs or app processors. It's not only changing the silicon: when you change your silicon architecture, you need to change the complete software stack. In fact, NVIDIA has been pushing its software stack as the main differentiator in its business model. Asking them not only to change the silicon, but to change the complete software stack, I think that's a big challenge for them, because 90% of their business is still in GPU applications. I think that's another reason why, like Les said, it's hard for a bigger company to change the direction they are already on.
Sorry, go ahead. That's okay. Okay. There's no problem. Go ahead.
This was an incredible presentation, and one of the most exciting things to me is how Ambarella is moving towards creating more software, or I guess decoupling the software product from the hardware product. One thing I'm really curious about is how the unit economics of that will work with respect to the ASPs you're seeing today on CV3 or CV2. How should we think about the software modules as they're being sold with respect to the ASP, perhaps for a hypothetical customer?
Right. I'll take that question. As Louis talked about, CV2 has a double-digit ASP and CV3 can be triple-digit, but that's all still on the hardware side. On the software modules, when you license, you usually attach a separate price. For example, when we acquired Oculii, for the radar processing we talked about from $3 to $15, depending on the performance. You should expect similar things: for example, we license our blind spot detection to our e-mirror customers, which costs them probably a few extra dollars. It really depends on the performance and uniqueness of the offering, right? I don't have a price tag for everything yet, but you should expect that there will be an extra price associated with each software module.
Thanks. I'm trying to wrap my arms around the domain controller opportunity. I guess going from CV2 to CV3, does that expand? You know, is each domain won by one vendor? Those guys might win one, you might win one, or how does that shake out? I'm also curious about the new car guys, the Rivian guys, and why they like you versus some of the existing players.
Les, you want to? I think you should stay here.
I think what you can say right now is there's quite a wide range of opinions in the OEM space about how to construct their central domain controllers. In general, people are moving to a central domain controller that's running all the software. Exactly how many chips and how they partition it is gonna vary from OEM to OEM.
I have a question online.
Oh.
A question online, related, so Les, you can stay there. With Ambarella's heritage in video and imaging, there are companies like Tesla that have developed their own chip. How would you compare and contrast what Tesla has in volume today, with their architecture, and just the competitive dynamics between what you talked about today with CV3 and what Tesla has?
Right. I'm going to assume that's from a technology perspective. I think Tesla has said that their current chip is around 70 eTOPS of performance. If we use that as a metric, you can compare it against the 500 eTOPS number that the first CV3 chip will have. Also, I think our Arm performance is higher. In general, I believe our chip is significantly higher performance and more power efficient than the Tesla offering. Another differentiation is that, unlike Tesla, we do believe in radar, and especially high-definition radar integrated into the sensor suite; that is gonna be very important for robustness, because anything beyond an L2+ level of autonomy needs a more robust sensor suite.
Thank you very much for the presentation. I had a follow-up on the CV3. I think, Les, you mentioned that the EVA car has 16 CV2 processors, but it's set to be retrofitted with CV3. How many equivalent CV3 chips will you need for that? And then, maybe this is a question for Fermi. You know, you've got obviously some pretty high profile, front-facing level two wins with some, you know, automotive disruptors out there. How much of your time these days are you focused on these EV disruptors versus tier ones and traditional OEMs? And going forward, is it just a timing issue, and can we also expect to see traction with some of the more traditional automakers as well?
Okay, the first one's an easy answer. That's one chip. One CV3-High chip will be able to run the entire stack.
Well, I think we're talking about a really different timeline. When we talk about the electric vehicle announcements we made, in fact, we started engaging with Rivian in late 2018, and then four years later, we're talking about production. Recently, if you ask how I allocate my time, in combination with our offices in Detroit as well as Europe, in the last two months I traveled there four times despite the pandemic. That just shows you that there's activity, and we need to continue to show up for our customers; that's definitely one major area where I spend a lot of my time. Please.
Online question.
Sorry.
If I can. The question is, for L2+ and above, how do you see radar and lidar competing or cooperating?
Yeah. I think for L2+, you can get away with almost any type of vision, plus maybe nothing else, as in the case of Tesla, right? That's because the human is always there to take over control if something goes wrong. As soon as you go to L3 or above and the eyes are off the road, you need to be very sure that the sensor suite is not making a mistake in the perception. You need to augment vision with at least radar. Whether you still need lidar is, I would say, still an open question in the community. From our point of view, we believe that with the Oculii radar technology running on CV3, we will not need lidar.
Go ahead.
Computer vision as a percentage of your sales: you said 45% of your sales will be computer vision in calendar 2022. That implies about a 115% growth rate, off a very high growth rate last year. I'm wondering how sustainable that is. Does that incorporate sales from Oculii being integrated and the new CV3 chip? And then secondly, when you scale to $1 billion in revenue, what percentage of your sales by then do you assume will be computer vision?
For next year's revenue, I don't think CV3 will contribute; maybe a little bit of sampling and so on, but probably not material revenue. On the Oculii side, this year we talked about $3 million-$4 million of revenue, and we expect growth there, but it's not going to jump up to the levels we just talked about. The majority of the CV revenue growth next year will come from our CV2 family chips. We do believe the growth will continue to be there, but it's hard to maintain 100% growth every year.
Definitely, if you look at the other set of data we offered you, we have 270 customers, and 100 of them are already in production. I hope the majority of the others will go into production soon, and each customer will introduce more than one product. With that, I am still confident that CV revenue will continue to grow in a very healthy way. Yes.
Two questions. First, since we're in Las Vegas, I'll start with a fun question. Can you give us the over-under on when you think you'll hit the $500 million and $1 billion dollar revenue model?
Well, I think, you know, you look at the growth of the SAM. We said we expect to grow at that rate or above by taking market share. I think that's the closest we can get for you.
Okay. I didn't think I'd get an actual answer. The more serious question is, as you look at introducing CV3, how should we think about that expanding your TAM? Was the domain controller already in your SAM numbers, so that CV3 allows you to come in and just take a higher share of that SAM? Or should we think about it as really new incremental dollars now coming into the company? And again, what would the timeline be on material revenue from CV3? My guess is that's three to five years out. Is that a reasonable expectation?
CV3, in terms of timing for revenue, could be as early as the second half of calendar year 2024. In terms of the incremental SAM, if you go back and look at Fermi's presentation, he had a pyramid that showed where we are today; it depicts basically an L2+ or L4 software stack, with us providing camera perception. That is what you know Ambarella for today in the automotive market. Now with Oculii, we can go horizontally, as Fermi said, and do radar perception as well. What CV3 brings, in addition to the incremental perception opportunity with radar, is that it also allows us to go up the stack and capture the fusion and planning layers as well.
You know, perhaps Les can talk about the amount of processing required to do fusion in a car where you've got so many HD camera and radar sensors. But just to answer the dollar question before Les does that: if you look at the SAM that we've provided, and we've added two additional years relative to what we've had out before, look at how L2+ begins to take off as an opportunity for Ambarella in the higher layers. While we can do camera perception like we have with Motional, Arrival, and Rivian, there's an incremental opportunity now to do the fusion layer and the planning layer, and that's a big part of that incremental L2+ opportunity. Maybe Les can describe now the incremental processing needed to do fusion and-
Yeah.
The planning.
If you look at the stack, it's actually a pyramid also. Most of the processing is at the perception layer. When you're doing all of those sensors on one chip, that's a big multiplier factor. You know, the 10 cameras, five radars, 15 high-bandwidth sensors coming into the chip chew up a lot of processing. The next layer up, the fusion layer, is also a significant amount of processing, which will leverage the CVflow engine. At the planning layer, current planning software actually runs primarily on the Arm processors. That's one of the reasons we beefed up the Arm processing a lot on CV3, so that we can run all the traditional kinds of planning software, which require a lot more Arm performance than what we had on CV2.
Just had a quick follow-up on your radar SAM. I believe you said it was gonna be $600 million by fiscal 2028, and most of it is software. How much of that would be pure software versus software coming from your own chips?
The question was, and I'm repeating it because it broke up: of the $600 million of incremental SAM from Oculii that we showed for the first time, how much of that in our model is software versus the module business, I believe. Is that right? And how much of that would be third parties paying software licenses was the second part. Of the $600 million, a majority of it, say $450 million or $500 million, is coming from software licensing, and we've been very conservative in terms of modeling the radar module business and how it'll ramp. But there are some interesting opportunities out there, and we'll give you some updates as to how that business is progressing.
We've mentioned interest in the security camera market, where obviously Ambarella has a lot of good customers. Most of it, to answer your question, is on the licensing side. There was an online question I'll go ahead and tack on to that, which is: what is the range of ASPs for your radar licensing business? That can range from $2-$3 per unit for the Falcon up to $10-$15 for, say, an Eagle. We haven't talked about Raptor yet.
We also haven't talked about the radar algorithms running on CV3, which have much higher performance and resolution. The price will be different there too.
I had a question online regarding the go-to-market Chris talked about for software. Is there a recurring revenue model in here for Ambarella, do you see?
Well, having been in the semiconductor industry for 30 years, and with many of our competitors having tried that before, I haven't seen one be successful charging recurring revenue. I don't think that is part of our business plan either.
Yeah, Suji.
I was just curious, for Dr. Broggi, perhaps. This HD map, all the data you have there, is that a proprietary Ambarella format, or is it standardized? Can customers populate that information, versus you guys? I mean, does that level of data help with, for example, faster highway driving? I'm just curious.
Please.
Yeah, no, actually, right now it's our own proprietary map format, and we're using it in our own stack. So we haven't been doing any harmonization with other map software or other sources of maps.
Where's the other microphone? Julie, do you have it? While you're taking the microphone.
Hold on.
Let me just add one thing to that. You know, I think the map format is something that's very easy for us to adjust if there were a reason to use a different format. We're really focused on the core technology right now: how to generate that information and how to use it inside the stack.
Go ahead and pass the microphone. I'll ask this question. When will the presentations be made available? Shortly after this meeting is over. Next question is, with regards to radar specs, NXP and TI are currently limited to 12 by 16 or 16 by 16 channel resolutions. A recent spec described 48 by 48. Where does Oculii fit into all this?
Steven, yeah, go ahead.
That's a great question. In many ways, I think you're seeing the asymptotic returns on the hardware-based solutions starting to top out. When Louis referred to the two numbers there, the first number is the number of transmitters in the system, and the second number is the number of receivers. Those two numbers multiplied together give you the total number of MIMO virtual channels. Already what you're seeing is that you go from a 12 by 16 system, which is really where TI, NVIDIA, and NXP are capped today, to other companies doing a 48 by 48 system, Mobileye in particular.
The amount of compute that's required to do that in a brute-force way is already going up into the tens of teraops just for one radar module to process that many physical active channels. That's not to mention all the ADC channels, power amplifiers, and analog circuitry that you need to enable one module as well. Obviously, the cost, size, and power increase a lot, but if you look at the specs and the resolution numbers these companies are claiming, even with 48 by 48 systems, they're still at about one degree by one degree, which is actually what our system with only six transmitters and eight receivers is capable of delivering today. You can actually see a demo of that.
The module is about the size of a postcard, with only six transmitters and eight receivers, and it gets the same performance as a 48 by 48 system. Our software effectively allows you to achieve about a 50x performance improvement on a two-chip design today. On the four-chip designs in the future, we're planning about a 100x improvement. That's not to mention what we're gonna fuse on the Ambarella SoCs to enable even higher levels. This software-based approach is completely scalable and can complement any other system that has more antennas.
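To ground the channel arithmetic above, here's a minimal sketch of the virtual-array math. The configurations are the ones cited in the discussion; treating relative compute as scaling with virtual channel count is an illustrative simplification, not a benchmark.

```python
# MIMO radar: virtual array size = transmitters x receivers.
# Configurations are those cited in the discussion; the relative
# figures assume compute scales roughly with virtual channel count,
# which is an illustrative simplification.
configs = [
    ("12x16 (where TI/NXP-class parts cap out)", 12, 16),
    ("48x48 (brute-force high-end system)", 48, 48),
    ("Oculii module (6 Tx, 8 Rx)", 6, 8),
]

oculii = 6 * 8  # 48 virtual channels on the postcard-sized module
for name, tx, rx in configs:
    virtual = tx * rx
    print(f"{name}: {virtual} virtual channels, "
          f"{virtual / oculii:.0f}x the Oculii array")
```

Note that a 48 by 48 system yields 2,304 virtual channels versus 48 for the six-by-eight module, a 48x gap, which lines up with the roughly 50x software gain cited above as what lets the small array match a brute-force system's approximately one-degree resolution.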
As you see, we have three categories of products: Falcon, Eagle, and Raptor, with increasing numbers of antennas. We really think the sweet spot is going to be a smaller number of antennas, because that means smaller size, lower power, smaller form factor, and lower cost, and that is what's gonna be mass scalable with the right software, I think, with CV3.
I just had a short follow-up on the map question. I was curious where all the map data gets stored, or who is responsible for storing the map data. Is it currently just Ambarella looking at Parma and storing the maps there, or is this data service gonna be available to customers? How should we be thinking about that?
We haven't been optimizing that. The map data is currently streamed to the car. Actually, we have a big server in Parma with all the maps, and when you drive, you just download a little portion of that. Again, we haven't been optimizing that part. Yes. I think-
Yeah. I think the kind of high-definition map that we're using is actually quite efficient compared to the kind of dense point cloud maps used by people like Google because, as you could see from Alberto's talk, it's primarily landmarks plus lane information. So it can be stored on the car for fairly large areas, and it could be streamed from the cloud as well. The infrastructure for handling that is something we would expect our customers to deal with, not us.
There's an online question about CV3 derivatives. Can you talk about what those might be and what are the adjacent markets that those would serve?
I think, just like CV2, CV3 will be a complete family of chips. The CV3-High that Les introduced today is the highest-end version of the family, capable of level four autonomous driving. You can imagine the performance requirements for level two, level two plus, and level three are probably different, so we should have a family of chips at different performance and therefore different price points to address those key applications. Based on the same numbers that Chris presented, the two biggest markets moving forward are front ADAS and level two plus, and maybe in the future level three and level four. You should expect us to have separate chips that share the same unified software SDK to serve our customers. That's for automotive.
You should also expect the CV3 architecture to yield separate chips for our other markets, which will require higher CV performance, particularly with today's introduction of the neural-network-based ISP. You can see the quality difference in very low-light situations, which is critically important not only for automotive driving but also for security cameras. We expect more customers to move in this kind of direction, which requires us to provide solutions with higher neural network performance. You should expect a derivative chip from the CV3 family to address that market too.
There's another online question saying that you made the GPU compute comparisons clear, but could you describe how CV3 compares with a more application-processor type of approach?
Sure. We have done comparisons with both Mobileye and Xilinx architectures. In each case, we saw a large advantage in terms of power efficiency and overall efficiency. Like I said, it's easy to have TOPS; efficiency is hard. Even some of these other so-called application-specific approaches are not necessarily that efficient. You really need to have a combination of silicon expertise and algorithm expertise, and to bring those two things together very tightly.
Yeah, Quinn?
Just a clarification, probably more on the billion-dollar model than the $500 million model. You talked about the software module opportunity, which is a longer-term opportunity, and you've got the Oculii software. Is all of the software revenue fully incorporated in the gross margin targets in that billion-dollar model? Or to the extent that you start to recognize software revenue, especially from the autonomous driving modules, would that potentially be upside or change the margin model over time?
Yeah. I would describe two software businesses. There's the existing one that I think Fermi described as lower in the stack, and that's in those numbers. It's in the SAM, it's in the funnel. But then there's the new emerging software strategy that is not in the SAM, that is not in the funnel. Should that update our long-term financial model? That's just something we'll have to, you know, keep you up to date with.
I want to point out that software licensing is not 100% gross margin. John helped me understand that any associated investment needs to be accounted for as cost of revenue. I still think software revenue gross margin will be higher, but not dramatically higher, than our current corporate gross margin. Any other questions? Let me check online. Could you reiterate the timeline for CV3 tape-out and commercial deployments, and what markets and applications do you think would be the first to use CV3? Sorry. What is the size of the CV3 opportunity for Ambarella versus CV2?
Okay. CV3-High has already taped out. Thank you to our VLSI team for working hard to make it happen. We are planning to sample the silicon and our first-generation software probably in the second quarter of next year. The production will take a while, because we need to integrate with our customers' systems, plus software development and qualification; I think it's a minimum three-year, maybe even four-year, process.
Second quarter of this year, sorry about that; it's already January 3rd or 4th. In terms of the size of the CV3 opportunity, I would say it's a lot higher than CV2. In fact, if you look at the SAM numbers and the sales funnel we talked about, I think Louis mentioned that more than half of the funnel is based on level two plus and level three applications. Over time, when CV3 becomes a mature product, I think the CV3 family combined will contribute a lot more than the CV2 family as a whole. That's definitely the expectation. Not to mention the ASP is about one order of magnitude higher, which all helps to build out the CV3 revenues.
Two additional online questions, two different ones. In five years, what do you expect the CV versus video processor split to be? And the same question for auto versus non-auto markets.
I think in five years the vast majority of our chips will be CV-based. I don't think video-only solutions can survive, because the majority of our customers are demanding our CV solutions. Fewer and fewer video-only products are being requested by our customers these days. So that's that part. On auto versus non-auto, I'd be very disappointed if our auto revenue is not bigger than non-auto in five years. In fact, based on the SAM numbers and the activity we're seeing, I think our auto revenue will be higher than non-auto in five years.
Just one last one. A lot of different questions on Mobileye. Maybe you could summarize how CV3 compares to Mobileye, which Mobileye products you would compare it to, and what applications the overlap is in?
Right. Well, I guess the closest product that you can compare it to would be EyeQ6. I'm not quite sure what the public information is about EyeQ6, but from what we have heard unofficially, you know, CV3 is well above it in terms of performance and capabilities.
Any other in-room questions? We're pretty much tapped out online.
I got a question.
Yep. Julie?
In terms of the performance of CV3, you say it's a lot higher. Are you talking about 10%, 20% faster? What are the metrics?
Yeah, let me repeat the question. So the question is, correct me, Hans, if I don't get it right. CV3 performance,
How much better is it than the EyeQ6 or EyeQ Ultra, whatever it is that you're talking about?
Yeah. Well, it's a little bit difficult for me to answer that, because I don't know what information has or hasn't been made public yet, so I can't give you a direct comparison. Let's say there's a class of products aiming at around a 100-TOPS processing level. Compare that to CV3-High, which is a 500-TOPS chip. Also, I think the Arm performance we have is well above what they have. Traditionally, as part of Intel, Intel always wanted to sell another Intel CPU alongside the Mobileye chip, and that kind of handicapped how much CPU performance they were able to put in there.
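To put the TOPS-versus-efficiency point in rough numbers: the 500-TOPS CV3-High figure comes from the discussion above, but every power and utilization value below is an invented assumption for illustration, not a vendor spec.

```python
# Rough illustration of "it's easy to have TOPS; efficiency is hard."
# Only the 500-TOPS CV3-High figure comes from the discussion above;
# every other number is an invented placeholder, not a vendor spec.

chips = {
    "hypothetical_100tops_part": (100, 25.0),  # (peak TOPS, assumed watts)
    "cv3_high":                  (500, 50.0),  # TOPS quoted; watts assumed
}

for name, (tops, watts) in chips.items():
    print(f"{name}: {tops} TOPS / {watts} W = {tops / watts:.1f} TOPS/W")

# Peak TOPS also overstates delivered work: utilization on real
# networks is often a fraction of peak, so effective throughput is
# peak_tops * utilization, and that fraction is where the memory
# hierarchy and scheduling matter more than the headline number.
util_example = 0.4  # assumed utilization on a real workload
print(f"effective: {500 * util_example:.0f} TOPS at 40% utilization")
```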
Additional question. Is Ambarella consciously moving away from the more volatile consumer discretionary markets that you'd call "other," which may not upgrade to CV longer term? What is your strategy in that market?
I think that market will move to CV eventually, but we did make a decision, not to move away, but to defocus on that market, because we know the size of the market, and we know there is only one dominant customer in each segment, meaning drones and sports cameras. The decision we made is really that the market itself is not enough for our growth, so we need to find new markets that can leverage our expertise and product line while at the same time having a much bigger base in terms of customers and revenue growth. That's the decision we made.
For the consumer market, I really think those companies eventually need to move to CV. In fact, they can add a lot of value with CV functions, but that's not our focus area anymore.
Yeah.
I guess this wasn't really talked about today, but one of the most interesting things about Ambarella is just how many iterations of chips you're able to push out in a year, and over five years. You mentioned that CV3 will be backwards compatible with CV2. As all of these CV2 chips are now entering production and customers are using them, how much time do you think they'll need to be able to move to CV3? Because that's like a 10 times ASP increase.
Right.
Like-
Well, obviously you need to write some software to take advantage of the new hardware we added to CV3, so it will take time. More important is that our customers need to go through the production cycle. We're talking about automotive customers, so there are real safety concerns. Software development is a portion of it, but more importantly, the back end, the production testing, probably takes even more time. My opinion is that it's going to be very fast for us to demo the capability of CV3, but for a customer to go to production, that's a different question.
Andrew, we'll get to you next. Question for Fermi Wang. What's the biggest risk to hitting the $1 billion bogey in revenue?
Well, I think the biggest risk is always that we don't know what our competitors are going to do, right? In my mind, we know exactly where we're going, and we need to run fast and make sure we keep our advantage. We cannot control what our competitors are going to do, and that's definitely always the biggest worry for me. There's a question.
Sort of the same question, but asked maybe a different way. What's the bigger risk here? Is it competition, or is it continued inflation and component shortages lingering, or maybe even this ongoing geopolitical risk with China? It might be argued that's in the stock at this point, but things could still escalate.
Well, there are a lot of risks you could talk about, right? Some are short-term; for example, I really don't believe the shortage problem will last another two to three years. That's just not possible, in my opinion. It's a short-term problem. As for the geopolitical situation, although it's going to last, if you look at our revenue mix, it's de-risked to a point that I think is less risky for us. Now we are focused on developing automotive products, and CV3 is a great example.
Now that we have engineering development going, we need to focus on business development, making sure we talk to all the key customers, which we do, and convincing them that although we are a small company compared to those big competitors, we are capable, not just in terms of technology, but also ready to support them as a partner. That's where we're going to focus most of our effort in the next couple of years.
An online question on the driver monitoring market: how do you see this market evolving, and which software partners do you work with for driver monitoring?
Right. Consumer-level driver monitoring continues to happen in the aftermarket. Now we see driver monitoring move to the OEMs, and in fact, today at the OEM level we provide a complete solution. As they move to CV-based designs, we are counting on our software partners; we have multiple software partners that we listed in Chris's presentation. We're counting on them to deliver ADAS-type software stacks for those customers. I also want to point out there is another market related to driver monitoring, which is fleet management.
Fleet management at the beginning was just adding a driver monitor for safety. Now they've found they can integrate it with their current telematics systems, and that combination becomes very strong. Most of those customers are counting on us to give them a software SDK and some perception-level software, but they use their own third-party or custom software to finalize the product. For the recorder business, the aftermarket, the OEM business, and fleet management, we provide our standard SDK solution, but we're counting on third-party software partners to support it, and we have multiple of those.
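To illustrate that division of labor, here is a hypothetical sketch of the layering just described. None of these names are Ambarella's actual SDK APIs; it only shows the pattern of a vendor perception layer emitting events that the fleet customer's own software consumes.

```python
# Hypothetical sketch; none of these names are Ambarella's actual SDK
# APIs. The vendor SDK surfaces perception-level events; the fleet
# customer's own (third-party or custom) software decides what to do
# with them, e.g. fusing them into an existing telematics system.
from dataclasses import dataclass

@dataclass
class DriverEvent:
    kind: str          # e.g. "drowsiness", "distraction"
    confidence: float  # 0.0 .. 1.0
    timestamp_ms: int

class PerceptionSDK:
    """Stand-in for the vendor-supplied perception layer."""
    def __init__(self):
        self._handlers = []

    def on_event(self, handler):
        self._handlers.append(handler)

    def emit(self, event: DriverEvent):  # would be driven by the chip
        for handler in self._handlers:
            handler(event)

# Customer-side logic: fuse perception events with telematics.
def fleet_handler(event: DriverEvent):
    if event.kind == "drowsiness" and event.confidence > 0.8:
        print(f"[telematics] alert dispatcher, t={event.timestamp_ms} ms")

sdk = PerceptionSDK()
sdk.on_event(fleet_handler)
sdk.emit(DriverEvent("drowsiness", 0.92, 123456))  # simulated event
```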
Yeah. I would add, from the SAM perspective, that the SAM numbers you saw for non-auto IoT as well as auto are completely refreshed. The overall auto SAM, with the exception of the additional two years, didn't change much, except for driver monitoring, where we did increase our estimate for how rapidly that market would grow by, let's say, calendar 2025 or 2026. We do have higher expectations in our SAM for that market. In terms of overall size, as you can see from the stacked bar chart Chris had for auto, it's not the principal driver of our automotive business. By calendar 2025, the segment that I think was the green color really takes over in driving the largest part of our automotive SAM, and that's where CV3 really kicks in. I think we're good online. Anything else here? Last chance.
Well, there's one more.
Oh, yep.
Hey, thank you for the presentation, guys. If I think about your competitive position relative to your peers, obviously you're lower power and you're not a black box, so you give the OEM more ability to have control if they want it. Relative to some of your peers, where would you say you might have holes in your portfolio, or where might your IP be lacking? I know CV3 obviously addresses a lot of those things, and that's a big announcement today. Just in terms of infotainment or mapping, are there any areas you could talk about where you want to expand your IP and your advantages?
Well, maybe Les can address this infotainment question better than me. A lot of our customers and our competitors are talking about combining the infotainment system with ADAS and other safety functions into one chip, which I just don't think is possible, not for business reasons, but for technical reasons. Consider what it takes to do a Level 2+ car: how much performance you need to jam into the silicon and how much DRAM bandwidth you require to do that, and then you try to add another function that requires a huge amount of DRAM bandwidth on top. I just don't see that as the right partition from a technology point of view. Les, you wanna-
I have one more comment. I think the other big problem with that kind of architecture is how you ensure that the hard real-time processing being handled by the L2+ stack is not impacted by the infotainment processing. It's very difficult to do that when they're both sharing the same DRAM, and DRAM is usually the ultimate performance bottleneck in the system.
In fact, in my presentation I highlighted this real-time capability, which is something we're really proud of: under any circumstance, our product will deliver the performance. In many other architectures, because they try to share everything, although they claim high performance on every possible aspect, when you put it all together and everything runs at once, you don't get the peak performance on any aspect. That's a result of sharing resources, particularly DRAM bandwidth.
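As a back-of-the-envelope sketch of that bandwidth argument, with every number invented for illustration rather than measured on any real chip or software stack:

```python
# Back-of-the-envelope illustration of the DRAM-contention argument.
# Every number is an invented placeholder, not a measurement of any
# real chip or software stack; the point is structural.

dram_peak_gbps = 100.0                 # assumed total DRAM bandwidth
dram_usable = 0.7 * dram_peak_gbps     # sustained is well below peak

l2plus_need_gbps = 55.0      # assumed hard real-time perception load
infotainment_gbps = 25.0     # assumed bursty infotainment load

headroom = dram_usable - l2plus_need_gbps
print(f"usable bandwidth: {dram_usable:.0f} GB/s")
print(f"headroom after the L2+ stack: {headroom:.0f} GB/s")
if infotainment_gbps > headroom:
    print("infotainment bursts can starve the hard real-time stack")
```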
I think this relates to the question on application processors as well, to some degree, because some of those competitors are trying to attack every single domain in the vehicle. We're talking about the active safety and autonomy domain, period. We're not selling this into infotainment, body and chassis, or engine control. That's not what this is about. We're focused on these high-bandwidth applications: video and radar, and now fusion and planning. That's a big point of differentiation versus the application-processor approach.
Any other questions? With that, I would like to thank all of you for attending the meeting today in person. I also thank all the people participating online. Thank you very much, and we're looking forward to another CMD in the near future. Thank you very much.
Yeah. Thank you. We have management reception right outside after this. There will also be Ambarella hosts in front of the demo room to give you a guided tour of what we have. Thank you.
Okay. Thank you very much.
Oh, sorry, one more thing, as you check out. Thank you, Eric. We have a little bag of souvenirs for you, so please, before you leave, check with Julie in the pink turtleneck here to get your souvenirs. There's a Capital Markets Day cheat sheet in there too that you might find helpful. All right. Thank you.
Thank you very much.