
CES 2024

Jan 9, 2024

Dan Galves
Chief Communications Officer, Mobileye

Welcome, everyone. I'm Dan Galves, Chief Communications Officer of Mobileye. Thanks to the many of you joining us in person, and I'm sure there are many, many more joining us virtually. This is actually the tenth year in a row that our CEO, Amnon Shashua, has addressed the crowd at CES with this presentation. I think what people love about it so much is that, of course, part of it is about Mobileye, but most of it is really about this industry that has raised the safety level of the world's roads and has the potential to do so much more. A state of the union on automated driving, if you will. Over the years, it's really raised the knowledge level of many of us, and we really appreciate it.

Before we begin, please note that today's discussion contains forward-looking statements based on the business environment as we currently see it. Such statements involve risks and uncertainties. Please refer to the accompanying presentation, which includes additional information on the specific risk factors that could cause actual results to differ materially. Without further ado, let me introduce you to President and CEO of Mobileye, Professor Amnon Shashua.

Amnon Shashua
President and CEO, Mobileye

Hey, welcome, everyone, to Mobileye's annual press conference. So the way we structured the presentation this year is kind of in three layers: the why, the what, and the how. Why we're doing what we're doing, what the derivative of that is in terms of the product portfolio, and then, under the hood, you know, the novel things that were developed in 2023, which fuel the how of this. So in terms of the why, I think this slide kind of shows all of what we're doing. You know, there has been a transition going on in the industry for the past decade, and it will continue for the next decade as well, going from hands-on to hands-off to eyes-off to no driver in the car.

So when people think about autonomous cars, they think about the last one, the no driver, but there's those three steps along the way. The hands-on, this is on the left-hand side, this is the advanced driving assist. This is about safety, basic safety, let's call it. You have a front-facing sensor. Mostly it's a camera. In some cases, it's a camera plus a radar, with a visibility of the front sector and provides basic intervention against collision with a pedestrian, collision with a car, veering from lane. All this stuff that you know has been proven to reduce accidents in a very significant way. And it's fueled, the volume is fueled by regulatory influence, like star ratings and other mandates in every country.

So just last year, in 2023, we shipped around 37 million chips. So 37 million new cars on the road fueled by our chip in this category, on the left. The next category, which we call SuperVision, is also about safety, but now we have full 360-degree awareness. You have 11 cameras around the car: four parking cameras and seven long-range cameras viewing 360. So the kind of safety envelope that you can provide here is much more extensive. You can make sure that the driver is protected from, you know, other cars getting closer laterally, or something happening from the rear.

You have full, you know, 360 awareness, and there you can provide a much, much higher degree of safety. The other thing that you can provide is Navigate on Pilot. You punch in an address, and it will take you hands-free to that address, but the driver is supervising the system. This is why it's still called eyes-on. So it's hands-off, eyes-on. The driver is responsible because the mean time between failures of the system is not at the level at which you can say, you know, the car drives autonomously and I can go to sleep in the car. It has to be much, much better than a human driving, and this is not where this system is at in terms of the mean time between failures. So that's SuperVision.

The next level up is kind of a step function and adds another value proposition. So this is the eyes-off, where in some predefined Operational Design Domain, ODD, say, on the highways, certain speeds. So you set up your conditions, and under those conditions, the driver can have his eyes off the road, can do something else, like, you know, work with a smartphone. So the value proposition here beyond safety is buying time back, right? The roads are congested. We stay on the road for quite a while every day. If most of the time we can have eyes off and we can do something else more productive, this is a value proposition. So this is another value proposition. It's not only safety but buying back time. So this we call a Chauffeur.

The difference between SuperVision and Chauffeur is that within that ODD, say highways, you need to prove that your mean time between failures, meaning the average number of hours of driving between incidents that require the driver to take over, is way above human statistics. We're talking about 10x better, 100x better, 1,000x better. It has to be much, much better. And this requires adding more redundancies, adding more sensors. So it's a more expensive system, but not much more expensive: adding, beyond the cameras, a front-facing LiDAR, a front-facing imaging radar, surround regular radars, and a bit more compute. The next level up is no driver. So again, you define an ODD.

In this case, it's a geofenced point-to-point within a city, and you also have teleoperation because there is no driver in the car. So when the car requires assistance, say the road is blocked or, you know, a first responder approaches the car and needs to speak with someone, there's a teleoperator. Of course, in order for this to make sense from a business perspective, you know, the teleoperator needs to intervene only very rarely. Otherwise, it's like having a driver in the car. And here there's another value proposition, in which the car is a resource, and you want to utilize this resource in a commercial manner. So this is the robotaxi. So those four columns, you know, tell the story of what driving assist and autonomous vehicles are all about.

Going from hands-on up to no driver. So now, let's match this to our product portfolio. On the left-hand side, driving assist: our expected lifetime volume of design wins to date is about 270 million cars going forward. SuperVision, those 11 cameras around the car: 3.65 million units going forward. I'll give some more details on SuperVision later. Chauffeur: all the design wins in Chauffeur happened in 2023. It's a lifetime volume of about 600,000 cars. And the robotaxi, the system we call Drive, is about 50,000 across three partnerships, including, you know, the Volkswagen ID. Buzz, the Schaeffler VDL, and also the Benteler Mover.

Those partnerships are also starting from 2026. So SuperVision is already on the road. Chauffeur will start late 2025/2026, and Drive also 2026. So now, what is the impact of this on our numbers? The design wins in 2023 are responsible for a pipeline of $7.4 billion, and around 60.5 million to 60.6 million systems going forward. You see on the right-hand side the chart showing this growth, starting from $2.2 billion of booked business in 2020 and reaching $7.4 billion of booked business in 2023. You see this nice growth.

But I think what's more important is the average system price. Because ADAS, the front-facing camera, the average system price is around $50. The average system price of a SuperVision is about $1,500. The average system price of a Chauffeur is at about $3,000. The average system price of Drive is about $50,000, right? So as we go forward in time, the average system price of Mobileye products should go up because the mix of products going from pure ADAS front-facing camera, to SuperVision, to Chauffeur, to Drive, influences the average, the ASP. And you see that the ASP is growing. This is the curve showing the ASP starting from $59 in 2020, and this year, the business won has an ASP of $122.
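As a rough sketch of how that mix shift moves the ASP, here is a minimal Python illustration. The per-product prices are the approximate figures quoted above; the unit mixes are hypothetical, chosen only to show how a small share of higher-priced systems pulls the weighted average up.

```python
# Hypothetical illustration of how product mix drives the average system price (ASP).
# Prices are the approximate per-system figures quoted in the talk; the unit mixes
# below are made up for illustration and are not Mobileye's actual volumes.

PRICES = {"ADAS": 50, "SuperVision": 1500, "Chauffeur": 3000, "Drive": 50000}

def average_system_price(mix):
    """Volume-weighted ASP for a given unit mix {product: units}."""
    units = sum(mix.values())
    revenue = sum(PRICES[p] * n for p, n in mix.items())
    return revenue / units

# Almost-all-ADAS mix vs. a mix with a small share of advanced systems.
legacy_mix   = {"ADAS": 1_000_000, "SuperVision": 6_000,  "Chauffeur": 0,     "Drive": 0}
advanced_mix = {"ADAS": 1_000_000, "SuperVision": 40_000, "Chauffeur": 5_000, "Drive": 100}

print(f"legacy mix ASP:   ${average_system_price(legacy_mix):.0f}")    # ~$59
print(f"advanced mix ASP: ${average_system_price(advanced_mix):.0f}")  # ~$124
```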

So as you see, it's going up. So this is kind of the transition of Mobileye's business, going from front-facing systems up to full surround, hands-off, eyes-off, and no-driver systems. This is the number of vehicle models launched with our chip in 2023: over 300 models. And this is 29%, almost 30%, year-on-year growth. Last year, it was 233 models. And you can see, you know, the very good worldwide coverage. U.S., Europe, Asia. So it's full coverage over the globe. Another point is this cloud-enhanced. So when we're looking at the forward-facing camera, there is another jump that is only software-enabled.

We have a technology we call REM to build high-definition maps through crowdsourcing. Today, there are about 3 million cars sending us data every day, from which we build those maps automatically. Business that uses these maps with a front-facing camera, we call Cloud-Enhanced ADAS. This year, it accounts for 46.5 million licenses. A license is per car, per year, and a car carries a license for, on average, about five years. So you can see this business also growing, taking the front-facing camera business and, just by enabling the map through software, creating a much higher ASP for a front-facing camera business. So this is ongoing, progressing well. This brings me to the announcement with Chery.

So Chery in China, in the next two months, is going to launch Cloud-Enhanced ADAS. So, front-facing camera. We have had front-facing camera business with Chery for quite a while. Now, these high-definition maps, the REM maps, are added to the front-facing camera and going to be launched in two months. And this is the first launch in China for cloud-enhanced driving assist. So this is why it is important. This is really the first of, I hope, many cloud-enhanced driving assist launches to come in China. So now I want to start focusing on the advanced products, from SuperVision and Chauffeur to Drive.

Yesterday we announced with a major Western OEM design wins across the entire advanced product portfolio, SuperVision, Chauffeur, and Drive, across 17 car models, nine of which include Chauffeur. So this is very meaningful in terms of the credibility going forward of this type of product portfolio. This morning, we announced SuperVision with Mahindra and Mahindra in India as well. The interesting point here is that when you think about India, it's not the first territory you would think of for, you know, a hands-off system. Actually, in India, new highways are being built. Driving assist, front-facing camera driving assist, is having major success.

For example, Mahindra and Mahindra have a number of SUV models with our front-facing camera technology, and demand has more than tripled what they forecasted. So in the Indian population, you know, there is a strong desire for safety systems in cars, and now SuperVision is going to be on Indian roads in 2026. Now, India is a huge territory, as we all know, and this is a huge growth potential for Mobileye, not only for the basic systems of a front-facing camera, but for the more advanced systems like SuperVision. And this is why I find this announcement also very meaningful. Now, I want to pause on this slide. This is really all that we have with SuperVision and Chauffeur.

Last year, we had only one car model, which was the Zeekr 001, and here, what I'm showing you is 30 car models. So we have two car models at Zeekr that are in production with SuperVision. The Polestar 4 model is already in production in China, and then later in 2024 and, more importantly, 2025, it will expand globally to Europe and the U.S. We are launching in about a couple of months with Smart in China, and launching with Volvo in the middle of the year in China. So this is the Geely Group. We announced half a year ago that SuperVision is going to be launched with Porsche, and you see the date, 2026.

The number of car models is not disclosed yet, but it's at least one, right? And we also announced a few months ago, back in September, the FAW Group in China. There are six models, with both SuperVision and Chauffeur. The SuperVision is coming out at the end of 2024, so it'll impact the 2025 business. And we have the Chauffeur to be launched end of 2025, beginning of 2026, all in China. Mahindra, I just mentioned, 2026. And then the Western OEM, 17 car models, both SuperVision and Chauffeur, all in 2026. And you see also it's a combination of electric vehicles, EVs, and also combustion engine vehicles.

So it's not just a product designated for electric vehicles, it's a product designated for cars, regardless of the powertrain. So you can see in terms of our business growth of this line of products, we are going from two car models to five car models in the Geely Group this year. In 2025, we have the FAW models joining, and there's also some expansion from the Geely Group to Europe, Polestar. And this provides a growth in 2025, and then the big step function starts from 2026, when we have the Porsche, Mahindra, 17 car models of the Western OEM, some ramp up in 2026, and then, you know, the big impact in terms of volume in 2027.

So this is the current state of design wins and how they affect our product portfolio going forward, how they affect our growth. So let me say a few words about the rollout of SuperVision in China. So we have today over 160,000 vehicles with the SuperVision in China, and this is a great test bed for the technology. It's going to come out in Europe with the Polestar 4. Also, Zeekr is launching in Europe throughout 2024. So we'll start seeing it going out of China with the Geely Group this year as well.

About the middle of last year, around August, there was the full launch of the entire feature spec, including Navigate on Pilot, in which you punch in an address... and while the car is driving on highways, the system drives hands-free through very challenging driving conditions. And then urban settings: urban is going to come out this quarter in China, and the number of cities is gradually growing. Today we're supporting 22 cities. By the end of this quarter, it's going to be around 100 cities, and then urban is going to start as well in a meaningful manner. But now, let me show you a bit of this. This is about a 10-second clip from, you know, Chinese influencers.

You know, they drive the car and then they post some clips. I'm not going to bore you with the many clips. I just want you to see the type of scenarios in which the system is being driven. So this is kind of hands-free. You know, rainy conditions; it changes lane autonomously in congested traffic. Now, we'll go over a number of scenarios, also in China, just to show the kind of challenging environments in which the system is working. So for example, a blocked lane: the system, you know, will change lane automatically, negotiate the lane change. Construction areas: it identifies them and merges into traffic accordingly.

Merging when a lane ends and you need to change lanes: it is doing that in very congested driving. A pedestrian on the road: the system will offset around them. Let me show you a more detailed test of the safety features of this system. So out of the box, regardless of Navigate on Pilot, the safety envelope that the system provides is quite impressive. We believe it's unmatched by any system on the road. And we did benchmarking on all the known systems. What I'm going to show you here is really exclusive to the system.

So, let's start with this: there's a box on the road, and you see the car offset in order not to hit the box. A car is blocking the lane: there was an offset. The car is not blocking the lane, but the door is open: also an offset. Of course, I'm speeding this up. Construction zone areas: it'll identify them, offset, negotiate... Very sharp curves. Here, again, this is mapless. There is no map. This is kind of out-of-the-box highway assist. Very sharp curves. It'll pass through a junction with sharp curves and stay in the right lane. So this is the kind of, you know, out-of-the-box safety, even before we get into the more advanced feature of Navigate on Pilot, powered by the REM map.

So as I mentioned at the beginning when I talked about the why, the fact that you have full surround camera sensing can give you a much more extensive, comprehensive envelope of safety around the car. So now I'm going to go into the how, right? So just to remind you what the key technology enablers are, and then I'm going to focus on a subset of them. Our key technology enablers are our silicon platform, a system-on-chip platform we call the EyeQ, and our computer vision algorithms, sensing the environment, whether it's a front-facing camera or full surround cameras, being able to provide a sensing state of where all the important elements on the road are that you need to know about if you want to control the car.

Then we have a crowdsourced technology for creating high-definition maps, which we call REM. Our driving policy is based on a technology we published back in 2017, which we call RSS. It gives guarantees about not causing an accident if you follow this kind of technology. Our ECU, the hardware that we build, and DXP, the Driving Experience Platform, are things that I'm going to focus on later in this talk. And we also develop active safety sensors. Imaging radars that are going to start production in 2025, which is relevant for our Chauffeur launches at the end of 2025, beginning of 2026.

And our FMCW LiDAR, Frequency-Modulated Continuous Wave LiDAR: this is the next-generation, next-step technology of LiDARs. Today's LiDARs are time-of-flight LiDARs, and that's going to come out in the 2028 timeframe. So those are the key technology enablers that allow us to build a system end-to-end, from silicon, including sensors, including hardware, including the software and the high-definition maps, to power the product portfolio that I showed in the first slide. But now I'm going to focus on two issues. One is how to reach a sufficient mean time between failures for an eyes-off system, right? Because when you have an eyes-on system and the driver is responsible, right, you measure things. The KPIs are kind of comfort-level KPIs.

But when you're going up to an eyes-off system, your KPI is how many hours pass between interventions. And the number of hours should be much, much greater than human statistics. Human statistics is about a crash every 500,000 miles of driving, so say about 50,000 hours of driving. So we need to be much better, much better than that. So how do you reach such a high mean time between failures in such a system? And the second point I want to focus on in this talk is: how do you reach scale, right? Because at the end of the day, we are a business, right? We're not an academic institute. We need to reach scale.
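As a back-of-the-envelope sketch of that bar, using only the figures quoted above (one crash per roughly 500,000 miles, which at the implied average speed works out to about 50,000 hours):

```python
# Rough MTBF targets for an eyes-off system relative to the human baseline quoted
# in the talk: ~1 crash per 500,000 miles, i.e. roughly 50,000 driving hours.

HUMAN_MILES_PER_CRASH = 500_000
AVG_SPEED_MPH = 10  # implied by the talk's own 500,000 miles ~ 50,000 hours figure

human_hours_per_crash = HUMAN_MILES_PER_CRASH / AVG_SPEED_MPH  # 50,000 hours

for factor in (10, 100, 1000):
    target_hours = factor * human_hours_per_crash
    years = target_hours / 8760  # hours in a year, for intuition
    print(f"{factor:>5}x better than human -> MTBF ~ {target_hours:,.0f} hours "
          f"(~{years:,.0f} years of continuous driving)")
```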

These are very complex systems, and if you try to customize each system for every car model, it will be very, very difficult to reach scale. So how do you reach scale, yet provide your car makers, you know, full control of the driving experience? Actually give it to them in a way in which they feel they built the system on their own. So there's kind of a tension. On one hand, to reach scale, you want to have a black box system, so the same system fits all car models. On the other hand, then the car maker does not have influence on the driving experience. On the other side of the spectrum, you know, I can provide you an open compute. "Here is a chip, here are some libraries. Go and build your system yourself.

You'll have full control of the driving experience," but the risk of execution is very, very high. And we saw a number of car makers go through this journey with limited success. So there's this tension between the two, and we found a sweet spot in which we can get our scale, and the car maker can, you know, control the driving experience as if the car maker had built the system from scratch. So those are the two areas I want to focus on. So let's start with, you know, what is the optimal way to leverage the recent AI breakthroughs? We all know about ChatGPT, transformers, language models.

There isn't anyone in the crowd who hasn't heard about that today, or who is not, you know, using ChatGPT or Claude or, you know, the Gemini Pro that is coming out, and so forth. So there is kind of a recent breakthrough in the past two to three years in AI. How is it affecting this kind of business, and how do we utilize it in, really, an optimal manner? So we'll start with this end-to-end approach, and we can divide the end-to-end approach into two. The first one, on top: the inputs are images. I'm focusing now on cameras, but you can think of cameras, radars, LiDARs. The inputs are images, and the output is control, right?

So it's one monolithic engine, one deep network, that receives the images and outputs the steering and braking control. So this is one version of an end-to-end system. The second version of an end-to-end system is one where the end-to-end is responsible only for the perception. So you have one big network that receives the images as the input and outputs a sensing state: where all the road users are located, the lanes, all the information that is relevant to controlling the car. And then the driving policy, the stack that is responsible for making decisions, and the control stack are separate. So there are kind of two versions of an end-to-end. Now, if you look at the pros and cons of them: the full end-to-end system, the issue there is you don't have transparency.

You don't know what the system is doing in terms of decision-making. Second, you don't have controllability, right? You can have constraints coming from regulatory bodies. You can have constraints coming from the car maker, the customer, who wants to control the driving experience. You are not able to control this because it's an end-to-end system. And the third is the MTBF. It's really unprecedented to reach, you know, a system with 99.9999% accuracy with one machine learning engine. There's no precedent for something like this. And if you are using language models, you see those language models, they hallucinate. You know, they're very impressive, but they have mistakes, right?

So creating one machine learning engine that reaches such a high MTBF is really unprecedented, okay? If you look at the other end-to-end system, which is only the perception, then since you are controlling the driving policy, the control of the car, and the HMI of the car, you have the transparency and the controllability, but you still have this issue of how to reach a very high MTBF on the perception stack, right? Later, I'll show you that the way we reach high MTBF is by using this end-to-end as just a component in a system with many components, for purposes of redundancy.
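A minimal sketch of why that redundancy argument helps, under the strong assumption that the two perception channels fail independently; the MTBF numbers below are purely illustrative.

```python
# If two independent perception channels each fail rarely, the chance that both
# fail in the same hour is roughly the product of their per-hour failure
# probabilities, so the combined MTBF is far higher than either channel alone.
# Independence is a strong assumption and the numbers are illustrative only.

def combined_mtbf_hours(mtbf_a: float, mtbf_b: float) -> float:
    p_a = 1.0 / mtbf_a  # per-hour failure probability, channel A
    p_b = 1.0 / mtbf_b  # per-hour failure probability, channel B
    return 1.0 / (p_a * p_b)

camera_subsystem = 10_000       # hypothetical: one failure per 10,000 hours
radar_lidar_subsystem = 10_000  # hypothetical: one failure per 10,000 hours

print(f"combined MTBF ~ {combined_mtbf_hours(camera_subsystem, radar_lidar_subsystem):,.0f} hours")
# -> ~100,000,000 hours, versus the ~50,000-hour human baseline mentioned earlier
```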

But now, let's focus on this end-to-end perception, because the AI advancements of the past two years, you know, give us a lot of ideas for how to create an end-to-end perception system. So in order to have an end-to-end perception system done right, which is kind of the title here, there are five "multis" you need to satisfy. You need to satisfy multi-camera, so all the cameras, you know, are represented in one shared space. You need to have multi-frame. Multi-frame is time, right? All represented in the same space, so you have spatial-temporal consistency. You need to have multi-objects: all the objects around the car should be in the same shared space. You need to be able to satisfy multi-scale, because things are at different resolutions.

If you have a car very far away, you need really high resolution. If you have a car very close to you, you don't need all the resolution in order to make judgments and detect the car. So you need to work at multi-scale. And very importantly, it has to be multi-lane. That means it's not enough to place an object in 3D accurately. You need to know which lane it is in, because you want to predict what's going to happen in the future, right? The fact that you know which lane the car is in helps you predict what's going to happen in the next few frames. So multi-lane is also very important in creating this end-to-end perception. So current technology is really about the first three multis.

It's called Bird's Eye View, or BEV. You take all the surround images, the multi-camera images, and you map them all through, you know, a backbone, a deep convolutional net, a transformer. You actually don't need the self-attention because the images are local in space and local in time, but some people also use transformers, and they bring it all to a shared space. The shared space is both spatial and temporal, right? So it's all in a shared space, and that way you get the consistency, the spatial-temporal consistency. And then you use an autoregressive model, just like the ones used in language, in order to create the list of objects, to create the sensing state, right? And there are academic papers from the past two years about that.
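To make the "shared space" idea concrete, here is a toy sketch (not Mobileye's network): per-camera detections, each in its own camera frame, are transformed into one ego-centric top-down grid and tagged with a timestamp, so multi-camera and multi-frame information ends up in a single consistent space. All geometry and detections are invented.

```python
import math

# Toy illustration of a BEV-style shared space: each camera reports detections in
# its own frame (range, bearing); we transform them into one ego-centric top-down
# grid keyed by (cell, timestamp). Geometry and detections are made up.

CELL_M = 0.5  # grid resolution in meters

def to_bev_cell(cam_yaw_deg, cam_offset_xy, rng_m, bearing_deg):
    """Convert a (range, bearing) detection in a camera's frame into a BEV grid cell."""
    theta = math.radians(cam_yaw_deg + bearing_deg)
    x = cam_offset_xy[0] + rng_m * math.cos(theta)  # meters forward of ego
    y = cam_offset_xy[1] + rng_m * math.sin(theta)  # meters to the left of ego
    return (round(x / CELL_M), round(y / CELL_M))

bev = {}  # (cell, timestamp) -> list of (camera, label)
detections = [
    # camera,      yaw,    offset,      range, bearing, label,        t
    ("front_cam",   0.0,  (2.0, 0.0),   40.0,   -3.0,  "car",         0),
    ("rear_cam",  180.0, (-1.0, 0.0),   15.0,    5.0,  "motorcycle",  0),
    ("left_cam",   90.0,  (0.0, 1.0),    8.0,    0.0,  "pedestrian",  0),
]
for cam, yaw, offset, rng, bearing, label, t in detections:
    bev.setdefault((to_bev_cell(yaw, offset, rng, bearing), t), []).append((cam, label))

for (cell, t), hits in sorted(bev.items()):
    print(f"t={t}s cell={cell}: {hits}")
```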

So it's called BEV, and it really handles the first three multis, and it's already embedded in Mobileye's system. We call it a top view net. We use only parking cameras for that. So the four parking cameras are fed into a BEV network, and out of it you get a sensing state of the vicinity: about 15 meters on each side. As parking cameras have a very wide field of view, they can see about 10 to 15 meters ahead. And it's also useful for another class of products that I did not mention here because it's somewhere in between. It's called 5V, 5R: you have five cameras and five radars, and the plus here is the fact that we use a top view net.

So you have a front-facing camera, just as in basic ADAS, and many cars today have four parking cameras to support parking, whether it's autonomous parking or just to visualize when you are doing the parking maneuver. But those parking cameras are not playing a role in intelligent driving, are not playing a role in driving assist. You can use those parking cameras and create a top view net, as you see here, together with a front-facing camera, and create a higher level of driving assist using the existing hardware, front-facing camera and parking cameras, and we call that 5V, 5R. Okay? So why can't we use this technology to get the full, you know, the full range? It's because of the computational load.

So in the literature, it's called dense, right? Because you're not really handling the multi-scale part. So why is this difficult? So in order to be relevant, you need to see about 200 meters on each side. You need to see 200 meters in the front, 200 meters in the rear, and when you're in junctions, you need to see 200 meters on every side. So it's 400 by 400. Now, a resolution, say, of 10 centimeters is kind of relevant. So you are now on a 4,000 by 4,000. 4,000 by 4,000, now you need to multiply this by the number of channels. Usually, in these networks, it's about 256 channels. And then you need to represent the information in bits.

Normally, it's about 16 bits. So if you now multiply all of what I said so far, it's about 64 GB. 64 GB is the maximum memory that an A100 of NVIDIA can handle, right? And putting an A100 in the car, you know, let's see a car maker making a business out of that. So... And this is before looking at all the other memory requirements that you need for a system. So it's simply too brute-force. In the academic literature, people have noticed that, and what has been introduced is what is called sparse BEV, for example BEVFormer or DETR3D, and the idea of sparse is to sample this dense shared space by using a prior.
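As a back-of-the-envelope sketch of that footprint, using the figures just quoted; the size of the temporal window is my own assumption (the shared space is both spatial and temporal), so treat the total as illustrative rather than exact.

```python
# Rough memory footprint of the dense BEV grid described above. Extent, resolution,
# channel count and precision are the figures quoted in the talk; the temporal
# window (frames) is an assumption made here for illustration.

extent_m     = 400   # 200 m to the front, rear, and each side -> 400 m x 400 m
resolution_m = 0.1   # 10 cm cells
channels     = 256
bits         = 16
frames       = 8     # hypothetical temporal window

cells_per_side = int(extent_m / resolution_m)  # 4,000
values = cells_per_side ** 2 * channels * frames
gigabytes = values * bits / 8 / 1e9

print(f"{cells_per_side} x {cells_per_side} grid, {channels} channels, "
      f"{frames} frames, {bits}-bit -> ~{gigabytes:.0f} GB per inference")
# -> on the order of tens of GB, before activations and the rest of the stack
```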

So you could think of it like this: maybe you have a different system to detect objects, and that system will give you a prior on where the object is in your space, and in that way, you sample that space in a smarter and more efficient way, and you don't need to represent these 64 GB of data. Okay? So how do we approach this? First, multi-scale is not the only problem. We also need multi-lane. Just solving the multi-scale will not be enough: we'll be able to place an object in 3D, but we need to place it in lanes. So we need to solve the multi-lane as well. And here come our REM maps, right? We have a high-definition map everywhere on the planet.

So the map itself is the ultimate prior, because now you know where the lanes are, and that gives you kind of a prior on where the vehicles, the road users, are, and it also gives you the lane assignment, so you have the multi-lane. So this technology gives you an end-to-end system using the bird's-eye-view technology, but done right. Rather than trying to create inaccurate priors for a sparse sampling of this huge space, we use our REM map to do that, and that way, we get a very efficient system, and this system is running on our EyeQ6 platforms. So just to give you some pictures of what this gives you.

So for example, you have here a car in a roundabout, and we have the lane and the center of the lane from the map mapped onto the image. On the right-hand side, we're kind of straightening the road. So you can see that we can place the car in the lane, even though it is occluded behind a roundabout. So this is one advantage of this technology. You can see here that you have an occluded car, and on the right-hand side, you see a mapping of all the objects around the car, where we marked in the red rectangle this occluded car. So we can place it in the right lane, even though there are many, many occlusions. Again, you have here cars in kind of an angle.

One is in the lane, and the two others are parked on the side. And you can see on the right-hand side, marked in a red rectangle, exactly how those cars are placed, with an accuracy of centimeters from the ground truth. The same thing here: you see cars in the red rectangle in the image. It's not clear whether the cars are blocking the lane or not. But on the right-hand side, you see where they are placed, and they're not blocking the lane. So this is the kind of thing that you can get from holistic processing of the entire camera setup, spatial-temporal, across cameras, frames, and scale, and also embedding our map into it. Okay, and this is the multi-resolution.
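To make the lane-assignment idea concrete, here is a toy sketch under simplified assumptions: lane centerlines from a map are just lists of points, and each detected object is snapped to the nearest centerline. The geometry is invented for illustration; the real system works in a far richer representation.

```python
# Toy sketch of using map lane centerlines as a prior: assign each detected object
# to the nearest lane centerline instead of reasoning over a dense grid.
# Lane geometry and object positions are invented for illustration.

def nearest_lane(obj_xy, lanes):
    """Return (lane_id, distance) of the closest centerline sample to obj_xy."""
    best_lane, best_d2 = None, float("inf")
    for lane_id, centerline in lanes.items():
        for px, py in centerline:
            d2 = (obj_xy[0] - px) ** 2 + (obj_xy[1] - py) ** 2
            if d2 < best_d2:
                best_lane, best_d2 = lane_id, d2
    return best_lane, best_d2 ** 0.5

lanes = {
    "ego_lane":      [(x, 0.0) for x in range(0, 200, 5)],  # straight lane ahead
    "adjacent_lane": [(x, 3.5) for x in range(0, 200, 5)],  # one lane to the left
}

for obj in [(42.0, 0.4), (87.0, 3.1)]:
    lane, dist = nearest_lane(obj, lanes)
    print(f"object at {obj} -> {lane} (nearest centerline sample ~{dist:.1f} m away)")
```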

Okay, so it sounds like great technology. Is this sufficient to get our MTBF? And we say, "No, it's not sufficient." It's great technology, it should be in our chip, but it's not sufficient. And this is kind of the motto that I've been talking about for a number of years: redundancy, right? So redundancy comes from the sensor modalities. On one hand, you have cameras; on the other hand, you have radars, LiDARs. The reason that we are developing those imaging radars and FMCW LiDARs is to create that layer of redundancy that could reach a very high MTBF, and then we combine the two in a redundant system. Then from the computer vision, there is appearance-based, there's geometry-based, there is machine learning, like what we saw right now, and there is model-based.

You know, the old type of processing, model-based processing of finding vehicles. Then in the end-to-end, there is the decomposable approach, and there is the end-to-end system approach. Specifically, when you look at the decomposable approach, this is very good. It excels at solving edge cases. On the other hand, the end-to-end is very good at comfort because it creates a consistency among all the objects around the scene, but it's not that good at edge cases. Right, so you really need to combine the two and not just focus on the end-to-end. And this is one of the strengths of Mobileye, that we combine everything together. So we are still in the how. All of this is powered by our system on chip. So now I'm just showing the advanced family.

The EyeQ5 is powering the current production of SuperVision: the Zeekr 001 and 009, the Polestar 4 that's in production, the Smart and Volvo in China that are coming out this year, the FAW that will be launched at the beginning of 2025. All this is powered by EyeQ5. It's a 7-nanometer process. It's 16 tera operations per second in int8, 8-bit. The power consumption, the maximum power consumption, not the average, which is way, way lower, but, you know, in our business you measure the maximum power consumption, is 27 watts, and this is in production. EyeQ6 is replacing EyeQ5. It's 34 TOPS, also the same process, 7 nanometers.

Its power consumption is not much higher; it's only 33 watts. We are already working with it: we have samples, and our software is running on the chip. The SOP is in 2025. So this is for the Western OEM's 17 models, for Mahindra, for FAW, the Chauffeur, for Polestar, the Chauffeur. All of these are EyeQ6 systems. We are in development of an EyeQ7. It's a 5-nanometer process, 67 tera operations per second, 60 watts max. We'll be sampling it in mid-2025, and the SOP is in 2027. By the way, all these chips are designed together with STMicroelectronics, our partner since 2005.

And we see a long, long road ahead of working together on designing these chips. Now, when you look at the line of TOPS, it's kind of underwhelming. All right? When you hear about, you know, competing systems on chips, there could be hundreds of TOPS. Some even claim thousands of TOPS, and here we're talking about something like 16, 34, 67. The point is that TOPS is like measuring the quality of a company by its headcount. Right? It doesn't say anything, right? Sometimes a very small company is worth much more than a company with a very high headcount. So the reality is much more than TOPS.

So here I'm comparing the top systems in China that are similar to SuperVision in terms of the desired performance. They also have multi-cameras around the car. Many of them have LiDARs around the car. And you see that they're using Orins. Two of them are using two Orin chips. One of them is using four Orin chips. The TOPS are about 500 TOPS, 1,000 TOPS, compared to 32 TOPS of the two EyeQ5 chips in our system. You can see also that the power consumption is much, much higher. So we're talking about an order of magnitude more TOPS than us, yet in all the benchmarks that we have seen, both benchmarks that we have done and benchmarks others have done, our performance is better, right?

So how do you explain this? The explanation is that our chips are designed for the kind of, you know, software stack that we build. It's not one monolithic accelerator. We have four or five types of accelerator engines, each one specific to a different type of software stack, and they all work together to create a much more efficient system. So for example, the EyeQ6 has twice the TOPS of the EyeQ5, but in terms of what it can do, it's about 4x the EyeQ5. If we take the stack that's running today on SuperVision and put it on an EyeQ6 system, the two EyeQ5s are running on 50% of a single EyeQ6. So it's 4x. The TOPS don't tell that story. Next generation: imaging radars.
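The "TOPS is like headcount" point in numbers, taking the figures quoted above at face value: raw TOPS ratio versus the effective capacity implied by the statement that the SuperVision stack, which today occupies two EyeQ5s, runs on about half of one EyeQ6.

```python
# Raw TOPS ratio vs. effective capacity, using the figures quoted in the talk.

eyeq5_tops, eyeq6_tops = 16, 34

raw_ratio = eyeq6_tops / eyeq5_tops            # ~2.1x in raw TOPS
stack_cost_in_eyeq5_chips = 2.0                # SuperVision stack uses two EyeQ5s today
stack_cost_in_eyeq6_chips = 0.5                # ...and about half of a single EyeQ6
effective_ratio = stack_cost_in_eyeq5_chips / stack_cost_in_eyeq6_chips  # 4x

print(f"raw TOPS ratio (EyeQ6 / EyeQ5):       {raw_ratio:.1f}x")
print(f"effective ratio on the same workload: {effective_ratio:.1f}x")
```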

You know, the specs of these radars are unprecedented, both in terms of their dynamic range and in terms of the number of channels. There are thousands of virtual channels. Their side lobes, everything, are way more advanced than any spec of any imaging radar that is in design today or on the market. And let me show you a clip. On the left-hand side, you see a wooden ramp about 240 meters away, and we'll be running this in fast-forward, and then you see that the wooden ramp is detected. You see that kind of gray box. It's being detected. Now it'll be slowed down so you can see the wooden ramp, right? So we can detect it consistently 240 meters away.

Of course, this radar is capable of detecting, you know, pedestrians close to vehicles, close to trucks. You basically create an image just like an image from a camera, very high resolution, and the idea of this imaging radar is to create a layer which is independent of cameras, and in that way, you reach redundancy. So the last point I want to talk about is: how do we reach scale, yet satisfy the desires of our customers to control the driving experience? There is naturally a kind of tension between differentiation and growth in resources. When we do scale, what we want to achieve is not linear growth in terms of resources; you know, what we want to achieve is sublinear growth.

So your resources grow sublinearly with the number of customers that you have. A full black box solution is perfect in terms of sublinear growth, but it does not give any differentiation to the car maker. On the other hand, if you look at the risk axis, a full open in-house solution carries very high risk. If you just offer sensing from Mobileye and the OEM develops its own driving policy, that also creates a lot of risk, because the driving policy is quite complex.

It is tightly tied to the sensing, because sensing is not perfect. This would require Mobileye to create a different sensing branch for every car maker, it would tie the driving policy to a particular sensing branch, and you would not be able to do over-the-air updates of the sensing, because then you would need to revalidate the driving policy. So how do you resolve this tension? The common way is to provide an SDK. This is on the left-hand side. Say I give the customer my system on chip, give the customer a list of libraries and an API to call those libraries, and tell the customer, "Go and build your system." The reason why it is challenging is that none of these components are perfect. Therefore, the integration becomes very, very tricky.

How do you integrate, to create a very high-performance system with imperfect components? The other way to do it is to provide knobs, to provide tunable parameters. Say we built a system with a driving policy, and now it has some tunable parameters to allow the customer to tune the driving experience. The problem here is that the number of tuning parameters grows exponentially. There's no way to give the customer full control of the driving experience just by tuning parameters. So how do we go and square this circle? The observation is that we need to separate universal from unique. Universal is everything that is really OEM-agnostic. It doesn't change from car model to car model. Say, for example, sensing.

Perception is the same, whether it's on an Audi vehicle or a BMW vehicle or a Hyundai vehicle. You need to perceive the world at a high level of fidelity in order to perform the actions that are needed to control the car. There's no differentiation there. On the other hand, control and HMI are really unique to the car. This is where there's differentiation. So the elephant in the room is driving policy. Driving policy is all about how you control the car, how you make decisions: whether you change lanes, how you negotiate with other cars to change lanes, your braking profile, whom you yield to, whom you overtake. All the decision-making of the car.

This is something that the car maker wants to be able to control, because it influences the driving experience. So what we found out is that this driving policy can be separated into universal and unique. Think of the universal part as kind of an operating system, in which we built all the universal parts of driving policy and created a framework for writing code on top of our infrastructure, to allow the car maker to use that infrastructure and create its own driving policy. That we call DXP, and my colleague and CTO of Mobileye, Professor Shai Shalev-Shwartz, is going to give a full, detailed presentation of this. This is an example of universal and unique for the driving policy.

Things like facts are really universal. Uncertainties are universal. Optimization engines, when you want to optimize a trajectory, this is also universal. In terms of unique: all the driving decisions, the lateral planning, the control, the HMI. So you can really separate what is universal and what is unique. And the dividing line is: think about when things happen, what you want to do, and then how you want to do it. The how is really unique, and in between those is the platform that we built.

This platform allows the car maker to build a driving policy as if it had done it from scratch, yet not deal with all the universal, complex, AI-driven elements that are required in the universal part. We call it DXP, the Driving Experience Platform, and it is an operating system, tools, and abstractions that allow the OEMs to really control the driving experience and allow Mobileye to scale, because then we build one system that fits all, with the ability of the customer to customize. It's like when you're writing an app on an iPhone: you don't need to reinvent the entire operating system on the iPhone in order to write the app. It's very, very similar to that.
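As a hypothetical sketch of that universal-versus-unique split (invented class and method names, not the actual DXP API): a universal engine owns the facts, uncertainty handling, and safety floor, while the OEM supplies the "how" through a small set of hooks.

```python
from abc import ABC, abstractmethod

# Hypothetical illustration of the universal/unique split: a universal policy engine
# enforces an OEM-agnostic safety floor and planning plumbing, while the OEM's code
# decides the "how" through hooks. Names are invented; this is not the real DXP API.

class OEMDrivingStyle(ABC):
    """The 'unique' part: taste and brand-specific driving decisions."""

    @abstractmethod
    def min_gap_seconds(self) -> float: ...

    @abstractmethod
    def allow_lane_change(self, gap_seconds: float, speed_kph: float) -> bool: ...

class UniversalPolicyEngine:
    """The 'universal' part: OEM-agnostic facts, uncertainties, and safety checks."""

    SAFETY_FLOOR_GAP_S = 1.0  # think RSS-style constraint, never overridable

    def __init__(self, style: OEMDrivingStyle):
        self.style = style

    def consider_lane_change(self, gap_seconds: float, speed_kph: float) -> str:
        if gap_seconds < self.SAFETY_FLOOR_GAP_S:
            return "reject: below universal safety floor"
        if gap_seconds < self.style.min_gap_seconds():
            return "reject: below OEM comfort threshold"
        if self.style.allow_lane_change(gap_seconds, speed_kph):
            return "plan lane change"
        return "hold lane"

class RelaxedBrandStyle(OEMDrivingStyle):
    def min_gap_seconds(self) -> float:
        return 2.5

    def allow_lane_change(self, gap_seconds: float, speed_kph: float) -> bool:
        return speed_kph < 110  # this brand avoids lane changes at very high speed

engine = UniversalPolicyEngine(RelaxedBrandStyle())
print(engine.consider_lane_change(gap_seconds=3.0, speed_kph=95))  # plan lane change
print(engine.consider_lane_change(gap_seconds=1.5, speed_kph=95))  # reject: OEM comfort
```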

So this satisfies both our scale desires and the OEM's desire to really control the driving experience. So if I'm summarizing here: Mobileye's product vision is becoming a reality. As you saw, there's big traction for SuperVision. If last year we were talking about 2 million units booked for life, now it's 3.6 million units. Chauffeur: last year we had a line of sight to winning Chauffeur. This year we have already won Chauffeur on at least nine, in fact 12, car models, with 600,000 cars in the pipeline over their lifetime. So all of this is really coming to reality.

In terms of the how: how do we leverage the latest AI, transformers, self-attention mechanisms, autoregressive models, in our business? I walked you through this end-to-end system and how an end-to-end system should be done right. And in our business, leveraging the maps creates a very accurate way of using the most advanced AI tools that exist today. And then, in order to create the MTBF that we need, you use redundancy. So we call this "end-to-end done right." And then last is DXP. How do we scale? These are very complex systems, SuperVision and Chauffeur and Drive, and if we need to customize the system for every car model, then, well, this will block scale. We'll not be able to...

It's not like the front-facing camera business. This is much more complicated. So the idea of how to allow us to scale, yet serve the car maker's desire to fully control the driving experience, is something that took a while. And, you know, it really sank in the first time we were working on the EyeQ6 platform with a customer. We received a customer request for 12,000 tunable parameters. We said: "How are we going to do this?" 12,000 tunable parameters only for this customer. What's going to happen with another customer who wants another 12,000 tunable parameters? And then we found out that 12,000 are not enough. The car maker didn't see through the entire complexity. It's not enough, right?

So this was the trigger for us to develop, you know, this platform: to allow us to scale, rather than customizing for every car model and failing to scale properly, and yet give the carmaker what they need in order to build a fully customizable system. So I think DXP is going to turn out to be a major technological driver going forward, just like REM was a major technological driver and like RSS was a major technological driver. So I'll end here. Thank you very much.

Dan Galves
Chief Communications Officer, Mobileye

Thanks, Amnon. We don't have time for many questions, but maybe one or two. Emmanuel?

Emmanuel Rosner
Lead Autos and Auto Technology Analyst, Deutsche Bank

Hi, Emmanuel Rosner from Deutsche Bank. Thanks so much for the presentation. So you've had considerable breakthroughs with the technology, considerable traction with customers, yet it feels like the timeline sort of gets pushed out a little bit. I think a lot of automakers have taken more time than expected, maybe to make some sourcing decisions and certainly to ramp up volumes. Can you talk a little bit about how those conversations are going, and what is essentially taking a little bit more time than expected initially?

Amnon Shashua
President and CEO, Mobileye

Well, I don't think it's taking more time than expected, but I agree with you, it's taking time. Because these are complex systems, and these are expensive systems. You may think a few thousand dollars is not expensive, but for a carmaker, it's very, very expensive. So it takes time to go through with the potential carmakers in order to get design wins for these types of systems. But a year ago, we had two car models, the Zeekr 001, and then at the end of the year, you know, the Zeekr 009 that came out. Today, we're reporting 30 car models. So I think we're progressing quite well. And it's 30 car models across five OEM groups, or about 10 OEMs, right?

If we count Zeekr as an OEM, which it is, right? Smart and Volvo and Polestar and Porsche. The Western OEM is a group; it's a number of OEMs. So it's more than 10 OEMs in the course of a year.

Dan Galves
Chief Communications Officer, Mobileye

Multiple geographies-

Amnon Shashua
President and CEO, Mobileye

And multiple geographies.

Dan Galves
Chief Communications Officer, Mobileye

... and multiple powertrains.

Amnon Shashua
President and CEO, Mobileye

Multiple powertrains. I think we did well.

Dan Galves
Chief Communications Officer, Mobileye

Okay. Maybe right over here, please.

Dan Levy
Senior Equity Research Analyst, Barclays

Hi, there. Dan Levy, Barclays. I think with this DXP you're addressing one of the constraints, which was that some automakers have wanted to own the experience, and that's maybe why they've been reluctant to engage with you on SuperVision or some of the more advanced platforms. What is the forcing mechanism for these automakers that have so far taken a very committed approach to owning the experience, with very, very large software teams and a lot of resources poured in, to change their approach and engage with you? Is it just the realization that for them to scale, it's just too difficult and they have to look at alternatives?

Amnon Shashua
President and CEO, Mobileye

I think at the end of the day, right, you need to satisfy a number of axes, optimize a number of axes. One is time to market, another one is performance, another one is cost, right? If your in-house development doesn't satisfy those three axes, you'll not be able to compete with your fellow carmakers. And DXP really resolves this remaining tension: okay, I want to have very good time to market, I want to have the best performance and the best cost, but I want to control the driving experience. I want to own it. And DXP allows the carmaker to really own it. There's no reason for the carmaker to write perception code. There's no differentiation there.

Perception needs to work well because it's safety-critical, and if you have a supplier that has built perception, then why reinvent the wheel and redo it? The validation of this is incredibly difficult, time-consuming, you know, costly, and getting it right has proven to be very, very difficult. This is why Mobileye has been in this business for 25 years, right? If we were doing something easy, we would not exist today, right? But the carmaker wants to control the driving experience. We acknowledge that, and DXP provides that in a comprehensive and complete way. It satisfies the desire of the carmaker to own the complete driving experience.

Dan Galves
Chief Communications Officer, Mobileye

Thanks, Amnon. Guys, I think that's all we have time for today. We're really tight on time. Please come back for Shai's talk at 1:00 P.M. tomorrow. Thank you.
