Mobileye Global Inc. (MBLY)

CMD 2024

Dec 9, 2024

Dan Galves
Head of Investor Relations, Mobileye

Before we begin, please note that discussions in this program contain forward-looking statements based on the business environment as we currently see it. Such statements involve risks and uncertainties. Please refer to the presentations accompanying these discussions and Mobileye's periodic reports and other filings with the U.S. Securities and Exchange Commission. In particular, see the sections therein entitled "Risk Factors," which include additional information on the specific risk factors that could cause actual results to differ materially. Without further ado, let me introduce our CEO, Professor Amnon Shashua. Thanks.

Amnon Shashua
CEO, Mobileye

Hello, everyone. Hope you had a good experience with the different drives. So the way we broke it down, I'll take the first part, which is more high level, and then Shai will do a deeper dive into the technology stack. So first, a kind of bird's-eye view, the lay of the land. I put here three prototypical approaches: Waymo, Tesla, Mobileye, and tried to articulate what the real fundamental differences are. There are lots of commonalities, but what are the real fundamental differences? So if you look at the Waymo approach, it's really LiDAR-centric. The LiDARs are very high quality. Those are LiDARs that were developed in-house in order to meet the kind of spec that they require. So it's very high quality. But there are also many other sensors: stereo cameras, many other cameras, infrared cameras, radars, and so forth.

But the approach is LiDAR-centric. The AI approach is a compound AI system. There is lots of AI, but it's systems that are glued together, just like ChatGPT is a compound AI system: you ask a question that requires a tool, and then a piece of Python code is generated to do the calculation for you. So it's not neural in that sense; it's being able to use tools. This is known in the community as a compound AI system. For cost, we put here an X because we're thinking about a consumer vehicle. The cost is too high for a consumer vehicle. It is appropriate for a robotaxi, but it's too high for a consumer vehicle. Modularity, can you break it down into smaller ODDs, down to driving assist? No, you cannot. This is one system. Geographic scalability, we put here a question mark.

They need to map every area where they operate, and they're quite efficient in mapping, so this is why we put a question mark. It is deployed as level 4, so there is deployment as level 4. Let's look at Tesla. Tesla is camera only, and they're very, very consistent about it. Just cameras. The AI approach is end-to-end. At least this is what they claim. End-to-end meaning images come in, control of the vehicle comes out. The cost is very relevant for consumer cars. Modularity, they can break it down into lower ODDs, and geographic scalability is good. It is deployed as level 2+, so we're talking about an eyes-on system, and their claim is that cameras alone will be able to achieve autonomy.

As we go through this presentation, the question is, what does it take to reach success? What does it take to solve autonomy? The Mobileye approach is similar to Tesla's in that it's camera-centric. That means we double down on processing cameras. It is a compound AI system. The cost is relevant for consumer cars. We are in the consumer car business, and also in the robotaxi business, but primarily in the consumer car business. We have geographic scalability, also with the aid of REM, which is crowdsourced. Now, what is the difference in our approach to how to solve autonomy? Our point is that in order to reach the very high mean time between failures that you need, or the robustness that is needed in order to be eyes-off, we need to introduce redundancies.

We introduce redundancies in many, many areas, but let's talk about redundancy in sensor modality. We said we need to introduce another sensor that has the density of the camera, so we can see very fine details in the scene, and that has quite different failure modes from the camera. LiDAR, for example, has lots of commonality with camera failure modes, especially in bad weather conditions, wet roads, rain, and so forth. Now, there's no such sensor today, because radars do not have the density of cameras. So this is why, back in 2018, we started to develop the imaging radar, which will be ready at the end of this year, with start of production in 2025. I'll show a few slides about that. That will give us the redundancy that we want on top of cameras.

And then you can add a front-facing LiDAR to get triple redundancy. But really, the heavy lifting will be done by the cameras and the imaging radar. And the imaging radar will have the same stack as the cameras from end to end: radar data coming in, sensing state coming out, and control. So what we're saying is that in order to solve autonomy, we need to reach very high levels of MTBF that no one in consumer cars has reached. And the way we think of doing that is, while we're doubling down on cameras, if it's possible to reach it only with cameras, we will do that. But we believe that to be practical, you need to introduce another type of sensor, and it has to be low cost. Otherwise, it's not relevant to consumer cars. So those are kind of the three.

There's LiDAR-centric, camera-only, and camera-centric, where we're adding an additional high-density, low-cost sensor, which is the imaging radar. Now to the question of what the requirements for success are to go from eyes-on to eyes-off, where eyes-off is essentially the quest of solving autonomy. We put here five pillars: productization; scalability, both geographic and operational design domain; the technology stack, which is not only the software but also the silicon, the hardware-in-the-loop infrastructure, the training infrastructure, sensors like the imaging radar, and so forth; cost, because at the end of the day these are systems that should be in consumer cars, so cost matters a lot; and safety. Three, four weeks ago, we published a safety report.

It looks like an academic paper, but it's really the principles of our safety approach, and there are some nuances that I will share with you in this slide deck, and Shai will go a bit deeper into it. So let's go pillar by pillar. In terms of productization, it is really the challenge of going from a demo car to a volume production car. There's a lot involved. It's kind of a death valley, going from a car that is used for demo purposes to a technology that is in volume production, with many brands and many OEMs. You have to meet requirements. You have to meet the level of robustness that goes into production. So we divide it into three. One is kind of geographic scalability. We're working with more than 50 OEMs.

The data that we get is very diverse coming from all over the world. REM already covers over 95% of roads in the U.S. and Europe. We have more than 200 petabytes of data worldwide. So it's not data coming from a certain geographic location. It's really worldwide. I'll have a slide about that in a moment. Multiple car models and OEMs. We're not talking about building a technology that is for one brand, one car model, and one car maker. We're talking about technology that should go on multiple brands, multiple car makers. So for example, car makers have requirements. We cannot come to them and say, look, this is a black box. You take it or leave it. They have requirements. And some car makers have tens of thousands of requirements.

So we built technology like DXP, which we presented at CES, that allows the car maker to tune the entire driving experience on top of our stack. The compound AI system is a modular stack that allows us to adapt: for example, we train the system on a certain LiDAR, and the car maker wants another kind of LiDAR; we train the system on a certain camera configuration, and the car maker wants to add another camera or shift the position of a camera. We need the flexibility to adapt to changes without creating another moonshot development. So this is the multiple car models part. Then, meeting industry standards. Everybody talks only about MTBF, but industry standards go way beyond that. There is FuSa, functional safety. There is SOTIF. I'll mention that later when I talk about our safety. Then there is RSS. Then there is fail-safe.

How do you make sure that if you have a hardware failure, you can still operate and do a minimum risk maneuver? So safety is not just counting miles between interventions. It's something much more comprehensive that you need to handle. You need to convince car makers that your approach is acceptable to them, adapt to their requirements and their ideas, and build a comprehensive safety protocol. Still in the productization category: for example, what you have driven today, the Zeekr and Polestar SuperVision systems on EyeQ5. The next generation on EyeQ6 is coming in the 2026 timeframe with Porsche and Audi, and Nimrod will talk about all the engagements we have with additional customers. Today, there are about 300,000 of these vehicles in China.

Polestar is now ramping up in Europe, a few thousand cars every month. Then hopefully later in the year, they'll also start ramping up in the US. So this gives us a lot of productization experience: hundreds of thousands of vehicles with a piece of very advanced technology. Pre-production, what you have seen in the garage: the SuperVision 6.2 based on EyeQ6, the Chauffeur 6.3, the eyes-off system with three EyeQ6, the Drive 6.4 for the robotaxi. All of that is in pre-production. Second pillar, scalability. Here we divide it into two: geographic, and the operational design domain. In terms of geographic, we would like the system to operate across very, very diverse regions. So, for example, what differs from region to region?

Roads look the same, but semantic data, traffic signs, traffic lights, there are lots of variations from region to region. Traffic laws vary. Public transportation routes, and so forth. There is lots and lots of semantic data that changes from region to region, and you want to quickly adapt to it and not retrain for every region that you go to. This is one type of scalability. Human driver behavior, kind of the culture of driving, changes from region to region. You can still drive safely, but with a different culture of driving. It's not a matter of safe or not safe. And that changes quite radically from region to region. And the laws, the regulations, are also different from region to region. So you need to adapt to them. All of that is what we call geographic scalability. On the right-hand side, operational design domain.

We would like to expand the ODD to cover all road types. For example, in an eyes-off system, we're starting with highways. The industry is starting with highways. But later we would like to move to primary roads, secondary roads with protected turns, unprotected turns, and so forth. So the road types, the driving conditions, nighttime, daylight, rain conditions, snow. You have to determine what kind of ODDs you want to support and how you expand. The weather conditions, I mentioned speed limit. You'd like to be in a useful speed limit, at least 130 kilometers per hour. How do you support that? What kind of sensors do you need to support that? And I'll show you some examples of what we do with the imaging radars. So a useful system that operates effectively in all those conditions and not just focused on a very narrow optimal use case.

It has to be quite diverse. So, for example, the geographic scalability of REM, of our mapping. As you see, the amount of data that we get every quarter goes up. To date, we have 56.6 billion miles harvested. It's a number that's difficult to comprehend in terms of the amount of roads that are covered. In 2024, about 30 billion miles were harvested. Every day, we collect around 60 million miles of data. So this is a huge machine, a huge endeavor of being able to build maps at scale. And here is an example. We're starting from a very close view. This is what a map looks like. And zooming out, zooming out, zooming out, you get all of Europe. The white areas are the Alps, if you're asking. So as you see, this is very wide coverage.
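As a quick sanity check on the REM figures quoted above (around 60 million miles per day, about 30 billion miles in 2024, 56.6 billion to date), the numbers are mutually consistent only if the daily rate grew over the year. A back-of-the-envelope sketch using only the quoted figures:

```python
# Back-of-the-envelope check of the REM harvesting figures quoted in the talk.
# All numbers come from the talk; this only verifies they are mutually consistent.

daily_miles = 60e6          # ~60 million miles harvested per day (quoted rate)
annual_2024 = 30e9          # ~30 billion miles harvested in 2024 (quoted total)
total_to_date = 56.6e9      # ~56.6 billion miles harvested to date (quoted total)

# At the quoted daily rate, a full year would yield:
implied_annual = daily_miles * 365
print(f"Implied annual miles at current rate: {implied_annual/1e9:.1f}B")  # ~21.9B

# The 2024 total (~30B) exceeds this, implying the daily rate grew during the
# year; 2024 alone accounts for over half of all miles ever harvested:
print(f"2024 share of total: {annual_2024/total_to_date:.0%}")  # ~53%
```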

Next is the technology stack. Most of it will be covered by Shai, especially the compound AI system. But I'll cover a bit of the EyeQ, the imaging radar, and the hardware-in-the-loop. So let's look at the technology stack. First, generation one. Generation one, again, we're talking about this quest of solving autonomy. It's not ADAS. Generation one is the EyeQ6, what you saw in the garage with the pre-production vehicles with the Porsche and Audi and the ID. Buzz. The EyeQ6 has the capacity we need to solve autonomy. One EyeQ6 for Surround ADAS, two EyeQ6 for SuperVision, three EyeQ6 for the Chauffeur level 3, four EyeQ6 for Chauffeur level 4, and also for the Drive system. Also, the Mobileye Imaging Radar. We call it BSR. This is the front-facing one, and we also have a corner radar.

In generation two, the EyeQ7, the purpose is cost reduction. A system that today has two EyeQ6 will have one EyeQ7, and so forth, so this will give us cost reduction. The radar will also have a next generation, which we call CSR, which will also reduce cost. Not that the BSR is so expensive. It's a few hundred dollars, but still, you always strive to reduce cost. Now a few words about the EyeQ6, because there is a very thoughtful design behind it, which brings us to very high efficiency. What do I mean by high efficiency? You can think of efficiency versus flexibility as two axes, where the CPU on the right-hand side gives us low efficiency, but it's general purpose. You can write any kind of code on a CPU.

On the other hand, fixed-function silicon is very special purpose, but very high efficiency. And the GPU is in between. What the EyeQ6 has is a number of accelerator cores. We have CPUs by MIPS; this is RISC-V. We have special-purpose accelerators like the XNN. And we have all those in between. Now, in order to understand the efficiency, let's look at the following. This is a comparison between EyeQ5 and EyeQ6. On the right-hand side is the kind of measure that people like to use, which is tera operations per second, TOPS. An EyeQ5 has 16 TOPS, and an EyeQ6 has 34 TOPS. So twice the TOPS.

But if you look at the workloads, pixel labeling for example, and you measure the frames per second that you can support with a particular workload, you see a factor of 10 between EyeQ5 and EyeQ6. Because it's not just TOPS. It is the way those accelerators span this flexibility-versus-efficiency space. If we take ResNet-50, which is a standard convolutional net, and run it on NVIDIA's Orin chip: in terms of TOPS, Orin is a factor of 8 over the EyeQ6, but if you look at the measure of frames per second, it's only a factor of 2. So this is just a point to make that TOPS is not the right measure of the efficiency of a chip.
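Using only the ratios quoted above (EyeQ5 at 16 TOPS versus EyeQ6 at 34 TOPS with roughly 10x the frames per second, and Orin at roughly 8x the EyeQ6's TOPS but only about 2x its ResNet-50 frame rate), a short sketch shows why FPS-per-TOPS, not raw TOPS, is the more meaningful efficiency measure. The FPS values below are normalized to EyeQ5, not real benchmark numbers:

```python
# Illustration of why TOPS alone is a poor efficiency measure, using the
# ratios quoted in the talk (the FPS values are normalized, not real benchmarks).

eyeq5 = {"tops": 16, "fps": 1.0}        # baseline workload throughput
eyeq6 = {"tops": 34, "fps": 10.0}       # ~2x the TOPS, but ~10x the FPS
orin  = {"tops": 34 * 8, "fps": 20.0}   # ~8x EyeQ6's TOPS, only ~2x its FPS (ResNet-50)

for name, chip in [("EyeQ5", eyeq5), ("EyeQ6", eyeq6), ("Orin", orin)]:
    # Frames per second delivered per TOPS of raw compute:
    print(f"{name}: {chip['fps'] / chip['tops']:.3f} FPS per TOPS")
```

On these quoted ratios the EyeQ6 delivers roughly four times the frames per second per TOPS of the larger chip, which is the speaker's point about trading raw TOPS for a better-matched mix of accelerators.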

We built a chip that is very efficient for what we need for running computer vision, for running an AI stack, and it doesn't need very, very high TOPS. We trade TOPS for more different types of accelerator families. The imaging radar: I'm not going into the spec, but the spec of this particular imaging radar is off the chart. There is no radar that is even close to it, not in production and not even in design. And there are two types: front-facing, which is the BSR, and the corner radar. Now, we've been working with OEMs to test the radar. Our vision of using this radar is as part of a bundle. It's not necessarily that we are eager to be in the radar-selling business, but it's part of a bundle of Chauffeur and Drive.

But we're working with OEMs to understand what their specifications are for the use of an imaging radar. And you see here two years of testing with a particular OEM. You see the type of things that are being tested. And the real sweet-spot use case is hazard detection. If you're driving 130 kilometers per hour, you need to see even a small object. It could be a piece of a log, or it could be a person lying on the road head-on, not lying laterally, but lying longitudinally, head-on to you. So you only see the head or the foot. So it's very, very small, and you need to be able to see it well beyond 100 meters away, 150, 160, even more. And this is very, very challenging. A camera can do it.

The question now is, what would be the additional sensor for redundancy that will be able to do something like this? So here's an example with a dummy of a person lying head-on. This is 8 meters away, and you can see the detection down here. And as you go backwards, you see this box 100 meters away. You go backwards again, 220 meters away. So we can detect this at 220 meters. And this comprehensive set of tests over the last two years has proven that our radar is really off the charts. And this is going to be our vehicle to combine with cameras in order to solve autonomy, in order to get to this very high mean time between failures. So it'll be used for redundancy.

It has the density of a camera, but because it's based on Doppler radar, it has a completely different set of failure points than cameras. This is why it's perfect for redundancy. It has the density of cameras, high accuracy. We can also replace flash LiDARs, because it starts detecting from a few centimeters up to hundreds of meters. And it's low cost. Low cost meaning that if you take five of these radars, front and four corners, it's $1,400-$1,500, something like that. Again, when you're talking about consumer cars and providing them an eyes-off solution, it is very, very reasonable. We're not talking about each sensor being thousands of dollars. Another part of the infrastructure, again still in this pillar of the technology stack, is hardware-in-the-loop. The question is, what kind of offline infrastructure do you need in order to support your development? It's not only training.

Here, for example, is validation. We want the ability to do a real-time playback of 10,000 hours of driving overnight, so we built a hardware-in-the-loop system of 1,000 ECUs. So it's 1,000 SV6.2 units. They have their own memory, such that you can feed in 10,000 hours, and then overnight you get the results of 10,000 hours of driving: whether there are false alerts, whether there are misses, and so forth. So this is a very significant infrastructure in order to be able, within a day, to revalidate systems, make changes, revalidate again, and be able to support multiple car makers. That's the point. In addition, Shai will cover what we do with the transformers, what we do about ground truth, safety goals, and our latest view of how to reach the safety requirements for an eyes-off system. Next pillar is cost.
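To put the hardware-in-the-loop numbers above in perspective, a quick back-of-the-envelope calculation using only the figures quoted in the talk:

```python
# Back-of-the-envelope math for the hardware-in-the-loop farm described above:
# 1,000 ECUs replaying 10,000 hours of recorded driving in real time.

total_hours = 10_000   # recorded driving to revalidate per run
num_ecus = 1_000       # ECUs in the HIL farm
hours_per_ecu = total_hours / num_ecus   # real-time playback => wall-clock time

print(f"Each ECU replays {hours_per_ecu:.0f} hours, so a full revalidation "
      f"fits in one overnight run")

# Without the farm, the same real-time replay on a single ECU would take:
print(f"Single ECU: {total_hours / 24:.0f} days")  # ~417 days
```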

If we look again at this product portfolio, what I put down here is system cost. It's not revenue to Mobileye. It's system cost. For example, when you look at Base ADAS, depending on the bundle, it's $100-$150. Surround ADAS is a front camera, a rear camera, and four parking cameras, so six cameras. In some cases, there are five radars; in some cases, there is only a front-facing radar. It's one EyeQ6. It's about $700-$800. Then we're talking about SuperVision, which has two EyeQ6. It's somewhere between $2,000-$2,500. It's a system price, including the cameras, cables, everything. Then, when we make a jump to the right, Chauffeur Level 3, three EyeQ6, it also has LiDAR and multiple radars. This is about $4,500. Then we add a front imaging radar to get the ability to handle more use cases.

So the idea of eyes-off is that you can do something else, say, work on your smartphone, and then with a grace period of, say, 10 seconds, a matter of choice of the OEM, you'll be asked to take control. So say, for example, the car is approaching a construction area. The system may tell you, "Now, please be eyes-on." And there is a camera looking at you to make sure that you are complying with the requirement. And if you do not comply with the requirement, the car will do what is called a minimum risk maneuver: move to the side and stop. So what would be the difference, say, with an imaging radar and without an imaging radar?

So with an imaging radar, we can have more redundancies and therefore also support a construction area eyes-off. So you'll be able to be mind-off, not just eyes-off, for an extended period of time within the ODD, say, within highway driving. The system will not pester you: now there is a construction area, be eyes-on; now there is another area of uncertainty, be eyes-on. There's a much, much lower frequency of asking for your intervention, and this requires an imaging radar. And if you add surround imaging radars, you can support more road types. And you see the price going from $4,500 to $5,000 to $6,000. And these are system costs. It's not our revenue, and we need to make money. So imagine how tight everything is. You cannot put in a monster computer.

You cannot put in sensors that are just nice to have. You have to be very, very tight on cost if you want to meet these price levels. Safety. So what is our view of safety? Normally, what is communicated or articulated in terms of safety is mean time between failures, or mean time between critical interventions. And is this sufficient on its own? Is this just one parameter that I need to optimize, a mean time between critical interventions, and if I pass the bar of whatever that number is, then I'm good enough? What we're saying is that it's a bit more complicated. Why is it a bit more complicated? First, there is this idea of what is called unreasonable risk. Say, for example, there is an event which is extremely rare. It could happen with an infinitesimal probability.

Say, for example, a child or baby lying on a highway. Probably this has never happened in the past 20 years. But if a human driver sees a child lying on the highway, they will take action. They will not say, "This is a rare event. It is above the MTBF that is required, and therefore, I do nothing, and I simply run over the child." So this is what is called unreasonable risk. It's not just MTBF. You cannot hide behind MTBF when the risk is unreasonable. You can also define what is a reasonable risk. Say, for example, being able to handle two flat tires simultaneously. You can say that this is an event that is a reasonable risk: I'm not going to handle it. I'll handle one flat tire, but not two flat tires simultaneously. Of course, you have to articulate this to the regulator and get acceptance.

But this is an area where you can say this is a reasonable risk. Second, when you compare to human driving, it's not just apples to apples, because human driving statistics are influenced by illegal activity, like driving under the influence, and so forth. A machine is not expected to drive under the influence and is not expected to lose attention. So it's not really apples to apples. So the system goal, as we state it, is: first, there should be no unreasonable risks, which I'll define in a moment. And second, the overall MTBF of the system should be at least as good as humans'. It should be way better, but at least as good as humans'. And now, how do you build a safety case around this? It's not just measuring mean time between interventions.

And this is where the safety paper that we published a few weeks ago comes in. Shai will give more details. But we say, in terms of unreasonable risk: unreasonable risk in planning is misinterpreting the intentions of other road users. This is covered by RSS, which we published in 2017. Then we talk about identifiable errors. These could be hardware errors. This could be, for example, a camera malfunctioning, or the power regulator of the board malfunctioning, a hardware failure. So it's something that I need to be able to identify. And software bugs, like a memory corruption. So what are the redundancies? This is FuSa. This is fail-safe. What are the redundancies I need to have, and what is the probability of occurrence that is allowed by the standards?

For example, FuSa talks about 10 to the power of 8 hours of driving. This is the ASIL D requirement, or ASIL B(D), that is required. Next is reproducible errors. This is something that I found out I can reproduce in the lab. Once I find a reproducible error, I need to fix it. Next, I need to be open about what is unreasonable risk and what is reasonable risk. In the kind of example that I gave, a baby lying on the road is an unreasonable risk; two flat tires is a reasonable risk. So I need to be open about it. This is the standard called SOTIF. SOTIF talks about all of that: how you identify all those unreasonable risks and reproducible errors. Next come the black swans.

Something that is rare, unexpected, that sometimes you cannot predict and cannot reproduce. Let's call these AI bugs, machine learning bugs. It could be a stray light that did something to your neural network and created a fault, a missed detection or a false detection, and you cannot reproduce it. These are what we call in the community edge cases. So this is where you would like to make sure that those black swans' frequencies are as small as possible, and this is the MTBF. This is what you want to say: the mean time between interventions caused by a black swan should be at a certain level. And we achieve that through redundancies. It's redundancy in sensor modalities. It's redundancy in different sets of algorithms. And the principle of this redundancy is kind of two out of three.

A failure must involve at least two subsystems. If one subsystem fails, you can overcome it. Only if two subsystems fail is this deemed a failure. And it's a bit tricky, because when you think about redundancy, you think about binary cases: there is a vehicle in front of me, should I brake or not brake? But there are many subsystems that are not binary, like lateral decisions. It's not black and white. You need to make a kind of smoother transition. So how do you create redundant systems there? This is what PGF, the primary-guardian-fallback scheme, addresses. And Shai will provide more details. So if I want to summarize here: safety is more than just mean time between interventions. There are certain standards, like FuSa, like SOTIF, like RSS.
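The "two out of three" principle can be made concrete with a small probability sketch. The per-hour failure probability below is purely illustrative, not a Mobileye figure, and the subsystems are assumed to fail independently, which is exactly what choosing sensors with different failure modes is meant to approximate:

```python
# Sketch of the "two out of three" redundancy principle described above.
# The failure probability is purely illustrative, NOT a Mobileye figure, and
# the three subsystems (e.g. cameras, imaging radar, LiDAR) are assumed to
# fail independently, which different failure modes are meant to approximate.

p = 1e-4  # hypothetical per-hour failure probability of one subsystem

# The system fails only if at least two of the three subsystems fail together:
p_system = 3 * p**2 * (1 - p) + p**3

print(f"Single subsystem MTBF: {1/p:,.0f} hours")
print(f"Two-of-three MTBF:     {1/p_system:,.0f} hours")
```

Under these toy assumptions, requiring two simultaneous failures multiplies the mean time between failures by several thousand, which is the intuition behind reaching an MTBF that no single sensor stack can deliver on its own.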

You need to build redundancies into the system in order to reach this very, very high mean time between interventions. And all of that is something very comprehensive that you need to be able to handle. So if I summarize what I said here, in terms of requirements for success: first of all, productization. From demo to real product is kind of a death valley. It's a big transition. And this is something that Mobileye is very, very good at, and your visit to the garage gave a taste of what it means to execute these kinds of programs. Scalability, in terms of data, in terms of maps, in terms of ODD, in terms of being able to support multiple brands and multiple OEMs without creating a moonshot from brand to brand, from OEM to OEM.

The cost, both the development cost and the system cost, needs to be very, very tightly controlled if you want to gain market share. Because at the end of the day, success is translated to market share and top-line, bottom-line growth. Everything else is noise. So in order to gain market share, cost is key. The technology stack. What's behind the technology stack? It's not only the AI. It's the hardware infrastructure, the silicon, the imaging radar, and so forth. And Shai will go a bit deeper into compound AI systems. And then safety. Safety is a big area. It's not that you have a black box and just measure mean time between interventions, and you pass the bar, and you say everything is fine. It's way more than that.

And we have been very thoughtful about it throughout the years, and our maturity of thinking has evolved, because as we practice building autonomy, we confront these issues and create a comprehensive theory around them. So safety is also an area that you need to crack in order to start gaining market share. I'll end here and pass the baton to Shai. And we'll have a Q&A later.

Shai Shalev-Shwartz
CTO, Mobileye

All right. So we're going to talk about the ideas and the thoughts behind the technology stack of Mobileye. How do we tackle things? How do we tackle the problem of solving autonomy? I've been working on machine learning and AI for 25 years now, maybe even more. And I thought I'd start by showing you what's going on in this field. It's really amazing what has happened in the last 25 years.

Let's talk about six AI revolutions that happened in the last maybe 20 years. Four of them happened in the last five years, so we are definitely accelerating. The first one is the machine learning revolution. This is not how things were done 25 years ago. Mobileye was founded in 1999 and was among the first companies worldwide to use this technology. Then came the deep learning revolution around 2012. Again, Mobileye was a very, very early adopter of this technology. Then came the generative AI and the universal learning revolutions. And now we're talking about sim-to-real in robotics and maybe the most exciting thing, which is reasoning. I will not talk about sim-to-real and reasoning today, but I will talk about the first four revolutions, starting with machine learning and deep learning. So what is machine learning? Why is it a revolution?

Before machine learning, when you wanted to tackle some problem, you just had to code it. There was no other way. So if we want to detect, say, vehicles in order to know how to avoid them, we need to somehow program our system to know what a vehicle is. And it's pretty tricky to say in code what exactly the meaning of a vehicle or a pedestrian or any hazard is. So the machine learning revolution says, "Okay, let's collect data." Instead of coding into the machine exactly what to detect, let's collect data and have the machine figure out by itself what a car is, what a pedestrian is, what a hazard is. In the first revolution, the features were still coded, hand-tuned features, but the rest was learned automatically.

Let's collect a lot of data and have automatic algorithms figure out what's a vehicle and what's a pedestrian. And Mobileye did this among the first; even EyeQ1 and EyeQ2 already applied machine learning algorithms. Then, in 2012, came the deep learning revolution, which made a big step forward. It says, "You know what? Forget about creating the features by hand. Let's have the machine look at pixels. Start from pixels and figure out what a car is, what you see in the image." It started with a seminal paper in 2012. Already in 2013, we had deep learning algorithms running at Mobileye, and they were in production already in 2014 on EyeQ3. So I think we were the first worldwide to adopt deep learning on an embedded system. Now come the next steps.

While I could explain the first two steps with one slide, it will take around 20 slides to explain the next steps, so I will walk you through this. It's a bit complicated, but I'll try to make it as accessible as possible. By the way, what I'm going to describe is the technology behind ChatGPT and all the Gen AI revolutions that you see around, so it should be interesting as well, I hope. All right, let's go back to the deep learning revolution and see how we did object detection with deep learning before transformers, before Gen AI. What you see here is an image with detections pixel by pixel. Each pixel says, "Okay, I'm on some vehicle, and here is the box around the vehicle." So we are working in the image space and finding many, many candidates, and it looks good.

We put rectangles around all the objects, but there are too many of them. We need to do more: we need to find just a single rectangle for each object. This is called clustering and non-maximal suppression, and it takes us from the left image to the image on the right. But this is still not enough, because this is in the image space. What we really need in order to drive the car is to understand where the vehicles are in 3D space, and not only where they are but also what their velocities are. We need another algorithm to go to the 3D world. So while the first step was done by deep learning, the other two steps were not deep learning; they were hand-coded, because there was no technology to do them with deep learning. This is before transformers. Now come transformers, and basically they brought three revolutions.
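Before moving on to transformers: the clustering and non-maximum suppression step above can be sketched as a simple greedy algorithm. This is an illustrative textbook version, not Mobileye's actual implementation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep one box per object: greedily take the highest-scoring box,
    then drop every remaining box that overlaps it too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two overlapping candidates on one car plus one far-away car:
keep = non_max_suppression([(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)],
                           [0.9, 0.8, 0.7])
print(keep)  # [0, 2]: the two overlapping boxes collapse to a single one
```

The 3D-lifting step he mentions next is a separate, hand-coded stage in the pre-transformer pipeline; this sketch covers only the image-space deduplication.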

Three revolutions of generative pretrained transformers. This is GPT: generative, G; pretrained, P; transformers, T. The first revolution is: tokenize everything. What does it mean? It means that we refer to everything as language. I don't know what images are; I only know language. And in language, there are words, so think about tokens as words. There are words that describe language, but there are also words that describe images: some patch of pixels will be a token, a word. So everything, it doesn't matter if it's image, video, or voice, wave signals, everything is tokenized. And then it's a sequence of tokens, which we think about as a language. This is the first revolution. The second revolution is the generative autoregressive approach.

Generative just means that we are not learning to label things, to say, "Is this a car or not?" We are going to generate sentences. Think about what you see in ChatGPT: it generates text. This is what's called generative. Autoregressive means one token at a time. It first outputs the first word; after that, it outputs the next one and the next one and the next one. This is called autoregressive. This is the second revolution. The third revolution is how to do it. This is done via a specific neural network architecture called transformers. The paper that came out with this idea from Google Research is titled "Attention Is All You Need," because there is a specific structure which is called attention.

Together, these three properties brought the GPT revolution. Now I will elaborate on what these components are and why this is such a big deal. Starting from the first one, tokenize everything. Let's take again the problem of detecting vehicles in an image, like we talked about before, but think about how to tokenize it. We are going to take the image and divide it into small patches, and each small patch will be a token. Now I forget that the image has a two-dimensional structure; I just have patch, patch, patch, patch of image. And this is the input. What is the output? I need to define some language to communicate to you where the cars are in this image.

So let's say that I will tell you where the bottom-left corner of the vehicle is and where the top-right corner of the vehicle is, and I will do it for every vehicle. So it will be a sequence of coordinates telling you where the vehicles are in this image. What did we achieve by this? We achieved that we can tackle everything using this approach. This is just one example, but if you think about any task that you want to solve, you as humans can communicate it in some language. And if you can communicate it with some language, think about it as a tokenization process, and now it becomes something that we can learn. The second component of this revolution is the generative autoregressive approach. Before this, we only knew how to classify things into a few options.
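The tokenization just described — image patches in, corner coordinates out — can be sketched as follows. The patch size and image size here are made-up illustrative numbers, and a real system would embed each patch through a learned network rather than flatten raw pixels.

```python
import numpy as np

def image_to_patch_tokens(image, patch=16):
    """Cut an HxWxC image into flat patch tokens. The 2-D structure is
    deliberately forgotten: what remains is just a sequence of tokens,
    like words in a sentence."""
    h, w, c = image.shape
    patches = [image[i:i + patch, j:j + patch].reshape(-1)
               for i in range(0, h - patch + 1, patch)
               for j in range(0, w - patch + 1, patch)]
    return np.stack(patches)  # shape: (num_tokens, patch*patch*c)

def boxes_to_token_sequence(boxes):
    """Output 'language': for each vehicle, emit its bottom-left and
    top-right corner coordinates as a flat sequence of number tokens."""
    seq = []
    for (x1, y1, x2, y2) in boxes:
        seq.extend([x1, y1, x2, y2])
    return seq

img = np.zeros((64, 64, 3))
tokens = image_to_patch_tokens(img)              # 4 * 4 = 16 patch tokens
seq = boxes_to_token_sequence([(5, 40, 20, 10)]) # one vehicle -> 4 tokens
print(tokens.shape, seq)  # (16, 768) [5, 40, 20, 10]
```

The point is the interface: once both the input and the desired answer are sequences of tokens, detection becomes a translation problem in this artificial "language."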

Now we say there is no classification. We just learn a probability over what is going to come next, and we do it step by step. Every time, we say what's going to come next, and then we say, "Okay, let's sample from this, get the next token, and produce another one." Why is it good? First, because it allows self-supervision. We don't need labels; we just want to learn what's around us. If we want to learn text, let's just look at the entire internet, token by token by token. Then we don't need labels. If we want to learn images, let's look at YouTube: just look at many, many videos and figure out what's coming next. This is how we learn.

The second advantage is that we get handling of uncertainty built into the method, because we never learn definite things, "this is a car, this is not a car." We just learn the probability that what comes next is something. This property is very, very useful, and it also has some pitfalls that I will explain later. So why does this enable many, many things? Let's again think about the problem of detecting vehicles, and here we have four vehicles. Suppose that we have four coordinates to describe each vehicle. Overall, we need 16 numbers, or 32 actually, because we need both X and Y. So 32 numbers that we need to communicate. If we have, say, 100 possible values for each of them, then the number of possibilities is 100 to the power of 32, which is huge. We cannot just tackle it directly.

In autoregressive, we say, "Let's decompose it. Let's walk step by step. Let's first bring out the first coordinate." Then the dimension is very, very small, and we continue like this. So again, it enables the machine to learn everything, because even if your output space is very, very complex, if you handle it one step at a time, it becomes easy. The last thing is the transformer architecture. The transformer architecture is simply an architecture tailored for predicting the next token given the previous tokens. Again, we have the context of all that we know so far, and we want to induce a probability over what's coming next. The transformer is a neural network architecture to do that. In order to explain it, I like to think about it as a group thinking process.
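Before the group-thinking analogy, the autoregressive decomposition above can be sketched in a few lines. Instead of modeling 100^32 joint outcomes at once, we model 32 easy steps of 100 options each; the toy "model" below is a uniform stand-in for a real learned conditional distribution.

```python
import random

def sample_autoregressive(next_token_probs, length):
    """Generate a sequence one token at a time: at each step, ask the
    model for a probability distribution over the next token given the
    context so far, sample from it, and append."""
    context = []
    for _ in range(length):
        probs = next_token_probs(context)               # P(next | context)
        token = random.choices(range(len(probs)), weights=probs)[0]
        context.append(token)
    return context

# Toy stand-in model: 100 possible values per coordinate, uniform here.
uniform = lambda context: [1.0 / 100] * 100
coords = sample_autoregressive(uniform, 32)  # 32 coordinate tokens
print(len(coords))  # 32, produced one step at a time
```

A trained model would make each step's distribution depend on the context (the image tokens and the coordinates emitted so far), but the control flow is exactly this loop.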

So suppose that all of us, each one of us is a token. Now all of us are the tokens observed so far. And we want to say something about the next person that is going to enter the room. Now each of us individually knows something about the world, not everything. But together, we have more power. So let's think how we do a group thinking process in order to combine everything that we know individually into something very, very strong. So if you think about it, there will be two steps in this group thinking process. One step is what I call self-reflection. Each one is sitting in his chair and thinking, "Okay, what I already know, let's process it a bit. Now I can articulate it maybe in a better way." So each one individually is doing a self-reflection on what you currently know.

But this by itself is not enough, because we are not doing anything together. How can we do something together? We need to communicate. And how can we communicate? Someone can ask the crowd a question. Some of the people in the crowd will have no clue about this particular question; other people, maybe, will know something relevant, so they will give an answer. Then the person that asked the question can combine the answer that he got in order to be smarter for the next iteration. These two constructs are exactly what's going on in transformers. There is a self-reflection part of the layer of the neural network, and there is a self-attention part of the neural network. Self-reflection is simply each token processing itself. I don't think I need to go too much into the details.

But just remember two numbers, N and D. N is the number of tokens, the number of people here. And D is the size of the embedding, the knowledge of each individual person. So if each of you, sorry for doing this, but if I summarize you as a vector, a D-dimensional vector, this is what you know in the thinking process that we are doing. In self-reflection, each one works individually. This is why the complexity of this layer is linear in N: we process each one in parallel, by itself. So it's linear in N, no higher power of N. But each one thinks hard on its own about the relations between the features that it has. This means quadratic in the dimension of each individual, because each one has many, many features that it knows.

And it wants to combine them, so this gives D squared. The complexity of self-reflection is therefore D squared times N. What's going on in self-attention? It's exactly what I described before, but it all happens simultaneously. It goes like this. Each one of you produces three things. One is a question; we call it a query: what do you want to ask the crowd? The second one is a key. A key is just what type of questions you want to answer: if someone else asks a question, do you want to answer it? This is represented by the key that you have; this is what you know. And the value is the actual answer that you are going to give in case someone asks. So each one of you produces these three vectors, and then we do matching.

We take every query and match it with every other key. If there is a match, then the value is propagated to the right place. This is what's going on in attention. Again, no need to go into the details, but just know this number: N squared times D. Before we had N D squared; now we have N squared D. Why? Because now the communication is between every two individuals, and the number of pairs of individuals is N squared. But the calculation itself is just matching a query and a key, both D-dimensional vectors, so it takes time linear in D. So we got N squared D. The two parts of the network are N D squared for self-reflection and N squared D for self-attention.
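The query/key/value matching he describes is, in code, roughly the following. This is the standard scaled-dot-product attention layer, shown here in a minimal NumPy form with the two complexity terms marked.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """One attention layer over N tokens of dimension D. Each token
    emits a query, a key, and a value; every query is matched against
    every key, and values flow back weighted by the match strength."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # projections: O(N * D^2)
    scores = Q @ K.T / np.sqrt(K.shape[1])     # all pairs:   O(N^2 * D)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax per query
    return weights @ V                         # mix values:  O(N^2 * D)

rng = np.random.default_rng(0)
N, D = 8, 4                                    # 8 "people", 4 features each
X = rng.normal(size=(N, D))
Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (8, 4): every token updated by the group discussion
```

The N-squared-D term comes from the `Q @ K.T` line: an N-by-D matrix times a D-by-N matrix touches every pair of tokens, each pairing costing D multiplications.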

All of this process happens again and again with a number of layers, because after all, we are talking about deep networks, and deep means many, many layers. All right. Now let's compare this complexity with what came before. Before, we had the fully connected layer. Fully connected means that every feature of every person talks with every feature of every other person. We have N times D features overall, and each one talks with each other one, so we get N squared D squared: much more compute than what happens in transformers. An alternative is recurrent neural networks, in which I just put you in a line and each one only talks with the one before him or her. This one is N D squared, which is less than transformers.

But transformers are more powerful, because they don't have the limitation that you can only talk with the one before you. So if we compare transformers with what came before: relative to fully connected networks, transformers are sparser. Convolutional neural networks, which were the hot thing for images before transformers, are actually as good as transformers, but they are specific: transformers handle any modality, CNNs give you only images. In terms of sparsity and effectiveness, they are the same. And there are the recurrent neural networks, which are cheaper but more limited; never mind the details. So what did we gain? We can handle all types of inputs, we can deal with uncertainty, and we can enable all types of outputs. So basically, the premise is that GPTs are the ultimate machine learner.

Something universal that can solve all the problems of humanity. Sounds great, right? So let's use it for driving. Sounds like a good idea: take images as input and control commands as output, how to steer and how to move the throttle or the brake pedal. And that's it, problem solved, the ultimate machine learner. Let me give a little bit more detail on how to do it, because we actually did it. We do the following. We start with a convolutional neural network backbone for creating image tokens. Remember that we want to do tokenization: we want to transform our original input into a language.

The way we chose to do it, which is standard in the literature and in the industry, is by using convolutional neural networks, which are very, very good for images, just as a tokenization layer, just to turn the images into a sequence of tokens. We took 32 high-resolution images. The CNN reduces them to smaller maps, 20 by 15 pixels, but each element has a feature dimension, so it's not just a pixel; it's a representation of an area in the image. Now we have 20 by 15, which is 300 tokens per image, and each one has a dimension D of 256. I use N_p for this because it's the token count for a single image. In total, we have 9,600 tokens, because we have 32 images: 32 times the 300 patch tokens per image gives us almost 10,000 tokens.

This will be the representation of our input, and I chose a very, very mild dimension of 256. By the way, if you look at GPT-3, for example, the dimension is about 12,000, so much higher than that. Everything I'm about to say would be much worse if we chose the parameters of GPT. So let's run a transformer on this with L layers. I chose 32 layers, again a very mild choice relative to GPT, which has 96 layers if I remember correctly. Assuming we are running at 10 Hertz, doing the calculation 10 times a second, which is standard, actually on the low side, in the industry, then just the encoding itself takes 100 TOPS. And I didn't count the CNN at the beginning, and I didn't count anything else that you want to do.
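That TOPS figure can be ballparked from the numbers given (N = 9,600 tokens, D = 256, 32 layers, 10 Hz). A rough sketch, assuming a standard layer layout with a 4x-wide MLP and counting a multiply-accumulate as 2 ops; neither assumption is stated in the talk, so treat this as order-of-magnitude only:

```python
# Back-of-the-envelope compute estimate for the encoder described above.
# Assumed layer cost: QKV/output projections + 4x-wide MLP give 12*N*D^2
# multiplies, attention scores + value mixing give 2*N^2*D, times 2 ops
# per multiply-accumulate.
N = 32 * 20 * 15   # 9,600 tokens: 32 images, 300 patch tokens each
D = 256            # embedding dimension per token
L = 32             # transformer layers
HZ = 10            # forward passes per second

ops_per_layer = 2 * (12 * N * D**2 + 2 * N**2 * D)
tops = ops_per_layer * L * HZ / 1e12
print(f"{tops:.0f} TOPS")  # prints "35 TOPS"
```

These two matmul terms alone already land in the tens of TOPS; constants omitted here (softmax, normalization, multi-head bookkeeping, wider layouts) push the full encoder budget toward the roughly 100 TOPS he quotes.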

And I chose a very, very mild network. So this is 100 tera operations per second, and if you need just a little bit more flexibility, you run out of compute. The decoder will be 50 tokens that describe where the vehicle should drive in the coming five seconds. Let me show you how it works. What we see here are a few of the input images. On the bottom, you see the ego car, and then you see these, I think, blue dots; I'm colorblind, so sorry, I think it's blue or close enough. You see 50 points here. These represent where the car should be in the coming five seconds at 10 Hertz, so 10 times 5.

Now, if this blue line gets shorter, it means that we need to slow down, because if we are driving slowly, we will only be here in five seconds, and if we are driving fast, we will be there in five seconds. So you will see that the line gets longer or shorter depending on the speed. This is how I encoded the required speed, and the shape of the line encodes our steering, how we want to drive. Now let's play this video. What you see is that we are coming to an intersection. We need to make a right turn, then comes a cyclist, so we slow down, and then we can accelerate and do a lane change in order to overtake the cyclist.

This was learned by the ultimate learning machine, GPT, end to end from pixels, not all the way to the control commands, but to something that we can easily translate to control commands and that is actually more interpretable. We didn't choose control commands because we don't want to be vehicle-specific: in one vehicle you would need to translate these dots to the controller one way, and in another vehicle another way. So let's take something generic that just says how to slow down or accelerate and how to steer. Another example. This one is, by the way, in Hamburg, if I'm not mistaken. It's a narrow street, so you will see that the blue dots take you between parked cars. We need to plan the slowdown when it's a tight place to pass through, and so on. So this is another example.

Here is maybe a more interesting example. We are driving behind a bus; this is in Jerusalem. Then we see a car that looks like an obstacle, so let's try to overtake it. We try to overtake it; then it starts to move, so we say, okay, let's just slow down and go after it. All of these are examples of how you run this end to end, from pixels to control commands, or close enough to control commands. If it's so good, why shouldn't we just take it? Some did, not to mention names. We think it's actually not such a great idea as a standalone, complete solution to the problem. It's good for other things; we are using it, but not as the single solution to the problem.

So why don't we think it's a complete solution on its own? It is good, but first, we have many, many doubts about whether it will reach the desired MTBF. We elaborated on this at the AI Day. There are some issues with this end-to-end learning machine. The first thing: if you recall, I told you that one of the benefits of this machine is that it doesn't need labels, because it just learns what's going to happen next. How did we train what you saw in the videos? By just looking at what's going to happen next. But the future is uncertain. We don't know what's going to happen next; there is no way to know, because there is inherent randomness. We cannot model what's going on in the brain of each individual who contributed data to this training process.

So the only thing that we can say is what is likely to happen next. But is it good or bad? There is no good or bad; there is only likely or unlikely. So inherently, this method cannot distinguish between rare and correct versus common and incorrect. When you train these models, you often see things which are common practice among human drivers but are simply not behaviors that you want to adopt, because there is no notion of correctness. Another problem, and this is something we see very, very clearly in language models: take the largest language model out there, like GPT-4, with trillions of parameters. We have no hope in the foreseeable future of running something like that on a car in real time.

So even GPT-4, when you ask it how much one number times another number is, say with four digits, will give you a wrong answer. What OpenAI did in order to solve this problem is detect that you are asking about something where a calculator should be applied. They just run Python in the background: translate your request into a one-liner of a Python program, run the program, take the output, and this is the answer that they give you. So even OpenAI stopped doing end-to-end. They switched to a compound AI system, a system in which there is a large language model, but there are other components which are not based on learning at all. Why did they do it? Why couldn't the network figure out, with trillions of parameters, how to multiply two numbers?
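That calculator workaround — route arithmetic to an exact tool rather than trust the model's "instinct" — looks roughly like this in miniature. The regex routing and the `flaky_llm` stand-in are toy placeholders for what a real compound system does.

```python
import re

def answer(question, llm):
    """Compound AI in miniature: route multiplication questions to an
    exact calculator tool; fall back to the (fallible) language model
    for everything else."""
    m = re.search(r"(\d+)\s*(?:\*|x|times)\s*(\d+)", question)
    if m:                                   # tool call: exact arithmetic
        a, b = int(m.group(1)), int(m.group(2))
        return str(a * b)
    return llm(question)                    # neural "instinct" path

# A stand-in "model" that, like a raw LLM on 4-digit products, guesses.
flaky_llm = lambda q: "around 19 million, probably"
print(answer("what is 4821 times 3907?", flaky_llm))  # 18835647, exact
```

The structural point carries over to driving: when correctness matters, the learned component proposes and a non-learned component guarantees, rather than the network doing everything end to end.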

The reason is that these methods have limitations, and many times they fail to learn important abstractions of the problem. They can learn instincts but not abstractions. Now you can say, fine, humans drive by instinct, so I will be as good as a human. And I will say: wrong. Humans do not drive only by instinct. If that were true, then we would let little children drive. We don't let them drive; we make them learn the rules. There are rules to follow, and you need to be responsible, and there are more things that you need to do. You need to be mature enough to grasp abstractions, to learn them in driving school, and only then do we give you a permit to drive. It's not just instinct. You need to learn abstractions, especially for autonomous driving, where we want to be very clear about safety.

So we don't want to say, okay, it's an instinct; we want to give some clarity on what we are doing, and for this we need abstractions. There is also a problem called the shortcut learning problem. I don't have time to explain it fully, but these are cases in which the network finds a shortcut and doesn't really grasp the real thing that you want it to understand. The issue is that it will not generalize to edge cases, which brings me to the long-tail problem. You saw that we drive pretty nicely with end to end; I showed you the video, very nice. But what happens in edge cases? It is very, very difficult to control edge cases with these methods, where you don't have any transparency on what's going on in between.

For all of these reasons, we doubt whether it will reach the desired MTBF, maybe not ever, but at least not with the current technology. The second thing is that we also want to eliminate unreasonable risk. Amnon mentioned it: think about what happens if you have a board that just burned out, or a malfunctioning camera, or dirt on the front camera. Even the best AI in the world can only work on the computer it runs on; if the computer is malfunctioning, it will not work. It can only rely on what it sees; if it can't see, we have a problem. So if you put all the eggs in one monolithic system and you want to be robust to its failures, you just need to duplicate it, doubling the cost.

You need to take the same end-to-end expensive system and double it: two computers, and you need to double the cameras and hope they will not get dirty at the same time, et cetera, et cetera. The way we build the system is that we have redundancies built into the system, which do not cost us more, because they are there by design. Which brings me to the third thing. These methods, the GPT methods, are brute force. Brute force is the dark side of universality. Now, we do not need to solve a universal problem; we need to solve driving. We know a lot about the problem, so why wouldn't we use that knowledge? What I'm going to show you is transformers where we use what we know about the problem.

By doing this, we get 100 times better efficiency relative to vanilla transformers without paying anything in performance. In fact, we actually improve performance, because we injected prior knowledge, things that we know about the problem. There is a lot of sophistication needed in order to achieve this. One of the issues is that we now need labels, because we do not rely only on GPT-style unsupervised data. We do need labels, but if you're smart about labels, you can get them essentially for free. So these are the two things that I'm going to show you: how we make transformers 100 times more efficient, and how we get labels essentially for free. OK, back to our problem. We have images.

We want to output not only the trajectory but also the sensing state: all the vehicles around us and all the objects around us, for the reasons I mentioned before about MTBF and unreasonable risk and all of this. So it's a similar problem; I just require more from the neural network. The same neural network that outputs the trajectory should also output additional things like all of the objects, the lanes, et cetera. So what we propose is something we call STAT: Sparse Typed Attention. In order to understand it, first recall that in a vanilla transformer we have the N squared D plus D squared N. Let's focus on the N squared D, which is the thing that hurts us the most, because we have N of 10,000, so 10 to the power of 4, and N squared is 10 to the power of 8.

It's really large. Now what's the problem with it? What are we not using? We are maybe 30 or 40 people here; maybe it makes sense for us to do group thinking where everyone can ask everyone else a question in the self-attention phase of our group thinking process. But imagine that we were 10,000 people here, a whole stadium of people. Does it make sense that each one can ask each other person a question? I don't think so. So how do we handle so many people? We group them together. Each group does some group thinking, and there is maybe a manager of the group who communicates with the managers of other groups. Or there are groups of groups, and there are link tokens that link between different groups.

In companies, this is called a product group. Moran here is head of product, so Moran is responsible for linking between the R&D and the business. The business needs something; the R&D is working on general technology. How do you link them? You want someone to communicate between the business needs and the R&D. This is exactly what we are doing in STAT. We are grouping things together. For every image, we have 300 image tokens, but they are not talking with each other; they are talking with the product group, with the link tokens. So we have a bottleneck in the communication, and this reduces the N squared dramatically. Now, what is dramatically? The communication is reduced by a factor of 10, but recall that we have N squared, so a factor of 10 in the communication becomes a factor of 100 in the overall compute.
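The arithmetic is easy to check. The factor-of-10 reduction below is the illustrative number from the talk; the actual token/link structure of STAT is not disclosed here.

```python
# Rough cost model for the grouping idea: shrinking the effective
# "crowd" that attends to itself by 10x cuts the quadratic attention
# term by 100x.
D = 256
N = 9600
vanilla = N**2 * D        # every token asks every other token

N_eff = N // 10           # image tokens talk only through link tokens,
sparse = N_eff**2 * D     # so the all-to-all crowd is ~10x smaller
print(vanilla // sparse)  # 100: 10x less communication -> 100x less compute
```

This is why injecting structure beats brute force here: the saving compounds because the dominant term is quadratic in the number of communicating tokens.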

And this is how we get 100 times more efficient. We didn't pay anything, because we did something built for the problems that we are trying to solve; when we did it, we actually improved performance versus vanilla GPT. The other thing, which I will not go into, is parallel autoregressive decoding, which takes care of another pain point of transformers. No modern chip likes to produce things one at a time, because we cannot leverage parallelism when we output things one at a time; on the other hand, the whole idea of autoregressive is outputting things one at a time. So how do we handle this? We, again, have a novel idea for how to do it, and we implemented it, and it works very, very well. I will not go into the details. So, an intermediate summary. Transformers.

Machine learning around 2000, deep learning in 2012, and then came transformers in 2017, which became very popular in 2019. The good, the bad, and the ugly of transformers. The good: these are universal learners. They are generative and universal; you can solve everything with them, and you can solve it without labels. This is great. The bad: by construction, you cannot separate right from wrong, only common from rare. You miss important abstractions many times, like multiplication. This approach is questionable when very high accuracy is required. The GPT family, ChatGPT, and other similar models are really amazing, but they hallucinate. I would not trust them with my life, and I propose that you don't either. They are very impressive, but not for heavily safety-critical applications.

The ugly: brute force, the dark side of universality. If you can solve anything with the same hammer, then this hammer needs to be a really big one. But when you know which nail you need to hit, you should adapt the hammer to the nail. So, working smarter with transformers: we use transformers as one component in our system, but not the only component. We use them where they are great, and because we use them just as a component, we need them to be much more efficient. This is why we developed 100-times-more-efficient transformers. Then it's just a part of our system, no big deal; everything is fine. OK, the next thing I said is that because we are not doing just end-to-end unsupervised learning, we need labels.

When you think about it, there is really a trade-off. This graph shows how much data you need versus the amount of supervision. You can think about it as quality versus quantity. The x-axis here is the quality of the data: how much you know about it. It's not just what's going to happen next; it's also that this thing is called a car, this thing is called a pedestrian, this thing is called a cone. That is supervision, versus just "the human is going to drive like this." The end-to-end methods really need a lot of data of very low quality; they prefer large quantities of low-quality, unsupervised data over quality. The approaches that we take seemingly need much less data, but of a higher quality, because we need labels.

We want to understand that this is a car and this is a pedestrian. Now we do a bit of magic: we use a low amount of data, and it is sufficient. Even though we need all the labels, we can still live with very, very little manually labeled data. How do we do this magic? This is what I'm going to show. The magic is to automate the creation of ground truth, requiring a very, very small amount of human labeling and a lot of automatic processing. Now, why can we leverage an automatic process? There is one major reason: if you try to solve an easier problem, you need fewer labels. And why is the problem of creating ground truth easier than the problem of learning how to drive? There are two reasons. The first superpower that we have is that we know the future.

We are at training time, not at inference time. When you want the car to drive, you don't know the future; when you are just creating data, you know the future. And this is a superpower. Why? You see this car: now it's close, and it's very easy to see it. If you track it, you can keep seeing it when it is far away. This is just one illustration of how knowing the future is very, very helpful and makes the problem much easier. The second superpower is the sensors that we are using. During data collection, we can rely on expensive sensors, because we do not need scale; in production, we cannot rely on expensive sensors. These two superpowers make the problem much easier. In addition, we have offline compute.
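Returning to the first superpower for a moment: "knowing the future" can be illustrated with a toy label-propagation sketch. This is a hypothetical, drastically simplified structure; a real auto-ground-truth pipeline tracks objects in 3D across many sensors.

```python
def propagate_labels(track):
    """Offline auto-labeling: a track is a list of per-frame detections
    of one object, some confidently labeled (seen up close), some None
    (too far away to recognize). At training time we see the whole
    track, so a confident label can be propagated to the frames where
    the object was hard to recognize."""
    label = next((d for d in track if d is not None), None)
    return [label] * len(track) if label else track

# The object is only recognized in later frames, once it comes close...
track = [None, None, "cyclist", "cyclist"]
print(propagate_labels(track))  # every frame now carries the label
```

At inference time none of this is possible, because the later, easier frames haven't happened yet; that asymmetry is exactly why building ground truth is a much easier problem than driving.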

We don't need to constrain ourselves to the compute that we can put on a car running in real time; we can calculate on a server at the office, not in real time. All of this gives us another ability: we can build foundation models, which are heavy, and use them to create data. Here is an example of our foundation models. This is a foundation model that we built at Mobileye, which beats the best state of the art out there on our data. I'm not saying we beat them on data that we don't care about, but for our data, it's better than anything else. And it gives, for every pixel, the semantic meaning of that pixel. All in all, we get this level of detail. What you see here is a point cloud.

And you see the level of detail: all the cyclists and the trees and the road and the cars. Everything is fully known to the model, also hazards, et cetera. OK, last topic: the safety goal. Amnon already mentioned that the common requirement is to have a superhuman mean time between failures, or critical events, and we require more: we require no unreasonable risk. What do we mean by that? We mean no lack of judgment in the decision making. We simply think that in decision making, we cannot rely on statistics and do something really crazy. We need to be very clear and transparent about what is reasonable to do and what is not reasonable to do.

Maybe a few words about this, and about the RSS model that we published in 2017. What bothered us is that there is an inherent trade-off between the usefulness of the road system, if you want, on one hand, and safety on the other hand. What's the trade-off? If we say that every car on residential roads will drive at one mile per hour, no faster, this will be safer than the situation today; you cannot do big damage while driving one mile per hour. But the usefulness of the road system will be damaged: just the last mile from the main road to your house might take one hour, because we are driving one mile per hour.

So we want to be at a reasonable place on this trade-off between safety and usefulness. And the regulator sometimes puts numbers on this trade-off, for example, by setting the speed limit. Setting the speed limit is a number on the trade-off between the usefulness of the road system and safety. And the RSS model takes it several steps further, giving a model to say: at the end of the day, I want to make reasonable assumptions about what others might do. And if they behave crazy, then they behaved unreasonably, and it was valid for me to assume that they would not behave like this. But if everyone behaves reasonably according to the model, then you cannot say, I miscalculated what others would do, and therefore I created a collision.
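The published RSS paper formalizes these worst-case-but-reasonable assumptions as explicit rules. The longitudinal safe-distance rule, for example, can be sketched as follows (the parameter values are illustrative placeholders, not Mobileye's production settings):

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=1.0,
                                   a_max_accel=3.5, b_min_brake=4.0,
                                   b_max_brake=8.0):
    """Minimum safe gap (meters) between a rear and a front vehicle per RSS.

    Worst case assumed: during the response time rho the rear car
    accelerates at a_max_accel, then brakes at only b_min_brake, while
    the front car brakes as hard as b_max_brake.  Speeds in m/s.
    """
    v_rear_after = v_rear + rho * a_max_accel           # rear speed after response time
    d_rear = (v_rear * rho
              + 0.5 * a_max_accel * rho ** 2            # distance covered while responding
              + v_rear_after ** 2 / (2 * b_min_brake))  # braking distance, gentle brake
    d_front = v_front ** 2 / (2 * b_max_brake)          # front car's hardest-brake distance
    return max(d_rear - d_front, 0.0)

# Example: both cars at 30 m/s (~108 km/h)
gap = rss_safe_longitudinal_distance(30.0, 30.0)
```

The point of the formula is exactly the transparency the speaker describes: every assumption about the other road user (response time, braking bounds) is an explicit parameter rather than a statistical estimate.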

So the RSS model allows us to put a clear and transparent line between reasonable assumptions about other road users and unreasonable assumptions, and to behave accordingly. What we did in our recent paper is extend this type of thinking to all the decision-making that we are doing in the development of self-driving cars. So planning is, again, RSS. For hardware: we have a computer that does all the calculations in our stack. What happens if it burns out? Is it reasonable to assume that it will never malfunction? We think it's not a reasonable assumption. And there is actually a standard for this, which is called FuSa, for functional safety. And this standard tells you the boundary between what you can assume and what you cannot assume.

For example, it tells you that if you have two computers, the event that both of them burn out at exactly the same time is something that you can say is unreasonable. So you can say, I don't need to deal with it. But if one of them is failing, you need to be able to deal with it, to know how to identify that you have malfunctioning hardware, and to be able to drive the car based on the other board, the other computer. Likewise, if the camera has dirt on it, you cannot just say, OK, this will not happen. It will happen, and you need to know how to detect it, to understand that there is a failure, and to behave appropriately.

Behaving appropriately means relying on the other sensors, stopping on the side of the road, and asking the human in the car to clean off the dirt, if there is a human in the car; these are just examples. This FuSa standard tells you how to deal with potential hardware failures and what is reasonable and unreasonable regarding hardware failures. Now, much trickier is how to deal with AI bugs. Because if we rely on machine learning, then machine learning, deep learning, transformers, all of these are statistical creatures. They can never tell you that you are correct. The standard model of learning is called PAC learning. PAC stands for probably approximately correct. The only thing that they can tell you is that probably, approximately, you are correct. What if you are not correct? Probably, approximately is not good enough for safety-critical applications. How do we deal with AI bugs?
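The "probably approximately correct" guarantee the speaker refers to has a standard textbook form. For a finite hypothesis class $H$ in the realizable setting with $m$ i.i.d. training samples, the classical bound reads (this is the standard statement, included for reference, not a formula from the talk):

```latex
% With probability at least 1 - \delta ("probably") the learned
% hypothesis \hat{h} has error at most \epsilon ("approximately correct"):
\Pr\!\big[\operatorname{err}(\hat{h}) \le \epsilon\big] \;\ge\; 1 - \delta
\qquad \text{whenever} \qquad
m \;\ge\; \frac{1}{\epsilon}\Big(\ln\lvert H\rvert + \ln\tfrac{1}{\delta}\Big).
```

Neither $\delta$ nor $\epsilon$ can be driven to zero with finite data, which is exactly the point being made: the guarantee is statistical, never absolute.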

So we built a methodology to deal with AI bugs. And the methodology, again following a standard, which is called SOTIF, safety of the intended functionality, says how you should hunt and fix reproducible errors, where we define what a reproducible error is. For a reproducible error, you cannot rely on statistics, because it is something that happens again and again. You can reproduce it; it will happen again and again, even if only very rarely, like the example Amnon mentioned about a baby lying on the road. You cannot decide that because it is rare, it's OK not to deal with this case. It's not OK. And if there is something where you say, OK, it's very rare and we are not going to deal with it, you need to be transparent about it.

You need to state formally what you take care of, and what you say explicitly is something it's reasonable to treat as a low-probability event that we can ignore, like two tires going flat at exactly the same moment. And finally, there are the black swans. The black swans are all the rest. And the methodology that we have for black swans is that we don't want any single point of failure in our design. So everything that we solve, we want to solve by more than one method. And a failure can only happen if at least two methods fail simultaneously. This is a design methodology which is, by the way, a very standard design methodology in aviation and in every industry that requires a high level of safety.
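The arithmetic behind the no-single-point-of-failure rule is simple if one assumes, optimistically, that the two methods fail independently (the real engineering work goes into making failure modes as decorrelated as possible; the numbers below are purely illustrative):

```python
# Illustrative failure probabilities for a single driving event.
p_method = 1e-4              # one method misses the event (example value)

# With one method, the system fails whenever that method fails.
p_single_point = p_method

# With two independent methods, the system fails only if BOTH fail
# on the same event, so the probabilities multiply.
p_dual = p_method ** 2       # roughly 1e-8

improvement = p_single_point / p_dual   # ~10,000x under independence
```

If failures are correlated (for example, heavy rain degrading two camera-based methods at once), the multiplication no longer holds, which is why the methods are chosen to rest on different sensors or different algorithms.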

When you design more than one system, you need to tackle the problem of how to fuse the systems. And it's not trivial, even though it sounds trivial at first thought. But it's not. And we wrote about it in the paper, about the PGF model, the Primary-Guardian-Fallback approach: how to fuse different systems. The last thing is to have superhuman MTBF. Now, the way we think about it is that there are two things that you need to require. One thing is: don't have unreasonable risk. And the second thing, the MTBF argument, is a greater-good argument. At the end of the day, you don't want to cause more harm relative to what's going on today with human drivers. So it's a greater-good argument. But greater good is not sufficient.

You also need to be responsible in the sense of not inducing unreasonable risk. So this is my last slide, which just gives you a hint of how we design three systems with PGF, Primary-Guardian-Fallback. For example, for physical objects, we have three low-level subsystems: one based on radar, one on a camera, and one on LiDAR. We fuse them with a neural network, and we use RSS-based driving policies. This is our primary system. Then we have a guardian system that takes the output of the primary system and contrasts it with each sensor individually, and then we can count a majority. If two out of three agree with the primary, then we are fine, but if at least two of the subsystems disagree with the primary system, then we have a problem.

And when we have a problem, we go to a fallback system. And again, you need to think it through, but this design is such that if the whole system fails, then at least two subsystems have failed. We never want to fail when only one subsystem fails. So this is safety by design. For the lane semantics system, we use different definitions. There, radars and LiDARs are less relevant, because understanding lane marks is mainly either a camera thing or a map-based thing. So the sensors are a bit different: camera and map. And here we use another approach, which we call algorithmic redundancy. So you can have redundancies that follow from sensors, but you can also have redundancy that follows from algorithms: just use different algorithmic mechanisms in order to get redundancy.
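The guardian's two-out-of-three check described above can be sketched as a toy function (names and interfaces here are illustrative, not Mobileye's actual implementation):

```python
def pgf_vote(primary_output, sensor_outputs, agrees):
    """Primary-Guardian-Fallback style check, 2-of-3 majority.

    primary_output: decision of the fused primary system.
    sensor_outputs: the three per-sensor subsystem outputs
                    (e.g. radar-only, camera-only, lidar-only).
    agrees(a, b):   domain-specific predicate deciding whether two
                    outputs are consistent with each other.
    Returns the primary output if at least two sensors back it,
    otherwise signals that the fallback system must take over.
    """
    votes = sum(1 for s in sensor_outputs if agrees(primary_output, s))
    if votes >= 2:                        # majority agrees with the primary
        return ("primary", primary_output)
    return ("fallback", None)             # >= 2 disagree -> hand off to fallback

# Toy usage: outputs are distances to the nearest object, in meters.
agrees = lambda a, b: abs(a - b) < 1.0
state, out = pgf_vote(20.0, [20.2, 19.9, 35.0], agrees)  # one outlier: primary stands
```

The design property the speaker highlights falls out of the vote: a single faulty subsystem can never flip the outcome, since at least two must disagree with the primary before control passes to the fallback.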

So to sum up, I talked a lot about machine learning and AI and what happened in the last 20 years, and I spent quite a lot of time with you on transformers. I hope it wasn't too much. What I showed you is that we are well aware of all of these things, but we don't think that we need to take them as is. We need to be smart about how to combine them for solving autonomy: in terms of efficiency, being 100 times more efficient; in terms of relying on a compound AI system and not just a single end-to-end system; and in terms of being smart about how to create labels and safety goals. Thank you.

Dan Galves
Head of Investor Relations, Mobileye

OK, we're going to take a 15-minute break. The restrooms are across. The men's room is on the left of the elevators. The women's room is on the right.

Thanks, everybody. Coffee available over there.

Dan Galves
Head of Investor Relations, Mobileye

Okay, everybody, we're going to get started again. That was a quick 15 minutes. Sorry, Stefan. Let me know when we're live again. Great. Thanks, everybody. So, yeah, two more presentations, and then we'll do a Q&A.

So let me introduce our EVP of Business Development and Strategy, Nimrod Nehushtan. Nimrod, take it away.

Nimrod Nehushtan
EVP of Business Development and Strategy, Mobileye

Okay. So now we'll start the business side of the presentations, after the technology side. I'll try to explain how the deep technological innovation translates to business and to our programs with carmakers: what is the process that OEMs go through to make such a decision, where we are in these engagements with OEMs, and what impact it will have midterm and long-term on the growth of the company. So I'll try to connect the dots between the business and the technology and products. Okay. Recapping from, or starting from, where Amnon finished: eventually, Mobileye's vision is about solving car accidents and autonomy.

Our product portfolio spans from the base ADAS front-camera system, which provides driver-assist safety features and covers essentially all the regulatory requirements globally, all the way to the driverless robotaxi business. What I am trying to explain is not just the technology and the sensor set and things like that, but the user experience. Why would an OEM want any of these systems? What do they want to offer to drivers and to consumers? Where do they see the added value? What are the most important criteria when they decide which is the best possible option for their different needs? In ADAS, it's about helping drivers prevent dangerous events and meeting regulatory requirements. In SuperVision, it's about changing the way drivers experience driving.

So instead of driving all the time, which can be a little bit bothersome, you can now take your hands off and relax. There is a lot of comfort in being a heavy user of the system. I think it changes the way you experience driving, and it's really hard to go back to manually driving all the time. But not just this: there is also a safety benefit. Because the system is very capable and very sophisticated and intelligent, you can offer new types of safety benefits to drivers. And this can be considered kind of the premium ADAS system of the upcoming generations. Chauffeur is about giving back time to drivers. I don't think that there is a more compelling value proposition than giving back time to people. And I think it's pretty straightforward.

The key here is how much availability of the system you can have. There are existing Level 3 eyes-off systems out there; however, they do not provide enough consistent and, let's say, useful availability for people to really change the way they experience their car. Maybe every once in a while you can activate it for a few minutes, or 30 minutes, on a specific road, but it's still not enough to really get used to doing other things while you're on a highway, for example. That is the key challenge there. And Drive is about the transformation of mobility: it's about removing the driver and maybe changing how cities behave. So before we continue with the business, I think it is important to explain what happened in the past two years.

It has been, as Dan mentioned at the beginning, two years since our IPO, since we had a similar event. And some things have happened, as you probably noticed. And we want to explain how we perceive what happened, on the good side and on the challenging side, I would say. And I think we should divide between the base ADAS, our core business today, and the advanced products, the future growth engines of the company. So starting with base ADAS, the headwind factors were, I think, twofold. Number one is China. There are evolving market dynamics in China in which, number one, there is an emergence of lower-cost, lower-performance solutions, which created race-to-the-bottom dynamics and some level of challenge in maintaining market share in China. But not just this.

Also, our core customers are losing market share in China because of the rise in competition from the Chinese OEMs, which adopt local solutions with no performance regulation, or with less sensitivity to performance issues. So, over time, our core customers are losing market share. This created uncertainty about our sales in the near term, and of course it affected us in some ways. However, at the same time, it is important to say that we have very successfully maintained our existing customer base, and we have solidified our position as the core ADAS partner for our top 10 customers.

We won practically all, 95%-plus, of the RFQs, all the future volumes that our customer base opened in the past two years, which means we have a pipeline well into the 2030s of continued sales of base ADAS to the global OEMs that are our core customer base today. We're not seeing any global competitor emerging with consistent design wins in this time period. Although there are growing competitive forces in China, which come mostly from price competition with lesser performance in a market tilted toward price versus performance, we're not seeing the same effect happening globally in base ADAS. We are perceived as the de facto standard, synonymous with base ADAS. That has been the case in the past two years, and we're not seeing any reason why that will not remain the case.

Also, we are seeing continued regulatory upgrades and a continued push for more content, more features, higher standards, which also helps us, because that's our biggest advantage: we have the best performance-versus-cost product in the market. And the more regulation in Europe and the U.S. continues to push the envelope, the better it is for Mobileye. And when it comes to China, there is a trend of maybe China for China, but China not for global, and it is important to isolate the impacts and the dynamics in China to the Chinese market, which perhaps will not have the same effects outside of China. That is mostly driven by macro factors like trade and restrictions on technologies and so on. And today, it's harder to imagine a technology from China becoming the market leader in the U.S. market, for example, when it comes to self-driving and AI-driven systems.

When we look at the advanced products, the headwind factors were mostly delays in decision-making. These delays are not necessarily directly related to things that Mobileye is doing or influencing; they are related to the overall uncertainty in the industry, also driven by China, but not just China. The EV-versus-combustion-engine discussion does not affect us in the sense that AV is tied at the hip to EV; that is a myth that I think we need to debunk. The way it does affect us is that OEMs did have a plan that the future generation of their cars would be electric and would have the next-gen intelligent driving system. Now that the EV demand is different, they need to go back to the drawing board and maybe think of a next-generation combustion-engine car.

That has inserted a certain delay into their decision-making timelines, not because AV cannot go with a combustion engine, but because they did not have a plan for a next-gen combustion-engine architecture, perhaps. Also, AI, and recent advancements in AI, continuing from where Shai ended: this deep technological revolution is something that OEMs are trying to get their heads around. They try to understand what it means and how it changes the landscape. That creates some delay in making a decision, because you don't want to make a decision that may be proven wrong by a technology revolution. At the same time, very important for us is that we managed to secure the first global OEM partner for a full adoption of our product portfolio.

That has been a very, very important tailwind factor, because it gave credibility to our products and technology. If Volkswagen Group has selected Mobileye as its partner for base ADAS, SuperVision, Chauffeur, and Drive, it is known in the industry that Volkswagen Group has the highest standards in making such a decision. They probably went through all the length of what it takes to be the right partner. It really helped other OEMs feel comfortable with making such a strategic decision with us. I'll show you later how it translates to the design win pipeline and expectations. We are at a stage where nine out of our top 10 customers are in some level of discussion, concrete engagements, on our advanced products.

That number two years ago was different, as you'll see in a minute, which means that we're on a good path to deepening our partnerships with our top 10 customers. And the key benefit, the advantage that we have, is the diversity of the products that we can offer. So our growth is not based on a one-size-fits-all product. As we explained in the workshop, it is very important to have adaptability, modularity, and synergies between different products, because OEMs have different needs for different segments. So if I recap this: the headwind factor is the delays in decision-making, mostly because of macro-driven aspects. But at the same time, we did manage to solidify our position with our existing customer base and to pave the way for deepening the partnership with more advanced products.

So what is the planning process that OEMs are going through for their next-generation ADAS systems? Today, ADAS is divided into two categories. You have the standard ADAS, which is mostly reliant on a front camera, in some cases with a radar, and which provides the safety features. As Amnon mentioned, it's around $100 to $150 per system. This is the system cost that the OEM pays; it's not the Mobileye price point, it's the entire system cost. And the premium is something not very clearly defined. In a lot of cases, it's an expensive system that provides maybe in-lane hands-off, like a BlueCruise or a Super Cruise function, using multiple radars and multiple cameras in a kind of complex architecture.

What we are seeing, working directly with multiple OEMs, from their RFQs and from what they are planning, is a more precise definition of the segments of products they want to launch, let's say from 2026, 2027 onwards. The standard, or entry, segment is pretty much the same: you want to do as much as you can with the smallest number of sensors, because here it's all about the price. This is the standard that is going to be in all cars, including in emerging markets like India, for example. The premium is more precisely defined around eyes-off and hands-off everywhere, which means point-to-point driving like we demonstrated today. That is the consensus on what a premium system will be two or three years from now.

What is interesting is that in the past year or so, we have seen the emergence of a middle category, which will be optimally positioned for high-volume cars: a low price point, but still very advanced features. So imagine the drive we did today: when you're driving on the highway, you have the same experience you experienced today, but with a system cost of $700 or so. That is extremely important, because it allows OEMs to deploy this in very high-volume cars, in a $30,000 car, while the premium, for generation one, might be more for the $50,000 to $60,000 cars, because it targets a different segment of the market. So those are the needs of OEMs and what they want to deploy in, let's say, the next generation of ADAS offerings.

What are the criteria for OEMs when they decide who is the best possible option for them to cater to all of these segments? They want to have the most optimal solution for the entry, for the mid-trim, and for the premium, which is a very tall order. I think there is more than meets the eye in their decision-making process. Maybe the tip of the iceberg, what everyone is talking about, is the technology: do you have a clear path to autonomy? Do you have the right AI philosophy? Do you have the competencies in technology? Do you have the assets that are needed for state-of-the-art development? That is only the tip of the iceberg.

As I said, underneath the surface there are many more parameters, because eventually this decision will be made somewhere soon, and it will not be changed for maybe a decade onwards. Maybe once in a decade, the OEMs make a decision about their next-gen architecture. And it really needs to be the right decision, because they cannot make such a change every two years. So beyond checking the box on the technology, you also need to convince them that you can be the trusted long-term partner for the OEM. Number one: can you get to a product quickly? Especially now, with the pace of innovation, OEMs are very cognizant of how long it will take to get to SOP. Otherwise, they might be losing market share, or losing their competitive advantage, or their competitiveness at all. Can we trust you to get to SOP?

Do you have the track record to execute in automotive? Do you know what it takes to get to start of production in millions of cars in different markets: passing homologation, passing the regulatory requirements, doing this consistently in all markets at the same time? Also, can we trust the performance we'll have in production? It's not just about convincing us at small scale that you have something cool; we also don't want to be surprised by massive recalls or warranty claims from our customers when we start selling this product on the road. How much control will we have over the product? How can we influence the user experience? From our day-to-day interactions, it starts with the user experience: OEMs would like to have direct influence on what their customers will experience in the car.

Then finally, and I think this is probably one of the most important points, there is the modularity and the synergies. Ideally, if we can find a partner that can cater to all of these segments in a way that lets us consolidate efforts, then we don't need to do the entire validation of functional performance for the entry segment, then the mid-trim, and then the premium trim with three different partners, spend three times as much or maybe more, and take three times the risk, with less control and a higher risk of failing in each of those segments. If we can find one partner, maybe we can save costs. We can have synergies between these products: advancements for the premium can serve the entry or the mid-trim.

So if there is such a partner that can check the box of the technology at the tip of the iceberg, but also be the trusted long-term partner on the execution and modularity aspects, then probably that's the best possible option, especially now that the question becomes: who can be this partner for 2026 and 2027? It's not a theoretical question about 2035. So there is a strong constraint of time. Maybe some challengers would say, we're going to be that player, but really, you cannot be that player within a two-year execution duration for a project. Okay. So when we look at these criteria, let's consider three types of competitors.

The legacy Tier 1s, the silicon actors, and the startups: these are the competitors we meet mostly when there is a competing option in RFQs. Maybe I would say that, in general, we are considered the most trusted partner when it comes to our track record in ADAS, which is unparalleled. Our execution and the progress that we're making with our existing customers, with Volkswagen Group, the same tour that we did with you in the garage, is something that really resonates with other carmakers, because they understand the difficulties, and they understand what it means to be able to go from silicon to a functioning car within four weeks.

And we believe that the legacy Tier 1s, the silicon actors, and the startups do not have that aspect of being a surefire solution that can get to SOP within two to three years, let alone that there is a big question to be asked about their technology philosophy. Because, again, showing things at small scale, compared to proving them at large scale and getting from a demo to a product, is something that today, after 10 years of failed attempts from some OEMs, a lot of OEMs are taking with a grain of salt, I think more so than in the past. So what is the process that takes place to get from the earliest stages to nomination, until we can get to a design win and awards and an announcement? Normally, it starts with generating interest.

We need to convince the OEM that they need such a product. I think recently it has become easier, the more the industry realizes that the next great frontier will be intelligent driving. Tesla has been very vocal about FSD and its strategic importance for them in the future. They see China as, again, a very high-paced, high-octane environment in which these products are becoming the de facto standard for the leading players. That generates a growing sense of urgency among OEMs to find an answer to this question. Convincing them of why they need this product is becoming easy, but it's still a necessary part of the process. Afterwards, there is a recurring engagement in which the OEMs need to start understanding the implications of launching such a system. What are the system requirements?

What will they need to invest in terms of budgets or manpower or certain tools and infrastructure? This is a step that can take maybe a couple of months, two to three months, in which they need to understand whether they can realistically do what the Mobileye solution will require from them to get to SOP within X number of years with Y number of vehicle platforms, and so on. After we go through the high-level requirements and implications of such a project, we go into the due diligence stage. This is the RFQ.

This means that there is an actual selection process in which we need to comply with very, very detailed requirements that go into the system architecture, the functional performance, and different technical constraints: what is the power consumption, how will it be cooled, how to place the ECU versus other ECUs, and how will it communicate with different components. And of course, the more you scale this to multiple vehicle platforms, multiple markets, and different constraints, the deeper and more detailed the process can become. But it is important, because at the end there are three outcomes: the technical rating, the commercial rating, and, let's say, the execution risk rating. The OEMs go through this process either just with Mobileye, because we're the only option they're considering, or maybe with a couple of alternatives.

They will need to provide a summary of what the OEM thinks will be, let's say, the functional performance and the innovation. This is the technical aspect: A-plus means this is the best possible system that we can see; there is no better solution. The commercial rating means: we can sell this; the price is right; the cost is right; the budget that we will need to fund this is right. So A-plus means there is no risk in selling this system to consumers; we can fund this, and there is no financial risk involved. And for execution risk, A-plus means: we saw everything that we need to see from this partner to be convinced that they will deliver on time, on quality, and on performance, with manageable risks in the process. So ideally, you want to get an A-plus on all of these parameters.

And it takes somewhere around six months, maybe more, to go over all of these aspects. As we mentioned in the workshop, with Volkswagen Group as an example, this process included multiple expeditions in which we used our systems to drive in Europe and in the United States for tens of thousands of kilometers, and then analyzed the performance, walking through what is acceptable, what they want to change, how they want to change it, and how we can change it. It's a process that involves very, very detailed analysis. I think the positive of this is that there are only very, very few players that can really go through this process effectively and have the experience of checking all of the boxes on the technical aspects, the execution, and the financial. So the more detailed it actually is, the better it is for us.

Because if it were just a fluffy process, then maybe it would have been easier for others to get a quick win, until eventually it died, because sooner or later the OEMs would have realized that they made the wrong decision. So the moment they make such a decision, even though it takes them longer, it will be the incumbent solution for a long period of time. We saw the same thing happening with, let's say, AEB and ACC functions, which 10 years ago were only in premium cars, and today are standard. Back then, it was a very, very complicated decision for an OEM to make, but as soon as they made it, it became the de facto standard, and everyone adopted it. So these decisions happen maybe every 10 years. It's a detailed process, which we have been progressing through.

I think it is important, because if you pass this, then the entry barrier is so high that it's going to be hard for others to compete once you successfully accomplish this. Afterwards, there is a negotiation. This is more about negotiating the prices and signing an agreement if that's necessary, which can take its own time, as you can imagine. Even after you have all the technical and commercial ratings, there will still be some negotiation process, again because of the strategic importance. So when we look at the entire process, it can take anywhere between eight to ten months and a year and a half, realistically, especially with the leading OEMs in terms of vehicle production, which have global sales with dozens or more vehicle platforms with different needs in different segments.

In some cases, it is being discussed within a strategic deal that caters to all segments. So this is in order to explain why it takes so long, which is a question that probably some of you, or all of you, are asking. So let's look at where we were in this process a year ago. A year ago, the most advanced discussion, negotiation, pre-nomination project that we had was with Volkswagen. This process took us around a year and a half to conclude.

From the first day of interest, early demos to their management, and explaining why they need such a system way before it was evident that they need it, it took us a year and a half to sign the contracts, build the prototypes, get to nomination, kick off the project, start executing, and get to where we are today. In the past year, we made, I think, very significant progress. There are more flags here. I cannot disclose the names. But what we believe happened to accelerate the process in the past year is a combination of factors. Number one, the deal with Volkswagen really gave credibility, as I mentioned, to our products and to our capability to launch such a system within two to three years. Now, it's not just that.

I think some OEMs might look at this as an opportunity to piggyback on the work that we're doing: we're going to develop this system and get to the performance level that we're aiming for, and maybe that reduces the risk for OEMs to join the Mobileye bandwagon, because now there is already a function in hardware, there is already a prototype working, progress is being made, the data is there, and a lot of the work is already done. So maybe now the risk for them is lower. We're past that point, which was a critical point. Also, I think competitive pressure on the OEMs has been accumulating, especially when you think about China and Tesla in the US, which have been strategically investing in intelligent driving to become the next differentiating factor.

More and more OEMs that might otherwise have decided to slowly and gradually research and adopt these technologies over a more extended time window are now in a "we have no other choice" situation. Otherwise, they will be outdated in 2026, 2027. So they have to find the solution that can realistically meet the two- to three-year SOP challenge with the best possible performance at a controllable price. And it has to happen. It's not a should-have. It's a must-have. That's the transition that we've seen in the past year. I don't think we've ever supported so many programs going through this due diligence process at once. And it's not a coincidence.

Again, I think it demonstrates the realization in the industry that in 2026, 2027, if you don't have one of these products, you're going to be outdated, and your brand might be compromised, which is the most important thing for OEMs. So I don't want to go into when the next announcement will be or who it will be with. Looking at this from a 30,000-foot lens, the timelines are ultimately subject to the OEMs making a decision. It can be a month, three months; it can take them a couple more weeks. If you think of this as a long-term thing, eventually we believe that a lot of these flags will be converted and revealed within months from now.

And when that happens, from our experience, that becomes a decision that sits in place for a long period of time afterwards for these OEMs. It will start gradually, maybe with a few vehicle models, but then it will expand and expand, and it will become their next-gen ADAS offering or autonomous vehicle offering within three years, four years, and onwards. So that's, I think, the way to look at the progression in the past year. We have many more opportunities, and the flags here represent large and reputable OEMs. These are not small startups that have never produced a car. And seeing more traction in Europe, the United States, Japan, and Korea is really something that can help us create growth in the future.

Now, let's look at a case study of what it will mean if and when an OEM selects the next-gen products from Mobileye. What will it mean for our revenue? This is a large OEM that sells different types of cars at different price points. Today, as we saw a few slides ago, there is probably only one product variation that they can buy from Mobileye: the EyeQ chip for the front-facing camera. So today, essentially 100% of their volumes, regardless of whether it's a $30,000 car or a $50,000 car or more, are based on a single EyeQ: EyeQ3, EyeQ4, EyeQ5, maybe EyeQ6 Lite starting this year. And that is the entire composition of the revenue.

What we are seeing, and this is based on RFQ volume, on actual planning volumes from a case-study OEM, is a much more distributed composition of products: from just base ADAS, to base ADAS for maybe half of the cars, surround ADAS for a third of the cars, and SuperVision and Chauffeur for the remaining part. What it means for revenue is maybe a three- to four-time increase in the revenue from that OEM. And this is very, very important. This looks at, let's say, the midterm. This is not a long-term projection 10 or 15 years ahead, in which maybe SuperVision and Chauffeur can become all of the volumes.
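[Editor's note] The blended content-per-vehicle arithmetic behind this case study can be sketched as follows. The mix shares and midpoint prices below are illustrative assumptions drawn from the ranges discussed elsewhere in the presentation, not actual planning volumes; naive midpoints land toward the upper end of the per-customer multiples management cites, and the actual mix and pricing determine where in that range a given OEM falls.

```python
# Hypothetical blended content-per-vehicle math for a case-study OEM.
# Shares and ASPs are illustrative assumptions, not company figures.

today = {"base ADAS": (1.00, 50)}  # product -> (share of volume, ASP in $)

future = {
    "base ADAS": (1 / 2, 50),
    "surround ADAS": (1 / 3, 175),            # midpoint of $150-$200
    "SuperVision/Chauffeur": (1 / 6, 1_350),  # midpoint of $1,200-$1,500
}

def blended_asp(mix):
    """Volume-weighted average content value per vehicle, in dollars."""
    return sum(share * asp for share, asp in mix.values())

uplift = blended_asp(future) / blended_asp(today)
print(f"blended content per vehicle: ${blended_asp(future):.0f} vs "
      f"${blended_asp(today):.0f} (~{uplift:.1f}x)")
```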

This is just a couple of years in the future, when these products get to SOP and are adopted. The impact on revenue for this specific customer will be very significant in the midterm. Of course, it is subject to the percentages of this product versus that product. But what we're seeing from multiple OEMs is roughly the same planning strategy: surround ADAS becomes the heavy lifter for the base volumes, and the premiums are SuperVision and Chauffeur, though in some cases more than just the premium cars for specific markets. Then if you look at the growth stages of the company as a consequence, we consider three stages to the maybe 10-year growth story of the company.

Stage one is where we are today, in which the revenue primarily comes from base ADAS. That is, the vast majority of the revenue we're generating is still from the existing product. The growth is, let's say, moderate, roughly linear with the ADAS adoption rate and with the sales of our customers. And the composition of the revenue is, as I said, mostly base ADAS. But in stage two, which starts in 2026 onwards, we will have a broader array of growth engines, these different products with different price points. And that can create the four-fold increase that we just saw. What we are seeing is that, on average, it can be anywhere between a two-time to maybe six-time increase per customer in this stage.

Stage three is maybe somewhere farther on the horizon, in which our revenue will mostly come from increased adoption of eyes-off, no-driver, true-autonomy products that will have broader ODDs and lower costs. As Amnon mentioned at the beginning, Gen 2 of our products is designed for cost reduction, which will enable us to launch SuperVision not just in premium cars but also in the $30,000 and $20,000 segments. So I think what is important is that we have seen evidence of broad adoption and a realization that OEMs want these advanced products. They need these advanced products, and they need them within the next two to three years and onwards.

Because of the criteria that I covered earlier, we're considered to be the best possible option when it comes to the technical rating, the commercial rating, the execution risk, and our capacity to support what they need within two to three years. Just to recap and give you some numbers, a sense of scale, and the impact of each growth engine: base ADAS, a very mature product in a mature market, which we sell for $50 on average, has around 250 million units in the pipeline. This is nominated volume from our customers for a decade-plus into the future. The RFQ volume that we are now competing for on base ADAS is 80 to 90 million units.

Now, in hands-off, which is surround ADAS and SuperVision, the price points are higher. For surround ADAS it is $150-$200, because we sell the EyeQ6 High, more content, more sophisticated software. The volume that we have in RFQ is 25 million already today. And this is early stages in the creation of this segment; we have seen this development just starting in the past year, and it started with the four OEMs listed here. We expect this to increase and to become more broadly adopted and recognized as the next standard in ADAS by more and more OEMs. In SuperVision, where the price point is $1,200-$1,500, we have 8-16 million units in RFQ volumes. This number has increased significantly compared to a year ago.
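[Editor's note] The unit and price figures above imply rough lifetime revenue potential per pipeline bucket. The sketch below uses range midpoints, which is an editorial assumption; none of this is company guidance.

```python
# Illustrative midpoint math on the pipeline figures cited above.
# Prices and unit counts come from the discussion; range midpoints
# are an assumption made for illustration only.

def midpoint(lo, hi):
    return (lo + hi) / 2

# product -> (avg price per unit in USD, units in millions)
pipeline = {
    "base ADAS (booked)":  (50, 250),
    "base ADAS (RFQ)":     (50, midpoint(80, 90)),
    "surround ADAS (RFQ)": (midpoint(150, 200), 25),
    "SuperVision (RFQ)":   (midpoint(1_200, 1_500), midpoint(8, 16)),
}

for name, (price, units_m) in pipeline.items():
    revenue_bn = price * units_m / 1_000  # $ x millions of units -> $B
    print(f"{name:22s} ~${revenue_bn:5.1f}B lifetime revenue potential")
```

At these midpoints, booked base ADAS alone works out to roughly $12.5 billion of lifetime revenue potential.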

More and more OEMs, the flags that I showed you earlier, are realizing that they have to have a competitive intelligent driving product. And I think what is also important is that the Mobileye Drive program has picked up more and more interest. Again, the Volkswagen Group really helped us gain credibility as a leading player in this arena. And we have another very, very interesting prospect with a very big OEM for a similar collaboration. This business model, where we are selling the self-driving system at a very affordable cost that can really pave the way for a sustainable business and not just a proof of concept, is attractive for OEMs because it can help them create a new business model for themselves. They will buy the self-driving system.

They will sell the car or partner with a mobility operator and start deploying these services globally. And the price point that we have allows them to make a profit out of it, which is really, really important. And I don't think that there is anyone competing with us when it comes to the system cost that we can enable this. So that's it. Good, on time?

Operator

Thanks so much. Thank you, Nimrod. I'll bring up Moran, and then after Moran we'll take maybe a five-minute break while we change the stage around, and then we'll do the Q&A. Thanks so much.

Moran Shemesh
CFO, Mobileye

OK, thank you, Dan. And good afternoon, everyone. I'll try to demonstrate in numbers what was discussed throughout the day: the ADAS business, the advanced products, and also touch on operating expenses and operating leverage. So just to start with Mobileye in some numbers: over the last 12 months through Q3 2024, we delivered 31.3 million systems, meaning EyeQ systems and SuperVision systems, and generated $1.8 billion of revenues. That also included some inventory digestion, as you're all aware, mainly in the first half of 2024, an issue that we believe is mostly behind us. Moving to headcount: over 3,900 employees around the globe.

In the last few years, we really increased our headcount and expanded our capabilities in terms of silicon development, core software, our radar team, and of course the productization and development of our advanced products: SuperVision, Chauffeur, and Drive. In the past five years, we've almost doubled the number of employees. We now feel we have the right resources in place, in terms of the scale of employees, to support our design wins and our advanced products, what we have in the pipeline and what's in front of us. The growth we're expecting in future years isn't as big as what we experienced in the last five years. Moving to execution: we have a long history of execution, 190 million vehicles with our chip, over 1,200 vehicle models. That's our history.

Lastly, operating cash flow. We generate a very significant amount of cash. In the last five years, we've generated over $2 billion of operating cash flow, which was a little over 100% of our adjusted net income. On top of that, we also had capital expenditure of approximately $500 million, but still a very consistent generation of cash against our profit. Moving to the business: the left side is a 2024 forecast ADAS revenue breakdown by OEM. You can see that our ADAS revenue is concentrated in our top eight or nine OEMs, which make up approximately 90% of our revenues. These OEMs, in turn, make up approximately 50% of the industry.

Within these OEMs, we have a very high market share in their ADAS portfolios. So a very strong business, as Nimrod also mentioned, in terms of ADAS. The takeaway is that when we project our ADAS volume, we really need to look at the specific expectations of these OEMs. Our top 8, 9, 10 OEMs, in terms of their expected performance, are a really huge factor. Just to give an example: in Q3 2024, year on year, total industry production was down 5%. Our top 9 or 10 customers were down 9%, more than the industry. Our deliveries decreased by only 4%, which means we increased our share with most of these OEMs, even while staying roughly in line with the industry overall.

But again, this is something to take into account: the concentration of our ADAS activity. On the right side is the total forecasted revenue for 2024, including SuperVision, spread by region of shipment. It's not necessarily where the product is sold, just where it ships, but it pretty much represents the spread. China is estimated at 26% for 2024, down from 31% in 2023. So this is a reduction both in absolute revenue and as a portion of total revenue. SuperVision revenue is approximately a third of that 26% from China, so we forecast it to be 8% or 9% of our total revenue in 2024.

We also mentioned on our Q3 earnings call that Q4 is expected to be the lowest in terms of SuperVision volume; it's expected to be only 3%-4% of our total revenue for Q4. This is also something to bear in mind when we look at China. Of that 3%-4%, I would also mention that half is coming from Polestar, a brand mainly exported to Europe and not really sold within China. This is the non-SuperVision breakout of our China business, so again, just the ADAS business. What you see here is that we can break our ADAS China business into two pieces. One piece, at the bottom, is China OEMs: Great Wall Motors, Geely Group, Chery, domestic OEMs in China.

The second layer is foreign OEMs, global OEMs producing in China: VW, GM, Hyundai, et cetera, many OEMs that sell in China. And the third layer, the darker green, is just the rest of our volume, so you understand the scale of each piece in each quarter. I should mention that Q4 2024 is, of course, a forecast based on our last guidance from October; all the rest is actual volume. So when we look at the global OEMs selling in China: in 2024, we are expected to ship 4.4 million chips versus 7.2 million chips in 2023. Of course, there was some inventory digestion in that.

But the more interesting factor is that global OEMs' sales in China are expected to go down at least 15% from 2023 to 2024. This trend, the underperformance of global OEMs in China, is not something we can indicate will change in 2025, so it is definitely something we take into account in our forecast for 2025 and will take into account in our guidance. On the China OEMs, the bottom layer of local OEMs in China, we are expected to ship 1.5 million chips in 2024 versus the 3.5 million we had in 2023. The dynamic in China, of course, is different, and Nimrod also spoke about it: we're talking about low-priced chips, lower performance, lower ADAS requirements within China.

So here, it's a more challenging environment for us in terms of our share and in terms of the race to the bottom on pricing. And although you can see some ramp in China OEM volume in the second half of 2024, which we did not expect at the middle of this year, we're still going to be very cautious about China volume for 2025. We don't have very high visibility into these volumes, so again, it's definitely something we are taking into account in our projections. Moving a bit to the future, Nimrod also mentioned these numbers. Here you can see the flags again. This is only the ADAS business, historical and future. Each column represents one of our top nine OEMs.

On the right side, you can also see the Chinese domestic OEMs in aggregate, and the rest of the world. There are three layers here. The first layer is what we historically shipped: 190 million. The second layer is nominated or booked business: 250 million, as Nimrod also mentioned. And the third is between 80 and 90 million in terms of RFQs or negotiation discussions. I think the main takeaway from this slide is, again, our strong position in ADAS. We see very low competition across our top nine or 10 customers. You can also see that, for most of these columns, the future volumes are higher than our historical volumes. It means we keep winning the business. In terms of competition, we hardly see any.

Another takeaway is that the China OEMs make up approximately 6% or 7% of the total future volume, which limits our exposure to local China OEMs. As for the rest of the world, at least for our booked business, this mainly relates to India. Again, this gives a sense of past and future ADAS volume. Speaking about India, this is still ADAS: these are the adoption rates expected for total 2024. On the left side, you can see the adoption rates in each territory against total production volume. The light green represents the no-ADAS portion. For example, India is 92% no ADAS.

On the right side, we zoomed in on only the no-ADAS part to show how it spreads across the different territories and countries. For no ADAS, you can see China makes up almost 50% of it; these are mainly low-cost cars. India is second, with almost 25% of the no-ADAS volume, and again, over 90% of Indian production has no ADAS, a very high percentage. For India, we launched a program with Mahindra a couple of years ago that is going well, with further expansion with that customer and new regulation in India. And just to show the potential of this business: at Western adoption rates, it could reach almost 5 million chips a year of opportunity just from India.

This is with 5.6 million cars expected to be produced in 2024, which is very low compared to the population of India, and even so, it's almost 5 million chips. We were one of the first to get into India with ADAS solutions, and we really feel the market growth of India is something we can capture in future years. Of course, it won't happen overnight, but we can capture a significant portion of the growth in India. We've talked a lot about ADAS, our strong position and our opportunities; this is for the advanced products. Nimrod went through the different advanced products. What we try to do here is to put into economics what we have in terms of design wins.

So we took the total potential RFQs we have and made an average for a specific year. For a specific OEM, this is an average peak-year volume; it's not a first-year volume. So you can see the range of quantities, and these are yearly quantities. Also an average of the price, which Nimrod also showed, so this is the price range. And the potential revenue and gross profit this could lead to, at the midpoint, of course. Again, when you're launching a program, this is not a first-year volume; usually the peak volume comes three, four, five years in, so typically from three years onward we see this peak volume.
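[Editor's note] The calculation tool described here can be sketched in a few lines. The example volume, ASP, and gross margin below are hypothetical placeholders chosen for illustration, not company figures.

```python
# A back-of-the-envelope estimator for a design win's peak-year impact,
# in the spirit of the slide described above. Inputs are placeholders.

def design_win_peak_year(units, asp, gross_margin):
    """Return (revenue, gross profit) in USD for a peak production year."""
    revenue = units * asp
    return revenue, revenue * gross_margin

# Hypothetical example: a SuperVision win ramping to 400k units/year at
# a $1,350 midpoint ASP, with an assumed 50% gross margin.
rev, gp = design_win_peak_year(400_000, 1_350, 0.50)
print(f"peak-year revenue ~${rev / 1e9:.2f}B, gross profit ~${gp / 1e9:.2f}B")
```

Swapping in the announced volume and price ranges for any given design win gives a rough sense of its annual financial impact once it reaches peak production, typically three or more years after SOP.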

That's just to give a sense of the potential we have in future design wins. We won't necessarily be able to say for each design win how the volumes are spread between the segments or what the exact pricing is, so when we announce a design win, this gives you a tool to estimate its financial impact as it goes along. The last slide is our operating leverage, our operating expenses. On the right side, you can see the breakdown of our R&D expenses by activity. We carve out the ADAS portion, which includes the EyeQ6 Lite platform, the main platform for our SOPs in the next few years, some program-execution work, and part of our development infrastructure.

That all sums up to 25% of our operating expenses. The rest, 75%, has to do with our advanced products; that 75% is more than $600 million a year. We're talking about next-gen EyeQ, which many talked about today: the EyeQ6 High platform, and also EyeQ7, which we are putting resources into in 2024 and 2025; Chauffeur, Drive, SuperVision. This is, of course, the software heavy lifting of our new products, building the hardware capabilities through the productization process, our radar activities, and some development infrastructure to support all that. With this spread of operating expenses, as I mentioned on the headcount slide, we really feel we have the resources both to complete the development of our advanced products and to get to the execution phase, the integration with the different customers, the different OEMs.

We really feel we have the right resources for that. I would also mention that 80% of what you see here, payroll expenses included, has to do with internal resources. So of course this cannot remain flat over the years, as we have some inflationary effect, but we wouldn't expect the same OpEx growth we had in recent years, as everything is in the right place in terms of allocation and capabilities. Which leads me to the left part: in 2024, and probably also 2025, our operating expenses are more than 50% of our revenue, which is a heavy ratio of operating expenses to revenue.

By the end of the decade, with the design wins that we have in the pipeline and our opportunities, we believe we can reduce that to 30%, gaining 20 points of operating profit, which is big leverage. And this is by the end of this decade; we're not talking about 10 years from now. So that's it. Thank you.
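[Editor's note] The operating-leverage claim described above can be illustrated with a toy model: if OpEx grows only with modest inflation while revenue compounds, the OpEx-to-revenue ratio falls from the mid-50s percent toward 30%. The growth and inflation rates below are purely illustrative assumptions, not guidance.

```python
# Toy model of operating leverage: roughly flat OpEx vs compounding
# revenue. Starting figures approximate the ~$1.8B trailing revenue
# mentioned in the talk; the rates are editorial assumptions.

opex = 0.55 * 1.8e9   # ~55% of ~$1.8B trailing revenue
revenue = 1.8e9
inflation = 0.03      # assumed annual payroll inflation
growth = 0.18         # assumed annual revenue growth

for year in range(2024, 2030):
    print(f"{year}: OpEx/revenue ~ {opex / revenue:.0%}")
    opex *= 1 + inflation
    revenue *= 1 + growth
```

Under these assumed rates, the ratio drops below 30% by 2029, matching the shape of the end-of-decade target.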

Dan Galves
Head of Investor Relations, Mobileye

Okay, let's take like seven or eight minutes, and then maybe at five after four we'll come back and do some Q&A. Okay, let me know when we're live. Okay, so the final segment of the day is Q&A. We'll start with Tom. If you could, just wait for the mic, since we're live-streamed here.

Speaker 11

Thanks, guys. Two questions on that. One is just housekeeping, on the slide with the premium OEM wins, the ones that are already won: I noticed FAW wasn't there. Is that?

Dan Galves
Head of Investor Relations, Mobileye

I think we're just being kind of cautious about Chinese OEMs. It's difficult to predict kind of timing and volume, and we're focused more on our top 10 customers.

Speaker 11

And then the other question. We all know Mercedes-Benz and BMW both have Level 3 approval in Germany, California, certain states. Their product maybe isn't as good as yours. But when it comes to regulatory, the thinking is that it's hard to sell safety. So would OEMs just go with whatever they need to get approved? And clearly, these two OEMs got approved for Level 3. So what is your advantage when you pursue big OEM wins like that? Is it that the product is better, or is it a cost issue, the cost at which you could sell to other OEMs?

Amnon Shashua
CEO, Mobileye

Not all Level 3 systems are the same in terms of ODD. The Level 3 that has been approved so far is 60 km-per-hour driving. There needs to be a lead vehicle, it's only daytime, and it works not on all highways, only very specific highways. So it's kind of the early days. What we are working on with Audi, the Chauffeur, is a completely different order of magnitude in terms of ODD. So think of it as the next generation of Level 3 systems that will come out in that particular time frame. It's a matter of usability, how useful it is in terms of value proposition to the driver.

If you have a system that works on any highway, where you can drive 130 kilometers per hour without constraints, with no lead vehicle required, daytime and nighttime, and you can really do something else with your eyes off, we're talking about a completely different value proposition than today's Level 3 systems. I think other carmakers like Mercedes and so forth are also planning to upgrade the ODD of their systems in that time frame.

Nimrod Nehushtan
EVP of Business Development and Strategy, Mobileye

I'll just add that it's not just eyes-off on highways that our product offers. Just driving here today, you saw how much of the drive was not on highways but on interurban roads, maybe more like country roads, not explicitly defined as highways. You don't want to design a system that is confined to a very specific definition of a highway and otherwise cannot offer any benefit to the driver. Our product is designed so that on highways it offers eyes-off up to 130 kph, but outside of highways, you have the best possible hands-off experience on all road types. So it's much more than just a Level 3 system up to 60 kph, which is what you can find in the market today.

Dan Galves
Head of Investor Relations, Mobileye

Next question, please. Let's go with Ivan in the middle.

Amnon Shashua
CEO, Mobileye

Just to mention, the BMW Level 3 system is Mobileye's front-facing camera technology.

Ivan Feinseth
Analyst, Mobileye

Hi, Ivan Feinseth, Tigress Financial Partners. Great presentation today. I have a couple of questions. One, on the mapping: do you think at some point there will be an industry standard? A lot of different companies are mapping, and you seem to have a big advantage in total miles mapped. Is there an opportunity for you to sell your data or to create some kind of industry standard where they use your data? And then during the presentation at your factory today, we talked about insurance. Do you think at some point insurance companies will evolve to have a special policy for autonomous vehicles? Because the concern is that today most accidents are driver fault, usually the driver not paying attention.

The probability of an actual AI failure is much lower, but it could blow back on the manufacturer. So could the insurance companies work with you to create special insurance to cover autonomous vehicles and somehow protect the manufacturers, and companies like yourself, from liability? You could be involved in creating that.

Amnon Shashua
CEO, Mobileye

I'll start with the second part, about insurance. I think once the MTBF of a system is proven, what the probability of an accident is for an eyes-off system, then insurance companies can do their calculations and reduce the cost of coverage. If it's less likely, say a factor of 10 less likely, to be involved in an accident, and it's proven through data, through experience, and so forth, then of course it will be reflected in insurance. Whether we will enjoy something out of it, we're not planning on it; if we have benefits, it will be a bonus. As for the first part of the question, we have not built a mapping system that is exportable. It's really tied very, very strongly into the rest of our stack.

Trying to export this mapping system would incur a lot of work, because every user would have different requirements, and we would have to meet all sorts of requirements that our own system does not need in order to perform well. And I'm not sure that the mapping business is a very rewarding business. I think our business is much more rewarding.

Nimrod Nehushtan
EVP of Business Development and Strategy, Mobileye

If I can just add to this: I don't think we should look at REM as just a pure map solution. Maybe a better way to think of it is as memory for a human driver. You can still drive without a prior understanding of the road you're driving, but you're probably much more confident and you drive better, more consistently, in places where you have a very good understanding of what's expected even before you see it, like your daily commute, for example. And we built this database, this memory, in a way that is very closely knit to the brains of the system.

We use this strategically to enhance our performance and to create a very strong competitive advantage, because if you don't have this database, which takes time to build and requires millions of cars uploading data, and we have had this in place for a few years now, your system has a much lower glass ceiling of performance.

Ivan Feinseth
Analyst, Tigress Financial Partners

Thank you.

Dan Galves
Head of Investor Relations, Mobileye

Let's start with Emmanuel and then Aaron in the back next.

Speaker 11

Thank you so much. Two questions. The first one: can you speak a little bit more about the various drivers of adoption in the regions where you're operating? What will prompt OEMs to pull the trigger? I'm looking in particular at your excellent slide with the flags, and it was pretty striking that SuperVision seems to be very heavily Japanese automakers, while surround ADAS seems to be European automakers. What accounts for these regional differences? Is it consumer adoption? Is it regulation? And how should we think about the U.S.?

Nimrod Nehushtan
EVP of Business Development and Strategy, Mobileye

I'll start with the surround ADAS part. Surround ADAS is driven not just by a desire to have a unique new feature, but also by regulation and push from regulatory bodies in Europe, and you saw more European flags in the surround ADAS row. NCAP 2028, the next milestone in Euro NCAP, significantly increases the requirements for a five-star rating, for example, in a way that necessitates a more sophisticated system with surround perception. More and more OEMs understand this, and now that there is a cost-effective solution, it creates this increased adoption. When it comes to the SuperVision row, in addition to a global realization that intelligent driving is important, the U.S. market specifically seems to show more accelerated adoption of these systems, I think because of Tesla among other things, maybe as the leading factor.

And for the Asian OEMs, the U.S. market is extremely important, as important as their domestic market in some ways. So I think this need to compete in the U.S., and also in Europe, China, and globally, is what created this, maybe more so for the Asian OEMs that have more sales in the U.S. market.

Speaker 11

And then just a quick follow-up. You highlighted a timeline for how long it takes to discuss and validate with these OEMs until you sign, until start of production. You used to give us a forecast for SuperVision units; obviously, we're not going to go there, for all sorts of good reasons. But realistically, when would we see an inflection point in SuperVision volumes if some of the companies you're speaking to materialized?

Dan Galves
Head of Investor Relations, Mobileye

Yeah, I mean, I think that the first launch of a SuperVision system with a non-China OEM happens in 2026. But obviously, with every launch year there is uncertainty; if things shift by a month or two, that changes volume a lot. So that will be an inflection point. The other engagements we're in are primarily for 2027 SOP, but again, the timing within 2027 is uncertain. We do think that we can have much more significant visibility on the trajectory of SuperVision once we get through this round of very significant opportunities.

Nimrod Nehushtan
EVP of Business Development and Strategy, Mobileye

And I think today we disclosed a lot. We're very transparent with numbers and data that we have good confidence in. As for more visibility into how it will scale and when we'll disclose that, I think it's maybe early.

Dan Galves
Head of Investor Relations, Mobileye

Aaron, please.

Aaron Rakers
Analyst, Wells Fargo

Yeah, Aaron Rakers with Wells Fargo. I'm going to build on that because I think you gave a lot of great numbers. I'm thinking about your slide that talks about incremental advanced engagements, RFQs. How do we think about the duration? How far out do you typically see those RFQs? And, given your confidence in that visibility, how do we contextualize those numbers in terms of when and how you thought about rolling them up? And then I'll throw out my second question real quick. As we think about Chauffeur and Drive, I know it's longer-term in nature, but is there a software license element to this story at some point, on robotaxi for example, as another layer of monetization aside from just the ASP? Thank you.

Amnon Shashua
CEO, Mobileye

On the second part, there is maintenance. You can call it a license fee, but there is a yearly maintenance fee on the robotaxi self-driving system. REM has its annual license fee, which is on top of the piece price. It's a few tens of dollars per car per year, something like that. The first part I'll hand to Nimrod.

Nimrod Nehushtan
EVP of Business Development and Strategy, Mobileye

So basically, the RFQ volumes represent the quantity of cars for the few vehicle models that the OEMs are making the sourcing decisions for. They make a decision today for maybe three years, five years, seven years, 10 years out. Each OEM has its own planning processes. Of course, in reality, it's subject to how many cars they will actually sell in a given year. So it's hard to predict exactly how many cars of a specific platform in a specific market OEM A will sell in three years or in five years. So we don't want to give forecasts that are too far into the future. But we think that, let's say, the end of the horizon for us is the early 2030s. So this is roughly where the last final pieces of the volumes are expected to be shipped.

Dan Galves
Head of Investor Relations, Mobileye

Yeah, and just to add one more sentence from an IR perspective, we get a lot of questions about the representative size of these different products in terms of a market. So what we wanted to do was look at all the projections that we're seeing in real RFQs and get to basically an average per program once it gets to scale. So it's not intended to be any sort of forecast, but just to give a sense of the framework, the scale of these different products. And as you could see, surround ADAS is much bigger because it's more of a mid-trim, mid-priced vehicle product.

Amnon Shashua
CEO, Mobileye

Yeah, I'll just add that there are two inflection points here. One is the inflection point you refer to, in terms of when the volume rises. The other is design wins on these advanced products. So far, outside of China, we have the Volkswagen Group, which is very meaningful. And I think the market is waiting to see more proof points: is it only the Volkswagen Group that is investing in these advanced products, or other global OEMs as well? And I think this inflection point should happen quite soon. The more OEMs we can announce that have surround ADAS and SuperVision, the more that will create this first inflection point. And then comes the volume inflection point.

Dan Galves
Head of Investor Relations, Mobileye

Let's go with Chris and then Joe.

Speaker 11

Great. Thanks so much. Question on regs. With the incoming new U.S. administration, there seems to be some focus on federal regulation versus state-by-state control. So maybe, Amnon, what would be the ideal regulations that you would like to see, if they were to come at the federal level, to accelerate AV adoption within at least the U.S.? Or you could go globally.

Amnon Shashua
CEO, Mobileye

I think stronger federal regulation, rather than management state-by-state, would be very beneficial to anyone who wants to launch autonomy in the U.S. And it seems like it's going that way with the new administration, so I think this would be a great positive for us. Anything that can create confidence in terms of the KPIs, meaning which KPIs you need to meet in order to be compliant, rather than just launching the system and seeing where it goes, would also be a positive for us. Any statement of what the required MTBF is, for example. Right now, it's up to the OEMs to decide how far superhuman you should be in terms of the MTBF. I think some regulatory guidance there would also be very beneficial. So I think what's happening in the U.S. is going to be a very positive development.

Speaker 11

And a quick follow-up: you're really one of the only companies that uses numbers when it comes to MTBF. How would an independent or federal body, in your opinion, measure that, particularly pre-service launch, when you can't do on-the-road testing?

Amnon Shashua
CEO, Mobileye

So what we have agreed with our OEM partners is that when you launch an eyes-off system, you first launch it eyes-on and, depending on the size of the fleet, collect data on incidents, mean time between interventions, and so forth from the fleet, which is a production fleet. Then you can prove what the MTBI really is, the mean time between critical interventions. And once you reach the required bar, and again, there is no wide agreement on what that bar is, you can prove to the regulator: this is the bar that I have reached. And then you can kick in the eyes-off feature.
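The validation flow described above, logging critical interventions from an eyes-on production fleet, estimating the mean time between critical interventions (MTBI), and comparing it against a required bar before enabling eyes-off, can be sketched as follows. All fleet hours, event counts, and the threshold here are hypothetical illustrations, not Mobileye figures, and no agreed regulatory bar exists.

```python
# Sketch of the eyes-on validation flow: estimate the observed mean time
# between critical interventions (MTBI) from fleet data, and gate the
# eyes-off feature on meeting a required bar.
# All numbers below are hypothetical, for illustration only.

def mtbi_hours(total_fleet_hours: float, critical_interventions: int) -> float:
    """Observed mean time between critical interventions, in hours."""
    if critical_interventions == 0:
        # No events observed: the MTBI is at least the total exposure.
        return float("inf")
    return total_fleet_hours / critical_interventions

def eyes_off_allowed(total_fleet_hours: float,
                     critical_interventions: int,
                     required_mtbi_hours: float) -> bool:
    """Eyes-off may be enabled only once the observed MTBI meets the bar."""
    return mtbi_hours(total_fleet_hours, critical_interventions) >= required_mtbi_hours

# Example: 2,000,000 eyes-on fleet hours with 4 critical interventions,
# checked against a hypothetical 100,000-hour requirement.
print(mtbi_hours(2_000_000, 4))                     # 500000.0
print(eyes_off_allowed(2_000_000, 4, 100_000))      # True
```

In practice a regulator would likely also demand a statistical confidence bound on the estimate, not just the point value; the sketch shows only the basic gating logic.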

Speaker 11

So you could provide shadow data, basically.

Amnon Shashua
CEO, Mobileye

Exactly.

Speaker 11

Yep.

Amnon Shashua
CEO, Mobileye

Thanks.

Dan Galves
Head of Investor Relations, Mobileye

Back to the timeline question, because I guess I just need a clarification. We've obviously seen some delays. You put out roughly a nine-month to year-and-a-half timeline. Are you saying that the delays have been within that timeline? Because it feels, at least to me, and maybe to others outside, that it's been a little bit beyond that. And if we look at your stages, is the delay more in the due diligence phase or in the negotiation and decision-making phase?

Nimrod Nehushtan
EVP of Business Development and Strategy, Mobileye

I think it depends on the specific project. It depends on when our engagement with that OEM started and what happened in this time period. So it could have been the case that OEM A had a plan a year ago to deploy next-gen ADAS with its next-gen EV product. And then within the last year, there has been a lot of discussion in the industry about the future of EVs, for example. And if we were in the midst of the very intensive due diligence stage, then maybe there is no sense for the OEM to continue at the same pace until they make up their minds on what the target platform to launch this system will be, right?

And if the designated platform is not going to be launched at the same point in time with the same architecture, then why continue with the same high sense of urgency? Maybe let's pause, go back to the drawing board, make up our strategy again, and then resume the process. So that is what happened, and it's not one answer for all engagements. I think that in general, the delays are mostly behind us. When it comes to the factors I mentioned earlier, of course, new things can always come up, and these decisions can always take a month or two longer because of whatever internal reason. But the big macro things are, I think, mostly resolved in terms of how they affect such a decision for them.

Dan Galves
Head of Investor Relations, Mobileye

The follow-up is on the other scale you provided, which I think was technical solution, execution risk, and commercial rating, and that's sort of how you're judged. Are you able to provide any context on how Mobileye scores on those, or where the hang-up generally is? Is it on the commercial rating side? Because I can understand why that would be the case given affordability and other concerns. But it also feels like, with some of the competitive pressures, that seems to be something that's easing. I think you alluded to some of those conversations.

Nimrod Nehushtan
EVP of Business Development and Strategy, Mobileye

So in some of these engagements, there are no other alternatives being examined. It doesn't mean that by default we'll get the nomination; it means that we still need to prove some points. But I think it's an important point to recognize. It's not like there are five different companies that do the same thing in a neck-and-neck competition over who does it better, where a one-point difference can tilt the outcome. In those engagements where there are competing solutions, from our experience, and it's hard for me to be perceived as objective here, obviously, we have the A-plus technical rating no matter which product. And that is not just a reputation we have; we are proving it with our technology, with our system today, and with the progress we're making.

And there is no other company that has the same depth and breadth of technology. Mostly, it's about execution risk, because that involves not just us. It also requires some things from the OEMs. They also need to understand whether they have the capacity and the competencies to integrate this system, to launch it, to validate it, to get to where they want to be. And that takes time, and it can be more of a back-and-forth discussion with them. The commercial side is, of course, a kind of tug-of-war. Normally it ends with both sides feeling like they had a bad deal, which probably means a good deal, but I think it's resolvable. We didn't see this as a roadblock so far.

Dan Galves
Head of Investor Relations, Mobileye

Nick in the back, and then I think we'll probably have to wrap it up after Nick.

Nick Doyle
Equity Research Analyst, Needham

Hey, Nick Doyle with Needham. Thanks for taking my question. A gross margin question. Your individual OEM example, which assumes really solid units, implies SuperVision gross margin holds at around 45%, even with the scale and maybe the assumption that you've moved to the Gen 2 cost-down solution. And the CAV gross margins are implied at 60%, which I think is a good amount higher even with the higher hardware requirements. So I would think that SuperVision could go a little higher, but CAV a little lower. So just asking, why are those the right numbers to think about? Thanks.

Moran Shemesh
CFO, Mobileye

Yeah, so I'll take this. First thing, this is a midpoint of the gross margin. So for some transactions there is more content, and the gross margin can reach 40%; for some it's 42%-43%. So we tried to put the midpoint, not something exaggerated in terms of gross margin. Second thing, the slide didn't include the Drive domain. In Drive, and also in ADAS, margins are higher: in ADAS we are at almost 70%, and Drive is also higher than the margin we presented on the slide. So that balances out to what we said two years ago, the long-term gross margin that we are expecting. So it is aligned with our expectation, because this business that you saw, again, has more hardware content, where gross margin is obviously lower.

Surround ADAS, for example, has a higher gross margin, of course, with less hardware content. We're talking about one EyeQ6 High with some additional components. But still, it's just a part of our portfolio.
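The portfolio math in this answer, hardware-heavy Tier 1 products at mid-40s margins offset by higher-margin chip-centric products, amounts to a revenue-weighted blend. A minimal sketch, with purely hypothetical revenue splits and margin figures that are not Mobileye guidance:

```python
# Revenue-weighted blended gross margin across a product mix.
# Product names echo the discussion; all revenue shares and margins
# below are hypothetical, for illustration only.

def blended_margin(products: dict[str, tuple[float, float]]) -> float:
    """products maps name -> (revenue, gross margin as a fraction)."""
    total_revenue = sum(rev for rev, _ in products.values())
    gross_profit = sum(rev * gm for rev, gm in products.values())
    return gross_profit / total_revenue

mix = {
    "base ADAS":     (100.0, 0.70),  # chip-only, Tier 2: high margin
    "surround ADAS": (50.0,  0.55),
    "SuperVision":   (80.0,  0.45),  # full ECU, Tier 1: more hardware content
}
print(round(blended_margin(mix), 3))  # 0.58
```

The point of the blend is that a mid-40s margin on one hardware-heavy line can coexist with a higher long-term corporate average, depending on mix.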

Dan Galves
Head of Investor Relations, Mobileye

Just to follow up, on SuperVision specifically, I think you can think of it as maybe a fairly significant incremental price for us when going from two EyeQ6 High to three EyeQ6 High. So only one additional chip, and you would expect some synergies in the ECU hardware as well, so I think that explains the higher gross margin. Again, these are not definitive numbers, but they're based on what we're seeing in the actual engagements we're having with customers.

Amnon Shashua
CEO, Mobileye

Yeah, but also to put this in perspective, there are programs where we are acting as a Tier 2 and programs where we are a Tier 1. As a Tier 2, the gross margins are much higher: we're selling a chip, and we can have a much higher gross margin, 70% and higher. As a Tier 1, naturally, the gross margin should be lower, and I think 50% or 45% is really very high for a Tier 1 offering. So higher revenue, right? But gross margin naturally should be lower when you're selling a piece of hardware in which only one component is yours and all the rest are off-the-shelf components. You cannot charge a big gross margin on something like that.

Dan Galves
Head of Investor Relations, Mobileye

Any follow-up, Nick?

Nick Doyle
Equity Research Analyst, Needham

No, thank you.

Dan Galves
Head of Investor Relations, Mobileye

Okay, maybe we'll do one more for James.

James Picariello
Head of U.S. Autos, BNP Paribas

James Picariello, BNP Paribas. Question on China. There's the clear success story among the Chinese domestics in in-sourcing their ADAS. What factors prevent other OEMs from reaching that same success, that same tipping point on in-sourcing?

Amnon Shashua
CEO, Mobileye

Again, let's divide it. On base ADAS, it's simply that there are no government testing regulations. So it's basically a race to the bottom on price, with no testing regime to guarantee that the systems even work. Okay? This is why you see a race to the bottom on price, in which we do not participate. At some point, we set a limit. Even if we are losing market share there, we set a limit on how far we're willing to go down in terms of pricing. And I believe that over time, there will be government testing regulations just like you find in the West, and maybe then the situation will change. In terms of the high-end systems like SuperVision, those systems cost twice as much as the Mobileye system, just to put things in perspective.

And in all the benchmarking that we have done, they're no better than ours. So why are they willing to pay twice as much? This is something local-for-local, government subsidies, who knows what's going on there. But it's strategic. It has nothing to do with our value proposition. Outside of China, cost matters, right? You cannot sell a system at twice the price for the same performance.

James Picariello
Head of U.S. Autos, BNP Paribas

Yeah, that's actually my follow-up. There are competitors that are at 2x the cost, right? As we think about domain centralization, does it play better for that more capable, more costly chip from a competitor to host both ADAS and infotainment, where, once domain-centralized, that higher-cost chip gets spread across two domains instead of just one in terms of the compute?

Nimrod Nehushtan
EVP of Business Development and Strategy, Mobileye

So I think there are many challenges in this approach when you think about consolidating ADAS, infotainment, and other services on the same chip. It's maybe a different story when it comes to the same hardware platform. The safety and, let's say, robustness requirements on the ADAS intelligent driving stack are very, very high. And the risk of compromising this by connecting it to, for example, the streaming service in your heads-up display, where a glitch can create a dangerous event while the car is driving, is something that, from what we see, OEMs are talking about maybe as an exploration for future generations. But there are many technical challenges that need to be overcome. And when we think about, let's say, the next two, three, four, five years, I don't think that's going to be solved yet.

There is still some room for consolidation, maybe between intelligent driving, parking, and driver monitoring systems, for example, which have commonalities. But we are playing in all of these fields, and it's within our offering.

Amnon Shashua
CEO, Mobileye

I'd like to add to Nimrod's point. Driver monitoring and parking are all going into production. So we'll be able to offer a system which does intelligent driving, whether it's ADAS or SuperVision, and also performs parking and driver monitoring. We have looked into integrating infotainment, not on the same chip, not inside the EyeQ6, for the reason that Nimrod mentioned, but as a separate chip on our ECU, and so far we did not find cost synergies in such a move.

Dan Galves
Head of Investor Relations, Mobileye

Okay, I think we're going to have to wrap it up for the day. Thank you so much for dedicating so much time to us. We really appreciate it and hope the day was valuable for you. Thank you.
