This event includes forward-looking statements which are subject to risks and uncertainties. Please refer to our SEC filings, available at intc.com, for more information on the risk factors that could cause actual results to differ materially. We will not be discussing the proposed Mobileye IPO today. Today's event does not constitute an offer to sell or the solicitation of an offer to buy any Mobileye securities.
Welcome, everyone, to Mobileye's Under the Hood, our annual session on what's going on with Mobileye: past, present, and future. I'll start with 2021 in numbers. We had a record year: 41 new design wins with more than 30 OEMs, really a record figure. Those new design wins are responsible for a pipeline of 50 million cars going forward, compared to 37 million in 2020. Our revenue rose year on year by about 40%; we finished 2021 at $1.4 billion. During 2021, 188 vehicle models launched with Mobileye inside.
On the right-hand side, you can see the number of EyeQ chips that have been shipped: in 2020 it was 19.3 million units, and in 2021, 28.1 million. Overall, since 2007, we have celebrated 100 million EyeQ chips shipped to date. This means 100 million cars on the road with Mobileye's technology. During 2021, there were also some industry-first launches. We launched the first Level 3 vehicle with Honda in Japan, where we are responsible for the computer vision. There is also a 120-degree field of view with an 8-megapixel camera, a very high resolution camera, launched this year with BMW.
We are talking about cloud-enhanced driving assist using our crowdsourced technology called REM, launched with Volkswagen this year. Yesterday at the press conference, I had a chat with Dr. Herbert Diess, the Volkswagen Group CEO, and we showed under the hood of what this car can do with cloud-based assistance. The crown jewel on the Level 2 side is still our SuperVision: 11 cameras around the car, powered by two EyeQ chips with an ECU designed by Mobileye. A few thousand cars have already been shipped in China, and throughout 2022, ADAS features up to hands-free Navigate on Pilot will be delivered to this platform through over-the-air updates.
Just a few more figures. If you look at our infrastructure, we have 200 petabytes of data, partly on-premises and partly in the cloud on AWS. To give a reference for the scale of this figure: Intel, all of Intel, has 238 petabytes. We have 16 million clips covering 25 years of driving. In terms of compute, 500,000 peak CPU cores, all based on spot instances. To give you a reference figure, this is 10 times more than Skyscanner. We have 50 million monthly runs, which means 100 petabytes being processed every month over 500,000 hours of driving. So this is really very, very powerful.
The way I built this presentation: today we're in 2021; let's look four years backward, to 2017, and then four years forward, to 2025. Back in 2017, we laid down three fundamentals of our strategy. One was what we call REM mapping, a crowdsourced technology based on driving-assist systems. Every car with an EyeQ chip has the capability to send pertinent, very coarse data to the cloud. In the cloud, we have developed over the past five years algorithms to piece all this data together and create high-definition maps that power both premium driving assist and autonomous driving.
We laid down another pillar, which we call true redundancy. When we build an autonomous car, it's based on a number of different types of sensors: cameras, radars, and lidars. We build it in a way in which the cameras on one side, and the radars and lidars on the other, are separated into two subsystems that don't talk to each other. We build an end-to-end driving experience, perception, driving policy, control, using only cameras, and an end-to-end sensing state, the perception end-to-end, using only radars and lidars. This creates an element of robustness, and when you put them together, you get redundancy.
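To make the step from robustness to redundancy concrete, here is a minimal numeric sketch; the failure rates are assumed round numbers for illustration, not Mobileye figures.

```python
# Illustrative arithmetic only -- the rates below are assumed round numbers.
# The point of "true redundancy": if the two subsystems fail independently,
# a critical perception failure needs BOTH to fail at once, so the failure
# probabilities multiply.

p_camera = 1e-4       # assumed camera-subsystem critical-failure rate, per hour
p_radar_lidar = 1e-4  # assumed radar/lidar-subsystem rate, per hour

p_fused = p_camera * p_radar_lidar
print(f"fused failure rate: {p_fused:.0e} per hour")  # 1e-08 per hour
print(f"fused MTBF: {1 / p_fused:,.0f} hours")        # 100,000,000 hours
```

The multiplication only holds if the two subsystems fail independently, which is exactly why they are kept from talking to each other.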
The third pillar is our safety model, called Responsibility-Sensitive Safety, or RSS. It tackles the problem of how you define what safe driving is. What is the borderline between reckless driving and careful driving? How do you merge into traffic in a way that allows human-like driving but is also safe and useful? We published that in 2017, and throughout the years it has become a very fundamental building block in worldwide standardization. Those were the three pillars, and they power two divisions of Mobileye. One is premium ADAS, what we call the Level 2 class. We are referring to a system with multiple cameras; at the extreme, it's full 360-degree camera coverage.
A full operational design domain, so full traceability. Very low cost, because it's powered only by cameras. REM is built in, meaning cloud enhancement of the pilot functions provides geographic scalability. The RSS-based driving policy, which I'll walk you through later in the presentation, allows for very lean compute in terms of planning. Then there are Level 3 and Level 4, from conditional autonomy to full self-driving. Here comes the second subsystem for redundancy: radars and lidars. REM enables scale, RSS enables the safety guarantees, and the radars and lidars depend on the desired ODD: for full Level 4, we have 360-degree coverage of radars and lidars; for Level 3, it depends on the ODD.
So where are we today? We established those as targets four years ago. In terms of REM mapping, we have built the largest crowdsourced fleet for mapping. Today we receive 25 million kilometers of data every day; during 2021, we collected four billion kilometers. For 2022, if you start from 25 million kilometers a day, that's roughly nine billion kilometers over the year. We have a fully functional Level 2+ that drives everywhere. We have AV test vehicles deployed in many cities: New York City, Detroit, Tokyo, Paris, Munich, and of course Israel. On true redundancy: SuperVision, the commercialization of our camera subsystem, has been launched with Zeekr, a Geely brand, in China.
A few thousand vehicles have already been shipped to customers, and functions will be added through over-the-air updates throughout 2022. Our radar-lidar subsystem is complete, meaning it provides an end-to-end driving experience without cameras. We unveiled our robotaxi, which unifies both subsystems, cameras and radars-lidars, at IAA back in September. From the middle of 2022 it's going to be on the road, and then fully homologated by the end of 2022, at which time we'll legally be able to remove the driver from the car. Finally, the RSS safety model. We published it in 2017, with a couple of papers around it, and it's really a cornerstone of many standardization efforts worldwide.
One example is IEEE 2846, a working group chaired by Intel Mobileye, with more than 30 leading industry players in the group; it will be published in a few months. Through RSS, we also have a very lean driving policy, and that enables Level 2 at scale and also Level 4 at scale. Let's look at our testing footprint. Throughout the year, we upload these unedited drives from Israel, New York City, Munich, and Detroit. I'll show you two new sites, Paris and Tokyo. The Paris site is a collaboration with RATP Group.
This is the biggest public transport operator in France. It's also a collaboration with Moovit; Moovit provides the layers above the self-driving system for ride-hailing. Employees of Galeries Lafayette can use the car to drive autonomously from work to home, with a safety driver at this point in time. Let's run this clip. It's fast-forwarded, but you can see the richness of driving in Paris and the kinds of challenges our car needs to face. Here is Tokyo. We have a site in Tokyo where many OEMs have already experienced our autonomous driving technology.
Now, Tokyo is left-hand traffic, lots of pedestrians, and narrow roads; a driving experience that is somewhat different from other locations in the world. Using our crowdsourced data to build those HD maps allows us to expand our testing very efficiently. Here is where we are today in terms of cloud-enhanced Level 2. Our REM technology was designed to power autonomous cars, but it can also be a very powerful addition for enhancing driving assist. Volkswagen Travel Assist 2.5, with the electric ID.4 as the first vehicle, is powered by REM maps. Yesterday with Dr.
Herbert Diess, the Volkswagen Group CEO, we had a chat and a drive with the vehicle to see how it performs. You can drive in areas where you don't see any lane marks, yet the availability of lane keeping and lane centering is maintained because of the map data. With Ford, we just announced that the next-generation BlueCruise is going to be powered by REM maps. And of course, the Zeekr SuperVision is productizing our computer-vision subsystem for hands-free Level 2+; it also uses a REM map. In terms of Level 3, we launched with Honda and Valeo in Japan, where we are responsible for the computer vision.
The BMW 7 Series is coming out this year with a Level 3 system in collaboration with Mobileye and Aptiv. We have future programs with higher ODDs for Level 3 that we'll announce down the road. In terms of Level 4, we unveiled our robotaxi at IAA back in September. It integrates our camera subsystem and our radar-lidar subsystem. It will be on the road from the middle of 2022 and homologated by the end of the year, at which time we will be able to remove the driver in Germany and in Israel.
We have signed many robotaxi, AV shuttle, and goods-delivery deals: with SIXT, RATP Group, Transdev, Udelv, Willer, and more recently with Marubeni in Japan for goods delivery. We also announced this week that we received the first design win for a consumer Level 4, meaning a car that you can buy, not ride-hailing technology. A car that you can buy, with Geely's Zeekr brand; it's going to be powered by six EyeQ5 chips, with SOP in early 2024. What does all of this lead us to? There are three business pillars.
First, there is a new category emerging in driving assist, a premium ADAS we call Level 2+. At the extreme, it's based on full surround sensing: in the Zeekr we have 11 cameras, seven long-range eight-megapixel cameras and four parking cameras. There's cloud-based enhancement using our REM maps. There's firmware over-the-air updating involved, both for upgrading functionality and for streaming the map data to the car to run the cloud-based enhancements. And there is a full sense-plan-act cycle on a very wide operational design domain.
Basically, the ODD is what you saw in the two clips I showed you just now, Paris and Tokyo. This is the kind of ODD the car is able to support. This is Level 2: there should be eyes on and mind on by the driver. The second pillar, as we all know, is the Level 4 robotaxi. It's geofenced, and the cost of the system can be in the tens of thousands of dollars; this is our robotaxi platform that we unveiled back in September at IAA. Then there is a new category emerging in the 2024-2025 timeframe: the Level 4 consumer AV.
What it has that the robotaxi does not: first, it should drive everywhere, and this is where our REM mapping technology becomes very, very critical; it's not just geofenced capability. Second, cost: we believe that an MSRP to the customer around $2,000 means the bill-of-material cost should be way below $5,000 to support a consumer AV. Mobileye is working on all three categories. To continue where this leads us: when we look at autonomous driving, the future can move in two directions. One of them is the robotaxi, moving people and moving goods.
Mobileye is active there, and many companies are active, like Waymo, Argo, Cruise, Aurora, etc. Then there's another possibility, which is the consumer AV: you purchase a car, and the car has Level 4 capability. You press a button, and you can have your eyes off and mind off the driving experience. Mobileye is active there too. Tesla, of course, is the flag-bearer there, and perhaps Apple, as far as anyone knows what Apple is doing. But what I need to note is that these two futures are not equivalent: the consumer AV future contains the robotaxi future inside it.
Because if you can purchase such a car, then of course you can add the ride-hailing layers on top, sell it to PTOs, public transport operators, and transportation network companies, and provide the service. The other direction is not clear: if you have robotaxi technology that is geofenced and expensive, that does not lead to a consumer AV. For a consumer AV, you need geographic scalability, and you need a self-driving system cost way below $5,000. We argue that the consumer AV needs to be purpose-built. You cannot start with robotaxi technology, with its lack of scalability and high cost, and bring it to a consumer AV.
It really needs to be purpose-built, and this is what Mobileye is doing. We're active on both fronts, but thinking about cost and scale, and designing everything around cost and scale. Now, doing both is not just about hedging how the future will unfold, because there are very strong synergies between the robotaxi and the consumer AV. There are also learnings from deploying robotaxis that will be very critical for the next phase of consumer AV, because robotaxis can come earlier. The 2022-2023 timeframe looks very reasonable for robotaxis, and 2025 looks like a reasonable timeframe for consumer AV.
Let's look at the criteria for a good solution that can cover both futures. We divide them into three pillars. One is capability: you would like to support a very wide self-driving operational design domain, whether highway, rural, urban, or arterial roads, and a human-like driving policy. It needs to merge into traffic in a very smooth manner, similar to the way a capable human driver would. The next pillar is robustness: it should have a very, very high mean time between failures, much higher than the average human driver; whether 10 times, 100 times, or 1,000 times higher, it should be higher. Third is efficiency, and efficiency is cost.
As I said, we want the compute and sensors to be way below $5,000, and we also need to support scale, being able to drive everywhere. Those are the three pillars. What is Mobileye's approach to supporting them? In terms of capability, we support a full operational design domain, full ODD, from Level 2 to Level 4. That means we claim the differentiating factor between a Level 2 and a Level 4 is not the ODD; it is the MTBF, the mean time between failures. If the mean time between failures is lower than the average human driver's statistics, then it's a Level 2 system. If it is higher, then it can be a Level 4 system.
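A toy formalization of that rule, with an assumed human baseline (the talk does not give one):

```python
# The Level 2 vs. Level 4 boundary as stated above: the system's mean time
# between failures (MTBF) relative to a human baseline, not the breadth of the
# ODD. The threshold below is an assumed order of magnitude, not a real figure.

HUMAN_MTBF_HOURS = 1e4  # assumed hours between critical human driving errors

def autonomy_class(system_mtbf_hours: float) -> str:
    if system_mtbf_hours <= HUMAN_MTBF_HOURS:
        return "Level 2: full ODD, but the driver must supervise"
    return "Level 4 candidate: eyes off is possible within the ODD"

print(autonomy_class(1e3))  # worse than the human baseline -> Level 2
print(autonomy_class(1e6))  # far better than the baseline -> Level 4 candidate
```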
We support a full ODD in terms of perception, driving policy, and control, even for a Level 2 system. In terms of robustness, we have two principles. One is RSS, Responsibility-Sensitive Safety, the formal model we published back in 2017, which today is the cornerstone of many worldwide standardization efforts for regulating Level 4. The other is our concept of true redundancy, in which we take cameras, radars, and lidars but don't treat them alike: we divide them into two separate subsystems. One is the computer-vision subsystem, meaning perception done only by cameras, with no reliance on radars or lidars. The other is a subsystem where perception is done only by radars and lidars, without any input from cameras.
Then we integrate them and create a redundant system, redundancy through two separate subsystems. This increases the robustness, increases the mean time between failures, and also allows easier validation of the system. In terms of efficiency, we have four elements. One is a purpose-built system-on-chip; we'll spend some time on it. Second, the next generation of radars, software-defined imaging radars, and I'll explain why I'm making such a big fuss about radars. Third, lean compute: it's not only that the SoC should be purpose-built; the compute supporting a Level 4 should be lean, and I'll explain what I mean by lean. And fourth, REM crowdsourced mapping, a critical element for scalability.
For the remainder of the talk, I'm going to focus on those four elements one by one: the purpose-built SoC, the software-defined imaging radar, lean compute, and REM crowdsourced mapping. Where are we, what is the status on each of the four? Because those four elements are really critical to building a good solution. Our SoCs are the EyeQ chips; today we are at the fifth generation, with design wins supporting many, many millions of EyeQ5 chips going forward. The BMW that I mentioned, with the 120-degree, eight-megapixel camera, is supported by EyeQ5. Audi launched a system with EyeQ5, and there are many more. What we are announcing today is three new generations of EyeQ.
First, the lowest is EyeQ6 Lite, supporting Level 1 and Level 2 ADAS. Very, very low power consumption, around three watts. It sits behind the windscreen; it's a single-box driving assist, and very powerful. Then EyeQ6 High, which is roughly equivalent to two to three times an EyeQ5; this is going to power the next generation of our SuperVision. And EyeQ Ultra, which takes all the learnings from our years of building AV technology, understanding exactly what the workloads are and what silicon architecture supports those workloads. After all these learnings, we were ready to design the ultimate chip, basically an AV-on-chip. We call that EyeQ Ultra.
This will be a single, monolithic chip supporting full self-driving inside the chip, with internal redundancy, alongside an external ASIL-D MCU; the entire system will support ASIL D. Let me start with the crown jewel, EyeQ Ultra. Over the years, Mobileye has developed accelerators that come in four different families. One is for pure deep learning computation. Another is somewhat similar to an FPGA, called a CGRA. Another is a SIMD very-long-instruction-word core, similar to a DSP. And another is a multithreaded general-purpose CPU. Each of those cores is responsible for a different type of workload. EyeQ Ultra has 64 of these accelerator cores, and it runs on a 5-nanometer process.
It will also have 12 RISC-V CPU cores running 24 threads, plus a GPU and an ISP for visualization purposes. 176 TOPS sounds like a small number compared to what we hear from our competitors; it's roughly one-fifth to one-tenth of what we hear from them. But it's really about understanding the very tight interplay between hardware and software: what type of cores, and what type of algorithms to support on those cores. Take, for example, SuperVision, what's now on the Zeekr and what I've shown you in the clips: it's running on two EyeQ5 chips.
It supports processing of 11 cameras, eight-megapixel, very high resolution, and the full driving policy, the full sense-plan-act cycle, on two chips, each of them 15 tera-operations per second. It's very light in terms of computing power. This shows that you can be very efficient, very lean in your compute, if you design properly, designing the hardware and the software together. EyeQ Ultra's 176 TOPS is roughly 10 EyeQ5 chips, and this is based on all the learning we have been doing. Two EyeQ5s power SuperVision today, the computer-vision subsystem of our AV development. We have a six-EyeQ5 configuration, which is going to be the limited-ODD Level 4 Zeekr in 2024. We have an eight-EyeQ5 configuration powering our robotaxi. All these learnings make us believe that the equivalent of 10 EyeQ5s, as a single chip, is all we need to power a robotaxi. With such a chip, the entire ECU, the entire electronics supporting a full Level 4 system, will be way below $1,000. This is really game-changing in terms of cost. Power consumption is very light, too: the entire system will be below 100 watts.
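Quick arithmetic on the TOPS figures quoted here (the framing of the comparison is mine, the numbers are from the talk):

```python
# Why EyeQ Ultra is described as "roughly 10 EyeQ5 chips", and why today's
# SuperVision is so lean. TOPS figures as quoted in the talk.

EYEQ5_TOPS = 15        # per chip, as quoted for the SuperVision setup
EYEQ_ULTRA_TOPS = 176

supervision_tops = 2 * EYEQ5_TOPS  # SuperVision: full sense-plan-act on 2 chips
print(f"SuperVision today: {supervision_tops} TOPS")          # 30 TOPS
print(f"Ultra / EyeQ5: {EYEQ_ULTRA_TOPS / EYEQ5_TOPS:.1f}x")  # 11.7x, i.e. ~10 chips
```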
Next, EyeQ6 High. In terms of TOPS, it's about three times an EyeQ5, with just 25% more power consumption. It has 14 of our accelerator cores, compared to the 64 of EyeQ Ultra. It's seven nanometers, just like EyeQ5, and supports 34 TOPS, with a GPU and ISP to support visualization. Our next generation of SuperVision, which today runs on two EyeQ5s, will run on two EyeQ6s, and there are many configurations for which one EyeQ6 High will be sufficient. The first engineering samples come at the end of this year, with volume production in 2024. I forgot to mention that EyeQ Ultra will be engineering-sampled at the end of 2023, less than two years from now, with volume production in 2025. Finally, Mobileye's EyeQ6 Lite: 7-nanometer, supporting five TOPS.
The powerhouse EyeQ today powering ADAS is EyeQ4 Mid, so EyeQ6 Lite would be about four to five times stronger than EyeQ4 Mid in terms of computing power, with very low power consumption, about three watts. We have already committed deals, design wins of over 9 million units, for EyeQ6 Lite. It was sampled half a year ago and will be in volume production in 2022. These are systems designed for behind the windscreen: very powerful compute for a wide range of driving-assist features. That was the system-on-chip. Next comes the software-defined imaging radar. This is something I talked about last year.
This year I'm going to update on the samples and, more importantly, show what this radar can do. Why are we interested in radars? Today, we're talking about two subsystems. One is the camera-based subsystem; this is our SuperVision. The other subsystem is lidars providing 360-degree coverage, with a number of radars to complement them. That subsystem performs independent perception, and the computer-vision subsystem performs independent perception. Going forward, we want both to create more robustness and to reduce cost. One rule of thumb is that the cost of a radar is somewhere between one-fifth and one-tenth the cost of a lidar; I'm talking hypothetically.
Lidar costs will go down and radar costs will go down; the ratio stays about one-fifth to one-tenth. The problem is that radars today are useless as a standalone sensor. You cannot create a sensing state in congested traffic, separating pedestrians and vehicles that are very close to each other, with the kind of resolution and spec that radars have today. So the evolution of radars is very promising. We set a goal about three years ago to build a high-definition, high-resolution radar. We call it a software-defined radar because everything is configurable through software, even over-the-air updates: the transmitter, the receiver, the signal processing, everything is configurable through software.
We build it such that, with the right deep learning algorithms, we can create a standalone sensor: it can do what a lidar can do. If we achieve that, then the sensing configuration in the 2025 timeframe would have only a single front-facing lidar. Front-facing, you have three-way redundancy: lidar, cameras, and this imaging radar. For the surround, the cameras provide full coverage and the imaging radars provide full coverage. You don't need lidars for the full surround, because the radars will be a standalone system, just as today the radars and lidars together are a standalone system.
This would be game-changing in terms of cost, and it even adds more robustness, because front-facing we will now have three-way redundancy instead of two-way redundancy. Last year I described the spec, so I'm not going to spend much time on it; the spec hasn't changed. It's about 2,300 virtual channels, from 48 by 48 transmitters and receivers. Very high resolution. The dynamic range is 100 dB, which is really unheard of. These elements are very powerful. We completed sampling all the chips of the system, and we have also built the radar itself; you can see the picture of the radar here.
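For background, the virtual-channel count follows from standard MIMO radar geometry, where each transmit/receive antenna pair acts as one virtual element (a textbook relation, not a Mobileye-specific formula):

```python
# MIMO radar: the virtual array has one element per Tx/Rx antenna pair, which
# is what multiplies angular resolution without physically adding antennas.

tx_antennas = 48
rx_antennas = 48
virtual_channels = tx_antennas * rx_antennas
print(virtual_channels)  # 2304 -- the ~2,300 virtual channels quoted above
```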
What I want to do now is show you some examples of what this radar can do together with the right algorithms supporting it. I'll do it step by step. Before I run this clip: on the left-hand side you see radar points, green versus yellow. Green versus yellow encodes elevation: green is on the floor, yellow is going up. For the red and blue dots, blue is vehicles moving away from us, and red is vehicles or other objects coming toward us.
The first thing you can see, before I run the clip, is the right and left curbs of the road: those green dots show you that the radar sees the curb. Now let's run this. You can see the complexity of the scene: how many cars are very close to each other, pedestrians, a very congested scenario. We'll stop at a point where you have cones. You can see that those cones are marked in yellow on the left-hand side; you can see them here.
Our deep learning algorithms have not yet been trained on the different cones, but the data exists in the radar output. Let me show another example. Here we're going to go under a bridge. The deep learning algorithms are trained to detect vehicles up to 50 meters, but the data supports much more than 50 or 60 meters. What you saw before: the cars on the right, even hidden by the guardrails, and the car inside the tunnel, where the radar was not distracted by the tunnel. These are all sorts of perception that are quite difficult to do with existing radars.
You can see here the left and right curbs; you see them very, very clearly in the radar output on the left-hand side. Next, you are going to see a motorcycle on the right-hand side, and then we are going to follow that motorcycle. Here's the motorcycle; you can see the blue dot on the left-hand side, and now we'll continue. Even when the motorcycle is off broadside, which is very challenging for radars, it's still detected, and you can see that blue dot on the left-hand side going forward in very congested traffic. Now, to give you an idea of the quality of the data that is there, because reading radar output is difficult.
It's not like reading an image or a lidar output. So we trained two networks. This is not for production; it's just to show what this radar can do. We trained a network on radar input with lidar as the target output: we put a radar and a lidar together, and then used the lidar as supervision for a neural network that maps radar to lidar. What you see here, on the left-hand side, is the scene as seen by a true lidar; on the right-hand side is the mapping from our radar, the neural network mapping the radar output to a lidar representation. You can see they're almost equivalent. Let's look a bit more; I'll run this.
Left is the true lidar, right is the radar mapped to a lidar representation; it looks almost identical. The point I want to make here is not the beauty of the neural network we designed; there's nothing special about that network. The point is the quality of the radar data. If you can take that radar data and generate a representation that's almost equivalent to a lidar representation, it means there is gold to extract from this radar. Let me show you the right-hand side; same thing. On the left is the true lidar, on the right is the radar mapped through the neural network to a lidar representation, and they are almost equivalent. Okay. We did another experiment.
In this experiment we said: let's train a neural network to map the radar output to a camera image. Again, we put the radar and the camera together and used neural networks to train the mapping. What you see here is not a true image; it looks like a true image, with all sorts of deformations, but it's not. It's the neural network mapping the radar output to an image. You can see even the guardrails, and you can see the vehicles. This is just a way to give you an idea of the quality of this radar output.
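As an illustration of what such an experiment might look like in code, here is a minimal sketch; the architecture, channel counts, and tensor shapes are all assumptions, since the talk gives no implementation details:

```python
import torch
import torch.nn as nn

# Minimal sketch of the experiment above: train a network on paired recordings
# to map radar frames to the co-recorded lidar representation. Toy model and
# toy random data; only the training recipe matters here.

class RadarToLidar(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),  # assumed radar channels, e.g. range/azimuth/Doppler/power
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),             # lidar-like depth image out
        )

    def forward(self, radar):
        return self.net(radar)

model = RadarToLidar()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One synthetic "paired recording": radar tensor in, time-aligned lidar as target.
radar = torch.randn(8, 4, 128, 128)  # toy radar frames
lidar = torch.randn(8, 1, 128, 128)  # toy lidar ground truth

opt.zero_grad()
loss = nn.functional.l1_loss(model(radar), lidar)  # how close is mapped radar to real lidar?
loss.backward()
opt.step()
```

Swapping the lidar target for the time-aligned camera image gives the second, radar-to-image experiment.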
The purpose of all of this is to take such a radar, whose cost is one-fifth to one-tenth that of a lidar, and use it as a standalone sensor in the 2024-2025 timeframe. Then we can reduce costs considerably and be left with only a front-facing lidar to create three-way redundancy. Talking about lidars: as I mentioned last year, we have another division building a frequency-modulated continuous-wave (FMCW) lidar, which gives a 4D cloud of points, because there are also velocities, through the Doppler effect. We already finished samples of what we call the lidar SoC, a multi-channel system-on-chip that processes all the data coming from our silicon photonics.
The silicon photonics supports 90 vertical channels, and there's also a way to create more vertical range. It's basically 1D scanning: you have 90 lines, 1D scanning, and some vertical configuration. Again, all of this will be volume-produced in 2024.
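For background, the "4D" in the point cloud comes from the standard FMCW Doppler relation; the wavelength below is an assumed typical value, not a Mobileye spec:

```python
# Coherent FMCW lidar reads per-point radial velocity from the Doppler shift,
# making the cloud "4D" (x, y, z, velocity). Standard physics, illustrative numbers.

WAVELENGTH_M = 1550e-9  # assumed telecom-band wavelength common in FMCW lidar

def radial_velocity(doppler_shift_hz: float) -> float:
    """v = f_d * wavelength / 2 (the factor 2 accounts for the round trip)."""
    return doppler_shift_hz * WAVELENGTH_M / 2

print(radial_velocity(12.9e6))  # ~10 m/s for a ~12.9 MHz Doppler shift
```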
So what have we covered so far? First, the AV-on-chip: a monolithic chip, on the road in 2024 and in mass volume production in 2025. Then the 4D imaging radar that can take the place of surround lidars and be a standalone sensor in terms of redundancy: you create a layer of cameras and a layer of radars, you have two subsystems, and then three-way redundancy front-facing with the lidar as well. With the monolithic AV-on-chip, we get a system which is way below $10,000. All right. The next element of efficiency is lean compute. Here I want to talk about the sense, plan, act cycle. Sense is the perception; we all understand what perception is. You want to understand the world around the car, 360 degrees: where the road users are, their kinematics, stationary objects, obstacles, traffic lights, curbs, lane marks, and so forth. That is sensing. Then there is planning. The car needs to make a decision; it's all about decision-making, about "what would happen if I do something." That's the type of reasoning.
Say you want to change lanes. How do you change lanes in a way that is not reckless? How do you decide when to change lanes, how to change lanes, how to signal through your motion to other road users? How do you do it in a way that is not reckless yet is useful, meaning you can actually merge into traffic? This is all planning. Then act is the control of the vehicle. Now, this planning is called the driving policy, a term coming from robotics. The amount of compute invested there can be very, very high. I want to explain why it can be so high, and why Mobileye has a very lean approach to it.
First, let's define what a driving policy is. Everything I'm saying now is a teaser for a much more detailed talk given by our CTO, Professor Shai Shalev-Shwartz; I'll spend five minutes on it and then point you to Shai's talk for all the details. The definition of a driving policy is a mapping from the sensing state, what all the other road users are doing, their kinematics, the obstacles, lane marks, and so forth, everything around the car, into action: the longitudinal and lateral control of the vehicle. Why might one consider this the hard problem?
First, there's no ground truth, unlike with sensing, where you have a ground truth: where the vehicles are, where the pedestrians are, the traffic lights, and so forth. Second, actions may have a long-term effect: a very simple action you take now may cause an accident a few seconds or a few minutes later, so you really have to roll out the future, an extended rollout, in order to decide on the correct action. Then there's the closed-loop notion, where actions affect other road users; you're not the sole agent in this game. Your actions affect the actions of other road users, especially when you are changing lanes in congested traffic. And you must handle uncertainties about the future:
what other road users are going to do. There are many techniques around this; let me picture them in a non-technical way. There are three dimensions here. Let's look at the left-hand chart. One is the amount of compute; that's the vertical axis. Another is the quality of the solution, the quality of the search in terms of which action to take. Brute force is very simple to define: you roll out the future by looking at all possible actions for all possible agents for step one, and then again all possible actions for all possible agents.
It's a tree that grows exponentially large. That's the brute-force approach: the amount of compute is clearly unwieldy, but you get a very good solution at the end. The problem here is compute. The technique mostly used in robotics, and also in autonomous driving development, is MCTS, Monte Carlo tree search. The idea is to take this tree and prune it. First, you assume that you know the driving policy of the other agents: given a state and an action, what the next state would be, and so forth.
You start pruning this tree with all sorts of heuristics, neural networks that learn how to prune the tree. The quality of the search will depend on the computing budget you have: with a small computing budget, the quality of search will be low; with a large computing budget, the quality of search will be high. Then there is the MDP, the Markov decision process, in which the quality of search is very, very high, because you are making a very strong assumption about predicting what other road users will do.
You are statically assuming where other road users will be, not only in the next second but many seconds forward, without taking into account interaction with other road users. If you accept this assumption, then MDP is very efficient. The axis on the right-hand side is the realism of the assumptions. The MDP is the least realistic, because you're assuming you can predict what other road users will do far forward in time, without a closed-loop assumption.
That means their trajectories are assumed independent of other road users' trajectories. Monte Carlo tree search assumes that you know the policy, the mapping between state and action, of other road users, which may not match reality. Brute force has the fewest assumptions. You can see that in order to get a good solution, MDP is not realistic, and with MCTS you really need to spend a big computing budget to reach a good quality of search. This is why driving policy has historically required a lot of compute.
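A toy calculation (assumed numbers) of why the brute-force rollout is unwieldy:

```python
# With A actions per agent, N agents, and T time steps, the brute-force tree
# has (A ** N) ** T leaves -- exponential in both agents and horizon.

def brute_force_leaves(actions_per_agent: int, num_agents: int, horizon: int) -> int:
    branching = actions_per_agent ** num_agents  # joint actions per time step
    return branching ** horizon

# Even a modest scene is intractable: 5 actions, 8 road users, 10 steps.
print(f"{brute_force_leaves(5, 8, 10):.2e}")  # ~8.27e+55 futures to evaluate
```

MCTS prunes this tree; the quality of the pruned search is what you buy with your computing budget.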
Here comes what we did back in 2017 in terms of building a safety model. The paper we wrote was quite lengthy, and it also contains the driving policy inside it, but in all our public appearances we never emphasized the driving policy; we emphasized the regulatory framework that RSS enables. RSS says the following. First, let's define reasonable boundaries on the behavior of other road users, because when we humans drive, we make assumptions. Let's capture those assumptions in a mathematical, formal manner, with parameters. Then we can communicate with industry actors and regulatory bodies on setting the parameters of those assumptions. Once you set those assumptions, the theory says: take the worst case. Let's not predict what other road users will do.
Let's take the worst case under those assumptions. We have a formal theory, based on induction, which shows that under this framework, if you agree on the assumptions and you take the worst case, you can prove that you will never cause an accident. This provides a formal guarantee.
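As a sketch of the worst-case principle, here is the safe longitudinal distance from the 2017 RSS paper; the parameter values below are illustrative, standing in for whatever would be agreed with regulators:

```python
# RSS worst case for car-following: the rear car may accelerate at a_max_accel
# for its response time rho, then brakes only at a_min_brake, while the front
# car brakes as hard as a_max_brake. Keeping at least this distance provably
# avoids causing a rear-end collision under the agreed assumptions.

def rss_safe_longitudinal_distance(v_rear: float, v_front: float,
                                   rho: float = 1.0,          # response time, s
                                   a_max_accel: float = 3.0,  # m/s^2
                                   a_min_brake: float = 4.0,  # m/s^2
                                   a_max_brake: float = 8.0) -> float:
    v_rear_after = v_rear + rho * a_max_accel          # rear speed after reacting
    d = (v_rear * rho + 0.5 * a_max_accel * rho ** 2   # distance covered while reacting
         + v_rear_after ** 2 / (2 * a_min_brake)       # rear stopping distance (weak brake)
         - v_front ** 2 / (2 * a_max_brake))           # front stopping distance (hard brake)
    return max(d, 0.0)

# Both cars at ~30 m/s: keep at least ~111 m to never cause a rear-end crash.
print(round(rss_safe_longitudinal_distance(30.0, 30.0), 1))  # 111.4
```

Because the check is a closed-form formula rather than a search, the safety layer costs almost no compute, which is the "lean" part.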
I don't want to spend more time here except to note that it has been a really successful framework for working with regulatory bodies. For example, IEEE 2846, chaired by Intel Mobileye, consists of more than 30 leading industry players, and RSS is really the starting point for asking those kinds of questions; likewise the EU Commission, the UK Commission, ISO, and so forth. But what I want to emphasize here is not the regulatory frameworks; it's how this affects driving policy. So far, RSS was presented as a regulatory framework, and that was the purpose of its development, but we also use it as the foundation of our driving policy. Basically, it says that we can use induction and analytic calculations; that means you need to present the problem in a way that supports analytic calculations. The formal guarantee of RSS couples all plausible futures into the present: not all futures, all plausible futures, because we made assumptions about how road users drive.
It's not dynamic programming like MDP; it's induction principles with the same effect of coupling all plausible futures into the present. This is efficient, it's realistic, the quality of search is maximal, and you also get explainability. Where do the neural networks come in? Not at the safety layer, but at the comfort layer. Instead of predicting what other road users will do, we detect intentions. An intention could be give way or take way; an intention could be that this car is about to do a turning maneuver, or a U-turn.
We have a long list of intentions, and we use neural networks to detect those intentions and control the parameters of RSS.
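Schematically, and with invented labels and values (the talk does not enumerate them), the comfort layer might look like a mapping from detected intention to RSS parameters:

```python
# Illustrative only: detected intentions tune RSS parameters in the comfort
# layer; the safety layer still takes the worst case within the agreed
# assumptions. Labels and values below are invented for the sketch.

INTENTION_RESPONSE_TIME = {  # rho (seconds) chosen per detected intention
    "give_way": 0.6,   # the other car is yielding: we can commit sooner
    "take_way": 1.2,   # the other car is asserting: leave extra margin
    "u_turn":   1.5,   # unusual maneuver ahead: be more conservative
}

def rho_for(intention: str, default: float = 1.0) -> float:
    return INTENTION_RESPONSE_TIME.get(intention, default)

print(rho_for("give_way"))  # 0.6
```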
Putting it all together, we can place our RSS-based driving policy on this chart: we get the efficiency of MDP, but without MDP's lack of realism in its assumptions. This creates a very lean driving policy. Going back to our computer-vision processing in SuperVision: in all the clips we are showing, there is a full-ODD driving policy. The car is driving in very challenging situations, doing full perception on 11 eight-megapixel cameras and the full sense-plan-act cycle, on only two EyeQ5s. It's proof that our driving policy is very lean. This was just a teaser; there's about a one-hour talk by Professor Shai Shalev-Shwartz, our CTO. Search "Mobileye lean driving policy" and you'll find the YouTube clip. Last, in terms of efficiency, are our REM maps, the crowdsourced mapping. Volkswagen's Travel Assist 2.5 is using them; we announced that yesterday at the Intel press conference, in a session with Dr. Herbert Diess, the Volkswagen Group CEO. The ID.4 is powered by REM maps. Here's an example from the drive we did together.
Now, at the right turn, you can see that the magenta lines are the drivable paths coming from the map, and you don't see any lane marks. A system without the cloud-based enhancement of the maps would not be able to activate lane keeping or lane centering here, because it wouldn't know that there are two drivable paths there and exactly where they are. Information about traffic lights, their relevancy, the association between traffic lights, landmarks, and drivable paths, is also inside those maps. It can power critical safety functions, like making sure the car will not run a red light.
Now, driving assist will be able to do that if it's powered by such a map. Our coverage is very wide. This is Europe: we have 2.5 million kilometers of road already covered. We harvested four billion kilometers in 2021, and we are now at a rate of 25 million kilometers harvested daily, so it's growing every year. In terms of the richness of the map, I talked about this last year: it's geometrically accurate and semantically very rich, with drivable paths, road edges, traffic lights and their association with drivable paths, crosswalks and their relevancy, stop points, yield points, common speeds; it's very rich.
We added more richness in 2021. On the left-hand side, construction areas: every road where there is construction is colored here in red. On the right-hand side, turn indications: for example, in this enlarged block, 12 out of 15 cars turned on their right turn signal. Knowing when to turn on a turn signal sounds simple, but it's actually very tricky, and getting this information from the crowd enables a much, much smoother AV experience. Speed bumps are now also included. Legal speed: knowing the legal speed at every section of road is also very tricky, because you need to associate traffic signs to the right drivable path.
Getting this from the crowd is something that is very helpful. We also added this: not only the common speed, the average speed in each lane, but also the legal speed in every lane. Public transportation lanes, like bus lanes; toll areas, in the center of this figure; and also road types. This is very useful for enabling smooth driving, controlling the parameters of our safety model, and knowing what type of road we are driving on. All of this is now part of the REM mapping.
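A schematic sketch of those layers as a data structure; all field names are invented for illustration, since the real REM map format is not public:

```python
from dataclasses import dataclass, field

# One road segment with the semantic layers listed above, including the
# 2021 additions (construction, turn indications, speed bumps, legal speed,
# bus lanes, toll areas, road type).

@dataclass
class RemRoadSegment:
    drivable_paths: list = field(default_factory=list)       # lane-level geometry
    traffic_lights: list = field(default_factory=list)       # with path associations
    crosswalks: list = field(default_factory=list)
    construction: bool = False                                # added in 2021
    turn_signal_stats: dict = field(default_factory=dict)    # e.g. {"right": 12, "none": 3}
    speed_bumps: list = field(default_factory=list)
    legal_speed_per_lane: dict = field(default_factory=dict)  # from sign association
    common_speed_per_lane: dict = field(default_factory=dict)
    bus_lanes: list = field(default_factory=list)
    toll_areas: list = field(default_factory=list)
    road_type: str = "urban"                                   # tunes safety-model parameters
```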
To summarize, these are the building blocks we have been building. First, redefining the evolution of ADAS, this premium ADAS: where can ADAS asymptotically grow into? This is our SuperVision. REM, the REM mapping, is a very critical element of it, and the lean compute of the driving policy is a critical element of it. This is the North Star of where ADAS can evolve to. Second is the right engineering design to achieve the right mean time between failures, using our true redundancy, and from there we can go to mobility-as-a-service. The EyeQ Ultra and the software-defined imaging radar are basically the two game-changing elements that will enable a future of consumer Level 4, in which cost becomes a major driver, along with the geographic scalability coming from REM.
Thank you for your patience in listening to this one-hour talk, and see you next year. Thank you.