Dan, thank you so much for joining. Really looking forward to getting into a discussion about Mobileye and its business and the opportunities. Dan Galves, Chief Communications Officer. Dan, I've known you for a long time. It's so fun seeing your career in its many forms. And, you know, you've come back home, to Mobileye. It's good to have you.
It's good to be back in Laguna, too. Like, this is a great event. It's always good to see you, and, you know, it's gonna be an interesting talk.
We'll stop crying in a second, folks. I promise we'll get into this, but I do wanna say, I think your role, not just in helping the community understand Mobileye, that's obviously well known, but I think you've really taken on an even broader role, kind of an educator, in a kind of common language, 'cause it really is a different kind of language, the kind of business you're in. So you help make these concepts understandable, and you've clearly added, I think, in my opinion, a lot of value to Mobileye, through all the phases of this next iteration as a public company, and even beyond Mobileye, in other aspects of electrification and autonomy. Just, you know, thanks. Thanks for all that you do.
Thanks.
I'm a fan.
You too, Adam. No, it's, I mean, I get a lot of help from the team in Israel. Like, they have a very kind of pragmatic approach to really everything that they do, and that kind of, you know, it bleeds into the ability to explain it simply.
Okay. Now, we are gonna ask some tough questions.
Okay.
The love affair is over here. We're done. But any-
Get into it.
Any key messages you wanted to kinda, you know, get out there at the top?
Yeah, I mean, I think that, you know, we feel great about the current business, but what we're really feeling great about is kind of the advanced systems business-
Mm-hmm
... SuperVision, Chauffeur. I think our confidence has never been higher, that, you know, a substantial number of design wins are gonna come through, you know, pretty quickly. I think that that confidence has ramped up pretty significantly over the last, you know, six to eight weeks even.
Mm-hmm.
I think that there's a couple reasons for it. You know, one is proof points, and I think one of the things that's benefited us over the last couple of years is the increased ability to provide proof points to our customers. You know, you go back two, three years, and to experience the SuperVision system, you really had to come to Israel.
Mm-hmm.
It's like, you come to Israel, and it's-
You're jet-lagged.
You're jet-lagged. It's like, it's their hometown. What did they do to the roads? You know, it's, you know, a nice experience, but, you know, let's see kinda what happens. Then the REM maps started getting built, and, you know, so that gave us the ability to take cars to North America and Europe and really drive anywhere. And so last year, we did several 2,000-3,000-kilometer drives with customers where they, you know, basically said: We're not gonna tell you where to meet us.
Yeah
... but once you meet us, we'll tell you where to drive. And so that's kind of an increased proof point. Then you have the Zeekr production programs, right? So we launched, you know, a set of hardware into a production vehicle, in China. You know, that was another proof point. The Porsche win, I think, was helpful.
Mm-hmm.
But the ultimate proof point really happened two months ago, when we rolled out the true kind of Navigate on Pilot SuperVision software to first 1,100 beta users, you know, Zeekr owners.
Mm-hmm.
So they experienced the technology for six weeks, you know, six, eight weeks. Zeekr monitored it, you know, got the car into the hands of influencers and media people, and kind of the reviews were really super positive. You know, in China, which is the market where this technology is moving the fastest and the technology iteration is the fastest, the expectation was like: Okay, Mobileye has, you know... They're using 50 TOPS of processing power in their vehicle, and XPeng is using 1,000 TOPS. XPeng is using LiDAR, and Mobileye is not using LiDAR. Zeekr is not using LiDAR. Like, how could they possibly compete? So maybe you get a comparable system, but certainly not a leap ahead.
What we've found and what the third parties have found is this really is a leap ahead. The ability to kinda handle much more difficult situations, the ability to, you know, be more assertive on the road. Things like, you know, if you're wanting to get off an exit and there's a line of cars, you know, do you get in back of the line, or do you do the Jersey move-
Yeah
... and go up to the front-
Jersey to Jersey
And push yourself in? That's what the Zeekr vehicle is doing. And then, you know, Zeekr had enough confidence to roll it out from 1,100 people to 110,000 people about two weeks ago. And so this is really the ultimate proof point, because it's one thing to do a demo, it's one thing to, you know, put production hardware in place, but to actually have the confidence to launch the software, kind of full capability, and have these types of positive reviews, even though you're clearly kind of at a cost advantage versus your competition, is huge. And we think it really led to kind of an acceleration of kind of the seriousness of the conversations we're having.
Mm-hmm.
I think the other thing that's happening, really, is that, you know, China is just,
They move fast.
They're really moving fast. You know, the kind of market share gains within China have been pretty significant, especially this year. The rhetoric about exporting vehicles to Europe... I mean, even in Israel, like, Chinese vehicles are maybe 30% of the market now, after being in the market for two or three years.
Of the EV market or of-
Of the overall market.
Did not know that.
And so I think, you know, the question of, like, you know, clearly we're a little bit behind on EV costs and input costs. Are we willing to be behind on AV as well?
Mm.
Has really played into kind of the sense of urgency of our customers, and it's led to... You know, we talked about nine automakers kind of in significant discussions. You know, we expect most, if not all, of those to convert within the next six months. But now we're seeing the next wave, and these are some companies that have been kind of doing their own thing and, you know, are, again, not anywhere close to as advanced as kind of that first group, but we're starting to see the next wave kind of come in, and that's important, obviously. So, you know, we're feeling great about kind of the performance of our system and this proof point and kind of the response from the customer base.
Dan, I've had more than a few investors email me in the last couple of weeks. They went to China, they experienced the system and the Zeekr vehicles, and they think it's better than Tesla. They think it's better than Full Self-Driving. Are they right? Why are they right? Is that an unfair comparison? Is it? Are we comparing apples and oranges? Because I've heard it. I hear from a lot of investors: "Hey, Adam, I see you're bullish on Tesla, but, like-
Mm-hmm.
Mobileye's system is better. Mic drop. Done. Over." Like-
Yeah.
Help me with that.
I mean, I think we've been very impressed with kind of the progress that Tesla has made and their ability to put this type of technology on the roads, you know, particularly in the U.S., because I think that, you know, it's not quite as advanced in the other regions.
Mm-hmm.
You know, I have to say, we don't have a SuperVision system in production in the U.S. yet.
Mm-hmm.
So maybe it is a little bit apples and oranges. I think that, you know, we know what we think, but I think most of our customers have also done these types of benchmark tests, whether it's versus Tesla or whether it's versus Li Auto, XPeng, NIO. And kind of the most common comment we get is, you know, the intervention rate of the SuperVision system is much lower.
I mean, you can measure that?
You can measure that.
You can test that.
Exactly.
Do you test that?
Yeah. I mean, yeah, you know, it's... I think things like, you know, things like kind of coming to a bank of traffic lights-
Mm-hmm.
you know, if one is red and the rest are green, you know, kind of a harsh braking event, to us, that comes from kind of the lack of the high-definition map, which gives you kind of the relevancy of, you know, traffic lights to lane. You know, which traffic light is relevant to my action, which traffic light is relevant to my lane? So I mean, you know, I think it's too early to really compare and contrast-
Okay.
Exactly, but this, this is what we hear from our customers.
I had the CEO of a major EV startup give me this view about ADAS. He said... And I started the conversation by saying: "What do you think about what Tesla's doing?" And he said: "Look, what Tesla has an advantage of is they do it all themselves. The hardware and the software, they do pretty much all the key stuff, all in-house. They have a monumental data set, and their sensors are pretty much configured in a very, very consistent way. So that combination of huge amounts of data-
Mm-hmm.
Every line of code, all the hardware done in-house, and then the sensors that are here and kind of fixed, and there's no arguments about, 'Hey, what data is yours or mine?' That's a big advantage," he said, and he continued, saying, "So you either want to do it... If you're going to do advanced ADAS and then Level 4+, you want to do it all in-house, or let someone else do all of it. Like, someone like SuperVision, just take over." Dan, is that, is that oversimplifying it? Where would you kind of... what would you adjust from that view of the world, of all in-house or outsource?
Yeah. I mean, I actually think that there's a middle ground.
Mm-hmm.
So, you know, what we're providing to our customers and prospective customers with SuperVision is a platform that includes some core technology assets. It includes a camera-based perception system, you know, highly accurate. You know, you don't have to do anything to it. The car will build a model of the environment based on kind of the data that's coming in through the cameras. We're providing a high-definition map that provides, you know, a lot of preloaded information about the world. So when the car goes out and drives in some environment, it's like it's driven there, you know.
Mm-hmm
... 20, 30 times before. A decision-making system that, you know, makes decisions through AI, but then really checks them against safety measurements through a system we call Responsibility-Sensitive Safety. And then the compute platform, which is super efficient because we design it in-house. And then, you know, what we've found and what we believe is that, you know, the OEM has a lot to do beyond that, right? An OEM should tune the system to create a driving experience that their customers want.
Mm-hmm.
Right? And they should monitor feedback from the customers and have the ability to tweak how the car feels.
Mm-hmm.
Do I want a car that's, like, weaving in and out of traffic? Do I want a car that's doing the Jersey mode?
You can click Jersey mode.
Right, the Jersey mode.
We're in a Jersey mode.
Do we want a car that, you know, is kind of, like, harshly braking in front of stoplights?
If anyone here is from New Jersey, like, come on, just, just chill out. Sorry. Keep, keep going.
I am, so I'm allowed.
Here we go.
You know, the driver monitoring, right?
Mm-hmm.
How am I going to monitor the driver to make sure that they're staying focused? Like, what kind of warnings? How am I going to visualize the system? How am I going to make the driver feel comfortable that the car is identifying everything around it and ... you know, what's its next move? Like, these are all kind of customer-facing areas that the OEM, you know, should take on themselves. They're areas that don't require, you know, hundreds of AI experts to do. And so we've worked very hard over the last five or six years to provide these types of essentially tuning knobs-
Mm-hmm
for the driving experience and ability to kind of access our data feed and create visualizations. And so I think our view is you should let the expert take care of the kind of the core of the system and provide you these assets, but you should be able to build on top of it. And I think that creates, you know, kind of a nice offering for the OEMs.
So look, there's different ways to skin a cat, and, you know, I don't want to overly obsess about Tesla here, but, you know, Mobileye and Tesla are the, really the two, you know, main, main benchmark setters in, in this field right now, at least-
Mm-hmm
... and I think in the eyes of investors. You do HD maps. Tesla says they do not. Is that really correct, or is there some nuance there from your understanding? Tesla says they're no longer doing labeling, not even auto-labeling, and that's freeing up 40% of the GPU capacity. They say they feed just raw visual data to the computer, and then out come actions. They don't draw circles around a stop sign. They let the car figure out that that red octagon with the word "stop" on it is just synonymous with bringing a car to a stop. You know, are those- are these things that you're... You do label-
Mm-hmm.
And you do HD maps. Is that something you think is just absolutely critical for the, for the safety? Or is it something that maybe over time, maybe you won't need it one day when the, when the compute can catch up with the, you know, with the task?
Yeah, I-
Sorry for the wordy question.
No, no, no.
I understand.
No, it's a, it's a good question.
Good.
I mean, on the HD map side, I think our belief is that Tesla is building something kind of similar to what we've built.
Mm-hmm.
You know, it took us six or seven years to do it, so it's not easy.
It could be a definitional thing of what-
It could be a definitional thing.
-a map.
It's like, we don't know that for sure, but some, you know, kind of clues from their AI Day presentations would-
Mm-hmm
would give us that impression. And I think our view is the HD map really takes a lot of pressure off of the real-time sensing, right? And kind of gives the car this kind of preloaded information where... I mean, traffic lights are kind of the perfect example, because it's like, you know, clearly a camera can see whether a traffic light is red or green, but if there's eight or nine traffic lights across an intersection, which one is for the pedestrians, which one is for the left-hand turn, which one is for your lane? You know, to kind of have that, you know, built into the car, it's very helpful.
I think in construction areas or, you know, areas where you have a merge or a split in the highway, it's good to have that capability. Another thing that's been really impressive, you know, to the Chinese media: there are a lot of, you know, kind of connector highways in between highways. Those could be curvy, they could be three or four miles long. And, you know, if you get stuck in one lane, or maybe there's two lanes, but if you get stuck in one lane behind somebody that's going 30 miles an hour, it's very frustrating.
Because our system has kind of the knowledge of what's the common speed on that road and kind of very clear knowledge about the geometry of the curves in the road and the lanes, you know, people have been very impressed that we're passing people on those-
Mm-hmm
-types of lanes. So I think that HD map is really critical. And, you know, not Tesla, but you've also got companies in China that are saying, "We don't need an HD map." Well, if it's gonna cost you, like, $20 million for every city to build a map manually, and then you've got to figure out how you're gonna keep that updated-
Mm-hmm
... then that, you know, kind of could make you say, like: I don't want a map. In terms of the, you know, kind of the question of labeling-
Mm-hmm
or not labeling, kind of, you know... I think that this is really, like, the end-to-end approach, versus what we do, which is kind of break the problem into modules. We have different techniques for each module. Could be, like, vehicle detection or pedestrian detection or lane detection, traffic signs. We have multiple techniques within each of those modules, which creates redundancies within the vision system. This is our approach. This is what-
Mm-hmm
-we think is the right way to do it. You know, I think if you go back to 2015, kind of when deep neural networks were new, this was kind of NVIDIA's pitch to automakers. Like: We've got these GPUs, you know, deep neural network technology. All you need to do is feed a ton of data into the system, and kind of out is gonna pop autonomous driving. That clearly didn't work, but I think Tesla's in a different ballpark, right? They have... You know, they're willing to spend a lot on compute power, training compute power. They have a lot of data. So I think I'm not qualified to say if it's gonna work or not.
I wanna go a little deeper on compute, then I'll come up and take questions. Please think of your questions now, folks. Let's talk about the hot mic on the 2Q results call. Thanks for letting me ask a question, but I did make an amateur-hour move at the end. I asked Amnon about custom silicon versus GPUs.
Yeah.
I was thinking in the training computer-
Yeah
And I think he was answering the question more at the edge of the inference computer.
Yeah
which of course, you've been doing yourselves, custom silicon, forever.
Yes.
I said the words, "I don't think he understood the question." I apologize for that, Amnon.
It's okay.
I'm sure you're not-
That's not the worst thing I've heard.
I had no F-bombs. But so the question as intended -
Yeah
... let's talk about that. I would imagine that, given the scope of your mission and then how it's ramping, you wanna get your hands on as much compute power as you can. Am I wrong about that? And where are the bottlenecks? Are you experiencing bottlenecks in getting GPU clusters from NVIDIA?
We're not experiencing bottlenecks at all. So I think it's a different approach in that, you know, on the edge, we don't need as much compute power, and I think Tesla doesn't really either. It seems other approaches need a lot. I think in the training environment, we don't need, like, a massive amount of compute power. We have, you know... if you look at kind of the data size, we have, you know, 400 TB of kind of video clips-
Mm-hmm
... which is, you know, a lot more than anybody else we've heard of has. But we're able to kind of process that, use that information to close out edge cases. You know, when we find a problem with the system, you know, we can search for the specific, you know, clips that would help to kind of improve the system, train the system. And so, no, we're not seeing bottlenecks. I think it's just a different technique.
You're not constantly feeding the data back into a training computer, like, all the time.
That's correct.
Okay.
So, you know, we spend, you know, most of our CapEx is related to buying GPUs for kind of on-prem-
Okay
... computing. A fairly significant part of our R&D is for AWS, off-prem-
Mm-hmm
... cloud services.
Right.
So we use a lot of compute, but it's not in the same ballpark-
Okay
... as kind of what you're-
One more from me, and I wanna go to the audience here. The topic of clean sheet versus retrofit. Around the time of the IPO, you and I engaged in this discussion, and I was quite skeptical, and I still kind of am, on the ramp of clean sheet or pure EV architectures from legacy car companies.
Mm-hmm.
At Morgan Stanley, we think that that stuff's gonna get way pushed out, way dialed back, and I don't see a path to profitability from the legacy car companies, categorically.
Mm-hmm.
Okay. There'll be exceptions along the way. My message to you was, well, with SuperVision... Yes, there's the China ramp, which, thankfully, you're there, and that's an incredible asset and learning for you.
Mm-hmm.
But, if the EV adoption of clean sheet Gen 2, Gen 3 is slower, then maybe your shots on goal for SuperVision might be lower. Because SuperVision seemed to lend itself to start-over, clean sheet designs. But then there's the argument of: Oh, you may not need that. You could, you know, put SuperVision on a diesel F-150, quote unquote, whatever, a dually, and it'd be fine. But it strikes me that I don't know if OEMs are gonna necessarily wanna commit to that kind of thing. So tell me where I'm wrong in terms of: do you need the attachment to clean sheet, and your revenue being, perhaps outside of your control, attached to that ramp-
Mm-hmm
... of legacy car companies? Versus, "Oh, no, in order to sell a car, you're gonna have to have SuperVision, whether it's a hybrid or, you know"-
Yeah.
-or whatever.
I think that it's not necessary to have a clean sheet EV architecture to support SuperVision. I think it's a mix. You know, our customers, I think, you know, much more to come in terms of sort of who the design wins are with and kind of what platforms they're going on. But I don't necessarily think that the majority of OEMs are thinking of this as, you know, needing to be on a clean sheet EV architecture.
Mm.
I think that they're thinking of it as like, we wanna put this on our highest profile vehicles. Now, a lot of OEMs, their highest profile vehicles are EVs.
Mm-hmm. So there's some overlap.
Does that change, right? But, you know, with Porsche, it's not an EV-only-
Okay
... design win.
Okay. Thanks for that, Dan. Questions for Dan Galves? And just wait for the mic, if you don't mind. Thank you.
Could you maybe frame the opportunity or, you know, how you think about companies that are coming from different backgrounds, say, cellular modem or GPU, compete in this space, and how have you seen things evolve for them?
Yeah. So I think that, you know, our competition in kind of the single front-facing camera ADAS, you know, the comply-with-safety-ratings business, is the same as it always has been. It's other Tier 1 auto suppliers, and we're not seeing competition from, you know, semi companies or GPU providers. I think that those companies, like Qualcomm and NVIDIA, are trying to penetrate into these advanced systems, right? Because I think that their products are not relevant from a cost standpoint, you know, for basic ADAS. It's not new. You know, like I kind of indicated before, you know, NVIDIA's been kind of trying to get into this market since 2015.
I think their kind of approach, and it's generally the same with Qualcomm or other semi providers, is, "Hey, we've got, you know, these kind of very powerful chips, and we're going to tailor them to automotive uses. And, you know, if you can come up with the software, this is kind of the right approach. You know, you're gonna need a lot of processing power. We can help you get there. And we'll create tools and libraries and SDKs." But it's really reliant on the automaker to find the content, the software. And we've seen a lot of attempts at this, and some of them, you know, have come to the road or some are coming.
And I think when you look at the results, you have to think about, you know, kind of what do the OEMs want, right? They want good performance, right? And performance is really all about how broadly these functions will work. Is it just on highway? Is it off highway? Is it all across the country or regions? And kind of what we're finding is that the systems coming to the road are limited by maybe speed. They're limited by, you know, what roads have been mapped. They're limited by geography. There's a lot of limiting factors. So I think from a performance standpoint, you know, our ability to scale and create a system that works everywhere is very positive.
From a cost standpoint, you know, all you really need to do is look at the sensor set on the vehicle and kind of the amount of compute. And so we're seeing efforts that are using two NVIDIA boards and a Qualcomm board, you know, so probably more than $1,000 of compute and, you know, with LiDARs and radars and tons more sensors than we have. And kind of what we're hearing from inside companies is that, you know, these systems that are limited, severely limited in terms of where they operate, are costing, you know, $4,000-$6,000 bill of materials. And we can provide SuperVision at essentially, you know, roughly $1,500 to us and maybe $2,000 total bill of materials to the OEM.
So we feel like we have a significant cost advantage. Ability to customize is something that was the real kind of benefit of working with those companies. But we've, you know, we've created EyeQ Kit, which is an open architecture within our chip, and we've created these kind of knobs, you know, where you can tune the driving experience. So we feel like we're comparable from a customization standpoint. And then the last thing I would say is, like, ability to scale to eyes-off, because a lot of OEMs feel like the real value here is if you can put a system on the road where people don't have to pay attention anymore, and they can do other things, at least on the highway.
You know, because our system, SuperVision, is camera-centric, so we're supporting this broad ODD with just cameras, mapping, and kind of a low-compute driving policy. Our approach is to add a second perception system based on radars and LiDARs, to kind of expand the mean time between failure, to get to the point where you can feel comfortable enough to allow the driver to disengage. The other systems that we're seeing, they're basically throwing everything that they have at the initial problem and still kind of being limited, but then what do you do?
If you've got a system that's not good enough to kind of allow the driver to disengage, except, you know, at 15 miles an hour, but you've thrown everything at the problem, then how do you go to the next step? I think that that's something very appealing about our approach. You know, we feel like we're competing very well against those providers.
Any more questions for Dan? I got one more, before we wrap up here. Would love your views on some of the differences between the Chinese EV makers and the Western ones. Seems to be very topical, and even the Western OEMs themselves, the Germans, the U.S. guys, have, you know, experienced these vehicles and met with the management teams there post-COVID. It's like, "What were you doing during COVID? Like, something, something's changed."
Yeah.
But from your lens, in terms of, how they work-
Speed to market.
Mm-hmm.
You know, it's really unbelievable that you've got, like, startup companies launching a new car every year, you know-
Mm-hmm
... or maybe two. And I think that there is a, you know... We have a lot of people within the company that are kind of directly interacting with automakers, people who are coming from the legacy automaker world. And sometimes they're surprised that it's not really cutting corners, but it's not the same process.
Right.
It's not the same validation process, and it's... But they're willing to take those risks, you know, and I think it's leading to, you know, much faster iteration. Of course, you know, it's helpful when you only have to focus on one type of propulsion system.
It says a lot about your strategy, that you, you know, were ahead of that and doing business with them, to help your other customers and be that kind of vessel, if you will, of, "Hey, look, you need to pay attention to what they're doing."
Yeah.
Super important.
It's a really important market for us.
Yeah. Well, Dan, thanks for spending time with us, and-
You, too.
Again, whenever you're up here, I feel like I could spend hours with you. Unfortunately-
Yeah
... we don't have the time.
It's always fun.
Yeah.
Have a good rest of the conference.
Appreciate it.
Thanks a lot.
You got it.
Thanks, everybody.
Thank you.