Hello, everyone, and our sincere apologies. Unfortunately, we had some tech issues, but we got through them. And I'm really, really glad that we did, because I am so pleased to welcome you to this, the Aptiv Virtual ADAS Roundtable. I'm your host, Chris McNally, head of Global Auto and Mobility, Evercore ISI. And on the heels of returning from our seventh annual ADAS, AV, and AI forum, it could not be a better complement than hosting this ADAS roundtable with the leading supplier, Aptiv. We are going to begin with a presentation from Aptiv shortly, and then we'll come back for a fireside chat. And finally, I'm gonna open it up to you, the investors, for Q&A towards the end of the hour. So write in your questions.
And with us today, Kevin Clark, Chairman and CEO, Anant Thaker, Aptiv's Chief Strategy Officer and SVP of Product, Benjamin Lyon, Aptiv's SVP and CTO, as well as Simon Yang, President of Aptiv China and Asia Pacific. Really, what a panel team. Thank you so much in advance for putting this together. And again, I think because we had such a great panel, maybe that's why we had so many tech issues, but we got there. So with no further ado, I'm gonna hand it over, and welcome, team Aptiv, and hand the floor over to Kevin, as I know you have some bespoke ADAS material prepped for the event. Team and Kevin, thank you so much.
Chris, thanks very much. Thanks for having us. We really appreciate it, and we welcome the opportunity to talk about Aptiv. I think on slide two, everyone's familiar with the safe harbor related to forward-looking statements, so I'll not read through that page, if we could go to the next slide. Chris mentioned those of us from Aptiv who are participating in this presentation. I won't go through and reintroduce everyone. I would say, though, we have strong capabilities here from a technology, from a product, and from a regional standpoint, that we thought would make the conversation today really interesting. So, you'll hear from each one of us during the presentation. So on slide four, just an overview.
To provide some context, Aptiv has roughly three decades of experience providing what have become increasingly complex end-to-end ADAS solutions. And we have a number of firsts: first radar in the car, first sensor fusion of radar and vision systems, first radar and camera in a single unit, first multi-domain controller, and the first supplier to deliver a hands-free L2+ system. We were awarded a full-system L2+ program with General Motors last year, and that includes parking. And then, earlier this year, we announced our first full-system Gen 6 ADAS award, which we're gonna talk a lot about today, with an emerging EV partner. Now, all that capability has translated into technology on over 55 million vehicles.
To date, we've shipped over three million full L2+ systems across 55 nameplates that are equipped with our L2+ solution. That has translated into over $3 billion in revenues. In terms of outlook for 2024, a 20% growth rate, and as you all know, strong customer awards, or bookings, over the last few years. If you go to the next slide. Our view on the ADAS market, and we'll talk about this as we go through the detail: ADAS growth has gone from $15 billion in 2020 to our forecast of $27 billion in 2025, and to $45 billion in 2030.
A lot of this, roughly 50% of this, will be driven by L2 and above ADAS penetration by 2030. Now, that represents mass democratization of L2, L2+, L3 solutions. In order to do that, you need to spread advanced ADAS technologies across a broader portfolio of vehicles, including mass market vehicles. The ability to deliver a high-performing technology at very competitive cost is very important to drive market growth. We're gonna talk about, during our presentation, not only the technology, but how are we taking the cost out of the system? How are we making it easier for our OEMs to adopt the technology, to bring it on vehicles at lower cost, and provide them flexibility to get to where they wanna be as it relates to their overall ADAS solution.
And a big part of the overall content we'll talk about is going to be growth in software, which we estimate will be $9 billion by 2030, so an important part of the value proposition that we offer. If you go to the next slide. Having been in the business for 30 years, we're very, very familiar with the challenges that our OEM customers have, especially with the next generation of ADAS. And our real focus, again, just reiterating what I said, is how do we deliver better performance at lower cost, while at the same time, you know, providing or supporting our customers as it relates to their strategies and their capabilities, giving them choice? That's really important in terms of what we deliver.
So, you'll hear us talk about an open platform, a flexible system that's vision agnostic or perception system agnostic, as well as SoC agnostic, and that's one of the big value propositions that we think our solution, our Gen 6 ADAS solution, delivers.
... So with that, I'll turn it over to Anant, and he's gonna talk about how our system is designed and architected, and how we actually do that. So, Anant?
All right. Thanks, Kevin. So, you know, I think Kevin really hit those customer needs and pain points quite well, right? Quite simply, our customers are looking for really strong and improved performance at a lower cost, and then flexibility and choice across the technology stack in delivering that. So, you know, how do we do that with our Gen 6 platform? The left side of the screen here really shows the key elements of the Gen 6 platform, which spans from the sensors at the bottom all the way up to the cloud. But there are really three key elements that I'd like to focus on and that we're gonna zoom in on here. First is just, you know, our software architecture, and that's where everything really starts.
And specifically, how we use a cloud-native software architecture, I'll talk a little bit about what that means in a minute, and a container-driven approach that allows us to really modularize our software stack and abstract that from the underlying hardware. And if you pair that with a modern end-to-end DevOps platform that's hosted really in any cloud environment, this allows us and our customers to speed development of the ADAS software stack, to streamline deployment onto vehicles through OTA, and to really optimize the lifecycle management and the cost of that lifecycle management, which ends up being a very substantial cost driver for all of our customers. So, you know, we really start with having the right software architecture.
And then this enables us to really optimize how we leverage AI/ML in enhancing system performance, applying advancements in AI/ML capabilities everywhere throughout our stack: to perception, to behavior, to path planning. And we're able to do so in a way that's as compute efficient as possible, again, with an eye on cost, and we'll talk about that. And these things together allow us to have a platform solution that, for our customers, really scales efficiently from compliance-level ADAS all the way up to L2++ and Level 3, while driving significant reuse in the hardware and software components, as well as in the testing and validation for that entire system and stack, which drives favorable economics for us and for our customers.
So I'll really dig into each of these one by one. As I mentioned, right, we really focused first on having the right software architecture. And if you look at the ADAS software features and building blocks, they really follow what we call a services-based architecture, and that means we've broken the stack into logical containers that are independent and interact with each other through open APIs. And because of that separation, we're able to update the software at a container level, right?
Any of these blocks that you see here as containers, with the dotted lines around them, right, we can update without really affecting the rest of the stack, and that allows us to make those updates and refinements more easily and seamlessly, and importantly, more frequently, right, which is really important for the end customers. And then we can also spin up these containers in the system only when we need them for specific feature applications. And that allows you to utilize just the right amount of compute resources, which can be more optimal than if it was all one large stack.
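As a toy illustration of the container idea described here, the following is a minimal Python sketch, not Aptiv's actual code, and with invented service names, of a stack where each feature is an independently versioned module that can be updated or spun up on its own:

```python
# Illustrative sketch only: each feature lives in its own "container" with a
# version, can be updated without rebuilding the rest of the stack, and is
# started on demand so compute is only consumed where it is actually needed.
# All service names (radar_perception, path_planner, ...) are invented.

class Service:
    def __init__(self, name, version):
        self.name = name
        self.version = version
        self.running = False

    def start(self):
        # Spin up only when a feature application needs this container.
        self.running = True

    def update(self, version):
        # Swap this one container; peers are untouched.
        self.version = version


class Stack:
    def __init__(self):
        self.services = {}

    def register(self, name, version):
        self.services[name] = Service(name, version)

    def update(self, name, version):
        # Container-level update: no monolithic rebuild or full re-flash.
        self.services[name].update(version)

    def activate(self, *names):
        # Start just the containers a given feature actually requires.
        for n in names:
            self.services[n].start()


stack = Stack()
for name in ["radar_perception", "vision_perception", "path_planner"]:
    stack.register(name, "1.0")

stack.update("path_planner", "1.1")   # refine one module independently
stack.activate("radar_perception")    # spin up only what is needed

assert stack.services["path_planner"].version == "1.1"
assert stack.services["vision_perception"].version == "1.0"  # untouched
assert stack.services["radar_perception"].running
assert not stack.services["path_planner"].running
```

The point of the sketch is the isolation boundary: an update touches one entry in the registry, which is what makes frequent, small OTA refinements tractable compared to re-validating one large binary.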
And then, you know, finally, that modularity gives our customers more choice: it allows them more choice of partners that we can work with, and within those containers, OEMs can own certain parts and partners can own certain parts, which allows for really flexible collaboration models. Now, so far I've really focused on the containers at the top level of this stack. That's really only possible because of what we're showing here at the lower level, right, in the operating system. So we're using an RTOS, VxWorks, which is really the only real-time operating system that supports these types of containers.
It's the Open Container Initiative format, a very well-known standard, which again gives our customers a lot of flexibility higher up in the stack because anyone can adhere to this format. It's not proprietary. And when you pair that operating system and the open container format with our middleware capability, which abstracts the software from the underlying hardware across different types of SoCs, this gives our customers a lot of choice in which SoCs we're porting our software stack onto. And that openness is particularly relevant in China, for example, where increasingly we're seeing customers that are looking for localized SoC solutions, localized vision solutions, localized map providers, localized cloud providers. So that openness is gonna be really important in China, and Simon is gonna talk about that later.
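The middleware abstraction can be sketched in the same spirit. This is a hypothetical Python illustration (the SoC vendor names and method signatures are invented placeholders, not a real API) of how application code can run unchanged across different SoC backends:

```python
# Illustrative sketch of hardware abstraction: application software talks to
# a middleware interface, and each SoC gets a thin adapter behind it. The
# vendor classes and method names here are invented, not a real vendor SDK.

from abc import ABC, abstractmethod

class SocBackend(ABC):
    """What the middleware requires from any SoC port."""
    @abstractmethod
    def run_inference(self, frame):
        ...

class VendorASoc(SocBackend):
    def run_inference(self, frame):
        return f"vendorA:{frame}"

class VendorBSoc(SocBackend):
    def run_inference(self, frame):
        return f"vendorB:{frame}"

class PerceptionService:
    """The same perception code runs unchanged on either backend."""
    def __init__(self, backend: SocBackend):
        self.backend = backend

    def detect(self, frame):
        return self.backend.run_inference(frame)

# Swapping the SoC is a one-line change; the service code is untouched,
# which is the flexibility being described for localized SoC choices.
assert PerceptionService(VendorASoc()).detect("frame0") == "vendorA:frame0"
assert PerceptionService(VendorBSoc()).detect("frame0") == "vendorB:frame0"
```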
But because this is really so fundamental, I figured we could just walk through it, because I wanna do a comparison of how this compares to typical legacy systems that you see in the market today. So it is kind of a complicated architecture diagram here, but this is what the system stack looks like for the vast majority of Level 2 systems on the road today. You know, on the far left here, you have a vision SoC, and you can see we have these lock symbols because the vision software is really tied to the underlying silicon. It really can't be separated, and that silicon can't be used to host any other software functionality.
And then oftentimes, if there's a parking feature on the stack, you have another SoC that's running the parking perception and the parking features because that's fairly compute intensive. And then you often have a microcontroller, which needs to operate in a real-time environment because that's running your ADAS software stack. And you can see all these different features kind of listed at the top, right? Like radar perception and vision perception, and the highway stack, and motion control. These are all your underlying software features, but they're all tightly integrated into what's called a monolithic architecture, and that means it's all one large binary of code in which all of these individual components are highly dependent on each other, and they're tightly coupled. So if you change one thing, you really have to change all the other things.
So the issue is that with this type of architecture, you see the following, right? With this monolithic approach, because of the interdependencies, all of these components are quite hard to update, so you can't update the stack as frequently, and because you have to test and validate each time, you build up a long train of changes before you would actually flash this to a new vehicle. Because the software is tightly coupled to the hardware, that reduces your SoC flexibility and drives vendor lock-in for several of the customers. And because you have to split that across multiple units and multiple SoCs, it drives up the underlying cost of the system, you know?
So if we can break apart that monolithic stack, right, which is what we're really focused on, it allows us to take all these components and organize them into individual containers, right? And if we have those containers, as I mentioned before, it allows us to update and refine at an individual level. And if you pair this with Wind River Studio, which is our end-to-end, cloud-native DevOps platform, I can leverage that platform to develop my code, deploy it onto vehicles, and then manage it over the full life cycle, and importantly, really improve these individual components.
So for example, as I mentioned, we're bringing machine learning to our perception capabilities, or to our behavior or path planner, and I can update those individual components, right, as I've highlighted on the screen, without having to touch the rest of the stack. As you can imagine, right, with machine learning, these features continuously improve, so it allows me to continuously make those updates in a seamless way to the vehicles I have on the road. And if I'm working with partners, right, in many cases, we might be working with OEMs who want to own a certain number of the features or bring their own flavor to a certain number of the features, or who may want us to work with a particular vision provider. You know, each party can really work within its individual container.
It leads to a much more seamless and efficient collaboration model. And we've seen from the past that trying to do exactly what we're showing here, highlighting these different ownership pieces, in a monolithic software architecture is incredibly difficult. It really drives up the overall cost, leads to a lot of delays, and can cause all kinds of challenges for our customers. So this is really what we've learned from and what we wanna do differently. And, you know, I mentioned Wind River Studio is a DevOps environment. I think what's quite important is that it's not linked to any one cloud environment, right? It can be deployed onto any public cloud environment or an on-premise cloud environment. We meet our customers where they are.
We see some of our customers work with one hyperscaler, some with another, some that need to work in on-prem. Again, in China, this could have unique characteristics, right? You need to work with certain cloud providers in China. So the flexibility here, you know, from the software up into the cloud is quite important. And just as important, right, since I'm running it on Aptiv's middleware as well as the Wind River software stack from a platform standpoint, you know, that is actually quite flexible with the SoCs on which it can operate, as well as with, you know, various sensor types and stacks it can operate with. And so we're driving really the flexibility up into the cloud and down into the SoC level because of this middleware and because of this software architecture.
And if you think about cloud and SoCs, these are two of the most expensive parts of the technology stack. That's where we're driving the most choice for our customers, right? So ultimately, it really comes back to the three points that Kevin had mentioned, right? Because we're allowing more frequent updates and refinements to the stack, we're driving better performance. Because we can speed up development times, deploy container-sized updates to the vehicles, manage the full life cycle, identify issues, and update more frequently, we're lowering the overall cost significantly for our customers.
Because we're driving that openness up to the cloud and down at the SoC level, and throughout the stack with open APIs, we're giving our customers a lot of flexibility and a lot of choice. That software architecture is really what allows us to also deploy AI/ML in a really optimized and efficient way with our Gen 6 platform. This drives significant performance benefits in how the system performs and operates. Just as an example, in our perception capabilities, the way we deploy ML allows us to see significant improvements in our object detection, size determination, and position estimation. These are all things that are just gonna reduce your overall takeover rate. They're gonna expand your ODD, or operational design domain, so, like, where the vehicle can operate.
Just with radar, for example, so you can imagine a low-visibility environment. Solely with radar, we're able to identify vulnerable road users with a 90% identification rate, and we can much more accurately detect stationary vehicles and objects because of how we're deploying ML. This is something that's very challenging to do with vision alone, and it really improves our overall system performance and availability. And then importantly, when you get into things like behavior and path planning, we leverage ML to really drive more naturalistic, human-like driving in how we do path planning and how the vehicle operates on the road.
So not only does it feel more normal, like how you would typically drive in these types of environments, it also enhances how the system deals with certain situations, like tight curves or roads without lane markers, and it's able to do that leveraging multiple sensor inputs. It's not tied to any one specific sensor configuration. So, again, because we start with the software architecture, it allows us to deploy ML in a highly optimized way, and it comes back to performance, right? The vehicle has higher availability, and it feels much more naturalistic in how it's driving. It lowers the cost because we can really leverage ML with even less expensive sensors like radar. We can really get the most out of it.
It drives down our overall compute usage, and that drives down the cost. And it allows, you know, this openness because we can work with multiple vision suppliers, and we're decoupling the software stack from the underlying SoC. So it meets all of those three key customer needs. And, you know, all of these things really come together. We're not just talking about Level 2+, Level 2++, or Level 3. As Kevin mentioned, and as we showed with that growing market demand, we're seeing it across all levels of ADAS. So for us and for our customers, it's really important that we can drive a scalable solution, a scalable architecture, and reusable components that can meet their requirements all the way from compliance vehicles.
You know, very low-cost vehicles that are just trying to meet GSR requirements or basic NCAP requirements, all the way up to higher levels, like Level 2++ and Level 3. I'll walk through this step by step, but the key takeaway is that when customers are investing in elements of Gen 6, or the full Gen 6 ADAS platform, they have that reusability and scalability across all of these different use cases. For us, that's what really enables true democratization of the technology, for OEMs to deploy across all different types of vehicles within their lineup. Just to walk through what this looks like, right? If I start with compliance, it's really feature driven.
So it starts with, what features are you trying to deliver at a software level, and then what are the sensors, compute, and functionality that you need to enable that? And for compliance features, right, with our Gen 6, we can start with just a simple smart camera and an optional front radar, depending on what the customers need and what level of robustness they're trying to drive. Think of this as GSR II and just your basic compliance capabilities. And then you move up into core, like core Level 2: hands on the wheel, eyes on the road, for lane keeping and ACC.
This is where we're really adding, you know, the corner radar and the front radar, as well as the optional front corner radar, that enables that type of capability. But I'm reusing the assets again: I'm reusing the smart camera that I just deployed in my compliance configuration. And as I move up into Level 2+, hands-off highway capabilities, I'm reusing that same vision capability, I'm reusing that same perception stack, I'm reusing all the same sensors, but now I'm bringing in much more machine learning into how I do perception and behavior and path planning, and even crowdsourced localization, to truly unlock that hands-free capability. And as Kevin mentioned, we've already deployed this to several customers and have over three million units shipped that are hands-free highway systems.
You know, we won our first Gen 6 ADAS award that is now moving off of highway onto all roads, so moving beyond highway. And again, we can scale quite a bit here, right? So everything that I have from a software stack, I'm able to reuse. I'm bringing in a redundant computing platform, so I have a smart camera as well as my central domain controller compute. But I'm able to really reuse all of my same sensor set and all of my same software stack. I'm really just adding that redundant compute.
So if you think about when we approach our customers, if they wanna have a Level 2+ system and ultimately scale that up to Level 2++, we're giving them the ability to reuse all of the testing, all of the validation, all the same assets that they had just for highway, as they expand it to a broader ODD. And ultimately, even when you move up into Level 3, yes, you have to add more sensing, and yes, you have to add more compute, because now you're getting into much more complex scenarios and more highly complex redundancy requirements. But the same perception capabilities, the same underlying software stack, the same underlying compute that I just used for Level 2 become my redundancy case in Level 3. All of that reusability is quite important.
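The ladder described here, where each ADAS level reuses the assets of the level below and only adds what it needs, can be sketched roughly like this. The level names and asset groupings are paraphrased from the talk, not an exact bill of materials:

```python
# Illustrative sketch of the scalability ladder: each level's configuration
# is the union of everything added at that level and every level below it,
# so lower-level assets (and their validation) carry forward unchanged.
# Groupings are paraphrased from the presentation, not a real parts list.

LEVELS = ["compliance", "core_l2", "l2_plus", "l2_plus_plus"]

ADDED_AT = {
    "compliance":   {"smart_camera"},                      # + optional front radar
    "core_l2":      {"front_radar", "corner_radars"},
    "l2_plus":      {"ml_perception", "crowdsourced_localization"},
    "l2_plus_plus": {"central_domain_controller"},         # redundant compute
}

def config(level):
    """Assets needed at a level: its own additions plus all lower levels'."""
    idx = LEVELS.index(level)
    assets = set()
    for lvl in LEVELS[:idx + 1]:
        assets |= ADDED_AT[lvl]
    return assets

# Reuse property: every level's asset set contains the level below it.
for lower, higher in zip(LEVELS, LEVELS[1:]):
    assert config(lower) <= config(higher)

# The compliance smart camera is still in service all the way up the ladder.
assert "smart_camera" in config("l2_plus_plus")
```

The subset check is the whole argument: investment and validation done at one level are, by construction, part of every higher level's configuration.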
And, you know, if you look at this stack, importantly, I've focused on sensors and the software stack. The software platform, which is the middleware on top of which all of this is built, and the DevOps environment that I'm using to, again, develop the software, deploy it to vehicles, and manage the entire life cycle, those are reusable across all levels. So we're using that everywhere from compliance all the way up to Level 3, and that drives a lot of scale for us and a lot of improved cost effectiveness for our customers. And so if I, again, tie this back to the point of all this scalability, ultimately, it's driving, again, performance. Because it's the same assets that are being used across multiple levels of ADAS, they're mature, they're tested.
We have significant testing and validation that our customers can benefit from. The same things they invest in at one level, they get the benefit of at the others. Because they're only making that investment once, they get to scale that cost across multiple levels of ADAS as they grow across their vehicle lineups, and that drives down their overall cost for deploying, maintaining, and updating this over the full life cycle. And because we're allowing them to pick the best system that meets their vehicle needs, that drives a lot of choice, and we can, again, engage with our customers at just one level of this. We can provide just the perception capabilities, or we can provide the full system. So we're giving them choice across the ADAS levels, as well as across the entire technology stack.
And having that choice, having that flexibility and openness, it's not just relevant, you know, globally across all markets, based on what we're seeing from our customers, it's particularly relevant in the China market. And I think that's really a perfect segue to Simon, who's gonna talk about what we're seeing in China. Simon?
Yes. Thanks, Anant. Let's talk about China. As we are showing here, China is driving ADAS adoption, with three major drivers. One is increasing consumer demand, with high expectations on functionality, particularly hands-free functions. Two, OEMs, especially the Chinese local OEMs, are using ADAS to win share in a highly competitive Chinese market, as you all are aware. And three, a very strong government promoting innovation. L2 and above adoption is expected to increase to 60% by 2030. For suppliers, it's very important to differentiate and win in China. At Aptiv, we are focused on positioning ourselves to be successful in the China market, which is the largest globally. Next slide, please. Our ADAS strategy allows us to adapt effectively to specific regional needs. As Anant just showed you, the key product characteristics are the same....
That includes scalable, modular solutions, a modern software architecture, and optimized AI/machine learning deployment. We then tailor our solutions specifically for China. We have a fully in-region team from a manufacturing and engineering perspective. Because local OEMs move very fast, as you know, they are more open to full-system solutions. It is also important to be flexible and hardware agnostic, so we work closely with local SoC, vision, and mapping providers to deliver a full China-based system. Our strategy has allowed us to win several key awards, including, you know, Changan. For Changan, we're using a smart camera with both a local SoC and a local perception algorithm provided by Horizon Robotics. For the Geely project, we're using a smart camera based on a local SoC provided by Axera, and a local perception algorithm provided by MaxEye.
Actually, MaxEye is a Chinese vision supplier, in which we have just completed an equity investment to accelerate deployment of a local solution for the China market. We believe our strategy is what China OEMs need, and we are focused on working with the top players, who will drive volume over the long term. With that, now let me turn it over to Kevin. Next page.
Kevin, you're on mute.
Thanks, Simon. Thanks, Anant. In closing, hopefully the presentation made very clear that we believe we're perfectly positioned to capture a significant amount of value from the innovative solutions we've developed in and around ADAS. Clearly, the market is growing, and it's growing rapidly. As I mentioned earlier, nearly 50% of the global automotive market will be L2 or above by 2030, so there's a tremendous amount of content opportunity there. Our approach is what our customers are looking for in terms of scalability, openness, and the ability to drive down cost while at the same time increasing performance.
All of the architecture of our Gen 6 ADAS solution not only provides flexibility and optionality for Western OEMs, but it also has been designed to deliver or operate, you know, in the context of the specific needs that the China market has today, and we believe will continue to evolve. So with that, Chris, we'll turn it over to you.
Thanks so much, team, Kevin, and Anant and Simon. That was a fantastic tech overview, but also, I think, a great 101 on how to think about modular and evolutionary ADAS systems. You know, what I wanted to do was maybe address some of the numbers and the market drivers, particularly the moving OEM pieces, which I know investors will ask about if I don't cover them in the Q&A. You know, trying to think about that roughly $3 billion in ADAS revenue, how we can grow that when we look at legacy versus these new OEMs, and a lot of that is gonna fall into Simon's realm.
So, you know, the first question is, however is the best way to think about it: the percentage of your ADAS book, whether it's on an orders basis or revenue, so that I can think about the US, Europe, and China split, and then, you know, sort of carry that forward to, kind of, won business. I know you have a large percentage of your overall Aptiv business with domestic Chinese exposure, but we're trying to hone in on that same sort of question for ADAS. So however is the best way to answer the geographic-
Sure
... or the legacy versus new question.
So, Chris, I'll take the initial shot. So when you think about just given our history in ADAS, and you think about the evolution of ADAS, it's probably no surprise to you that the majority of our revenues sit within North America and Europe, with the Asia Pacific region accounting for, I think, just north of 20%, and a big piece of that being China. When you look at our mix of ADAS solutions for the China market today, from a revenue and booking standpoint, roughly 2/3 of that is with the local OEMs-
Sure
... 1/3 with the multinationals. I would say there's a tremendous amount of opportunity in front of us with the Chinese locals, both for the China market as well as opportunities with those who are currently exporting and are looking at manufacturing outside of China, just given our supply chain outside of China and our knowledge of the regulatory requirements and the quality requirements that are necessary to be successful in those markets. So we feel like that's a very big opportunity, and we have a very strong funnel. Having said that, and we've talked about it in the past, when you look at our legacy OEM customers that we have large ADAS programs with today, I would say for virtually all of them, we're working on extensions or expansions of those programs.
You know, there are a number of Western OEMs that we're working with now or in discussions about additional opportunities beyond what we historically had. So big opportunity in China, I guess, just to cut to the chase, but listen, significant opportunity in the West, and probably over a period of time, you'll see more of a balancing between China, principally with the locals, as well as Europe and North America.
Yeah, I would love to get Simon's take. I mean, Simon, the way I think about it is, you know, two-thirds, one-third, approaching pretty much where the market is. You know, depending upon which month and which numbers, you're seeing almost a 70/30 split, and there's very little indication that that number is gonna change, if not grow. So yeah, would love your take on how you're sort of trying to get ahead of future market share changes, you know, for the next couple of years.
So as you just mentioned, you know, a 70/30 split, right? The locals are really gaining share. This is also where our system and our team are focused. Kevin already mentioned, you know, two-thirds of today's bookings are with local OEMs, leading local OEMs. So we're focused on 10 to 15 major OEMs who can drive the volume. Also, at the same time, those top 10 to 15 local OEMs plan to expand outside China. This is where our advantage lies compared to the pure local competitors, because we can really bring in our know-how of regulatory requirements and how to meet them globally.
Obviously, the system we developed, as we just mentioned, right, is scalable and modular, because the China market is also that way, right? From, you know, entry level to L2, you see a lot of competition, but our platform is developed end-to-end, scaling up to the higher end. And with the local ecosystem, we're developing on, you know, the local SoCs. That gives us not only the ability to serve that China-based portion, that 70%, which may go even higher, but at the same time it actually provides us with competitiveness in the market. So, you know, from entry level to the whole full system, we're, you know, engaging with the local ecosystem with very competitive products.
Yeah, I would say, I think for most investors, the idea that Aptiv — as I've put it, sort of the Switzerland of ADAS — can work, with that flexible approach Anant talked about: working with Horizon Robotics was the example you gave, and Axera, which is an even more low-end, cheaper chip for the mass market in China. Simon, if you could just expand: who are your primary competitors when you're in these RFPs and RFQs at the tier-one level in China? Obviously the global players, but I think what we're all curious about is the size of that competitive field — and you mentioned there are also some perception players.
So maybe a little bit about who you're competing against in China at the tier one level.
Okay, so, actually, as I mentioned, Aptiv covers everything from entry level to L2++, right? So from a full-system competition standpoint, the only player we view locally is probably Huawei. And the remaining local players — you mentioned a couple of them — are mostly in different segments. Some of them compete at entry level, the L2 level; there are several local players there, like Freetech and NettFour. Some of our global peers compete in that L2 segment too, but not the full system.
But really, to answer your question, there is no single player other than Huawei that is able to do locally in China what Aptiv is able to do.
And I remember, many years back — Kevin, you probably remember — you talked about the percentage of RFQs that you were winning, that Aptiv was on a select list for prime business, and it was a very high percentage. Is that the same case in China? Or, even though you're so well positioned, should we think about China as a more competitive market than what you had in Europe or the US?
Simon, let me start. I don't have our percentage win rate in China in front of me right now. I would say, Chris, we're really focused on a subset of the broader base of OEMs there. Simon mentioned the top 10 — sometimes we go down to 15. Those are the ones focused on quality and performance as well as cost, and ideally — not all of them, but ideally — they have plans to take their vehicles overseas, so there's incremental value we bring from an overall system standpoint. So I'd say that on those opportunities we're pursuing, the award rate is relatively high.
Perfect, um-
Anything you want to add to that, Simon?
Yeah, to add to what Kevin just mentioned: our win rate locally is actually similar to our win rate globally. The reason I mention the market mix is that, as we said before, two-thirds of our bookings are with locals, almost matching the market percentage — and our win rate is similar.
It's just that, because it's a large market with a lot of players, we do get a higher number of RFQs. The numbers are higher, but percentage-wise, I would say it's the same as our global win rate.
That's really impressive.
Hey, Chris, sorry, just one more thing I would add on the win rate. Especially once we get anchored, right? I mentioned the flexibility — sometimes we start out with just perception, then we build our way up. Once we're an incumbent with an OEM, our win rate exceeds 70% if I look at all active safety over the last four years. We see that in China, and we see that globally. It's certainly lower when we're in conquest mode, when we're first getting in there. But once we're anchored, we see very high win rates.
That's fantastic. I mean, those check the boxes for China, which obviously has even more rapid deployment cycles. And you touched on where I was going next: this idea of incremental content as you move up with the same OEMs. Can you describe — however you want to define an advanced Level 2+ or Level 2++ — what the incremental levels of content per vehicle are for Aptiv, for maybe an average today? You provided all those stats on the extra radar — the five-camera, five-radar configuration, which is becoming increasingly standard for higher levels of Level 2+.
Yeah. So going back to that chart, as we sort of walked up it: if you're starting with L0 or L1, that can go up to around a $350 price point. Our L2 systems can go sub-$500 if we're really optimizing, again to drive cost benefit to our customers, depending on how they wanna deploy it. When we get into hands-free, it can vary if you're including parking and these other capabilities, but it can get into around a $1,200 CPV price point range. And then you maybe get to around $1,500 when we're getting into L2++.
And if you take any of those numbers and compare them to a lot of our peer set — and this is in competitive RFQs — we've typically found we can bring the same type of capability with anywhere from 15%-25% savings relative to those alternative solutions. So we drive high CPVs for us, but we can also drive value and savings for our customers.
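To make that ladder concrete, here is the CPV arithmetic as quoted on the call; the savings calculation is purely illustrative (the function and its range check are my own framing, not an Aptiv formula):

```python
# Content-per-vehicle (CPV) price points quoted on the call, in USD.
cpv = {
    "L0/L1": 350,            # "can go up to around $350"
    "L2": 500,               # "sub-$500 if we're really optimizing"
    "hands-free L2+": 1200,  # varies with parking and other capabilities
    "L2++": 1500,
}

def aptiv_price(peer_price: float, savings: float) -> float:
    """Illustrative math for the quoted position: same capability at
    15-25% below alternative solutions in competitive RFQs."""
    if not 0.15 <= savings <= 0.25:
        raise ValueError("the call quoted a 15-25% savings range")
    return peer_price * (1 - savings)

# e.g. a peer's $1,500 L2++ quote, at the 20% midpoint of the range:
print(aptiv_price(cpv["L2++"], 0.20))  # 1200.0
```

At the midpoint of the quoted range, a $1,500 peer quote would land at $1,200 — which is how the 15%-25% claim compounds across the whole ladder.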
That's great. So basically, when you go into a competitive RFQ, because of your scale and because you have all these capabilities in-house, you can win that same business — same performance — at 15%-25% lower cost to the OEM?
Right. That's right.
Wow! That's fantastic. Maybe we could dive in a little on the tech, because I think it's really important for Gen 6, where there's obviously a lot of excitement. Can we talk about the radar capabilities? How you think about legacy radar versus 4D imaging radar, which allows for so much more performance — there are only a couple of products out there competitively doing 4D imaging radar. So I would love an update: when that's coming to market, how that will play into Gen 6, and then, if you can, a little more detail on the roadmap for Gen 6 after these first initial wins.
Sure, I can start. So we're actually going into production with a customer on an L2+/L3 system where they wanted to deploy an imaging radar. It's gonna go into production, I think, end of this year or early next year. So we're already going into production with that type of imaging radar in those deployments. The other thing I'd mention, Chris, is that relative to past generations, where you sometimes had a different architecture for the front radar versus the corner radars, we're now driving much more commonality in the platform. We can actually use the same radar for the front and the corners, which drives much more scalability for us.
And we've found that, with the performance we can get from that, even when we get into Level 2+ — or our first Gen 6 Level 2++, which is getting into on-highway and those urban environments — we actually don't need an imaging radar. We're able to do it with the rest of our radar platform, with our perception capabilities and the ML we can apply to that radar. All the way up to Level 2+ and even Level 2++, we don't necessarily need to deploy an imaging radar; we can get really good performance with the rest of our perception stack.
Then when you get into Level 3, I think that's where you might start to see more of those deployments.
That's great. And just from a tech perspective, what kind of channel-count performance does the imaging radar have, when we think about specs?
Yeah, Benjamin, maybe, do you wanna cover this one, too?
Yeah, I mean, we can provide you kind of more details offline, but I think the short-
Mm-hmm
... the short version is that it's actually not about the number of channels. It's about how clean a signal-to-noise ratio you can get, and balancing between those channels in order to be able to see a really big object, like a truck, with a small object, like a pedestrian or a motorcycle, right next to it. And one of the things that's really awesome about the machine learning we're doing inside of our radar is that we're able to achieve the same discrimination between those kinds of objects using a much smaller physical channel count than you see on some of our competitors' radars. And that allows us to drive down the cost of the hardware and the radar SoC — by turning a hardware problem into a software problem, using ML at the radar level.
Yeah, and just to make it specific: we've seen we can get a lot of performance from just a 16-by-16 array relative to a 48-by-48.
Yeah, perfect.
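To put rough numbers on that tradeoff: if we read "16 by 16" as transmit-by-receive channel counts (an assumption on my part — MIMO radars synthesize a virtual array of Tx × Rx elements), the gap in classical angular resolution is what the radar-level ML is claimed to close. A back-of-envelope sketch, not Aptiv's actual array geometry:

```python
import math

def virtual_elements(n_tx: int, n_rx: int) -> int:
    # A MIMO radar with n_tx transmitters and n_rx receivers
    # synthesizes a virtual array of n_tx * n_rx elements.
    return n_tx * n_rx

def rayleigh_resolution_deg(n_elements: int, spacing_wavelengths: float = 0.5) -> float:
    # Classical (Rayleigh-style) angular resolution of a uniform linear
    # array near boresight: theta ~ lambda / aperture, in radians.
    aperture = n_elements * spacing_wavelengths  # in wavelengths
    return math.degrees(1.0 / aperture)

small = virtual_elements(16, 16)   # 256 virtual channels
large = virtual_elements(48, 48)   # 2304 virtual channels

# Per-axis comparison, treating one row of each virtual array as a ULA:
res_16 = rayleigh_resolution_deg(16)
res_48 = rayleigh_resolution_deg(48)
print(f"16-element axis: {res_16:.1f} deg; 48-element axis: {res_48:.1f} deg")
# ML-based super-resolution aims to recover discrimination beyond this
# classical limit, shifting cost from antenna hardware into software.
```

The point of the sketch is the direction of the tradeoff, not the exact figures: a 9x reduction in physical channels is a large hardware saving if software can recover the lost classical resolution.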
Amazing. Benjamin, maybe we can stay on you and talk about some of Aptiv's strategic partners — I'm thinking about some of these larger compute platforms. We talked about China, which is obviously quite cost competitive. Maybe you could talk a little about the relationships with some of the larger compute players, your Qualcomms and your NVIDIAs, where we're seeing work being done at the OEM level — particularly on drive policy, by the likes of Daimler or BMW, who still want to control drive policy but are gonna need a lot of help from players like yourself.
So, relationships on the larger compute side as we start to think about Level 3 programs — maybe you can just expand upon that.
Yeah, well, a couple of things. First off, what you're touching on is actually two things: one, how modular we need to be, and two, the abstraction of the hardware from the software. Because we're actually commercializing autonomous driving at all levels of the stack — all the way from just compliance up through L2++ and into L3 — it's important that our stack, while leveraging end-to-end in key areas like perception, or in drive policy and decision-making, or in mapping and world modeling, has natural breakpoints that allow us to meet our customers where they're at. So that's piece one. Piece two is we've invested significantly in our middleware stack so that we can work with all the various major SoC players, and we do.
And that's not just the major ones you see coming out of the United States or globally, but also in China. For China, that's a huge piece of our ability to accelerate in that market: the fact that we can work with the Horizons and the Axeras and the Black Sesames of the world. That's through that abstraction layer. So we really focus on an open platform with standard APIs, as opposed to getting locked into a specific proprietary NPU (neural processing unit) or GPU architecture that then doesn't allow a global application that really meets the needs of each region.
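The abstraction Benjamin describes can be sketched as an interface that the rest of the stack programs against, with one backend per SoC runtime. Every name below is hypothetical — this is the general pattern, not Aptiv's middleware API:

```python
from abc import ABC, abstractmethod

class NpuBackend(ABC):
    """Hypothetical abstraction over one SoC vendor's inference runtime."""

    @abstractmethod
    def load_model(self, model_path: str) -> None: ...

    @abstractmethod
    def infer(self, frame: bytes) -> list:
        """Run perception on one sensor frame; return detections."""

class ReferenceCpuBackend(NpuBackend):
    """Stand-in backend so the sketch runs anywhere; a real port would
    wrap a vendor SDK (Qualcomm, NVIDIA, Horizon, ...) behind the same API."""

    def load_model(self, model_path: str) -> None:
        self.model_path = model_path

    def infer(self, frame: bytes) -> list:
        return [("object", len(frame))]  # placeholder detection

def build_perception(backend: NpuBackend) -> NpuBackend:
    # Application code only ever sees the NpuBackend interface, so
    # swapping SoCs is a backend change, not an application rewrite.
    backend.load_model("perception_model.bin")
    return backend

stack = build_perception(ReferenceCpuBackend())
print(stack.infer(b"\x00" * 64))
```

The design choice this illustrates is the one Benjamin names: the application targets the open interface, and porting to a new SoC means implementing one backend rather than reworking the feature stack.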
Great. Perfect. I see some questions from the investors coming in, so please continue to write in. I'm just gonna finish up with one or two here, and then I'll address the investor questions. Benjamin, maybe the last one, on the sensor set that you laid out. You seem to have extreme confidence in radar — from what I can tell from the slides, LiDAR probably only needs to be involved at Level 3 and above.
And again there's the flexibility and the agnostic nature, that you could work with any external provider of LiDAR, for example. If you could go through that same sort of logic on the sensor set.
Yeah. Well, one of the things that's interesting about LiDAR is that it's a direct depth sensor, whereas camera is an indirect one: you have to infer from video what is actually where in the world. Now, the nice thing about radar is that it's completely solid state, and it's also a direct depth sensor. So one of the things we'll see going forward is the combination of radar plus camera in a multimodal system, which you really need for L3 and above. And that combination isn't static — it's getting better and better, which keeps eating into the LiDAR use cases.
If you rewind the clock five or six years, folks thought, "Gosh, you need the LiDAR for all these different scenarios" — for example, detecting pedestrians near large objects. With the introduction of ML into radar, and then fusing radar together with camera in more effective ways, the number of use cases where LiDAR really shines has shrunk. So we'll see, as L3 and L3-plus systems come to market — and right now that market is very, very narrow — whether there's really a need for LiDAR, and whether there's a need for LiDAR at the kind of spec that you see on research—
Yeah
... you know, research prototypes, like a $200,000 vehicle—
Yeah
- that a prototyping company is doing, like a Waymo or something like that.
Yep.
At the end of the day, we have to develop for things that work at scale, and LiDAR at scale is still not proven. That said, we've developed our architecture to be truly multimodal and able to link in with whoever's LiDAR in the event we need it.
Exactly — LiDAR only becomes practical at some future content per vehicle, maybe under $500, something along those lines. Kevin, it's always my responsibility to come back to the numbers — I can see some of those questions coming in. What I really appreciated was the market outlook slide, which very simply shows the ADAS market roughly doubling every five years, give or take. And if I take your roughly $3 billion of revenue and grow it into next year, I can come to a mid-teens market share.
So a question that comes in — without getting you to commit to 2030 guidance — is that a good way to think about it? That maintaining or growing market share would give you that sort of CAGR looking out to 2030? Just something to help us frame the revenue opportunity over the next five or six years.
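For reference, the arithmetic behind that framing works out as follows (figures are the ones quoted on the call; the 15% share and six-year horizon are illustrative assumptions, not guidance):

```python
# A market that doubles every five years implies a ~15% CAGR:
# (1 + g)^5 = 2  =>  g = 2^(1/5) - 1
cagr = 2 ** (1 / 5) - 1

# Aptiv's ~$3B ADAS revenue outlook for 2024 against a mid-teens
# share (assume 15% for illustration) implies a ~$20B market today.
revenue_2024_bn = 3.0
implied_market_bn = revenue_2024_bn / 0.15

# Growing revenue at the market rate through 2030 (six years):
revenue_2030_bn = revenue_2024_bn * (1 + cagr) ** 6
print(f"CAGR ~{cagr:.1%}; implied market ~${implied_market_bn:.0f}B; "
      f"2030 revenue at market growth ~${revenue_2030_bn:.1f}B")
```

This is just the "hold share in a doubling market" case; Kevin's answer below is that Aptiv expects to keep growing above that market rate.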
Yeah, Chris, listen, I think over the last several years, right, we've-
... our revenue growth has exceeded market growth for
Yeah
- for ADAS. I think given the investments we've made and the solutions we've developed, I would expect that above-market rate to continue, right?
Yeah.
It may not be perfect quarter by quarter, as you know. But given the solution we've developed, the flexibility it provides our OEM customers, the lower cost, and the ability to take a platform and sell big portions of it in China as well as in the West, given all the geopolitics — those are competitive factors that should position us well for the future. So the way you framed it up, we're confident in it.
No, that's fantastic. Because if I think about the market growing 15% and you growing a little better than that, the rough math — and this is why I think you know we wanted to do this call — that alone is basically three points of content-per-vehicle growth to Aptiv as a total company, forgetting—
Right
... the other 80% of the business. So that's really exciting. And then, you don't break out ADAS margins, but obviously it's becoming a material part of your overall margin framework. Joe, a long, long time ago, used to use this double-digit EBITDA margin framework once—
Yeah
... we sort of scaled past $750 million. A lot has changed, and again, I'm not looking for anything specific year over year, but can you talk about how to think about the margin — where we are, and where we could be going as that software component, which obviously carries a high incremental margin, grows?
Yeah. So our ADAS business today is roughly $3 billion in revenues — that's our outlook for 2024. And we would say that margin rate is roughly in line with, to slightly higher than, our AS&UX segment margin rates. And you've seen a significant improvement there over the last couple of years, for a couple of reasons. One, when we went through the supply chain crisis and inflation on semiconductor chips, as you know, there was a significant impact on our material cost. That seems to have stabilized, and we're not seeing the same levels of inflation that we've seen in the past. Two, part of what we've talked about, what we're enabling here, is productizing our ADAS solution.
So we get far more reuse today on everything we've done in the past — software as well as hardware — than under the traditional business model, where customers often tried to pull us into a customized ADAS solution for a given OEM. Now we're really forcing a high level of reuse, utilizing our Gen 6 ADAS platform with some amount of flexibility. And I think, to be honest, we're doing a better job presenting to them the value proposition — the cost of customization, the cost of change, and what it means for them, let alone for us. So we've seen engineering costs come down significantly; I think we've talked about that on some of our earnings calls.
So given what we've developed from a product standpoint, what we're doing from an engineering productivity standpoint, and software being a bigger part of the value prop, this is an area where we'd expect the margins to continue to increase. And in the past we've talked about kind of high-teens levels — that continues to be our expectation.
That's great. And Kevin, when I model it myself, I think of it this way: because the investment is already in, this could also be a low-to-mid-twenties incremental-margin business on whatever that percentage growth is next year. So I can get a sense of the pace toward the high teens you mentioned by 2030. Is that still a good rule of thumb?
Yeah. Yeah. Yep, that's a good rule of thumb.
Fantastic. Well, look, I've hit some of the questions that are in the Q&A, but let me bring some more up here — and again, keep writing them in. I'm gonna try to group some of them together. Benjamin, this is probably for you. Can we talk more specifically about the Gen 6 program and traction with other OEMs? And there are a couple of questions here about the path from Gen 6 — can that grow into a Level 3 solution as well?
Yeah, Anant, you want to jump in on that, and then I'll—
Yeah, maybe I can start with this one. Just to give an overview of the solution: the award we have with the EMEA OEM, which I think we've mentioned in the past, right? That's full end-to-end. Importantly, that's what we'd call Level 2 plus-plus — I know everyone has their own definitions of that. For us, it means it's eyes-on-the-road, hands-free, but it's also moving off-highway. So that was important — everything we showed in the technology stack is gonna be deployed there. Importantly, it's gonna be leveraging another vision provider that we're bringing into this discussion.
Importantly, it's gonna be leveraging the full Wind River software stack — the containerized ADAS architecture we described — so all of the ML assets I described going into perception as well as behavior and path planning. It really brings the full breadth of our technology and capabilities, and it's gonna go into production at the end of 2026. And it does scale absolutely into Level 3, if that was the question. If you think about Level 3, you're bringing a fully redundant architecture, which is required for that type of system capability, and all of those same assets are leveraged when you go up to Level 3 — so you might add, as Benjamin mentioned, potentially a LiDAR.
We're obviously looking at what else we can do with just radar and vision fused together, but you potentially add some more sensors, and you add another high-performance compute. Everything I just used in this Level 2 plus-plus carries over; I'm really just adding more sensing and compute redundancy and capability around it. It's that reusability going up — and, importantly, even going back down to lower levels of ADAS — that really drives a lot of value for our customers.
You mentioned end-to-end, so my question is: where does the training data sit for something like Gen 6?
Yeah. So we have quite a bit of data that we leverage from our existing and prior programs. We've collected millions of miles of data in testing and validating prior systems. We can take our new software and test and validate it against that data — to identify use cases, to do resimulation, to really improve the performance of our software stack. And that improves the machine learning. As Benjamin mentioned, for our customers we may need some natural breakpoints: whether we're just delivering a perception system and want to maximize the machine learning capabilities for that, or we want to go full-on from perception all the way into the full software and feature stack, we can leverage the data for either.
And given how many Level 2+ deployments we've done in the past, I think we probably have a fairly unmatched data set that we can leverage for those capabilities.
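The resimulation loop Anant refers to — replaying logged drives through a candidate software stack and scoring it against labeled ground truth — can be sketched roughly like this. All names, the toy data, and the exact-match metric are hypothetical; real pipelines use far richer data formats and metrics:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class LoggedFrame:
    """One recorded time step from a past drive."""
    sensor_data: dict          # raw camera/radar samples
    ground_truth: List[str]    # labeled objects for this frame

def resimulate(frames: Iterable[LoggedFrame],
               perception: Callable[[dict], List[str]]) -> float:
    """Replay logged sensor data through a candidate perception stack and
    return the fraction of frames whose detections match the labels."""
    hits = total = 0
    for frame in frames:
        hits += perception(frame.sensor_data) == frame.ground_truth
        total += 1
    return hits / total if total else 0.0

# Toy replay: a stub "new" stack evaluated against two logged frames.
log = [
    LoggedFrame({"radar": [0.9]}, ["car"]),
    LoggedFrame({"radar": [0.1]}, []),
]
stub_stack = lambda sd: ["car"] if sd["radar"][0] > 0.5 else []
print(resimulate(log, stub_stack))  # 1.0 on this toy log
```

The value of the pattern is that every software revision can be regression-tested against the same recorded miles before it ever touches a vehicle, which is how a large historical data set compounds into a validation advantage.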
That's it.
Chris, I'd just add one thing to what Anant said. There's the feature you deploy to the customer, but there's also the AI factory you use to develop, train, and test — because you want to guarantee safety, security, and of course increased performance. That's the foundation you build the house on. And one of the things that's great about what Aptiv is doing is that, as we invest in ML, we're investing in ML in the DevOps stack — the AI factory — not just in feature development and deployment.
Great. I'm gonna combine a couple of questions here about Wind River, but I think this one encapsulates it best. How do we think about the challenges of incorporating the Wind River middleware for traditional OEMs? Do they have to make changes to their software architecture? Do they need zonal architecture, OTA capabilities? Because we are seeing a slower-than-expected changeover in electrical architectures — you're seeing that with VW and Rivian; that's been a discussion point, and another webinar that would be great to do with Aptiv. But everyone's going slower than expected there. Does Wind River help, or is it actually a bottleneck with these traditional OEMs?
Yeah, I can start there, Chris. There's no bottleneck related to the deployment of zonal architectures. Even if an OEM is delaying or changing the timing of a zonal architecture, they're typically deploying ADAS in some other form: they might wanna use a domain controller within a domain architecture, or one of the other variants I mentioned — it may just be a smart camera, where the compute is embedded in the camera and you're surrounded by radars. We can deploy the Wind River operating system whether it's in a smart camera, a domain controller, or an even higher-performance compute than an ADAS domain controller. So it has that flexibility. And the porting is actually not terribly difficult: VxWorks is a POSIX operating system.
That means it's specifically designed so you can migrate from any POSIX operating system — that standard is specifically meant to drive portability — and Wind River also has a Linux stack. So for any of our customers who have deployed in Linux environments, we can do that porting. And especially if, like our stack, they're using a container-based or services-based architecture leveraging OCI containers — whether that's on VxWorks or on Linux — we're basically deploying to the same type of software architecture. So we actually think it can scale across zonal, distributed, or domain architectures, and across low-end and high-end variants. We don't see a significant porting effort.
In fact, when we showed the demonstrations at CES and for other customers, we were able to port our own stack quite easily.
That's great. To Kevin's point, I think it gives a little extra confidence that we're not gonna see stairsteps and volatility in the year-to-year growth — it's not gonna be an inhibiting factor. I'm gonna be true to time. I see one more question I'm gonna pull up, because it's a good one, and it's probably for Simon: it's around the eight-hundred-pound gorilla. We used to always get ADAS questions around Tesla, right? We all know their approach — high vertical integration. The questions here are around BYD: obviously also highly vertically integrated, four million units, becoming the de facto leader in China, and also using Horizon and NVIDIA.
Simon, you don't have to name names — I know it's always hard to talk about a customer — but the approach of highly vertically integrated OEMs in China, and how you see the opportunity for Aptiv.
Okay. So we hear a lot about fully vertically integrated OEMs, but actually, if you look at the China market right now, the ones really trying to do ADAS in-house — which is what we're talking about here today—
Yeah
... it's a handful. You've got Tesla, Li Auto, NIO, XPeng, and maybe Xiaomi coming along. And BYD actually is not fully in-house from an ADAS standpoint. The remaining customers are basically, I would say, very similar to our current situation. Obviously, with more data involved, there are several in a second group of customers — such as Great Wall and Geely, I'll use them as examples—
Mm-hmm
... who are trying to do a little bit more in-house. But they still buy the hardware portion of it, and they still buy the software stack — the algorithms — from suppliers, and then do a little integration in-house. Overall, though, the majority of customers are still looking to suppliers like Aptiv to provide turnkey solutions. That's a large group — you can name Dongfeng, BAIC, and others. Those customers tend to rely more on a full-service supplier like us. So we do see the opportunities, even with the vertically integrated OEMs I just mentioned. For L2++, the ones doing it in-house so far are those five.
You guys know Tesla much better than I do, but the remaining four are still doing L2++ in-house. As I mentioned, though, for entry level, L2, and L2+, they've already started talking to us. That speaks to our scalable, modular approach, and especially the ability I mentioned to work with the local ecosystem suppliers. There's another very important element on top of the SoCs I just talked about — the ones you mentioned, we're working with every one of them, because of the way we do the architecture. The other thing, which Kevin mentioned, is supply resilience.
Aptiv has a huge advantage in how we can do this. We mentioned the Chinese locals exporting outside China — that's an opportunity for us. The local SoCs are also an opportunity to reduce overall system cost, because as volume goes up, you're competing in a very dynamic, very competitive market over there. So with everything I just mentioned, Aptiv is actually well positioned; we do not see vertical integration preventing us. Just on Wind River and the toolchain alone, we have a real example: we're launching a project right now with Geely, with Wind River edge products in it.
The customer side doesn't really see that much change, because we have that capability at the lower level — we're able to work with the SoC supplier and, as I mentioned, MaxEye. It's more integration work we can do with the SoC and the vision perception suppliers, rather than the customer side needing to make any changes. So to answer your question, a supplier like us is well positioned to win over there.
Well, Simon, that's absolutely fantastic to hear. A takeaway for me is that if you're able to continue delivering that 15%-25% lower BOM cost on a similar solution in China — whether that's the $3,000 sort of Level 2+ solution everyone's racing to, or the lower-end solutions like BYD's, because they're putting out some $15,000 EVs — it sounds like Aptiv is really well positioned. Gentlemen, thank you so much for joining today. I know we went slightly over an hour, but the content was fantastic. And Kevin, I really appreciate the numbers you shared.
Like I said, I think for investors, knowing that those three points of outgrowth in ADAS are in safe hands for a very long time is gonna be really, really important. Again, thank you so much for joining. For those on the line, if you want to learn more about Aptiv, feel free to email the team; the deck and the replay of this webinar will be up on their site shortly. With that — Kevin, Simon, Anant, and Benjamin — thank you so much, and hope to see you again soon.
Thanks, Chris.
Thanks very much.
Thanks, Chris.
Take care.