NVIDIA Corporation (NVDA)

New Street Research Future of Transportation Conference

Jun 12, 2023

Moderator

Hey, everybody, again. Good afternoon or good evening now to all of you. Thanks for attending this virtual conference on the future of mobility. It's now my pleasure to welcome Ali. He's at NVIDIA as the Vice President and General Manager of the Automotive business, overseeing product engineering and marketing for the DRIVE end-to-end platform, which NVIDIA has been developing and introducing in the automotive industry for a few years now. Ali is going to give us a quick presentation, an intro on his perspective, and after that, we'll have a Q&A.

As always, if you want to participate in the Q&A, feel free to email me questions, and I'll do my best to integrate them into the flow of questions I'll ask Ali. Ali, with that, thanks a lot for joining in, and the floor is yours.

Ali Kani
VP and General Manager, Automotive, NVIDIA

Hi, everyone. How are you? It's my pleasure to be here today to talk to you about NVIDIA's automotive products and strategies. I'm gonna jump in here and just say that, you know, everything we're talking about here today is subject to our disclosures and forward-looking statements. Our financial statements discuss the risks and uncertainties in more detail. With that, I'm gonna jump in and just talk about NVIDIA's strategy. This slide really is the cornerstone of what makes NVIDIA unique in automotive, and it's that we're providing products and services that support partners in their automotive ecosystem, you could say, end-to-end.

What that means is people need to design their cars, and NVIDIA has solutions that help customers design their cars, train their AI models, and simulate their AV stack. We use DRIVE Omniverse to help customers design and build their automotive factories, and also build retail configurators. On the engineering side, we use Omniverse to help people do synthetic data generation, such that they can augment the data they have from their real fleet with synthetic data so that their AV product is better. That's what sets us apart. We're not just selling chips in a car and saying, you know, go build it yourself. We're selling this end-to-end platform for our partners' development.

We're always thinking, how do we make it easier for partners to develop their end-to-end requirements? How do we create the tools and software and services that accelerate their flow? We've said this many times, but the company that has the best, fastest development flow will be the company that is most successful in automotive, because it's so hard to build Level 4, Level 5 software, to build a concierge experience inside the car, where there's an AI that knows everything about you, knows everything about the car, and can talk to you, and to build this end-to-end framework. The more we can make that life cycle better, the better our customers will be and the more differentiated NVIDIA is versus other players in the ecosystem.

This kind of shows you how we think about the ecosystem then. It's not just OEMs, where we say, "Oh, we want this OEM." We want to work with the entire ecosystem. There are software companies that are building AV software. I'm just showing a few of them on this slide, but the point is, NVIDIA is designed to be the best platform for anyone to build their self-driving software on. We take all the learnings from our own development, because as you know, we also build AV software and cockpit software, but we have many partners who are building AV software and cockpit software on our platform. We take the learnings of that entire ecosystem, and we put them into the platform for the industry to share and benefit from.

Same thing for simulation. There are all these simulation options. It doesn't just have to be DRIVE Sim. DRIVE Sim actually has APIs for the entire validation ecosystem, such that you can hook into our platform and accelerate your end-to-end development cycle. The other thing to note is, we're quite successful across the stack. You know, when customers are building NEVs that are software-defined, we happen to play in the majority of those platforms. We have partnerships with many of the major Tier 1s. We're in eight of the largest robotaxi and L4 truck companies. We have partnerships with all of the mapping companies and simulation companies.

The point is, across the ecosystem, some of them are buying our products in the car, but others are just hooking into our ecosystem, such that other partners can take advantage of it for their own AV development. This just shows you some of the product roadmap inside the car. I think the key point I want to stress here is that the needs of AV are constantly increasing in compute requirements, because L4, L5 is not even close to being solved. Sensor resolutions are growing. The complexity of the networks people are developing in the car is growing in sophistication. We now have people who are building BEV transformers in their self-driving cars.

We have customers now who are trying to build, like, a single model, almost like, you know, the way we say large language models, but it's an end-to-end AV network in the car. When you have a situation like this, you need a general-purpose programmable architecture, because you don't know what algorithm you want to use. You just want to be flexible, to be able to change rapidly. We have these products, and the performance is growing significantly, but what we ensure is that the APIs, the same CUDA APIs, the same TensorRT APIs, are architecturally compatible, not just from a high-end SKU to a low-end SKU of Thor, but across generations. Thor is software-compatible with Orin, which is software-compatible with Xavier.

When a customer takes a car, you could essentially imagine someone takes an Orin car, takes out the Orin computer, swaps in the Thor computer, and the software will just run. Now you just have more performance headroom to be able to create new software and services over the life of the car. That's the platform we're building in automotive. I think one other trend that I want to talk about is that, in the past, we lived in a world where every new service or feature a car had was solved with a new computer in the car.

We were getting to the point where people had 100 computers in their car. What happens there is it's super hard to program, super hard to improve that experience, maintain it, and ensure the security of the car, which is super critical. Now what we see is the industry moving towards centralization, such that multiple computers are being integrated, unified into one central computer. That's what Thor was designed to do. We're taking a lot of the innovation we have in the data center. We talk about things like Multi-Instance GPU, which essentially means that Thor actually has multiple compute clusters inside it. You can run one application on one, keep it separate from another, keep it separate from another.
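[Editor's note: the Multi-Instance GPU partitioning described above can be sketched with its data-center counterpart. This is only an illustration of the isolation concept on a MIG-capable data-center GPU such as an A100; Thor's in-vehicle partitioning is not configured this way, and the device index and profile IDs shown are assumed examples that vary by hardware.]

```shell
# Data-center Multi-Instance GPU (MIG) sketch -- illustrative only.
# Requires a MIG-capable GPU and admin rights; IDs below are examples.

# Enable MIG mode on GPU 0
sudo nvidia-smi -i 0 -mig 1

# List the GPU-instance profiles this device supports
nvidia-smi mig -lgip

# Carve GPU 0 into two isolated instances (profile ID 9 is an assumed
# example taken from the -lgip listing) and create compute instances
sudo nvidia-smi mig -i 0 -cgi 9,9 -C

# Each MIG instance now enumerates as its own device, so one workload
# per instance stays isolated from the others
nvidia-smi -L
```

The relevant property is the one Kani describes: each instance has its own compute and memory partition, so an application running on one cannot interfere with another.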

Some of those data center technologies and innovations that NVIDIA has are being put in the car. The reason is, fundamentally, this next-generation self-driving car is truly a data center on wheels, we call it. We get to benefit from the innovation we have in infrastructure by being able to pull it into our vehicle roadmap. The other value of this is that customers now have a lower-power, lower-cost solution that they can program efficiently over the life of the car. The range of an electric vehicle is better when you can integrate all these computers into one core architecture. There's value to the customer beyond just programmability. It's also lower cost and lower power.

One other thing I wanted to talk about is, you know, when we have such a huge investment in a platform strategy, which you now see when we talk about the ecosystem platform, and such a huge investment in software, we care about increasing the installed base of our platform. We're excited about the partnership we announced with MediaTek, where essentially our GPUs, CUDA, and TensorRT get integrated into their SoCs for automotive. Now if a customer wants to have access to CUDA, TensorRT, or DRIVE software, the operating system, the middleware, the application software we've built, we can support that platform with our software, our platform ecosystem advantages. This is just the beginning.

You know, this chiplet that we essentially gave MediaTek access to is available for other partners in the ecosystem, not just in automotive. The point is, when the platform is so valuable and software, CUDA, TensorRT, is such a huge part of our investment, the more we can increase the scale and installed base of that platform, the better for the industry, and we're all in on expanding our footprint. Jumping away from this slide: this slide just gives you a picture of our pipeline. You know, we've announced that the pipeline is $14 billion over the next six years. This shows how the pipeline ramps.

If you take a look at this slide, what I mean is that XPENG and NIO have ramped in production. Over time, you'll start to see new Hyundai and Kia models ramp. You'll start to see the Polestar line. You'll start to see some of the NEVs, like Lucid and BYD, start to ramp, and then you'll see Mercedes and Jaguar Land Rover. You know, with automotive, wins that you announce start to ramp many years in the future. That's why we have a six-year pipeline, to give you a sense of the value of the wins in the ecosystem. It takes time to ramp, and this kind of shows you how that pipeline ramps in over time.

Those are all the slides. Now, you know, I'll open it up to Q&A.

Moderator

Thanks, Ali. Thanks for the slides, and thanks for bringing in the broader perspective, the fact that the NVIDIA DRIVE platform is much more than just a chip in a car. It's very useful. Maybe I'd like to focus my first few questions on better understanding that and helping investors understand it. How does your NVIDIA DRIVE product strategy translate in terms of business model? Do you, at the end of the day, sell mostly chips, or do you think the revenue split in your automotive division will actually make the money you make from selling chips a minority of your overall revenue opportunity?

Ali Kani
VP and General Manager, Automotive, NVIDIA

Let me try to summarize the types of products we have. You know, when someone buys our chips, there's a chip ASP, and there's a software ASP, always, even if you only take the NVIDIA DRIVE operating system. You know, if you go to production, it needs to be safe. There's revenue, a license, just for the operating system being safe. As you go up the stack, we have customers who might say: I want you to build the entire stack end-to-end. We want you to do the L2+ or L3 software. In those cases, the value of the software can grow to be larger than the value of the hardware, and it just depends on what the customer wants.

One of the things we talk about is that our architecture is designed to be modular, such that you take what you want. You know, when you look at that slide of all the partners that we have and how they ramp, note that the vast majority of them take our chip, operating system, and middleware, not the full stack. Full-stack partners are like Jaguar Land Rover and Mercedes. The point is, for us, it's open, it's whatever is best for the partner, and we can layer in more software and services based on what they need. In aggregate, it ends up being somewhere in the middle. You know, if someone wanted us to do all the software work, then the value of the software can be more than the value of the hardware.

The other thing is, those are not our only products, right? Like, if someone wants DRIVE Sim synthetic data generation, then in that case, it's only software, right? We're giving them access to simulator software, and they're just using that to simulate their AV experience. There are also hybrid products like OVX and Constellation, where people buy our hardware, so it's like a replica of an Orin or a Thor that you would have in the car, or it could be x86 CPUs and discrete GPUs, because we have some customers who architect their robotaxis like that. They want some of our DRIVE Sim software, and so that would be software revenue on top. It really depends.

One of the things is we have so many products that it can start with hardware, but actually, in some cases, a customer could not buy any hardware from us and just pay for software, because with DRIVE Sim, you could actually just license the software and not use our hardware at all. We kind of run the gamut, and I think the one thing that's consistent across the stack actually is the software.

Moderator

Okay, that's very helpful.

Ali Kani
VP and General Manager, Automotive, NVIDIA

From us. Yeah.

Moderator

That's very helpful. Then you mentioned, you know, the very important ecosystem aspect of what you do. That is, you're not aiming to be a platform just for NVIDIA products, but you're very open to a broad ecosystem of third parties. The question I was wondering is, do you also open this ecosystem in terms of hardware? You kind of answered that question already, partly, but I'd like to push it a bit further. Do you have clients who would choose onboard hardware that is not NVIDIA, but then use NVIDIA elsewhere?

As I was thinking of the question, I actually thought about Tesla, who is a very well-known, very large client of NVIDIA, because they run among the larger GPU clusters in the world. And yet they have, like, their own chip in the car. I'm pretty sure you can't comment too specifically on Tesla, but, yeah, tell us about how open you are to alternative onboard hardware and how NVIDIA can help an OEM that would choose, like, Mobileye or Qualcomm or another solution on board, but would still want to have NVIDIA involved.

Ali Kani
VP and General Manager, Automotive, NVIDIA

Yeah. We're very open. That's a great question, by the way, because I think it explains our strategy. If you think about the partner list that we have, we have many cases of customers who do not use our computers in the car, and they're really important customers for us in the cloud. You gave the example of Tesla: we help Tesla with training and simulating their AV model using our hardware in the cloud, but they don't use us in the car, and that's perfectly fine. They are a great customer of ours. We architect our stack such that that's totally fine. Now, of course, if someone used us in the car and in the cloud, there are some advantages for them.

Like, they can train on CUDA and TensorRT in the cloud, and those same APIs, the same algorithms developed there, can actually run inference in the car. But you could also train on GPUs in the cloud and then port it to your exact architecture, Mobileye or Qualcomm, in the car, and that's perfectly fine. We have many cases of that, and we're very open to it. We're very supportive of it. The way I would say it is, we wanna help partners regardless of where they use us. Because we have so many layers of software, it's perfectly fine, right? Someone may say, "I only want your help on simulation.

Moderator

Yes.

Ali Kani
VP and General Manager, Automotive, NVIDIA

I'm using someone else." We would say: "That's great. Let us talk to you about how we can help you on simulation. We can give you synthetic data generation. We can help you with a retail configurator. We can help you with training of your own software, software running on Mobileye, in the cloud." Those are all great opportunities for us, and so we're very open to it. We architect it like that. The other thing you kind of mentioned and hinted at is that, of course, there are cases where someone says: "Well, I'd love to have access to some of this capability in the car, but I don't want to buy your chip.

Is there a way that I could license some of the things that you have in your car and put it in my own SoC?" The answer to that, of course, is yes, because that's what we just announced with MediaTek, right? Yeah, that's fine. If you want to have access to some of our capability and put it in your car, it could be a competitor, it could be just an OEM, and we would be happy to give them access to, you know, the GPU, and let them build their own solution in the car. Then, of course, we can help them in the cloud based on that.

Moderator

Yeah. Okay, that's very useful. Now I'm going to ask you the really tough question on this one. You mentioned, like, a $14 billion-$16 billion pipeline, and now, as you can imagine, I want to put that in my spreadsheet, and I'm wondering how this pipeline splits between hardware and software and platforms and full stack and operating system and middleware, et cetera. Can you comment on that in some way?

Ali Kani
VP and General Manager, Automotive, NVIDIA

Let me answer it slightly differently. If you saw the way I showed the pipeline, the full-stack customers are at the very end of this forecast period, right? The majority of what you see are customers that skew more toward SoC and lower software revenue. Like I said, as we scale up and do the full stack, those numbers can be larger than they are. I'm just gonna answer it like that and not give the average across the six years. I think it's just more a function of the customer base and, you know, what they end up wanting. There's a mix in that pipeline.

It's true that for some of the partners, the software revenue is larger than the hardware revenue.

Moderator

Okay, before we move on to another topic, one very last question. An OEM customer who partners with you for, like, training or simulation clusters would be more of a data center client at the end of the day. Is that typically a customer recognized in your division, or is that a customer that will be sitting more in the data center division?

Ali Kani
VP and General Manager, Automotive, NVIDIA

Yeah. The way I would answer this is, first, we report it in data center. The cloud, the infrastructure, we report in data center. But as an internal business unit, how do we run the business? We actually manage it within the automotive team, because we're constantly working together, right? We're talking to a customer about the car, and we talk to them about the cloud at the same time. Internally, we manage it as one business, but just to keep it simple for the investor community, we choose to take the cloud and put it in the data center revenues.

Moderator

Okay, so the same logic applies to your pipeline. All the pipeline you talk about is revenue that will be reported in the automotive division. Okay, great. Thanks a lot for these clarifications. Let me ask you one question. If we look at the broader passenger car opportunity, it's really NVIDIA, Tesla, and Mobileye today. Tesla is as integrated as can be. They do everything in-house, and Mobileye is offering a fairly integrated package. They've opened it a bit over time, offering APIs and customization opportunities, because very clearly there was a need for that in the market. You are, like, the most open player.

The question I have for you is: How do you think about that potentially creating, like, a dependency for the adoption of your technology? Which is that, if you offer a more open environment, then one of the reasons OEMs will choose you is because they want to develop their own technology and do their own system integration. We unfortunately know that sometimes these companies struggle to do a good job at that and might end up, you know, delaying the technology. We've heard that. I'm not going to give any names, but we've heard news flow about large car manufacturers throwing themselves into these initiatives and actually failing to deliver.

Do you see that as a risk to your open strategy versus offering more of a turnkey solution to your OEM clients?

Ali Kani
VP and General Manager, Automotive, NVIDIA

First, let me just say: remember that our strategy is all of the above.

Moderator

Yes.

Ali Kani
VP and General Manager, Automotive, NVIDIA

We have customers who want someone to just build the full stack for them. In that case, of course, we can deliver that solution. We might have someone who wants, like, half and half: they want to really be building the software while also taking some software from us. We have some cases where customers say, "I'm going to build the whole stack. I just want to develop on your hardware." I think for each of those, there's risk, but it's the risk that the OEM wants to take because it's most aligned to their strategy. What I would say is that, you know, you started with, Mobileye is like this, and... The point is there are cases where some customers say: This is our strategy.

We do not want you to force us to take your perception software. We want to build it ourselves. Of course, for us, that's okay, because we do that all the time. For some partners, it doesn't work like that. Like, there's no such case as a Mobileye customer who isn't taking a good chunk of their stack. You would never do that. The risk is really more about what an OEM wants. We support them, but the value of our approach is that if a customer does have challenges, and they later come and say, "Can you help us?" Let's say they wanted to do everything, but they just aligned on the Hyperion architecture. We can actually just come in later and say, "Oh, yeah, like, we could...

Like, if you're having problems with parking, we can just give you the parking stack, 'cause we've developed it on the platform." There's an opportunity for us to help them if some of those risks come to fruition, even if it's not something they envisioned doing at the beginning. There are cases where partners come to us later and say, "Hey, it ended up harder than I thought. Can you help us?" Then, in our case, we're like: Yeah, we're happy to help. We add more of the software to the offering that we provide them.

Moderator

Okay. Do you see a trend in the market? Would you say OEMs are more and more moving towards a more integrated approach, where they will want NVIDIA to do more?

Ali Kani
VP and General Manager, Automotive, NVIDIA

I would say that the OEM strategy, long term, is to have this competency in-house. Long term, I don't mean, like, the next SOP. I mean in 15 years or, you know, in the future. I think all OEMs feel like the software that runs in their car is something they should be responsible for. It just so happens that there are a lot of investments OEMs are making right now, you know, the electrification of their fleet, software-defined cars, the cockpit software, the AV software. Not all of them can make all these investments at the same time, and not all of them actually have the know-how to hire a world-class team for each of these functions. We try to partner in those places where it helps them.

I think long term, everyone wants to be able to do it. What I would say is, for L4, L5, what we've seen is that it's far harder than anyone imagines, and I don't believe we're gonna see an OEM doing that themselves in the short term. I think they're gonna be partnering. Then, in the future, after it's in production and their team is ramped up, you could imagine that, on a 20-year horizon, an OEM might have the ability to do something, but that's after it's been proven, after all the training and all those things, so it'll take a long time.

Moderator

Okay, that's interesting. Even if you maybe see a trend of them coming back to you to ask for more of the stack, the long-term ambition is still to develop internal competencies. Is that right?

Ali Kani
VP and General Manager, Automotive, NVIDIA

Yeah, the ambition is absolutely right.

Moderator

Yeah.

Ali Kani
VP and General Manager, Automotive, NVIDIA

I think every OEM wants to be able to do it, and I'm just saying it's a lot harder than people think. When it happens is hard to predict, but I'm telling you, I think it's, like, 20 years out.

Moderator

Very good. I'm going to try and sneak in, like, two last questions in three minutes. That might be a bit challenging. The first one is, our work on Tesla shows that the cost base for Tesla, especially on a cost of goods sold, like on a bill-of-materials basis, is very, very competitive. The hardware they have in the car, because they only have Vision, because they have their own chip, is very competitive. Mobileye this morning really talked about their cost competitiveness as being a very strong underlying driver of their very strong market share today in what's deployed, and also, you know, as underlying their confidence in having a very strong market position going forward.

With, I would say, a total system that costs $3,000-$4,000, with the actual electronics and semiconductor parts costing between $1,000 and $2,000. What can you tell us about the cost positioning of NVIDIA? It's, of course, a difficult question because you have an open ecosystem, so it's really not apples to apples, but how do you think about your competitiveness on cost, on a like-for-like kind of feature basis?

Ali Kani
VP and General Manager, Automotive, NVIDIA

Again, there I would say two things. One is, our focus is on L2+, so L2+ and higher. We believe that every entry car will be L2+ over time. We're gonna see that shift just because there's a software and service opportunity for a fleet. You know, if you're building an L2+ car, our position from a cost perspective is very competitive, and you kind of see that, right? Like, who's really building the L2+ cars in the industry? There's Tesla, and then outside of Tesla, there's, like, a bunch of Chinese OEMs. There's, you know, Volvo, Mercedes, Jaguar Land Rover. Those are the guys who are really building L2+ software-defined vehicles.

As you see, our share is very successful in those markets, and so the price, performance, and positioning of NVIDIA is quite good in that segment. Then as far as the sensor set goes, our strategy is unique, right? We've shown a Hyperion architecture, but that architecture is scalable. You know, that's the L3 configuration. Some people say they just wanna build a single-Orin computer with just cameras and radar, and they can do that. Then the cost of the sensors is super low, the cost of the computer is low, and so people are building systems in the L2+ market at a really attractive price. I think our positioning is good, and it's scalable. Does that make sense? It's scalable.

It's good for L2+, it's good for L3, it's good for L4, and it scales up. You certainly don't need lidar on L2+, okay? You just need cameras and, you know, a couple of radars, and our architecture supports that. The other thing is, each customer on our platform can choose the sensor set themselves, because we don't have to deliver the software for them. There are some customers who say, "Hey, I want an even lower-cost sensor set," and they're able to do that. That's why our architecture is quite successful. We never force a sensor set on you. You could actually come up with your own sensor set, and use that as your...

Moderator

Okay, that's very clear. My last question would be, we've talked mostly about the passenger car opportunity. On the robotaxi, the driverless experience, I have just one question for you. Based on all the projects and teams with which you interact at different levels, what's your view on the path to adoption? Where do you see the first significant rollout of these vehicles, and on what timeline?

Ali Kani
VP and General Manager, Automotive, NVIDIA

First I'll say that, you know, I think we'll find a couple of cities that will green-light these trials, and we just need to see how successful they are. We're already seeing it in some cities where, you know, it could be at night, robotaxis are allowed to operate, just because it turns out they're proving that they're safer than drivers in the real world. I think we'll see rollout in commercial goods delivery, because then there's no one in the car, and you're just transporting goods from one place to another. I think we'll see adoption start there. You know, I think the whole industry needs to be super cautious, needs to be super responsible.

Don't rush anything, because, you know, even one accident becomes such a big deal for the industry. I actually believe it will take time. You'll get some early successes, and then we just need to take our time and make sure we're doing it right. I don't expect, like, passenger cars in this decade. That's a much harder problem. It's unrestricted, it's undefined. It could be anywhere. I think that will take longer. It's not just about the technology being better than a human. You need to be much better than a human. Because, you know, if a human has an accident, we're sympathetic to it, but if an AI has an accident, I think we're all a little bit less sympathetic to it.

While I do think there'll be trials starting, you know, now or in the next year or two in certain cities, I think the true L4 consumer car is next decade, and could even take longer. It just depends on how good it gets. We really can't make any mistakes. It's super challenging. This, by the way, is why you need not just validation but simulation. That's where we help partners, and I think we have to come up with even better ways to simulate all the scenarios that could possibly happen to make sure it's safe.

Moderator

Ali, I would like to spend, like, two more hours chatting with you about this fantastic opportunity in the industry. Unfortunately, we're already over time. Thank you very much for making the time, for participating in the conference, and for bringing your perspective and the perspective of NVIDIA. I hope we stay in touch.

Ali Kani
VP and General Manager, Automotive, NVIDIA

Thank you.

Moderator

Thank you.
