I'm doing a fireside chat with Colette. Colette, how was your Christmas?
Mine was good. And yours? Did you do anything much?
Yeah, I prepared for this.
We got a big room.
Oh, nice to see all of you.
They all seem to be.
Everyone, welcome.
I guess they want to talk about a few things. Okay, I'm going to start with my great little opening statement. As a reminder, this presentation contains forward-looking statements, and investors are advised to read our reports filed with the SEC for information related to risks and uncertainties facing our business. Welcome, everybody. We're excited that you all came out, and we hope many of you got the opportunity to attend the keynote. It's been a while since Jensen's come out and done a keynote, I think more than four or five years, and it was just exceptional to have so many of you here. We're using this time for a Q&A. I have my team from IR with me, Stewart and Andrew, who will help field your questions.
So we'll raise hands, and we'll see if we can address the room here. We've got about an hour or so to go through, and we're not going to start with any opening statements; we're just going to go immediately to Q&A. How does that sound? Okay, we'll go from there.
Who's in charge?
I've got Andrew and Stewart.
Stewart, welcome to see us. They're up here. Stewart, you're hilarious. I believe the system's already broken down. One shot.
I think we're good.
We're good?
We're good.
Let me tell you about this. Go ahead. Hurry.
We're good?
Yep.
Awesome. Thanks. Thanks for doing this, guys. Stacy Rasgon, Bernstein Research. Maybe one question, one tiny follow-up. Just on the first question, the near-term guidance, I think, had revenues up a couple of billion dollars sequentially. It sounded like from your commentary on Blackwell, though, that the Blackwell sequential should be more than that. You talked about it being better than you thought before, which is better than more than several billion, whatever that means. But it sounds like it should be more. So I'm just curious, what is going on with the rest of the business? Is there something around near-term Hopper demand or crossover that we should be aware of? Is it just conservatism? How should we be thinking about the different components, especially as now Blackwell, as you mentioned yesterday, I think, is now in full production and looks like it's ramping pretty strongly?
I'm not even too sure how to answer that question. What is the question?
The question is, if total revenue guidance for the quarter - and I apologize to just dive right into it like this, but it's been something I've been wondering about. If total guidance is up a couple of billion and it sounds like Blackwell should be up more than that, Colette, maybe what's going on in the rest of the business to get there?
Stacy, we're not changing our guidance.
I'm not asking. I'm asking you to describe your guidance for me. Does that make sense?
Yeah, but I thought we did a pretty good job describing it at the last quarter.
Could you describe it again? I didn't exactly get a good answer on that quarter.
We're going to ship Blackwells and Hoppers this quarter.
Okay.
We're going to ship Blackwells and Hoppers this quarter.
Okay.
Are they both growing this quarter?
The total is growing. Blackwell, as we provided in guidance, was several billion dollars, and we probably will do a little bit more than that. Blackwell's doing quite well. I know you all are watching many of our data center customers standing it up, getting these very complex systems up and running. But everything we had planned is exactly what we're working on right now.
I guess I'm also just within that interest in what's going on with Hopper, because you said you were supply constrained on Hopper. Well, is that constraining Hopper growth into the quarter? Or I'm just trying to figure out what those pieces are doing.
With Hopper and Blackwell, in many different configurations, they may share some components. And so nothing we can build is unlimited. But all is going fine. Hopper and Blackwell both will be shipping well this quarter. Absolutely.
Okay.
Just one more really quick, unrelated. Why are you using MediaTek for the SoC on the Project Digits? Why wouldn't you just be able to do that yourself?
Because they do such a good job building low-power SoCs. And we've got no trouble partnering with other people to build something. If somebody can partner with us on something, that frees us to go do something else.
Okay.
Yeah. And they did a wonderful job. And we shared our architecture with them. And they now have NVLink in their company. We designed a coherent CPU with them, architected a high-performance CPU with them. And now they could provide that to us, and they could keep that for themselves and serve the market. And so it was a great win-win. And saved a lot of engineering, and they did a great job.
Thank you. Aaron Rakers with Wells Fargo. Kind of sticking on the Project DIGITS announcement yesterday. Pretty interesting given the expectations were out there for just a CPU strategy. So I guess my question, and I have two quick questions, is one, where do you see the PC opportunity presenting itself? Is this a PC move by NVIDIA? Is it that form factor that we think about? Or is this an iterative process of moving deeper down the path of the PC market? And then I have another quick follow-up.
There's a gaping hole. I'm getting incredible emails from developers about DIGITS. There's a gaping hole for data scientists and ML researchers who are actively working. You're actively building something, and maybe you don't need a giant cluster. You're just developing the early versions of the model, and you're iterating constantly. You could do it in the cloud, but it just costs a lot more money. That's the reason why personal computers exist: so that you could have an OpEx versus CapEx trade-off. And so a lot of developers are trying to get by using Macs or PCs. Now they have this incredible machine sitting next to them. You could connect it through USB-C, or you could connect it using LAN, or mostly you'll just connect it using Wi-Fi. And now it's sitting next to you. It's your personal cloud.
It runs the full stack of what you run on DGX on this little device. PyTorch runs exactly the same way. NeMo runs exactly the same way. Everything runs exactly the same way. When you're done, you could even host it, because it's essentially your own private cloud. You could just host it. If you would like to have your own AI assistant sitting on this device, you can as well. This is really designed not for everyone, but for data scientists, machine learning researchers, students, engineers, developers. That's who it's designed for. This is just going to be completely groundbreaking for them.
As a quick follow-up, there's a lot of debate questions out there around GPU general purpose relative to where the market might be evolving.
I didn't answer your other question. You didn't harass me about it. I appreciate that. Your other question is, what else can I do with it? I'm going to have to wait to tell you that.
That's what I figured the answer was.
Yeah. Yeah, and obviously, we have plans.
I think you knew where I was going with that second question. Where do you see the lines between GPU and custom ASIC evolving? Has that changed in your mind at all? Just curious of kind of given some of the recent reports from some others, how that's evolved in your mind. Thank you.
Yeah. A couple of things. First thing is take a step back and even just ask yourself, why is that question important today? And the reason why it is, is because it is now universally clear that accelerated computing is the path forward. I think that's one grand observation, that general purpose computing is done. Hand coding instructions on CPUs as a way of developing all software is over. And that the future is likely machine learning, accelerated GPUs, neural networks, and so on and so forth. And so that's the first observation, that we are seeing across the board recognition by literally every computing company on the planet that machine learning and accelerated computing is a path forward. And so you have to take a step back and just say, okay, what does that imply?
That implies that every data center in the future is going to be built differently. Literally. Going forward, if you're not putting accelerated computing and machine learning capable systems in your data center, you're doing something wrong. Nobody should be building a data center filled with a whole bunch of general purpose computers in the future. Makes no sense. Okay? So that's the first observation. I think we've been saying that for some time, and now, without saying it loudly, it is broadly accepted. Okay? Universally accepted. We obviously agree with that philosophy and that vision of the future. The second thing is, we're doing something very different. Rather than a custom ASIC for one company, we're building a computing platform for every developer, for every company. And so we're not building a custom ASIC.
We're building a computing platform. And the benefit of a computing platform is that you never know what's the next amazing application you're going to attract. So let me give you an example. All of a sudden, we attracted world foundation models. All of a sudden, we attracted robotics. All of a sudden, we attracted the ability to do computational lithography for TSMC. All of a sudden, we attracted all these different things. A computing platform has the benefit that computers have had, the thing that we all valued over the last 60 years. We don't want to use the same architecture anymore because it's general purpose, but we love the concept of a programmable architecture. In the last two or three years, what has happened?
Everything from incredible new ways of doing attention (attention is the mechanism of transformers, an incredibly computationally intensive thing), the invention of large context windows, speculative decoding; a whole bunch of new technologies are being invented. As a result, we now have multiple ways of doing scaling. So much of it is invented on NVIDIA GPUs, and the reason for that is because we're easy to program. If you want to change the algorithm or the architecture, so be it. State space decoders, fantastic. Hybrid versions, fantastic. And so all kinds of different architectural innovations are happening as a result of the fact that we are a computing platform. You could use us for data processing. Notice all of a sudden the amount of video that you have to process because of these world models and multimodal models.
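Editor's note: Jensen's remark that attention is "incredibly computationally intensive" comes from its quadratic cost in sequence length. A minimal NumPy sketch (purely illustrative, not NVIDIA code) makes the quadratic term visible:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal single-head attention; the (n, n) score matrix is the quadratic cost."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (n, n): grows with seq length squared
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V

n, d = 1024, 64                                      # sequence length, head dimension
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (1024, 64); the intermediate score matrix was 1024 x 1024
```

Doubling the context window quadruples the score-matrix work, which is why long context windows and alternatives like state space models are such active research areas.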
We trained Cosmos, the foundation model we're just about to release, on 20 million hours of video, 9 trillion tokens. It would have taken several years on CPUs to process that data. Instead, we accelerated the entire pipeline, and you do it in hours. That's the fundamental difference of having a programmable architecture like NVIDIA's CUDA and a rich ecosystem: you could go into literally every cloud, you could go on-prem, on the edge, you could go into a telco base station, like the work we're doing with SoftBank and with Verizon. We can put this architecture literally everywhere. It's hard to do that if it's a custom ASIC. Now, the world of computing is large. Today, we're looking at CSPs, and that's terrific.
But don't forget, the world of computing is large. There are Sovereign AI systems to be built. AI is going to be part of national infrastructure the way telco is part of national infrastructure. Every country has its own telcos. They all have their own banks. They have their own, et cetera, et cetera. And so for every country to be able to build its own national infrastructure, you can't do that with a custom ASIC. And so our opportunity in enterprise, private clouds, in regions, in edge, in robotics, in self-driving cars, in all of the other accelerated computing and simulation, all the computation you have to do, our reach is just a ton broader. If you're going to cut that little tiny sliver of an ASIC, it's going to be hard to do.
And then the third point that I'm going to make is this is hard. There's some evidence that it's hard. Look at the number of startups that have been started and not successful. It's incredibly hard. And the reason for that is because it's not a chip problem. This is definitely a systems problem, and this is definitely a full stack problem. The software stack necessary to make one of these systems useful is not about 10 engineers. We're talking thousands of engineers, and it's going to take years to make it great. And how many stacks can the world make? That's the next question. How many stacks can the world make? The number of stacks that the world can currently sustain is about two and a half, three, barely. And so that kind of puts it in perspective.
Now, when the world then goes off and builds 20 stacks, is that good for us or less good for us? Surprisingly, the answer is it's better for us. I'd rather compete against 20 competitors in a world where there's limited resources than one. Right? It's very logical. So I would look at it in those three things, and we'll find out in a few years. It takes a couple of years to build something, and if people are doing it for the first time, it takes two or three years, and it's hard to get the first one right, so let's say it takes a couple of iterations. We'll find out in a few years.
Joe Moore from Morgan Stanley. I wonder, you gave the $5 billion number for the automotive vertical in the keynote. I assume that's still mostly data centers this year. But can you just talk about that investment and what does it tell us? It's taken longer to get to autonomy than certainly I would have thought. Can you see the investment now that there's an inflection point coming?
Yeah, thanks, Joe. Yesterday, I said that we were in three robotics businesses. Agentic AI, and agentic AI is the perfect example of test-time scaling. This is that phrase, training time versus test time. Test-time scaling. The reason why agentic AI is the perfect example of test-time scaling is because it has to reason: break a problem down step by step, go retrieve information, read that information, understand it, put it into context, reason about it, do something, and eventually generate some answers. Agentic AI could reason about things, work on things for thousands of inferences. What people used to think inference was, and it was in the beginning, was computer vision inference. People used to tell me, and you know I've said this before, people used to say, "Jensen, inference is easier. Training is hard.
That's why that's NVIDIA's, but inference is easier." There is nothing easy about thinking. Anybody who thinks inference is easy is just not thinking, and that's the absence of intelligence. When you reason about it, just think: what is inference? What is inference compared to foundation learning? What is reasoning? What's thinking? What's deep thinking? You do mental simulations, right? You game it out yourself, isn't that right? You do purposeful practice, and so on. Those are all part of thinking. And so agentic AI is test-time scaling, for sure. Fine-tuning, not much necessary, because you already started with pre-trained models, but test-time scaling is compute intensive. Okay? That's one. The second is robotics and AV. That is going to be data processing and training intensive.
Look at the size of the AI superclusters that Elon has with NVIDIA's Amperes and Hoppers. It's gigantic. Every car company in the world will have two factories: a factory for building cars and a factory for updating their AIs. Makes sense? Because every single car company will have to be autonomous, or you're not going to be a car company. And so we know now everything that moves will be autonomous. That's a foregone conclusion. We know now that Waymo has turned a corner, and it's accelerating. We know that FSD version 13 is incredible; I thought version 12.5 was incredible, and version 13 is incredible. We know that every single EV company in China has AV capability, every single one. If it's EV, it's got AV. That's going to be the standard.
And so that's going to put enormous pressure on every single car company on the planet. If you don't have two factories, an AI factory along with your car factory, you're not going to be a car company. And so the amount of data processing is intense. Now we're going to have robotics. There can only be so many cars, because there are only so many humans to move around. But there are no limits on robots. It could very well be the largest computer industry ever. The reason for that is we don't need more cell phones than people, but robots, you could build as many as you like. And there's a very serious population and workforce situation around the world, as you guys know. The workforce population is declining fairly seriously in some manufacturing countries.
It's a strategic imperative for some countries to make sure that robotics is stood up and productive in the next several years. Their population is not going to grow, not for the foreseeable future. Now you have these three robot platforms that we serve. The first is NVIDIA AI Enterprise, which is our agentic AI; that's test-time scaling. The other two are across the board: three computers, lots of data processing, lots of training, lots of simulation. Whenever I say Omniverse simulation, think RTX Blackwell, racks and racks of RTX Blackwells. Those are used for simulation and generative AI and Cosmos and things like that. That's the second computer, and the third computer is the computer in the car. We're going to do about $5 billion this year in automotive. The car computer itself is very sizable.
It's not insignificant, obviously, because we're doing very well in EVs and AVs and robotaxis, and all of those things are scaling now. And so, sorry if I didn't answer your question after all of that, but we have three robotics businesses. The reason the AV business is going to continue to grow so significantly is the amount of video data we have to train with: the video data to train the cars, the video data to train the autonomous robots.
Hey, thanks. This is Ben Reitzes from Melius Research. First, I just want to say thanks for having us all here. Really grateful for how you treat all of us. Look how many people are here, obviously.
Thank you. Thank you.
It's very unique and very appreciated. You made this job fun again. So, now that I've buttered you up, I wanted to ask you about something. A lot of, and Stacy...
Everything would have changed for me last night if Mjolnir actually came to my hand. For a second there, they said, "Jensen, you're Captain America." And I was like, "If I'm Captain America, then I must be worthy. If I'm worthy, Mjolnir must come." And you guys didn't get that? As I was walking on stage, somebody goes, "You're going to be Captain America." And then while I was on stage, I was like, "If I'm Captain America, then I'm worthy." You guys know? Nobody got that. Wow, you guys worked so hard.
You're the most worthy person in the world to answer this question.
Cut it out.
My question is about the long term. And I asked you this a little bit on the call that you had for the quarter, and you said you guide one quarter at a time. So I don't want to end up in the same body bag as Stacy here, but the.
Wow.
Talking about the long term.
Love you, Stacy.
The main question.
I didn't know how to answer it.
The main question we get is what happens after '25? And you created this market. You've talked about $1 trillion in infrastructure that needs to be upgraded, a few other trillions. But Jensen, you created this. Nobody knows. You have people saying, "Oh, it's going to peter out next year because of this or that, scaling laws." We're all dying to know. I don't think you're going to tell me, "Ben, it's a 37% CAGR through 2030." But how should we think about it? What inning are we in? How do you want us to think about the long-term growth potential? Because no one knows better than you. You're the most worthy.
Thank you. I appreciate that. Yeah. As you know, usually when I answer a question for you guys, I reason through it. And Colette knows, if you were all employees, I would answer it exactly the same way, and this is how I reason through it. Okay? So, number one, the first reason why the growth is likely sustained long term is because general purpose computing is over; machine learning is the future. Do you guys agree with that? That first principle is vital: that general purpose computing is over, that the path forward is accelerated computing. If there's no belief in that first thing I said, then we're in trouble. I believe that accelerated computing is the path forward, and the reason for that is because machine learning is the path forward.
Hand coding is likely not the right answer going forward. Okay? And you're not going to hand code anyway, because guess what? You're going to have an AI assistant help you code, and that AI assistant is, guess what, machine learning based. And so the first premise of what I said is very important and very likely true, and I think there's abundant evidence that it is. Therefore, the trailing $1 trillion worth of general purpose computers will be modernized over the course of, call it, a few years; pick your favorite number of years, say four. It's going to get replaced with modern computers.
Another way to say it is, if you were a computing company and you're building a data center and that data center is full of CPUs, general purpose computers, you really ought to self-evaluate whether you understand computing at all and whether your company is moving forward. And as you know, for all the cloud service providers, whatever their CapEx is, it's going to largely go into accelerated computing going forward. Why build more general purpose data centers when the gross margin of renting a CPU is basically below cost? You're renting it basically for free so that you could get their storage business. Why do that? Why not invest in a modern computer where it's margin accretive and actually generates revenues? Okay? So that's number one.
Number two, the second thing we have to believe is that AI is a new layer above the computing stack, a new layer above everything that we've done. The reason for that is this. The last layer is called software, and software is tools, tools that we build and that humans use. Let's use an example. My favorite tool, Outlook, I use it. Excel, I use it. PowerPoint, I use it, and VS, I use it. Does that make sense? These tools, Cadence, Synopsys, we use them. They're tools. What AI is, is not a tool. AI is an agent, a robot that sits on top of tools and uses tools. That's what an AI is, right? What is a self-driving car? A self-driving car is a digital chauffeur. It's not an FM radio.
It's not an operating system. It's a digital chauffeur. And so it's an agent that sits on top of the current stack. That layer has never existed before. Do you guys agree with that? That's why AI is a growth industry. If we're successful, and there's every evidence we're successful, AI will be a growth industry. And then here comes the next part. Whereas software runs on general purpose computers, AI is fed off of an AI factory, the way we are fed by our cafeteria and paid by our paycheck. This is that factory I was talking about to Joe earlier, right? We have two factories, a car factory and an AI factory. Every robot will be fueled by factories, and these factories are AI factories. And they're going to generate these things called tokens.
And so we're going to have a whole bunch of factories. Those factories never existed before. Do you guys understand what I'm saying? They're not a replacement for data centers. They're a new thing. A new thing. And so my belief is that this AI industry is a new industry. Part of that industry will be AIs that are services, and part of that industry will be the factories that power those services. Whereas the software industry was CapEx light, AI is going to be a CapEx heavy industry, because AI has a factory. Makes sense? So I just reasoned about it layer by layer for you. Okay? And now the question is about timing. We're working as fast as we can.
AI started in the cloud, and it's got wonderful cloud services and things like that. But enterprises need AI as well. And the mental model for AI in enterprise is really assistants, AI agents. And those are really a digital workforce. Okay? And so that's where NeMo and NIMs and Blueprints all come in, NVIDIA's AI Enterprise. We work with ServiceNow and SAP and Cadence and Synopsys, and companies who would like to create these AI agents that use their tools that they can rent to their customers. Okay? These AI agents, they can rent to their customers. And so that, I think, is our way of addressing enterprise. There's a billion knowledge workers. Everybody's going to have AI assistants. There are 30 million software engineers.
Starting next year, if a software engineer in your company is not assisted with an AI, you are already losing, fast. Every software engineer at NVIDIA has to use AI assistants next year. That's just a mandate. We're using a whole bunch of different ones right now, as Colette knows, but I would like us to gravitate towards a couple. There's a reason why there's more than one, but we'll gravitate towards a couple. Every software engineer must use an AI. It's mandatory, because otherwise they're not coding fast enough. Now, that's 30 million of the most expensive professionals in the world. Augmenting that, accelerating that, is good value. That's just one. Digital marketing. If your marketing organization is not using AI next year, it's because you're not doing marketing.
You're going to be in trouble. And so it goes one sector after another, one function after another in your company: enterprise. And then we already talked about robotics. The self-driving car, we've been working on it for 10 years, and it's now kicking into gear. And then robotics. All of you see the trends, the enabling technologies, the critical technologies for humanoid robotics. Let me tell you why humanoid robotics is so important. It is the only robot you could deploy in a brownfield. We don't have to change one thing. We don't have to change a lick of the planet, and we can deploy robots. The reason why it was so easy to deploy smartphones is because we all have pockets. Okay? The problem with robots with wheels and tracks and things like that is you have to change the environment; you can only deploy them greenfield.
Pick and place arms, you've got to go build them into something. But humanoid robots, you just buy them and deploy them. That's why you can scale them. Because you could deploy them easily, you could scale them; because you could scale them, the economics will work for you. Economy of scale will work for you. And because you could scale it in volume, the technology flywheel will work for you, and as a result, the innovation will accelerate. Less than 10 years from now, I'm certain of it, humanoid robots will surprise everybody with how incredibly good they are. And so now I came up through the platform stack for you, and then I went through time for you. Okay? That's kind of how I reasoned about it. And as I reasoned through that, I worked backwards.
And I said, "Okay, what does NVIDIA have to do to realize that future for the world? What do I have to do to make that possible for the world to do?" And that's the reason why NVIDIA AI Enterprise, the agent toolkits are made the way they are. And that's the reason why our partners are who they are, because we're not trying to disrupt the enterprise IT ecosystem. We're trying to enable them to be able to serve the world's enterprises with agentic AI. We're not trying to disrupt car companies. We're trying to enable them.
And so we create all the pieces, and then we can make them available to Toyota, make them available to Waymo the way they want to use them, make them available to Tesla the way Elon would like to use them, make them available to BYD and Xiaomi and NIO, XPeng and Li Auto and Mercedes and JLR. Everybody wants to use the stack differently. And so we create it in such a way that we have deep domain expertise, but they pick the parts that make sense for them because of their corporate strategy or corporate identity, skills, or whatever it is. But that's our basic strategy. Okay?
Thank you, Jensen. Colette. Vivek Arya from Bank of America. I appreciate you hosting the event. Jensen, you mentioned physical AI and humanoid robotics several times in the keynote yesterday and today. What is NVIDIA's strategy and go-to-market in this specific business? If this is really a multi-trillion-dollar business, when will we start to hear about some near- to medium-term milestones? And is your approach essentially at the processor level plus the software and enablement level, or do you see NVIDIA actually making the actual robot? So just what is the strategy, and when will investors get to see some of the results?
Okay. I appreciate that. Thanks, Vivek. Our robotics strategy for automotive, humanoid robots, and even robotic factories is exactly the same. And the reason for that is because it is technologically exactly the same problem. And I've generalized it into an architecture that, through some adaptation, can address all three. Okay? So, three computers. One computer is the training computer: train the AI. Robotics is going to be a massive data problem. Tons and tons of videos, human demonstrations, and synthetic data generation, as we showed yesterday; a huge data problem. We're going to take all that data, and we're going to train AI models for them. Okay? So the first business opportunity for us in robotics is training. But I have to enable everybody to be able to train at scale, and everybody lacks data. That's the reason why NVIDIA built Omniverse and Cosmos.
Omniverse is physics-grounded. You guys know large language models: when they first came out, the criticism was that they hallucinated. Remember that? Notice that the criticism is starting to decline. The reason for that is retrieval-augmented generation. Isn't that right? Grounding it in facts. Okay? If we had a generative model that can generate physics, which is essentially what Cosmos is doing, a physics generator, but it hallucinates, then I can ground it with physics, because Omniverse is physically grounded. Does that make sense? Omniverse is calculated. It's based on principled solvers. Its physics is known and believed. Okay? Then I connect that to a generative model. Now I can create Dr. Strange, right? I can create all these futures, and all these futures could then be used to train my model.
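Editor's note: the retrieval-augmented generation idea Jensen references, grounding a generative model in retrieved facts before it answers, can be sketched in a few lines. Everything below (the toy document list, the bag-of-words cosine scorer) is a hypothetical illustration, not how any production RAG system is built:

```python
from collections import Counter
import math

# Hypothetical mini knowledge base; a real system would index curated, trusted documents.
documents = [
    "Omniverse simulations are grounded in principled physics solvers.",
    "Cosmos is a generative world model that can produce physical scenes.",
    "Retrieval-augmented generation grounds model outputs in retrieved facts.",
]

def vectorize(text):
    """Toy bag-of-words vector: token -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

# Ground the generation step by stuffing the retrieved fact into the prompt.
context = retrieve("how does retrieval-augmented generation reduce hallucination", documents)
prompt = f"Answer using only this context: {context[0]}"
print(prompt)
```

The grounding step is the point: instead of letting the generator free-associate, you constrain it to a retrieved, trusted context, which is the same move Jensen describes when Omniverse's principled physics grounds Cosmos's generated futures.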
And once the model learns it and actually lives all these futures, then it has a generalized understanding of the world. Okay? And so for training, I need to create a bunch of data for it, and that's why Omniverse and Cosmos are available. Now we've deployed a system for generative, physics-grounded world models, so that every single robotics company, every single car company, now has a torrential amount of data to train on DGX. Makes sense? That's how I reasoned through it. Second, we also want to enable the industry, period. And one of the things that we have to do, that the industry needs, is simulation of the alternative worlds. And who's a better simulation technology company than NVIDIA? This is something that we must do. If we don't do it, who will?
This is one of those mandates of our company: we have to go select work that is impactful, because look, $50 trillion of industries are affected. Only we can do it, we have the expertise to do it, and we have the will and might to do it. The second computer is Omniverse. These two computers are the first to go. We see the growth. Notice we were talking about autonomous vehicles earlier, and data centers was the bigger part. That's data center business. Okay? When eventually they scale out, then we have the robotics processors, Orin and Thor, coming, and the architecture is exactly the same across all of them. Everything is software harmonious. You can run anything anywhere, anyhow. Does that make sense? Robotics: three computers. Robotics: big training, big data center opportunity.
All right. Thanks. Mark Lipacis, Evercore ISI, thanks a lot for hosting the meeting. Really appreciate it. Jensen, you guys have made some announcements on quantum computing. Can you share with us your view on how this technology develops over time, what your strategy is, and longer term, pick the timeframe, five, 10, 15 years? What is the difference between what quantum computing will be doing versus the accelerated computing platforms that you have? Thank you.
Sure. Quantum computing can't solve every problem. It's good at small-data, big-computation, big combinatorial computing problems. It's not good at large data problems. It's good at small data problems. And the reason for that is because the way you communicate with a quantum computer is microwaves, and terabytes of data is not a thing for them. And so, just working backwards, there are some very, very interesting problems that you could use quantum computers for: truly random number generation, cryptography. These are problems that are small data, big compute. And working backwards, we're a computing company. We're an accelerated computing company. And as you know, we work with CPUs. We obviously build Grace. We're not offended by anything around us. And we just want to build computers that can solve problems that normal computers can't.
And so in the case of quantum computing, it turns out that you need a classical computer to do error correction with the quantum computer. And that classical computer had better be the fastest computer that humanity can build. And that happens to be us. And so we are the perfect company to be the classical part of classical-quantum. And just about every quantum computing company in the world is working with us now. And they're working with us in two ways. One, this quantum-classical, we call it CUDA-Q. So we're extending CUDA to quantum. And they use us for simulating the algorithms, simulating the architecture, creating the architecture itself, and developing algorithms that we can use someday. And when is that someday?
In terms of the number of qubits, we're probably on the order of five or six orders of magnitude away. So if you said 15 years for very useful quantum computers, that would probably be on the early side. If you said 30, that's probably on the late side. But if you picked 20, I think a whole bunch of us would believe it. What we're interested in is helping the industry get there as fast as possible and creating the computer of the future, and we'll be a very significant part of it.
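The "five or six orders of magnitude" remark can be made concrete with back-of-envelope arithmetic. The starting qubit count below is our assumption for illustration, not a figure from the talk.

```python
# Back-of-envelope version of the qubit gap described above.
# Assumption (ours, for illustration): today's machines are on the
# order of 10^3 physical qubits.
todays_qubits = 1_000
orders_of_magnitude_away = 5  # "five or six orders of magnitude"

useful_scale = todays_qubits * 10 ** orders_of_magnitude_away
print(f"~{useful_scale:.0e} qubits for very useful machines")
```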
Thanks. Jensen, I'm going to add my thanks for hosting this wonderful event between last night and today. I'd like to ask about the client side again. We saw this very interesting announcement with Digits, I think it's called, in the GB10. But you also made a reference to support for CUDA in WSL2, I think it's called. And I'm wondering if you can explain that a little bit to us because I'm not sure that so many of us are familiar with that.
Okay. Yeah.
So relate the two to each other. My suspicion is that these two are that there's either a current relation or maybe a future one that we should anticipate coming down the road on the client side. Thank you.
Okay. I guess they're related in the sense that they're both related to AI, but here's the way to think about that. This is Windows Subsystem for Linux, WSL. WSL is how a cloud-native developer can use Windows to develop for the cloud, and so it's basically a hypervisor of Windows: two operating systems in one. Your Windows computer has an instance inside, and that instance is Linux. That Linux instance is compatible with Docker containers, and it's compatible with CUDA. As a result, WSL2 is a great development platform for cloud-native AI. And that system is used by developers, and so from now on, we're going to take NVIDIA AI, and we're going to target that system and make sure that it's world-class. And so we announced that, and the PC OEMs are super excited.
And the reason for that is because although it started out as a developers' platform, as you know, AI is ready to be an end-user platform or enthusiast or other application developers. And so the question is, how do you make Windows an AI platform when most of the AI is done on Linux and in the cloud? Well, the answer is WSL2. It's sitting right in front of us. And so now we're going to make that a mainstream product, and we'll support it with all the things that we do to support professional and high-quality software. And the PC OEMs will make it available to end users. Now, what can you do? The really great thing is let's say you're a software developer, Adobe or Autodesk or whoever it is, and you created an AI, and it works in the cloud.
You could take that AI now and put it into WSL2 without changing anything. You just put it into WSL2, and over a gRPC or REST API, just the way that cloud services talk to each other, you use exactly the same protocols. And now this application can talk to the AI that's sitting in the PC. And so this application can now talk to an AI in the cloud or on the PC perfectly, without any change. That's what we announced. And I could have done a better job explaining it, but nonetheless, it's, I think, quite exciting. We finally turned a PC into an AI PC, because the momentum and the development and the R&D of AI in the cloud is just so significant. There's no way you're going to add another platform called Windows to it. Linux is just too powerful at this point.
I think the answer is very clearly WSL2, which is Linux on Windows. That's one. Digits works exactly like a private cloud. Whatever runs in the cloud runs on Digits. The amazing thing is you take the cloud AI, you put it on Digits, you point your application at Digits, and this application that used to be backed by an API in the cloud is now backed by Digits, completely invisibly. Digits is like your own personal cloud. For an application developer, an enthusiast, or somebody who's developing, now look at this. You can run your application in all these different places: develop here, run it there; develop up there, run it down here, however you like to do it. One common, cohesive platform, which is the way computing should be, from personal to private to cloud.
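The "point your application at Digits" idea boils down to swapping a base URL while the protocol stays identical. A minimal sketch, assuming an OpenAI-style REST endpoint; the hostnames, port, and path below are hypothetical, not a documented Digits API.

```python
# The portability described above: the application speaks the same REST
# protocol whether the AI runs in the cloud, in WSL2, or on Digits;
# only the base URL changes. All URLs and paths here are hypothetical.

def chat_request(base_url: str, prompt: str) -> dict:
    """Build an identical request payload regardless of where the AI runs."""
    return {
        "url": f"{base_url}/v1/chat/completions",
        "json": {"messages": [{"role": "user", "content": prompt}]},
    }

cloud = chat_request("https://api.example-cloud.com", "hello")
local = chat_request("http://digits.local:8000", "hello")

# Identical payload, different backend: the application does not change.
assert cloud["json"] == local["json"]
print(cloud["url"])
print(local["url"])
```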
I think we have a question over here, Jensen, but I think we're almost at time. So I think this will have to be the last question.
Unless you guys are going somewhere.
I'm not going anywhere.
Yeah.
Hang on.
Yeah.
We can keep going then.
Yeah, yeah. It's my fault. I over-explain because I want to make sure that you understand something. That's kind of. Don't get me started.
It's Toshiya Hari from Goldman. Thank you so much for hosting this event. Jensen, since the emergence of ChatGPT, if consensus is correct, I think you will have added around $200 billion in data center revenue between 2022 and 2025. You mentioned last night the ChatGPT moment for, I forget if you said physical AI or robotics is just around the corner. Is there a way to contextualize sort of the magnitude of revenue or your business from physical AI over the next three years vis-à-vis what we've seen over the past couple of years? And then part B, kind of unrelated. I was hoping you could give us an update on how you're thinking about sovereign AI. You've been really busy traveling the world over the past couple of years. It's really difficult to forecast that business from the outside. You know what the country leaders are thinking.
How should we be thinking about that business for you over the next couple of years? If there's a methodical way of thinking about that, that would be helpful. Thank you.
Yeah. Okay. While I answer the first one, I'll think about the second one. One way to think about the first one is this: in order to support a few million cars on the road, you need a few billion dollars of data center to support those few million cars. And the reason for that is because those cars are going to collect more data. And when they collect more data, you have more data to process. You have more bugs to fix, more corner conditions to go deal with. And so that loop, that cycle, is understandable. The more cars you have on the road, the larger the data centers you're going to need. And eventually, I think you're going to have a billion cars on the road, and they're all going to be autonomous.
Or another way of saying it, there's going to be a trillion miles driven, all autonomous. And that has to be backed up by a whole bunch of data centers. Okay? So, call it the simple math: a million cars on the road is $1 billion worth of data center. And a million cars is a lot of cars, and it's supporting a lot of AV service. Do you see what I'm saying? The app services, the service itself, pay a lot of money. And so they're more than delighted to have a data center to support that. And now the question is, how many AV cars are on the road over time?
Let me see, is my math right? Yeah, I was off by probably a factor of two or three, something like that. It's probably $2-3 billion of data center for every million cars, say. Then we just have to scale it out. Okay? So that's one easy way of thinking about it. Then the second answer: I think it's related to GDP, to their industrial GDP. Information, knowledge, information in the industrial GDP. I don't think agriculture is as much. Obviously, tourism is not going to be as much.
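The back-of-envelope math here can be written out. The numbers below just reproduce the figures quoted in this answer (a billion autonomous cars, $2-3 billion of data center per million cars); the midpoint choice is ours.

```python
# The rough AV math from the answer above, written out:
# roughly $2-3B of data center per million autonomous cars,
# scaled to "a billion cars on the road, all autonomous".
dc_dollars_per_million_cars = 2.5e9   # midpoint of the "$2-3 billion" figure
cars_on_road = 1e9                    # a billion cars

total_dc = dc_dollars_per_million_cars * (cars_on_road / 1e6)
print(f"${total_dc / 1e12:.1f} trillion of data center")
```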
So it depends on the GDP of your country and the shape of it: manufacturing countries, information and knowledge workers, the financial services industry. The financial services industry, as you guys know, quantitative trading is moving from human-engineered quants to machine learning robotic trading systems. And these robotic trading systems are giant transformers, and they're just cranking on new data, making recommendations with a human in the loop and such. And so for the information-rich economies, it's just related to the size of the GDP. I think that's probably it. Near term, it's kind of hard to gauge. Every country will put up infrastructure. You might have seen the news in Europe. They literally adopted our phraseology, and this entire initiative across seven supercomputers is called AI factories. And so this is the European AI Factories initiative. And so you see Europe doing that.
Japan has already done that. They allocated a couple of billion dollars or so of government subsidies that will be also matched with the private enterprise spend for building out their infrastructure. So let's say there's a few billion dollars of infrastructure being stood up in Japan shortly. So that kind of gives you a flavor of it. But I think over time, it's probably going to be related to the size of the GDP.
I'm Harsh Kumar, Piper Sandler. Colette Kress, thanks again. My thanks as well for hosting this event. Incredibly helpful. I had two questions. Last night, you talked about generative AI going to agentic AI going to physical AI. Is that for NVIDIA? Is that a function of more horsepower, more compute, more networking? Or is there a fundamental change needed from your company on the software or the hardware side? Just some thoughts would be helpful.
Each stage is harder to do, and each stage is harder to deploy. For example, we've had self-driving cars for, I want to say, eight years, but it's taken eight years to reach this level of maturity, and the reason for that is risk and safety. But recommending a movie, recommending a product to you, generating a unicorn flying under the sea surrounded by ocean turtles, that's not going to hurt anybody. And so depending on the application, each one of them has a different degree of risk. Agentic AI for enterprise applications is not likely to be high risk, depending on what the functionality is. In legal, you're going to have to be a bit more careful. In accounting, obviously, you have to be more careful. But in marketing, you could probably take a little bit more risk.
In customer service, it's probably domain specific, and you'll guardrail it very specifically. In software engineering, very low risk, because humans are in the loop. In chip design, very low risk, because humans are in the loop. And so you have to gauge, for each one, how you would go to market. In the case of robotics, self-driving cars, that's very hard. In the case of humanoid robotics, less hard. And the reason for that is because a car has to drive all over the world, has to drive all over the United States. It's got bumpy roads. You've got Vegas. You've got Arizona. You've got San Francisco. And so the domain variation is very high. But a humanoid robot, once you bring it into your facility, the domain adaptation is rather limited, rather narrow.
And so I wouldn't be surprised if humanoid robotics is much, much faster to deploy, if the technology works well. Okay? And I'm looking forward to the technology working well. And so it's really about the rate of adoption as a function of the complexity of the technology, how you would take it to market, the safety of the system, so on and so forth. You've got to balance all that.
Great. Thanks for taking my question, David O'Connor from BNP Paribas. Colette, question for you and a follow-up for Jensen, if I may. Just Colette, on the supply chain side of things, we've gone through Hopper. That's been supply constrained. We're going into Blackwell now, and that's supply constrained as well. How should we think about the kind of supply chain going forward if you look over the next couple of years? I mean, is this just a kind of fact of life, given the demand that's out there and the complexity of the supply chain? Or is there any way you're thinking about that differently going forward to kind of ease those constraints? And then the follow-up, just again, going back on the ASIC side of things for you, Jensen. Your largest customers, the CSPs, have kind of gone down the custom ASIC route for some workloads.
Yet, at the same time, you talk about those new reasoning models, and that requires a factor more thinking before on the inference side of things. So they start to look a bit more like GPUs again than kind of custom ASICs. So just kind of your view kind of long term, how much of is that an opportunity for you to help customers kind of do a bit more maybe customization, but maybe not a full ASIC, as you talked about? It's more general purpose given the market. But just maybe specific for CSPs, if you have a view for like five, six years down the road, how should we kind of frame how much of the market could be dominated by that? Just kind of in two minds with the new reasoning models because those ASICs start to look more like GPUs again. Thanks.
Let me first start in terms of our suppliers. Our suppliers have been such an amazing group of folks that we have worked with for years, and when we needed them over the last seven quarters, it's been interesting to see the scale that we've seen. But interestingly, yes, we would still love more. So even over those seven quarters, it's not just about supply. It has been about building out data center-scale systems, moving from just computing to a full computing platform with networking and all of the different components. When you think about the systems that are coming together now, many, many chips, many different types of switching, all of that has to come together. We've added more suppliers to make sure we have the continued redundancy we need, with multiple suppliers. Many of our suppliers, we've asked for another factory.
They have come forth absolutely willing, working hand in hand with us in terms of what we built. I believe what we've seen over the last seven quarters is not a supply constraint. It has been just an amazing journey of our suppliers and our partners and our ecosystem working with us. Is there a point in time where that balance is out? That depends. It really depends in terms of the sophistication and what you will likely see us building going forward and how we can scale with more and more newer suppliers and building that all together. But I look at it as being a great journey. They have been tremendously helpful rather than looking at it in a perspective that we're constrained.
We work very closely with the researchers and the AI researchers and engineers at just about every CSP and every startup. There are none that I think are left behind, and we get input from them. We get dialogue. We obviously have dialogue with them on how to improve our GPUs for the future. It's one of our advantages, because everybody wants to see a better cluster and better infrastructure for their next-generation models, and so we work very closely with them. It doesn't ever have to go to a point where they add so much to our GPU that they want to do a custom version of it. We just work together, and we decide what ideas we could put in there on their behalf, and so far none of them has needed it to be proprietary to them.
And the reason for that is because there's a lot of ideas coming from a whole bunch of other people. And inside our architecture, it's so complicated. We call it FP4, but there's nothing simple about FP4. The Transformer Engine has to figure out when it can take advantage of a lower precision. When does it have to move back to a higher precision? You can't do this a priori at compile time. You have to do this at runtime. And you have to do it so fast, because otherwise you're sitting around waiting to decide. And so that suspension system, if you will, and the numerical formats that you could drift between, such that after you're done running this model, which is gigantic, the answer is actually the thing you're looking for, that is just incredibly hard to do.
Just numerical precisions alone is an area that we do a lot of collaboration with a bunch of people. And so we already, in a lot of ways, all of our GPUs are customized with the entire ecosystem. It's not just us coming up with ideas. Most of the ideas, I would say, largely come from us. But we're modifying and iterating and collaborating across the board.
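The "decide precision at runtime" behavior described above can be caricatured: look at the data and drop to a narrower format only when the quantization error stays acceptable. This is a toy model of the idea, not the actual Transformer Engine heuristic; the level count and the error threshold are illustrative assumptions.

```python
# Toy caricature of runtime precision selection, in the spirit of the
# Transformer Engine behavior described above: inspect the values at
# runtime and drop to a narrower format only when it is safe.
# The level count and threshold here are illustrative, not NVIDIA's.

def quantize(values, levels):
    """Uniformly quantize to `levels` representable magnitudes."""
    peak = max(abs(v) for v in values)
    if peak == 0:
        return list(values)
    step = peak / (levels - 1)
    return [round(v / step) * step for v in values]

def choose_precision(values, max_rel_error=0.1):
    """Pick 'fp4' (16 code points) when per-element error is small, else 'fp16'."""
    approx = quantize(values, levels=16)  # FP4 has only 16 code points
    rel_errs = [abs(a - v) / abs(v) for a, v in zip(approx, values) if v != 0]
    return "fp4" if max(rel_errs, default=0.0) <= max_rel_error else "fp16"

print(choose_precision([0.1, 0.2, 0.3, 0.4]))    # narrow dynamic range
print(choose_precision([0.001, 0.002, 1000.0]))  # huge dynamic range
```

The narrow-range tensor survives aggressive quantization; the wide-range one does not, so the chooser falls back to a higher precision, which is the basic trade-off being described.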
Okay. Yep.
Okay.
We'll take two questions now. Now, listen, guys, the last question, you know the burden on the last question is very high, right? You can't leave everybody in a sad mood, okay?
Okay. I guess I'm second.
Whoever wants to be the last question? You better be second last is what you're saying.
I'm second to last.
Okay.
Louis Miscioscia here at Daiwa Capital Markets. So going back to Ben's question and continuing with Blackwell, I thought that was a great answer. I'm wondering, though, if you could relate that to how things are moving forward with AI on PCs and phones. How soon do we think there will be something out there that could really take all the money that's being invested in training capital and turn it into something that all the people here and consumers around the world could actually use?
I think consumers around the world are all using ChatGPT. I use it every day. Increasingly, I use Gemini Pro as well. It's really good. Deep research, Google Deep Research, is mind-blowing, completely, utterly mind-blowing. So it's my tutor. I ask it a lot of questions, ask it to walk me through many things, and it answers. I say, walk me through it step by step and reason through it step by step with me. So I really find that almost everybody ought to be using ChatGPT. Everybody should use these AIs at least as a tutor. Every student, every kid growing up should use it as a tutor, no doubt. Then, of course, these systems now have computer vision and speech. The continuous dialogue, the interactive dialogue, is insanely good. It's going to get better.
And so very soon, I'm fairly excited about the Meta smart glasses, and other smart glasses coming, that just connect to the cloud. And if you have questions, just ask. What am I looking at? And it knows. And while you're reading: explain this equation to me. You're sitting there studying calculus, and you've got an equation, just explain it to me. And so there's a whole bunch of things I can imagine the usefulness for. Obviously, autonomous vehicles, that's a sure thing now. Every car that gets sold in a few years either has autonomous capability or it's not going to sell. Just by comparison to all of the other car options that are out there, you're going to suffer severely by comparison.
And so I actually think AI, for all of those consumer applications, is moving faster than enterprise. Enterprise is waiting for the use case. And the use case is not an extension to the tools we currently have, but a whole bunch of AI agents that are specialized for different domains. And all of those pieces of technology are now finally coming together. We've solved the grounding problem with retrieval. We've largely solved the guardrailing problems. We've solved the domain adaptation problems. The list of issues that people had in deploying these things is largely being solved. And so I think this year is going to be the year AI agents get deployed.
And I just told you at least one is going to get deployed for sure, which is software engineers and digital marketing campaign managers for sure, customer service for sure. Those three, you're already talking about a giant part of the information knowledge worker spend. And so I think this year we're going to see a takeoff.
Jensen, on the final question, Ruben Roy from Stifel. I'm going to keep this maybe a little bit high level and short. I wanted to talk a little bit about scaling within the context of bottlenecks. You talked about power yesterday, and power going to the data center. But on the other hand, you're accelerating the cadence at which you're bringing products to market with the use of AI. As you think about bringing more of your software engineers onto using agents, et cetera, are there other areas of your roadmap that you could bring to market, whether it's networking, or maybe the DSP we saw at a recent conference? Can you help with the bottlenecks as you get more of your own, I guess, eating your own cooking, is the question. Thank you.
I appreciate the question. The AI agents, the technology layers, NIMs and libraries and the blueprints, we use internally. We use them internally and make sure that they're built like products that could then be shared with the ecosystem externally. We're trying to go as fast as we can ourselves in making sure that AI agents are all over NVIDIA. We start with the most important parts of our company. The reason for that is, why go deploy a technology of this complexity and importance in areas where it doesn't move the needle for the company? We started with chip design. We have an AI in the company called ChipNeMo. It's used every single day by all the engineers. The second one is going to be software engineers. The third will likely be our digital marketing organization.
Those are likely the first ones to go. We're trying to deploy it as quickly as we can ourselves.
Thank you, Jensen.
All right, you guys. Thank you, everybody. Happy New Year.