NVIDIA Corporation (NVDA)

Morgan Stanley Technology, Media & Telecom Conference 2026

Mar 4, 2026

Jensen Huang
President and CEO, NVIDIA

I'm just saying I'm not used to coming to work in this way, this total silence. I'm just kidding.

Mark Edelstone
Managing Director, Morgan Stanley

There were a lot of Taylor Swift comments along the way.

Jensen Huang
President and CEO, NVIDIA

Yeah.

Mark Edelstone
Managing Director, Morgan Stanley

The crowd is ready.

Jensen Huang
President and CEO, NVIDIA

This conference needs humor too. Is humor allowed here?

Mark Edelstone
Managing Director, Morgan Stanley

Humor is very allowed.

Jensen Huang
President and CEO, NVIDIA

All right.

Mark Edelstone
Managing Director, Morgan Stanley

I made investment banking jokes yesterday. Jensen, thank you for being here. For the last, I think, 25, 27 years, you've been such a great supporter of this conference. I think we sometimes become numb to the scale of the numbers and the transformation we're experiencing. I don't think I'm the only one in this audience; I'm getting billions and trillions confused constantly. My partner, Mark Edelstone, and I, 27 years ago, sat on a stage much smaller than this one, on the Morgan Stanley trading floor, and we announced and introduced NVIDIA and you to the Morgan Stanley sales force. Believe it or not, a $48 million IPO in 1998, with trailing revenue of $30 million. Jensen and his team, including Colette, were so generous.

Two years ago, you hosted our board meeting at your headquarters. I think you had just announced a $30 billion quarter in terms of revenue. Then last week, a $46 billion net income quarter. We've moved from years to quarters, from millions to billions. It's really amazing and unprecedented scale and growth. And you've changed our lives. You've changed our lives. I guess my question after that is: what had to come together strategically, culturally, and technically to deliver that type of hypergrowth at scale? The scale is really astounding. Again, thank you.

Jensen Huang
President and CEO, NVIDIA

That's gonna take 37 minutes and 13 seconds, and slightly more. You know, obviously, NVIDIA wasn't built overnight. It's taken us 33 years. I sort of remember somehow that when we went public, our price was $13, and I just read here it was $12. I overstated it. I remembered it much more optimistically than it actually was. The company's valuation at the time, I think, was something like $300 million.

Mark Edelstone
Managing Director, Morgan Stanley

Yeah.

Jensen Huang
President and CEO, NVIDIA

Mark Edelstone did such a good job preparing all of our investors that they really only had one question. It was literally a one-question IPO roadshow, and the question was, "When are you going out of business?" I'm not kidding. That question is about as hard to answer as the one you just gave me. Well, the answer is, as it turns out, we started the company with the idea of creating a new computing platform, a new way of doing computing. Not that the old way was wrong; it's just that a new way is essential to solve some unique problems. The type of thing that we were extremely good at is algorithms.

Algorithms, because the inner loop of the software tends to be about 5% of the code but 99% of the compute time. Back then, algorithms in the world of computers were quite rare. One of the most important algorithms was computer graphics, the simulation of light and how light travels through space. Computer graphics was used for things like animated movies, of course. At the time we were founded, the cover of, I forget which magazine it was, had Jurassic Park on it. It was during that time that computer graphics was becoming more capable and we could simulate, you know, virtual reality with it.

We applied it to creating a new industry, which did not exist at the time, called video games. 3D graphics was modernized in my time, consumerized in my time, and the whole video game industry was created in my time. When I say in my time, I mean it was NVIDIA that pulled it all together. The reason why we're so beloved in the video game industry, and we're so deep in it still, is because in a lot of ways we created the modern video game industry, from the algorithms associated with it to the libraries. You know, in the computer graphics industry, without RTX, there would be nothing today.

Without our contribution of all the algorithms that go into all of the game engines, you wouldn't be able to enjoy the type of video games you enjoy today. NVIDIA has been deep in the world of algorithms since day one, 33 years ago. Now, accelerated computing requires what is described as a full stack. Meaning the architecture, the chip design, the libraries that sit on top of it, how it's integrated forward. You know, I'm using that term deliberately. Apparently there's this new idea called forward deployed engineers or something like that. NVIDIA has had DevTech engineers for 33 years. We deployed them into the world's video game companies and game engines, and we integrated our technology into their game engines. Today, if you look at Epic's Unreal Engine, NVIDIA's technology is all over it.

You go to any game developer, and NVIDIA's technology is all over it. That's the reason why all the games run best on NVIDIA, for good reason. That's the reason why NVIDIA is the world's largest game platform. You probably don't know this, but there are several hundred million active GeForce gamers in the world, and many of them turned into AI researchers. It was because of the GeForce GTX 580 that Ilya Sutskever and Alex Krizhevsky discovered CUDA; it was Geoff Hinton who told them to go buy it. The first idea about NVIDIA is that we're a full stack company.

The second idea about our company is, you know, really old history; many people here might not have been born yet. During that time, the PC architecture was incompatible with modern computer graphics capabilities. We created some new technology called Direct NVIDIA. It was a way for applications to directly communicate with our APIs, and we exposed it to some very important companies. It became DirectX. If you look at the way that we communicate between us and the application, it was completely revolutionary to bypass a whole bunch of software that made things slow, to make accelerated computing possible. We introduced the idea of virtualizing frame buffer memory into system memory. It was initially called AGP, which then became PCI Express.

Much of the system architecture had to be reinvented so that we could accommodate video games and 3D graphics in a PC. Well, that same sensibility of innovating the full stack, integrated into the algorithms, as well as changing the architecture of systems so that we could create new computer systems, that same sensibility and expertise led to DGX-1, which was the world's first AI supercomputer, which I delivered, you know, by hand, very close by here in San Francisco, to a company that eventually became OpenAI. The fundamental attitude, if you will, the expertise in how we see the world, propagated in this way. It's literally 33 years. The company's entire culture is designed to be full stack. The organization is designed to be full stack.

The entire system is designed to create new stacks and new system architectures that allow us to do this. What we started with, of course, if you look at NVIDIA's graphics cards, GeForce, it's a technology marvel. How it's integrated into the operating system, how it's integrated into the system architecture, completely reinvented how computers worked before. Well, we had no trouble with that with DGX-1. I had no trouble with that with the first supercomputing cluster, which then went to Satya for their first supercomputer. You know, people have noticed that Microsoft's first supercomputer and NVIDIA's supercomputer had exactly the same benchmark results, like down to the, you know, you measure the performance of the system across all of these GPUs. It was about 10,000 GPUs or so.

It was exactly the same performance. The reason for that is because we designed it and we delivered it to Azure Cloud. It was all based on InfiniBand, all based on Ampere, the A100, which became the first computer that OpenAI used. We're quite comfortable with this full stack, full system approach. Without being able to do that, it is impossible to stay at the bleeding edge. It is literally impossible to keep up with a company that's building not just one chip each year; we're building an entire infrastructure each year because we own the CPU. We revolutionized the way of designing CPUs, and you'll see more examples of that.

We revolutionized the way we do CPUs, revolutionized the way we obviously do GPUs, connected them together using this thing called NVLink, which revolutionized the way you build computers altogether, and connected everything together with a new type of AI Ethernet called Spectrum-X. Now we own the entire stack. We know all the chips inside. When you own the entire stack and you own all the chips inside, you can change it every single year. If you don't own the entire stack and you don't own all the chips, it's hard to innovate every year. The reason for that is because you're connecting too many cats and dogs, and there's too much innovation to pull together once a year if you can't control it, because it's a full stack problem. That's how we got here.

Mark Edelstone
Managing Director, Morgan Stanley

It's amazing. In the two years since you were here last, at our board meeting, we've gone from generative AI models to reasoning and now agentic. Satya just finished a panel on the enterprise. At the enterprise level, you know, we're working with Microsoft, OpenAI, xAI, Gemini. We have Dario here from Anthropic. The capabilities are extraordinary. What does it mean for the size of that enterprise market? How is it changing, and how is it gonna be adopted? How do you see that playing out over the years? Because it's a big, big topic of the conference.

Jensen Huang
President and CEO, NVIDIA

Yeah, really good question. Literally in the last two years, we went through three inflection points in AI. The first inflection point: first of all, the technology sat there in plain sight for months. GPT-3 sat there in plain sight for months until somebody wrote essentially a wrapper around it, turned it into ChatGPT, turned it into an API, and made it available and easy to use by everybody. The first inflection point was generative, as you mentioned. The ability to translate, to convert information from one form to another form. Autoregressively generate tokens. The second, of course: the problem with generative AI is that it's prone to hallucinate. The reason for that is...

not because there's something fundamentally wrong with the technology, not because it didn't learn all the right things, but because it's not grounded on contextual information. It's not grounded on relevant information. The second thing that happened was that o1 and reasoning came about. Behind o1 is also grounding on research, grounding on truth, and the ability to combine generative with semantic. We call it Retrieval-Augmented Generation, but it's basically conditional generation. Okay? Conditional generation meaning that what you're about to generate depends on context and ground truth, or whatever research, or whatever it is. The second generation introduced reasoning, self-reflection, the ability to self-correct, because sometimes what comes out of, you know, your mouth, you kind of wish you could pull back, and you go, "Oh," you know.

In the case of AI, it has the ability to do that in real time. o1 became much more grounded, and the information that was generated was more reliable. What happened? It came out as a curiosity, with incredible excitement, and the tech industry jumped onto it because we realized what could happen. In the next phase of it, the usefulness of ChatGPT just skyrocketed. The amount of tokens that it generated was much, much more than the first generation. Maybe, you know, 100x more tokens. The model is maybe 10x larger, so it's probably something like 1,000x more compute. From o1 over ChatGPT, call it 1,000x. Because it was so useful, maybe one million times more usage. Okay?
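[Editor's note: the back-of-envelope scaling quoted here multiplies the two rough factors the speaker gives; both are his stated assumptions, not measured values. A minimal check:]

```python
# Rough compute scaling quoted in the talk (illustrative assumptions, not data):
# ~100x more tokens generated per interaction, and a model ~10x larger,
# so compute per interaction scales by roughly the product of the two.
tokens_factor = 100   # ~100x more tokens per response
model_factor = 10     # ~10x larger model -> ~10x compute per token
compute_factor = tokens_factor * model_factor
print(compute_factor)  # 1000, i.e. "something like 1,000x more compute"
```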

The combination of its usage, and its usefulness, and its groundedness, allowed us to see that next phase of growth. In the end, what o1 did was provide information, essentially a chatbot that was much more factual. It was informational. Of course, many of us use it for research, and we use it all the time. Instead of searching, you know, our goal isn't to search, our goal is to get answers. ChatGPT gave us that. That was kind of the second inflection. The inflection that we're seeing now also sat in plain sight for quite a long time. It's basically the ability for AI to access files and use tools.

Now it could reason, it could think, it could use tools, it could solve problems. It could do search, it could do planning. Probably the biggest phenomenon that's happening, and if you're paying attention to it, I'm sure you are, is OpenClaw, probably the single most important release of software, you know, probably ever. If you look at OpenClaw and the adoption of it, you know, Linux took, right, some 30 years to reach this level. OpenClaw in, what is it, three weeks, has now surpassed Linux. It is now the single most downloaded open source software in history, and it took three weeks. If you look at the line, you know, even on a semi-log plot, this thing is straight up. It's vertical. It looks like the Y-axis.

I've never seen anything like it. Okay? It really looks like a Y-axis. What's happening now? You can give it a problem statement. Now the prompt starts with "create." You know, the last generation of prompts, the way you kind of think about it, was "what is," "when is," "who is," right? That was the last prompt. This new prompt goes "create," "do," "build," "write." Does that make sense? What's happened? The last prompts were queries. These prompts are actions. They're tasks. Do something for me. You describe it, you know, as expressively as you like, with a lot of intention, and let it infer, or very specifically. It goes off and it just churns. It just thinks. It goes off and it does research and it reads.

It reads a manual. If it has to use a tool it's never used before, it reads the tool's manual. It goes off and studies what's on the web, and it, you know, applies the tools and performs the task. Now, I just said we went from one generative prompt, one generative response, to now one that is 1,000 times more tokens. Agents, you know, at the company we call them OpenClaw. These claws are now consuming, what, one million times more tokens. They're running continuously in the background. We have a whole bunch of claws in the company; they're all continuously running, doing things for us, writing, developing tools, developing software. Now the question is the implication.

The amount of compute that our company needs has just skyrocketed. The amount of compute every company needs is skyrocketing.

Mark Edelstone
Managing Director, Morgan Stanley

In that context, I think over the last few days it's come out, certainly at Morgan Stanley as a user: maximum bullish on tokens, maximum bullish on doing and creating. It does require the compute you just mentioned. The question is around the financing and the CapEx to support that extraordinarily large compute. How does it all get financed, as you see it, from the top of the ecosystem? How do the AI factory economics play out and evolve?

Jensen Huang
President and CEO, NVIDIA

Yeah. There are a couple of thoughts that are really important. Remember, and I appreciate you using the word factory: you know, several years ago, I described that these data centers, what people call data centers, are not for storing data, as in a data center. They are producing tokens. A facility, a plant whose fundamental purpose is producing tokens, is a factory. It's an AI factory. At the time people went, "Jensen, that sounds so grungy." You know, it's clean. But it produces tokens, and nobody likes to build data centers because, you know, who knows what kind of return you're gonna get on a data center, but everybody loves building factories. The reason for that is because factories make money.

We now know for certain that these factories directly generate tokens, and these tokens are monetizable. The more compute you have, the more tokens you can produce; the more tokens you produce, the greater your top line. We now know for certain that companies' revenues are directly correlated to compute. We know that for a fact, because if Anthropic had 3x more compute, their revenues would be 3x higher. We know that. We know that Anthropic is compute limited, factory limited. It's no different than Mercedes being factory limited, or any company being factory limited. If they had more compute in their factories, they would have higher revenues. If OpenAI, right now, had more compute, they would have higher revenues.

The first thought is that compute equals revenues. The big idea, of course, is that compute equals GDP. That we also know. Compute equals a country's GDP. That's one thought. The second thought: the reason why NVIDIA is so successful is because we engineer these systems full stack, end-to-end, and they're architected from the ground up to generate tokens with incredible effectiveness. NVIDIA's tokens per watt is an order of magnitude ahead of the competitive alternative. Tokens per watt. What does that mean? Remember, your factory has 1 gigawatt, and if your tokens per watt is 10x the alternative, your revenues are 10x the alternative. For the very first time in history, the computer architecture chosen for a company's factory must go through CEO review, no question about it.

That company only has a gigawatt, or two, three gigawatts, for next year. If they put the wrong system inside, it will affect their revenues the next year. I promise you that. We see it. Our architecture is so advanced now, and pulling further and further ahead. You know, probably one of the most exhaustive benchmarking efforts is done by a firm called SemiAnalysis, and they declared NVIDIA the inference king. Inference king. Inference is tokens per second, tokens per watt. It's about generating tokens, and tokens per dollar. When our performance per watt, or per anything, is so far ahead of the competition or the alternative, our tokens per dollar is also the best, which means we're the cheapest tokens you can produce today. Not even close. An order of magnitude better. That's the second thought.

The second big idea for AI is that AI is a factory. Factories are always power-limited. It doesn't matter how many plants you have; each plant is still 100 megawatts or 1 gigawatt, and therefore, tokens per watt is the single most important thing for the top line of companies. They have to make those decisions very, very carefully. You know, it's no longer just about PowerPoint slides. You're not gonna put $50 billion down on somebody's PowerPoint slides.
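[Editor's note: the factory economics described above reduce to a simple linear relationship: with power fixed, token output, and therefore revenue, scales with tokens per watt. A minimal sketch, using hypothetical figures purely for illustration:]

```python
# Power-limited AI factory economics: at a fixed power budget, revenue
# scales linearly with tokens-per-watt efficiency.
# All numbers are hypothetical, chosen only to illustrate the relationship.

def factory_revenue_per_s(power_watts: float,
                          tokens_per_watt_s: float,
                          dollars_per_token: float) -> float:
    """Revenue per second for a factory that sells every token it produces."""
    tokens_per_s = power_watts * tokens_per_watt_s
    return tokens_per_s * dollars_per_token

GIGAWATT = 1e9  # the fixed power budget of one plant
baseline = factory_revenue_per_s(GIGAWATT, tokens_per_watt_s=1.0, dollars_per_token=1e-6)
better = factory_revenue_per_s(GIGAWATT, tokens_per_watt_s=10.0, dollars_per_token=1e-6)
print(better / baseline)  # 10.0: 10x tokens per watt means 10x revenue at the same power
```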

Mark Edelstone
Managing Director, Morgan Stanley

The token demand is extraordinary, as you just mentioned. We're seeing it in your numbers, right? I think I mentioned $46 billion in net income, but $70 billion in revenue.

Jensen Huang
President and CEO, NVIDIA

You were gonna ask me something about how to fund it. Can I just tell you how to fund it?

Mark Edelstone
Managing Director, Morgan Stanley

Yes.

Jensen Huang
President and CEO, NVIDIA

First of all, I just told you the reason why you have to build these factories in the future. You just believe that, one, software is important. I hope this audience believes software is important. Software runs the world. That's the first thought. The second idea is this: there will be no software in the future that's not agentic. Do you guys agree with that? How could you have software that's dumb? It is absolutely true that every software company will become an agentic company. They're gonna simultaneously use open models, okay? Open models meaning the ones that they download themselves and fine-tune themselves. They're also gonna use closed models.

The combination of all that, just like at all of our companies: we have employees that we hire, we have employees that we're grooming, we have contractors that we bring in, we have specialists like yourself that we bring into the company just to do our work. Our job is not to do the job. Our job is to have the job be done. That's what every company does. Therefore, every company will realize that with these AI models, some of them you rent, some of them you build. That's not illogical. Just like with biological workers, you will do that with digital workers. Every single software company in the future will no longer just rent tools; they'll also rent experts to use the tools.

They'll not just rent tools, but rent experts that use those tools, because their agents are going to be extremely good at using their specialized tools. Every single software company, what is the IT industry, a couple trillion dollars? Today, they're tool renters. In the future, of course, they'll rent agents that use those tools. That means the software industry in the future will be much larger than the software industry of today. Pick your favorite software companies, and I can imagine a much larger future for them. Cadence is gonna be much larger. Synopsys is gonna be much larger. Siemens is gonna be much larger in the future, but their business profile will change, because today they're basically software licensing companies.

In the future, they will also rent tokens, specialized tokens, which also means that a $2 trillion industry with no token consumption today will be an extraordinary token consumer in the future. That's where that money's gonna come from. All of that software, the IT industry of today, not the enterprise companies, the IT industry alone, is gonna consume enormous amounts of tokens in the clouds, and they're either open models or private.

Mark Edelstone
Managing Director, Morgan Stanley

That extraordinary token economy is facing some constraints. We've got memory constraints. We've got power permitting constraints. I was in Texas with builders; we have electrician constraints. How do you see that playing out? Satya raised it in the last session, and you're closer to it. Also, if it takes a little longer, is it still okay, or is it really negative if we, you know, slow the cycle on building this extraordinary token economy?

Jensen Huang
President and CEO, NVIDIA

I love constraints. I love constraints. The reason for that is because in a world of constraint, you have no choice but to choose the best. You can't squander your choice. If the data centers, if the land, power, and shell are constrained, you're not gonna randomly put something in there just to try it out. You're gonna put in something that you know for certain is gonna deliver the tokens per watt, so that from the moment you secure the capacity, we're gonna be able to stand up an entire factory for you. We're the only company in the world that can come into your company and help you stand up an entire AI factory. You know, anybody here that needs an AI factory, you know, I'm happy to help.

You call one person, and that one person comes in, and the next thing you know, you're in the AI factory business. Okay? We have the expertise. We know the architecture works. We know there's enormous demand for the architecture, you know, after you're done standing it up, so we can help you get into business. When you're constrained that way, you have no choice but to make the best choice, because your revenues next year are directly correlated to it. This is one of those questions now for all the CEOs of the cloud service providers or software providers. If they make poor choices, this is no different than me choosing the wrong foundry. This is no different than me choosing, you know, the wrong memory, the wrong anything.

Because everything is so constrained, if I choose poorly, my revenues are affected; everything is affected. They can't choose poorly. The second thing is, you know, NVIDIA is, as you mentioned, working at such a large scale. One of the things that we do with our money, with our capital, of course, is to secure our supply chain, so that when Satya asks me to help him stand up a few gigawatts, the answer is, "No problem." The reason for that is I've got all the memory. I've got all the wafers. I've got all the CoWoS. I've got all the packaging. I've got all the systems. I've got all of the connectors. I've got all the cables.

You know, everything from copper to multilayer ceramic capacitors, everything's secured. That's one of the reasons why NVIDIA's balance sheet being strong is so strategic. A strong balance sheet today is not only helpful, it's strategic. Look at the amount of revenue we're shipping into; just look backwards at the amount of supply chain capacity we had to go secure, or that they had to believe in. You know, if you set up a factory, a plant, a DRAM plant, and I come in and say, "You know what? Go ahead and set up the DRAM plant, because I'm gonna use it," that goes a long way. You might as well take that to the bank, as many of them have.

I think the fact that everything is scarce is fantastic for us.

Mark Edelstone
Managing Director, Morgan Stanley

And it-

Jensen Huang
President and CEO, NVIDIA

Yeah.

Mark Edelstone
Managing Director, Morgan Stanley

I think it does create duration, which I think is extraordinarily powerful for you. Then there's another layer, which is the ecosystem. You are the greatest cash-flow-generating company in history, and you've taken that capital and really created, it feels like, stability and diversity in the ecosystem. So how do you think about that in both a financial and a strategic context as you build, I think, both duration and durability into the entire ecosystem?

Jensen Huang
President and CEO, NVIDIA

Yeah. You know, when Mark took me public, I think it was probably, you know, delivered with a little bit less energy than just now, but I am fairly certain I said all the same things. NVIDIA has been building... Remember, accelerated computing requires that I build an ecosystem. You can't just take code and recompile it and have it work. There's no such thing as a universal accelerated computing system. Accelerated computing is by definition proprietary. There is nothing about our architecture that is compatible with somebody else's. It's just not. The instruction set is different, the architecture is different, the microarchitecture is different. Everything is different.

We hide it underneath all these, you know, these libraries in such a way that makes it feel seamless. Because of NVIDIA, we accelerate everything from data processing, molecular dynamics, fluid dynamics, particle systems, you know, biology, chemistry, all the way to deep learning, right? Robotics, you know, long sequence, spatial, 3D, you name it, right?

Mark Edelstone
Managing Director, Morgan Stanley

Sounds like a five-layer cake.

Jensen Huang
President and CEO, NVIDIA

It's a five-layer cake. Right. Exactly. Because we've been working on it so long, it looks like everything's accelerated. It's not true. It's because we did it one at a time, one domain at a time, that all of the important domains in the world are now fully accelerated. The thing that we do on the supply chain side: our balance sheet is incredibly valuable because it provides security for our customers. On the upstream side, I'm cultivating new ecosystems for the future. All these AI natives that I'm investing in, the companies that we're partnering with, these are expanding and extending the CUDA ecosystem. 100% of everything that we do is on top of CUDA. Every investment that we've made is on top of CUDA.

Recently there was a question about whether we're gonna invest $100 billion in OpenAI. Just for everybody's update, we finalized our agreement. We're gonna invest $30 billion in OpenAI. I think the opportunity to invest $100 billion in OpenAI is probably not in the cards. The reason for that is because they're gonna go public. I'm fairly sure that if we provide the compute capacity they need, which we're ramping up hard to deliver, the revenues will more than follow. They're gonna go public towards the end of the year.

This might be the last time we'll have the opportunity to invest in a consequential, you know, company like this. Our $10 billion investment in Anthropic will probably be the last as well. Speaking of that, one of the things that I wanted to make sure I told you guys this time, something new that you probably haven't internalized: you see all the news, but you probably haven't internalized some of the really great work that we did in the last year and a half or so. We expanded OpenAI's capacity from Azure to OCI to now AWS. We expanded OpenAI's reach of capacity to AWS. We're ramping AWS like mad. We're ramping them as hard as we can so that OpenAI has access to even more capacity.

That's one. The second thing that we did, and this was a really, really great outcome, is we're now also working with Anthropic. In the case of Anthropic, we're expanding their capacity as aggressively as we can at AWS as well as Azure. Notice what we're doing in both: they used to be one-on-one, now they're kind of a cross-product. The amount of capacity that we're gonna bring online for them, you know, supporting their revenues, their quality of revenues is so good, we just need a lot more capacity for them. I think this is something that is somewhat new. Of course, the third thing that happened is a brand new AI lab flashed into the world. Isn't that right? I don't know who mentioned them.

A brand-new lab came into the world, and they're in need of a few million GPUs, and that's MSL. MSL is net new on top of Meta. We've worked with Meta a long time; MSL is net new on top of that. These three things happened, three new growth vectors: OpenAI at AWS, Anthropic at both AWS and Azure, and MSL. Our demand profile went from being incredibly high to higher than that.

Mark Edelstone
Managing Director, Morgan Stanley

Speaking of more than that, there are Waymos everywhere. I wanna walk my new dog with my new robot. Physical AI could be the next place. How does that take TAM and tokens to a whole other level?

Jensen Huang
President and CEO, NVIDIA

Yeah, that's really great.

Mark Edelstone
Managing Director, Morgan Stanley

NVIDIA.

Jensen Huang
President and CEO, NVIDIA

That's really great. AI is all the stuff that we're doing inside the building, but obviously, ultimately, the largest industries are outside the building. That AI needs to have physical awareness, physical understanding. You know, causality: you push a bottle, it falls over. It understands gravity, understands collision, you know, understands inertia, understands those things, okay? It understands, for example, object permanence. I take this and I put it behind my chair. In your mind, you can't see it, but you realize it hasn't disappeared. Okay, so object permanence. Things like that affect physical behavior and physical intelligence, you know, fairly importantly. You probably also don't know this, but NVIDIA is at the frontier of physical AI. Cosmos is the most downloaded physical AI model in the world.

NVIDIA is also at the frontier of autonomous AI. Two versions: the autonomous vehicle model called Alpamayo. Look it up. Number one downloaded. Then the next one, GR00T, humanoid robotics physical AI. We are at the frontier on all three of those. We're also at the frontier of digital biology AI. Look up La Proteina. Incredibly successful. La Proteina for digital biology. There's a whole bunch of other models. GR00T is now the number one most downloaded humanoid robotics model in the world. We are at the frontier on physical AI. Physics, laws of physics, multi-physics. Earth-2, we're at the frontier of AI physics. That is physical AI and AI physics. This whole area of physical AI, NVIDIA defines the frontier. It is completely open.

We open it because we wanna enable every company, in new or old industries, to be able to take advantage of this capability. We've got the whole stack and the necessary computers for you to advance the AI for your own use, as well as deploy it inside a robot, inside a plant, at the edge, at a radio tower, deploy it everywhere. This is the next frontier. In two years' time, we're gonna be largely done talking about agentic AI because we're all gonna be using it. In two years' time, if you invite me back again.

Mark Edelstone
Managing Director, Morgan Stanley

Every year.

Jensen Huang
President and CEO, NVIDIA

We're gonna be talking about all these new companies. Of course, we announced a very important one, a co-innovation lab with Lilly. There'll be others, but, you know, in order to set up Lilly's AI factory, unless you have the capabilities of NVIDIA and this full software stack, and the capabilities of all the models and the expertise in that digital biology domain, how would you even do it? The things that we are building will, in the next couple of years you'll see, really come to the fore. We're gonna be talking about physical AI starting in the next two, three years and for a decade.

Mark Edelstone
Managing Director, Morgan Stanley

The speed of innovation and the pace that you're operating at is truly extraordinary. At the beginning of the week, my partner, Joe Moore, made NVIDIA his number one pick.

Jensen Huang
President and CEO, NVIDIA

Is that right?

Mark Edelstone
Managing Director, Morgan Stanley

It's his number one pick.

Jensen Huang
President and CEO, NVIDIA

Thank you.

Mark Edelstone
Managing Director, Morgan Stanley

Thank you.

Jensen Huang
President and CEO, NVIDIA

Thank you.

Mark Edelstone
Managing Director, Morgan Stanley

Good timing, Joe.

Jensen Huang
President and CEO, NVIDIA

33 years later.

Mark Edelstone
Managing Director, Morgan Stanley

How do you think about the stock? Do you think about the stock? Do you have perspectives on it? You're so extraordinarily important and busy driving all this innovation for, in essence, everything that's going on. With 3,500 attendees, we have $40 trillion of market cap here. How do you think about that?

Jensen Huang
President and CEO, NVIDIA

Well, you know, of course I care about the stock. I care about shareholders, I care about our employees, I care about all of you. You might be referring to, we just had the best earnings of earnings in the history of earnings. Is that what you're saying? I think somebody actually told me that this might be the single best print in the history of humanity. I said it must be only, you know, recorded humanity. I'm sure somebody had better returns. But anyways, we had a very good quarter. Listen, you can't hold the stock back. You can't hold it back. The reason for that is very simple. Compute equals revenues for companies.

In the future, every single company will need compute for revenues. I'll just make that prediction for now. Every single company will need compute for revenues, and the reason for that is because compute translates to intelligence, which translates to your digital workforce, which translates to your revenues. I am certain compute equals revenues. I'm certain also that compute equals GDP. Therefore, every country will have it, because not one country in the future will say, "Guess what? You know, we're gonna opt out on intelligence. We've got I don't know what we got, but we don't need intelligence. That's the one thing we don't need." Okay? If you need intelligence, you're gonna need digital, you're gonna need AI, you're gonna need compute. Compute equals GDP. I know that for certain.

I also know that we're at the beginning of this journey, and I see crystal clearly exactly how it's gonna get funded. We know for a fact that all the CSPs took all of their CapEx and converted it to generative, agentic AI systems, because it helps search, because it helps shopping, because it helps ads, because it helps social, because, literally, every single internet service in the world has been reinvented into generative AI. The entire internet industry could take 100% of their CapEx and make it AI, because it's better. We've proven it to be better. Meta has proven it to be better. Google has proven it to be better. AWS has proven it to be better. You can now take your CapEx and convert it to this.

Number two, I just said the entire software industry will be token-driven. The entire software industry. You pick your favorite software company, and I can show you exactly how they're going to be token-driven. Take your favorite, you know, software company: their tokens will either be produced by themselves, which needs compute, or resold from Anthropic or OpenAI, and that needs compute. What that says, for the first time, is that the entire IT industry will have to be fueled by compute. That's exactly where all this is going to come from, trillions of dollars of it, and we're at the beginning of that. That's my prediction.

Mark Edelstone
Managing Director, Morgan Stanley

Thank you, Jensen, for making history at this conference. 27 years. Thank you.