NVIDIA Corporation (NVDA)

Goldman Sachs Communacopia + Technology Conference 2024

Sep 11, 2024

David Shannon
Analyst

Good morning, good morning.

Jensen Huang
CEO, NVIDIA

Thank you. Good morning.

David Shannon
Analyst

How's everybody doing?

Jensen Huang
CEO, NVIDIA

Great to see everybody.

David Shannon
Analyst

You know, I flew in late last night. I didn't really expect to be on stage at 7:20 A.M., but, you know, seems everybody else did, so here we are. Jensen, thank you for being here. I'm delighted to be here. Thank you all for being here. I hope everybody's been enjoying the conference. It's a fantastic event. Lots of great companies, couple thousand people here, and so, you know, really, really terrific, and obviously, a real highlight and a real privilege to have Jensen, you know, President and CEO of NVIDIA, here. Since you founded NVIDIA in 1993, you've pioneered accelerated computing. The company's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefining computers and igniting the era of modern AI.

Jensen holds a BSEE degree from Oregon State University and an MSEE degree from Stanford, and so I want to start by welcoming you, Jensen. Everybody, please welcome Jensen to the stage.

Jensen Huang
CEO, NVIDIA

Thank you. Thank you. Thank you.

David Shannon
Analyst

So, we're gonna try to do this really casually, and I'm gonna try to get you talking about some things that I know you're passionate about. But I just wanna start. Thirty-one years ago, you founded the company. You've transformed yourself from a gaming-centric GPU company to one that offers a broad range of hardware and software to the data center industry, and I'd just like you to start by talking a little bit about the journey.

You know, when you started, what were you thinking? How has it evolved? Because it's been a pretty extraordinary journey, and then maybe, you know, you can, you know, break from that and just talk a little bit as you position forward on your key priorities and how you're looking at the world going forward.

Jensen Huang
CEO, NVIDIA

Yeah, Dave, it's great to be here. The thing that we got right, I would say, is our vision that there would be another form of computing that could augment general-purpose computing to solve problems that a general-purpose instrument won't ever be good at. That processor would start out doing something that was insanely hard for CPUs to do, and that was computer graphics. But we would expand that over time to do other things. The first thing that we chose, of course, was image processing, which is complementary to computer graphics.

We extended it to physics simulation, because in the application domain that we selected, video games, you want it to be beautiful, but you also want it to be dynamic, to create virtual worlds. We took it step by step by step, and we took it into scientific computing beyond that. One of the first applications was molecular dynamics simulation. Another was seismic processing, which is basically inverse physics. Seismic processing is very similar to CT reconstruction, another form of inverse physics. And so we just took it step by step by step, reasoned about complementary types of algorithms and adjacent industries, and kind of solved our way here, if you will. But the common vision at the time was that accelerated computing would be able to solve problems that are interesting.

That if we were able to keep the architecture consistent, meaning have an architecture where software that you develop today could run on the large installed base that you've left behind, then the software that you created in the past would be accelerated even further by new technology. This way of thinking about architecture compatibility, creating a large installed base, taking the software investment of the ecosystem along with us, that psychology started in 1993, and we carried it to this day, which is the reason why NVIDIA's CUDA has such a massive installed base, because we always protected it. Protecting the investment of software developers has been the number one priority of our company since the very beginning.

Going forward, some of the things that we solved along the way, of course, you know, learning how to be a founder, learning how to be a CEO, learning how to conduct a business, learning how to build a company.

David Shannon
Analyst

Not easy stuff.

Jensen Huang
CEO, NVIDIA

These are all, you know, new skills, kind of like learning how to invent the modern computer gaming industry. You know, people don't know this, but NVIDIA has the largest installed base of video game architecture in the world. GeForce is some 300 million gamers in the world, still growing incredibly well, super vibrant. So I think the... Every single time we had to go and enter into a new market, we had to learn new algorithms, new market dynamics, create new ecosystems. And the reason why we had to do that is because this is unlike a general-purpose computer, where, if you built that processor, then everything eventually just kind of works.

But we're accelerated computing, which means the question you have to ask yourself is: What do you accelerate? There's no such thing as a universal accelerator, because yeah-

David Shannon
Analyst

Dig, dig down on this a little bit, Jensen. Just talk about the differences between general-purpose and accelerated computing.

Jensen Huang
CEO, NVIDIA

If you look at a body of software that you wrote, you know, there's a lot of file IO. There's setting up the data structures. And there's a part of the software inside which has some of the magic kernels, you know, the magic algorithms. And these algorithms are different depending on whether it's computer graphics or image processing or whatever it happens to be. It could be fluids, it could be particles, it could be inverse physics, as I mentioned, it could be image domain type stuff. And so all these different algorithms are different.

If you created a processor that is somehow really good at those algorithms, and you complement the CPU, where the CPU does, you know, whatever it's good at, then theoretically, you could take an application and speed it up tremendously. The reason for that is because usually some 5%-10% of the code represents 99.999% of the runtime.

And so if you take that 5% of the code, and you offloaded it on our accelerator, then technically, you should be able to speed up the application 100 times.
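The arithmetic here is Amdahl's law: if the hot kernels account for nearly all of the runtime, offloading just that slice to an accelerator yields an outsized overall speedup. A minimal sketch in Python (the specific inputs are illustrative, based on the rough figures in the talk):

```python
def overall_speedup(hot_runtime_share, kernel_speedup):
    """Amdahl-style estimate: accelerate only the hot kernels.

    hot_runtime_share: fraction of total runtime spent in the offloaded kernels.
    kernel_speedup: how much faster the accelerator runs those kernels.
    """
    remaining = (1 - hot_runtime_share) + hot_runtime_share / kernel_speedup
    return 1 / remaining

# ~5% of the code accounting for ~99.999% of runtime, kernels 100x faster (assumed)
print(round(overall_speedup(0.99999, 100), 1))  # -> 99.9
```

Because the unaccelerated remainder is tiny, the whole application speeds up by nearly the full kernel speedup, which is the "100 times" Huang describes.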

It's not abnormal that we do that. It's not unusual. So we'll speed up image processing by five hundred times, and now we do data processing. Data processing is one of my favorite applications because almost everything related to machine learning, which is a data-driven way of doing software, involves data processing. It could be SQL data processing, it could be Spark-type data processing, it could be vector database type processing, all kinds of different ways of processing either unstructured data or structured data, which is data frames, and we accelerate the living daylights out of that. But in order to do that, you have to create that library, that-

David Shannon
Analyst

Right

Jensen Huang
CEO, NVIDIA

... fancy library on top. And in the case of computer graphics, we were fortunate to have Silicon Graphics' OpenGL and Microsoft DirectX. But outside of those, no libraries really existed. And so, for example, kind of like SQL is a library for in-storage computing, we created a library called cuDNN. cuDNN is the world's first neural network computing library. And so we have cuDNN, we have cuOpt for combinatorial optimization, we have cuQuantum for quantum simulation and emulation, all kinds of different libraries. cuDF for data frame processing, for example, SQL. And so all these different libraries have to be invented that take the algorithms that run in the application and refactor those algorithms in a way that our accelerators can run. And if you use those libraries, then you get 100X speedup.

David Shannon
Analyst

Get much more speed.

Jensen Huang
CEO, NVIDIA

Incredible!

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

And so the concept is simple, and it made a lot of sense, but the problem is, how do you go and invent all these algorithms and cause the video game industry to use it, write these algorithms, cause the entire seismic processing and energy industry to use it, write a new algorithm, and cause the entire AI industry to use it? You see what I'm saying?

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

These libraries, every single one of these libraries, first, we had to do the computer science. Second, we had to go do the ecosystem development, and we have to go convince everybody to use it, and then what kind of computers does it want to run on? You know, all the different computers are different. So we just did it one domain after another domain, after another domain. You know, we have a rich library for self-driving cars. We have a fantastic library for robotics, incredible library for virtual screening, whether it's physics-based virtual screening or neural network-based virtual screening. Incredible library for climate tech.

One domain after another domain.

David Shannon
Analyst

Another domain.

Jensen Huang
CEO, NVIDIA

We have to go meet friends and, you know, create the market.

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

And so what NVIDIA's really good at, as it turns out, is creating new markets. And we've done it for so long now that it seems like NVIDIA's accelerated computing is everywhere.

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

But we really had to do it one at a time, one industry at a time. Yeah.

David Shannon
Analyst

So I know that many investors in the audience are super focused on the data center market, and it would be interesting to kind of get your perspective, the company's perspective, on the medium and long-term, you know, opportunity set. You know, obviously, your industry is enabling, you know, your term, the next industrial revolution. What are the challenges the industry faces? Talk a little bit about how you view, you know, the data center market as we sit here today.

Jensen Huang
CEO, NVIDIA

There are two things that are happening at the same time, and they get conflated, and it's helpful to tease them apart. The first thing, let's start with a condition where there's no AI at all. In a world where there's no AI at all, general-purpose computing has run out of steam anyway. We know that, for all the people in the room who enjoy semiconductor physics, Dennard scaling and Mead-Conway transistor scaling, you know, increased performance at iso-power, or increased performance at iso-cost, those days are over. We're not gonna see CPUs, general-purpose computers, that are gonna be twice as fast every year ever again.

We'll be lucky if we see it twice as fast every ten years. Now, Moore's Law, remember, back in the old days, Moore's Law was ten times every five years, a hundred times every ten years, and so all we had to do was just wait for the CPUs to get faster, and as the world's data centers continued to process more information, CPUs got twice as fast every single year, and so we didn't see computation inflation.
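Those scaling rates compound, and it is worth checking the annualized growth each claim implies (the million-x-per-decade figure for accelerated computing comes up a bit later in the conversation):

```python
# Annualized growth rate implied by each decade-scale claim.
moores_law = 100 ** (1 / 10)          # 100x per decade  -> ~1.58x per year
accelerated = 1_000_000 ** (1 / 10)   # 1,000,000x per decade -> ~3.98x per year
print(round(moores_law, 2), round(accelerated, 2))  # -> 1.58 3.98
```

So "a hundred times every ten years" is roughly 1.6x per year, while the million-x decade implies nearly 4x per year, compounding far faster than transistor scaling ever did.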

But now that's ended. We're seeing computation inflation, and so the thing that we have to do is we have to accelerate everything we can. You know, if you're doing SQL processing, accelerate that. If you're doing any kind of data processing at all, accelerate that. If you're creating an internet company, and you have a recommender system, absolutely accelerate it, and they're now fully accelerated. This, a few years ago, was all running on CPUs, but now the world's largest data processing engine, which is a recommender system, is all accelerated now. And so if you have recommender systems, if you have search systems, any large-scale processing of any large amounts of data, you gotta just accelerate that.

And so the first thing that's gonna happen is the world's $1 trillion of general-purpose data centers are gonna get modernized into accelerated computing. That's gonna happen no matter what. That's gonna happen no matter what. And the reason for that is, as I described, Moore's Law is over. And so the first dynamic you're gonna see is the densification of computers. You know, these giant data centers are super inefficient because they're filled with air, and air is a lousy conductor of heat. And so what we wanna do is take that, you know, call it 50-, 100-, 200-megawatt data center, which is sprawling, and densify it into a really, really small data center.

And so if you look at one of our server racks, you know, NVIDIA server racks look expensive, and it could be $2 million per rack, but it replaces thousands of nodes. The amazing thing is, just the cables connecting old general-purpose computing systems cost more than replacing all of those nodes and densifying them into one rack. The other benefit of densifying is that, once you've densified it, you can liquid cool it, because it's hard to liquid cool a data center that's very large, but you can liquid cool a data center that's very small. And so the first thing that we're doing is modernizing data centers, accelerating them, densifying them, making them more energy efficient. You save money, you save power.

You know, it's much more efficient. That's the first dynamic. If we just focused on that, that's the next ten years. We'll just accelerate that. Now, of course, there's a second dynamic, because NVIDIA's accelerated computing brought such enormous cost reductions to computing. It's like, in the last ten years, instead of Moore's Law being a hundred X, we scaled computing by a million X. And so the question is: What would you do different if your plane traveled a million times faster? And so all of a sudden, people said, "Hey, listen, why don't we just use computers to write software?

Instead of us trying to figure out what the features are, instead of us trying to figure out what the algorithms are, we'll just give all the data, all the predictive data, to the computer and let it figure out what the algorithm is." Machine learning, generative AI. And so we did it at such large scale on so many different data domains that now computers understand not just how to process the data, but the meaning of the data, and because they understand multiple modalities at the same time, they can translate data. And so it can go from English to images, images to English, English to proteins-

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

Proteins to chemicals. And so because it understood all of the data at one time, it can now do all this translation we call generative AI. A large amount of text into a small amount of text, a small amount of text into a large amount of text, you know, and so on and so forth. We're now in this computing revolution. And now what's amazing is... So the first $1 trillion of data centers is gonna get accelerated. It invented this new type of software called generative AI. This generative AI is not just a tool, it is a skill. And so this is the interesting thing. This is why a new industry has been created. And the reason for that is, if you look at the whole IT industry up until now, we've been making instruments and tools that people use.

For the very first time, we're gonna create skills that augment people. And so that's why people think that AI is gonna expand beyond the $1 trillion of data centers and IT, and into the world of skills. So what's a skill? A digital chauffeur is a skill, you know, autonomous driving. A digital assembly line worker robot. A digital customer service chatbot.

Digital employee for planning NVIDIA's supply chain. It could be a-- That would be somebody that's a digital SAP agent. You know, we use a lot of ServiceNow in our company, and we have digital, you know, employee service. And so now we have all these digital humans, essentially, and that's the wave of AI that we're in now.

David Shannon
Analyst

So, step back, shift a little. Based on everything you just said, there's definitely an ongoing debate in financial markets as to whether or not, as we continue to build this AI infrastructure, there is an adequate return on investment.

Jensen Huang
CEO, NVIDIA

Yeah.

David Shannon
Analyst

How would you assess customer ROI at this point in the cycle? And if you look back and you kinda think about, you know, PCs, cloud computing, when they were at similar points in their adoption cycles, how did the ROIs, you know, look then compared to where we are now?

Jensen Huang
CEO, NVIDIA

Oh, fantastic

David Shannon
Analyst

... as we continue to scale?

Jensen Huang
CEO, NVIDIA

Yeah, fantastic. So let's take a look. Before cloud, the major trend was virtualization, if you guys remember that. And virtualization basically said: Let's take all of the hardware we have in the data center and virtualize it into essentially a virtual data center, and then we could move workloads across the data center instead of associating them directly with a particular computer. As a result, the tenancy and the utilization of that data center improved, and we saw essentially a two-to-one, you know, two-and-a-half-to-one, if you will, cost reduction in data centers overnight from virtualization. The second thing we then did, after we virtualized it, was put those virtual computers right into the cloud.

As a result, multiple companies, not just one company's many applications, multiple companies, can share the same resources. Another cost reduction. The utilization, again, went up. By the way, these last 10, 15 years of all this stuff happening masked the fundamental dynamic which was happening underneath, which is Moore's Law ending. We found a 2X, then another 2X, in cost reduction, and it hid the end of transistor scaling, of CPU scaling. Then all of a sudden, we'd already gotten the utilization and cost reductions out of both of these things, and now we're out.

And that's the reason why we see data center and computing inflation happening right now. And so the first thing that's happening is accelerated computing. It's not uncommon to take your data processing work, and there's this thing called Spark. Anyone who's used Spark knows it's probably the most used data processing engine in the world today. If you use Spark and you accelerate it with NVIDIA in the cloud, it's not unusual to see a 20-to-1 speedup. Of course, the NVIDIA GPU augments the CPU, so the computing cost goes up a little bit.

It goes, maybe it doubles, but you reduce the computing time by about 20 times, and so you get a 10x savings.
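The cost arithmetic is worth spelling out: a cloud bill is roughly hourly rate times hours, so a pricier accelerated instance still wins if it cuts wall-clock time by more. A toy sketch (the function name and dollar figure are hypothetical; the ~2x rate and ~20x speedup are the rough numbers from the talk):

```python
def accelerated_job_cost(baseline_cost, rate_multiplier, speedup):
    """Cloud bill = hourly rate x hours; acceleration raises the rate, cuts the hours."""
    return baseline_cost * rate_multiplier / speedup

baseline = 100.0  # dollars for the CPU-only Spark job (hypothetical)
new_cost = accelerated_job_cost(baseline, rate_multiplier=2.0, speedup=20.0)
print(new_cost, baseline / new_cost)  # -> 10.0 10.0 (the "10x savings")
```

Doubling the rate while dividing the hours by twenty leaves a tenth of the original bill, which is the 10x savings Huang quotes.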

David Shannon
Analyst

Sure.

Jensen Huang
CEO, NVIDIA

And it's not unusual to see this kind of ROI for accelerated computing. So I would encourage all of you, everything that you can accelerate, to accelerate, and then once you accelerate it, run it with GPUs. And so that's the instant ROI that you get by acceleration. Now, beyond that, the generative AI conversation is in the first wave of gen AI, which is where the infrastructure players like ourselves and all the cloud service providers put the infrastructure in the cloud so that developers could use these machines to train the models or fine-tune the models, guardrail the models, so on and so forth.

And the return on that is fantastic because the demand is so great that every $1 they spend with us translates to $5 worth of rentals. And that's happening, you know, all over the world, and everything is all sold out. And so the demand for this is just incredible. Some of the applications we already know about, of course, are the famous ones: OpenAI's ChatGPT, or GitHub Copilot, or the code generators that we use in our company. The productivity gains are just incredible.

You know, there's not one software engineer in our company today who doesn't use code generators, either the ones that we build ourselves, for CUDA, or USD, which is another language that we use in the company, or Verilog, or C and C++. And so I think the days of every line of code being written by software engineers, those are completely over. And the idea that every one of our software engineers would essentially have companion digital engineers working with them 24/7, that's the future. And so, you know, the way I look at NVIDIA, we have 32,000 employees, but those 32,000 employees are surrounded by, you know, hopefully 100X more digital engineers.

David Shannon
Analyst

Sure.

Jensen Huang
CEO, NVIDIA

Yeah.

David Shannon
Analyst

Sure. Lots of industries embracing this.

Jensen Huang
CEO, NVIDIA

Yeah.

David Shannon
Analyst

What cases, use cases, industries are you most excited about?

Jensen Huang
CEO, NVIDIA

In our company, we use it for computer graphics. We can't do computer graphics anymore without artificial intelligence. We compute one pixel, and we infer the other thirty-two. I mean, it's incredible. And so we hallucinate, if you will, the other thirty-two, and it looks temporally stable, it looks photorealistic, and the image quality is incredible. The performance is incredible. And the amount of energy we save: computing one pixel takes a lot of energy; that's, you know, computation. Inferencing the other thirty-two takes very little energy, and you can do it incredibly fast. One of the takeaways there is that AI isn't just about training the model. That's just the first step. It's about using the model. And so when you use the model, you save enormous amounts of energy and enormous amounts of processing time.
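The pixel arithmetic can be made concrete: rendering one pixel in every thirty-three and inferring the rest means only ~3% of pixels take the expensive render path. A toy energy model (the per-pixel costs below are assumptions, not NVIDIA figures; only the 1:32 ratio comes from the talk):

```python
# Toy energy model for "compute 1 pixel, infer the other 32" upscaling.
RENDER_COST = 100.0  # energy units to fully render one pixel (assumed)
INFER_COST = 1.0     # energy units to infer one pixel (assumed)

def frame_energy(total_pixels, rendered_fraction):
    rendered = total_pixels * rendered_fraction
    inferred = total_pixels - rendered
    return rendered * RENDER_COST + inferred * INFER_COST

pixels = 3840 * 2160  # a 4K frame
full = frame_energy(pixels, rendered_fraction=1.0)
hybrid = frame_energy(pixels, rendered_fraction=1 / 33)
print(round(full / hybrid, 1))  # -> 25.0 under these assumed costs
```

With these assumed per-pixel costs, the hybrid frame uses 25x less energy; the exact factor depends entirely on the render-versus-infer cost ratio, but the shape of the saving is the point.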

So we use it for computer graphics. We, if not for AI, we wouldn't be able to serve the autonomous vehicle industry.

If not for AI, the work that we're doing in robotics, digital biology... just about every tech bio company that I meet these days is built on top of NVIDIA, and so they're using it for data processing or generating proteins or for new-

David Shannon
Analyst

That seems like a super exciting space.

Jensen Huang
CEO, NVIDIA

Oh, it's incredible.

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

Yeah. Small molecule generation, virtual screening. I mean, just that whole space is gonna get reinvented for the very first time, with computer-aided drug discovery because of artificial intelligence.

Incredible work being done there.

David Shannon
Analyst

Yeah. Talk about competition, talk about your competitive moat. There's certainly groups of public and private companies looking to disrupt your leadership position. How do you think about your competitive moat?

Jensen Huang
CEO, NVIDIA

First of all, I would say several things are very different about us. The first thing is to remember that AI is not about a chip. AI is about an infrastructure. Today's computing is not: build a chip, and people come buy your chips and put them into a computer. That's really kind of 1990s. The way that computers are built today, if you look at our new Blackwell system, we designed seven different types of chips to create the system. Blackwell is one of them. And-

David Shannon
Analyst

Can I have you talk about Blackwell?

Jensen Huang
CEO, NVIDIA

Yeah.

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

The amazing thing is, when you want to build this AI computer, people say words like supercluster, infrastructure, supercomputer for good reason. Because it's not a chip. It's not a computer per se. And so we're building entire data centers. If you ever look at one of these superclusters, imagine the software that has to go into it to run it. There is no Microsoft Windows for it. Those days are over. All the software that's inside that computer is completely bespoke. Somebody has to go write that.

So the chips, that supercomputer, that supercluster, and all the software that goes into it, it makes sense that the same company designs them all, because it'll be more optimized, more performant, more energy efficient, more cost effective. And so that's the first thing. The second thing is, AI is about algorithms, and we're really, really good at understanding what the algorithm is, what the implication is for the computing stack underneath, and how to distribute this computation across millions of processors, run it for, you know, days on end, with the computer being as resilient as possible, achieving great energy efficiency, getting the job done as fast as possible, so on and so forth. And so we're really, really good at that.

Then lastly, in the end, AI is computing. AI is software running on computers. We know that the most important thing for computers is installed base: having the same architecture across every cloud, from on-prem to the cloud, and having the same architecture available whether you're building it in the cloud, in your own supercomputer, or trying to run it in your car or some robot or some PC. Having that same identical architecture that runs all the same software is a big deal.

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

It's called installed base. And so the discipline that we've had for the last thirty years has really led to today, and it's the reason why the most obvious architecture to use if you were to start a company is to use NVIDIA's architecture, because we're in every cloud, we're anywhere you'd like to buy it, and whatever computer you pick up, so long as it says NVIDIA inside, you know you can take the software and run it.

David Shannon
Analyst

Yeah. You're innovating at an incredibly fast pace. I want you to talk a little bit more about Blackwell.

Four times faster on training, thirty times faster on inference than its predecessor, Hopper. You know, it just seems like you're innovating at such a quick pace. Can you keep up this rapid pace of innovation? And when you think about, you know, your partners, how do your partners keep up with the pace of innovation you're delivering?

Jensen Huang
CEO, NVIDIA

The pace of innovation, our basic methodology is to take... Because remember, we're building an infrastructure. There are seven, seven different chips. Each chip's rhythm is probably, at best, two years.

At best, two years. We could give it a midlife kicker every year, but architecturally, if you're coming up with a new architecture every two years, you're running it at the speed of light, okay? You're running insanely fast. Now, we have seven different chips, and they all contribute to the performance, and so we could innovate and bring a new AI cluster, a supercluster, to the market every single year that's better than the last generation because we have so many different pieces to work around. And the benefit of performance at the scale that we're doing, it directly translates to TCO. And so when Blackwell is three times the performance, for somebody who has a given amount of power, say one gigawatt, that's three times more revenues.

That performance translates to throughput, and that throughput translates to revenues, so for somebody who has a gigawatt of power to use, you get three times the revenues. There's no way you can give somebody a cost reduction or discount on chips to make up for three times the revenues. And through the ability to deliver that much more performance by integrating all these different parts and optimizing across the whole stack and across the whole cluster, we can deliver better and better value at much higher rates. The opposite of that is equally true: for iso-power, you get three times the revenues; for iso-spend, you get three times the performance, which is another way of saying cost reduction.

And so we have the best perf per watt, which is your revenues. We have the best perf per TCO, which means your gross margins. And so we keep pushing this out to the marketplace. Customers get to benefit from that, not just once every two years, and it's architecturally compatible, so the software you developed yesterday will run tomorrow. The software you develop today will run across your entire installed base, so we can run incredibly fast. If every single architecture were different, then you couldn't do this. It takes a year just to cobble together a system.
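The iso-power / iso-spend point is one equation read two ways: at fixed power, revenue scales with throughput per watt; at fixed spend, cost per unit of work falls by the same factor. A small sketch with hypothetical numbers (only the 3x generational gain is from the talk; the throughput, price, and power figures are assumptions):

```python
def revenue_at_fixed_power(power_mw, tokens_per_mw_hour, price_per_token, hours):
    """Iso-power: revenue = power x throughput-per-watt x price x time."""
    return power_mw * tokens_per_mw_hour * price_per_token * hours

# Hypothetical numbers; only the 3x generational perf/watt gain is from the talk.
old_perf, new_perf = 1.0e9, 3.0e9   # tokens per MW-hour (assumed)
power, price, hours = 1000.0, 1e-6, 1.0  # a 1 GW site, $ per token (assumed)

old_rev = revenue_at_fixed_power(power, old_perf, price, hours)
new_rev = revenue_at_fixed_power(power, new_perf, price, hours)
print(new_rev / old_rev)  # -> 3.0: 3x revenue at iso-power

# Iso-spend: the same budget buys 3x the throughput, so cost per token drops ~3x.
print((1 / new_perf) / (1 / old_perf))  # -> ~0.333
```

The same 3x shows up as either three times the revenue for a power-limited operator or a third of the cost per token for a budget-limited one, which is the duality the answer describes.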

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

Because we built everything together, the day we ship it to you... and, you know, it's pretty famous, somebody tweeted that nineteen days after we shipped the systems to them, they had a supercluster up and running. Nineteen days. You can't do that if you were cobbling together all these different chips and writing the software. You'd be lucky if you could do it in a year.

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

I think our ability to transfer our innovation pace onto customers getting more revenues, getting better gross margins, that's a fantastic thing.

David Shannon
Analyst

The majority of your supply chain partners operate out of Asia, particularly Taiwan.

Jensen Huang
CEO, NVIDIA

Yeah.

David Shannon
Analyst

Given what's going on geopolitically, how are you thinking about that as you look forward?

Jensen Huang
CEO, NVIDIA

Yeah, the Asia supply chain, as you know, is really, really sprawling and interconnected. People think of GPUs as a chip, you know, because a long time ago, when I announced a new generation of chips, I would hold up the chip, and that was the new GPU. NVIDIA's new GPU is 35,000 parts, weighs 80 pounds, you know, consumes 10,000 amps. When you rack it up, it weighs 3,000 pounds. And so these GPUs are so complex, it's built like an electric car, you know, with components like an electric car. And so the ecosystem is really diverse and really interconnected in Asia. We try to design diversity and redundancy into every aspect wherever we can.

The last part of it is to have enough intellectual property in our company, in the event that we have to shift from one fab to another, we have the ability to do it. Maybe the process technology is not as great.

Maybe, you know, we won't be able to get the same level of performance or cost, but we will be able to provide the supply, and so I think in the event anything were to happen, we should be able to pick up and fab it somewhere else.

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

We, you know, we're fabbing at TSMC because it's the world's best.

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

It's the world's best not by, you know, a small margin-

David Shannon
Analyst

Yeah

Jensen Huang
CEO, NVIDIA

... it's the world's best by just an incredible margin. And so, not only the long history of working with them, the great chemistry, their agility, the fact that they could scale. You know, remember, NVIDIA's revenue last year had a major hockey stick. That major hockey stick wouldn't have been possible if not for the supply chain responding.

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

And so the agility of that supply chain, including TSMC, is incredible. In just less than a year, we've scaled up CoWoS capacity tremendously, and we're gonna have to scale it up even more next year, and even more the year after that. But nonetheless, their agility and their capability to respond to our needs is just incredible. And so we use them because they're great, but if necessary, of course, we can always bring up others.

David Shannon
Analyst

Yeah. Company's incredibly well-positioned. A lot of great stuff we've talked about. What do you worry about?

Jensen Huang
CEO, NVIDIA

Our company works with every AI company in the world today. We're working with every single data center in the world today. I don't know one data center, one cloud service provider, one computer maker we're not working with. And so what comes with that is an enormous responsibility, and we have a lot of people on our shoulders, and everybody's counting on us, and you know, demand is so great that delivery of our components and our technology and our infrastructure and software is really emotional for people.

Because it directly affects their revenues, it directly affects their competitiveness. And so we probably have more emotional customers today than ever, and deservedly so. If we could fulfill everybody's needs, then the emotion would go away, but it's very emotional, it's really tense. We've got a lot of responsibility on our shoulders, and we're trying to do the best we can. And here we are ramping Blackwell, and it's in full production. We'll ship in Q4 and start scaling in Q4 and into next year. And the demand for it is so great, and everybody wants to be first-

David Shannon
Analyst

Yeah

Jensen Huang
CEO, NVIDIA

... and everybody wants to be the most, and so the intensity is really, really quite extraordinary.

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

You know? And so, I think it's fun to be inventing the next computer era. It's fun to see all these amazing applications being created.

David Shannon
Analyst

Yeah.

Jensen Huang
CEO, NVIDIA

It's incredible to see robots walking around. You know, it's incredible to have these digital agents coming together as a team, solving problems in your computer. It's amazing to see the AIs that we're using to design the chips that will run our AIs. All of that stuff is incredible to see. The part of it that is just really intense is just, you know, the world on our shoulders, and so-

David Shannon
Analyst

Sure

Jensen Huang
CEO, NVIDIA

... so less sleep is fine and, you know, three solid hours, that's all we need.

David Shannon
Analyst

Good for you. I need more than that. I could spend another half hour. Unfortunately, we've got to stop. Jensen, thank you very much. Thank you for being here.

Jensen Huang
CEO, NVIDIA

Thank you

David Shannon
Analyst

... and sharing with us today.

Jensen Huang
CEO, NVIDIA

Thank you. Appreciate it. Thank you.
