
Goldman Sachs Communacopia & Technology Conference

Sep 5, 2023

Toshiya Hari
Managing Director, Goldman Sachs

Okay, great. Good afternoon, everyone. Thank you so much for joining us. As expected, standing room only. My name is Toshiya Hari. I cover the semiconductor space at Goldman Sachs. Very pleased and very honored to have Manuvir Das, VP of Enterprise Computing. Manuvir, he leads the team working to democratize AI by bringing full stack accelerated computing to every enterprise customer. He has more than 25 years of experience in the technology industry, and prior to joining NVIDIA in 2019, he held a range of senior roles at both Dell and Microsoft. At Microsoft, I believe you helped to create the Azure platform. Amazing. Thank you so much for coming.

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah. Thank you, Toshiya. It's an honor to be here, and thank you to everybody for taking the time. I'm just a small cog in the wheel, but happy to represent NVIDIA here.

Toshiya Hari
Managing Director, Goldman Sachs

That's awesome. So, Manuvir, before joining NVIDIA in 2019, again, you had a very successful career at both Microsoft and Dell. What initially attracted you to NVIDIA? How has the experience at the company played out so far relative to your original expectations? I think I know the answer to that question, but I'll ask it anyway. And as the head of enterprise computing, how do you spend your time? What are some of the key priorities for you?

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah, that's, well, that's three great questions, so let me do them one by one. I think, you know, the reason I joined NVIDIA, Toshiya, is because I grew up in Microsoft, you know, the ultimate software platform company. We understood that a platform is only as good as the applications that are developed on it. We had a big focus on developers. In fact, we had a whole thing called the developer division, where we focused on developers. And then I had my first conversation with Jensen, who is the CEO of a chip company, and all he would talk to me about was developers. And I really didn't get it.

And then I watched his keynote at the GTC conference from the previous year, and here's the CEO spending three hours talking about use case after use case of accelerated computing that developers can embrace and build applications for, right? And it was an eye-opening thing for me. And I guess what I realized at the time was that this company had this vision that there's a new form of computing coming, accelerated computing, but it's not a free thing. You don't just take applications and move them to accelerated computing. You have to work on it one domain at a time, and that means you need developers to embrace it, right? And that's the journey this company had been on, really, for 25 years already by then, you know. And you could see it then.

We were on the cusp of about 1 million developers who were using the NVIDIA platform. That number has now reached 4 million, right? So it's been remarkable. So that's the reason I came to NVIDIA, because I realized that these people are really seeing the world at the cusp of something. It's not about the chips, it's about the whole stack. And then to your second question, I would say what I've experienced differently since I got here is that NVIDIA has grown up a lot, because what NVIDIA realized was that, at the same time... So you have new technology, right, that is groundbreaking, but people adopt it in ways that they're familiar with.

There's a whole ecosystem in the enterprise for how you adopt technology, whether it's, you know, hardware manufacturers like a Dell or an HPE or software platforms like a VMware or an SAP or service integrators, you know, like a Deloitte or an Accenture. So we've spent a lot of time the last few years at NVIDIA really getting that flywheel going. That's how we've got to this point now. I think that's what's been a little different for me. Then I think your third question is about what I do.

Toshiya Hari
Managing Director, Goldman Sachs

Key roles, priorities, how you spend your time-

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah. So I think, you know, at NVIDIA, we have a big team of people who do actually all the work. I'm just a talking head, but I think I do two things, right? One is I work with the leadership team on our strategy, how we actually approach this opportunity, the enterprise, how we think about it. And then the second thing is this ecosystem I talked about. You know, all the announcements you see with VMware, with Snowflake, et cetera. So I spend a lot of my time working with this ecosystem so that everybody can win together. You know, the customers, the partners in the ecosystem, and then, of course, NVIDIA coming along.

Toshiya Hari
Managing Director, Goldman Sachs

That's great. Thank you for that. Jensen talks about the iPhone moment-

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah

Toshiya Hari
Managing Director, Goldman Sachs

- of AI arriving. As sort of outsiders, we were exposed to ChatGPT and the likes-

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah

Toshiya Hari
Managing Director, Goldman Sachs

maybe late last year, maybe earlier this year, depending on who you are, where you stand. I'm sure you saw this way earlier, given what you do.

Hopefully.

Can you describe what the aha moment was for you and the broader NVIDIA team as it pertains to-

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah

Toshiya Hari
Managing Director, Goldman Sachs

- this big movement?

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah, I think I'll start by saying, firstly, that I think for the world at large, really, ChatGPT was the aha moment, right? And NVIDIA works very closely with OpenAI, has for some time. We're very proud of those folks. And really, every customer conversation that we have about, you know, LLMs and generative AI, the first question we ask them is, "Have you looked at the toolkit that OpenAI provides? And if that's a great starting point for you, just go with that," right? And they've done a great body of work, right? So I just want to put that out there. I think for us, you know, the aha moment came, I would say, probably about five years before that.

Probably, yeah, September of 2019 is when we put out the first version of a library called Megatron that NVIDIA built. That was really the framework for doing this kind of training for large language models. The way to think about it is, you know, for years, we'd worked on all these AI use cases, which are based on what is called supervised learning. So basically, you teach the model by giving it human-generated examples, right? Here's a photograph, I'm telling you there's a cat in it. Here's a photograph, I'm telling you there's a dog in it, right?

And so humans have to go through the process of creating a lot of this data set that is fed into training, and that creates a bottleneck, and that creates this barrier to entry... And then the aha was this advent of unsupervised learning, which is, if you think about it, a lot of how people learn, too, right? You don't always learn just by sitting in a college class with a professor teaching you. You learn by just observing. And what's happening with LLMs is it's unsupervised learning, where you put a lot of data in front of it, and the model just learns on its own, right? So that was really the aha moment. And so we built this framework on the one hand, called Megatron, to do the training.

And then secondly, we realized that you needed different circuitry in the hardware, what we call Transformer circuitry, to really accelerate this form of learning. And so in our roadmap for creating GPUs, we started putting that Transformer circuitry in, right? And so that happened years before ChatGPT. Yeah.
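The distinction drawn here between supervised and unsupervised learning can be sketched in a few lines of code. In supervised learning a human writes every label; in the self-supervised training used for language models, the "labels" are simply the next words of the raw text itself. The corpus and the bigram "model" below are hypothetical toys for illustration only, not NVIDIA's Megatron:

```python
from collections import Counter, defaultdict

# Supervised learning: a human must hand-label every training example.
supervised_examples = [
    ("photo_001.jpg", "cat"),  # a human wrote this label
    ("photo_002.jpg", "dog"),  # and this one
]

# Unsupervised (self-supervised) learning: training pairs fall out of raw
# text automatically -- each word's "label" is just the word that follows it.
corpus = "the cat sat on the mat the dog sat on the rug".split()
pairs = list(zip(corpus, corpus[1:]))  # (context, next-word) pairs, no human labeling

# A toy bigram "language model": count which word follows which.
counts = defaultdict(Counter)
for context, nxt in pairs:
    counts[context][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict_next("sat"))  # -> "on": "sat" is always followed by "on" here
```

The point is the asymmetry in cost: the supervised list above needs a human per example, while the `pairs` list scales with however much raw text you can feed in.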

Toshiya Hari
Managing Director, Goldman Sachs

Got it.

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah.

Toshiya Hari
Managing Director, Goldman Sachs

In terms of customer engagements, being the leader of enterprise computing, you must interact with thousands, tens of thousands of big customers and potential customers who are either already deploying AI or looking for ways to leverage the technology. Can you give us a feel as to how customer engagements have evolved since the beginning of the year? And what are your customers coming to you for today?

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah, I think I'll start by saying, I think we have about 40,000 companies working with us on our technology. So no, I do not actually meet with all of them. That'd be a bit challenging. But there has been a dramatic shift, Toshiya, and I would put it this way, right? I think in all the years we've been working with companies to date, before this year, the customer conversation would always be about: Well, what's the use case that I, as the company, should care about, I, as a customer, should care about? And we'd be pitching the use case. You know, NVIDIA really, in many ways, made the AI market, all these different use cases. We created them vertical by vertical.

And depending on which industry you're in, we'd show you why fraud detection, for example, would be a good use case for you. So a lot of the conversation would be about that. I would say this year, when customers come to see us, now they already know what the use case is, right? It's the intelligent assistant helping the employee, it's the customer interaction, what have you. And so the conversation is actually about: Okay, NVIDIA, what do I need to know and how can you help me? And who can I work with to implement this, right? And I think this is a good point to point out that NVIDIA, by DNA, we're a platform company, right? We work with the ecosystem. We very rarely produce direct solutions for the customer ourselves, right? We encourage our ecosystem to do that.

So often the conversation is, let's educate you on the landscape, on the, the technology stack, and then let's show you who you can work with to implement, right? And we're underneath everybody.

Toshiya Hari
Managing Director, Goldman Sachs

Got it. In terms of the long-term market potential for AI, it's extremely difficult from where we stand to predict how big this could be, and perhaps it's challenging for you as well. You and Jensen and the broader team did throw out a couple of numbers at the Analyst Day. You know, $300 billion in chips and systems, $150 billion in NVIDIA AI Enterprise software, and another $150 billion in Omniverse Enterprise software. When you guys construct something like that, a long-term TAM, how do you go about it? Is it bottom-up? Is it top-down? Is it a bit of both? If you can kinda share that with us, that'd be helpful.

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah, I think this is a great topic, and maybe if it's okay, I'll take a couple of minutes to unpack it, because I think it's pretty important, right? So firstly, we don't do TAM, right? So when we talk about all these numbers, we're really talking about the long-term market opportunity that we see, and then it'll play out over the years to come. I think if you go back to the basics, the fundamental thing we see going on here, Toshiya, is that there is a new form of computing that is beginning its journey, and that's what we call accelerated computing, right? And the whole point here is, if you think about the traditional computing systems based on CPU computing, what has changed over the decades is simply the location.

You know, you're doing it on-prem, you're doing it in the cloud, you're doing it on your phone, but it's essentially the same style of computing. And as the world has evolved, more and more of the function of companies is being done in computing, which means you need more and more computing in the world, which means you need more data centers, you need more energy, you need more horsepower, and it's just not sustainable. It's just not on a sustainable trajectory, right? And what we saw was accelerated computing, which is this way where the same footprint can do 10x the work, 100x the work. That was going to be the only way.

So the simplest way of thinking about NVIDIA is, we made this big bet, you know, it's been decades in the making, that the way forward is gonna be accelerated computing, okay? And so how do we think about our market opportunity at a fundamental level? What we say is, of the footprint that is out there, there's about $1 trillion of spend on data centers. That footprint that already exists, which is traditional computing, plus all the growth that is gonna happen in the years ahead, is all gonna shift from traditional to accelerated computing, and we've set ourselves up at NVIDIA to be at the forefront of that shift. So all our analysis of our market opportunity starts with that, that there's gonna be this dramatic secular shift. We believe in it very strongly.

Now we have the ultimate killer app for it, if you will, with generative AI, and that is the beginning of our market opportunity. And then the numbers we put out, a lot of which is bottom-up analysis, come from asking ourselves: Okay, if I break that down, how much of that is systems and hardware, right? And you do a refresh cycle, and that's where the $300 billion came from, and then the software opportunity. You know, the interesting thing for NVIDIA is, for a long time... We've invested a lot in software. I would say probably 80% of our R&D over the last decade has been in software, not in hardware. But that software has just come along with the hardware.

The reason for that is that a lot of the early shift has been in the developer ecosystem, the researchers, the R&D, where you need the software, but it's sort of okay if I go through some pain adopting the software, if you will. But we're now moving into the world of production. We're moving into the world where enterprise companies are betting their business on the AI models that are running under their applications, and so you need enterprise-grade software. The reason we put out the other number for the software is because we see, incrementally for NVIDIA in the years ahead, this big opportunity that we are actually the operating system of AI. We are the runtime of AI.

When you have a model and you take it with you, and you run it under all your applications, that model needs to be running at three in the morning, right? It needs to be a supported, enterprise-grade thing. And so NVIDIA is the provider of that runtime, no matter where you are. And so that's where that incremental software opportunity comes from for us. So on the one hand, we take the secular shift. We believe this secular shift is being driven by NVIDIA, so it's a big opportunity for us. And then on the other hand, bottom up, we look at these individual parts of the stack, if you will, to add it up.

Toshiya Hari
Managing Director, Goldman Sachs

Got it. So it sounds like you still feel pretty good about those numbers, and-

Manuvir Das
VP of Enterprise Computing, NVIDIA

I think we do.

Toshiya Hari
Managing Director, Goldman Sachs

More about timing.

Manuvir Das
VP of Enterprise Computing, NVIDIA

I think we do, and I think we realize that it's a generational change. It's a multi-year transition that the industry is gonna go through, and we're here for the long haul. Yeah.

Toshiya Hari
Managing Director, Goldman Sachs

Got it. Got it. The CSPs are very sophisticated. They're informed. In your field of enterprise, I'm sure there's a pretty broad range in terms of sophistication. You guys are doing quite a bit to democratize AI. I think, you know, you've got frameworks, you've got, you know, partnerships with the likes of Snowflake and VMware, Hugging Face. Maybe talk about the significance of those partnerships and what you're doing to make it easier for your customers to deploy AI.

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah, you know, this is the thing I work on and think about every day, Toshiya. I think there's a couple different aspects to the democratizing, right? I think the first thing is, we need to really understand the power of large language models, or LLMs, in democratization, because I think we all understand, from a use case point of view, the value of this, right? That is, we see it every day. We all use ChatGPT, we see generative AI generating images and all of that every day. But I think there's a very different way of thinking about it, right? The thing about AI is it's a new form of computing, where basically what you do is you have a corpus of data, you train a model with that corpus of data, it's learned, and now you can ask it questions.

That sounds great, but the challenge for years has been that first, the front-end part of the process has a very high barrier to entry because you have to find all the data, the right data, you have to curate it, you have to go through this whole training process before you get a usable model, right? The beautiful thing about large language models is these things called pre-trained foundation models, right? Where some smaller set of people have done the work with hundreds of millions of dollars and many, many servers and large amounts of data and trained up these models, and now they're ready to use, right? And you start from there, you fine-tune with your own data, and you use the model. So the whole front end of the process has been done for you, right? So the barrier to entry...

You know, when we work with customers, we certainly have the AI unicorns that are sitting there doing large amounts of training. And then in every industry, you have a few companies that say, "I'm at the forefront of this. I'm gonna train my own giant models," right? Like Bloomberg is a great example. But I think for the bread-and-butter enterprise customer, a great place to start now is: Let me pick up one of these pre-trained foundation models, right? Whether it's somebody like OpenAI or it's, you know, Llama 2 from Meta or NVIDIA's models or what have you. And so it's the ultimate democratization, because that front end of the process, which is so difficult to do, you can now basically just jump ahead, right?

And just start with the output of that and do your fine-tuning and your inference and embed it in your application. So I think at the technology level, that's one kind of democratization, right? That we work on and that we see. And then the other one is at the ecosystem level. So for example, we made an announcement with VMware very recently. What was that about? What that was saying is, if you're one of these enterprise companies where you wanna jump in at that sort of 75% point, where you take a pre-trained model and now you just embed it in your application, well, what do you need? You need to be able to take the model, put it in your briefcase, and take it wherever your application is. That's what you need, right? You need this, VMware calls it Private AI.
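The "jump ahead" workflow described here, starting from a frozen pre-trained model and fine-tuning only a small piece on your own data, can be sketched in miniature. Everything below is a hypothetical toy: a hand-written featurizer stands in for the frozen foundation model, and a tiny logistic-regression head is the only part that gets trained:

```python
import math

def pretrained_features(text):
    """Stand-in for a frozen pre-trained model: text -> feature vector.
    Real foundation models learn these features; here they're hand-made."""
    words = text.lower().split()
    return [
        sum(w.endswith("ing") for w in words) / max(len(words), 1),  # "-ing" density
        sum(c.isdigit() for c in text) / max(len(text), 1),          # digit density
    ]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fine_tune(examples, epochs=300, lr=1.0):
    """Train only the small task head; the 'foundation model' stays frozen."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for text, label in examples:
            x = pretrained_features(text)
            g = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - label  # log-loss gradient
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, text):
    x = pretrained_features(text)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# A handful of domain examples: is this line about an invoice (1) or not (0)?
domain_data = [
    ("invoice 1234 payment due", 1),
    ("invoice 9876 overdue notice 55", 1),
    ("meeting running late this morning", 0),
    ("shipping planning singing today", 0),
]
w, b = fine_tune(domain_data)
```

The expensive front end of the process (learning the features) is skipped entirely; only the lightweight head at the back end is trained on the customer's handful of examples, which is the economics the passage is describing.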

Toshiya Hari
Managing Director, Goldman Sachs

Mm.

Manuvir Das
VP of Enterprise Computing, NVIDIA

Basically, how am I empowered so that if I just pick a model, I have the right runtime, and I use all the tools I know. I already have a farm of servers deployed with VMware. My IT team already knows how to manage that, right, how to scale that, all of that, right? I work with people who do my big transformations and deployments. I have a contract with Deloitte. I have a contract with Accenture, right? I have certain software platforms I use. And then the ultimate level of that democratization is, for example, the work we do with ServiceNow that we've talked about, right?

The ultimate democratization of AI is when we work with an enterprise company that does not know they're doing AI, that does not know they're working with NVIDIA, because they just upgraded to the newer version of ServiceNow that is infused with the AI work that NVIDIA and ServiceNow did together. And from their point of view, they just got a new functionality, right?

Toshiya Hari
Managing Director, Goldman Sachs

Mm.

Manuvir Das
VP of Enterprise Computing, NVIDIA

So, to put it another way, we are a platform company. We believe in the network of networks, and for us, the ultimate democratization of AI, and where NVIDIA is going, is the ecosystem of applications... that are powered by NVIDIA's platform and this network of customers that all of these companies individually have, that we don't have a sales team for, but they have their sales teams for, right? And they're just taking NVIDIA technology to all of their customers, and that's really the journey that we're on.

Toshiya Hari
Managing Director, Goldman Sachs

That's great. DGX Cloud, I had a couple of questions on that. I think you recently introduced the concept of DGX Cloud. To level set the audience, you know, can you describe what DGX Cloud is? How did it come together? How does it work? One of the questions I often get from investors is, do you actually end up competing with your customers? So if you can address that concern-

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah.

Toshiya Hari
Managing Director, Goldman Sachs

That would be great.

Manuvir Das
VP of Enterprise Computing, NVIDIA

So I'll start by saying that we have no intention of competing with the CSPs. In fact, they are our best partners, right? Among our best partners, and we do a lot of business together, and we work with a lot of customers together mutually with all of the CSPs, right? I think the simplest way to explain it would be, if you think about our server system that we built called DGX, right? That we've worked on for many years now, on premises. You know, accelerated computing is a new kind of paradigm, okay? It's not just about chips, it's not just about the systems, it's about the networking, the software, everything that goes into it.

And when we started this journey, you know, the system manufacturers, the Dells, the HPs of the world, they sold a certain kind of system, and their customers expected to procure a certain kind of system from these manufacturers, which was not really accelerated computing. And so we built our own system to sort of show the way, to be the scout team, right? And we established DGX with all these use cases, and that helped the system manufacturers understand that this new computing model is actually interesting and there is a market here. And as soon as they realized there was a market, we actually took the secret sauce of DGX, we actually took the internals of it, and we productized it into an engineered solution that we gave to the system manufacturers.

Toshiya Hari
Managing Director, Goldman Sachs

Mm.

Manuvir Das
VP of Enterprise Computing, NVIDIA

And we said, "You go sell your own systems now. Build your own systems with your own IP, sell them. We're actually very happy when you sell one of your systems to the customer, because we are not gonna scale this business with DGX." DGX is a scout team, right? And we are always innovating and putting the next stuff into it. Now, the way we came up with DGX Cloud was exactly the same thought process, but in the cloud, because we see more and more customers doing their work in the cloud, especially with AI. And so it's the same journey, where what we said to the CSPs was: Within each of your clouds, how about we mutually create footprints of NVIDIA DGX technology, where, again, we are putting in footprint for the next step in computing, right?

And we need to do that because the networking, the storage, all of this is quite intertwined. Because the computer is not a single computer now, it's a whole set of servers in the data center. So we work with them, we put this footprint in, customers come in, they experience the latest and greatest advances, and our expectation, in fact, is that the CSPs themselves watch that in operation and say, "Okay, thank you very much, NVIDIA. Now I'm scaling that out. I've got it." Right? And that's a really good outcome for everybody, for the customer, for the CSP and everyone else, you know. Jensen makes a statement that we wanna be the best sales team for the CSPs-

Toshiya Hari
Managing Director, Goldman Sachs

Mm.

Manuvir Das
VP of Enterprise Computing, NVIDIA

Just like we have been for the system manufacturers, right?

Toshiya Hari
Managing Director, Goldman Sachs

Right.

Manuvir Das
VP of Enterprise Computing, NVIDIA

So I think that's how we see it.

Toshiya Hari
Managing Director, Goldman Sachs

Okay, that makes sense. It seems like the response so far has been very positive?

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah. I think it's a great relationship. We are working with all the CSPs, and they're keen to expand the footprint, yeah.

Toshiya Hari
Managing Director, Goldman Sachs

Okay, got it. I mean, you talked about DGX and how you work with your systems partners. Definitely wanna go there. So you recently introduced the L40S GPU. Maybe spend a couple of minutes describing the significance of the L40S and how the systems business or the OEM business is going, because I know that's an important route-

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah.

Toshiya Hari
Managing Director, Goldman Sachs

for enterprises as well.

Manuvir Das
VP of Enterprise Computing, NVIDIA

I think, you know, we've realized now that we've got to the point with accelerated computing where all the system manufacturers, they have their mainstream product line, and then they've eventually reached the point where they have a product line for accelerated computing with NVIDIA. And the L40S is really saying, those two worlds don't have to be separate worlds anymore. You know, your mainstream server line, every server can have a GPU in it, and that's where we think data centers are headed. I wanna say this, Toshiya, you know, that if you ask me, what is the thing that makes NVIDIA special, okay, in how we build our GPUs? It's that we have one architecture. We have one programming model, based on CUDA.

And so what that means is that we see the market, and we're able to produce a family of GPUs for different situations, but they're all programmed the same way. So the developer ecosystem doesn't have to do different things. So why do we do the L40S? Okay, so you know about the Hopper generation, the H100. There's very high demand for that, right? And obviously, we're working on that, and there are certain ways in which the chip is packaged, you know, that system is packaged. The beauty of the L40S is, it's really good for that back end of the workflow I talked about, the fine-tuning, the inference. Of course, you can do training with it, but it's really good at the back end, and it's not constructed the same way, right?

It doesn't have the same requirements for the chip-making process as the Hopper family, right? So, it creates another channel of supply, if you will, of this kind of computing infrastructure. Everything in it is designed to fit into standard servers. You know, the form factor, the power consumption, and all of that. So now we're on this journey. We just announced, you know, with Dell, with HPE, with Lenovo, a number of server manufacturers. I think we have 100-odd systems coming online, where for customers we say: As you are tech refreshing your data centers going forward, you know, what do you rack and stack?... Right, what's the standard server you rack and stack now? You put one of these servers with a GPU in it.

And the reason for that is because there are so many different use cases you can do, whether it's AI or data processing, and there's a plethora of use cases, and it just saves you money, because, you know, one of these servers can do so much more than a single CPU server, right? So it's all adding up to this secular movement. You step back and you look inside a data center today: how many of the servers in a typical data center today have a GPU in them? It's a single-digit percentage, right? And what we expect to see, a few years down the road, is that the majority of those servers will have GPUs in them. Getting the right systems with the L40S and processors like that is one part of it.

Working with the developer ecosystem, these 4 million developers to build domains and move more and more domains to accelerated computing, so there's the different use cases. I mean, this is the journey, right?

Toshiya Hari
Managing Director, Goldman Sachs

I think you guys have been extremely dominant in training. I think the competitive setup, or the landscape, in inference is a little bit more nuanced.

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah.

Toshiya Hari
Managing Director, Goldman Sachs

Something like the L40S, it sounds like, will be effective in addressing the inference market.

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah, I think it's, of course, the hardware. You know, we're very proud of our chips-

Toshiya Hari
Managing Director, Goldman Sachs

Yeah.

Manuvir Das
VP of Enterprise Computing, NVIDIA

And we innovate on our chips in a very rapid fashion, you know, and we are standing on a body of work, right? It's the hardware we've developed over time, all the circuitry. But remember, we're 80% a software company, and our software and hardware teams work in conjunction, right? The reason why the H100 does what it does so well for deep learning and for LLM inference is because the software team at NVIDIA is at the forefront of figuring out what I actually want that chip to do, right? Which allows me to build the right chip, right? But with inference now, it's been clear to us for many years that the training leads up to the inference, right? So there's two movements going on, okay? On the one hand, you have to understand, training is not a one-off thing, okay?

Because what happens is you train a model, but a model is only as good as the data you train it on. And the reality is, as you're running your business, you're getting new data every day. So training is not a one and done. So just to be clear, right, that training will be done continuously. Any big company is gonna have what we call an AI factory, where you're basically doing training all the time. But clearly, this is all useful because you're using the models in your application for inference. So we've spent the last few years really designing the hardware, but especially the software stack, for inference, right? And we think that's a big opportunity going forward.

And as I said, every place where you're doing inference, there's only one runtime you can pick up and take with you in your briefcase, along with the model today, to run that model where you want to run it: in your own data center, in a colo, in a public cloud of your choice, in your Tier 2 CSP, on your workstation, on your PC. There's one runtime in the world that you can take with you to do this work anywhere, and that's the NVIDIA software runtime for inference, right? So it's both a hardware opportunity and a software opportunity.

Toshiya Hari
Managing Director, Goldman Sachs

That's really fascinating. I'm gonna pause here and see. I guess there's one person. Do we have mics in the room? Check that first. There you go.

Speaker 3

Thank you. First of all, congratulations on all your success. I'm struck by your comment about NVIDIA positioning itself as the operating system for AI applications. Given that, and given your support of OpenAI, what responsibility do you think companies like NVIDIA have to make certain that the AI is used in non-nefarious ways? And what kind of services do you offer enterprise-grade customers who care about, you know, the ability to audit results, and to make certain that the results being put out are, you know, non-troublesome?

Manuvir Das
VP of Enterprise Computing, NVIDIA

You know, it's a great question, and I like the fact that you used the word responsibility, because, you know, responsible AI, I think, is a thing, right? So we're a platform company, and our job is to provide foundational technology. So we think of this in two ways, right? The first way is, from the point of view of companies who are deploying AI, how do we empower them to run AI in the way that they want, right? So how do we make it portable? And this is why we produce the software stack that you can take with you, right? So you can run the AI in your building, on your premises; you have a compliance regime, right? Your data is sitting in certain places.

So instead of making you come to the AI, we bring the AI to you, right? So that's one part of it. The second part of it, in the hardware, we do work on confidential computing. Our latest generation of hardware has this built in, because if you think about it, the models are now the IP, right? The models are the software. So how do you protect the models when they are deployed and running? So we've done the work in the hardware to create confidential computing in this world of AI. So that's one aspect of it. On the tooling side, we provide software tools, like a technology we have called Guardrails, which allows you to control what the model actually does, what questions it should and shouldn't answer, how to control how much hallucination you get when you use the model, right?

So I think that's one part of it. The other aspect with responsible AI is it all starts with the data, right? The IP that is in your model actually came from the data. So where is the data coming from? How is it sourced? Are the right people who actually contributed the data getting the economic credit for producing that data? And so this is why we work with companies like Getty Images, Shutterstock, who have licensed content, right? And what we're doing as NVIDIA is, we're enabling them to create these models for generative AI based on those data sets that they have sourced responsibly, right?

And then, for example, we work with WPP, a company in the advertising realm, which is then able to reliably take the output of those models that have been built by Getty and Shutterstock, et cetera, knowing that when they get assets out of that, that they're using in their campaigns, there is complete knowledge as to where these came from, and that they were responsibly sourced, right? So I think that's the other aspect of it, which is: Where is the data coming from? Is it responsibly sourced? I think there's many aspects in general, and we are just one company in the ecosystem, but certainly, this is a big focus for NVIDIA.

Toshiya Hari
Managing Director, Goldman Sachs

Okay.

Speaker 4

Hi. Can you talk about the inference opportunity for NVIDIA as opposed to training? So what would your share of inference be maybe three years down versus your training share?

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah, that's a great question. You'll have noticed, generally, philosophically, we are not a company that thinks about, should I have 25% share of something or should I have 40% share of something? I think our view on this is very simple: most computers, servers, and workstations going forward will be in a position where they'll be used for inference, and we believe that we have been working on the best hardware and software stack for inference, and so that's my answer to you.

Toshiya Hari
Managing Director, Goldman Sachs

Great. I wanted to squeeze in a supply question. It's not the best place to end, but I wanted to squeeze it in. It's no secret that there's a supply-demand mismatch today. I guess, A, how significant is the supply-demand mismatch, to the extent you can quantify it? As the head of, you know, enterprise computing, how do you plan your business when supply is so tight in GPUs? And, at what point would you expect, you know, supply-demand to meet, if you will?

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah, that's a great question and an interesting one to end on. I would say, firstly, as you said, yes, we definitely see more demand than supply. You know, we've talked about that in our earnings. We've been racing to increase supply. I think our partners in the supply chain have done an exceptional job working with us to increase supply. We talked about the L40S just a minute ago, so we've also found ways to increase the supply in these other ways. I think it's a process we are in over multiple quarters in a year. I think the good thing is that our customers, on the one hand, have a journey to put new footprint into data centers.

You know, that takes multiple quarters of its own. Meanwhile, we are on this path to increase supply over the next few quarters. And so I think there's sort of a natural balance between those two things. I cannot sit here and tell you exactly when that's gonna happen, but I think we feel pretty good about the fact that we've taken the steps to increase our supply, and meanwhile, customers are on their journey, too, and we see this sort of happening in conjunction as we go. And as I said, it's a secular shift, right? It's not something we are focused on for just one quarter.

Toshiya Hari
Managing Director, Goldman Sachs

Sure. Sure. I guess in the last 60 seconds that we have, obviously, you're playing in this big and growing market. You're at the core of everything. Is there anything that we have missed, in this conversation or anything you wanna highlight to the group before we let you go?

Manuvir Das
VP of Enterprise Computing, NVIDIA

Yeah, I think the main thing I'll just point out is, if you zoom out enough, right? I know we are all focused on LLMs and generative AI. It's the killer application, but really what we are seeing is this: what's finally coming to fruition is the move to a new computing platform that the world really needed, which is accelerated computing, and large language models and generative AI are the killer app that is convincing everybody to adopt this platform. But as they adopt this platform, it's gonna have a much more far-reaching effect, because all your workloads, like data processing, that you normally have been doing on traditional computing, are going to be accelerated dramatically.

And in the same footprint, you're gonna be able to do 10x the amount of work, 100x the amount of work that you could do in that existing footprint today. And this is important not just for saving money, but for the energy footprint of the world. And so that's what we're on the cusp of, and that's what we are truly excited about as NVIDIA, and that's what we see as the opportunity going forward. And it's just a confluence of these two things, because you need the right application to really drive people to the new platform, and then you open up all the opportunities that the new platform provides, right? And so that's really what we are looking at going forward, that we are super excited about.

Toshiya Hari
Managing Director, Goldman Sachs

Amazing. Really enjoyed the conversation. Congratulations on everything, and thank you so much. Really appreciate the time.

Manuvir Das
VP of Enterprise Computing, NVIDIA

Thank you. Thank you for the time.

Toshiya Hari
Managing Director, Goldman Sachs

Thank you.
