
UBS’s 2025 Global Technology and AI Conference

Dec 2, 2025

Timothy Arcuri
Analyst, UBS

Good morning. We're gonna get started here. I'm Tim Arcuri. I'm the Semi and Semi Equipment Analyst here at UBS, and we're very pleased to have NVIDIA, pleased to have Colette Kress with us this morning. Before we begin, you have to read a,

Colette Kress
CFO, NVIDIA

I do. Okay. As a reminder, this discussion may contain forward-looking statements, and investors are advised to read our reports filed with the SEC for information related to risks and uncertainties facing our business.

Timothy Arcuri
Analyst, UBS

Perfect. Right now, Colette, there's basically two debates. One is whether there's an AI bubble, and two is the competition. I wanted to address these one by one. First, what is the market missing? When everyone talks about an AI bubble, what is the market missing versus what you see in your business?

Colette Kress
CFO, NVIDIA

Yeah, it's a very interesting discussion. A lot of words, really focusing on some very interesting thoughts regarding the supposed AI bubble. No, that's not what we see. What we see is two to three different major transitions happening in the market. We've talked about these transitions in history. First, let's not forget the need to transition to accelerated computing. Most workloads, most of the work done in the data center, have been done with CPUs for years. What our focus is on is transitioning that to GPUs. It's a necessary thing because there's just not gonna be any improvement that we can see from the other means of using CPUs. That's one of our first pieces.

When we think about our outlook of by the end of the decade, $3 trillion-$4 trillion worth of AI or just total data center infrastructure moving that together, probably about half of that is just focused in terms of the work, in terms of working on that transition. We're in the early parts of that. What you're seeing, for example, is the Hyperscalers, the very large CSPs as well. That is a very big part of the work that they are doing. You are seeing them work in terms of revising search, revising for recommender engines, and revising for the overall social media. This is a very big part of what we are seeing today.

There is also that transition that is going to be necessary for AI, including what you need for accelerated computing, and focusing on AI and agentic AI is moving in that piece. Keep in mind that it is only one part of what we see today, and we're gonna see that continue to grow through the rest of the decade as well.

Timothy Arcuri
Analyst, UBS

Of the three to four trillion dollars that, you know, Jensen talks about by 2030, that would include replacing all of the existing trillion dollars' worth of data center infrastructure?

Colette Kress
CFO, NVIDIA

Absolutely can be. A lot of that is going to be necessary. How long will that take? Keep in mind, they're also growing. It is not just thinking about the history. As they continue to grow, they are also going to have to add more and more accelerated computing into the work.

Timothy Arcuri
Analyst, UBS

Got it. Let's talk about competition. We haven't even seen a model yet trained on Blackwell. Everyone is a little up in arms about whether your competitive lead is shrinking or not. Maybe you can speak to that.

Colette Kress
CFO, NVIDIA

Yeah. Let's just talk about where we stand. We're very excited in terms of our Grace Blackwell configurations that we've put in the market. That's both the 200 series as well as the Ultra series and the 300. Today, you're gonna continue to see more and more models come into the world. Those models are right now being built, and you're probably gonna see them in about six months coming out in terms of the new models. What we did, and when we created our Grace Blackwell configuration, that was an important change that we made in terms of completing full data center scale. We refer to that often in terms of rack scale.

The important part to remember is that that was a focus in terms of extreme co-design that would be necessary, not with just one chip, but seven different chips altogether working to create what is gonna be very important for both accelerated computing and many of these new models that would be coming to market. We are very pleased with that. Keep in mind, it is just not related, or it is not anything similar to what you may see in a fixed function ASIC. It is very different. Today, everybody is on our platform. All models are on our platform, both in the cloud as well as on-premise. All workloads all continue to be on our platform.

Timothy Arcuri
Analyst, UBS

From what you see, from the performance of these chips, what you see going on with ASICs and what you see going on with how much you're doing with racks and integration and scale-up, do you feel like your lead is shrinking?

Colette Kress
CFO, NVIDIA

Absolutely not. Our focus right now is helping all the different model builders, but also helping so many of the enterprises with a full stack, a full stack that is incorporating not just that hardware. Remember, everybody needs that assistance to transform their software. Our software platform with CUDA and all the additional libraries are some of the best reasons why people continue to stay on our platform. That platform is one that is usable for a significant amount of time and actually gets better over time. You've seen our continued improvement: using and enhancing our software can give you an X-factor improvement over anything else that we have done.

We're gonna continue working with that and watching customers use that capability to continue new models on our platform, but also maintaining all the same infrastructure that they have on-prem already in terms of working their models as well.

Timothy Arcuri
Analyst, UBS

I get the question a lot about how much of what you're shipping is replacing existing GPUs versus just additive to the existing base. It seems like almost all of what you're shipping is just additive to the base. We haven't even begun to replace the existing installed base. Is that correct?

Colette Kress
CFO, NVIDIA

It's true that most of the installed base still stays there. What we are seeing is the advanced new models want to go to the latest generation because a lot of our co-design was working with the researchers of all of these companies to help understand what they are going to need for their next models. That's the important part that they do. They move that model to the newest architecture and stay with the existing. Yes, to this date, most of what you're seeing is all brand new builds, throughout the U.S. and across the world.

Timothy Arcuri
Analyst, UBS

Jensen mentioned on the call that there's still AI workloads being done on Ampere. Can you talk about that? When I talk to some of these NeoClouds and I ask them how much they're, you know, renting Ampere for when it comes off lease, they say it's for, you know, pretty much the same price. Obviously, you know, demand even for the old instances is still pretty, pretty high.

Colette Kress
CFO, NVIDIA

It is. We still see Ampere. We certainly see Hopper, continuing to be used. That's very helpful for them in terms of their internal research that they do, the work that they are doing to fine-tune their models. They can again, use it because you're backwards compatible, forwards compatible from the software. All of that continues to work in the work they're doing.

Timothy Arcuri
Analyst, UBS

Can we just talk about the profitability of inference workloads? I'm wondering if you can speak to how profitable these inference workloads are for your customers and any anecdotes that can help make the case for ROI. You know, we always hear about ROI, so any anecdotes that you can talk about maybe as these Blackwell racks ship, what the ROI is for these inference workloads for your customers.

Colette Kress
CFO, NVIDIA

You should think about also what we're seeing already with the workloads and what is driving right now the advancement of more and more compute that we're seeing today. Why is that? We had talked about earlier that reasoning models would be an important part of what model builders were building. It was not enough to just have a single response; long thinking, in terms of reasoning, was a very big part of the models that were being built. Those are now coming to market, and you'll see more and more of them again on the Blackwell architecture. What does that drive? That drives, up front, a need for more compute. Those three scaling laws that we've talked about are still intact for all of the different model builders.

They're building greater and greater models in terms of that reasoning. What happens is more and more token generation. That unique token generation also had another piece to it: you have more users. Now you have both more token generation and more users. Users are also now saying, I could be buying that; I would absolutely pay for being able to do that. Now inference has moved not only to reasoning-type models, but there's a margin that is actually being created that fuels again more compute and more models. You've got a flywheel happening already in terms of how inferencing and the token generation has occurred.

Timothy Arcuri
Analyst, UBS

Yeah. All your customers on all their public calls, they all talk about if I had more compute, I could generate more revenue.

Colette Kress
CFO, NVIDIA

That's correct. More compute, more tokens.

Timothy Arcuri
Analyst, UBS

Can we talk about this disconnect between some of the model builders who do not have very much revenue, yet they are, you know, committing a lot of capacity to you and to the supply chain and to some of the large Hyperscalers who do have money to spend, but the, you know, model builders do not have a lot of money today. They have to, they have to, you know, raise the money. How do you think about that as a risk to your business?

Colette Kress
CFO, NVIDIA

Let's first step back. When we talked at the very beginning here, really understanding that those Hyperscalers are continuing to buy compute for their internal use and/or the work that they are doing in terms of transitioning to accelerated computing. The model makers that are out there, you're right, they do need more compute, but just like all things, they're gonna have to work through: have I earned enough in terms of profitability? Can I raise more capital? And can I take a look in terms of additional compute that I would need? All of that is still in motion. It was helpful for all of the model makers to help us understand what that vision is, what things are in the future, giving us an understanding of the options that they have. We'll be here to support them.

Right now, a lot of our work is on today and the next year and the year after that to make sure that the right amount of compute, capacity, and capital is available for what they need today for those models. It was more of a longer-term aspect in terms of that piece. Again, our focus in terms of demand and supply, our supply and our demand is based on do we have POs? Do they have the ability to pay in terms of the capital? Nothing has changed in that perspective.

Timothy Arcuri
Analyst, UBS

Great. On that, just in that vein, can we talk about your partnership with OpenAI? You did announce this big LOI, 10 gigawatts, which by our math is somewhere in the range of $400 billion over the life of that deal. How much of that 10 gigawatts is actually locked in? I imagine maybe there's a gigawatt that you're planning to ship next year, but the agreement is more of a LOI framework agreement, and you're allowed to invest along the way. Can you talk about that?

Colette Kress
CFO, NVIDIA

Yeah. OpenAI and our agreement with them, a very strong partnership, a partnership for more than a decade, and we are their preferred partner for the compute needs that they have. Keep in mind today and our focus, for example, on our $500 billion worth of Blackwell and Vera Rubin is really based on OpenAI's continuation with the CSPs who are helping them with the compute that they would need. Right now, that $500 billion does not include any of the work that we're doing right now on the next part of the agreement with OpenAI.

We believe we'll continue working with OpenAI. Yes, we still have not completed a definitive agreement, but we're working with them. Their desire, which is focusing on how can I work directly, how can I work directly with NVIDIA in terms of how we build out our compute structure. That's going to be in the future. We're right now continuing that work to understand how we can help them through that. Keep in mind, right now, most of it, all of it right now is just with the CSPs in terms of what we've baked in.

Timothy Arcuri
Analyst, UBS

That slide you showed doesn't include anything that would be part of this framework agreement. This framework agreement would be all direct to OpenAI.

Colette Kress
CFO, NVIDIA

That's their plan. They do wanna go direct, but again, we're still working on a definitive agreement.

Timothy Arcuri
Analyst, UBS

Great. Let's, let's talk about your exposure to OpenAI and your Anthropic partnership. How would you contextualize your overall exposure to OpenAI? And then maybe how significant is your partnership with Anthropic?

Colette Kress
CFO, NVIDIA

We're excited about our partnership with Anthropic. Anthropic, needing help in terms of more and more compute, is very focused on our platform as well. We are going to help them. This is again a situation through a CSP, working with Microsoft. That's been a big part. Not only are they interested now through the CSP, they're also looking at one gigawatt in the future.

Now when we think about all of those model makers, we've got all of them focused on our platform and working with us. It's a great position. In the case of OpenAI, OpenAI continues down their path of what they need. I do believe our work with them will never end, in terms of engineering-to-engineering focus, as we've been assisting them and working with our engineers to do so.

Timothy Arcuri
Analyst, UBS

When you saw them making all these commitments, those announcements came out over a couple week period, maybe it was a month. Did that make you concerned at all about your direct and indirect exposure to them? There was just such a flurry of announcements.

Colette Kress
CFO, NVIDIA

No, they are an indirect customer, through the CSPs, but of all the model makers, most of them are also indirect. We still stand that our CSPs are approximately 50% or more of our revenue each and every quarter, and have been for quite some time. Now, their work in terms of helping model makers, which we support indirectly, is a fine process. All of the capital needs are being helped, fueled through the CSPs.

Timothy Arcuri
Analyst, UBS

Great. Let's talk about Vera Rubin for a moment. The transition to Blackwell Ultra has been very smooth. Can you give us a sneak peek on Vera Rubin and what this ramp could look like, and the potential leap in performance we could see relative to Ultra?

Colette Kress
CFO, NVIDIA

Yeah. Vera Rubin, we're pleased to say that it has been taped out. We have the chips and are working feverishly right now to get ready for the second half of next year to bring that to market. We're very pleased with what occurred with Ultra. People come in and say it was seamless, and that's what we wanted, so a seamless transition, very, very helpful for many in the new models that they were creating. You're gonna see an X-factor increase also with performance as we think about Vera Rubin. It's right around the corner for the second half of next year. We're very excited for it.

Timothy Arcuri
Analyst, UBS

There was a point at which Jensen said, even if any competitor offered their product for free, nobody would buy it. Obviously some people are buying from, you know, some of your competitors. They're nowhere near the scale that you're at. What, what's changed? Would you just say that it's, well, look, the market's growing so much that obviously they wanna just hedge their risk?

Colette Kress
CFO, NVIDIA

You have to look at a statement that just says the performance, and the overall use of NVIDIA's ability to create full systems that can accomplish any type of workload, any place, any type of model, is quite unique in how it has been designed. The concept that a fixed-function type of product would be able to do something similar to that leads them down the path that just says you could take it for free and you may not benefit from that. That is what he sees, and that is what we all see, in this. It is very important as they think through not just what they need to do to build the model.

They have the ability to not only train in terms of the model, but complete a full inferencing all on the same type of architecture, each part of that being designed to do so. Being able to scale at many different aspects and scaling up has been extremely important and NVLink's important to do that all for the model making that is happening right now. Once you move from that training and you wanna go into the inferencing, again, you talk about a full system that has been engineered for inferencing with all of the different focus in terms of the networking that is gonna be necessary to make sure traffic and otherwise is happening quite well. Now we see that today, and it's not just about day one, but that is about your full time using that from an inferencing.

Remember, power efficiency is also extremely, extremely important from the inferencing standpoint. Co-designing everything that we did there, it's very hard to think that a very simple, fixed-function chip would be able to do that. That's why many stay on that platform doing both. You have the capabilities to do all those different pieces together.

Timothy Arcuri
Analyst, UBS

Let's just talk for a second about CPX. I get a lot of questions on this, people don't really get how important CPX is. This is the first time where you're taking a workload and you're breaking it up. It's not an ASIC per se, but it's an approach, it's an ASIC-like approach to a workload. Can you talk about CPX and whether there are more workloads, you know, how ubiquitous that approach could be?

Colette Kress
CFO, NVIDIA

There is a need for breaking down, whether it be the training or breaking down the inferencing, through there. You're gonna have many different types of inferencing requests and needs. CPX takes you to a different stage within the same infrastructure to get that done. The concept that you would have multiple, different infrastructures working at the same time to accomplish that. This is everything that you would think about in terms of the world of a mixture of experts.

This is the key piece right now that is very important for the model builders. All of it is based on how you designed your work with those experts. That takes a very important amount of compute that is able to complete not only a full model, but each one of the experts. They are breaking that down, but not necessarily breaking it down such that you can use a different type of compute to do it. Staying full on that full system is probably the most efficient way to get that done.

Timothy Arcuri
Analyst, UBS

Let's talk about software for a moment. There's this argument that some people have that because you can program now in AI, that somehow AI will itself break down your moat in CUDA, that it will allow somebody to build a platform faster that could approximate what CUDA does. How would you respond to that?

Colette Kress
CFO, NVIDIA

CUDA is a very longstanding and important development platform that has been with us for several generations, and we're probably on our 13th version of it. The important part of that is not only CUDA, the development platform, but the consistent libraries for all different types of industries, all different types of workloads. The way you wanna think about those libraries: you're generally providing them at least the first 100 lines of code, or help that you can go and do. Starting with that, an important group is going to be the enterprises. It is going to be those that are building for themselves and have some ability to do that. Not everyone's going to be staffed with software engineers at scale in order to do that software. It's a very, very important part.

For many years, people have talked of, hey, we can do something very similar to CUDA, or we can take the key piece of it to CUDA. It hasn't been very successful because when you think about AI and how fast it's moving, we are keeping updates all the time in terms of new techniques, new things that they are working on. It just is always going to be behind if anybody really thinks that that's an easy thing to do.

These have been designed working with all of our different GPUs, not just one GPU, and all of it is backwards compatible and forwards compatible. It's one of the best features that we have. You buy the compute, and it will probably get stronger, more performant as we continue to improve the software over that period of time. We've done that with Hopper, and you're also starting to see it with Blackwell. That helps the continuation on why they use our compute for a long period of time; it keeps getting better as they own it.

Timothy Arcuri
Analyst, UBS

Is there a metric that if you bought A100 and you're still using A100 with all the CUDA updates, how much have you been able to improve the performance of A100 with these CUDA updates?

Colette Kress
CFO, NVIDIA

Each one of them has different capabilities, but each one, whether it be Ampere, whether it be Hopper 100 or Hopper 200, brought an X-factor improvement. Even if you think right now with Blackwell, you've got a total increase from the last generation of 10-15X. Within that, you probably have a 2X just from the software after we've gone to market with it. It's a big improvement.

Timothy Arcuri
Analyst, UBS

Great. Can we talk about margins? You've done a great job this year. You committed to being in the mid seventies. You made that commitment early this year and you've reached that. Some people are worried that because of the price of HBM and because of the HBM content and because of the cost escalations and just in your BOM that you won't be able to hold those margins. You sound pretty convinced that you can hold mid seventies next year as, as, you know, Rubin ramps. Can you just talk about how you, how you plan for this?

Colette Kress
CFO, NVIDIA

Yeah. Always when you complete what you said you're going to do, they're always gonna ask what's next. We knew that would be very important, but we're very pleased with the work that the teams did in terms of really fine-tuning our cycle times, our yields, our costs, all of that, to move into the mid-seventies. That's a very great number if you think about what we've accomplished over a very short period of time. What comes with that is seeing right now that the Blackwell Ultra version was quite seamless, which again allowed us to focus more and more on cycle time and work that we could do. We are aware of supply, the prices of supply. It's important.

Those are very important parts of our business. If we think through just the scale of what we're doing and what we can do with even one more day of efficiency in cycle time, and our focus on using our costs in the best way to do that manufacturing, we believe as we move into next year that we'll also stay within about the mid-70s.

Timothy Arcuri
Analyst, UBS

Great. One thing that I thought was really notable from this last earnings report was if you take your inventory increase combined with your increase in purchase commitments, it had been going up a couple billion dollars each quarter and it went up $25 billion this time, a massive increase. Obviously that portends significant revenue growth over the next two to three to four quarters. Can you talk about that? Can you talk about the purchase commitment side of it? People look at the inventory and they say, you know, inventory went up so much, that's bad. I don't see why that's bad. That's good if you think you're gonna grow so much. You take the purchase commitments also and it went up a lot.

Colette Kress
CFO, NVIDIA

Yeah. Let's focus on inventory and purchase commitments. You're right. Those are good things, and those are growing. That means we do have supply for what we think the future of our demand is. We have ordered our supply, but let's break that down a little bit. The inventory is where we are building: work in process that will likely go to market within the current quarter. What you saw in the inventory, and where we stand now at the beginning of December, probably all of that has moved, has already been shipped to our customers. The next piece of that, though, is to look at where we stand in terms of purchase commitments.

If we have talked about our growth, the growth that we see in that half a trillion by the end of next year, we have to be ordering very, very important amounts of supply. What has changed over time, keep in mind, is the complexity of our systems and what we have to do to put those together, whether those be components or the seven chips. There's a lot that needs to be ordered; long-lead-time items are also key, and we wanna make sure that we are not behind.

What's interesting about it is, your purchase commitments, your inventory, they're important, but let's remember supply and demand and managing that, it's a day-to-day type of thing. Things change, you may need more, and you're always, always, always working the supply. Yes, you should look at that as a good thing. It was a good thing; we're growing.

Timothy Arcuri
Analyst, UBS

I wanted to ask about this famous slide that you showed at GTC now, and we all see this $500 billion number between calendar 2025 and calendar 2026, and we all try to back into what that means for calendar 2026. What that also does not say is that there are deals you are signing if you can still do something within lead time, say for this Anthropic partnership, for example, that would sit on top of that number. Is that correct?

Colette Kress
CFO, NVIDIA

That's correct. We talked about that; it was probably unique for us, at our GTC DC, to actually discuss what we saw going into the next year. It is important to understand the planning that all of these companies need to do from a capital and capacity perspective, as well as compute. We felt it was important to understand there is a lot of growth still planned as we move into the beginning of this next year as well. You are correct. There is also an opportunity for that to increase more. I talked about the additional things that we saw in the Middle East in terms of focus, and you may even hear about another one today as well.

Timothy Arcuri
Analyst, UBS

Great. That slide to me suggests you're gonna do between $350 billion and $400 billion next year in revenue. You're gonna generate tons of cash. The next question is, capital allocation, and this is probably the last question we have time for, but how do you think about when you're making these strategic investments and you're allocating all this cash? I know you have to make these purchase commitments, so you have to keep a lot of cash on hand for that. How do you think about capital management given all of the cash that you're generating?

Colette Kress
CFO, NVIDIA

A really important question. I wanna make sure everybody understands one of our largest focuses: making sure that we have the cash available for our internal needs. That is a lot in terms of the supply and the capacity that's gonna be necessary to build what we're building today. As you know, the engineers are off working on Vera Rubin and bringing that into market as soon as possible. That means we need that capital just to run our business, based on just the size of that growth. That's always gonna be number one: back into the business in terms of what we need to do. The second piece, also a focus of ours, is shareholder return. Okay. What can we do in terms of stock repurchases and our dividends?

Those will always be a part of what we do, within there, which then leads that last part within our free cash flow. What can we do in terms of strategic investments? Today, our strategic investments are focusing on the ecosystem and expanding that ecosystem. We have longstanding partners that we have been working with for years in terms of that, but the market is growing and there's opportunities, within the ecosystem to assist and invest and learn from their work that they are doing because it will be an important part for AI going forward.

Now, keep in mind those investments that we make in them are small. For a lot of the work where they need to purchase capacity, it's probably still 90% or more of the capital that they need to raise themselves. Our goal is to help understand what's gonna be possible in the future with those different types of ecosystem investments that we do.

Timothy Arcuri
Analyst, UBS

Yeah. It does seem like you're pivoting a little more toward ecosystem investment versus M&A. Is that fair?

Colette Kress
CFO, NVIDIA

I would say we do both. It's hard to think about very, very significant, large types of M&A. I wish one would come available, but it's not gonna be very easy to do so. We do focus on M&A. We focus on engineering teams that can be helpful for our platform and work. From time to time, we do have those.

Timothy Arcuri
Analyst, UBS

Perfect. We've run out of time. Thank you, Colette.

Colette Kress
CFO, NVIDIA

Thank you so much.
