NVIDIA Corporation (NVDA)

Goldman Sachs Communicopia + Technology Conference 2025

Sep 8, 2025

Jim Schneider
Senior Country Analyst, Goldman Sachs

Okay. Good afternoon, everybody. Thanks for being here. Welcome to the Goldman Sachs Communicopia + Technology Conference. My name is Jim Schneider. I'm the Senior Country Analyst here at Goldman Sachs. It's my pleasure and sincere honor to welcome NVIDIA CFO Colette Kress to the stage today. Welcome, Colette.

Colette Kress
CFO, NVIDIA

Thank you.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Thank you for being here.

Colette Kress
CFO, NVIDIA

Thank you. Happy to be here.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Maybe to start off on a few things from what we heard from you a couple of weeks ago on your latest earnings call: you stated that data center infrastructure capital requirements could reach $3 trillion to $4 trillion by the end of the decade. I think many investors were doing a sound check at that point. Maybe give us a little bit of context for that statement and unpack or share some of the major building blocks behind it.

Colette Kress
CFO, NVIDIA

Okay. First, let me remind everybody that we may be making forward-looking statements, and I kindly remind you to look at our website for our disclosures, as well as our 10-Q and other reporting. Let me talk about what has happened since the last time we were here. A year ago, and congratulations on your new role in hosting us here, we came in to discuss our Blackwell architecture with a lot of folks. People were really interested in how that transition would go and whether there would be an air pocket in between. We have obviously safely enabled Blackwell into the market, not only our current Blackwell and our GB200 systems, but now also our Blackwell Ultra systems.

That is one piece that has quite changed during that period. Additionally, at this time last year we often talked about: is there going to be a future need for compute? Have we maxed out on the need for compute? There was a lot of discussion about what would be necessary in terms of pre-training, post-training, and inferencing. As you can see over the last 12 months, there is still a tremendous need for compute out there. One of the most important drivers increasing compute today is reasoning models. We'll talk a little more about those as we go forward. Lastly, there was a lot of discussion about the one-year cadence. Why do you need it, what will it help, and is it something you can execute?

Our one-year cadence is going quite well. It is tremendously important to us and to our customers that we keep that innovation advancing as fast as possible, which has allowed us to have multiple different architectures in place. Most importantly, we still have Vera Rubin getting ready. We have indicated not only that our Rubin chip is available, but remember, there are six chips in Vera Rubin that will be coming to market. Those are doing just fine and have taped out. Now it is time to mature those before they go into market. Let's step back to the question regarding the $3 trillion by the end of the decade. What does that mean? What are we thinking? Its purpose was really to help the full ecosystem understand how important this market and this transformation is.

We are really talking about a new computing platform for the decades going forward. This isn't just about an AI solution. We really need to transform from what has been here for more than 20 or 30 years as the standard computing platform. When you think about accelerated computing and AI, it is that large a transformation. If you recall from our GTC, we talked about the likelihood that in a couple of years we would meet the $1 trillion mark in capital needed to fuel the data center infrastructure being built. We're on track to do that. Even our CSPs today have literally doubled the amount of capital they are spending from just two years ago. That is only one part. You can look at those four top CSPs.

You can look at the Mag7. You can look at the AI labs that are being built. From the computing perspective, yes, we're right there behind that. There are other pieces, such as power and getting the data centers ready. Most data centers are usually thought through and discussed more than three years out, and that work is beginning. Keeping the ecosystem informed about what we see, and how likely it is, was a very important exercise.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Very good. That's helpful context. Maybe following on from that: when you reported this last quarter, data center had really good growth in Q2, and you guided to strong growth again in Q3, even without the contribution from China in the numbers. There were some moving pieces in there between networking, compute, and other products. Maybe help us understand what's driving that demand and the different components contributing to that strong growth outlook.

Colette Kress
CFO, NVIDIA

Yes. We're really talking about our data center revenue as a whole, stripping the H20 out of our Q1 and Q2 results, and looking at the data center inclusive of compute as well as networking. In Q2, that was sequential growth of 12%. What we're targeting right now for Q3 in our outlook is 17% growth sequentially. You're already starting to see a surge in the demand we see going forward and the preparation for it. In our Q2, there were a lot of different parts. Not only did we continue what we were doing with our Blackwell GB200 and B200, but we were also at scale with our GB300 Ultra. There was a lot of discussion saying, I didn't know that would actually be a big part of it. It was seamless.

It was a seamless transition, and many people didn't understand the amount of scale and volume we were actually able to put into the market. Both of those are moving quite well, and you'll see more of that in Q3. We are still shipping both GB200 and GB300. Keep in mind, our networking is part of many of our systems: those Grace Blackwell systems as a whole include our NVLink, and we will continue to talk further about how important that transition was. NVLink is also incorporated in our networking number. That usually runs side by side with what we see in compute. There are also additional important areas where we have focused. We knew that Ethernet for enterprises was very important, but we built an enterprise focus on Ethernet for AI in what we do.

That is also doing quite well and has a quite good attach rate across a lot of the systems we're shipping. We are really focusing on everything at data center scale and that completeness. It was very successful and grew not only quarter over quarter but year over year. Additionally, InfiniBand is the gold standard, continues to be the gold standard, and is focused on a lot of the supercomputing. We have a new offering there, and it saw tremendous sequential growth, nearly doubling sequentially. A lot of great things within our Q2, and more to come as we look at Q3.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Very good. Now, the H20 is your product for the China market. Maybe talk to us about any update you can provide on demand for the H20, what needs to happen for you to ship that product in Q3 to China, and maybe just talk about the broader confidence in your China business overall.

Colette Kress
CFO, NVIDIA

Yeah. We did receive license approval and have received licenses for several of our key customers in China. We do want the opportunity to complete that and actually ship the H20 architecture to them. Right now, there is still a geopolitical situation that we need to work through between the two governments. Our customers in China want to make sure that the Chinese government is also comfortable with them receiving the H20. We do believe there is a strong possibility this will occur, and it could add additional revenue. It's still hard to determine how much within the quarter. We talked about it being about a $2 billion to $5 billion potential opportunity if we can get through that geopolitical situation.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Very good. Let's touch on the prospect of competition or potential competition for a moment, if we could. There's been an ongoing debate, as you know, in some of your meetings about the role of ASICs for both the training and for the inference markets. How do you see your competitive position evolving for both these kinds of workloads?

Colette Kress
CFO, NVIDIA

Focusing on inferencing and training, it's been interesting to watch the dynamics. People say, we understand your training performance has been important for so many years in building large language models or recommender engines. But inferencing is very essential as well, and the two are not necessarily separate workloads. As we move forward and see the work being done on reasoning models, you will likely continue to do a lot of post-training along with that reasoning model to assure you get the right response, and you will likely be working with multiple different types of models. Where we created a data center scale system, it was to focus on what we knew was such an important industry in inferencing, which will be much larger than anything we have seen.

A lot of that growth in inferencing has been fueled by more individuals simply using AI solutions, but also by the greater token generation required by reasoning models. Now, why is the reasoning model such an important piece? Because if it can reason, if it can get to a high level of performance on reasoning, it can do work for us. That work is really the agentic AI that is in front of us. We are looking at our position of creating a data center scale system that is the most performant, but also the most performant per watt and the most performant per dollar. That wattage is such an important thing. Right now, you can debate whether capital or power is more important. In truth, they are both tremendously important.

When you are purchasing any type of large system like ours, you have to keep in mind you will be using power throughout the journey of owning that full cluster, for six years or even longer. Having high performance is going to be very important to make sure you are properly addressing the power that will be needed. We stand very strong in how we thought about that transition and moving to a full data center scale solution. Some others frame it as a rack-scale type of capability. We put all of the different chips together so they would work together and be optimized together for the right performance. We feel very good about our plan and that transition, but it was a very big transition for us.
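
To put some illustrative numbers on the performance-per-watt and performance-per-dollar point above, here is a minimal total-cost-of-ownership sketch. All figures (capex, power draw, electricity price, utilization) are hypothetical assumptions for illustration, not NVIDIA data:

```python
# Illustrative TCO comparison (every number here is a hypothetical assumption).
# The point from the discussion: a cluster is paid for once, but power is paid
# for over the whole ownership period (~6 years), so performance per watt can
# matter as much as performance per dollar of capex.

def tco(capex_usd, power_mw, years=6, usd_per_mwh=80.0, utilization=0.9):
    """Capex plus cumulative energy cost over the ownership period."""
    hours = years * 365 * 24
    energy_mwh = power_mw * hours * utilization
    return capex_usd + energy_mwh * usd_per_mwh

# Two hypothetical clusters assumed to deliver the same total throughput:
# cluster B costs more up front but draws less power for the same work.
a = tco(capex_usd=500e6, power_mw=50)   # cheaper, less efficient
b = tco(capex_usd=550e6, power_mw=35)   # pricier, more efficient
print(f"Cluster A six-year TCO: ${a / 1e6:,.0f}M")
print(f"Cluster B six-year TCO: ${b / 1e6:,.0f}M")
```

With these assumed numbers, the more power-efficient cluster ends up cheaper over six years despite the higher purchase price, which is the trade-off being described.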

Jim Schneider
Senior Country Analyst, Goldman Sachs

Okay, very good. Now, as AI becomes more mainstream, it stands to reason, I think, that some workloads might be a little less compute intensive going forward and could run on multiple generations of NVIDIA architectures. How do you think about your market share in those kinds of simpler AI workloads, or things that might be tied to smaller models over time?

Colette Kress
CFO, NVIDIA

The compute you put together at any enterprise is probably a full AI factory, where all of your data and all of your pieces are together. What that enables you to do is handle all different types of traffic and all different types of requests within the same system, and you are more efficient with it all together. We believe these AI factories will continue to grow and be a significant piece of how enterprises think about their data and pull it together. I don't think it is a matter of, hey, I've got a smaller model. They will all be trying to pull all of their data together. It's not just going to be about a small amount of data, and they will probably continue to connect to many different types of systems going forward as well.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Okay, interesting. You feel like you can maintain that share leadership across the different sizes of models.

Colette Kress
CFO, NVIDIA

What you want to do is use your best resources to manage that full data center. Keeping the strongest performance and bringing all those things together collectively is going to be the right response.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Yeah, very good. Now, I think it's pretty clear, almost indisputable at this point, that NVIDIA's platform continues to lead the market in terms of performance. You also have an annual product cadence, as you mentioned earlier. Can you maybe talk about some of the economic benefits that annual product cadence brings to your customers?

Colette Kress
CFO, NVIDIA

The economic benefit we are already seeing from the one-year cadence comes from the speed at which AI is evolving. At every point in time there is new connectivity and new types of models and pieces that need to be put together. By continuing to advance and innovate at that speed, we keep folks from worrying about which version they're on, because they will still be in line with a cadence that improves every time. What we find is that assuring your power is utilized effectively, whether with the most current version or the prior version, which is still tremendously performant, has helped customers have different ways of getting ready.

Our newest architectures tend to go to some of the larger models first, but take the GB300, for example: the Grace Blackwell versions were used right out of the gate for inferencing, as customers saw the 30x improvement over where we were with Hopper. That is a tremendous performance improvement, but it also enabled Hopper to continue doing significant work preparing for that inferencing stage. Keeping the Hoppers running alongside the Grace Blackwells has worked quite well.

Jim Schneider
Senior Country Analyst, Goldman Sachs

In terms of a product perspective with that annual cadence, it seems to me at least that Blackwell was a pretty big performance increase versus Hopper. Going forward, how should we think about the performance increase offered by Rubin relative to Blackwell?

Colette Kress
CFO, NVIDIA

Okay, that's correct. Rubin is on that path, and the one-year cadence is a journey we're ready to take on with Rubin. Vera Rubin has six chips, all of them taped out and now maturing. This will be another point where we can continue to advance some of the most important pieces. As you heard on our earnings call, we announced our focus on scale-out, scale-up, and, importantly, scale-across. That's a new thing to think about in addition to what we are building, and it will be a great advancement going forward. One of the most important pieces we created when we transformed Grace Blackwell into a data center scale system is our NVLink.

NVLink is at this point in its fifth generation, and it is the piece that enabled not just eight GPUs together, but a full rack scale, currently at 72 GPUs. That was a huge efficiency gain. That's what enabled such a fast and performant move to Blackwell, and those two things together, the scale-out perspective and NVLink, are probably by far the most important to our work.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Very good. So to contextualize, scale-out and scale-across are going to be among the big drivers of what Rubin provides people.

Colette Kress
CFO, NVIDIA

Right. We'll see when it gets there.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Okay, very good. You also reported very strong growth in networking this quarter, probably much stronger than most investors, including myself, had expected. What is driving that, and why was the networking growth so strong? Is it a forward indicator of what's to come on the compute side? Do you think about the networking build as pre-building to fill in compute later on, or how do you contextualize that for investors?

Colette Kress
CFO, NVIDIA

Yeah. The way you want to think about networking is as a matter of timing, of when it arrives. It's very well connected right now to how we designed our compute infrastructure, and as we indicated, NVLink is incorporated in networking but is a very important part of those systems. Long term, the growth rates you see in compute and networking should be approximately continuous, or said differently, total data center revenue has that continuous growth. In some cases, though, there is a timing element to when the networking is received. A lot of times your networking ships before the compute, if they need to wallpaper the entire data center with that networking first. Those are just some of the short-term timing effects.

In the big picture, compute and networking are both very important and are both growing quite consistently.

Jim Schneider
Senior Country Analyst, Goldman Sachs

You mentioned NVLink. That's been a key competitive advantage for NVIDIA for some time now relative to other technologies like PCIe and others. How do you view the opportunity for NVIDIA in opening this technology up with NVLink Fusion? How do you think about both the opportunity side and whether there's any risk on the downside for you?

Colette Kress
CFO, NVIDIA

Yeah. We still offer both NVLink and PCIe. Keep in mind, as we began with our Blackwell architecture, many of those systems were liquid-cooled, very energy efficient, but not everybody was ready to build that out. We still have an enterprise PCIe version, which allows many different enterprises and certain industries to use it, and we will continue with a PCIe version. Yes, the scaling of NVLink has been a huge value to so many of the AI lab builders, with the tremendous amount of token generation they need. We came to NVLink Fusion with the question: how do you continue to maintain the focus of our platform and its benefits, while also adding many other different characteristics into the data center?

Maybe others could bolt on alongside what we have in our data center. We'd be happy to allow them to be a part of our full infrastructure, and that's what NVLink Fusion enables. Now, there is a lot of interest in NVLink Fusion for many different other chips that could be added, and I think we're going to have more to talk about in the future on how that's doing.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Okay, very good. I think it's fair to say that if you talk to anybody in the supply chain, they're very impressed by your ability to scale supply for your products over the past couple of years or so. How have you accomplished this, what lessons have you learned, and how do you ensure your supply chain can actually keep up with NVIDIA's rate of progress?

Colette Kress
CFO, NVIDIA

Okay. The supply chain has been such an important part of our success. As you saw with the work we did at GTC Taiwan, we really wanted to recognize many of our suppliers, because they have spent so much time with us over the last 30 years, and they have been truly inspirational in how they feel they can raise the overall supply for us. Supply is not just about requesting supply. Many of them also have to think about additional capacity, meaning additional factories, completely different manufacturing lines. We also have to think about resiliency and redundancy, so that we can use multiple different suppliers to build the amount of compute we need both now and as we move forward.

Those folks have been working with us and staying ahead by understanding where our architecture is going so they can build toward it. We believe that partnership has been one of our largest success factors; I don't think many other companies could have built that supply chain. It's a question of who calls whom first in the morning, but the suppliers are all here to help us in every way. It's not just about ordering. It really is about scaling the entire operation, and that is what we're working on.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Makes sense. You talked about Rubin and the performance advantages it provides. Maybe just give us a quick status check on how Rubin is progressing, and downstream how NVIDIA is sort of preparing or helping data centers get ready for your latest technologies and preparing for acceptance in the ultimate end user environment.

Colette Kress
CFO, NVIDIA

Yeah. We've been very open about our architectures and the cadence on which they're happening. The closer we get, the more information we've provided, and we've been sharing and building a great understanding with the customers. What does that enable? It enables the right type of planning. Standing up these data centers, from the very beginning to the end, takes about three years. Customers need to understand what's available and when it will be available. Right now, as Vera Rubin and its chips mature, we have already had discussions where we will probably see several gigawatts of need for Vera Rubin. We've likely already seen that and penciled it in. Even well before it's ready to go to market, we are already seeing gigawatts' worth of need going forward.

We feel this is a great one-year cadence program because it helps customers plan their data centers efficiently.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Okay. I think Jensen has mentioned in the past that the useful life of NVIDIA GPUs is quite long. How do you think about the useful life of the chips that were deployed in early 2023, and when should we expect a replacement cycle for that generation of products to start kicking in?

Colette Kress
CFO, NVIDIA

Yeah. We're in 2025, and chips from 2023 are still out there doing work. One piece of this is the depreciable life. The depreciable life just means the life you have chosen for accounting purposes, and for many customers that is probably about four to six years. Many of them will keep the hardware in their data center beyond that because it is still high performance. Sure, the next generation offers more, but if you value your power, that is a key consideration in deciding to move. If you are going to remove any version, you want to replace it with equal or better performance.

They are often getting a lot of benefit through that full period, and the residual life is actually quite reasonable for what they have. We are not yet seeing a lot of change there. We are seeing Hoppers up and running quite well, and it will be a question later, based on scale, whether they want to change those out for a better power option.
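
The distinction drawn above, the accounting (depreciable) life versus the actual useful life, can be sketched with straight-line depreciation. The numbers are hypothetical assumptions for illustration only:

```python
# Straight-line book value vs. remaining usefulness (hypothetical numbers).
# A cluster can be fully depreciated on the books while still doing productive
# work, which is the "residual life" point made above.

def book_value(cost, life_years, age_years):
    """Straight-line depreciation; book value floors at zero."""
    remaining = max(life_years - age_years, 0)
    return cost * remaining / life_years

cost = 100e6   # assumed cluster cost
life = 5       # assumed accounting life, within the 4-6 year range mentioned
for age in range(7):
    print(f"year {age}: book value ${book_value(cost, life, age) / 1e6:.0f}M")
```

After year 5 the book value is zero, but nothing about that accounting choice forces the hardware out of the data center; whether to replace it is the separate power-and-performance decision discussed above.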

Jim Schneider
Senior Country Analyst, Goldman Sachs

Okay, very good. A couple of quick financial questions, if I could; I'd be remiss if I didn't ask. You reiterated a gross margin target of getting back to the mid-70% range by the end of this year. What's driving this, and how should we think about your gross margins more generally as you navigate different product transitions in the future?

Colette Kress
CFO, NVIDIA

Okay. We knew going into a new type of build, as we did with our Blackwell architecture at full scale, that we had to work through building out a full data center scale product, and we took actions accordingly. We now have that running quite smoothly, including with GB300 and the Ultra version. That allows us to focus our work on cycle time, less time to market, lower cost, and a full mix of all the different offerings we have, which can also improve our gross margins. We're on track for that. You saw us increase in Q2 and in our outlook for Q3, and we're right in line moving into Q4.

In terms of the future, as you remember, our whole focus in pricing and how we think about it is total cost of ownership. How do we make sure we can provide customers the best TCO of anything else they could ever consider? That will be a factor. As we come back with those types of systems, we'll incorporate the gross margin picture at that time.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Okay. From a capital allocation perspective, I think it's fair to conclude you generate a lot of cash. Talk to us about your capital allocation strategy. Specifically, is large-scale M&A a possibility for NVIDIA at this point in time or not?

Colette Kress
CFO, NVIDIA

Let's first start with our most important focus when we think about that capital and that cash: leveraging it in the most strategic way for our ecosystem is where we want to go. For those early in their work, how can we infuse capital, in various forms of investment, to actually help? Yes, we do M&A where engineering may be brought in that actually helps us, whether from a software perspective or new techniques we want to build into our infrastructure. I am never going to say that M&A is not possible. As for the size, it just depends. It's hard to know: is it always going to be a perfect match for us? Is there a perfect company out there of very large size?

We are quite fortunate with Mellanox, which is probably about the best acquisition that ever happened. Sure, we'd love to have a twin of something like that, but it's difficult. We do focus on using our cash for the most strategic parts of the ecosystem that we can. That doesn't mean we will not repurchase our stock. That has been our practice, partly to offset dilution, and maybe a little more than that. We always keep our dividend as well.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Very good. I'm going to close out with two high-level, go-forward questions if I could. One: if you think about NVIDIA's leadership, Jensen, yourself, and other members of the leadership team, what are your top two to three priorities for the next, say, two years, both internally for the company and potentially externally?

Colette Kress
CFO, NVIDIA

Surely, if we think about what runs every single day at NVIDIA, it is getting the next architecture out and moving clearly at that cadence. The next piece is not a surprise either: we are going to work on the cadence after that, focusing on what we believe is coming around the corner, so that we can make sure our infrastructure in every single cadence is right, at the bleeding edge of where the next AI is going. Those are our top pieces. I think we are an agile company, probably more agile than any large company for sure, and even than many small companies. That we can move at that speed has been one of the greatest benefits of our global leadership and company culture as a whole.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Very good. I think it's fair to say you've seen it in your investor conversations, and I hear it every day from investors: obviously, there's a lot going on in the AI space today. A lot of market change, a lot of technology change, and a lot of competitive change. People get buffeted by these data points almost every single day. If you were to say, here are the top two, three, four things for investors to really focus on regarding the evolution of AI, what would they be?

Colette Kress
CFO, NVIDIA

Yeah. It's interesting that even with a great quarter and a great next quarter ahead, there is always the question: okay, but what's next, what's out in the future? It's a great question to ask because we're on a journey, and maybe we're in the first or second inning of this journey, because the world needs to transition, as we've discussed. It is moving not just to AI solutions, but to a different form of computing: accelerated, parallel computing is needed. You can look and say, I can see the four cloud providers and the work they're doing. They are doing a tremendous job of being helpful in the early stages, as a way to start using AI in the cloud.

There is so much more that needs to happen. If you recall the very onset of AI, we were focusing on perception. We were looking at items, sorting them into categories, and saying, look what we've done, we've been able to categorize. The next phase focused on recommender engines and pieces of that. Then along came generative AI. Everyone said, this is amazing, this is great. It is an important capability that is still absolutely advancing, particularly as we talk about reasoning and what will move to agentic AI. You want it to actually get work done. Now, don't get me wrong, sitting in the evening with your lovely model talking with you all night as you work through your questions is a great thing.

It would be even more impressive to show up at work the next day and see the amount of work that can be accomplished. It's an important piece because there are so many industries right now that struggle with how they are even going to find the labor to do all of the work that is needed. The more that AI can be used there, the more efficient everything will be. We have a long way to go with enterprises, industry by industry, as they transform. You're not going to see AI take hold only through some new AI tool or system. There is already a full set of SaaS systems, a tremendous amount of software, that will be infused with AI solutions to get us there.

As we see that journey every day, there are always new and exciting pieces. We love being in the highlights, seeing what is moving and what people are building. Take the next models: they're multimodal. Our models right now have a tremendous amount of information, a tremendous amount of data, a tremendous amount of numbers and words. Now, how do I read a PDF? How do I include video? How do I make models specific to physics or otherwise? There is a tremendous amount more work as we go forward. What we focus on is the full journey. We enjoy being the key platform enabling what we'll see in the future. That's the work in front of us.

Jim Schneider
Senior Country Analyst, Goldman Sachs

Very good. I think it's fair to say that if this is only inning one or two, we've got a heck of a ballgame ahead of us. Thanks for being here, Colette. We appreciate it.
