NVIDIA Corporation (NVDA)

J.P. Morgan CES Fireside Chat

Jan 6, 2026

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

Great. Thank you. Good morning. Happy new year, everyone, and welcome to J.P. Morgan's Virtual Fireside Chat Series here at the 2026 Consumer Electronics Show. My name is Harlan Sur. I'm the Semiconductor and Semiconductor Capital Equipment Analyst for the firm. Very pleased to have Colette Kress, Chief Financial Officer of NVIDIA, here with us this morning. It's been a tradition for the past 12 years to have Colette and the NVIDIA team kick off the investor events here at CES. I'll ask Colette to start us off with an overview of Jensen's NVIDIA Live Event yesterday, and then we'll go ahead and kick off the Q&A. Colette, thanks for joining us today. Happy new year, and let me go ahead and turn it over to you.

Colette Kress
CFO, NVIDIA

Okay. Thank you so much for having me here. I appreciate it.

Operator

Please stand by. The call will resume momentarily. Thank you for your patience. The call will resume momentarily.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

Yeah. Great. Thanks. We had a little bit of audio and video difficulties, but we've got Colette back. So I'll ask Colette to start us off with an overview of Jensen's NVIDIA Live Event yesterday, and then we'll go ahead and kick off the Q&A. So Colette, let me go ahead and turn it over to you.

Colette Kress
CFO, NVIDIA

Okay. Let me first start, as a reminder, folks, that this discussion may contain forward-looking statements, and investors are advised to read our reports filed with the SEC for information related to risks and uncertainties facing our business. But now let's get back to CES and the announcements that we made here yesterday. It's an important time for us to remind everyone about the transitions that are taking place in the market today. There are three different transitions, all very important ones. The first one is one that we have talked about for several years: the need to move to accelerated computing, where CPUs alone can no longer advance our current development. So folks are moving to accelerated computing throughout the world. Second, the development of generative AI is also a key transition.

Those are things that are changing a lot of our work today, whether it be search or any of the social media or otherwise. Generative AI is playing its part. But the third and important transition we see going forward is the move to Agentic AI. Agentic AI is really where it is getting work done, work that can augment the work of many employees and many of our folks at home. All of these things are really important as we think about going forward. Those transitions are going to take some time, and they're all occurring and creating an exponential growth in terms of our compute. So that's one of the opening statements we want to share as a reminder of what we see in AI going forward, but also what we're doing in terms of accelerated computing.

This event highlights a lot of different focus areas, not only AI and AI for business, but also our work in robotics and Physical AI going forward. But an important part of the discussion was our next and upcoming version, Vera Rubin. Vera Rubin, as we've discussed, has definitely taped out and is ready to go, but this was an opportunity to help folks understand that we are in good shape to bring this to market in the second half of the year as we are in full production. The important part of Vera Rubin, as we've discussed, is that it is six different chips, and I think it's important to talk about what that means.

Six different chips that have been extremely co-designed together to create a data center infrastructure at scale. This isn't about coming here and talking about one piece, or saying we are designing and/or just building out the rack. It's more than that: every piece has been thought through in terms of how it works with each and every one of the other chips. The six chips that we're talking about: first, of course, there is Rubin, our GPU. There is Vera, our CPU. There is our next and greatest version of NVLink for scaling up. It also takes us to Spectrum-X with the SuperNIC, what we have with BlueField, and then also our switch for CPO, co-packaged optics.

And those six different chips have all been harmonized in terms of what we are bringing to market. We're excited about all the different workloads it will be able to support. But some of the key things we have seen already: this is a full system that will essentially bring training time down to one-quarter of what we had with Blackwell. Additionally, you have the capability of 10x higher throughput. And thirdly, and an important part for the inferencing phase, it's actually one-tenth the token cost throughout. Bringing those parts together, we are getting ready for that to scale in the second half of this year, and then we'll be in full ramp as we move into the next calendar year as well.

So those are some of the highlights, and we can talk about more of it in this discussion as well.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

Yeah. No, that was a great overview, Colette. Thank you very much. And Jensen spent quite a bit of time yesterday focused on Physical AI. And the team has framed Physical AI as a massive opportunity, right, powered by platforms and models like Cosmos, Omniverse, Isaac, and vertical-specific frameworks like GR00T and Alpamayo, right? Customers are already here at CES bringing robots in many different verticals to market using Cosmos and GR00T. The Mercedes announcement yesterday is leveraging your Alpamayo-based reasoning model, right? So is Physical AI already a financially material contributor to your data center revenues? And how should we think about the growth curve over the next few years for Physical AI?

Colette Kress
CFO, NVIDIA

Yeah. Physical AI is yet another great opportunity once we advance Agentic AI. And you're correct, there are different types of models that are going to be needed for Physical AI. An important part of what we brought to market and discussed is really the need for open-source models. Right now, if you think about the top proprietary models, the next in line is the summation of all of the open models, and that shows how important these are. Now, these open models are definitely important for the enterprise and the work they're doing, but also for Physical AI there is an abundance of modeling coming from open source, whether for research or for developing the content within. Those models are now in place and here today.

So, here on the CES floor, and in terms of our offering, we have robots on display, but also automotive. Your question is, are we seeing that today? And yes, Mercedes is coming to market on the back of very, very hard work that we have done over the last eight years to move to a very high-end self-driving capability in the car, really focused on safety. And Mercedes has now been able to take the lead with one of the safest cars that will be in the market. So yes, we are definitely earning revenue from our work with Mercedes, as well as many others that are using our platform, including back in the data center.

And that's an important piece to keep in mind: the amount of data that is collected and put together in the data center, as well as what is inside the cars. As we move forward, taking that to an area such as Physical AI for robotics is also going to be extremely important. The learnings and the simulations we've seen in automotive carry very nicely to what we will be able to do with robotics as well. So yes, it's a key part of that. We see much work with our Jetson platform, our Omniverse platform, and now also our open models helping these important parts of Physical AI.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

That's great, and you touched upon a very relevant topic in your opening remarks: it's good to see the team is in production with your next-generation Vera Rubin AI and accelerated compute platform. You're on track to launch in the second half of this year in line with your aggressive annual product cadence. Six chips, as you mentioned, in the Vera Rubin portfolio. Initial performance relative to Blackwell is very compelling, right? 5x better inference performance, 3x better training performance, and as you mentioned, and most important to your customers, 10x lower potential cost per token.

As you look at the strong demand curve ahead of you, and we've all heard about, we all track the value chain, the supply chain, but as you look at the strong demand curve ahead of you, what are the product areas or categories of your supply chain that you could see constraining your shipments as you start to unlock Vera Rubin in the second half of the year? Could it be 3 nm wafer supply? Could it be CoWoS? Could it be memory? Any bottlenecks that you foresee as you think about the strong demand ramp in the second half of the year?

Colette Kress
CFO, NVIDIA

Yeah. I think it's right to indicate, yes, there's a tremendous amount of demand out there for both AI and accelerated computing. And we have been focusing on that significant amount of demand and then on what type of supply we'd have to purchase. Keep in mind the work that we do in building any one of these data center infrastructure systems from the very beginning to the very end: it can take anywhere from three quarters to a year to complete. That means a lot of our supply purchasing today is not for what we need tomorrow. It has been in the works for a couple of years, because what it takes is focusing not only on the supply itself, but on the capacity our suppliers have needed to put in place.

That is an important part of our processes, thinking through every single one of our generations and our future generations and working with our suppliers. We feel very solid about what we see in this new calendar year and what we have in terms of supply. As we move forward, it's something to think about as growth continues: how much more can our suppliers do? We feel good about what we have ordered, what we have been confirmed for, and the supply that will take us through this year.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

That's perfect. Why don't we take a step back for a second as we enter the new year? This has been the concern, or the focus, as it relates to NVIDIA: how the market thinks about the NVIDIA team and the trajectory of growth. By the time we step into a new year, we already have a pretty good view of your customers' CapEx spending trends, right? And the market, as it always is, is very forward-looking, right? I think the market is starting to think about the infrastructure growth trajectory looking into calendar 2027, right? If I go back to October of last year, Jensen talked about $500 billion of visibility and backlog through calendar 2026, right? That's on both Blackwell and Rubin GPUs, right?

And we know that lead times for your rack-scale-based solutions are 9-12 months. And it takes, as you mentioned, a significant amount of supply chain management and coordination, capacity build-outs, et cetera, right? But the best proxy, I think, for continued CapEx and infrastructure spending by your customers is to look at your customers' forecasts and orders beyond 2026, right? Which I assume the NVIDIA team is already focused on. I'm not asking you to quantify, but given what you see in your orders and customer forecasts, are you already seeing a continued spending growth profile by your customers into calendar 2027?

Colette Kress
CFO, NVIDIA

Yeah. So let's go back to our GTC in Washington, D.C. That was an opportunity to help you understand that the combination of Blackwell and Vera Rubin together is about $500 billion through 2026. But the important part, correct, is that now we can start talking about 2027. When you think about what it takes to stand up the compute, to stand up a fully staged data center, that takes years, from the land, power, and shell to finishing out the build-out to eventually putting in the compute and getting it ready. So our customers can see from an event like today that Vera Rubin is here.

There have already been discussions about how to think about the amount of demand and where they will put it in the land, power, and shell that they have up and coming in the year 2027, so that's the right way to think about it. We're still working on 2026, where demand still exceeds supply, and they are still looking to see if there are quick adds we could bring in 2026 to help fuel that demand. So both of these things are happening at the same time. But this is very helpful to them: they have a good understanding from an engineering standpoint of what's capable, and now they can start thinking through the volume of what they will need for their data center builds. So yes, that is exactly where we're focusing as well.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

On the market concerns around an AI bubble: as you mentioned in your prepared remarks, you and Jensen have articulated three compute platform shifts that are all happening at once, which should mitigate a spending bubble, and I often feel like the market sort of misses this, right? The first is the transition from CPU compute to GPU-accelerated compute. I mean, we're seeing this in so many traditional CPU-based compute workloads and CPU-dominated segments of the market, where over time they're moving to GPU-accelerated compute, right? Jensen always talks about this, but EDA, chip design software, is a perfect example: most chip design software workloads were run on high-performance server CPUs not that long ago, but today many of them are running on GPU-accelerated compute architectures. We see that in the simulation markets.

We see that in the database markets and so on, right? So that's the first transition: CPU to GPU-accelerated compute in the existing traditional compute base. The second driver is, as you mentioned, the strong adoption of GenAI. And the third transition, again, as you mentioned, is Agentic AI and of course the onset of new foundation models that will power things like Physical AI, right? So I just want to understand, along all three of those compute platform shifts, where are we in terms of the adoption curve and the contribution to your current data center revenue profile? More specifically, looking into 2030, right? Let's take a longer-term view of this.

Looking into 2030, how are these three shifts going to profile into that sort of $3 trillion-$4 trillion of data center spending that the NVIDIA team is forecasting during that time?

Colette Kress
CFO, NVIDIA

Yeah, great set of questions. First, looking at accelerated computing. Accelerated computing is already here, and many of us are seeing it and working with it almost every single day. There's been a massive transformation of how search is completed, recommender engines, and essentially almost all of consumer internet and how we market to businesses and/or consumers. So that's an important piece, but keep in mind, it is going to take multiple decades to get through it all. There's a lot of movement to Software 2.0, transitioning software from CPUs to accelerated computing, so we're in the early parts of it.

It's moving quite fast as folks see the great benefit from accelerated computing and being able to manage the significant amount of data they have, which you're going to see as time moves forward. But then there's also our work with generative AI and Agentic AI. The important parts of that have also created an exponential growth in the amount of compute that's necessary, because one of the very big parts of moving to Agentic AI was long thinking: what can I do to get a response to a very difficult, challenging question? That additional long thinking takes a lot more inferencing demand and a lot more token generation as well, so we are also now seeing a surge in that demand as we move forward.

And our view, looking at AI going forward, is that we are in nothing more than the early stages as we move towards very sophisticated solutions that will augment a lot of the work we do in our offices as well as personally. So we know these big markets are driving a lot of this different demand, and nothing we see suggests any type of shortage or stopping of that. There's a lot more work to get completed. And the world as a whole still has to get that completed, not just some parts of what we see here in the United States. You have a lot of different Sovereign AI going on, and that's an important piece of it. And you have many, many different industries.

You have to go industry by industry. You can look at social media, but you also have to look at healthcare. You have to look at automotive. You have to look at industrial manufacturing. Each of these has unique ways of completing its work that can both transition and be infused with AI as well. So there is a lot still to go, and that's why we indicated that by the end of the decade, spending to build out this accelerated computing and these AI types of solutions will definitely be up in the $3 trillion to $4 trillion range.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

Maybe more near-term, focusing on calendar 2026: going back again to Jensen's comments at GTC back in October, where he talked about this $500 billion of revenue visibility and backlog of cumulative Blackwell and Rubin shipments through 2026. Obviously, as you move forward in time, you continue to get updated forecasts and orders. Ex-China, and let's talk about China a little later, has that $500 billion visibility and backlog number through 2026 continued to improve? And at what point are you supply constrained, such that any incremental orders push beyond calendar 2026?

Colette Kress
CFO, NVIDIA

So the demand, as we see it, continues to increase as folks look to enable more compute. A lot of it is the long test-time thinking that's needed. And we see this every single day. Since the time we said $500 billion, of course, we've seen announcements of new deals, with the CSPs, the model makers, as well as many of our neoclouds looking to add more onto that. So yes, more has occurred. And we are now starting to see folks provide the orders. We have orders for Vera Rubin, and folks are focusing more and more on thinking out a full year of the volume they may need in terms of Vera Rubin. So we're in a great position and getting better understanding.

We've learned over many, many years that the more insight we provide them into our infrastructure and what's there, the easier their planning and process is. So their demand needs are quite strong, and we are definitely in that process. So yes, that $500 billion has definitely gotten larger. And now we'll probably look at next year as well to start building up for all the different demand that we have there. But we cannot say anything more than that demand is quite strong.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

That's great. No, that's exactly what we're looking for and that's exactly what we thought. Maybe switching gears, because Jensen and the team, and you, did a great job of laying out the performance specs, as I mentioned before, right? 5x inferencing performance on Vera Rubin versus Blackwell, 3x better training. And then what's most important is the economics to your customers: you guys are driving 10x lower cost per token on Vera Rubin versus Blackwell. But I think the market has gotten a better appreciation for, as you talked about, the co-development: as you bring more systems and rack-scale solutions to the market, it is a solution that is optimized not only around compute, but also around networking and storage, right?

And so let's talk about networking, right? Lots of focus on networking lately, especially as NVIDIA and the industry transition to rack-scale solutions. There's a significant step up in networking dollar content given the scale-up connectivity with your NVLink networking and switching portfolio. And we define networking attach as networking revenues divided by compute revenues, right? That was about 19% in the July quarter and crept up to around 21% in your fiscal Q3, the October quarter. So on average, about 20% networking attach on your rack-scale compute systems, higher than the average attach over the prior nine quarters, which was around 16% or 17%, I think due to the scale-up adoption, right? As you move to rack scale, it looks like you also continue to get traction on Spectrum-X, your Ethernet product line.

Is 20% the baseline on networking attach? And as you drive more Spectrum-X, your recently announced Spectrum-6 platform, and your solutions for scale-across, maybe the mix trends move more towards the low-to-mid 20% range in the mid to longer term, right? I'm not sure, but I wanted to get your views on that.

Colette Kress
CFO, NVIDIA

Yeah, it's a great place to start, talking about our networking. We can definitely discuss where we've been historically and where we see the networking going. One of the ways we have been looking at it is: when customers are buying the full systems, which almost all of them are, how many of them are attaching our networking? That's different from looking at it from a dollar perspective; it's just the attach rate, and that is a very, very clean metric to understand. Understand that that number is nearing 90%. 90% are including some form of our networking.

Let's remind folks that our networking business is number one in the world, having moved from a very, very small scale to now the full development of all different types of switching capabilities, best of breed in terms of NVLink. Nobody has even figured out how to do a lot of what we have done. It is really about adoption, not only of our InfiniBand, which has been such an important part of supercomputing for decades and decades and is world-class, but also the quickness of providing those key features in Ethernet; the adoption of our Ethernet by businesses has been a huge, huge success. Stepping back and looking at this important AI wave, it's not enough to just have a GPU chip. It's not enough to have just an ASIC.

You're missing such an important part of what the networking does, both to capture the capability of scaling many, many units together and to deal with the complexity of traffic and the complexity of responses that you need. At some point you may be training and at some point you may be inferencing, and being able to manage all of that with all of our different inferencing platforms and our networking has been a huge, huge success. So even as we move forward to Vera Rubin, we are already working on some of the most important capabilities, and the networking has been important there. Part of that is also our work on the switch for CPO. That's been an important part: folks know the amount of savings and capabilities that you can establish through a CPO environment.

And we're going to be excited to bring that to market for them as well. But really, looking at what we see, it's very interesting. Even if customers have only a part of our compute, it's very common for our networking to still be chosen for their different systems. Even if they have one of their own ASICs, they will often use our switching capability as well. So we're in full design from end to end, and we're really excited about how the networking has been established within Vera Rubin.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

Yeah, you know, and as a reflection of the traction on networking: you've always been a leader in InfiniBand switching, right? And as your customers were clearly signaling to the NVIDIA team that they were moving to more of an Ethernet-based switching platform, the team brought to market several years ago your Spectrum Ethernet switching platform. That went from like zero to $10 billion annualized in like record time, right? And I think that last you updated us, your annualized run rate on Spectrum-X was like $10 billion. I think that was in the July quarter. In the October quarter, it looks like that stepped up to a $12 billion-$13 billion sort of annualized run rate for your Spectrum platform. Jensen and you and the team announced your next-generation Spectrum-6 platform, right?

This is a 102 terabit-per-second throughput switch, right? One of the fastest switches in the world. You're bringing that to market with Vera Rubin, right? So if you think about the $12 billion-$13 billion sort of annualized run rate in the October quarter, you've got a new platform coming out, Spectrum-6, and you look at your order book for Vera Rubin: like, where could this number on Spectrum be, you know, as we move through next year?

Colette Kress
CFO, NVIDIA

So, without giving a forecast going forward, but to understand where we already are in terms of the attach rate: you're going to see our growth in compute and our growth in networking be about the same. The only difference you have is the timing of when each of those systems is put together in the full data center infrastructure. Parts of the networking may be among the first things put in place in the data center, and some of the last parts of the data center can also be networking. So that's the only thing that really changes the growth. But still, we are expecting nearly the same, if not more, of an attach rate in terms of what we are seeing in networking and growth going forward.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

Perfect. And then maybe switching over to China. I know you got some questions yesterday in the financial analyst Q&A, but following the U.S. government's approval of H200 sales into China, customer interest actually looks very strong, right? So the question is, has the team started receiving orders from approved China entities for the H200? More importantly, how rapidly can the team start shipping H200s to these customers? And how should we frame your China revenue opportunity over the next 12-20 months? I remember Jensen had previously quantified the China revenue opportunity for calendar 2025 at $50 billion, growing at a 50% CAGR, right? 50% growth implies $75 billion of potential revenue demand for NVIDIA this year. Is that how we should think about the China revenue and growth profile and opportunity?

Colette Kress
CFO, NVIDIA

Great questions. Let's first talk about the H200. We're very pleased that the U.S. government saw that this was the right opportunity for us to be able to compete fairly worldwide and to provide a really great product to China. And that's what this is all about. The ability for us to ship H200 to our China customers still requires a license from the U.S. government, and the U.S. government is working feverishly right now on that process to determine the licenses for those customers. So the customers have requested the licenses, and we are now awaiting that part of it. At the same time, we have heard from these customers from a demand perspective. That's important for us so that we can prepare as those two things come together.

The POs and the completion of the licenses with the U.S. government will set us on our way to begin shipping the H200 to China. We hope that gets done soon. Again, it's not something we can control right now, but we are very pleased with the U.S. government's decision. So we're going to wait and see what will happen. Stepping back, though: what is the demand in terms of China? It's a very, very important economy and has a tremendous number of strong engineers and AI engineers, comparable to what we see here in the U.S. So it's also a very big business, as Jensen has articulated. And it's not a static business.

It's going to grow very similarly to what we are seeing here in the United States, if we can continue selling going forward under the different licenses that the U.S. government grants. So more to be determined on sales there, but let's wait to see how we can get our H200 up and running soon.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

Got it. And then on the recently announced non-exclusive licensing deal with Groq: Groq was focused on an SRAM-based high-throughput inferencing engine, very good for low-user-count and low-parameter-count inferencing. It seemed like more of an enterprise-focused solution versus NVIDIA's inferencing solutions, which focus on very high user counts and massive context input capability, right? More targeted at foundation model developers. But I wanted to get your views on the rationale for the Groq transaction and how NVIDIA thinks about integrating their technology into your product roadmaps and target markets.

Colette Kress
CFO, NVIDIA

Yeah, we're very pleased to have the Groq IP with us. What we created was an IP license stemming from Groq and their IP. But the other important part of it was the exceptional team that has now joined us as well. You're correct: their work on inferencing, low-latency inferencing, is work they have done with some tremendous engineering horsepower. We found it quite exciting and very similar to our own thoughts and work going forward. Bringing them on board with that IP, we're excited about what the teams can do working together. We're excited we got it completed before the holiday, and many of them are already with us beginning that work. So stay tuned.

We don't have anything yet in terms of the exact timing of when something will come to market, but this is an important area. Given the complexity of inferencing, the size of the inferencing market and the different needs it's going to have, and being with such an exceptional team, we will be able to put something great together.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

In terms of some of the market concerns that we continue to hear about, right? One of them is the concern around the gap between a few of the foundational model builders' current financial profiles and the data center compute capacity, right, that they've committed to over the next few years: OpenAI, Anthropic, et cetera. They're committing to a lot of capacity to you, to competitors, and to some of the large hyperscalers. Obviously, these AI labs will have to raise money, right? So how do you think about that as a risk to NVIDIA's business?

Colette Kress
CFO, NVIDIA

The model makers are both foundation model makers and open source model makers as well. Most of them, if you look at them as a whole, are taking a very methodical, piece-by-piece approach as they continue building a new training model: okay, let's move to the inferencing, and now let's get started on my next build, moving in that methodical way. Many of them have worked through how to source the raising of cash, the raising of equity, or a combination of the two, and how to work that carefully, either with a fund or on their own. I think a lot of that is very solid diligence that we'll probably see continuing going forward. They are essential.

These foundational models are essential from a context perspective in terms of what we have going forward. Working and forming and storming with them on how to get that completed, I think, has gone very well. Sure, they are looking long term to help folks understand that this is not about completing AI in the next couple of years. This is decades. They may talk about it in terms of gigawatts of size as we go forward. But the reality is it's really about the year by year, the quarter by quarter: how do they need to build? Where do they need to build? Are they on the research side? Are they working on the inferencing? And I think that process is fine. Many of them are also with the CSPs. That's a very big help for them.

The quality of what the CSPs can provide for them, so that they can concentrate on building out their models, is a great combination. And we're happy to support that. And much of the work that we are doing is through the CSPs and therefore the model makers as well. Whether those CSPs be a Neocloud or some of our longstanding, tremendously great CSPs, it's all working quite diligently. So I think we're going to see more of that to come. But again, you just have to take it day by day, step by step as they think about what they're planning to put together.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

You know, Colette, we're at the Consumer Electronics Show, and the one thing that we noticed was a distinct absence of new GeForce gaming platforms this year. And I guess the question to you is, are there concerns about tightening supply of DRAM and HBM memory for gaming? How are you prioritizing and allocating these components between gaming and data center? Do you think that there is the potential for demand destruction in the seasonally stronger second half of the year, especially given that DRAM pricing looks to continue to increase through the remainder of this calendar year?

Colette Kress
CFO, NVIDIA

Our gaming business has been a home run. Our standing with our gamers continues to be tremendously strong, and what we came out with in Blackwell was also hitting great strides. At the very beginning, we underestimated that growth, which was so fast early on, but we have now brought supply up to a good level. Given our size as a percentage of those gaming markets, we're going to continue to think through the prioritization and what gamers will need as we go forward. That's more a question for later this year and next in terms of how we focus, but the best part, which we're pleased about, is that these platforms, enabling gamers, creatives, and AI use cases, are really an important business model as well.

So stay tuned as we think this through. Demand, again, is quite strong. And we're going to try to make sure that we can serve as much of that demand as we can.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

And then my last question, and I appreciate the time spent here. You know, you've guided to mid-70s gross margins while acknowledging, right, the potential for rising input costs looking into this year. Which levers matter the most to protect those margins? Is it mix? Is it pricing? Is it cost downs? Is it supply chain efficiencies? And where are you least willing to compromise as you think about all of these levers?

Colette Kress
CFO, NVIDIA

Yeah, it's always an interesting discussion on the gross margin piece of it. It really shows our focus not just on getting the compute out, but on doing it from a great position with our manufacturers, our suppliers, and our internal teams in terms of how we can do this well. We are very close to that mid-70s right now. We don't want to look at this as, yes, we're here to grow, grow, grow that higher; we are here to keep what we've said, at the mid-70s, as we go forward. It takes a lot of different things. When you look at the complexity of the system, you are focusing on every last component. We have already done a significant amount of the ordering.

We understand what it took for the capacity of many of our suppliers, and we're very supportive of the many different suppliers that have pulled that together, but that now moves us to working together with manufacturing. How do we improve that cycle time? How do we think about improving all of the different aspects of the design as a whole? Not only can we get better and focus on that cycle time, we can also improve the cycle time of getting product to customers, and faster. Remember, as we move into this new year, we still have a combination of different platforms that we're bringing to market. It's not just one product, and that will both enable us and also be a mix that we have to keep in mind as we move into this new year.

So right now, you've seen all of our steps for Vera Rubin, as well as what you've seen with GB300, coming together very soon in that process. So we do feel confident that this is something we can work on. But let's not look at it as something easy. We will continue to work to stay at about that pace.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

Absolutely. Colette, we're just about out of time. I want to thank you as always for your participation and your support. We look forward to strong growth ahead this year for the NVIDIA team and another solid year of execution by the team as well, so thank you very much for your participation and support.

Colette Kress
CFO, NVIDIA

Thank you so much, Harlan. Okay, have a great day. Take care.

Harlan Sur
Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan

Okay. Thank you.
