Extremely excited to host a discussion with Colette Kress, the CFO of NVIDIA. Before I start, though, Colette, I'm just gonna throw out this little fact, right? You've been here 10 years. Actually, September was your 10-year anniversary. At the point you joined, the company was doing $4 billion of revenue, trailing twelve months. You're now doing $45 billion, roughly, right? The market cap's gone from $9 billion to $1.2 trillion. So I'm gonna start by just saying, good work. Congrats. You know, it's been a phenomenal run, so keep it going. Don't take your foot off the pedal. But before I start with the questions, Colette, I think you get the joy of reading a safe harbor, and then I think you might have some prepared comments as well.
I'll kick it over to you.
Sounds great. Thank you, Aaron, for having us. I do have an opening statement. Let me first say, as a reminder, this presentation contains forward-looking statements, and investors are advised to read our report filed with the SEC for information related to risks and uncertainties facing our business. Okay.
Good.
So, I enjoy coming out here for this event. But let me kinda start with some of the things that we are seeing here at NVIDIA. What has the last part of this year been about, and why is this such an important time? The important time is related to a real change in how we see data center computing going forward. The rise of generative AI has created a new paradigm in front of us, where we will see accelerated computing and AI computing being the thrust of a lot of the computing going forward. There is just an enormous installed base right now, about $1 trillion of compute, that has been about the same type of computing for the last several decades.
This is the opportunity that people see both for sustainability as we move forward, as a more efficient way to do computing, but also to transact using AI, as AI will be with us in almost everything that we do. So it's fair to say this is just the beginning of a journey. We have a huge opportunity in front of us, and we're looking forward to more to come. Yes, we gave our earnings just before the Thanksgiving holiday and showed strong growth, both sequentially and year-over-year, across all of our market platforms. But the standout, of course, is our data center compute. Our data center compute reached record levels again, both for training and inferencing, for our GPU and systems sales, and also for our networking.
We are reaching more and more customers every day in terms of the work that we are doing. The strength stemmed from our consumer internet companies and a lot of our enterprises. And let's not forget our CSPs, all growing in this last quarter, and we are continuing to see more of our specialized and regional CSPs grow as well. So I just wanted to start with that opening statement. I'll turn it back over to you, Aaron.
That's perfect. So, you know, I've got, I don't know, 50 questions here for you, so we're gonna try and get through as many as we can. But the inevitable question I always get is, what does fiscal 1Q of 2025 look like? And I think I know the answer from you. But maybe I'll just start here. Kinda help us characterize the balance, or the imbalance, I should say, between the current supply and demand environment, what NVIDIA is doing about it, and how do we think about the dynamic of lead times on some of your higher-end SKUs? How should we think about that progressing as we move forward?
So an important part of this last year has been our ability to scale as a company for this size of revenue. Some folks look at it as if it must have been an easy process. But it took a lot of work with many of our supply chain partners, not just in ordering supply, but keep in mind that it's also about ordering capacity. And our long-term relationships with them were really helpful for us to be able to scale as fast as we have. Unfortunately, we are still supply constrained, though, and it's going to take us a little while yet to catch up with that. We plan to keep scaling supply in each quarter of this year, and we also plan to do so as we go into 2025.
We're making meaningful progress in terms of catching up with that supply. Many folks look at both our ordering and what we have in inventory, but keep in mind, much of that spans many different durations: what we need just today, and what we are also procuring or solidifying as capacity for the long term. So we're on track next year to make some meaningful progress on that supply and demand. But at the same time that we are serving demand, we are also bringing new products to market. Those new products have in turn surfaced the onset of more demand for our next set of products, and I know we'll talk about that more.
Yeah, that's perfect. And I know you just mentioned, right, if I add up purchase commitments, inventory, and prepaid capacity, it grew 40% sequentially this last quarter. The point of that is that you would expect that to continue to grow sequentially over the next handful of quarters.
Correct.
Moving from the supply side to the demand side, how has your view on visibility changed, if at all? The demand visibility that you see, how demand is shaping? And the second piece of that question: I think last quarter when we talked, you talked a little bit about product cycle, that the cadence of the product cycle is an important variable to consider on demand visibility. Maybe help us appreciate that a little bit more.
So when you think back over time across the many different architecture generations that we've gone through, and what we are seeing today, our relationship with our customers has grown stronger and stronger. When that relationship comes to helping them think through what they plan to build in their data centers, that's a long-standing discussion, one that helps us work with them on the exact configuration that they need, but also helps us on demand visibility. Our work with them continues each and every day. If you think about how long it takes to build a data center, from day one of planning to standing it up, that's likely a year if you are a very well-seasoned team that has built data centers before.
So already we are seeing the work begin for that next year: what do they want to build? And bringing together both our existing portfolio and our new products coming out again builds more and more demand as we go forward. So we'll continue that process. Our visibility is strong, and when we talk about our visibility, each one of our customers, knowing where we were with supply, knew that to help us plan, they needed to provide us that deep understanding, essentially from a PO perspective. So we'll continue that path now with the new products and maintain this process of understanding demand.
Yeah, and just to put a finer point on that: if you look at the slide deck you've put out on your investor relations website, you kinda outline that cadence of product cycle, right? It looks like it's now more or less a one-year cadence, whereas in the past it was a much longer cadence. So that's part of this visibility discussion, that these customers are asking you for that kind of cadence of product cycle. Fair?
That's correct. Additionally, the market has advanced so much. The complexity of the type of AI and solutions that folks are looking for-
Yep.
They would love to see something new for each of the new plans that they have. And so for us to work on more different products, even in between architectures, as well as putting our architectures on a faster cadence going forward, that is helpful to them. Helpful both from a planning perspective, but also to support the new projects that they're doing.
That's perfect. So shifting gears a little bit to the competitive landscape... I often get the question of, you know, one of your competitors will launch a product next week. There seems to be more and more narrative around what the hyperscale cloud guys are doing. I know AWS had the Trainium2 announcement this week. You've seen what Microsoft announced recently. How do you characterize the competitive landscape that you're seeing in data center?
When we think about the work that we have done, we still step back and help folks understand: we didn't build a specific product, we didn't build a chip. We built a full stack. Many times we built a full data center, a data center for computing where, from the minute data enters the data center, you can work with NVIDIA: NVIDIA systems, NVIDIA's networking, NVIDIA's overall software stack.
Yep.
From full system software to getting as close to the application as we can. We help people build models. We help people correct models. That's the work that we do. So competition is hard to assess, because there isn't anything that is apples-to-apples with what we're doing. They're all very, very different. There can be specific chips that may help certain specific workloads, but the reason our customers continue to turn to us is the TCO savings: purchasing a full stack doesn't require them to add the significant amount of resources they would need on top if they had only received a chip. That work continues for us to help support their TCO efforts.
So when we think about other types of solutions coming to market, they're great. Our position is, the more the merrier, that's fine. But we do know TCO is going to be the number one goal of many of our customers today.
A lot of times I'll field the question of, is it... It's CUDA, it's the 4.8 million developers, it's the stack there. But it's so much deeper, right? I think we tend to get lost on the CUDA stickiness, but is that a fair assessment? It's so much more than just the CUDA layer.
If you think about the onset of CUDA, CUDA being our development platform that's on every single one of our GPUs and has been for close to 15 years.
Right.
That is the building not only of a very strong development platform, but of a community that has joined that platform. Everything that we do on our GPUs today is both backward and forward compatible. Every customer knows that. When they change to our new generation of architecture, everything still works. We also have to think through: where would that development community like to be? They like to be where all the other developers are, because so much work has been built over time, and somebody would have to rebuild that. So our position there has been a very full end-to-end solution that no one can really argue with, and they understand that we are here to continue to innovate going forward.
They can count on us, that next year, yes, we are going to be thinking about new products for this market as well.
... So I want to go further down the layers of the stack strategy in a minute here. Networking, software, I definitely want to touch on those. But before we go there, I wanted to ask about the China restrictions, this recent round. I know last week you mentioned, look, we're going to have solutions that adhere to the restrictions to sell into China within, quote-unquote, months. You also mentioned, though, that the China contribution would be down, quote-unquote, significantly this quarter. Help us think about that cadence. Did you take that full China business out of your expectations this quarter, and do we start to see it come back as some of these solutions come into the market?
Is that how we think about it into the next quarter or two?
The U.S. export controls this time were quite detailed-
Yep
... quite long, and took some real thinking about how we move forward to help our China customers. China is still a very big market, not just for us, but for much of the industry as a whole. When you look through the export controls, we have to carefully go through what is simply not an option, what they would not approve. There's a new category that requires notifying and reviewing with the government, and then there's a category that says, "Carry on, this is fine," for China. Now, what we want to do is make sure both our understanding of and our relationship with the U.S. government remains as solid as it has been.
We've created a great understanding of their needs, and we want to make sure we're following that. Keep in mind, our China customers want that as well. If we bring them a new product, they want to know that the U.S. government also agrees. And so we're working right now on the design of what we think we could do. We will certainly talk with the U.S. government and make sure that is also aligned with them. Given that that is an undefined timeline, you are correct: we're not looking for that to be a part of what we provided as an outlook for our Q4.
And so there is the sequential decline that we will see for China. Keep in mind, we will still be selling for our gaming business, and we will still sell some of the other parts of our data center business that we can sell there, but there will be a significant change in the quarter. Going forward, though, we will support China-
Yep
... with the approval and the understanding of the U.S. government.
This round, the restrictions were more bandwidth-oriented, you know? Has the dialogue with the U.S. government changed? Has it deepened, as far as the engagement on solutions that will fall under the thresholds of the restrictions?
Just given the complexity of the market, given the complexity of the semiconductor as a whole, and the complexity of AI, yes, it was a much more-
Yep
... thorough discussion on both sides.
Yep. And as far as the cadence of... You said months, right? As far as new solutions for the China-
We're working as fast as we can.
I got you. Okay. Let's go down the stack a little bit more. A topic I've written a lot about, given my coverage universe, is this networking business, which I think people are now really starting to see the significance of. To put some context to that: when you bought Mellanox, the business was running at $1.3 billion of revenue. I think if my math's remotely right, you did $2.6 billion or even $2.7 billion of revenue in networking. I know Jensen endorsed $10 billion+ of annualized revenue in this last quarter. Help us appreciate that a little bit more.
First of all, I want to know how much is InfiniBand, and then I'm going to get to Spectrum-X and how you see that evolving as far as even deepening that networking strategy.
Yeah, a great question regarding networking. At the time that we completed the acquisition, one of the things we knew was that it was a match of culture, in terms of how both teams worked on innovation and thought about where the future would be, and that the basis of their data center business had been high-performance computing, very similar to what we've done. We're so pleased with how the acquisition has helped both our solutions for customers and the partnerships that we now have across so many of our peers in Israel. And you're correct, we've now reached an annual run rate of nearly $10 billion in networking.
A very sizable amount of that is where we sell GPU systems and networking together. Customers look for our high-end networking solutions. Why? Because they're the best of breed-
Yep
... for accelerated computing and also for our AI solutions. If you are doing AI, both training and inferencing, the importance of InfiniBand as the standard for those large clusters is very, very key.
Mm-hmm.
InfiniBand has also grown even faster than the total networking business-
Yep
... that we have, and we have very large customers that have been using it and installing it throughout, and that's an important part of the process of building out their data centers.
Yep.
But we also understand that Ethernet for accelerated computing and AI is very key. And so our Spectrum-X will be coming out in the new calendar year. That will again be there with the high speeds, moving from 400 gig to 800 gig, and it will be very, very key, now based on Ethernet. Ethernet is important for enterprises with the multi-tenancy types of data centers that they have, and we do know that is an important piece. So we're going to be able to really serve both of these markets.
... So, Colette, there's this debate about InfiniBand versus Ethernet. Does Ethernet replace InfiniBand? Do you look at Spectrum-X as being accretive to the business, or is it an either/or? Or do they just play in different pieces of this AI stack? A lot of your white papers delineate between AI factories versus AI clouds, and it seems like that might be a delineation of Ethernet versus InfiniBand. Maybe help us understand where one plays, and is it accretive to the model?
It's absolutely accretive. This is not taking away. InfiniBand, again, is a standard that many will have. Now opening it up for those that are on Ethernet is an addition in that key place. It's true that we think about it as: what will the AI factory standardize on, for example the supercomputers that are built just for AI, versus what will others standardize on. Thinking about the traffic that comes into a data center, particularly for some of these large inferencing platforms, both InfiniBand and the new Spectrum-X really, really work to manage all of the traffic challenges that may be there.
Yep, that's perfect. And again, that's Q1 when those come out.
It's coming out.
You've announced partnerships with Dell and the server-
Absolutely
... ecosystem. Okay, great. Sticking with the product portfolio: the announcement this week that AWS was the first deployer, I think, of the GH200. So that's Grace Hopper, the combination of the Arm-based CPU and the Hopper GPU. Talk a little bit about where that fits in the strategy. What does Grace Hopper look like as we start to think about that piece of the product portfolio going forward?
Correct. So we came out with Grace Hopper 200, and we started shipping it in Q3. Q3 covered many of the supercomputer design wins that we have had, so the shipping has begun. But what we have now is Grace Hopper 200 with Amazon, with their AWS EC2 offering. What is important about that is they will also use it to create a full supercomputer, where you are now able to keep 32 GPUs together and working, as well as a new, revised NVLink in there. This is, again, yet another new product introduction. We're excited to work with AWS. They will be standing up the very first GH200 as a CSP.
There will also be the opportunity for them to work with us on DGX Cloud on GH200 as well. So now, working with customers on software and solutions using GH200 is a great opportunity, both in using that Grace CPU and in the completely faster performance as a whole for AI.
Is it gonna be Grace Hopper, GH200, GH300, whatever the subsequent versions might look like, or is there just a Grace? Is there a market for just an Arm-based CPU from NVIDIA?
There is an opportunity for just a Grace. In new product scenarios that we could see in the data center, you will likely see opportunities for Grace as well.
That's perfect. So I wanna shift now to software, something we've also written a lot about, and the reason for writing more and more about it is that I hear you becoming more vocal about it, right? You said this last quarter it's on pace to hit $1 billion in ARR. Can you walk us through software monetization for NVIDIA, like the big drivers? Certainly I'll have questions after that, but walk us through the key drivers of the software side.
Yeah. We talk about our software more and more because it is an important reason, again, why people choose our stack, and why the work they're doing is so successful. Over the years of our software building, there is software that comes with every GPU, even though it doesn't appear as a line on the invoice; we provide it for free. That is important to the work they're doing, but now there's a new opportunity for us to look at software as monetization as well. And there are reasons for that. Our work is with enterprises. Our true end customer is the enterprises around the world, of all shapes and sizes, for the work that they do.
When they are building accelerated solutions or AI and they need help, as they are likely not staffed with a significant number of software engineers, that software stack is essential. It's essential that things have been pre-built and pre-designed so they can work with that structure. They can also turn to us for help and assistance on fixing models, optimizing models, and additional work for new projects they may be able to do. Those enterprises are very focused on seeing their AI computing in the same frame as all of the computing in their data center. It's important that our software now leverages and works with VMware, as most enterprises leverage VMware to manage and operate all of their data center and all of their different workloads.
So it is key for us to be a central part of this, as we see data centers in the future having a very big portion that is accelerated and AI... Those enterprises are looking for a solution that answers: who's accountable for keeping up that software? Who is providing the security platform with it? And how can I create a trusting relationship around it? That is why it monetizes. That's why this is something we can actually sell. NVIDIA AI Enterprise is our software platform, essentially the AI operating system for enterprises. That will be a very big part of our software going forward. We have other components as well. Omniverse is a key component, and let's not forget our AV software-
Yep.
For automotive that will be with us. These things will scale not only with just our types of customers, but just because of our infrastructure. As people install more and more infrastructure, that operating system will be important for them to use.
So if I think about $1 billion ARR this year, is it fair to say that the overwhelming majority of that, I'm sure... Unless you want to give me a number, which I don't think you will... is the overwhelming majority of that AIE, AI Enterprise software today?
There's a lot of different pieces in there, but most of it is associated with what's going to the data center, and a lot of different data center components.
And that's interesting, because is that consumption model through your cloud partners? Again, now that I look at it, you've got Oracle, Microsoft Azure, Google, and just this week, DGX at AWS as well. Is it consumed through your cloud partners, or is it consumed on traditional enterprise on-premise infrastructure?
Yeah. The great thing is, it's consumed in almost every form-
Yep
-of the channel that you can think of. Whether it's "I'm going to self-design it with an ODM, or with a Dell, with an HP," or "I'm going to have cloud credits, work with my cloud provider, and download the software there," all of these are ways for them to procure our software. We've made that integration easy; you can pretty much get it in a lot of different places.
So through the cloud guys, priced per GPU, per hour.
That's correct.
-SaaS model.
It is. You should look at it somewhere in the range of about $4,500-$5,000 per year-
Yep
Per GPU type of look. Somewhere in that range is what we're looking at.
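For a rough sense of what that pricing implies at fleet scale, here is a minimal back-of-the-envelope sketch. The cluster size, the midpoint rate, and the function name are illustrative assumptions based only on the roughly $4,500-$5,000 per GPU per year range quoted above, not NVIDIA figures:

```python
# Hypothetical illustration of the per-GPU annual software pricing discussed above.
# The rate range ($4,500-$5,000/GPU/year) comes from the conversation; everything
# else (cluster size, midpoint assumption) is made up for the example.

def annual_software_cost(num_gpus: int, per_gpu_rate: float = 4750.0) -> float:
    """Estimate yearly software licensing spend for a GPU fleet at a flat per-GPU rate."""
    return num_gpus * per_gpu_rate

# e.g., a hypothetical 1,024-GPU deployment at the midpoint of the quoted range
print(annual_software_cost(1024))  # 4864000.0
```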
Okay. And I've asked you this many times after conference calls, but one of the metrics you guys talk about is these multiyear cloud service agreement numbers. Some of that's internal usage, a lot of that might be internal usage, right? I'm always looking for that leading indicator on the software side. And some of that is actually your potential payment to your cloud partners for the infrastructure behind the software. Is that a fair assessment?
So we have cloud service agreements, just like every other-
Yep
-enterprise has out there. Our cloud service agreements serve many different uses: the ability for us to stand up in the cloud so we can understand what enterprises are facing, and then using that to test our software, test our future solutions, and work with them on new use cases for products. We do this all the time. So most of that right now has been centered around our internal use.
Yep.
But now we are building for DGX Cloud, where we have established space within the CSPs. So for any customer coming in from an enterprise that says, "We'd like your DGX Cloud," we can move them across multiple different CSPs. They don't have to be in any one; we're in almost all of them.
Yep
-and that will help them come to market as quickly as they can on their product solutions.
So, two other real quick ones on the software side. Omniverse, I want to say, if my memory's right, was initially introduced back in the latter part of the 2021-ish timeframe. Maybe I'm a year off, I don't remember. But what's the progression of that software piece? Does it just take a little bit longer? Is it a more significant change?
You know, there's great progress already in terms of what we're seeing in Omniverse. We're working with very large manufacturing and factory types of builds, and the work they need to do to redesign and/or initially design any one of those factories is important. For the most efficiency, they are clearly using Omniverse. Many of the large car companies and car manufacturers are really looking at Omniverse to help them there. But you can see this for almost any type of factory-
Mm-hmm
... that is being built, industrial factories or warehousing, and the question of how do I redesign that? You have the ability to create a complete digital twin of your existing and/or future facility without going through a full prototype of a building and making large errors along the way, by using an Omniverse environment. What happens is, each and every day, more types of uses come to Omniverse as we add many more of the capabilities they need to do their work. That will keep being added, so it will be a continuous evolution. But that 3D type of view, versus the 2D that is used so much in design and build, will be essential. So, yes, we're pleased with the progress, and we'll continue to see it in the future.
And then the final thing on software, automotive, you know, am I still thinking, like, Mercedes flagship, Jaguar Land Rover flagship, like, 2025, 2026 timeframe? Is that fair?
Absolutely.
Okay.
We are busy working, but yes, that's when we expect the pilots to start, as well as the full fleet for both of those companies.
So I've got three and a half minutes left. I'm gonna maybe rapid-fire through a couple quick questions. Mix has been a huge driver of the business. Where do we think gross margin should go? I mean, it's remarkable, right? You're at 75% gross margin. How do we think about the trajectory of gross margin? It seems like data center mix will continue to go higher, and software's going to layer on top of that. How should we think about that?
Yeah. So when you think about our gross margin, although it is an important metric for many of us on the P&L, keep in mind it doesn't capture everything about our ASPs or our actual costs. It really only includes the manufacturing costs, because the work that we did in designing the software that is in many of these products, and the full engineering work on many other solutions that keeps giving even after we have shipped the product, doesn't easily get represented there. Most of that is still in OpEx. So it's a metric, and it's an important metric.
And at 75%, given the size of our data center business, you are now seeing the company margins and the data center margins be about the same, because what you see as a company total is mostly just that data center. We believe this is about the level where it will stay, with the continuation from this point probably being software adding to that. But we think you are pretty much now seeing the data center margin.
That's perfect. And then the other quick question I want to ask: you exited this last quarter with $18+ billion in cash on the balance sheet, and I think everybody can look at a model and say you're going to generate a lot of cash. How do you think about strategic M&A? This platform strategy: Mellanox was a home run, obviously, and Arm didn't play out. How do you think about the balance of strategic, maybe platform-expanding, M&A activity for the company?
Yeah. I'd say, first stepping back, cash allocation is a very top priority, making sure we think through all the right avenues for applying that cash. First is always investment back into the business, whether that be capital or OpEx. Investing in the business and in innovation is going to be our number one use of cash. Secondly, we want to make sure our investors get their portion, and we want to make sure there is no dilution associated with the equity that we provide to employees. Our equity to employees is very important; it's a very important part of their compensation, but we do want to keep that dilution about as flat as possible.
After that, we look at investments every single day, investments through which we can learn from many companies in terms of the work they are doing, but also, in working with other companies, whether there is an opportunity for M&A. It's hard to have found the perfect Mellanox in the past and think that would be easy to find again. It's a new M&A environment right now, but that doesn't mean we stop looking. We look at smaller companies, teams that bring a unique add to our company, and that is something we'll look at all the time.
So we've got literally 12 seconds left... 10. I'm going to ask you just one pointed question. You talk to hundreds of investors after earnings, right? Earnings don't end and then you go do something else; it's a continual flywheel of discussion. What are you surprised that people aren't asking you about more? Is there any topic where you're like: man, I'm surprised I'm not getting this question? What would that be?
The surprise of the question that I am not getting. Well, I would look at it this way: our goal with earnings, and the reason we do talks like this, is to make sure there is clarity about our products and clarity about accelerated computing and why there has been this growth. That has been an important part for us. So I do believe the questions mainly surround a little bit more detail that I want to add, but people have gotten a very clear understanding that with generative AI, there has been a significant change in focus for enterprises around the world toward building out their AI solutions.
Each enterprise looks and says: the future is about using enterprise AI, and otherwise they will not be able to compete in the market. That's a pretty big market to go after and work through. It's also important to think about how our focus on Sovereign AI has come forth, and folks have asked: what do we mean by that? That has probably been an important key understanding. We're speaking here in the U.S. We see ChatGPT. ChatGPT is U.S. culture, U.S. data, U.S. ways of thinking. Each and every nation and/or region wants the same thing. They also know that they have proprietary data and information as well. So not only do we speak with so many different enterprises on their work right now to add AI, we are also working with many regions to build out what they need.
Colette, we're over time. I appreciate you joining us this morning.
Absolutely.
Thank you so much.
Thank you.