Welcome, and thank you for standing by. I would like to inform all participants that this conference call is being recorded, and parts of this call may also be reproduced in JPMorgan Research. Views and opinions expressed by any external speaker on this call are those of the speaker and not of JPMorgan. If you have any objection, you may disconnect at this time. I would now like to turn the call over to Harlan Suhr.
Good morning, everyone. Happy New Year, and welcome to JPMorgan's virtual fireside chat series here at the 2025 Consumer Electronics Show. My name is Harlan Suhr, semiconductor and semiconductor capital equipment analyst for the firm. Very pleased to have Colette Kress, Chief Financial Officer of NVIDIA, here with us this morning. It's been a tradition for the past 11 years to have the NVIDIA team kick off the investor events here at CES. I've asked Colette to start us off with an overview of Jensen's keynote last night, and then we'll go ahead and kick off the Q&A. Colette, thanks for joining us today. Happy New Year, and let me go ahead and turn it over to you.
Great. Thanks, Harlan. We're really pleased to be here. But let me make one opening reminder. This presentation contains forward-looking statements, and investors are advised to read our reports filed with the SEC for information related to risks and uncertainties facing our business. Well, last night, Jensen did give a keynote here at CES to a very well-attended audience, and we had some great things to announce. We have Blackwell GeForce coming to market, and we're excited to bring that first from a desktop perspective. We have four different offerings coming to market. This is where we have about 2x the performance of our last generation. And what we're seeing right now is visuals that are almost fully realistic in terms of what we are putting together for Blackwell.
This also includes DLSS 4, our fourth generation of DLSS, an important piece of using AI in our graphics presentations as well, improving games year-over-year. So we're very pleased with this coming to market. We also have our notebooks that will be coming to market later this year. The notebooks get into a different part of our work. This is where we focus on helping the OEMs with development, looking at cooling and incorporating a high-end GPU together with that. So our notebooks, again, are top of the line, focusing on what we can do with Max-Q technology to improve those, and you will also see those coming to market later this year.
Additionally, one of the highlights that we talked about here at CES was the Cosmos foundation model, a very, very important model given some of the themes that we're seeing here at CES for sure, which are physical AI, autonomous driving, and automobiles. We have, for more than 15 years, been working on the development of AV cars, and now we are able to produce an amazing model that is filled with all different types of videos and the ability to also include synthetic data generation. This is a huge amount of data, terabytes, that is going to be necessary for these models, both for AV, but also when you think about the robotic models that will be essential.
Right now, here at CES, we're looking at all of the different robotic partners that are out there and that are here with us, and we'll be supporting them with our overall Cosmos foundation model; bringing that to market is very key for them. We've also talked about our Omniverse platform. Our Omniverse platform is very well known for working with a lot of our automotive companies, and we now see even more adoption as we think about the physical AI capabilities and even the robotics. We have new blueprints coming to market with Omniverse, and you will also see a lot of demonstrations here today focused on that. One of my favorite pieces that we also talked about is Nemotron. Nemotron is now focusing on a very important part of generative AI, which is agentic AI.
Agentic AI will be where you have agents that are actually doing the work for you. We have created Nemotron to help put together a model with blueprints, to help you design what you need for such things as fraud detection and call centers at many companies. So this is also a great addition to so much of the work that we already do in the ecosystem, developing what is necessary for generative AI. And then lastly, our Project DIGITS. Project DIGITS is essentially a small version of our DGX, a small supercomputer that can essentially sit on your desktop and be available for researchers, developers, and folks working on AI projects. You can have just one, or you can actually put two together.
Both of these things are extremely helpful in this world right now, that you can get access to that type of compute even on the desktop. So those were some of the highlights that we announced last night.
No, that was a great overview. I appreciate that. My first few questions are going to be on some of the things that you just highlighted from the keynote. The first thing is, and we've heard this from enthusiast-class gamers, you know, obviously a very strong lineup of gaming solutions with the RTX 50 series announcement. As you mentioned, 50-100% better performance versus your 40 series platforms. You start shipping the RTX 50 series in January. Now, you typically do start off a strong product cycle with a strong ramp, but you did call out last earnings that your gaming business would decline sequentially this quarter on supply constraints.
I mean, is this due to the more specialized component requirements of the RTX 50 series platforms, or is there some other gating factor to what looks to be the potential for a very, very strong RTX adoption cycle?
Yeah, we absolutely believe that our Blackwell will be exceptionally well received in the market and a great cycle for us. What we were speaking to is that we are, for example, in the ending part of our Q4. We end Q4 right now at the end of January. So in our communication we indicated that we would be a little soft due to supply reasons going into Q4. That supply is really about our current generation, and we will probably be a little bit tight on that for a while this month. Nothing new, nothing to be concerned about. We're already on Blackwell, but we're just now finishing our prior architecture, and that will be a little tight.
Got it. And then, as you mentioned, you know, and something that Jensen and you have brought up a few times, actually during earnings and last night as well, is agentic AI, right? Agentic AI, as you mentioned, represents the next generation of artificial intelligence. It enables systems to perform complex tasks autonomously, right? As Jensen said, perceive, reason, plan, act, right? And the deployment of agentic AI in sectors like customer service, content creation, software engineering, healthcare can lead to increased efficiencies, improved outcomes. I mean, NVIDIA Blueprints, as you mentioned, plays a crucial role in accelerating the adoption of agentic AI by providing reference workflows tailored to specific use cases. How significant is agentic AI? My assumption is this is very important for enterprise. How important is it in driving higher inferencing adoption and software uptake amongst your enterprise customers?
Yeah, we're definitely on a journey of AI as we move forward. And what you have seen today with ChatGPT and many of the foundational models out there is tremendous help in terms of tools, tools that help you every single day. And their ability to generate tokens, and that inferencing, has been very key. But what we think is important to understand moving forward is that this will continue to advance to agentic AI. As we discussed, this is an ability for us to create solutions where the work that we do every day can also be done behind the scenes using AI, with all of the different models that we have together. That, again, is an inferencing solution after the models have been designed and put together.
It is very important for the enterprises and also a lot of the partners that we have developed through this period of time. We see AI factories that will be built around the world, in many unique countries as well as here at home in the U.S. We have already started our work with companies such as Accenture and Deloitte as they continue helping these enterprises design their future AI use. And agentic AI is going to be an important part of that for many of them. There are some simple things that we can all appreciate improving in the future: call centers, testing for different types of fraud, and handling any type of risk management.
These are things that AI, with a significant amount of data pulled together into a foundational model, can actually assist them with.
That's great. Let's take a step back and review calendar 2024, right? If you hit your guidance this quarter, revenue for calendar 2024 is expected to more than double, much of which is a reflection of the team's strong product cycles combined with the aggressive build-out of GenAI. As we look into calendar 2025, consensus has the team up another 50%-60% in revenue growth for your fiscal 2026. Help us understand the demand trends and product cycles that will drive your fiscal 2026. And how should we think about the team's longer-term revenue growth profile?
Yes. So if you think about the journey that we are on, it's important to understand that there are two major trends and transformations occurring at this time, each of them an extremely large transition and opportunity. You have a $1 trillion installed base of general-purpose computing solutions that have been a part of us for several decades. And this is the opportunity that so many see in order for us to grow with the data that is available and the overall improvement of the data center. Accelerated computing will be essential. You also have the work of moving to AI solutions. How can I better use this data with AI types of solutions in our everyday life? And so those two transitions are not something that's going to take a year to complete.
It is going to be with us as a journey over the next decade and more, even past that. So when we think about where companies are and how they are thinking about what they are building, and their build-out of data centers, they are working on that transition. So we still have a lot of growth opportunity in the future. And as we continue to bring different types of products to market, each of the customers is thinking about how to apply them to the work they are doing on AI. So each one of our generations is being designed for often many different types of use cases going forward.
There is now a long list of different configurations that we have available, some of the most advanced configurations that have included essentially a full data center scale in terms of what we're putting together. That is helping us move forward and ease the adoption of this important transition that we see.
And so that's a good segue. I want to spend a little bit of time on the data center business, right? And you know, with the aggressive annual product cadence within your data center business, the manufacturing complexity, right, of these Blackwell-based system solutions and just the sheer number of SKUs supported are driving an increasingly complex supply chain and value chain dynamic. And it's not a surprise. I mean, you know, as you continue to help your customers get to market faster with more system-level solutions, with more SKUs, I would assume there would be challenges given this deployment model, right?
And to that notion, I mean, there have been concerns on the production of GB200 racks driven by potential supply challenges in, you know, non-GPU components across areas such as power management chips, or what we call PMICs, or the NVLink Switch boards that we hear about, cable cartridges and liquid-to-air, you know, sidecars, right? And so near term, I mean, are these potential issues impacting the GB200, B100, B200 shipment profile here in Q4 or the first half of the year? And you know, how is the team helping to alleviate these supply chain and value chain bottlenecks for your customers?
So let me first start with our Blackwell GB200 systems; these, and many of our other systems, are some of the most advanced systems ever created in the world. What you are seeing is an amazing bring-to-market and adoption across a very large set of different customers. Where we are in terms of completing those systems: they are full systems. We are working jointly with our end customers to be assured that what they have designed and want to complete with the use of GB200 is functioning and working. All is on track. All is as expected. We have not only been in production, but we are shipping Blackwell at this time, and so we do believe our path is an important path.
We have many customers excited to be some of the first to stand up GB200, and you're going to see a lot of different configurations of Blackwell come to market. And each of those is for different needs. Is it complex? Sure. This advanced work is complex work. Are we the ideal company to bring that to market? Absolutely. What we've done with our suppliers and what we have done with many of our partners is very unique. Our current suppliers, whom we have worked with for decades, are right there with us, helping us improve the supply that we have, but we have also created a group of new partners that we are working with at data center scale, who understand our liquid cooling plans and our whole plan for the design of the data center and the networking.
That has been an essential part of the design of the systems that we have done. That work with so many different partners along with us on this journey has made the design process and the installation process with many of our customers one of the best ones out there.
Yeah, and you brought up a good point, and we'll get to it in a minute, but you know, there's the matter of standing up these very complex compute-based systems with your cloud and hyperscale customers, and the challenges associated with making sure that the data center architecture and infrastructure is in place. And it sounds like, based on your last response, that you are working with your customers and partners very closely here. I know that on the earnings call, the team did talk about demand exceeding your supply capability. Is that still the case? Is demand still outstripping supply? What are the particular, let's say, components? Is it CoWoS? Is it HBM? Is it other types of components that are continuing to be in tight supply? And the key question is, when does the team expect a lot of these supply constraints to ease?
Yeah, so when you have watched us over the last seven quarters, it's really interesting to look back and ask what size of compute we were able to bring to market for our customers seven quarters ago versus where we are today. And that has all been a case of our suppliers working with us not only on capacity, or focusing just on units and getting us more of that supply, but also working with us on cycle times and on how we can get even more different suppliers for what we need to accomplish. So I give a lot of the credit to the work with our suppliers, who are right here with us working on that. Some of the key areas over the next couple of quarters are going to be advanced systems. So we have advanced packaging.
We have some of the most advanced memory out there, and then the work that we are doing on the connectivity necessary to tie these systems together also takes some of the most advanced networking, the advanced networking for AI, to go with it. Those are going to be the places that we will focus on. Yes, at this time, demand does exceed supply, but we are working through it, as you've seen, each and every quarter, so that more and more supply is there; or said differently, we're able to serve more demand and increase our revenue each quarter as well. We're going to continue that work, and that is our path as we go into this new fiscal year.
As your supply capability continues to improve and you continue to unlock that, in a profile where demand is extremely strong, would that imply that the team's data center business can grow sequentially every single quarter through this fiscal year, do you think?
When we think about the demand that's in front of us, it's absolutely a growth year. As for the path of every quarter, we're going to try our best, of course, but we only guide one quarter at a time at this time. But keep in mind, our focus now is additionally on more advanced systems, bringing those to market. And again, we do believe the growth is in front of us in this new fiscal year.
Your next-generation Blackwell Ultra is set to launch in late 2025, in line, right, with the team's aggressive annual product cadence. Help us understand the purchasing decision dynamics with your customers, given that you'll still be ramping prior-generation Blackwell solutions at the same time that you'll be rolling out Blackwell Ultra, right? How do you, your customers, and the supply chain, I mean, manage the simultaneous ramp of two product families?
So a great effort all around, but let's understand why. Why is this important? We're in one of the largest transitions here. And each and every time that we come to market with a new product opportunity, there are additional advancements, additional understanding of where AI is moving, and then we help each one of our customers with the path that they have for AI and the types of solutions that they want to bring to market. We have our own roadmap, but our customers do as well. We are matching their roadmap to our roadmap to say, what project do you want to bring to market at that time? Which one of our systems is going to be enabled for that particular project that you have?
So this is a great opportunity for some of the most advanced systems, and those working on some of the future generations of AI, that you'll see moving to our top, top products. But our enterprises also now have the ability to create AI factories, even with existing libraries and software and NIMs, to actually help them get to market. So there's something for everyone, which is important in terms of our scale. If we asked a brand new enterprise, would you like a GB200? We'll deliver it to you and set it up. That's a big undertaking for an enterprise to do, but they now have the ability, working with so much of the cloud and the different offerings there, to get started working with us on their NIMs and building out different models in that perspective as well.
So is this necessary? Absolutely. The cadence is important. It is the speed of what you are seeing in the industry right now, and we're keeping up with it.
And this is a dynamic that was occurring, and I know you got some questions on this during the earnings call, but there have been some concerns about performance scaling limits, right, on some of these foundational AI models, primarily focused on the potential for diminishing returns in the pretraining phase, right? One approach here is to augment real-world data with synthetic data, right? There's also the emergence of new scaling laws, right, primarily around post-training optimization and also around inferencing, right? Jensen talked a little bit about that on the earnings call, right? This whole notion of test-time compute inferencing, right? So how does the NVIDIA team think about the continued scaling of models and the requirements for more GPU compute capability going forward?
Yeah, we do. We do believe that there are multiple scaling laws, including many of those that you discussed. For a long time, folks just thought about training and pretraining: you train once, it's done, and you move on. But actually, in the same manner that you're always learning, always getting educated, you're also always training. And it is important to think that post-training can incorporate many different aspects. And as you discussed, you can move to synthetic data that says, can I approximate something in the future using a synthetic version to also expand the model and improve the model for use? And so you're going to see both the pre- and the post-training, and that verification process, continue. But there's yet a third focus as well. This is the time to token, which is so essential.
What you see today is ChatGPT: you ask it a question, off we go, and we get an answer. But there is this part where reasoning and long thinking are going to be essential to the work that is necessary in much of this inferencing and in many of the different types of questions that will be there. So this is again where a lot more scaling needs to happen. This is an important reason for a system like GB200, which is not only focused on the ability to train, but is a massive improvement in terms of what it can do for inferencing, what it can do in terms of saving money and saving time to token. These are the important parts, as this reasoning and scaling of inferencing become essential.
You know, maybe more near term, right? Many of our client conversations focus on the sustainability of demand as we move into calendar 2026, i.e., cloud and hyperscaler CapEx spending, right? And the market is always concerned about the potential that there will be some capacity digestion on new product cycles. Are your cloud and hyperscaler customers going to monetize their AI investments? I mean, looking at it from NVIDIA's perspective, obviously with your customers, you're in discussion for multiple generations over multiple years. You understand their initiatives, right? So how should we think about future spending and the potential for things like digestion cycles and so on?
You commented appropriately that our work right now with most of our customers is decades long, and that work together, the NVIDIA team with the customer team, engineering teams working together to find the right solutions for them and the right systems that they need to put into market, means that yes, we do focus on some of these things together. We do focus, each and every year, as they further think about their investment cycle. Their investment cycle says, yes, from a capital perspective, how much, and am I ready to go? Because even if you think you just have capital, many think about data centers on a four- to five-year projection, meaning they are looking at purchasing data center space really that far out. It is a necessity because it's not something that can start up in a month.
It is something that needs long thought in terms of where, how much, and how to continue to make them some of the most efficient types of data centers. So there are their needs for data center space, and their needs in terms of the right types of power. Power for the last several decades has been the same on the grid. Not much has moved from that. And you're going to see a big movement, a movement in how that grid is used and also new forms of power to drive that. But the work that we're doing is not only about the best performance, but also what we can do to lower the cost overall and to get the most efficiency that we can in terms of sustainability as well. That's what accelerated computing and AI bring together.
The journey that we're on doesn't see a path of stopping. The journey that we see going forward, with more to come, is probably one of the most unique times, where you see the adoption of a very important transition taking place worldwide all at once. In the past, a lot of times it goes region by region, and the influence spreads by an expansion across the globe. But uniquely, this is a case where the world as a whole is focusing on this. I think it's interesting to see all of the different AI factories that have emerged, for example, in Europe and different areas.
Governments are moving from, yes, we have done a lot of supercomputing work, we've done a lot of high-performance computing work, to now building AI factories and getting ready for this for their own country, because it will be essential to have that AI even within their own country. So that's the uniqueness that you see versus any other type of large transition.
Yeah, you know, one of the things I feel pretty confident about in terms of sustainability of spend is your enterprise-focused customers, vertical markets, maybe sovereign AI. You know, we recently ran a CIO survey with the world's largest global corporations and enterprises. And the survey was insightful in that CIOs on average estimated that their current annual spend on AI was about 5% of their total IT spending budgets, but that was expected to grow to 15%-20% of their budgets in three years. So in dollar spending terms, that's like a 50%-60% CAGR over the next three years, right? So a very strong growth profile, clearly implying that this is not a segment that is likely to take the foot off the pedal anytime soon as it relates to spend, right?
So the question here is, what's the anticipated contribution of enterprise and vertical markets as a percent of your total data center business this year? I know last night, for example, Jensen mentioned that the automotive vertical within data center is already, I think, $2.5 billion-$3 billion in revenues this year. I know Kimberly, head of your healthcare business, was telling us at the beginning of last year that healthcare was driving closer to a $1 billion per year type run rate. And then additionally, we continue to hear that the majority of all large corporate proprietary data still resides on-prem and customers want to keep it on-prem, right? Which fits perfectly with your AI Enterprise software suite and NVIDIA NIM services.
So if you can just, you know, give us a read on the engagement activity level around enterprise AI and around NIM, but more importantly, you know, what is the contribution of enterprise, vertical markets, and sovereign as a percent of your total data center business this year?
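[Editor's note: the growth rate implied by the survey figures in the question can be sanity-checked with a quick compound-annual-growth-rate calculation. This is a sketch: the 5% and 15%-20% budget-share figures come from the survey as quoted, while the ~4% underlying IT-budget growth rate is purely an illustrative assumption.]

```python
def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate implied by going from `start` to `end` over `years`."""
    return (end / start) ** (1.0 / years) - 1.0

# AI's share of IT budgets: ~5% today, expected at 15%-20% in three years.
share_low = cagr(5, 15, 3)    # ~44% per year
share_high = cagr(5, 20, 3)   # ~59% per year

# If total IT budgets themselves also grow (say ~4%/yr, an assumption),
# the dollar-spend CAGR compounds on top of the share CAGR.
dollar_low = (1 + share_low) * 1.04 - 1   # ~50% per year

print(f"share CAGR: {share_low:.0%} to {share_high:.0%}")
print(f"dollar CAGR with 4% budget growth: {dollar_low:.0%} and up")
```

Under those assumptions, the 50%-60% dollar CAGR cited in the question is consistent with the share figures.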
Yeah, when you think about how we have designed our story to focus on the different customers that we have, the first part that we start with, and get asked a lot about, is our cloud providers. But remember, when we talk about our cloud providers, there are two parts to that. There's the part that says, yes, what they've enabled in the cloud for external use, but remember, many of them are also building AI internally for their own models. So it's not just about enabling the cloud. And all of those cloud providers, when you think about who their customers are, those are the enterprises, the startups, the researchers, the developers that are working on it. It's very common to see folks start in the cloud and start their work there.
That enables us to still be with them, with NIMs and helping them with our software solutions. What you will see going forward, and the increase in terms of enterprises, is that adoption of an AI factory. An AI factory may essentially be a private cloud for them. It may be fully designed in terms of what they would like to see, but you're thereby enabling them to define and hold a whole different center specifically for them. That also gives them the ability to be on our software platform, complete with the optimization, something that you can rarely get from the individual cloud providers. So right now, when you think about our cloud service providers, and a good amount of that is going to the enterprise, our cloud service providers are about 50% of our data center business.
Flip to the other half of it. You now have a combination of enterprises and many of the different industries that you spoke about: automotive, healthcare, your industry, financial services. These are very big markets for us. As we discussed last night, with the addition of the largest automotive provider, Toyota, now moving to our DRIVE OS and DRIVE Orin, there is again more understanding of how folks focusing on AV in the future will be here, and even the largest car providers will continue to build upon that. Healthcare is near and dear to our heart, and we know it's very important to JPMorgan and their conference.
If we can continue that path to improve all of the data that's available, to centralize it, and to create foundational models that can really solve some of the problems that we have for decades been looking to solve, AI can certainly do that. But these are big industries, big industry opportunities, and you're seeing all of these continue to grow each and every quarter.
Yeah, absolutely. You know, we've seen over the past 12-18 months hyperscalers shifting some of their compute from, you know, merchant GPUs like NVIDIA's to deploying their own custom ASIC solutions, right? And all of your cloud and hyperscaler customers are ramping or have plans to ramp their internal solutions over the next several years, right? How do you view this sort of evolving market? I don't think it's a winner-takes-all scenario. You know, I do think it is a combination, a mix-and-match approach, given the differing workloads that your customers have to support or are developing internally.
But how does the NVIDIA team think about, you know, the emergence and deployment of custom ASICs by some of your customers who I think for the foreseeable future are going to continue to buy a lot of NVIDIA GPUs, but are also going to be, you know, developing and purchasing their own custom ASIC solutions?
Yeah, it's very important to understand that custom silicon, a custom ASIC, is not what we do. It's a very, very different product from what we do. You can't really put those two things side by side and compare them. Our full-system and full data center scale solutions, not only in terms of the infrastructure but also the software, are the reason that we are used so often: because it's easy. It is an easy adoption, as much of that has already been built in the ecosystem over the last 10-15 years. Our work was not something where we said, let's design a chip in the last two years because we saw AI coming. This has been a long time coming.
That allows us to give anybody the ability to count on the compatibility of every single one of our systems and the software that they've built, and to carry that with them for decades and decades. That backwards compatibility and forwards compatibility is key. However, there are also always ASICs. There have been, for quite a while, different types of ASICs for different small types of workloads, and they will be there. But that doesn't mean we look at it any differently, understanding that our goal is to help all in the ecosystem, all different types of customers. And you're right, they are still working with NVIDIA for sure. There's a full NVIDIA team in every single one of these companies, focusing and building that.
Moving to a custom ASIC means there's a lot of work that has to be done; it's not just sending it out for design and having it come back. Remember, the only company that has really been successful, but again, we're way ahead of them, is with the TPU. And they have been doing this for quite some time. But again, our focus has been on the full-scale data center, and that has enabled us to do that. And I think that will continue.
Let's talk about your networking franchise. You know, your networking revenues are up 7X, 7X from 2020 when you acquired Mellanox, right? That's a 70% CAGR. I mean, phenomenal growth, obviously led by the team's InfiniBand leadership, but now expanding aggressively into Ethernet networking. You've talked about a multi-billion-dollar pipeline in your Spectrum-X Ethernet portfolio. You still have the X800 ramp ahead of you. Can you just give us a rough sense on your networking business mix today, InfiniBand versus Ethernet, and what's the timing of the Quantum/Spectrum-X800 ramp, right? We hear that there's very strong demand for that platform, but it seems like it's one to two quarters behind Blackwell. Wanted to get your sense on timing relative to the Blackwell ramp.
Let's go back to the acquisition of Mellanox. People say Mellanox, yes, it is networking, but the decision to acquire and partner with them was very well thought through, and it was an important recognition of a brilliant set of engineers who could work side by side with NVIDIA's existing computing engineers. It has been an absolutely great process. The work started with the question: how do we incorporate networking into all of our systems? We were going to data center scale, and we needed to focus on every moment that data is moving through the data center, the speeds and how it moves. No partner other than Mellanox understood that and could say, we know what that means and we know how to put it together.
Over the last couple of years, we have increasingly aligned the architectures coming to market, theirs as well as ours, for example with Blackwell and what follows Blackwell. That's important because the teams are all working together, all designing together, and we come to market together, and you will see more and more of that. The focus is not just the earlier idea of higher speeds for supercomputing and high-performance computing; it is really about connectivity, the connectivity of AI. When you think about the inferencing focus and how important NVLink, the NVLink switch, and every piece incorporated in our GB200 systems are, that is essential. So it is not that we're moving to Ethernet.
We are moving to Spectrum-X, which is Ethernet for AI. Our entire focus is on AI solutions. We have a well-understood position on what the market needs, and we're working with a team in-house to build what is necessary for Ethernet for AI. So we're excited about that growth. For years now we have helped our customers understand the importance of the networking, not just the compute infrastructure, and they now say, yes, it is not enough to have the best compute infrastructure; you also need to incorporate the best networking. We have seen that, and I think you will see more and more of it in the future.
Yeah, and that's exactly what we hear from your customers as well. It's not just about the compute. In order to develop the best, highest performance, most optimized solution driving the best TCO, it's a combination of optimizing the compute with the networking, with the memory and the storage. So totally agree with you there. With the last couple of minutes that we have left, I did want to touch on one thing, which is, you know, moving over to gaming and consumer side of your business, it's, you know, 8%-10% of your overall business. It's back above $3 billion per quarter. The proliferation of AI at the edge presents a unique opportunity, right, to expand your install base beyond gaming and into consumer PCs, right, to take advantage of the whole AI PC potential upgrade cycle that's still in front of the industry.
How is the team approaching this opportunity? What initiatives are you undertaking to capitalize on it? I know Jensen unveiled, and you mentioned it too, Project DIGITS last night, for example, which I would call a souped-up AI supercomputing PC. Can the NVIDIA team take Project DIGITS, scale it down to a GPU/Arm-based CPU for AI personal computing form factors optimized to run Windows-based consumer AI applications, and in what time period?
When you think about our consumer business, even from the onset of the adoption of ray tracing, it has incorporated AI solutions, and more and more of that will be part of our gaming. It goes even beyond that. When folks think about an AI PC, will everything be done in the cloud? Or here's a situation where you can take Project DIGITS and work right there on your desktop. Or you can take any of our high-end workstations, high-end notebooks, or desktops; they already incorporate the best technology and GPUs to help you with AI. It's really interesting when folks say, I need an AI PC. You've got one. You've got a GeForce PC. You're ready to go in this market.
We're now seeing so many of the independents and the creatives using AI solutions to create all of their different marketing campaigns, all of their different branding, everything they do in terms of cataloging, and all of that runs on our PCs, or said differently, our AI PCs. So the market, we know, is expanding its use cases, but the unique thing we have is that we're already there. Some users are saying, I'm ready for even more of the advancements that NVIDIA will bring to the AI PC, but I'm already on GPUs. So we are uniquely positioned there, and you'll see more and more focus on that.
What we announced with Project DIGITS is a first for us at desktop size. We'll take it to market, and then we'll see what's next after that.
Great. Well, Colette, I want to thank you for supporting our CES events these past few years. We look forward to continuing to monitor the progress of the team in what looks to be another strong year for NVIDIA. So thank you for all the support, and we look forward to a great year from NVIDIA. Thank you very much.
Great. Thank you, Harlan. Great to be with you.
Thank you.