Welcome to NVIDIA's Investor Day at GTC Spring 2022. I hope you had a chance to listen to Jensen's keynote kicking off GTC this morning. It was packed with new products and amazing innovations. We issued 14 press releases this morning, which you can find on our website. We have an exciting Investor Day planned for you over the next 2.5 hours. Before I go over the agenda, let me quickly remind you of our safe harbor statement. During today's presentations, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially.
For a discussion of factors that could affect our future financial results and business, please refer to our most recent Form 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, March 22, 2022, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. We have a packed agenda for today. Jensen will start with an overview of the highlights from our announcements this morning, as well as our strategy. You'll then hear from Manuvir Das on enterprise computing, Ian Buck on hyperscale computing, Ali Kani on automotive, Rev and Richard Kerris on Omniverse, Jeff Fisher on gaming, and finally, our CFO, Colette Kress, on financials. We'll leave plenty of time for Q&A with Jensen and Colette at the end.
You'll be able to find our presentation on the investor relations website later today. Now, I'd like to turn it over to Jensen.
Thank you, Simona. She's amazing. Welcome to GTC. We have a packed GTC. 1,600 speakers representing technology, retail, consumer internet, pharma, finance, the auto industries, and researchers from over 100 universities. GTC talks cover AI, digital twins, climate science, quantum computing, protein engineering, 6G research, and more. NVIDIA is accelerating computing across the full stack and at data center scale. The compound effect has sped up computing by a million-x over the past decade. That million-x has democratized AI and opened the opportunity to tackle grand challenges like drug discovery and climate science. NVIDIA's full stack computing platform is open and built in four layers: chips and hardware; system software and acceleration libraries; the NVIDIA platforms RTX, HPC, AI, and Omniverse; and AI and robotics applications and frameworks.
Each layer is open to scientists and researchers, computer makers, software developers, service providers, and end customers to integrate into their offerings however best for them. NVIDIA is built like no computing company. Our open, full stack, four-layer, data center scale platform lets us partner with companies across healthcare, energy, transportation, retail, finance, media, and entertainment to apply accelerated computing and AI to revolutionize $100 trillion of industries. We announced a giant wave of products this GTC. New GPU, CPU, and networking chips, new systems, and new software products. NVIDIA SDKs are the heart of accelerated computing. These SDKs tackle the immense complexity at the intersection of computing, algorithms, and science. With each new SDK, new science, new applications, and new industries can tap into the power of NVIDIA computing. NVIDIA SDKs connect us to new opportunities and new growth.
We launched 60+ new and updated libraries at GTC, out of a total of nearly 500. For millions of developers, scientists, AI researchers, and tens of thousands of startups and enterprises, the NVIDIA systems they run just got faster. NVIDIA now offers licensable software products, NVIDIA AI Enterprise and NVIDIA Omniverse Enterprise, with enterprise service levels, access to experts, and multi-generational stability. AI is racing in every direction. New architectures, larger, more robust models, new science, new applications, new industries, all simultaneously. Transformers, an AI model architecture that enables self-supervised learning, have removed the bottleneck of human-labeled data and boosted AI into warp speed. NVIDIA AI is the engine of the AI industry and is used by 25,000 companies and startups. NVIDIA Omniverse is integral to robotic systems, the next wave of AI. Omniverse is a simulation engine for physically accurate virtual worlds and digital twins.
Just as TensorFlow and PyTorch are essential frameworks for perception-oriented AI, Omniverse will be integral for robotics AI. The Omniverse ecosystem is growing fast. In just one year, Omniverse has gained over 80 third-party tool connectors, been downloaded nearly 150,000 times, and been integrated into Bentley Systems' LumenRT, our first third-party integration. We announced new GPU, CPU, and networking chips and systems. AI applications like speech, conversation, customer service, recommenders, computer vision, robotics, and self-driving cars are driving fundamental changes in data center design. AI companies process mountains of data to train and refine AI models. Their data centers are essentially AI factories. A whole new type of data center has emerged because of AI. Today, we announced the Hopper architecture H100, the new engine of the world's AI infrastructure. The performance of Hopper H100 is a giant leap over Ampere, an order of magnitude.
H100 has a new Tensor Core with 4 petaflops, 4,000 teraflops of AI processing, Transformer Engine, Multi-Instance GPU with complete isolation, confidential computing, DPX dynamic programming instructions, and the fourth-generation NVLink with SHARP in-network computing. A DGX connects 8 H100s, and a new NVLink Switch System connects up to 32 DGXs into a massive exaflops DGX SuperPOD. Hopper H100 powers systems at every scale, from the H100 CNX for mainstream servers to DGX and DGX SuperPOD. H100 is in production, with availability starting in Q3. When we announced Grace last GTC, we only told half the story. The full Grace is truly amazing. The Grace CPU is a superchip connected by 900 GB/s NVLink. Grace CPU superchip has 144 cores and an insane 1 TB/s of memory bandwidth. Grace is on track for production next year.
Grace moves and processes mountains of data and is ideal for AI infrastructures, scientific computing, and Omniverse digital twins. One of Grace's best features is the rich ecosystem of servers, CUDA-X libraries, NVIDIA software platforms, RTX, HPC, AI, and Omniverse, and a world of partners that we will bring to Grace. NVLink will be coming to all future NVIDIA chips, CPUs, GPUs, DPUs, and SoCs. We announced NVLink is open for customers and partners to build custom chips. NVLink opens a new world of opportunities to build semi-custom chips and systems that leverage NVIDIA's platforms and ecosystems. We announced Spectrum-4, a 400G Ethernet switch and end-to-end platform. Spectrum-4 is a major new product, the world's first 400 Gbps switch. A massive jump in performance translates to higher data center throughput and lower cost and power.
Spectrum-4 with ConnectX-7 and BlueField-3 SmartNIC endpoints and the DOCA infrastructure software will be the highest performance Ethernet platform. With Spectrum for Ethernet, Quantum for InfiniBand, NVLink for multi-node DGX, and our DOCA networking, storage, security, infrastructure software stack, NVIDIA is ready to help build out the world's AI infrastructure end to end. Spectrum-4 samples in Q4. The next wave of AI is robotic systems that perceive, plan, and act. Omniverse Avatar, DRIVE, Metropolis, Isaac, and Holoscan are robotics platforms built end to end and full stack around four pillars: ground truth data generation, AI model training, robotic stack, and Omniverse digital twin. We engage partners and customers in any or all four pillars. Our ability to add value at every stage of the AI and robotics workflow gives us many ways to partner with the AV and robotics industry. The demand for robotics and industrial automation is increasing exponentially.
NVIDIA works with thousands of customers and developers building robots for manufacturing and retail, healthcare and agriculture, construction, airports, and entire cities. One of the fastest-growing robotic segments is AMR, autonomous mobile robots, essentially driverless cars for indoors. There are tens of millions of factories, stores, and restaurants, and hundreds of millions of square feet of warehouse and fulfillment centers. We announced a major release of Isaac for AMRs. Like NVIDIA DRIVE, Isaac for AMRs has four pillars: DeepMap, NVIDIA AI on DGX, the Isaac Reference AMR Robot powered by Orin, and Omniverse for digital twins. Orin, our robotics computer chip, is a great success. DRIVE Orin started shipping in production this month. Isaac Orin developer kits are available now, and Clara Holoscan developer kits are available in May. Omniverse is central to our robotics platform and the next wave of AI.
Like NASA and Amazon, our customers in robotics and industrial automation realize the importance of digital twins in Omniverse. Last time, we showcased BMW, Siemens, and Ericsson. This time, PepsiCo and Amazon fulfillment center digital twins. Modern fulfillment centers are evolving into technological marvels, facilities operated by humans and robots working together. The warehouse is also a robot, orchestrating the flow of materials and the route plan of the AMRs inside. This is the busiest GTC in our history, the largest wave of new CPU, GPU, networking chips, new systems, new software products, new AI and robotics models. Today's presentations will cover our growth drivers, strategies, and opportunities. NVIDIA management will discuss five areas, enterprise, hyperscale, Omniverse, auto, and gaming. Every group builds its products and strategies on one NVIDIA architecture, leveraging the full platform and all our technologies to serve our markets.
This intense focus on platform leverage lets us direct the full might of NVIDIA to serve every industry. In computing, we will distill our opportunities in serving $100 trillion of industries (cloud computing, consumer internet, healthcare, financial services, energy, retail and logistics, manufacturing, industrial automation, higher education, scientific computing, digital content creation, and more) into chips and systems and our two major software platforms, NVIDIA AI and NVIDIA Omniverse. We estimate our own available market opportunity at about 1% of the industries we serve. Over the years and decades ahead, our TAM will grow into this opportunity, as you will hear today. We will start with Manuvir, who will talk to you about our opportunities in enterprise computing, and I'll be back in a bit for Q&A. Manuvir?
Thank you, Jensen. In this section, I'll talk about our opportunity with enterprise companies at large, with a focus on our AI software. We've seen over the last few years that AI really does occur everywhere. Internet-scale companies doing AI in the cloud, large companies doing AI in their data centers, and more use cases every day at the edge. This is why our data center business, which includes all of these, has grown in the way it has. This view here on this slide shows the growth over the previous seven quarters. Of course, if we had chosen to project further back, the growth would look even more dramatic. We expect this trend to continue, and we are prepared for this by nurturing a sustainable ecosystem. Developers and startups everywhere have integrated with our AI software. Over 25,000 companies use our technology. Here's one example.
Snap is using Riva, our software for speech AI, in their Lens Studio product. They use our pre-trained models and our inference software, Triton. Now, the way we usually talk about our opportunity is by looking at data center infrastructure and how much of it will be accelerated by NVIDIA over time. The reality is that AI is a full stack problem. There is tremendous value to customers from the software of AI. We have created more AI software than anyone. We see this as our business opportunity, both hardware and software. AI is about use cases that change industries, either saving money or enabling new business. For example, in retail, AI is being used for automated checkout, a new experience that simplifies shopping and drives more customers to stores. It is also being used for loss prevention, saving money.
In financial services, AI is used for fraud detection; in logistics, for optimizing delivery. This is not just about the envelope of traditional IT spend. Rather, there is an opportunity for an AI provider to participate in the revenue of the industry itself. To that end, at NVIDIA, we have developed the full stack of AI. The best hardware, of course. That's what makes AI algorithms practical in the first place. Then the essential tools and libraries that underlie any AI use case. Think of this layer as the operating system of AI. Every server used for AI would run this software, this engine of AI, regardless of use case. Then finally, skills created for specific use cases. Let me take a minute to unpack this stack. The lowest layer is the infrastructure underlying NVIDIA AI.
We have a wide ecosystem of OEM server builders, the public clouds, and our own systems, all growing rapidly, of course. Like other enterprise software platforms, we have a certified hardware program, so customers can choose and deploy hardware with confidence. Notice that I included the DPU in the infrastructure layer. We're seeing early success with our BlueField-2 DPU. On this slide, I've shown three examples. Of course, we are working closely with VMware on Project Monterey, moving software-defined data center services from the CPU host to the DPU. We see this as the go-forward security architecture for data center servers. A DPU in every server. The operating system of AI is what makes AI go. The tools for data processing, training, inference. Based on our experience to date, we know that every server used for AI will benefit from having this software installed.
Finally, the skills, frameworks that implement particular use cases that apply broadly. For example, Riva is our framework for speech AI. As a regulated bank, you can use it to translate audio recordings of customer conversations to text. As a retailer, you can use it to convert product documentation into human voice. This is the full stack then. NVIDIA AI software on industry standard hardware. At this GTC, we announced Version 2 of NVIDIA AI Enterprise, the operating system of AI, representing the next big step in enterprise adoption of NVIDIA AI. Whereas Version 1 focused on virtualized servers running VMware, Version 2 runs on both virtualized and bare metal servers running VMware, Red Hat or other platforms, as well as on all of the major public clouds. Whereas Version 1 focused on servers with GPUs, Version 2 runs on either CPU or GPU.
Together, these enhancements bring NVIDIA AI to every server. Every server will run this operating system. We already see this in data centers where AI is developed. Going forward, we expect to see wide deployment of this AI at the edge for a variety of use cases shown on this slide. Cameras on the roads, AI detecting traffic violations, kiosks at drive-throughs, AI taking orders and recommending menu options. We have been preparing for this growth for some time now by fostering an ecosystem for edge AI, just as we did with AI in the data centers. The chart here shows the growth in our ecosystem and also the components we have added to NVIDIA AI over time to enable this ecosystem. NVIDIA AI Enterprise on every server, data center or edge. Along with us, the flywheel of the ecosystem has been gearing up to sell NVIDIA AI Enterprise.
I've highlighted some companies whose sales teams are working together with our sales team to sell NVIDIA AI. Earlier, I mentioned NVIDIA certified systems, hardware underlying our NVIDIA AI software. Now, we add to that NVIDIA AI Accelerated, a similar program for AI applications built on top of NVIDIA AI. Over 100 software providers are already in the program. The flywheel is cranking. It comes back to a simple view of our business opportunity, one that we know already exists from our own experience with AI to date. The engine of AI on every enterprise server in the data center or at the edge. $150 billion of software opportunity to go with the hardware opportunity. With that, I'll hand over to Ian to talk about hyperscale.
AI is transforming large markets, and every day we work closely with our cloud partners to help bring new AIs to life. We collaborate on the systems, the physical and software infrastructure, the AI frameworks, and AI applications, both for their internal cloud services and their cloud customers. It's a platform that's continuously growing. It is estimated that the cloud server market install base is 20 million servers, and analysts project that this number will grow to 35 million by 2025. Driving that growth is the consumer internet, the apps, the websites, the services that each of us uses every day, all built on the cloud. Not surprisingly, 100% of consumer internet applications will be adopting AI. From Meta to PayPal, Pinterest, Snap, and Twitter, AI is being developed everywhere to process every engagement, every product, every recommendation to deliver great customer experiences.
As a result, AI recommenders are becoming the engine of e-commerce, with over $7 trillion worth of sales projected by 2025. These are just some of the customers using NVIDIA AI today. NVIDIA's growth in hyperscale computing is continuing as more companies and developers find new ways of adopting AI for their applications and the introduction of new GPU architectures turbocharges that adoption. New GPUs do this in three ways. First, by reducing the time to train, we speed up the productivity of AI developers, helping them deploy more AI in the cloud and drive faster growth for AI infrastructure. Second, by improving the scalability of our architecture, we expand the scale and size of AI supercomputers to help our largest customers, as well as NVIDIA ourselves, to build the next generation of AI infrastructure and push the limits of what AI can achieve.
Third, by improving AI inference, the production use case of AI, we widen the aperture to allow even larger and more powerful AIs to be deployed into production. Just as we saw a 3x revenue growth from the launch of the NVIDIA V100 to the A100 GPU, so too will the Hopper H100 enable a new wave of AI models and applications. Hopper is the new engine for AI infrastructure and will be the platform for innovation for large language models, recommender systems, and the complex digital twins in the cloud. To advance AI, it is important to understand the trends in AI. Over the past few years, a new type of neural network has emerged. Invented by Google, the transformer has become the dominant building block for neural networks.
Built on the idea of attention, transformers help AI understand which parts of a sentence, an image, or disparate data points are relevant to each other. Unlike CNNs, which typically only look at immediate neighboring relationships, transformers are designed to train on the more distant relationships, which is important for applications like natural language processing. Transformers are transforming AI. 70% of the AI papers published in the last two years incorporate transformers into their work. Transformers are also the building blocks of the world's largest neural networks for large language models, like OpenAI's GPT-3 and NVIDIA's own Megatron-Turing NLG 530B. This neural network has 530 billion parameters trained on the corpus of the internet to build intelligent chatbots and other intelligent language applications. Hopper, with its new Transformer Engine, is explicitly designed to accelerate these transformers.
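The attention idea described above can be sketched in a few lines. This is a minimal, self-contained toy in NumPy, not any NVIDIA or production implementation; the shapes and values are hypothetical, chosen only to show how every token attends to every other token regardless of distance:

```python
# Minimal sketch of scaled dot-product attention, the building block
# of transformers. Illustrative only; sizes are arbitrary.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Scores measure how relevant each token is to every other token --
    # unlike a CNN's local receptive field, distant pairs are compared
    # directly.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq, seq) relevance matrix
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
Q = rng.standard_normal((seq_len, d_model))
K = rng.standard_normal((seq_len, d_model))
V = rng.standard_normal((seq_len, d_model))
out, w = scaled_dot_product_attention(Q, K, V)
```

Because the weight matrix is dense over all token pairs, compute grows quadratically with sequence length, which is one reason dedicated hardware like Hopper's Transformer Engine matters.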
It can train GPT-3 6x faster than A100, reducing the time to train from five days down to just 11 hours. It gives the latest mixture-of-experts transformer models from Google a 9x boost, reducing time to train from a week to less than a day. Hopper's innovations don't just benefit training. When deploying these models for inference, Hopper delivers 30x higher throughput compared to the A100. Hopper's ability to accelerate transformers will not only help bring new AIs to market, but will turbocharge AI productivity and, as a result, the demand for AI infrastructure in the cloud. There is a second, equally important AI use case taking shape in the cloud: AI-based recommender systems. Recommender systems are the commercial engine of the internet.
Hyperscalers and the cloud service providers use recommender systems to connect literally trillions of items with billions of consumers. Even the simplest search query today involves a complex recommender system that attempts, on the first try and in only a few milliseconds, to connect you with the right product, article, tweet, or advertisement. NVIDIA Merlin is an open-source framework for building large-scale, deep learning recommender systems. NVIDIA Merlin's NVTabular library can accelerate feature engineering and preprocessing to manipulate the many terabytes of unstructured datasets into AI tensors that can be operated on by an AI. In addition, Merlin supports distributed training, with model parallel embedding tables and data parallel neural networks running across multiple GPUs for these giant models.
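To make the embedding-table idea concrete, here is a toy NumPy sketch of the lookup-and-score step at the heart of a deep learning recommender. This is an illustration of the concept only, not the Merlin API; the table sizes, user ids, and `recommend` helper are all hypothetical:

```python
# Toy sketch of embedding-table scoring in a recommender.
# In production these tables can reach terabytes, which is why Merlin
# shards them model-parallel across GPUs; here they fit in memory.
import numpy as np

rng = np.random.default_rng(42)
n_users, n_items, dim = 1000, 5000, 16

user_emb = rng.standard_normal((n_users, dim))  # one row per user
item_emb = rng.standard_normal((n_items, dim))  # one row per item

def recommend(user_id, k=5):
    # Score every item by dot product with the user's embedding and
    # return the k highest-scoring item ids.
    scores = item_emb @ user_emb[user_id]
    return np.argsort(scores)[::-1][:k]

top5 = recommend(user_id=7)
```

Real systems replace the plain dot product with a neural network over many categorical features, but the bottleneck remains the same: enormous embedding tables and fast lookups, which is exactly where GPU memory bandwidth pays off.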
Snap used NVIDIA GPUs and Merlin software to achieve a 50% increase in the cost efficiency and an improvement in serving latency by 2x for their content delivery. Training and operating recommenders with NVIDIA GPUs saves money, enables smarter, more intelligent consumer interactions, and activates the $7 trillion worth of e-commerce coming to the cloud. Inference. Once you've trained an AI model, you need to deploy it. Five years ago, AI could still be run on legacy CPUs within the hyperscale data center. The overall amount of AI workload was small enough and these models simple enough that one can use the millions of existing CPU servers to deploy AI. That's not true today. As AI has gotten smarter, AI models have gotten larger and more complicated. CPUs simply cannot meet the real-time inference requirements of modern AI.
Furthermore, as AI has become an increasingly larger part of the cloud workload, optimizing infrastructure for the AI throughput of the data center matters. We have seen a rapid shift to GPUs as a result, starting with the NVIDIA P4 GPU in 2016, the T4 GPU in 2018, and now with our Ampere-based A2, A10, and A30 GPUs, we've experienced a 9x growth in our inference revenue. We invested heavily in software for inference. A software platform for inference needs to handle all the different types of models used across a company to deliver inference in real time, while maximizing the throughput of an infrastructure, as well as handling the increasing complexity of AI models. To tackle these challenges, we built the open AI inferencing software solution called Triton. Unlike the training frameworks, Triton is designed exclusively for AI inference.
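One of the main levers an inference server like Triton pulls to maximize data center throughput is server-side batching of incoming requests. The toy model below illustrates why that works; the overhead and per-item numbers are hypothetical, and this is a back-of-the-envelope sketch of the concept, not Triton's scheduler:

```python
# Why batching raises inference throughput: the fixed per-call cost
# (scheduling, kernel launch, I/O) is amortized across the batch.
# All constants are illustrative, not measured values.
FIXED_OVERHEAD_MS = 5.0    # fixed cost paid once per model invocation
PER_ITEM_MS = 0.5          # marginal compute per request in a batch

def latency_ms(batch_size):
    return FIXED_OVERHEAD_MS + PER_ITEM_MS * batch_size

def throughput_rps(batch_size):
    # requests served per second at this batch size
    return batch_size / (latency_ms(batch_size) / 1000.0)

unbatched = throughput_rps(1)    # one request per model invocation
batched = throughput_rps(32)     # 32 requests coalesced into one call
```

In this toy model the batched configuration serves several times more requests per second at a modest latency cost per request, which is the trade-off an inference scheduler tunes against each model's latency budget.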
It is an open source framework supporting every AI model running on CPUs and GPUs, and has become the de facto framework for AI inference across the cloud and on-prem deployments. Last year, we announced Grace Hopper, the ideal processor for giant-scale AI and HPC. This year, we've announced the new Grace CPU Superchip, the world's fastest, most efficient CPU for the data center. For markets where CPU performance is paramount, Grace shines. As AI models continue to get bigger and our GPUs get even faster, CPU performance plays an important role in managing the execution, as well as the pre- and post-processing of data for AI operations. The Grace CPU Superchip is designed to be the CPU for AI infrastructure. Its performance and efficiency will allow GPUs to train faster and larger AI models without ever letting the CPU get in the way.
Furthermore, Grace's configurability for new CPU-GPU system configurations allows us to optimize AI infrastructure for different workloads, leveraging both existing PCIe-attached GPUs and GPUs attached with the new NVLink chip-to-chip interconnect. The Grace CPU Superchip is, of course, an amazing CPU all by itself, and we're seeing strong interest in scientific computing, data analytics, and hyperscale computing applications where absolute performance, energy efficiency, and data center density matter. For NVIDIA, we see the data center as the new canvas of innovation. With every generation of AI, we innovate unconstrained, studying all aspects of how AI operates inside the data center, knocking down barriers and inventing new technologies and products to optimize that infrastructure.
With new kinds of compute-rich servers based on our Hopper H100 GPUs and our HGX GPU baseboard products, optimizing communication with BlueField-3, Spectrum-4, and Quantum-2, and even integrating networking and GPUs into single products like our converged accelerators, we have added compute to the network itself, offloading teraflops of computation from the compute nodes and saving gigabytes of network traffic. This year, we're taking it a step further by taking NVLink, previously used to connect GPUs inside the server, and unleashing it into the data center itself, scaling interconnect beyond the server with NVLink Switch systems. With Grace, we can broaden that opportunity to build novel CPU-GPU system designs for the variety of AI workloads. Working closely with our hyperscale partners, NVIDIA is inventing the data center of the future together.
Of course, now that we've opened NVLink-C2C, our chip-to-chip interconnect, we are open for business for custom IP integration, which brings new custom silicon opportunities with NVIDIA technology to the cloud. The available market opportunity has expanded; we've gone beyond the GPU and are now a three-chip company. The new data center is redefining the nine million hyperscale servers deployed each year, and this opens up a $150 billion market for NVIDIA in hyperscale with the infrastructure opportunity alone. NVIDIA is the only AI company that works with every other AI company. We are at the center of the AI ecosystem, working with AI companies to bring AI to the cloud, enabling new AI applications, and accelerating innovation across industries, powered by NVIDIA. Thank you. I'll now turn it over to Ali Kani on automotive.
I'm here to talk about NVIDIA's automotive opportunity. The automotive industry is large, with 100 million cars sold a year and an installed base of over one billion vehicles on the road. Auto is at the beginning of a few inflection points that together create compelling opportunities for NVIDIA. First, advancements in electrification have pushed OEMs to re-architect their cars from the ground up into software-defined vehicles that use centralized, high-performance computers that can provide a large and growing list of new features and services over the life of a vehicle. Second, these services give OEMs an exciting opportunity to transform their business model, like we have seen Tesla do with their Autopilot software, which has grown in price from less than $5,000 to $12,000 a car today.
As part of this AV software disruption, we're seeing an order-of-magnitude, 10x, increase in compute in vehicles as partners use twice the number of sensors in their cars, with each sensor supporting 5x to 10x the resolution of current sensors in production. These cars are also being developed with more advanced machine learning algorithms that enable the development of vehicles that support L2+ all the way up to advanced full self-driving capability. Now, we're just at the beginning of these inflection points. Today, electric vehicles and vehicles with L2+ or higher software represent less than 10% of cars sold a year. In the next decade, a majority of the vehicles sold each year should be electric, software-defined vehicles that support L2+ or higher capability. Now, we've invested heavily to develop a full stack solution for the automotive market.
We offer our DRIVE Hyperion platform in vehicles, which includes our Orin SoC and a reference compute and sensor architecture. We also offer DGX and OVX servers in data centers that partners can use for AI training, map generation, and system validation. We make the car-to-cloud experience seamless by supporting common SDKs, APIs, and libraries end-to-end. We have three layers to our automotive software stack above our hardware layer. All partners can take our core operating system that runs on our hardware. Many take our DriveWorks acceleration middleware that makes it easy to efficiently run advanced AI applications on our platform. Some partners, like Mercedes-Benz and Jaguar Land Rover, use our full stack application software across their car and cloud infrastructure. In such cases, the entire NCAP, parking, mapping, autopilot, and even some IVI software is developed by NVIDIA in partnership with our OEM partners.
Autopilot and AI cockpit application development is a grand challenge. It requires high performance computing, platform programmability and scalability, advanced machine learning and robotics know-how, as well as expertise in functional safety and cybersecurity. NVIDIA is unique in our ability to help our partners across this entire stack, from chips and software in the vehicle, to data collection services in cars, to AI training, map creation services in the cloud, to application software from AV to cockpit in cars, and finally, onto simulation for vehicle validation. We invest to improve our solution across the entire stack because we believe what will most differentiate automotive companies is the speed of their end-to-end development flow, from finding an issue in a car, to root causing it, providing a fix that's quickly validated, and then securely OTA-ing better software into every vehicle.
We estimate auto to be a large $300 billion market opportunity. NVIDIA's opportunity spans hardware, software, and services, in the car and in the cloud. Inside the vehicle, there are nearly 100 million cars sold a year that will each need a high-performance computer. We offer these OEMs our Tegra SoCs and discrete GPUs, along with our operating system and DriveWorks acceleration SDK and libraries. When these partners go to production, we have the ability to support them with long-term software services that ensure they have a safe and secure experience for their customers over the life of their vehicle. We also offer application software to partners, which gives them the ability to increase the revenue opportunity in their cars. Our business model here is to share in the revenues generated by the software we provide to our partners.
We're especially excited about this software opportunity, as it can be even larger than the hardware opportunity in each vehicle. Now, on the infrastructure side, there are over 100 OEMs that need DGX systems for training and OVX servers for validation, and we also can offer them a range of software services. For example, DRIVE Replicator can be licensed to build virtual vehicles and create synthetic data for our partners' AV software development. With our recent acquisition of DeepMap, we have the ability to build maps at scale worldwide for partners' own AV stacks. Both our car and cloud markets are well-positioned to grow, as the investment needed for L2+ is many times larger than for NCAP-only cars. True full self-driving will also require an order of magnitude larger investment than L2+.
We have a large pipeline of wins in automotive that we have announced will be going to production in the next few years. Our traction is strong across all the segments of the automotive market. We have won designs in 20 of the top 30 EV car OEMs. We're working with seven of the top trucking companies, eight of the top robotaxi companies, and we help all of the leading OEMs with their infrastructure in the cloud. Beyond working with OEMs, we have an open platform strategy that we believe is a big differentiator for our partner ecosystem. We have an inception team that targets all automotive startups worldwide. We work with many of the major universities who use the DRIVE platform for their automotive and robotics education programs.
We also partner with Tier 1s, software providers, and sensor and simulation partners to help them better develop their applications on our platform. We learn a lot from each of these engagements and use the learnings to make our DRIVE platform roadmap even better for future automotive partners. We announced last year that our automotive pipeline was $8 billion over the six-year period from fiscal 2022 to 2028. Orin has been a huge success. With all the wins we have announced over the last year, we're providing an update: our six-year pipeline from fiscal 2023 to 2029 is now estimated at $11 billion. You'll start seeing this ramp this year with production of Orin-powered NEVs from partners like NIO, Li Auto, XPENG, and SAIC's R brand.
We'll see a bigger ramp in the following years as OEMs like BYD, Hyundai, and Volvo ramp up, and we'll scale even further when Mercedes-Benz and JLR, as well as our partners in the L4 commercial trucking and robotaxi markets, scale beyond fiscal 2025. Now I would like to introduce you to Rev, who's gonna tell you about Omniverse.
It's well understood that modern AI is built with enormous amounts of data. Up until recently, we've relied on data captured from the real world and painstakingly labeled it by hand. The next era of AI requires a scale, diversity, and accuracy of data that's impractical and in many cases impossible to capture through traditional means. The only way to produce the data we need is by synthesizing it. Like humans and all creatures, AIs learn continuously from their environment. Babies learn how to perceive depth and identify objects by experiencing their environment. They learn the rules of the world, physics, through experimentation. AIs learn in precisely the same way. They experience the world through the images, sound, and information we feed them. They learn physics through continuous experimentation, trial and error. Unlike us, AIs are born and raised inside a computer.
The most natural place for them to learn and experiment is not in the real world, it's in a virtual world. In virtual worlds, AIs can learn in super real time, where one second of our time can be days' worth of life experience. In the virtual world, they are free to experiment, learning how to operate heavy machinery and drive multi-ton vehicles without risk of physical harm. Once an AI has learned a skill well in the virtual world, we can move its brain to a robot where it can operate in the real world. For this brain transfer to work, the virtual world must be indistinguishable from the real world. It must look, sound, and feel the same. The rules of physics must match the real world closely, otherwise the AI will have learned poorly. We have built Omniverse for this very purpose.
Omniverse is our platform for building and simulating virtual worlds that are indistinguishable from the real world, leveraging the full might of NVIDIA's accelerated computing. Four key technological advancements have recently converged, creating the ideal conditions for Omniverse. First, with the introduction of NVIDIA RTX in our Turing generation of GPUs, we transformed real-time 3D rendering from a system that produces images that merely look good into a physically accurate simulation of how light interacts with matter. Previous techniques based on rasterization had hit a wall in terms of physical accuracy. With ray tracing, we can simulate all aspects of the behavior of light. Second, virtual world simulation has until now been limited to relatively small computers at the edge: mobile devices, gaming consoles, or gaming PCs.
With recent advancements in data center GPU computing, we have the opportunity to leverage graphics supercomputers in the cloud, running simulations that are too large and compute-intensive for traditional computers. Omniverse is designed as a cloud-native, scalable engine that can utilize the full capabilities of the data center. Third, Pixar invented Universal Scene Description, or USD, and open-sourced it in 2015. USD provides a common standard that allows us to describe virtual worlds with physically accurate pieces that can be composed into large virtual worlds. Omniverse is built with USD at its core, enabling easy and lossless interchange of 3D data between the rapidly increasing set of tools and simulators that support it. USD is to Omniverse what HTML is to the 2D web. Fourth, in addition to creating a large market for world simulation, AI is the key to building virtual worlds.
The construction of high-fidelity, physically accurate, and large virtual worlds is currently limited to a small group of artists who have spent decades mastering the craft of 3D design in visual effects and video games. Every nook and cranny of the virtual worlds we enjoy in films and video games has been touched by an expert artist. For 3D worlds to be as ubiquitous as 2D web pages, we need everyone to participate in creating them. Fortunately, AI has advanced to the point where we can train it to help us build virtual worlds, augmenting average people with skills that would otherwise take decades to master. AI will make creating virtual worlds as easy as creating a web page is today. All things designed and built by humans are typically first built in a virtual world.
Bicycles, cars, bridges, and factories are all designed with various CAD tools well before they are built in the real world. Physically accurate and extremely fast simulation is key to designing the best and most efficient products. We can quickly test many iterations of a design in the virtual world at a fraction of the cost of building them in the real world. Once the digital version of the product is complete, it's transformed into its real-world twin, one built from atoms instead of electrons. In most cases today, that's the end of the road for the digital version. But if we link the two manifestations, digital and real, they can evolve with each other. We can capture data from the real world through IoT sensors and devices and feed it into the digital model, keeping the twins in sync.
Applying accurate physical simulation to the digital twin gives us incredible superpowers. We can teleport to any part of the digital twin, just like we can in a video game, and inspect any aspect of it reflected from the real world. We can also run simulations to predict the near future or test many possible futures for us to pick the most optimal one. We've built the Omniverse platform to unlock the full potential of digital twins, from design and creation to physically accurate simulation on our supercomputers. Now I'd like to welcome my good friend and colleague, Richard Kerris, to tell us about the growth of the Omniverse ecosystem.
Thank you, Rev. We are supercharging a huge ecosystem. Starting with developers: worldwide, there are over 25 million developers, and we see great opportunities to expand our Omniverse developer platform to encompass developers across all verticals that NVIDIA serves and beyond. NVIDIA currently has over three million developers in our program, up from just 2.5 million a little over a year ago. Omniverse is a robust and modern SDK, and with the recent release of Omniverse Code, everyone from hobbyists to professionals can create extensions, connections, and even full-blown applications on the platform.
Turning to artists and designers, there are close to 45 million creators in the world, and that number is growing faster than ever with rising demand for content and world creation as we move into the next generation of the World Wide Web, commonly referred to as the Metaverse. A large percentage of these artists and designers are already familiar with NVIDIA and are using partner applications that are NVIDIA GPU-accelerated. Omniverse is a platform that extends and enhances existing workflows, meaning we don't replace partner applications. We bring new features and capabilities to them, like true-to-reality simulation and real-time photorealistic rendering, essential features for the growing content needs of virtual worlds and digital twins.
In the enterprise, there are over 150,000 warehouses and over 10 million factories in the world today, many of which are moving to automation and digital twins. In fact, the global digital twin market is forecast to grow over 40% in the next five years. Omniverse is designed from the ground up as a platform for the enterprises that serve these industries, as you saw in some amazing examples in the keynote earlier today. From developers to artists and designers, we are supercharging the Omniverse ecosystem. We have some great early indicators of success. Individuals downloading and making Omniverse part of their workflow have grown over 10x from where they were just a year ago.
Many of the leading 3D software applications across media and entertainment, architecture, engineering, construction, and operations, manufacturing, and industrial design are connected to Omniverse, with more coming every month. Plus, we're seeing a growing number of startup companies interested in using Omniverse as part of the platform for their work. There are many other ways to connect to Omniverse beyond software applications: things like sensors, cameras, and LIDAR scanners, many of which are essential for digital twins. We're seeing great growth in connections here as well, over 10x from where we were a year ago, with hundreds more on the horizon for the year ahead. Lastly, Omniverse as a compute engine can be licensed to power the next generation of software products for our NVIDIA partners. You saw the first one earlier today with the launch of LumenRT by Bentley Systems.
Bentley is a leading provider of software and services to design, build, and operate the world's infrastructure, and they licensed Omniverse to power their next generation of iTwin applications. We are in active negotiation with other leading software companies looking to use the power of Omniverse for their product lines as well. These are some great examples of how it's going with the Omniverse ecosystem. Now, with Omniverse Enterprise, we are empowering a global partner network with a platform for them to sell and expand their product lines. We currently have over 65 partners worldwide, and 30 of them have Omniverse demo labs being set up all over the globe for our enterprise customers. This builds on our existing, strong ProViz foundation, which has been serving the artist and designer markets with NVIDIA for many years. Omniverse is supercharging their already active networks.
We're seeing momentum with leading companies making Omniverse part of their workflow, such as BMW, Siemens, Ericsson, and those we featured today in the keynote, PepsiCo and Amazon. We have over 700 more companies in our pipeline. Omniverse can serve all these opportunities, from individuals to the world's largest enterprises, because it runs on RTX systems, from laptops to servers and even directly from the cloud, as you saw announced today. Omniverse is for consumers, professionals, developers, and researchers across all industries. We estimate the available market opportunity for Omniverse Enterprise software at $150 billion, based on two main use cases immediately in front of us. First, we estimate that there are 45 million designers.
These designers create in industries such as media and entertainment, architecture, engineering and construction, manufacturing, and industrial design. Many of them are already end user customers for our ProViz products, and Omniverse Enterprise can help them modernize their existing workflows. Second, there is the digital twin opportunity with Omniverse. This will serve millions of factories, warehouses, and fulfillment centers across the globe, and we already have active engagements with hundreds of them in these early days of Omniverse Enterprise. We're investing in our Omniverse ecosystem, we're empowering our ProViz partner network, and we're building a long-term subscription-based model that will provide even more opportunities in the future. Omniverse software plus chips and systems equals a $300 billion market opportunity, and we're gonna go get it. Thank you very much.
Next up is Jeff Fisher to talk about games.
Thanks, Richard. Hi, everyone. I'm excited to have this opportunity to talk to you about our gaming business. Gaming is huge. The industry is on fire. The number of gamers, esports athletes, creators, and broadcasters engaged in new shared experiences is exploding. With three billion gamers, no one is asking if gaming is growing. The question is, how big will it get? You can start by looking at Generation Z. By most measures, it's the largest generation ever. When surveyed, 80% of Gen Z are gamers, and gaming is their favorite activity, twice as high as music, watching TV, movies, or social media. Gen Alpha is up next. The 2022 Game Developers Conference Annual Survey once again ranked PC as the most important platform for developers.
Looking at the openness and technology leadership of the PC ecosystem, and the fact that we are adding 50 million to our ranks every year, we couldn't agree more. GeForce is about a lot more than playing games. We estimate there are 80 million creators and broadcasters designing, building, and sharing their work. There are now 24 million Twitch channels, doubling in the past two years. Last year, there was $29 billion in YouTube ad revenue, twice that of 2019. Minecraft, the ultimate game fusing playing and creating, reached 140 million monthly active users in 2021, growing 1.5x over two years. Minecraft content has been viewed over one trillion times. 3D creators are the construction workers of virtual worlds.
Blender is the tool of choice among this growing class of casual and pro 3D creators, with 14 million downloads, 1.5x more than in 2019. Over the past 25 years, we have dedicated ourselves to building the best platform for gamers and creators. It is enjoyed by hundreds of millions of gamers on desktops, laptops, consoles, and streaming from the cloud. At the heart of our platform is our GPU and a history of revolutionary architectures, each delivering new innovations for developers to create amazing games and for creators to do their best work. Our GPUs are programmable: hobbyists and professionals regularly discover new applications for them, and crypto mining is one example. On top of our hardware comes a massive investment in software. Game Ready Drivers are our commitment to the best possible gaming experience.
Whether on PC or in the cloud, we release an optimized driver with every major game to provide gamers maximum performance and stability. We have now extended this commitment to creators with our Studio Driver. This investment also delivers new technologies like DLSS, Reflex, Max-Q, and NVIDIA Broadcast, along with SDKs to enable a broad ecosystem. Our newest architecture, NVIDIA RTX, reinvented graphics with real-time ray tracing and DLSS AI image upscaling. Game developers, creator ISVs, and even other GPU suppliers have gotten on board. For esports athletes, we introduced Reflex, removing latency between the game and the gamer. Over 20 million gamers compete with Reflex each month. There are now over 250 RTX-accelerated games and applications. This comes at a time of strong gaming growth.
The pandemic introduced millions more to PC gaming, and we expect they are here to stay. Valve continues to highlight gaming momentum. Last year, there were 30 million more gamers buying games on Steam, and the number of engaged concurrent gamers on Steam has more than doubled in five years. This past weekend set yet another record. The Epic Games Store has shown similar strength, adding almost 100 million users in just two years. All of this and more has led to record results in our gaming GPU business. With about 30% of our installed base on RTX, there are a lot more gamers yet to upgrade. The opportunity grows when considering all the gamers on Steam and elsewhere who are not yet on GeForce GPUs. One more thing I'd like to share.
Looking at the millions of desktop GeForce gamers who we know have upgraded their GPU to a 30 series, they are buying up, as the GPU is offering more value than ever. Based on our data, they are spending $300 more than they paid for the graphics card they replaced. Now let's look at gaming laptops, the fastest-growing PC category, fueled by gamers, creators, and students looking to work and play from anywhere. This year we introduced our fourth-generation Max-Q. Working with OEMs and CPU manufacturers, we use AI to instantly optimize the CPU and GPU for every workload. These are our thinnest and lightest laptops ever. This year we have announced 170 new RTX 30 series laptops starting at just $799. Our gaming laptop business continues to deliver record growth in revenue, units, and ASP.
GeForce gaming GPUs are driving the overall consumer laptop market, approaching 25% attach with plenty of room to grow. We estimate there are over 80 million creators and broadcasters fueling a creator economy in excess of $100 billion. As more careers are built around content creation, creators' tools become more important. We built NVIDIA Studio for these creators. NVIDIA Studio starts with RTX GPUs powering a range of laptops tailored to the needs of creators. On top of that is our Studio software stack, which includes specialized drivers and dozens of SDKs that accelerate over 200 of the industry's top creative applications. This includes Adobe Premiere, Adobe Photoshop, OBS, the number one broadcast app, and Blender, the number one 3D design app. There is a strong connection between gaming and creating: today, a quarter of GeForce gamers are also creating or broadcasting.
They value performance, investing more in graphics than other gamers. These creators and broadcasters also expand the reach of our platform, which likely extends well beyond those who share their profile. RTX AI is making creation more approachable for everyone; it is easier than ever to create like a pro. Take Notch's AI pose estimation, which lets you animate a 3D character or avatar using your body motions captured by a webcam. Or Adobe Substance, which converts a photo into a 3D texture you can add to any design. NVIDIA Canvas is our app that lets you draw a simplistic image and use AI to create a photorealistic picture. And of course there's NVIDIA Broadcast, powered by Maxine, our popular application that uses AI to enhance your video streaming with features like background and noise removal.
With billions of devices unable to play the latest games and apps, it's no surprise that cloud gaming is projected to grow to over 100 million users in 2024. Our cloud gaming strategy is to offer an RTX gaming PC in the cloud through our own GeForce NOW service and to expand its reach globally through alliance partners and third parties. GeForce NOW offers users access to our most advanced RTX gaming GPUs starting at $9.99 a month. Like on PC, we want to offer a full stack of our latest gaming GPUs, so we recently announced an RTX 3080 tier for $19.99 a month. GeForce NOW opens the PC gaming ecosystem to any client, including Android and iOS phones, an RTX gaming rig in your pocket. There are over 1,200 games onboarded from Steam, the Epic Games Store, Ubisoft Connect, and others.
We are extending our footprint through alliance partners like SoftBank, LG Uplus, and Taiwan Mobile, who operate the service regionally and help offer GFN in over 80 countries worldwide. Several marketing partners also feature GFN in their products and services, including LG, Samsung, and AT&T. Finally, we are offering RTX graphics, our game-ready software stack, and our cloud gaming expertise to third-party gaming services like Tencent GameMatrix. The cloud is not just for playing games: as announced today with GFN, we will have an expanded opportunity to offer Omniverse running on RTX to every creator on every client. Looking forward, the opportunity for gaming and graphics is almost endless. There are a ton of headlines about the metaverse, billions of dollars of virtual real estate, NFTs, and crypto economies, but one thing is very obvious.
Gaming is leading us there, and creators will build it. We are just at the beginning. The graphics required to deliver a cinematic VR experience in a massive multiplayer, physically accurate world will likely require three to four orders of magnitude more performance than our highest-end GPUs deliver today, plus continuing advancements in algorithms for rendering, physics, AI, and animation. There are three billion gamers and creators, and that number is growing. We believe over time a quarter of them will spend over $100 a year on high-performance GPUs in desktop, laptop, cloud, or console. In total, this translates to a $100 billion opportunity. The fundamental strength of gaming has never been stronger. I will now turn it over to our CFO, Colette Kress.
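As a back-of-envelope check of the gaming figure above, the inputs come straight from the remarks; note that $100 per year is the stated floor ("over $100"), so this sketch computes a lower bound rather than the headline number:

```python
# Back-of-envelope check of the gaming opportunity cited above.
# All inputs are the presentation's own figures; $100/year is the floor.
gamers = 3_000_000_000       # ~3 billion gamers and creators
addressable_share = 0.25     # "a quarter of them" over time
spend_per_year = 100         # "over $100 a year" (floor)

tam_floor = gamers * addressable_share * spend_per_year
print(f"${tam_floor / 1e9:.0f}B")  # prints $75B
```

Spend above the $100-per-year floor is what carries this lower bound toward the stated $100 billion opportunity.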
Fiscal year 2022 was a record-breaking year for NVIDIA, with revenue, gross margins, operating income, and earnings per share all achieving records. Revenue increased 61% year-on-year to $26.9 billion, driven by the incredible ramp of our Ampere architecture across our graphics and data center platforms. We achieved record revenue in gaming, data center, and professional visualization. Gross margins increased 120 basis points year-on-year to 66.8% as we benefited from gamers and creators buying up our stack. Gross margins expanded against the backdrop of industry-wide supply chain disruptions and rising costs, which speaks to the strength of our business model and execution. We drove strong operating leverage as operating income increased 87% year-on-year to $12.7 billion, and earnings per share increased 78% year-on-year to $4.44.
I'd like to talk about our market platforms and the opportunities that we see ahead of us, starting with gaming. Fiscal year 2022 was a phenomenal year, with revenue increasing 61% year-on-year to $12.5 billion, driven by broad-based strength across desktops, notebooks, and consoles. Strong demand for RTX and our Ampere GPUs helped drive tremendous unit and blended ASP growth. This is consistent with what we have observed over time: gaming revenue has grown at a 23% compound annual growth rate over the past four years, with both units and ASPs contributing, and we see these trends continuing. The universe of gamers continues to expand, and the creator economy will be further turbocharged with Omniverse now available for individuals.
With RTX-enabled content nearly ubiquitous and over 70% of our installed base yet to upgrade to an RTX GPU, we see a tremendous revenue opportunity ahead of us. Our Max-Q technology transformed the notebook PC into a new gaming device and unlocked for us one of the fastest-growing and largest PC gaming markets. Longer-term, GeForce NOW expands the reach of our GeForce platform to billions of gamers. NVIDIA is the only gaming platform to address every way a gamer plays, desktop, notebook, console, and the cloud. Over time, we see a $100 billion available market opportunity and a long runway for growth. Turning to data center. Fiscal year 2022 built upon the great momentum we saw in fiscal year 2021, with revenue increasing 58% to $10.6 billion and exiting at a $13 billion run rate.
Strong and broad-based demand for the A100 helped fuel strong revenue growth in hyperscale and vertical industries, and natural language understanding and deep recommender models are uniquely enabled by our full stack approach. The diversity, compute intensity, and latency requirements of these models also helped drive accelerating growth in inference revenue and widespread adoption of our Triton Inference Server, downloaded over one million times by 25,000 customers. Our data center business has grown at a four-year compound annual growth rate of 53%, and we entered fiscal year 2023 with great visibility into demand and supply. Announced this morning, the H100 GPU and the Hopper architecture are set to build on the incredible success of the A100 and the Ampere architecture. The H100 has a Transformer Engine to speed up transformer networks, the most important deep learning models yet invented.
These models are helping to give rise to an emerging type of data center, the AI factory, where data is the raw material and intelligence is the end product. These factories require large amounts of compute and networking and a full stack approach for both training and inference. Our Grace CPU is perfect for these environments. Our BlueField DPU rounds out the three-chip strategy and is set to ramp this year, with interest high among our CSPs and major OEMs. We also see strong interest in support for graphics-intensive workloads, from cloud gaming to virtual worlds and industrial digital twins. Just as DGX runs NVIDIA AI software for machine learning and deep learning workloads, our just-announced OVX server will run Omniverse software for processing industrial digital twins.
This is a perfect example of our ability to extend our platform and add new vectors of growth. Our large ecosystem continues to grow, helping to unlock new markets and foster adoption of our platform. Turning to professional visualization, fiscal year 2022 revenue increased 100% year-on-year to $2.1 billion. We saw strong demand from hybrid work-related deployments and the ramp of our Ampere GPUs into workstations. RTX and AI have completely revolutionized computer graphics and can drive continued growth as adoption expands within the estimated enterprise end user base of about 45 million creators and designers. Omniverse adds a tremendous new growth opportunity, and interest is high, with some of the world's leading companies, such as BMW, Siemens Energy, and Ericsson, developing in Omniverse. We have more than 700 companies in the pipeline.
Not only does Omniverse present a large software opportunity, but it will also help drive a large hardware opportunity, as Omniverse runs on NVIDIA RTX-powered desktops, laptops, workstations, and servers. Finally, turning to automotive, we believe this will be our next multi-billion-dollar business, and we're on the cusp of an inflection. Autonomous driving is a significant technological challenge, and NVIDIA uniquely enables the entire workflow. This comprehensive yet flexible approach is helping drive rapid adoption of our DRIVE platform and Orin SoC across the transportation industry, unlocking new business models for us and our customers. Our design win pipeline, measured over the next six years, is now $11 billion, up from $8 billion a year ago, reflecting our great momentum with NEVs, traditional OEMs, truck makers, and robotaxis.
Our opportunity is large, and we expect our revenue momentum to build in the coming quarters and hit an inflection point in the second half of this year. Let me talk about our software strategy. As you know, software has been integral to our platform for over 15 years, since the introduction of CUDA. It helps us enable and create new markets and is a key competitive differentiator. So far, software has largely been included as part of our broad platform offering rather than sold standalone. We have been offering some standalone software and services, including vGPU subscriptions for professional graphics and GeForce NOW subscriptions for cloud gaming, as well as software support. All in, these types of recurring software and services revenue are currently at an annual run rate in the low hundreds of millions of dollars.
Building on this foundation, we see a much larger software revenue opportunity going forward across three key opportunities. First, the NVIDIA AI Enterprise Software Suite brings NVIDIA's AI tools and SDKs to enterprise IT. It is offered as an upfront license plus maintenance or as a subscription, and it is available through our broad channel partners. Second, our Omniverse Enterprise Software Platform enables collaborative product development and operation of digital twins. Omniverse is offered to enterprise customers as an annual subscription with additional licensing opportunities for the operation of digital twins. Third, DRIVE Software revenue will be enabled by our end-to-end and full stack autonomous driving platform, as described by Ali Kani. This new business model can drive one of our largest revenue opportunities as millions of vehicles become software-defined and capable of delivering software and services over their lifetime.
Each one of these software offerings has a multi-billion revenue potential and should contribute positively to our gross margins over time. Our software go-to-market leverages many of the relationships and existing channels we have built over decades. This will help us scale these businesses with an efficiency unique to NVIDIA. I'd like to spend some time discussing our long-term available market opportunity. Keep in mind, this is not a reflection of our TAM in any given year as adoption and penetration curves can vary. For example, cars have a much longer refresh cycle than PCs and servers, so the auto opportunity will take longer to realize. For brand new businesses like Omniverse, it's hard to predict the pace of adoption, but we can help size the overall available opportunity based on the number of gamers, designers, engineers, servers, cars, and other devices that we can power with our technology.
For gaming, we see a total available market opportunity of $100 billion as we can reach gamers any way they play. There are three billion gamers globally growing every year, and we believe a quarter of them can be addressable over time. Whether we serve them with GeForce GPUs in their systems or GeForce NOW in the cloud, the per user pricing is similar and annualizes at over $100 per year. For chips and systems, we estimate a total available market opportunity of $300 billion. This spans all three processor types, GPUs, DPUs, and CPUs, as well as networking infrastructure to power AI, graphics, and high-performance computing from the cloud to the edge, including public and private clouds, enterprise and edge locations, and workstations.
Our estimates assume that, over time, all servers will be addressable with GPUs and DPUs, and a portion at the high end will be addressable with our CPUs. As Manuvir described, in a new era of AI and virtual worlds, we believe companies will direct a greater portion of their capital and operating expense budgets to their technology infrastructure. Running on top of our hardware stack are our enterprise software offerings, NVIDIA AI Enterprise and Omniverse Enterprise. For NVIDIA AI Enterprise, we estimate the total available opportunity at $150 billion, based on the installed base of enterprise servers and our per-server software pricing. For Omniverse Enterprise, we also estimate a $150 billion software opportunity, based on two components. First, a per-seat software subscription for professional designers and creators, which we estimate at 45 million seats.
Second, a per-robot software subscription for digital twins, based on more than 10 million factories and warehouses. Finally, for automotive, our opportunity reflects three components: the DRIVE software for autonomous driving, the in-vehicle hardware, and the data center infrastructure for training and simulation. The vast majority of the estimated $300 billion opportunity comes from software, for two reasons. First, our software content per vehicle can be in the thousands of dollars over the lifetime of the vehicle, compared to hundreds of dollars for the hardware. Second, software scales with the installed base of vehicles, not annual production. In total, we see a $1 trillion available market opportunity in front of us. We believe our opportunity will increase over time as we roll out new products and offerings, unlocking new markets that previously were not available or did not exist.
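The $1 trillion total can be tallied directly from the segment opportunities stated in this presentation; this is simple arithmetic over those figures:

```python
# Segment market opportunities as stated in the presentation, in USD billions.
tam_billions = {
    "gaming": 100,
    "chips_and_systems": 300,
    "nvidia_ai_enterprise": 150,
    "omniverse_enterprise_software": 150,
    "automotive": 300,
}
total = sum(tam_billions.values())
print(f"${total}B")  # prints $1000B, i.e. the $1 trillion opportunity
```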
We have been investing significantly to address this opportunity. Since fiscal year 2010, we have invested cumulatively $29 billion in research and development, growing at a 24% compounded annual growth rate. We have also invested significantly in capital expenditures, $5 billion cumulatively since fiscal year 2010, growing at a 24% compounded annual growth rate over this same period of time. We have also extended our platform and added talent through a handful of acquisitions. Given the wave of new products and offerings discussed today and the opportunity ahead of us, we will continue to scale investments to support continued strong revenue growth. Our full stack approach is not only the key competitive differentiator but enables us to innovate quickly and profitably. We have architected our software to work across all products, systems, platforms, and applications.
This single platform allows us to develop and launch new products at a rapid pace and efficiently enter new markets, as each innovation takes advantage of NVIDIA's entire body of work and go-to-market capabilities. This approach enables a business like no other, allowing us to invest at scale with confidence while also driving strong operating leverage. Compared to fiscal year 2018, fiscal year 2022 operating income increased 3.5x to $12.7 billion. Over the same period of time, operating margin has expanded 1,000 basis points to over 47%. We will continue to balance investing for growth with driving operating leverage over time. With our software-rich business model and inherent operating leverage, we generate a lot of cash. Free cash flow has grown significantly at a four-year compounded annual growth rate of 29%.
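From the figures quoted, one can back out the implied fiscal 2018 starting point. A rough sketch; the quoted 3.5x multiple is rounded, so the result is approximate:

```python
# Implied fiscal 2018 operating income from the quoted figures:
# fiscal 2022 operating income of $12.7 billion is 3.5x the
# fiscal 2018 level.
fy22_operating_income = 12.7  # billions of USD
multiple = 3.5

fy18_implied = fy22_operating_income / multiple
print(f"~${fy18_implied:.1f} billion")  # ~$3.6 billion
```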
Free cash flow growth accelerated in fiscal 2022, almost doubling to $8 billion. We anticipate significant growth in our free cash flow over time. Let me update you on our capital allocation priorities. First, after pausing for over a year, we resumed our stock repurchases this quarter. We've repurchased $2 billion. We have $5 billion remaining under our authorization through calendar year-end. Second, we plan to maintain our dividend, which is currently a use of cash of around $400 million per year. Third, we will continue to make strategic investments where it makes sense to grow our talent, platform reach, or our ecosystem. Note, however, that our number one focus will continue to be investing organically for growth. I'd like to close with our commitment to ESG.
NVIDIA is building one of the world's greatest companies by focusing not only on what is good for business, but on what is good for our employees, our partners, the environment, and society at large. We have strived to create a company and a culture where employees will want to come and stay and do their life's work. We were ranked number one on Glassdoor's Best Place to Work list for 2022. We are building Earth-2, the world's most powerful AI supercomputer dedicated to predicting climate change. We are committed to strong corporate governance with NVIDIA receiving a number of recognitions for the strength of its management team and diversifying our board. That wraps up our presentations for today. Please enjoy this short video that we have for you, and then we'll move to the Q&A portion of our event.
I am a visionary. Expanding our understanding of the smallest particles and the infinite possibilities of the universe. I am a guardian. Protecting us on all of our journeys and ensuring our most precious passengers make it home safely. I am a healer. Searching for hidden threats in every cell and delivering precise care with every breath. I am a helper. Taking on complex tasks in the most challenging environments and giving our crops room to grow. I am a creator. Transforming the very fabric of our everyday lives and using the creative DNA of the masters to inspire a new generation of art. I am a learner. Taking just minutes to discover how to crawl, walk, and stand on my own. I am a storyteller. Giving emotion to words. I am even the composer of the music.
I am AI, brought to life by NVIDIA, deep learning, and brilliant minds everywhere.
We will now start the Q&A session. If you would like to ask a question on the video bridge, please use the Raise Hand button at the bottom of your screen. When it is your turn to ask a question, you will be brought into the virtual room and allowed to speak. Our first question comes from Aaron Rakers from Wells Fargo. Aaron, please click Okay on the pop-up, and you will be admitted to the room. Please unmute yourself and start your video using the buttons in the bottom right corner.
Yes. Can you guys hear me?
Yes.
We can.
Awesome. Sorry about that. Thanks for doing the detailed presentation. I guess, Colette, the thing that most notably stands out is that you're really kind of leaning in on sizing the software opportunities that the company is developing and discussing as far as the TAM opportunity. I guess my question to you is that, you know, how do we as investors think about the progression of, I think you said $100 million or so ARR in your comments. How do you define success of that? And where do you think that the earliest success would show up? And does this become a material driver even looking through this next fiscal year? Thank you.
Great. Let me start off, Aaron. A great question to start on the software. We articulated today the many software opportunities available to us. We're talking definitely about three key areas for the enterprise: NVIDIA AI Enterprise software, Omniverse, as well as DRIVE. Today, already, we have been selling software to our enterprises, and this is $200 million today. We believe this is a growth opportunity for us, but a growth opportunity in many ways, not just on the software line, but the infrastructure that will be important in terms of building this out as well. They'll both go hand in hand.
We do believe it's an important growth driver as we go forward. I'll move to Jensen if he wants to add more.
Aaron, the important thing about our software is that it's built on top of our platform, meaning that it activates all of NVIDIA's hardware, chips, and system platforms. Secondarily, the software that we build is industry-defining software. If you understand that, you understand well that NVIDIA AI is a collection of libraries that make it possible for you to do everything from data processing to machine learning to deep learning to inferencing at hyper scale. In the cloud, there are large engineering organizations that help the clouds do that themselves. For the world's enterprises, you have to do it with them and for them, and help them maintain this really complicated AI engine across multiple platforms and multiple generations of platforms.
The amount of value that we've encoded into NVIDIA AI over the years is really quite tremendous, and that's just one. Second, as I mentioned, when you start to move into the edge or the industrial edge or what people call robotics, those systems require simulation, a digital twin, if you will, that models your products out in the field. Because if you can't do that, then you can't develop new software, optimize the software, and very importantly, do what is called continuous integration and continuous deployment so that you could deploy the software into your fleet. You have to be able to simulate the results of that deployment before you deploy it.
People are really coming to grips with the idea that if you wanna deploy AI out into the edge, if you wanna put robotics out into the world, you really need this concept of a digital twin. We're years ahead of the industry in this, and because it leverages NVIDIA's entire body of work, Omniverse is really an industry-defining piece of software. These two products, as you know, are one of a kind. They run on top of our platform and enable AI to go to the world's enterprises, into all of these industries. That's really the reason why we've productized them, built internal organizations to be able to productize them, support them, and deploy them over time. We're really quite excited about it.
Great. Thank you very much.
Thanks, Aaron.
Thank you, Aaron. Our next question comes from CJ Muse at Evercore. CJ, please click Okay on the pop-up, and you will be admitted to the room.
Hey, how are you? Thank you for today. Really appreciate it. I guess my question is on the hardware side. You know, I think you've tripled the size of the TAM there from $100 billion to $300 billion. Curious if you can kinda walk us through, you know, what you're seeing from a core GPU perspective in terms of, you know, increasing the size of that TAM, as well as, you know, what kind of assumptions you're making around penetration for both Grace on the CPU side and as well as on the DPU side. I guess lastly, the synergies that you see from offering all three pieces of silicon and how that can drive overall revenue growth as well.
Thanks so much.
Yeah. Thank you, CJ. First of all, remember that, or note that, we have the GPU, the CPU, and the DPU, the Mellanox architecture, the Mellanox platform, if you will. All three platforms are unique in their richness of ecosystem. These are not just three chips, they're three platforms. The body of work and software that's on top of each one of them, and the ecosystem of systems and servers and computers, and the partners, and the go-to-market partners, and all the third-party developers, everybody that's working on these three platforms is really unique. So I'm delighted to have three of the most important data center chip technologies in one company. However, these three platforms are wonderful all by themselves, all individually.
There are several different growth drivers for today's GPUs. The number one, of course, by far, is AI being put into operations all over the world. Inference for recommender systems, conversational AI, speech AI, natural language understanding, the number of new models that are based on deep learning is just growing exponentially. This is really the modern way of doing software development. There's no question at this point that what has now taken over the vast majority of the cloud will go forward into all of the world's enterprises. That's the number one driver, AI with all of the different models for training and for inference.
I think in our last conference call, we said several times that our visibility of data centers through the year is really excellent. That is just driven by today's continued expansion, continued use of deep learning. The new growth drivers that we talked about today, there are a couple. We spoke about, of course, the Grace CPU Superchip. No CPU has ever been designed this way. Two very powerful CPU dies that are then connected using NVLink, 900 gigabytes per second NVLink that's memory coherent, makes for one super CPU chip.
This particular CPU is going to be really great for moving data, processing data, which is really consistent with all of the core business of our company, AI, scientific computing, and in the future, Omniverse, digital twins. All of these applications are gonna benefit from a CPU that's not just incredibly good at single-threaded processing, but very importantly, moving data. It's not 50% better or, you know, something like that over the best today, but many times more memory bandwidth than what's available today. That's a new business driver for us. I also spoke about Spectrum-4. You know our business in NICs and endpoints, whether it's smart NICs or the BlueField-2 DPU, is doing fantastically.
I'm gonna add to that with Spectrum-4, which is a 400 Gb/s Ethernet switch that, in combination with ConnectX-7 and BlueField-3, turns it into an end-to-end 400 Gb/s Ethernet platform. That's going to be a major new driver for us. We're super successful already with InfiniBand. We're super successful with end-to-end InfiniBand. This is going to be a new journey for us, and I'm super excited about it. The performance is unrivaled, and the software stack on top of it is incredible. We have a new data center driver with CPUs, new data center driver with Ethernet switch end-to-end platform, and we should have some pretty exciting times ahead for data center hardware.
Thank you. Our next question comes from Vivek Arya from Bank of America. Vivek, please consent to join the virtual room. It looks like Vivek is having some issues, so we will return to him in a moment. Our next question comes from Matt Ramsay at Cowen. Oh, sorry. There is Vivek.
Hi. Thank you, Jensen Huang and Colette Kress.
Hi, Vivek.
Thanks for hosting behind the scenes. Really appreciate it. Actually, Jensen, you know, I wanted to go back to the Grace CPU, server CPU. So from what you're suggesting, you're only targeting the high end of the market, and I'm curious why only limit yourself to the high end of the market? Why not go after the cloud and the broader enterprise market as well? What's stopping you from doing that? Because do you not leave, you know, x86 competitors who can kinda come up the stack and, you know, continue to challenge you at the high end of the market? So that's kind of part A of the question. I thought I heard you say that you're using the off-the-shelf, kind of the Neoverse cores that Arm has developed.
Do you have any plans to do your own custom implementation of those cores over time that can give you a bigger competitive advantage in that market? Thank you.
The answer to the second question first, there are more surprises for Grace that will be coming out, and we'll have plenty of time to describe all the characteristics of Grace over time. Today I thought we would focus on the Superchip architecture, and it is such a fundamentally different way of designing chips and systems, and it provides incredible capabilities for us to modularize and combine and create different types of systems to diversify the platform in a lot of different ways.
The number of different types of configurations that you're gonna see from Grace Superchip, Grace Hopper and Hopper, and ConnectX-7 and BlueField-3, the combination of those chips with the switches that are behind them, the combinations and the configurations of systems are gonna be pretty staggering. I'm super excited about that, and we'll describe more about that over time. With respect to the target market of Grace, let me describe the areas we're most focused on. First of all, the CPU cores are incredible. As you saw, our estimated SPECint performance is off the charts compared to what's available today, and the CPU performance is fantastic. However, what really distinguishes Grace is a couple things.
Its memory bandwidth is unrivaled. The memory capacity and the memory bandwidth available on that capacity is like nothing the world's ever seen. Second, the energy efficiency of the entire CPU subsystem, which includes the CPUs and all of the memories associated with it and all the SerDes, is probably about 2x, maybe more, than what will be available in the market at that time. That's a giant leap in those couple of factors. As for the areas where we're gonna focus Grace initially, this is the beginning of our journey into providing discrete CPUs, and we'll have plenty of time. The market for discrete CPUs is quite segmented and quite fragmented, and so we have to respect that.
The areas where we're gonna focus also happen to be the fastest-growing segments of computing today, which is AI infrastructure. As you know, we're one of the fastest-growing data center companies in history, and yet all of that data center growth is rather new. This idea of an AI factory is a new thing that came about because of AI. This is a data center that most companies historically didn't have, and many companies and enterprises still don't have. As we grow into this new class of data centers called AI factories or AI infrastructure, this is an area that we really wanna focus Grace on. You could use it for training very large models.
You saw earlier that in training large language models, Hopper is going to be an order of magnitude higher than Ampere. The way to think about that is no one really builds data centers, AI factories, with more than a couple thousand or 4,000 GPUs today. Well, you can now extend that to 10,000, 20,000. The reason is that the efficiency, the utilization of the processor that is now made possible by the new architecture of Hopper and all the interconnects makes it possible for you to scale up your infrastructure so that you could do the training of these really valuable models from weeks to days to hours. That's just game-changing. This is one way that we're going to scale out.
The other way that we're gonna scale out is the ability for us now to build very, very dense Grace Hopper systems. As you see, it's incredibly dense. It's the most dense AI inference computer the world's ever seen, an incredibly dense server in just one superchip. That one dense server replaces about 14 servers. Each instance replaces 2 T4s, and so that's 7 instances times 2: 14 servers can be replaced by this one single superchip. Whether it's AI infrastructure for training large models or AI infrastructure for large-scale deployment of AI, we're gonna have plenty of market to go after. Just really, really giant markets to go after. That's where our focus is.
Thank you.
Thank you, Vivek.
Our next question comes from Matt Ramsay from Cowen. Matt, please join us in the room. Matt, please unmute yourself.
Yeah, yeah.
You are in the room.
Thank you very much, Colette, Jensen, and the whole team for a very helpful day.
Thank you.
I wanted to ask a couple of quick follow-up questions on the software business 'cause that was a new point of emphasis today. The first one's quick. Colette, you mentioned a couple hundred million dollars today. Could you shed any light on the growth rate of that number in the recent periods? Jensen, the longer-term question: ironically, Omniverse has, I guess, come into the investor lexicon about your company over the last six to nine months. But from some of the work that we've done, I think I'm a bit clearer on the TAM and the ability to potentially monetize Omniverse than I am maybe on the enterprise AI opportunity in software for your company over time.
Maybe you could talk a little bit about how you guys put that TAM together. I think some of that is priced on a per CPU basis today 'cause you kinda meet in the middle with the VMware pricing model. Just the inputs of how you're thinking about the sizing of that enterprise AI TAM for the company, revenue per seat, revenue per CPU, just examples of how you're gonna penetrate that market over time. Thank you.
All right. Well, let me first start, Matt, on the question of what we think software will grow moving forward. It's an important part of what we're planning in both this next year and a decade going forward. Probably the best way to think about the growth is the growth that we'll see in enterprise. Enterprise overall, hardware, overall systems, and seeing software being an important part of that complete stack that we're gonna be needing for them. It's tough to say how much the enterprise growth will be versus some of our other components, but I think it will track quite well to what we'll see in enterprise.
NVIDIA AI. Remember what's inside it. There are several stages in building an AI model and deploying an AI model. The first stage is processing the data. It's mountains of data, terabytes of data, petabytes of data, just incredible amounts of data. You have to find a way to refine that data, process the data, refine the data, clean that data, you know, augment that data. A lot of it's related to SQL when it comes to structured data. When it comes to unstructured data, a lot of it's image processing, signal processing. You're doing a lot of processing of data, number one.
Number two, you have to do feature engineering, try to figure out what are predictive features. Number three is machine learning: if you're using classical machine learning, which is the vast majority of the industry today, or graph analytics, all of that you would like to do 1,000x faster, 1,000,000x faster, because the amount of data that you have is just torrential. Number four is deep learning. Deep learning is where TensorFlow comes in, where PyTorch comes in. Then when you're done with that, you have to deploy it: inference. That entire workflow is unlike any software development that's done today.
The vast majority of the world's software development until now has been humans writing code, testing it against some, you know, dataset, or some test suite and then deploying it. That's the vast majority of development today. It's done by humans, right, developing software on laptops. Yet in the future, the way software's gonna be developed is engineers developing software on laptops, but connected to supercomputers in the back. If you look at the amount of infrastructure per software engineer at the largest internet companies or at NVIDIA, you will see that the amount of computational infrastructure beyond the laptop is enormous. That's how machine learning is done. That's how AI is done.
When you're doing this in the cloud, in the hyperscale companies, they have a lot of engineers who could do that. For the rest of the world's Fortune 100, Fortune 5,000, the other 100,000 companies around the world who need to do this and would like to do this either on-prem or at the edge, somebody has to go develop that software suite. Somebody has to take the NVIDIA AI software engines that are running in the cloud today and put them on-premises. It's really a body of software that's really quite complicated end to end. That's number one. Now, how many companies in the world will be doing data processing, feature engineering, classical machine learning, graph analytics to deep learning?
Well, I happen to believe that every company's fundamental production, fundamental output, is intelligence. A recommendation for a financial strategy, a recommendation for some health regimen, some recommendation for a therapy, it's a recommendation. In the future, almost every company will be a tech company, and every company will be an AI company producing intelligence. If so, then every company's servers will have some part of this pipeline running on them. If you wanna run that pipeline, you wanna run it well, with NVIDIA AI Enterprise. We have an engine, and the engine has a suite of libraries, that allows you to run it on every server. There are about 50 million installed servers in the world's enterprises today.
It's gonna be a lot more in the future, especially as we move on to the edge, but 50 million installed today. The way you count a node is by the CPU inside, and that's why we price per CPU, but it's basically a node. For every single node, if you want to run NVIDIA AI, we have an engine for you. That engine is per CPU, or per node, $2,000. 50 million, $2,000.
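Multiplying the figures Jensen quotes gives a quick sense of scale. A back-of-the-envelope sketch, not official guidance; note the simple product comes to $100 billion, while the $150 billion NVIDIA AI Enterprise TAM cited earlier presumably layers in additional assumptions such as installed-base growth:

```python
# Back-of-the-envelope: ~50 million installed enterprise servers
# (counted per CPU/node) times a $2,000 per-node engine license.
installed_nodes = 50_000_000
license_per_node = 2_000  # USD

opportunity = installed_nodes * license_per_node
print(f"${opportunity / 1e9:.0f} billion")  # $100 billion
```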
On top of that is all the NVIDIA SDKs, all of the other AIs and AI frameworks, and maybe it's an AI framework just for recommenders and AI framework just for speech or AI framework just for large language models or AI framework just for, you know, computer vision or robotics or whatever it is. We're gonna have a whole bunch of software on top of that, but they all run on top of that one engine, NVIDIA AI. I just gave you the NVIDIA AI story. The NVIDIA Omniverse story is really about connecting designers and artificial intelligence. It's about connecting designers and artificial intelligence. The artificial intelligence could be a self-driving car.
It could be a robot that's roaming around inside a logistics warehouse, one of the hundreds of millions of square feet of fulfillment warehouses around the world. They're just too big for humans to walk, so you're gonna have a whole bunch of AMRs move stuff around. All of those AMRs are going to be sitting in digital twins. You have to have a digital twin because you're gonna reprogram the AMRs. When you wanna reprogram the self-driving car fleet or the AMRs or the pick-and-place robots, or the last-mile delivery, pizza delivery robots, grocery delivery bots, when you wanna reprogram them, optimize the software, before you do it, you wanna see how that software build is gonna do in the real world.
You know, you don't wanna just develop software and roll it out and hope for the best. You wanna simulate it somehow in virtual reality, virtual worlds. We call that Omniverse. Omniverse is an engine for you to simulate all these different types of robots. Designers, roboticists, AI developers are gonna all be connected into this virtual world, and they're gonna develop software, optimize the fleet, optimize the factory. When they're ready, they deploy it. The way that we benefit from Omniverse is the connections of the robots and the connections of the designers.
In fact, I would expect that more things will be designed in Omniverse long term than in the physical world, because you'll have many versions of cars and houses and cities and buildings and factories and so on and so forth. The number of designers that are connected to it hopefully starts from the 50 million today, and hopefully it's a lot more. The number of robots, I think it's fairly clear now that the world will have billions and billions of robots. Not humanoid versions like us, but autonomous robotic systems that are moving around. They could even be medical imaging systems, surgical systems, you know, AMRs and so on.
We have two different industry-defining software platforms, NVIDIA AI and Omniverse. They have different business models because they're used in different ways. Both of them leverage all of our platforms, which means the entire network of go-to-market partners that we've developed over the years is super excited to take these two platforms out to market with us. We have large channels already built up. We have a large network of partners already built up. We've got a large number of third-party software developers that are hooked into it. That's one of the reasons why we're focused so intensely on these two. One more thing.
With respect to NVIDIA AI, remember, I think, Matt, it was you who asked in the beginning why we're going into it now. Remember, NVIDIA AI runs on NVIDIA gear. Even though a lot of software runs on CPUs, a lot of its most important features can only run on NVIDIA's hardware. This is the groundbreaking work that we did with Tensor Cores and our GPUs and so on and so forth. So now is really quite ideal because we've had several years, about six or seven, of building an install base of NVIDIA hardware in the world's enterprises. Remember, software wants install base.
We have the benefit now of going to market with a known large enterprise install base, and that install base hopefully doubles every year. That's the plan.
Thanks for all the thoughts, Jensen. Super helpful.
Yeah. Thanks a lot, Matt.
Our next question, apologies, comes from John Pitzer from Credit Suisse. John, please join us in the room. John, once you've joined us, please turn on your audio and video.
Perfect. Can you guys hear me okay?
Yes. Nice to see you, John.
I passed the test. Thanks, guys. Along with everyone else, thanks, Jensen, for all the information today. I'm kinda curious if you could talk a little bit about the transition from Ampere to Hopper. How quickly do you think that's gonna happen? I mean, the last couple of GTCs, as you've brought out new products, especially in the data center, incremental performance gains are not measured in percentages as much as multiples. Is there a risk that A100 demand falls off more quickly than you can ramp Hopper? Then Colette, maybe as the back half of that question, you reiterated, I think, last quarter, gross margin progression every quarter this year despite all these new product introductions. I'm just wondering if that's still the case.
We have excellent visibility into our data center business because of the breadth of AI products that we offer and the number of AI services and applications that are built on top of it that the world provisions for their own business. It is the case that when we launch a new consumer product, the transition is rather crisp. However, the world's enterprises and the cloud service providers are running their business on top of Ampere today. They've got their businesses forecasted out for some time, and their expansion forecasted out for some time. They're gonna keep on building it because every single, you know, system they put in place provisions more services and more growth and more customers.
They're anxious to get that in place and they want stability and security on the forecast they've given us. That's one of the reasons why we have so much visibility today. Now, when we first started in the data center going from Kepler to Pascal, it was super spotty. It was because the number of applications on top of our GPUs wasn't that many. When we went to Volta, that was really still kind of the beginning. You know, Volta built a great base. Ampere built a phenomenal base. And now the number of deep learning services that are sitting on top, you know, from imaging to video to language to speech to recommendation systems.
Just the recommender systems that drive the world's commerce on the internet, the number of recommenders in the world, I mean, it's not one recommender per company. It's hundreds of recommenders per company. They're recommending products and ads and things like that, right? They're recommending all kinds of things. That is so vital to their business. They forecast that out, they plan that out, and that gives us the visibility we need. Okay.
When talking about gross margin, moving forward, we've done a tremendous job with gross margin up to this point. We're probably looking even in this quarter at 67%. We know that the future in front of us is going to incorporate software standalone, which will assist our gross margins. Products and systems that in the data center can also help influence, and a right mix of growth can also influence our gross margins as well. We'll stay focused on gross margin going forward and looking for the growth from software to probably be one of the largest drivers that will increase our gross margin.
Thank you.
Thank you.
Our next question comes from Stacy Rasgon at Bernstein. Stacy, you should see a pop-up. Please present and join us in the room.
Stacy, please join us in the room.
Hi. There you are.
Hey, Stacy.
How are you?
Terrific.
Great. I have two questions, one a longer term, and one a little shorter term. For the longer term, the $300 billion in chips and systems opportunity in data center, can you give us some feeling for how you see that breaking out between the enterprise side and the hyperscale side? And I guess more generally, like, where does all of that come from? I mean, the entire server market today is, what, $100 billion, maybe networking's about the same. I guess what's just underlying that $300 billion, and how does that split out between enterprise and hyperscale?
Long term, I expect enterprise and edge to be bigger than hyperscale. I believe that there will be not just hundreds of data centers in the world, but millions of data centers in the world. I believe millions of data centers will be out at the edge, and they have to be built, designed, orchestrated like it's a cloud computer, but it's all over the place. So that you can ensure, guarantee, surely less than a millisecond of latency, and guarantee that service every single time. Not best effort, no excuses during high traffic times, because there's an industrial application connected to it. They're robotic applications.
They're working hand-in-hand or, you know, machine to machine, and they're communicating with each other, and they just can't afford to fall behind the latest drop of some, you know, new show on Netflix. I mean, that just can't happen. They're working together. They're doing important things. Humans are working among them. For that world you need data centers right at the edge. Long term, I believe that hyperscale will continue to be very, very large, of course, and it's gonna continue to grow from here, and the industrial edge will be quite large. Also remember that NVIDIA is not a chip-only company. We're a chip and systems company.
We build some of the world's largest systems, and those largest systems are not one-off supercomputers for a particular, you know, nation, but they are supercomputers that are built as AI factories. You saw recently a very large company announce a very large installation of an AI factory. It's really about processing data and trying to refine the data and trying to produce the most valuable commodity that we have, which is intelligence. We now have the ability as a form of information science to be able to harvest data, to process data, and turn it into intelligence, invaluable intelligence. I believe that kind of data center, the DGX SuperPOD type of AI factories, are gonna continue to grow.
It's already been spectacularly successful. You know, we're the only company in the world that builds that. We offer the blueprint to all of our partners so everybody can build it. But of course, we build it ourselves as well. The second thing is, remember what's inside our systems. Our systems, of course, have CPUs inside — a discrete CPU, which is a brand-new growth opportunity for us. We have NICs inside the hyperscale cloud called ConnectX-7. We have smart NICs at the edge of the cloud called BlueField. Okay? And we have the switches that connect everything together — three types of switches. Those three types of switches connect basically the end-to-end of an entire modern data center.
At the core, where the nodes want to be connected, we have this brand-new NVSwitch, a new class of switch that doesn't exist anywhere on the planet. Second, we have our InfiniBand switch, the Quantum platform. Third, we now have, for the very first time, a world-class Ethernet switch platform — absolutely world-class. These three platforms allow us to connect every company, whether it's hyperscale or enterprise, from the core all the way out to the edge. We have the end-to-end solution, we have the compute solution, and very importantly — probably most importantly — we have the software capability to glue it all together. Otherwise, how do you even assemble all this stuff? It's just way too much gear. Without software, nobody has the courage to invest $200 million in a bunch of hardware to connect it all together.
When it comes to this type of software — AI factory software — NVIDIA is singular. We are. This is our focus. All of that plays into it. I think the $300 billion basically represents all of that. Okay.
Got it. No, that's helpful. On the shorter term side, Colette, I hate to ask this question in this forum, but I've got 10 emails in my inbox from investors asking, so I'm just gonna ask it. We're about two-thirds of the way through the quarter. Do you have any updates on the quarter itself? Any changes at all that you're seeing? I apologize for asking it, but I think it needs to be asked.
Well, for your 10 emails that are out there, we don't have any update for you. We provided guidance at the beginning of the quarter. I feel that our guidance was quite solid, even amid what have been a lot of world dynamics over this period of time. At this time, no change from the guidance, nothing to add. In that perspective, we decided to concentrate here on GTC and the great announcements, and I would say status quo. All's looking fine.
Got it. That's helpful. Thank you so much, guys.
Thank you. Thanks, Stacy.
Our next question comes from Tim Arcuri from UBS. Tim, please go ahead.
Hello, can you hear me?
We sure can. Hi, Tim.
Oh, thanks. Hi. Hi, Jensen. How are you?
It's wonderful.
I had a couple questions on autos, and I know you said it's the next, you know, multi-billion dollar business on the cusp of some inflection. My two questions are, first, can you sort of help us shape the curve for that $11 billion that's in the pipeline over the next six years? I guess maybe, you know, one way I was thinking about it was if you split it sort of into two different three-year parts, is it reasonable that maybe 25% of that pipeline is in the first three years and 75% is in the back three years? That's the first question.
The second question is, it sounds like most of that is software versus hardware, so I'm wondering if you can break that down for us. Thanks.
I'll do the second one, and Colette will do the first part. Okay? The autonomous vehicle, the software-defined car movement, took one generation longer than I expected, but it is all here now. Part of it has been accelerated by a vision of what a software-defined car could do and the business models that it could enable. Every single car company in the world wants to be a high-tech company, and a tech company doesn't ship, you know, a product and never connect to it again. A technology company today is a connected device company, and the car is one of the greatest opportunities for a connected device because it stays in your connection for 20 years.
You know, once it's on the road, you're connected to it for 20 years. The install base that you could build over 20 years is incredible. I think that car companies, especially the state-of-the-art car companies — what are called the new electric vehicle companies, the NEVs — all see this. They're piling on as much computation as they can into the car because they're going to provide new services for two decades after that. It took a while, but the time is now here. They see the vision, they see the excitement, they see the opportunity for transformation, they see the business model opportunities — the economics after they sell the car are going to be way better than the economics at the point of sale.
That's the big realization. Although it took us a little longer to get here, we are all here now. Orin, which started production this month, is just a home run. It's potentially one of the most successful products in our company's history. It is singular in the marketplace. It has the benefit of all of NVIDIA's software stack that sits on top of it so that you can program all of this complicated robotic software. And it has the benefit of three other pillars. Aside from the robotics computer inside the car, you have NVIDIA's architecture to help you train the model. You have NVIDIA's architecture to help you develop synthetic data to train that model.
You have the opportunity to use NVIDIA's architecture to do the digital twin simulation so that you can orchestrate and manage your fleet. So we have four pillars of opportunities besides what goes into the computer. That $11 billion doesn't include the other three pillars. The $11 billion is just what goes into the car. Now you can imagine how big the business opportunity is for us, and the largest robotics opportunity near term. I think that answers the second part of the question. Good luck.
I think you did. The other part of your question was regarding the $11 billion and how to think about it in terms of the years. I would look at it in multiple inflection points, okay? An inflection point now as we begin the ramp on Orin — the ramp with the NEVs, the EVs, using this as a computing platform — and this is what you will see even today, even this year. The second part comes in calendar 2024, calendar 2025, when software begins. Yes, you are correct: it is over time and very much influenced by the software when that ramps. That will be a very important part of this growth toward $11 billion.
Is it reasonable, Colette, to say that like 75% of it's parked into the out period? Is that a reasonable estimate?
It's reasonable, but again, we're not done. I'm sure we'll continue to update that pipeline over time as more and more partners become locked in on this platform. For right now, yeah, it's a reasonable assumption.
Okay. Awesome. Thank you so much.
Thank you.
Our next question comes from Ambrish Srivastava from BMO. Ambrish, please go ahead. Ambrish, please unmute yourself and turn on your video.
Hi, can you hear me?
Yes, perfectly.
See me?
Perfectly.
All right. Thank you, folks. Colette, Jensen, thank you. That was very informative — a lot of information to digest. I had a question on the software side. Pardon me if I'm failing to understand the opportunity, because it's pretty big: $150 billion on both sides, the enterprise as well as the Omniverse. So the longer-term question is, how big is this opportunity today? Others are serving this, and obviously the market is nascent, but whom are you competing with to get that? Are there other players participating in the market? These are really big numbers, and big numbers attract competitors. That's what I'm trying to understand — how should we see NVIDIA's positioning?
On a more tactical note, Colette, thank you for sharing — if I got the number right — the $200 million today. At what point would you consider giving us metrics on backlog, or any other metrics you had in mind? Thank you.
There are two types of software, if I could simplify it: application software and, if you will, operating system software. In the case of a data center, the operating environment is VMware and Red Hat. The operating environment of a computer or a client computer is Windows, Apple Mac, or Android, for example. That's the operating environment. On top of it, there's an engine. That engine is, if you will, the operating environment of a domain of applications. In the case of AI applications, the engines are built on top of CUDA. As you know quite well, we pioneered this whole space.
CUDA has an engine on top of it called cuDNN, a library called TensorRT, a library called Triton, and the list goes on. Okay? There's DALI, there's a bunch of stuff inside — for doing all the things I mentioned earlier: the ingestion of data, the preprocessing and processing of data, the feature engineering, the machine learning, the deep learning, the inference. Every one of those stages of that workflow has engines and libraries associated with it. That library engine today runs in everybody's clouds. It runs in hyperscale companies all over the world. Pieces of that engine run all over the place. Up until now, and for hyperscale going forward, we'll continue to keep it as part of our product, if you will.
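The workflow described here — ingestion, preprocessing, feature engineering, training, inference — can be sketched in miniature in plain Python. In practice each stage runs on GPU-accelerated engines such as DALI, cuDNN, TensorRT, and Triton; every name and number in this toy sketch is illustrative only, not NVIDIA's actual stack:

```python
import math

def ingest():
    # Stage 1: ingest raw records (toy data: hours studied -> passed exam).
    return [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1), (5.0, 1)]

def preprocess(rows):
    # Stage 2: clean and normalize the feature into [0, 1].
    xs = [x for x, _ in rows]
    lo, hi = min(xs), max(xs)
    return [((x - lo) / (hi - lo), y) for x, y in rows]

def train(data, lr=0.5, epochs=2000):
    # Stage 3: fit a one-feature logistic regression by gradient descent.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def infer(model, x):
    # Stage 4: serve predictions from the trained model.
    w, b = model
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

model = train(preprocess(ingest()))
print(infer(model, 0.0))  # low pass probability for few hours studied
print(infer(model, 1.0))  # high pass probability for many hours studied
```

The point of the sketch is the shape of the pipeline: each stage hands its output to the next, and each stage is exactly where an accelerated library can be swapped in.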
For the world's enterprise, they will need a different level of support, because the world's enterprise doesn't have the type of DevOps and MLOps that's needed to maintain this engine. We will do that continued innovation, bringing new features and capabilities to it, updating it for new GPUs — like Hopper, which is coming out; we'll create a new version for Hopper. We'll connect their existing services to our new services, last generation to new generation. That entire body of work is fairly intensive for operating an AI factory. That work, that technology, all of those services, if you will, we embody into this thing called NVIDIA AI. Does anybody else do it today? That engine?
I think it's reflected in our success with NVIDIA GPUs in the world's enterprise for DataOps, data science, machine learning, and deep learning. We're quite successful and quite singular, as you know. This engine sits on top of our GPUs. It sits on top of our DGXs and servers and, you know, all of our in-network computing, distributed computing. It sits on top of all of that. Okay. We've now finally produced a product that an enterprise can license. They've been asking for it. The reason is that they can't just go to open source, download all this stuff, and make it work for their enterprise.
No more than they could go to Linux, download open source software, and run a multi-billion-dollar company with it. That's why Red Hat exists, that's why VMware exists, and so on and so forth. Okay? Even though we have a lot of our software in open source, the enterprises really need us to turn this into a product, support it like a product, enter into service level agreements that give them 24/7 access, teach them how to use it, and help them operate it and deploy it into their own data centers, turning every enterprise's data center into a state-of-the-art cloud. That's what they would like, and that's what NVIDIA AI is about. We have an installed base of GPUs in the world today.
We support them with NVIDIA AI, as we already are. Going forward, we've turned it into a product, a licensable product called NVIDIA AI Enterprise. Okay. As far as an alternative — you know, NVIDIA AI is really quite industry-defining, and that's the case with NVIDIA Omniverse as well. Quite industry-defining.
Ambrish, on your question about whether we see software metrics being eligible for discussion in the future: absolutely. If this is a growth driver for us going forward, we're happy to provide you insight into what drove that software growth and how to think about both the licensing and the maintenance of it going forward.
Okay. Thank you, folks.
Thank you.
Thank you. Good questions.
Our next question comes from Harlan Sur from JP Morgan. Harlan, please unmute yourself and start your video using the buttons at the bottom left corner. Please go ahead.
Hey, can you guys hear me?
Yes, perfectly.
Let me check that for you.
Can you hear me?
Yes.
Can.
Still can.
Bear with me, Harlan. I am just looking for the video feed.
Your video feed got lost on the internet.
Apologies, Harlan. We have lost the connection to the video feed. Please bear with me while I find out what is going on.
Maybe you could just ask it.
Harlan, we can hear you, so we could chat. While we're waiting, I'd like to say I thought the NVIDIA management team presentations were pretty fabulous.
Okay. Harlan, please unmute.
What an amazing management team. I love my team. So proud of my team.
Can you guys hear me?
Yeah.
We can see you too.
All right. Great. Thanks for hosting this. This is a very informative event.
Thank you.
You know, is the version of your Hopper GPU, the H100 that you announced today, optimized for your Grace CPU and other Arm-based CPUs that are currently in the market today or do we have to wait for a follow-on version of the Hopper? 'Cause Jensen, I know that you had talked previously about having an x86 optimized GPU version and an Arm-based optimized GPU version. Then outside of the early wins that you have with Grace on supercomputing platforms like Alps, you mentioned broader expansion into AI infrastructure. Would this include your successful DGX platform, maybe DGX SuperPODs powered by your Arm-based Grace architecture and Hopper GPUs in the future?
You know, our company's business is about accelerating computing, which means we like computers of all kinds, x86 kinds, Arm kinds, any kind. Okay? Wherever there's a CPU, there's an opportunity for us to accelerate that CPU. That is really the core of our business, and we'll continue to support whatever CPU the market best desires. There's all kinds of different CPUs out there for different types of configurations and different use cases, and we'll support all of them. That's kind of the nature of our company, and we'll continue to be open and support whatever the market needs.
Grace has just off-the-charts, phenomenal capability, and its performance is unlike any other for the type of AI applications that we're targeting. For large data-movement workloads, Grace is really quite ideal. For AI infrastructure — whether it's in our DGX, OEM servers, or computer makers in the cloud, wherever there's AI infrastructure — we're gonna offer Grace, as well as support for x86.
Yeah.
Let the market decide, and we're delighted by adoption of accelerated computing wherever it is and on whatever microprocessor comes along.
Thanks, Jensen.
Yep. Thanks.
We have time for one last question, and our last question comes from Atif Malik from Citi. Atif, please unmute yourself and start your video.
Hi. Can you hear me?
Yes. Nice to see you, Atif.
Hi. Thanks for taking my question. I have a question on gaming. Last year you were supply constrained. Wanted to get your thoughts on supply and demand for this year. There have been, you know, disruptions on both the supply side, with the Shenzhen lockdown, as well as on the demand side, with the Russia-Ukraine conflict impacting European gaming demand. How should we think about your supply and demand dynamics for this year? As a second part, can you talk about RTX — the install base doubled from last year, from 15% to 30% — and how should we think about your refresh of the gaming products, given rising competition from AMD's RDNA 3 and Intel Arc? Thank you.
I'll go backwards. It's hard to comment on things that don't exist, so I'll look forward to them when the time comes. With respect to the two dynamics that you mentioned, they're disproportionate by an enormous amount. Our supply constraint is by far the greatest impact of this last year, and it continues to be. There are several really terrific dynamics happening in gaming. Number one, there are more gamers than ever, as Fish was saying earlier. The way that people game is changing. Not only are they playing games for the game itself, but gaming is also a way to hang out with friends and spend time with friends.
Gaming is a form of art now, and gaming, of course, as you know, is a very important form of sports. Gaming now cuts across leisure to social to art to sports. Very few — I mean, I can't think of one right now — very few other entertainment genres are as broad and as broadly impactful. More gamers, more ways to game, and of course, very importantly, gamers don't just game, and gaming is not just about games anymore. The creative part of gaming has really done so well.
We are quite unique in our ability to serve every segment of gaming: PC desktop, PC laptop, the most successful game console in the history of game consoles, cloud gaming — first-party cloud gaming with GeForce NOW, and third-party cloud gaming partnerships. We have the ability to reach televisions and tablets and phones and PCs, whatever operating system it happens to be. Our gaming strategy is incredibly broad, and because gamers are such a creative bunch, it is so much more than gaming. Our dynamics are really great, which explains why channel inventory remains low.
We expect that for some time. With respect to RTX — it's a big deal on multiple dimensions. Number one, if not for RTX, Omniverse wouldn't exist. If not for RTX, it would not be possible to make something like Omniverse, which is a simulation. It's not pre-baked art. Everything that you see is not pre-baked. Most video games have a lot of pre-baking — it's called pre-baking. Movies, as you know, are largely pre-baked. Omniverse is real time. It's synthesized in real time. It's simulated in real time. The materials, the lights, the shadows — all of the really impressive effects that make things beautiful come about because the physics is beautiful.
RTX made that possible. RTX reset computer graphics altogether. If you look at NVIDIA's install base today, and you look at the world's install base of gaming platforms, the number with our RTX level of ray tracing is really small. In a lot of ways, we have completely reset the world's install base of hardware through this continuous invention in computer graphics. The combination of the rich dynamics of gaming and the fundamental invention of RTX has caused demand to be just, you know, through the roof during this time. I think the gaming dynamics overall are just really terrific, and I really appreciate that question.
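The pre-baked versus real-time distinction drawn here can be sketched in miniature. In the toy Python below, "baking" means computing diffuse lighting once for a fixed light and storing it in a lightmap, while the real-time path re-evaluates lighting every frame; the scene, the Lambertian model, and all values are illustrative, not how any actual renderer is implemented:

```python
def lambert(normal, light):
    # Lambertian diffuse term: max(0, N . L) for unit vectors N and L.
    return max(0.0, sum(n * l for n, l in zip(normal, light)))

# Three surface points, each described only by its normal vector.
surface_normals = [(0.0, 1.0, 0.0), (0.7071, 0.7071, 0.0), (1.0, 0.0, 0.0)]

# Pre-baked: lighting is computed once, offline, for a light fixed at bake
# time, and stored in a "lightmap"; at runtime we only look values up.
baked_light = (0.0, 1.0, 0.0)
lightmap = [lambert(n, baked_light) for n in surface_normals]

def shade_baked(i):
    return lightmap[i]  # cheap, but stale if the light ever moves

# Real-time: lighting is re-evaluated every frame, so a moving light is
# handled automatically, at the cost of doing the math per frame.
def shade_realtime(i, light_dir):
    return lambert(surface_normals[i], light_dir)

# With the original light, both paths agree...
assert abs(shade_baked(1) - shade_realtime(1, baked_light)) < 1e-9
# ...but once the light moves, only the real-time path responds.
moved_light = (1.0, 0.0, 0.0)
print(shade_realtime(2, moved_light))  # fully lit from the side: 1.0
print(shade_baked(2))                  # stale baked value: 0.0
```

The trade-off is the one described above: baking is cheap at runtime but freezes the lighting, whereas real-time evaluation lets materials, lights, and shadows respond to a changing scene every frame.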
These are all the questions we have time for today, and I would like to hand back to Jensen Huang for any closing remarks.
Thank you for joining NVIDIA GTC and our Analyst Day. I would like to say one more time what an incredible job the NVIDIA management team did. It is so fantastic to be on stage with Colette and to share the stage with the NVIDIA management team. As you could see, I'm super proud of them — and you could also see why I should be. They're incredible, and it is the reason why NVIDIA is such a great investment. From NVIDIA's management presentations, you could see the exciting growth drivers. Gaming dynamics are excellent. As I mentioned: more gamers, more ways to game. RTX has reset the gaming install base, and games are so much more than games now.
Demand continues to exceed supply, keeping the channel inventory low. We have strong demand for our data center platforms, driven by AI training and inference across all of those different models that I mentioned earlier and across just about every cloud computing company, and now going into the world's enterprise. We have excellent visibility into our data center business. NVIDIA is the engine of the world's AI infrastructure, and our software business now augments our platform — the platform that we've been developing over all these years. We've been developing software on top of it, and now we've turned it into software products that customers can license for the enterprise level of support that they desire. We're offering two industry-defining platforms, NVIDIA AI and NVIDIA Omniverse, and they both come with world-class, licensable software support.
Auto is on its way to be our next multi-billion-dollar business, and I'm super excited about the work that we've done. It took us nearly a decade to reach this point where the entire automotive industry is now ready to be a software-defined industry and become a tech industry. Today we announced and launched a giant wave of new products: the Hopper H100, the DGX SuperPOD with our brand-new NVLink Switch System, our Grace CPU Superchip and the enabling technology that made it possible — NVLink-C2C, an incredibly energy-efficient, high-speed, world-class SerDes link that is now open for our partners — and Spectrum-4, a 400-Gbps Ethernet switch. And of course, a whole bunch of software. It's software that activates all of this hardware.
It's software that connects all of this hardware to interesting challenges — groundbreaking work by developers and scientists — and of course, very importantly, to new growth for our company. With each layer of our four-layer stack, on top of our one NVIDIA architecture, we engage more opportunities. Our accelerated computing platform has now grown to 500 libraries, and we're able to serve the world's $100 trillion of industries. Thank you all for joining us today.