NVIDIA Corporation (NVDA)

Investor Day 2021

Apr 12, 2021

Hi, everyone, and welcome to NVIDIA's 2021 Investor Day. I'm Simona Jankowski from Investor Relations. I'd like to welcome all of you joining us on the webcast and hope to see many of you in person for future events. Let me quickly walk through the Safe Harbor before getting to today's agenda. We will be making forward-looking statements regarding our expectations and other future events, which may differ materially from NVIDIA's actual results. Please refer to our SEC filings for a description of our businesses and associated risks and other factors which could cause our results to differ materially from these statements. All our statements are made as of today, April 12, 2021, based on information currently available to us. Except as required by law, we assume no obligation to update any of these statements. Also, if we use any non-GAAP financial measures, you'll find the reconciliations to GAAP on our IR website. Today's agenda includes presentations by Jensen Huang, Jeff Fisher and Colette Kress. Once we finish the presentations, you will have an opportunity for Q&A with Jensen and Colette. And now please join me in welcoming NVIDIA's Founder and Chief Executive Officer, Jensen Huang. Good morning. I hope you are enjoying GTC 2021. It is the best ever: over 180,000 registered attendees and 1,600 talks from Turing Award winners, Gordon Bell Award winners, Kaggle data science grandmasters and renowned AI researchers, including Yoshua Bengio, Geoff Hinton, Yann LeCun, Jürgen Schmidhuber and so many others. 1,600 talks highlighting research on accelerated computing: AI, 5G, quantum computing, natural language processing, recommender systems, self-driving cars, healthcare, cybersecurity, robotics, edge IoT and other incredibly diverse fields. There are many vertical industry tracks, so be sure to tell your colleagues covering healthcare or transportation or retail, so they know to listen to the talks. We announced some important things. 
Let me describe how they fit into our strategy and highlight some key points. Omniverse is a platform to create and simulate shared virtual 3D worlds. It connects to other worlds using USD, Universal Scene Description, the HTML of 3D worlds if you will, and it simulates in a physically accurate and photorealistic way. With Omniverse, you can connect designers using different tools into one world, a shared world, to create a scene or a game. With Omniverse, you can also connect robots and AI characters performing various tasks into one world, a shared world, like the BMW virtual factory. Omniverse is the foundation platform of our AR and VR strategies, our design and remote collaboration strategies, our metaverse virtual world strategies, and our robotics and autonomous machine AI strategies. You're going to see a lot more of Omniverse. It's a really important platform. We announced the new DGX Station and the new DGX SuperPOD, the world's first cloud-native supercomputer. We announced three important new SDKs. Megatron trains giant transformer language models, and Naver, the number one internet company in Korea, is an excellent example of why domain- and region-specific language model development on SuperPODs is necessary and a trend. NVIDIA Clara for computational drug discovery announced new algorithms and partnerships with Oxford Nanopore, Schrodinger and Recursion; there are so many more. We announced cuQuantum, our quantum circuit simulation platform; running on DGX, it simulates in days quantum circuits that would otherwise take far longer. Researchers in industry, national labs and academia around the world are investing hundreds of billions in quantum research. 
cuQuantum will benefit designers of quantum computers, those designing hybrid GPU-and-quantum architectures, researchers inventing new quantum algorithms, like cryptographic algorithms for the post-quantum world, and scientists who are simulating science with quantum physics. We announced BlueField-3, our next-generation data center infrastructure computing platform, isolating the control plane from the application plane to enhance security, and offloading and accelerating virtualization, networking, storage and security. A third of today's processing in software-defined data centers is already in the infrastructure software, and it will grow substantially with zero-trust security models. The time for BlueField has come. We announced DOCA 1.0, the software stack for data center infrastructure computing. DOCA will be for BlueField what CUDA has been for our GPUs. We announced Grace, our first data center CPU, designed especially for giant-scale AI and HPC. Grace has 30 times the system and memory bandwidth of our DGX, the fastest computer in the world today, just to put that in perspective. We expect to see an order-of-magnitude leap for giant AI models like Megatron transformers. We will sample next year and ship in 2023 in DGX and in the giant Swiss supercomputer called Alps, which is going to be 10 times the AI performance of the fastest supercomputer in the world today. One of the most important announcements is NVIDIA EGX for enterprise, with Aerial for the 5G industrial edge. We believe the enterprise industrial edge will be where AI makes the biggest impact: in healthcare, warehouse logistics, manufacturing, retail, agriculture and transportation, the world's largest industries. We announced new customers of DRIVE Orin, a new chip called Atlan, and a reference AV car computer system called Hyperion 8. 
The announcements reflect the most powerful dynamics shaping these industries today. Accelerated computing is the path forward; not only has it kept computing performance from plateauing, we've set computing on a new, supercharged curve. AI is software that writes software no human can. It has profoundly changed how software is developed and opened new opportunities to automate tasks never before possible. The data center is the new unit of computing. The software-as-a-service trend has caused software to be refactored into disaggregated microservices that can easily scale out and run across the entire data center like it's one computer. AI and 5G are the two critical technologies that enterprises of the world's largest industries need to deploy new products, services and business models. The era of robotics is here. The first significant example is self-driving cars. Thousands of companies around the world are building robotic services. One of the most important missing technologies is finally here: Omniverse, a virtual world simulated so accurately that robots can learn to be robots. Grace is our first data center CPU and is designed especially for giant-scale AI and HPC. In addition to chips from industry partners, we now have the essential three chips needed to innovate in a diverse range of computing, from cloud data centers to the 5G industrial edge. Our roadmap rhythm will combine CPU, GPU and DPU to deliver a big boost each year. We will support both x86 and ARM, and many CPU designs optimized for diverse segments of computing from Intel, AMD, Amazon and others. As you see in my talks and announcements, I'm a big fan of ARM: their CPU architecture and their open licensing business model. We believe there are a great many new market opportunities for ARM. To engage them, we will have to develop new platforms, new ecosystems and new markets. These are things at which NVIDIA excels. We announced several partnerships to expand Arm's ecosystem and opportunities. 
We announced that AWS and NVIDIA are building cloud computing platforms with Graviton2 and NVIDIA GPUs. We announced that Ampere Computing and NVIDIA are building reference platforms for AI, scientific computing and cloud. We announced that Marvell and NVIDIA are building reference platforms for the 5G edge and cloud. And we announced that MediaTek and NVIDIA are building reference platforms for PC and mobile devices. Our combination will accelerate and expand ARM's opportunities. ARM and NVIDIA are great businesses separately. Together, we will create new growth platforms for our partners and build a premier computing company for the age of AI. We're working with regulators in the U.S., Europe and Asia to explain our vision for Arm and obtain the necessary approvals. Discussions with regulators are proceeding as expected and are constructive. We continue to expect closing in 2022. AI, the automation of intelligence that can operate and scale out at the speed of light, is the most powerful technology force of our time. We're expanding the reach of AI in four waves. The first is to reinvent computing and software for AI. This led to the invention of NVIDIA DGX, the Tensor Core GPU and the NVIDIA AI platform. The second wave is cloud and CSP adoption. The drive to maximize cloud throughput, flexibility and utilization led to the invention of the Ampere universal GPU for training and inference, the TensorRT optimizing compiler, the Triton Inference Server, MIG multi-instance GPU, RAPIDS data processing, integrated cloud graphics, and a global team of accelerated computing software experts working with CSPs. We've created so much to enable AI in the cloud. The next waves of AI are big: the enterprise industrial edge and robotics. EGX is our AI platform for the enterprise and industrial edge. AGX is our robotics computing platform for autonomous machines. NVIDIA EGX is the AI platform for the enterprise and industrial 5G edge. 
Every layer of the computing stack required massive engineering, and a large ecosystem of partners was brought together so EGX can seamlessly integrate into the world's enterprise IT infrastructure. From the bottom up: 14 global computer makers are offering 55 new high-volume servers designed for the world's enterprises, integrating our new A10, A30 and Aerial A100 GPUs optimized for enterprise data center environments. A10 is optimized for AI as well as graphics. A30 is optimized for AI and compute. Aerial A100 combines the A100, BlueField-2 and our Aerial 5G vRAN stack. All of these servers are NVIDIA-Certified to run NVIDIA AI, NVIDIA Omniverse and NVIDIA-optimized VMware vSphere. Together with VMware, we did some great computer science. We engineered vSphere to incorporate NVIDIA Tensor Core GPUs and NVIDIA BlueField technologies so that NVIDIA AI can achieve bare-metal performance in a VMware virtualized environment. This is important because 80% of the world's enterprises run VMware. We're offering NVIDIA AI Enterprise, a major software product with enterprise-grade service levels for mission-critical AI operations. And to run on top of NVIDIA AI, the NVIDIA AI OS if you will, we have a suite of NVIDIA pre-trained models: state-of-the-art AI models designed for production and performance-optimized to deploy. There are many personas of AIs that we offer, but think of it as sight, speech, language, animation, understanding and recommendation AIs. If NVIDIA AI is the OS, the NVIDIA pre-trained models are the application suites. Let me highlight one of the new pre-trained AIs: Jarvis. Jarvis is an interactive conversational AI, a state-of-the-art deep learning model, end to end, with a 100-millisecond, blink-of-an-eye response time and world-class results in speech recognition and translation. Jarvis supports several languages today: English, Spanish, French, German, Japanese and Russian. 
Pre-trained models are accompanied by some great tools: NVIDIA TAO, to customize Jarvis to the language or lingo of your domain, be it healthcare, technology, insurance, retail or customer service of some kind; and NVIDIA Fleet Command, to securely deploy and manage your AI on your fleet of EGX computers. End to end, top to bottom, complete. And this is most important: Jarvis and the NVIDIA pre-trained models run in any cloud and now on-prem and at the 5G edge on NVIDIA EGX. If you deploy to the 5G edge, adding the Aerial A100 GPU with our Aerial 5G vRAN stack turns the EGX server into a 5G base station that runs AI services like the cloud. We brought the power of the cloud to the edge. I'm delighted to see the IT industry join us to bring the EGX AI platform to the world's industries: Dell, Atos, HP, VMware, Red Hat, Google Cloud, Splunk, Cloudera, OmniSci, Check Point, Fortinet, and in 5G, Ericsson, Mavenir, Fujitsu and so many more I didn't list. I appreciate all of your support. You may have noticed that the leading cybersecurity companies are also joining our platform. NVIDIA Morpheus, our real-time all-packet inspection platform, is really exciting and really essential in this future zero-trust security model. Autonomous driving is one of the first mass-market robotics applications. It is also one of the most intense machine learning applications and requires decade-long investment. NVIDIA's strategy has three central pillars. First, build an end-to-end, full-stack DRIVE service that we will operate with automakers. Second, build an open platform of DRIVE building blocks for the entire transportation industry to build AVs. And third, generalize the learnings from DRIVE to create other machine learning applications like robotics and edge AI. On the left is the pipeline, an incredible thing that autonomous driving requires. The Hyperion 8 AV system and BB8 test car collect data that our data factory processes and labels into training data. 
Labeled data, together with synthetically generated data from Omniverse, trains our models. The DRIVE AV application runs in an Omniverse DRIVE Sim simulation on real car computers, but in our data center; these computers are called Constellation. The software is OTA'd into the car with Fleet Command and tested in BB8. Effectively, a massive AI computer and a fleet of AV computers are running continuously in a loop, improving continuously. This is the machine learning loop. All of this work will go into the 2024 Mercedes-Benz EQS fleet. On the right is our AV computer roadmap. The growth in computing is staggering, and needed. We announced Atlan, our car computer for the 2025 generation. It's 1,000 TOPS in a chip and fuses our most advanced GPU and AI technologies, the BlueField-4 data center technologies, the most advanced ARM server-class CPUs, years of functional safety and security expertise, and tens of thousands of engineering-years of software. All software developed on Xavier runs on Orin and then on Atlan: one architecture. Our car computing platform offers carmakers one consistent and programmable installed base that they can leverage software investments across, and a growing installed base that they can deliver value to for the life of the car. We announced Hyperion 8, a fully production-ready, functionally safe and secure car computing platform, including sensors, network, computer and software for data collection, testing and production. Hyperion 8 is compatible with NVIDIA's entire DRIVE AV stack. Hyperion 8 is a standard reference AV car system that can fit into most vehicles. Hyperion 8 is like the PC ATX motherboard, a standard reference that invigorated and accelerated the PC industry. We announced more Orin customers. The transportation industry is quickly becoming a technology industry. Carmakers now realize something very important: the car is more than the vehicle; it is an installed base, an installed-base platform that they own. 
And if the computer is powerful, it will host valued software services they invent for years to come. This is a giant new observation, and it's transforming the industry. A few million cars of installed base on the road for 15 to 20 years, at even a few hundred dollars of services each year, represent an immense value-creation opportunity. Putting an Orin, or a few, into the car is the best way to build a valuable installed base. NVIDIA is an accelerated computing company and innovates across the entire stack. NVIDIA is built as an open platform with three layers: chips and systems, software platforms, and application frameworks. With the global NVIDIA partner network connected to the layers of our platform, we serve customers and markets from the layer best for them. Each higher layer offers an order of magnitude more opportunity for our company. Our chips-and-systems market opportunity is profoundly greater with NVIDIA AI and NVIDIA Omniverse. Our company's market opportunity is profoundly greater by offering DRIVE as a service to a 10-trillion-miles-a-year industry and Jarvis to automate hundreds of billions of hours of spoken language each year. All of this is built on one architecture, each layer leveraging and enhancing the layer below. We had a great GTC with lots of new products, new partners and new markets. These are the points that I'd highlight. BlueField and Grace make NVIDIA a three-chip company able to do data-center-scale computing. NVIDIA EGX is our new AI platform for the enterprise and industrial 5G edge. NVIDIA AI Enterprise is our new software product; others are NVIDIA vGPU, NVIDIA Omniverse, Base Command and Fleet Command. And I'm looking forward to NVIDIA pre-trained models like DRIVE and Jarvis going into production services. NVIDIA is a computing platform company, innovating across three chips and a full stack of three layers. Jeff Fisher will be up next to talk to you about gaming. So with that, I'll hand it over to Jeff. 
Thanks, Jensen. Welcome, everyone, to Analyst Day 2021. I'm excited to give you an update on our gaming business. Every person born today will be a gamer. That's 140 million more gamers this year alone. As gaming has become a pastime for millions more people, it has transformed into one of the largest and fastest-growing forms of entertainment. Gaming is no longer just about playing a game. Gaming is an immersive social network, connecting like-minded people and building lifetime friendships. Discord, which began with a mission to connect gamers, has grown into a social network and has more than doubled its active users to 140 million since 2018. Gaming is a sport. Esports have fueled a new generation of e-athletes who compete for bragging rights and have spawned professional careers. 75% of GeForce gamers play esports. Esports' influence is growing, and the world is tuning in. In the past two years, the esports audience has grown by 75 million, totaling 436 million viewers. Gaming also offers interactive storytelling. Why watch a movie when you could play a movie that's as cinematic and rich as anything produced in Hollywood? And gamers want to share their passion for gaming, live-streaming their gameplay and creating content for others to watch. In 2020, 100 billion hours of gaming content was watched on YouTube. That's twice that of 2018. The growth of gaming is also evident on Steam, one of the top destinations for gamers, where 120 million people played every month in 2020, with 25 million peak concurrent gamers. That's up 1.5 times from 2018. In that same time, the Epic Games Store has grown from its launch in December of 2018 to 160 million PC gamers. Gamers are demanding much more from their hardware as well, so our GeForce gaming platform continues to deliver more value. For example, this past year, we launched DLSS 2.0. DLSS uses AI to deliver a significant performance increase in games. 
Running on RTX Tensor Cores, DLSS can increase frame rates by up to 100%. Press and gamers have widely recognized DLSS as a must-have feature for gaming. We also delivered a powerful weapon to esports gamers: NVIDIA Reflex. System latency is a killer, literally, for competitive gamers; shots need to be spot-on, not a few frames behind the opponent. In the case of Blizzard's Overwatch, which recently integrated NVIDIA Reflex, Reflex reduces system latency by 50%. Now 8 of the top 10 competitive shooters have integrated NVIDIA Reflex. And last, we brought AI to streamers and video content creators with NVIDIA Broadcast. NVIDIA Broadcast can turn any room into a broadcast studio. 2020 was an exciting year for our gaming business. Two years ago, we introduced a breakthrough in graphics: real-time ray tracing and AI-based DLSS. We called it RTX. In 2020, we doubled down. We introduced the Ampere architecture. Ampere featured a new shader design, a second-generation RT Core for ray tracing and a third-generation Tensor Core for AI. It was our biggest generational leap ever, and gamers who were waiting to upgrade to RTX jumped in. Combined with strong gaming market fundamentals, the increasing production value of AAA games, global esports growth and an increasing number of creators and streamers, we delivered a record year. Our 5-year gaming GPU CAGR is 21%, with growth in units shipped and ASP. Looking forward, RTX represents a huge reset of the installed base. RTX was featured in the biggest games of last year, including Cyberpunk 2077 and Watch Dogs: Legion, and in massive hit titles like Fortnite, World of Warcraft and the best-selling game of all time, Minecraft. And if you haven't seen some of the beautiful RTX worlds that Minecraft gamers are creating, do a quick Google search for RTX Minecraft. Trust me, you will be amazed. With every major operating system, game engine and console supporting ray tracing, and a strong pipeline of games on the horizon, RTX is the new standard. 
Looking just at our installed base of 140 million GeForce gaming GPUs, 85% were designed to run traditional games. Turn on ray tracing, and, as in this example from the hit game Control, the games become unplayable on these GPUs. RTX, with RT Cores and AI-based DLSS, delivers beautiful ray-traced games at terrific performance and then some. And the upgrade opportunity extends well beyond this, considering the hundreds of millions of gamers playing on a wide range of hardware; just look at Steam for a peek at this hardware profile. We believe RTX is at the front end of a major upgrade cycle. And we are off to a great start. The excitement was really high prior to the Ampere launch; if you recall, rumors were all over the Internet. Google searches for NVIDIA RTX were up 6 times compared to those around the Turing launch. Since the RTX 30 Series was pulled from Jensen's oven last September, sales have been off the charts. With new buyers and those upgrading, sales have outpaced prior generations by 2x. These are end-market sales. While demand continues to outstrip our supply, gamers are getting their hands on Ampere. Ampere's share on Steam is twice that of Turing's at the same time after launch. More important, as the GeForce gaming platform continues to add more value, gamers are buying up the stack each generation. The chart on the right shows the end-market ASP of our desktop stack, calculated for the six months after each architecture launch. In effect, today's desktop card ASP is $360, based on MSRP. That's 20% higher than at the same time after the Turing launch. And I believe there's plenty of ASP headroom, especially compared to what gamers are paying for consoles, which start at $500. Now let's talk about the fastest-growing gaming platform: gaming laptops. This past January at CES, we launched the RTX 30 Series for laptops. With 70 new models from every OEM, starting at just $999, this was our best laptop launch ever. 
Laptops represent a major growth opportunity, as new buyers are choosing gaming laptops to learn, work and play. Fueling this growth is Max-Q. Max-Q is a system design approach that delivers high performance in thin-and-light gaming laptops. It has fundamentally changed how laptops are built. Our third-generation Max-Q was introduced with the Ampere architecture. It includes Dynamic Boost 2.0. For the first time, Dynamic Boost uses AI to dynamically shift power between the GPU, CPU and system memory, depending on where it is needed most. And most important, it shifts power away from where it isn't needed. Max-Q also features Resizable BAR, which enables more efficient memory access to boost performance while using no more power, and an updated Whisper Mode, which maintains high system performance while minimizing acoustics. Over twice the number of Max-Q models are shipping this year. 2021 will offer the thinnest, highest-performance gaming laptops ever. And I've got mine right here: an RTX 3070 high-performance gaming laptop, thin and light. This follows years of strong growth at over a 20% CAGR. Gaming laptops are outpacing the consumer laptop market and outselling the most popular game console. And our gaming business is more than just playing. The creation of digital content is exploding. For the 45 million creators and growing, we launched NVIDIA Studio. NVIDIA Studio is our accelerated platform for creators, speeding up ray tracing and AI in over 60 creative and design applications, including Adobe Photoshop, DaVinci Resolve and Blender. Studio includes dedicated Studio drivers that we release monthly; they offer new features, faster performance and enhanced stability for creative applications. Since the launch of Studio in 2019, there have been over 100 purpose-built Studio laptops and desktops from every major OEM. Our Ampere architecture takes NVIDIA Studio to the next level. 
Rendering is up to 5 times faster than Pascal. Video editors can now use AI to simplify workflows. Video encode times are reduced by up to 75%. RTX will change the way creators work. We estimate there are over 30 million streamers globally. On Twitch alone, the number of streamers more than doubled over the past year. In China, streamers are becoming a new e-commerce channel, selling gaming hardware direct to their followers, including GeForce RTX GPUs. Sharing your gameplay while streaming requires high-performance hardware. Many enthusiasts use two PCs: one to play and the other to broadcast. NVIDIA Broadcast paired with GeForce RTX solves that. Broadcast uses AI to eliminate background noise and to green-screen your background. And the powerful video encoder in RTX is capable of streaming video without impacting your gameplay. NVIDIA Broadcast works seamlessly with all the popular streaming and video conference apps. It turns any room into a broadcast studio, and all you need is an RTX GPU. VR has long been viewed as the next big thing for gaming. With the availability of lighter, more capable headsets like the Quest 2, Valve's Index and HP's Reverb G2, along with compelling new games like Microsoft Flight Simulator in VR and Valve's Half-Life: Alyx, which drove a 71% increase in VR game sales on Steam, VR is coming into its own. It's reported that Facebook has over 10,000 people working on augmented and virtual reality. And as you heard today, NVIDIA is building Omniverse, our portal, and a VR portal, into a metaverse. VR headsets have twice the resolution of a desktop gaming monitor and demand very high, smooth frame rates. VR is very unforgiving from a performance perspective. This requires a very high-performance GPU and provides more motivation for gamers to upgrade. Over 30 million PC-capable VR headsets are expected to be sold in the next 5 years, and from what you've heard today, that could be conservative. This past year, we officially launched GeForce NOW. 
10 years in the making, GeForce NOW leverages our GeForce PC platform into a cloud gaming service. Gamers on underpowered PCs, Chromebooks and mobile devices can effectively subscribe to a virtual GeForce PC in the cloud. And the PC ecosystem is coming along with us. Gaming stores and publishers like Valve, Epic and Ubisoft see the opportunity that we see to reach billions more gamers. And we are not building it alone. Our strategy is to team up with ISPs and telcos, our GFN Alliance Partners, around the world to offer GeForce NOW to their subscribers. Alliance Partners manage the infrastructure, we operate the service, and we share the revenue. This will become increasingly more powerful as 5G blankets the world. With the high-resolution, low-latency requirements of gaming, cloud gaming is the killer app for 5G. Today, GFN has passed 10 million registered users and offers 1,000 instantly playable games, with many more coming online every Thursday in our GFN Thursdays. Tune in and watch for that. GFN is offered in 27 countries, and today I'm excited to announce that we are adding South America to the list for 2021, including our most requested country and the one I'm most excited about: Brazil. Brazil is traditionally a low-end hardware gaming market, and they are going to love GFN. Brazil has 95 million gamers alone. Over time, I see GFN extending beyond gaming to all kinds of interactive experiences. This past Sundance Film Festival, GFN hosted Disney's interactive short. To wrap up gaming: we see a long runway for growth. RTX resets everything and will drive a major upgrade cycle. The entire PC ecosystem needs to upgrade, including 85% of our gaming installed base, as the value of our platform grows. Max-Q makes laptops thinner and lighter, and new buyers are choosing gaming laptops. RTX and RTX laptops will power the 100 million and growing number of e-athletes, creators, streamers and virtual reality adopters. 
Last, GeForce NOW continues to scale up. The PC ecosystem is coming along with us. GeForce NOW gives us the opportunity to extend our PC platform to billions more gamers. I want to thank you all for joining us on Analyst Day. And with that, let me hand it over to Colette Kress. Good morning, everybody. Fiscal year 2021 was a record-breaking year for NVIDIA. We achieved record revenue and EPS, launched our Ampere architecture for both gaming and data center into incredible demand, completed the acquisition of Mellanox, and announced our transformative acquisition of Arm. Let's first look at some high-level highlights of our P&L. Our fiscal year 2021 revenue increased 53% year on year to $16.7 billion, fueled primarily by the tremendous ramp of the Ampere architecture across our data center and gaming platforms. We grew our non-GAAP gross margins by 310 basis points as data center increased as a percentage of revenue. We also demonstrated strong operating leverage: our non-GAAP operating income increased 82%, and our non-GAAP EPS increased 73% year on year to $10 a share. Let's turn to the performance of our market platforms. Our gaming business grew 41% year on year to a record $7.8 billion in fiscal year 2021, with broad-based strength driven by growth in our desktop, notebook and console businesses. We reinvented graphics with the launch of our RTX 30 Series GPUs, and the demand has been off the charts, resulting in the fastest launch in the company's history, twice as fast as Turing. Gaming revenue has seen a 4-year compounded annual growth rate of 18%, driven by a combination of GPU unit and blended ASP growth. Unit growth has been driven by the expanding universe of gamers, our phenomenal success in growing gaming laptops and strong demand in our console business. 
Blended ASP growth has been driven by gamers buying up the stack as they adopt new features and capabilities like NVIDIA ray tracing and DLSS, and as the production value of games continues to increase. Turning to professional visualization: our revenue declined 13% in fiscal year 2021 to $1.1 billion as, we believe, enterprises deferred purchases due to the pandemic. Despite these headwinds, we grew revenue at a 4-year compounded annual growth rate of 6% as we benefited from the continued growth in the number of GPU-accelerated applications and as RTX technology gains adoption. Automotive revenue declined 23% year on year in fiscal 2021 to $536 million due to lower global production volumes and the expected decline in infotainment revenue. We grew revenue at a 4-year compounded annual growth rate of 2%, as strong growth in our autonomous and AI cockpit solutions was largely offset by the declines in infotainment. Autonomous solutions and AI cockpit are approaching two-thirds of our automotive revenue, and we expect the mix to continue to shift to these businesses over time as substantial wins in these areas ramp in the coming years. Data center had a record year, with revenue increasing 124% year on year to $6.7 billion, including almost 70% growth for data center compute. The A100, based on our Ampere architecture, delivers up to a 20 times performance increase versus the prior generation, our largest generational leap ever. It delivers high utility driven by its unified architecture, allowing it to process numerous workloads, including training, inference, data analytics and graphics. Its MIG technology allows the A100 to efficiently scale up for demanding training workloads and scale out for high-volume inference use cases. These advances of the A100, combined with the forces of AI and cloud computing, drove strong demand in fiscal 2021 across hyperscale and enterprise customers. 
From a workload perspective, we saw strength across training and inference as the exponential increase in AI model complexity and compute requirements drove demand for NVIDIA accelerated compute and networking products. Finally, Mellanox had an outstanding year, with growth stemming from hyperscale, supercomputing, and AI customers. From a product perspective, Mellanox saw strength across its Ethernet and InfiniBand offerings. In total, we have seen tremendous growth in data center, with a four-year compounded annual growth rate of 69%. AI is the most powerful technology force of our time, and we see a long runway of growth ahead of us. As you know, NVIDIA is a full-stack computing platform company spanning silicon, systems, and software. So far, our software has largely been offered as part of the platform and not directly monetized on a standalone basis. Jensen earlier discussed our three-layer model of customer engagement. This conveys our go-to-market strategy as well as our growing revenue opportunity as we move up the stack to offer software commercially. This helps unlock large new market opportunities and will add recurring revenue to our P&L over time. Let me now highlight one such opportunity. We recently announced NVIDIA AI Enterprise, a comprehensive suite of enterprise-grade AI software that speeds development and deployment of AI workloads and simplifies management of enterprise AI infrastructure. Through our partnership with VMware, hundreds of thousands of vSphere customers will be able to purchase NVIDIA AI Enterprise with the same familiar pricing model that IT managers use to procure VMware infrastructure software. NVIDIA AI Enterprise software is offered as a perpetual license per CPU socket with annual maintenance. We also offer the software suite as a subscription. We believe the NVIDIA AI Enterprise software represents a multibillion-dollar opportunity.
NVIDIA AI Enterprise, combined with our EGX enterprise platform, is democratizing AI and helping to bring NVIDIA AI and accelerated computing to the world's largest industries. Our EGX platform is gaining rapid adoption among enterprise customers such as Lockheed Martin and Mass General Brigham. Additionally, we are supporting these systems with powerful processor roadmaps, as shown with the launch of our A30 and A10 GPUs, our Aerial A100 platform, and our BlueField DPU roadmap. This powerful enterprise and edge computing platform positions NVIDIA for the next wave of AI adoption, which will be driven by the vertical industries. Let me shift gears to our automotive opportunity. The NVIDIA DRIVE platform is seeing wide adoption across the transportation industry, which will create a significant software component. Today, Volvo Cars announced that it will build next-generation vehicles on NVIDIA DRIVE Orin. This further extends our partnership with Volvo to now include more software-defined vehicles in its lineup beginning in 2023. Volvo joins other established OEMs such as Mercedes, Audi, and Hyundai, who are all developing on NVIDIA DRIVE. Additionally, a wide range of autonomous vehicle companies are developing on NVIDIA DRIVE, including many trucking and robotaxi companies. We have great momentum across new electric vehicle makers such as NIO, SAIC, Xpeng, Li Auto, Faraday Future, and others. These automakers are harnessing not only the new compute horsepower of our SoCs and GPUs, but also their incredible energy efficiency. Last year, we announced a landmark partnership with Mercedes-Benz, which will adopt NVIDIA's full-stack DRIVE platform to enable their entire fleet of vehicles to be software-defined and perpetually upgradable. This deal was transformational in that, in addition to the hardware, it includes a revenue-share component for the software sales that Mercedes will make on their fleet of connected vehicles, such as autopilot.
Vehicle owners will be able to purchase over-the-air software and service offerings to enhance the capabilities of their vehicles and increase the joy of driving. With software content potentially in the thousands of dollars per vehicle, this could be a multibillion-dollar revenue opportunity for both Mercedes and NVIDIA. Not only is this deal transformational for NVIDIA's business model, but for the auto industry at large. There are 100 million cars sold each year globally, and over time, we believe all vehicles will be autonomous, software-defined, and upgradable. We see the potential for similar deals. This large and growing list of wins across the transportation industry is set to ramp in the coming years. We have over $8 billion in automotive design win pipeline through fiscal year 2027, with a good amount of this revenue expected to ramp in the latter part of this time frame. I talked earlier about the revenue growth that our data center and gaming market platforms experienced over the last few years. Over the past four years, our data center business has grown at nearly three times the rate of the company's 25% compounded annual growth rate. This mix shift has led to a favorable expansion of our gross margins, as our fastest-growing business is also our highest-margin business. Given the secular forces of AI and cloud computing, combined with this rapid adoption of our computing and networking platforms, we believe this trend can persist. Additionally, the gross margin profile of our gaming GPU business has increased over time as gamers have bought up our stack, resulting in rising blended ASPs. We expect this trend to continue over the coming years as gamers upgrade to Ampere, RTX continues its rapid pace of adoption, and the overall production value of games increases. Overall, our gross margin has increased from 59.2% in fiscal year 2017 to 65.6% in fiscal year 2021. We continue to see uplift to our gross margin profile as our mix shifts. Furthermore, software is a significant opportunity for NVIDIA.
This revenue, as it scales, will provide an additional tailwind to gross margins. As discussed, our non-GAAP gross margins have increased 640 basis points over the past four years, and our non-GAAP operating margins have increased at an even faster rate, growing from 32.1% in fiscal year 2017 to 40.8% in fiscal year 2021. We delivered significant operating leverage even as we invested heavily across the market platforms to support the growth we have demonstrated over the years. As we look to take advantage of the material opportunities that lie ahead of us, we have a single architecture across all of our platforms that forms the basis for our product offerings and innovations. Our one-architecture approach provides us leverage and core IP to innovate across the entire technology stack. From silicon, systems, and software to compute, networking, and storage technologies, we are able to rapidly develop new products. This in turn fosters increased adoption of our platform and helps to drive revenue growth and our ability to evolve. This flywheel of innovation is accelerating, as seen in the incredible breadth and depth of products and technologies found in the many new announcements discussed today. NVIDIA's unique business model allows us to innovate like no other company while driving attractive economic returns. Going forward, we believe we can continue to drive revenue and earnings growth. Over the past years, this has also resulted in a material increase to our cash flow generation, with cash flow from operations growing from $1.7 billion in fiscal year 2017 to $5.8 billion in fiscal year 2021, a 37% compounded annual growth rate. This increase enhances our ability to invest for growth.
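A quick check of the margin and cash flow figures above. Note that the quoted $1.7 billion and $5.8 billion endpoints are rounded, so the CAGR computed from them lands slightly under the stated 37%:

```python
def cagr(begin, end, years):
    """Compounded annual growth rate between two values."""
    return (end / begin) ** (1 / years) - 1

# Non-GAAP gross margin expansion, fiscal 2017 -> fiscal 2021, in basis points.
gm_bps = (65.6 - 59.2) * 100
print(f"gross margin expansion: {gm_bps:.0f} bps")   # 640 bps

# Operating cash flow CAGR from the rounded endpoints, $B.
ocf_cagr = cagr(1.7, 5.8, 4)
print(f"operating cash flow CAGR: {ocf_cagr:.1%}")   # ~36% on rounded figures
```

The 640 basis points matches the gross margin figures quoted in the previous paragraph exactly; the unrounded fiscal-year cash flow figures would be needed to reproduce 37% precisely.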
Our capital expenditures have increased from approximately $200 million in fiscal year 2017 to $1.1 billion in fiscal year 2021. Our cash generation has also allowed us to engage in transformational M&A, as seen with our purchase of Mellanox and the announced acquisition of Arm. We maintain a disciplined capital return policy, with $5 billion returned to shareholders in the form of dividends and share repurchases since fiscal 2017, and we remain committed to paying a dividend. We employ a conservative financial policy and have a healthy balance sheet with $11.6 billion in cash and marketable securities and $7 billion in debt. Overall, our business is highly cash generative, and we expect continued growth in our cash flow in the coming years. This will help fund our investments as we seek to take advantage of the growth opportunities ahead. We start our fiscal year with great momentum across our business. While fiscal Q1 is not yet complete, our Q1 total revenue is tracking above the $5.3 billion outlook provided during the fiscal Q4 earnings call. We are experiencing broad-based strength across all of our market platforms, driving upside to our initial outlook. Additionally, we now expect CMP revenue to be approximately $150 million, higher than the $50 million included in our fiscal Q1 outlook. Upside to CMP is not displacing supply from our other platforms; it is incremental. Within data center, we have good visibility, and we expect another strong year. Industries have been increasingly using AI to improve their products and services. We expect this will lead to consumption of our platforms through CSPs, resulting in more purchases as we go through the year. Our EGX platform has strong momentum, and we expect this will drive increased revenue from enterprise and edge computing deployments in the second half of the year.
Overall, demand remains very strong and continues to exceed supply, while our channel inventories remain quite lean. We expect demand to continue to exceed supply for much of this year. Our operations team is agile and executing fantastically, and we expect our supply will increase as the year progresses. We believe we will have sufficient supply to support sequential growth beyond Q1. Finally, I want to discuss our commitment to ESG. NVIDIA is committed to building one of the world's great companies through people, innovation, and energy-efficient technology. This means not only doing what's good for business, but what's good for our employees, our business partners, society at large, and the environment. We have been recognized as one of the best places to work by a number of publications, including Forbes, Fortune, and Glassdoor. No time better highlighted our innovations than this past year, when our technologies played an important role fighting the pandemic across a wide range of use cases. Our Clara platform helped with drug discovery efforts as humanity raced to develop a vaccine and other treatments. Our V100 and T4 GPUs helped scientists create the first atomic-scale map of the coronavirus. And our AGX platform helped keep healthcare and frontline workers safe with automated technologies that accomplished tasks without human intervention. Finally, our RTX and virtual GPU technologies helped the world work, learn, and play from home. We are also committed to investing in a safer environment. Our GPUs inherently provide a more energy-efficient form of computing, up to 42 times more efficient for processing AI workloads. The Green500 list ranks the world's most energy-efficient supercomputers. NVIDIA's own Selene supercomputer ranks number one, the most energy-efficient supercomputer in the world, and NVIDIA powers 26 of the most energy-efficient systems in the world. Finally, we are committed to sourcing 65% of our energy use from renewable sources. That wraps up our presentations for today.
I want to thank everyone for tuning in to our GTC and our presentations. We would now like to move to the Q&A portion of our event. So moving it back to you, Simona. Thank you very much, Colette. We will now begin the Q&A portion of our event with analysts who have joined us on Zoom. Our first question will come from John Pitzer from Credit Suisse. Please go ahead. Yes, good morning, guys. Thanks for letting me ask the questions. Congratulations on GTC and Analyst Day. Jensen, I just want to talk a little bit about the Grace announcement, and congratulations. I'm kind of curious, how do you see Grace adding to the $100 billion data center TAM that you guys discussed a year ago? And as part of your presentation, you reiterated your commitment to x86 and Arm, and actually mentioned multiple CPU players like Intel, AMD, Marvell, Graviton. How do you see the coopetition between your three-chip strategy, and Grace specifically, and the ecosystem that you still want to support? Yes, I'll work backwards. First of all, thanks for the question. The world's computing is really diverse. I mean, there are so many segments of computing, and each one of them is architected slightly differently for good reasons. Some of them are optimized for single-threaded performance, some are optimized for many cores, some are optimized for strong I/O performance, some are really optimized for large amounts of data. We designed Grace especially for a particular application, a range of applications if you will, that is really dedicated to giant-data-scale computing. You're working on terabytes and terabytes of data for a very long period of time. Recommender systems are part of this, natural language understanding models are part of this. In the future, you are going to see multimodal transformer models that are learning from video and speech at the same time, or images and speech at the same time.
And these are going to be processing just a gigantic amount of data; healthcare is processing a giant amount of data. And so we're really designing Grace for this particular segment. With respect to coopetition, we work very closely with AMD. As you know, the CPU in our DGX is a fantastic CPU. We work with Intel in enterprise data centers and in notebooks; we build amazing notebooks together. We work with Ampere Computing in cloud and cloud gaming, in supercomputing and scientific computing, with Marvell at the edge, with MediaTek for PCs and mobile devices. The world of computing is gigantic. And the nature of our company is an open platform. We are a platform company, and we use the word platform probably more than any chip company in the world. And we do so purposefully and exactly. This is really about building a platform by which the entire ecosystem can benefit. There is no sustainable growth that is not inclusive growth. There is no sustainable growth that doesn't include partners and collaborators and developers and ecosystems and such. And so we're delighted to partner with all of these great companies to build a future. So I appreciate that question. Thank you. Thank you very much, Jensen. Our next question will come from C.J. Muse. I guess, Jensen, to follow up on John's question, I was hoping you could speak a little bit more about your Arm CPU strategy. It looks like Grace in 2023 appears to be high performance, high bandwidth using NVLink, but I'm curious, as you look to 2025, Grace Next, and potentially beyond, what is your ultimate aim here looking over the next five, ten years? Yes, thanks, C.J. Well, first of all, on Grace Next, I'm not going to say anything that would spoil the surprise. I hate to ruin it for you. And there is so much stuff that we are working on.
Our strategy as a company, our core philosophy as a company, if you will, is really to do things that are unique for NVIDIA to do, things that are really, really hard to do. I prefer things that take a long time to do, and things that frankly nobody else in the world is doing. In every aspect of the conversation that we had at GTC, you could see that in everything, whether it's Omniverse or Grace or BlueField-3 or DOCA, in everything, Morpheus, our new cybersecurity platform, Hyperion 8. The world doesn't have these things. And that really needs to be the driving purpose of companies: to go solve problems that are incredibly hard, that are uniquely specialized for our capabilities, and that the world doesn't have. And so it is the nature of our company to go build CPUs that the world doesn't have, to build CPUs and build products that can somehow expand the envelope, expand the overall size of the marketplace for everybody. And so we are going to continue to work with a whole bunch of CPU partners, many of whom I have mentioned, and many, many more that are building all kinds of different specialized CPUs. We will build ours and, of course, support everybody else with the NVIDIA AI platform, the NVIDIA platform, so that we can bring forward this new method of computing we pioneered, called accelerated computing. That, if you will, is the highest-level bit, and everything else is really about expanding markets, expanding reach in a way that other people can't. Thank you, Jensen. Our next question will come from Vivek Arya from Bank of America. Please go ahead. Thank you, Simona, and thanks for the question. I appreciate the very informative Analyst Day. Jensen, I'm curious to get your strategy on the role that system and software and subscription sales will play at NVIDIA over the next several years, because we think of NVIDIA as being more of a semiconductor company, and obviously that's the key part of the business.
But then we also see you launch a number of system products, a number of them on subscription-type services, whether it's in the data center or enterprise or gaming. You're talking about subscription sales on the enterprise side and then the automotive side. I'm curious, what is the strategy here? How big are those businesses for you today? How big can they be over time, and what impact will they have on the financials of the company? Vivek, thanks for the question. The driving purpose for the full-stack approach is to pioneer accelerated computing. When you create a new form of computing, it is brand new, and it doesn't come along very often. If you take a look at the modern way of doing computing, it's running on CPUs. And then of course cloud computing came along, largely running on many CPUs, turning an entire data center into a computer. And the approach that we pioneered is accelerated computing. Accelerated computing is very different from accelerators. Our GPUs have video encoders inside, video decoders inside; those are accelerators. Image processors are accelerators. Accelerated computing is a general-purpose computing platform that is somehow particularly good at a domain of work. And an accelerated computing platform, and a company that is a computing platform company, is sensible about architecture compliance, sensible about backwards and forwards compatibility, thoughtful about creating an installed base and developing developers and ecosystems and networks of partners. And so our primary goal is to pioneer accelerated computing. Well, before you could do that, you really have to build a whole stack, because the entire way of computing is refactored. We've refactored computing from the application to the algorithms, to the solvers, to the system software, all the way down to the silicon, as you guys know well, and that's why we say NVIDIA is a full-stack company.
We're a computing company that's a full-stack company, because in the final analysis, that's essential to pioneer a new way of doing computing. Now, our approach, our strategy, is to develop it in three layers, which are the three layers of computing. There's the hardware layer. There's the system software layer, the middleware layer if you will; operating systems are in that layer, VMware for example is in that layer, NVIDIA AI is in that layer. Those are algorithms and solvers and middleware that connect, if you will, the application on top to the hardware on the bottom, and that transformation is very specific to accelerated computing, very specific to NVIDIA CUDA and our GPU architecture. And the third layer, of course, is applications. In the world of AI, the invention we created underneath was DGX and Tensor Core GPUs and such. In the middle is this layer called NVIDIA AI. And the upper layer would not be an application, but a skill, a skill that could perform a task. That task could be driving cars. That task could be recognizing speech and answering a question, a query. That task could be responding to a recommendation: what movie do you recommend? For example, if I click a movie, what's the next one you recommend? Or it recommends groceries as you are filling your cart. And so these are all skills that are sitting on top. And so if I think about it, and if I answer your question in the context of AI, then we have the chips on the bottom, we have NVIDIA AI in the middle, and we have the skills on top. Each of the higher layers increases NVIDIA's opportunity by an order of magnitude. Let me give you an example. There are 100 million cars sold a year, and that's the entire annual opportunity for chips: 100 million cars. However, those cars are driven 10 trillion miles.
And so whatever numbers you use for 100 million cars (if we said $1,000, to pick a random number for illustration), 100 million cars at $1,000 is a $100 billion opportunity. Yet at the driving level, at the task-of-driving level, 10 trillion miles a year at a dollar a mile is $10 trillion. And so that kind of gives you a sense of the economics involved. In the middle, the middleware: you sell a chip once. That's the economics of selling chips. However, in the world of middleware, you have to continuously refine it, enhance it, support the customers as they deliver their mission-critical applications and services. And so there is an ongoing support agreement in place to respond to customer needs and bug fixes and feature enhancements and maintaining long life, and all of those things are associated with enterprise software licensing. And they tend to live with the GPU for the entire term of its use. And so we now have an economic model that is about, of course, building the most advanced and selling the best chips. And the customers could very easily, and we are overjoyed that they do, use our SDKs and our programming model to develop their entire stack on top. Or they could use NVIDIA's AI stack and NVIDIA Omniverse and a whole bunch of other NVIDIA things like Base Command and Fleet Command and vGPUs. They could develop their own, or they can license ours with a maintenance fee for the length of their usage of our GPUs. And then on top of that, it's really skills and tasks, AI skills that perform tasks, and they tend to be priced per end user or per task or per instance. And I could imagine a day, Vivek, where an AI is paid by the hour. Just like a person with a particular skill, someone who performs a particular task, is paid by the hour, the AI of course will be paid by the hour. And so that's kind of the economic funnel of our company, and my expectation is that over time the layer on top will be the largest of all.
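The chips-versus-miles illustration above, using the placeholder prices from the remarks ($1,000 per car, $1 per mile, both explicitly arbitrary), works out as:

```python
cars_per_year = 100_000_000          # ~100M cars sold annually (quoted)
miles_per_year = 10_000_000_000_000  # ~10T miles driven annually (quoted)

# Bottom layer: selling a chip once per car, at an illustrative $1,000 each.
chip_tam = cars_per_year * 1_000
print(f"chip opportunity: ${chip_tam / 1e9:,.0f}B per year")           # $100B

# Top layer: monetizing the driving task itself, at an illustrative $1/mile.
task_tam = miles_per_year * 1
print(f"driving-task opportunity: ${task_tam / 1e12:,.0f}T per year")  # $10T
```

The two orders of magnitude between $100 billion and $10 trillion is the point of the example: each layer up the stack prices against usage rather than units.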
But it's built on top of the next two layers. It's enabled by the next two layers. And we will offer each one of the layers openly to the industry so that all of our partners can benefit from our knowledge and our skills in developing the capability, and be able to build their own if they like. Thank you. Our next question comes from Stacy Rasgon from Bernstein. Please go ahead with your question. Hi guys, thanks for taking my questions. I have a question for Colette. The uplift in the outlook for the current quarter: given you suggested you are still supply constrained, is it proper to take that almost purely as a function of better-than-expected increased supply from the last earnings call until now? And I guess, if that's true, can we take that pace as a likely indicator of the continued pace of supply increases as we go through the year? And then finally, given that, do you expect supply to improve similarly in all the businesses, and hence do you expect all of your business segments to be growing sequentially through the rest of the year? Sure. Let me see if I can answer your questions. So yes, we are planning on being higher than where we set out at the very beginning of the quarter with our overall guidance. As we discussed at earnings last quarter, we will see supply continue to increase throughout this quarter as well as throughout the year. So it's really a statement of having the supply at the right time for the demand that we're seeing today. We will continue to work that all year long. Our operations team is very, very focused, not only on just what we need; supply is going to be coming in every single quarter, and that is aiding us as we look at the growth for the rest of the year. As we talked about in our prepared remarks, we really see growth even past Q1, as we do expect supply to be there for the full year. Thank you. Our next question is from Matt Ramsay from Cowen.
Jensen, I wanted to ask a question on your auto business, and it kind of dovetails with the three-layer strategy that you described a second ago. I think you guys talked about an $8 billion pipeline. I would think you should be getting visibility now, or sometime soon, into hardware sales into some of these OEM platforms. So maybe you could walk me through and break down that $8 billion pipeline a little bit between confirmed hardware sales, maybe software and services sales, and then what you guys are doing with Constellation and some of the simulation services. We cut it off at 2027, at fiscal 2027, I guess for an arbitrary reason. The design wins of cars tend to last a very long time. And the architecture selection will tend to last probably even longer. The car industry thinks about architectures on a decade timescale. But what is new to the automotive industry is that these computers (and this is very new, something unique to the car industry, unlike any computing platform in the world), these computers on wheels, these data centers on wheels, are going to last in the marketplace for a couple of decades, 15 to 20 years. And electric cars are going to last a long time. And the industry is starting to think like a technology industry, recognizing that these cars are not just vehicles that they sold, but that they're now part of their installed base, part of their fleet, and that the installed base, if programmable, could offer them a couple of decades of service offering opportunities. They are starting to think like cable providers, infrastructure providers, cable set-top box providers; they're starting to think like that, like phone providers. And the installed base belongs to them. They made the effort to create the cars, and the installed base is proprietary to them. So they have to think along those lines. If you look beyond 2027, the answer is yes.
The pipeline goes much deeper than that, but we simply cut it off at 2027. And I think the plan is that we would come back and give you guys an update every single year, and hopefully that pipeline continues to grow. Because it's growing kind of back-end loaded, my expectation is that we should see our auto business grow quite fast, our pipeline grow quite fast, for some time to come. This is really one of the largest industries in the world. By the time that we ship, we will have been investing for a decade. The arsenal of technology that we are bringing to bear to enable this industry, both for our own service and also creating platforms for the rest of the industry, is really quite significant, and it leverages basically all of the might of our company. And we've been focused on it for coming up on seven years. And so I think this is a really exciting opportunity for us. You are right that we can benefit from top to bottom and end to end. Top to bottom meaning, in the case that we could offer the full service and operate it with our partners, we'll get the benefit of the service. In the case that a car company would like to provide their own service and they feel that they could develop their own stack, we sell them a great chip. And these chips are getting more and more powerful, because they would like that fleet to be programmable, richly programmable, for valued services for decades to come, and so they want to put as much technology into it upfront as possible. And end to end in the sense that we have Hyperion 8, which is a reference car computer, and I think it will prove to be quite profound in its ability to positively impact the ecosystem and allow people to accelerate their developments, just like the PC ATX motherboard did, which was a deeply profound innovation.
And then DGX for training models, Constellation for simulation, Omniverse for simulation, all the way, of course, to our DRIVE computers that go into the cars. And we also announced that the same computer, the same architecture, could be used for driving, for AV, but also for in-car AI, now integrating four computers into one. This is a very big deal. What used to be multiple ECUs sprinkled all over the car really needs to get unified. And it needs to be something that is like a data center: virtualized, software-defined, but fully offloaded and accelerated, like the things that we talk about in data centers. And so it will be architected like a data center, it will be built like a data center, and it will be operated like a data center. So we have a lot of different ways to engage the car industry. We've been talking about the car business for quite a long time; you've heard me talk about it now for probably about five years. And Orin, the generation coming up, Orin is just a complete home run. Because we stuck with it, we stuck with what we believed in, and it turned out to be right: the car industry really needs to be a technology industry. The car is not just a car anymore; it's a car within an installed base, a fleet. That fleet is going to be managed like a data center, software-defined. And when you manage that fleet and you grow that installed base, it's going to create a gigantic installed base of future value creation for the carmakers. And I think all of these pieces are really coming together. Thank you. Our next question, go ahead. Let me see if I can add more to that. So as discussed, we've been probably more than seven years in the making to get to this point, and we're talking about our pipeline going forward; that's six years, out to 2027. 2027 is about the last year that we provided there.
It provides for the established OEMs, our electric vehicle makers, our trucking and robotaxi companies, and we probably expect to see a revenue inflection point somewhere in the timeframe of calendar 2023 to 2024. To your earlier comment regarding other types of things that we may provide them in terms of our stack, helping them in the data center, helping them in their development process, again, that's in our data center revenue. So this pipeline out to 2027 is really just about our automotive revenue. Thank you. Our next question comes from Tim Arcuri at UBS. Please go ahead with your question. Hi, thanks. I also had a question on the Grace roadmap, and sort of how, or even if, you democratize that. When you think about that roadmap, is it decoupled in any way or monetized separately? I mean, certainly the propensity is to prioritize your own roadmap, but I'm wondering sort of how you thread that needle, Jensen. And then I guess, Colette, is it maybe reasonable to just take the entire $25 billion server CPU TAM and tack that onto the $100 billion by 2024 TAM for the company that you gave at the last analyst meeting? Thanks. We are going to offer CPUs and technology to our ecosystem partners in the form that best suits them. Today, we put our GPUs into SXM modules that are then put into a 4-way or 8-way HGX carrier board, a GPU board. We also sell our GPUs individually so that they can build their bespoke servers. There are a lot of different configurations of GPU servers available now. In the case of the EGX launch, we worked with 15 of the world's largest computer makers to build 55 different configurations, and there will be more coming. And so you can see how many different configurations of servers there are: 1U and 2U and 3U and 4U, blades and otherwise, liquid-cooled and HPC versions. There are just all kinds of different configurations of servers.
And so we'll offer Grace as an integrated part of our DGX, and we'll offer Grace separately to OEMs. And then when the deal closes, we'll continue to license Arm openly to the entire industry, because there are so many different versions of CPUs that could get built, and we would love to have every single version built. And to us, we can add the NVIDIA architecture to it, whether it's our CPU or somebody else's CPU. So long as the market is being created, we can add the NVIDIA architecture to it: the GPU, the DPU, CUDA, DOCA, NVIDIA AI, NVIDIA Omniverse, all of our AI stacks on top, and the AI skills on top. Look, our economics are just so much broader, so much richer, and so much larger when we have more people around the world and the ecosystem supporting our architecture. So that, I think, is the most significant bit. And then, everything else: we want to create products that the world doesn't have, that expand our TAM, that expand the market's TAM, and then, after that, support the customers as best they would like to be supported. Yes, so Tim, regarding your question about how to look at this opportunity going forward: Grace is definitely for our giant HPC and AI use cases. And when we think about our TAM opportunity going forward, it is breadth and depth growth from a lot of the things that you've seen us talk about today. We've talked about incorporating Grace inside many of the different systems that we will have. So, of course, it's an opportunity, but very hard to quantify at this time. Our next question will come from Aaron Rakers from Wells Fargo. Yes, thanks. Great presentations today, and congrats on the product announcements. Colette, I wanted to ask you about the commentary around data center. You used the term, again, good visibility. So I'm curious how you would characterize the visibility today relative to what it was, let's say, a year ago, when you started to see the strength emerge in data center. 
And also on top of that, when do we think about the DPUs, the BlueField products, starting to become meaningful? Sure. So let's first start with a year ago; again, it was still quite a different business at that time versus what we're seeing today. We have incorporated Mellanox into our overall stack, and we continue to work on building products across overall Mellanox as well as full systems. In terms of the work that we're doing, we started off many years ago focused on hyperscalers; hyperscalers moved to cloud instances. As you can see, we're just touching the opportunity for enterprise and the enterprise edge. What this does is there's just a meaningful amount of opportunity in many different facets for us to grow the overall data center. So our overall visibility is good, and we look at it as an opportunity to expand our types of customers in many different ways: with hyperscalers, with OEMs, and with enterprises. Therefore, we consider this to be a vast opportunity in front of us, and we feel good about the growth that we will likely see. Our next question comes from Brett Simpson. Please go ahead with your question. Yeah, thanks very much. I had a question on the software and services strategy, Jensen. So you laid out the business model for DRIVE with the Mercedes agreement. Can you share with us how NVIDIA might monetize your other AI software stacks as they come out of beta, things like Jarvis and Omniverse? I think you touched on those, but just curious how we should think about the monetization and the timeline around that ramping up. And then just on the cloud side of things, do you see NVIDIA more as a competitor to the public cloud players in AI, where you host services directly, or would you expect the hyperscalers to also license these software stacks you're presenting today? Thanks. I'll go backwards. 
All of our stacks are cloud native, everything from Omniverse to, of course, GeForce NOW to Clara to Jarvis to Metropolis to Maxine to Merlin, and they're used by the cloud service providers. We've announced already many, many instances where the cloud service providers are using our libraries to provision their own services to deliver higher throughput. And so we run everything cloud native. Where we can offer unique value added is embedded, at the edge, and on prem, where someone needs state-of-the-art models that are trained incredibly well. That's what they call pre-trained. And that is treated like a production model, not like a demo, with a company behind it with the dedication to continuously improve it for as long as they shall live. And we provide a suite, as I mentioned, some fundamental suites from sight to language, to speech, to understanding, and so on, animation and so on, with tools that allow you to customize it and adapt it to your own domains, and then run it basically anywhere. You can run it in any cloud, you can run it on prem, you can run it in a robot, you can run it in your car: one architecture that spans it all. And so the way that we'll offer these will largely be embedded software. So think of it much more as intellectual property that will be embedded into systems. And so those are the skills. Now, some skills you have to operate; there is no other way to do it but to operate it. In those cases, we'll have a slightly different model. For example, in the case of Mercedes-Benz, we'll have to operate that service for basically decades. And so we'll create a sharing business model like we've described with Mercedes-Benz. And in the case of Omniverse, it will be licensed as a server. Think of it as a server: there's a server component and then there's a user component to it. For communities and individual users, it will be free. 
It's already in open beta, it's already available, and people are using it all over the world doing amazing things. And so for researchers, for individuals, for people who are playing with it, it's all free. For people who are utilizing it to operate a service, for example operating a digital factory, or operating a large-scale 5G network that it's simulating and optimizing in real time, those will have a server component and then a per-user component to it. And so for each one of the software licenses, there is no one model that cuts across everything. It just depends. And you should always come at it from our first principles: what is the best way to deliver the value to the customers? But we care deeply about software in our company, and we're going to continue to innovate in this area, so the business opportunity is quite large. Thank you. Our next question will come from Will Stein from Truist. Please go ahead with your question. Great. Thank you very much for taking my question. Jensen, congratulations on all the exciting announcements today. A few years ago, you surprised us with a strong entry into the inference category, one that you previously hadn't played as strongly in, with the announcement of the T4 chip. Today, you've discussed this Triton inference server. I was expecting perhaps an announcement bringing T4 up to the current Ampere architecture. Is that essentially what this announcement is about, or is it much more, or different? Any explanation would be helpful. Thank you. Yes, Will, thanks a lot for that. I should have been more clear. So first of all, a comment about inference. People who guess the most, which is what inferencing is, guessing, very informed guessing, making predictions, people who guess the most frequently learn the most. And so I think inferencing has always been very core to our strategy, and we've always believed it to be very hard, and it's proven to be even harder than that. 
And so I think inference is just such a great opportunity, and we love it, because we like hard things that, when achieved, could make a real impact. Triton is an inference server. It runs at data center scale, and there are so many people using it. It supports CPUs and GPUs, and it supports every generation of our GPUs. And so if you imagine a hyperscale data center, it's got all kinds of computer chips inside, and this is the only inference server that's optimized for all of them. Basically, there are two types, and that covers 99%: x86 CPUs and NVIDIA GPUs, going all the way back to Kepler. And so there are six generations of GPUs and all the different versions of them, and we support every single one, and it's optimized for every one of them. We'll support them for as long as we shall live, and that's the advantage of Triton. It's also open source, so that the CSPs can make bespoke versions and derivatives as they like. And so we are seeing incredible success with Triton. The version of T4 that's based on Ampere we announced at GTC. It's called A30. A30 is our brand-new GPU that is designed for, if you will, mainstream use. And it's designed for enterprise servers, cloud computers, and we are super excited about A30. It's derived from A100, of course, and it's not as powerful, but it fits into a much lower energy or power envelope, so that you can deploy it very broadly. Okay, so that's inference: Triton, and A30 as the mainstream version of T4. Thank you. Our next question comes from Blayne Curtis from Barclays. Please go ahead with your question. Hey, thanks for taking my question, and thanks for the day. Just following back on a question on BlueField: you announced BlueField 2 at last GTC. Just wondering if you could provide clarity or color as to the traction you've gotten over the last year. I don't think you've ever broken out how big that is, but any color on the roadmap and the traction with BlueField 2 would be great. 
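The Triton description above, one inference server front end optimized for both x86 CPUs and every NVIDIA GPU generation back to Kepler, can be illustrated with a toy sketch. To be clear, this is not Triton's actual API; every class, function, and model name below is hypothetical, and the "backends" are stand-in functions rather than real kernels.

```python
# Toy illustration (not Triton's real API): one front end dispatching
# requests across heterogeneous backends, the way the talk describes a
# single inference server covering x86 CPUs and several GPU generations.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Request:
    model: str
    inputs: List[float]


class ToyInferenceServer:
    """Routes every request through one front end, picking whichever
    backend (CPU or GPU generation) hosts the requested model."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[List[float]], List[float]]] = {}
        self._placement: Dict[str, str] = {}  # model name -> backend name

    def register_backend(self, name, fn) -> None:
        self._backends[name] = fn

    def load_model(self, model: str, backend: str) -> None:
        self._placement[model] = backend

    def infer(self, req: Request) -> List[float]:
        backend = self._placement[req.model]
        return self._backends[backend](req.inputs)


# Two stand-in backends: a "CPU" path and a "GPU" path. Here both just
# double the inputs; a real server would run actual optimized kernels.
server = ToyInferenceServer()
server.register_backend("x86_cpu", lambda xs: [2.0 * x for x in xs])
server.register_backend("gpu_kepler", lambda xs: [2.0 * x for x in xs])
server.load_model("classifier_cpu", "x86_cpu")
server.load_model("classifier_gpu", "gpu_kepler")

print(server.infer(Request("classifier_cpu", [1.0, 2.0])))  # [2.0, 4.0]
```

The point of the sketch is the shape, not the math: clients talk to one server regardless of which hardware the model landed on, which is the property the transcript attributes to Triton.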
BlueField 2 is a programmable device, and it needs a rich software stack on top of it, just as our GPUs need CUDA in order to be useful. And on top of CUDA, you need a whole bunch of other libraries; we call them the CUDA-X libraries, acceleration libraries. All of that has been put together, in the case of BlueField, in what's called DOCA 1.0, and we just released it. We just released the software. So now DOCA 1.0 and BlueField 2 are ready for production. BlueField 3 is right behind, on its heels. And you can tell how excited I am about this area, because, very simplistically, a data center is becoming software-defined. And you know that well, but what that means is that it takes networking, storage, virtualization, and now, really, really importantly, cybersecurity, and it puts it in software, and it runs on the CPU. The software-defined data center stack is now overloading the CPU, not to mention it's weird to have the control plane, the security plane, and the agents all running commingled with the application, which could be the intruder. And so the right answer really is to isolate it. That's very, very clear, and everybody agrees with that. To isolate that, offload that, and, very importantly, accelerate the workload, so that you can take it off of the CPUs, which are ultimately what the data center is for: to host applications. And one of the things that we've demonstrated is that BlueField can offload a significant portion of that work. And the reason for that is because of zero-trust security models, where every single transaction is going to be monitored. And so I am incredibly excited about this new strategy. BlueField 2 is really the world's first version of it, and DOCA 1.0 is the first version of it. And so we're in the process of building that. We have dev kits out to all the major companies in the world, you could just imagine, and you saw the IT companies that are all part of our launch. And so I think this is going to be a giant business. Here is my prediction, and I predicted this. 
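The offload argument above, that the software-defined infrastructure stack is consuming host CPU cycles a BlueField-class DPU can reclaim for applications, can be sketched with toy arithmetic. The core counts below are made-up illustrations, not NVIDIA measurements.

```python
# Toy back-of-envelope sketch with hypothetical numbers: a software-defined
# data center runs networking, storage, virtualization, and security in
# software on the host CPU. Offloading those to a DPU returns the cores to
# the applications the data center exists to host.
HOST_CORES = 64

# Hypothetical per-service core cost when run on the host CPU.
infrastructure_services = {
    "networking": 8,
    "storage": 6,
    "virtualization": 4,
    "security_monitoring": 12,  # zero trust: inspect every transaction
}


def cores_left_for_apps(offloaded: bool) -> int:
    """Cores remaining for applications, with or without DPU offload."""
    overhead = 0 if offloaded else sum(infrastructure_services.values())
    return HOST_CORES - overhead


print(cores_left_for_apps(offloaded=False))  # 34
print(cores_left_for_apps(offloaded=True))   # 64
```

Under these assumed numbers, offload also isolates the control and security planes from the applications, which is the other half of the argument in the transcript.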
I made a prediction about 20 years ago that every single computer would have a GPU in it, and that came true. And I predict that every single data center will have devices, computing platforms like BlueField, that we call data center infrastructure processing platforms; that every single data center, every single computer, will have a BlueField-like device in the future. And I think it will take less than five years, because of the importance of cybersecurity that is top of mind for everybody today. Thank you. Our next question comes from Harlan Sur from JPMorgan. Please go ahead with your question. Good morning, and it's exciting to be here for another GTC. And it's actually great to see the NVIDIA accelerated compute architecture and ecosystem move into new opportunities like 5G. Now, as we all know, the move to 5G opens up the opportunity for broadband wireless connectivity to more than just smartphones. So is EGX with the Aerial A100 platform in trial, or planning to be trialed, by any of these large telco or greenfield networks with cloud-native Open RAN or virtual RAN? And how big is this opportunity for NVIDIA over the next three to five years? Yes, I'm excited. I'm just about as excited as you could imagine about 5G, and the reason for that is particularly private 5G. I think commercial 5G, consumer 5G, is really fantastic, and it will incrementally, over time, increase the capacity of broadband and, very importantly, continue to drive down the cost of broadband, because what goes along with a lot of capacity is a decrease in cost. And so I think consumer and commercial 5G is really fantastic, but what's brand new is private 5G: all these spectrums that companies can license to operate a wireless network that is secure and private, in farms and entire factories. And as you know, these factories are gigantic. The vast majority of the earth's livable space is covered by farms and infrastructure. 
And finally, we have the networking available to deliver computation services out to the edge. And the computation service that I think is most exciting is sensors connected to AI applications, AI skills that are monitoring and predicting, making sense of the world. And 5G makes it possible. You can install all kinds of cameras now with 5G: a little tiny 5G modem that is light in weight and data rate, low in power, very low cost, and you can just bring up cameras and sensors of all kinds all over the place. As soon as you connect it, you put it up, connect the power, it's on the network. And so the ability to outfit infrastructure, factories, farms, roads, cities, is so much easier with 5G. So I think that, at the foundational layer, 5G is the big catalyst for industrial AI to really get out to the edge. The second thing: we need it to be software-defined, just like modern data centers are. If you can make it software-defined, it is orchestratable. You can disaggregate it, meaning you can take computing power and shift it from place to place across the network to focus the computation on where the workload is. And all of that is possible if you make it software-defined, and that's what NVIDIA has done with Aerial 5G and A100. There are features in it that enable much better 5G, and then, with BlueField, we made it possible to deliver a whole 5G base station inside a standard enterprise server. And it's a state-of-the-art 5G base station, 20 gigahertz; it supports up to 900 megahertz, 64x64 massive MIMO radios, and so this is an amazing 5G base station. You saw the announcement of all the partners who are working with us, and we're all racing to get the full stack out there. Fujitsu is working with us, Mavenir is working with us, Ericsson is working with us, of course, as we've announced, and many, many more will come. 
And the reason for that is because they need a programmable, software-defined 5G base station that is connected to the AI edge, and that's what EGX is all about. Thank you. And our last question will come from Atif Malik from Citi. Please go ahead. Thank you for taking my questions and squeezing me in. Colette, in your updated Q1 guidance, is gaming still the biggest sequential driver? And then, when you spoke about EGX gaining rapid adoption, when do you expect to recognize software revenues from the NVIDIA AI Enterprise license model? Thank you. Yes. So thanks for the question regarding our outlook for the quarter. Gaming will still be a material part of our growth. Then, looking additionally at your question on software opportunities and timing: there are many different ways that we offer software, including incorporating it with OEMs, through our channel, through our enterprise channel, and also as part of our systems. And then keep in mind there's still an opportunity with automotive going forward, working directly with those OEMs. So, over time, we believe this will continue to grow, as it is incorporated, of course, in many of our platforms today. Looking at it as a separate offering is new and something that we'll see in the opportunities going forward. Thank you. This is all the time we have for Q&A today. I will now turn the call back over to Jensen for closing comments. Thank you, and thank you for joining us today. I hope you all enjoyed GTC. It's the endeavor of multiple years of development inside the company. Some of it we developed in plain sight, with a vision to pull it together in just this way today. It is the hard work and the genius of the 20,000 employees at NVIDIA, and I'm so grateful for everything that they've done, working so hard and so diligently towards this vision. 
The strategies that we announced today, the products that we announced, the partnerships that we announced, and the new markets that we announced are all targeting, focusing on, and enabling the forces that are shaping our industry. We spoke about accelerated computing and the groundbreaking work that we're doing in healthcare and even quantum computing. We spoke a lot about AI. AI is not just algorithms. AI is a fundamental way of doing software, and it has transformed computing completely. And I used self-driving cars as an example to illustrate how AI and machine learning change how you should think about developing software and, as a result, even profoundly change how you think about your products. What used to be a car is now part of an installed base. And we talked about the data center, how the entire data center is going to be programmed like one computer. What a miracle it is that technology has reached a point where it's possible for one software engineer to write an application that scales across hundreds of thousands of servers. It's a shocking realization. And networking technology made it possible. It is what's driving Mellanox's high-speed and low-latency networking. It transformed how data centers ought to be architected, software-defined data centers, and made possible a brand-new type of computing platform where the computing is done in the fabric, otherwise known as data center infrastructure. The data center is now a new unit of computing, and it has profound implications for how we architect data centers going forward, not just in the data center proper as we see it. But remember, everything is going to be a data center. The 5G base station is going to be a data center. The car is going to be a data center on wheels. They're going to be architected the same way, they will be highly secure in the same way, and we'll be able to orchestrate these computers in the same way, using very similar approaches. 5G: I'm incredibly excited about the convergence of 5G and AI. 
And so now we can deliver AI on 5G. What a transformative time it is for the industries. We used transportation as a great, easy example. And notice how it's possible to apply the combinations of technologies I've mentioned to turn your automotive fleet into your installed base, and then to think like cable network providers, like smartphone network providers, basically like network providers. And with AI on 5G running on EGX, we would like to do that for every industry: for healthcare, for warehouses, for logistics, where companies and enterprises in these industries could reinvent their business models and their products and services and enjoy their smartphone moment, for every industry to become a technology industry, from agriculture to warehouses. And then, lastly, the ultimate form of AI, if you will, is autonomous systems. And remember that autonomous systems, in our imagination, are not just physical autonomous systems; there are going to be virtual autonomous systems. There will probably be a million times more robots inside virtual worlds than there are in physical worlds, and there are going to be tons of robots that are autonomous someday. Self-driving cars, of course, are the first example of them, but that's just the first example. And we have the AI technology that is being developed, and we have machine learning practices and skills, and all of the things that we are doing will lead up to being able to deploy robotics into the physical world. But they will have digital twins, and those digital twins will help them learn. And those digital twins will be synchronized to the physical versions, and they'll exist in another virtual world that is physically and photorealistically like ours. It behaves like ours, but it's completely simulated. And this virtual world and these digital twins will make up some of the metaverse that people have been talking about, particularly for consumer applications. 
But the physical version, the industrial version of them, will be called digital twins. And it's going to enable robots to learn how to be robots, to work with each other, and to simulate factories. The work that we do with BMW hopefully activated some of your imagination there, but there is so much more. These are just powerful forces, and we're excited to be part of them. Thanks for joining us today at our keynote and also at our analyst meeting, to hear us talk about the work that we do, that we're so passionate about. And with that, I wish you all well.