NVIDIA Corporation (NVDA)

Earnings Call: Q1 2021

May 21, 2020

Good afternoon. My name is Josh, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's financial results conference call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question-and-answer session. Thank you. Simona Jankowski, you may begin your conference.

Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the first quarter of fiscal 2021. With me on the call today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2021. The content of today's call is NVIDIA's property. It cannot be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may vary materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, May 21, 2020, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Jensen.

Thanks, Simona.
Before Colette describes our quarterly results, I'd like to thank those who are on the front lines of this crisis: first responders, healthcare workers and service providers, who inspire us every day with their bravery and selflessness. I also want to acknowledge the incredible efforts of our colleagues here at NVIDIA. Despite many challenges, they have barely broken stride during one of the busiest periods in our history. Our efforts related to the virus are focused in three areas. First, we're taking care of our families and communities. We've pulled in raises by six months to put more money in our employees' hands, and NVIDIA and our people have donated thus far more than $10 million to those in need. Second, we're using NVIDIA's unique capabilities to fight the virus. A great deal of the science being done on COVID-19 uses NVIDIA technology for acceleration when every second counts. Some of the many examples include sequencing the virus, analyzing drug candidates, imaging the virus at molecular resolution with cryo-electron microscopy and identifying elevated body temperature with AI cameras. And third, because COVID-19 won't be the last killer virus, we need to be ready for the next outbreak. NVIDIA technology is essential for the scientific community to develop an end-to-end computational defense system, a system that can detect early, accelerate the development of a vaccine, contain the spread of disease and continuously test and monitor. We are racing to deploy the NVIDIA Clara computational healthcare platforms. Clara Parabricks can accelerate genomics analysis from days to minutes. Clara Imaging will continue to partner with leading research institutes to develop state-of-the-art AI models to detect infections. Clara Guardian will connect AI to cameras and microphones in hospitals to help overloaded staff watch over patients. We completed the acquisition of Mellanox on April 27.
Mellanox is now NVIDIA's networking brand and business unit and will be reported as part of our data center market platform. And Israel is now one of NVIDIA's major technology centers. The new NVIDIA has a much larger footprint in data center computing, end-to-end and full-stack expertise in data center architectures, and tremendous scale to accelerate innovation. NVIDIA and Mellanox are a perfect combination and position us for the major forces shaping the IT industry today: data center-scale computing and AI. From microservices-based cloud applications to machine learning and AI, accelerated computing and high-performance networking are critical to modern data centers. Previously, a CPU compute node was the unit of computing. Going forward, the new unit of computing is an entire data center. The basic computing elements are now storage servers, CPU servers and GPU servers, and they are composed and orchestrated by hyperscale applications serving millions of users simultaneously. Connecting these computing elements together is high-performance Mellanox networking. This is the era of data center-scale computing, and together, NVIDIA and Mellanox can architect it end to end. Mellanox is an extraordinary company, and I'm thrilled that we're now one force to invent the future together. Now let me turn the call over to Colette.

Thanks, Jensen. Against the backdrop of the extraordinary events unfolding around the globe, we had a very strong quarter. Q1 revenue was $3.08 billion, up 39% year on year, down 1% sequentially and slightly ahead of our outlook, reflecting upside in our data center and gaming platforms. Starting with gaming, revenue of $1.34 billion was up 27% year on year and down 10% sequentially. We are pleased with these results, which exceeded expectations in a quarter marked by the unprecedented challenge of COVID-19. Let me give you some color.
Early in Q1, as the epidemic unfolded, demand in China was impacted, with iCafes closing for an extended period. As the virus spread globally, much of the world started working and learning from home, and gameplay surged. Globally, we have seen a 50% rise in gaming hours played on our GeForce platform, driven both by more people playing and more gameplay per user. With many retail outlets closed, demand for our products has shifted quite efficiently to e-tail channels globally. Gaming laptop revenue accelerated to its fastest year-on-year growth in six quarters. We are working with our OEM and channel partners to meet the growing needs of the professionals and students engaged in working, learning and playing at home. In early April, our global OEM partners announced a record 100 new NVIDIA GeForce-powered laptops, with availability starting in Q1 and most to ship in Q2. These laptops are the first to use our high-end GeForce RTX 2080 Super and 2070 Super GPUs, which have been available for desktops since last summer. In addition, OEMs are bringing to market laptops based on the RTX 2060 GPU at just $999, a price point that enables a larger audience to take advantage of the power and features of RTX, including its unique ray-tracing and AI capabilities. These launches are well timed as mobile and remote computing needs accelerate. The global rise in gaming also lifted sales of the NVIDIA-powered Nintendo Switch and our console business, driving strong growth both sequentially and year over year. We collaborated with Microsoft and Mojang to bring RTX ray tracing to Minecraft, the world's most popular game, with over 100 million monthly gamers and over 100 billion total views on YouTube. Minecraft with RTX looks astounding, with realistic shadows and reflections, light that refracts and scatters through surfaces, and naturalistic effects like fog. Reviews for it are off the charts. Ars Technica called it a jaw-dropping stunner, and PC World said it was glorious to behold.
Our RTX technology stands apart, not only with our two-year lead in ray tracing, but with its use of AI to speed up and enhance games using the Tensor Core silicon on our RTX-class GPUs. We introduced the next version of our AI algorithm called deep learning super sampling. In real time, DLSS 2.0 can fill in the missing pixels of every frame, doubling performance. It represents a major step function from the original, and it can be trained on non-game-specific images, making it universal and easy to implement. The value and momentum of our RTX GPUs continue to grow. We have a significant upgrade opportunity over the next year with the rising tide of RTX-enabled games, including major blockbusters like Minecraft and Cyberpunk. Let me also touch on our game streaming service, GFN, which exited beta this quarter. It gives gamers access to more than 650 games, with another 1,500 in line to get onboarded. These include Epic Games' Fortnite, which is the most played game on GFN, and other popular titles such as Control and Destiny 2, with League of Legends coming in the fall. Since launching in February, GFN has added 2 million users around the world, with both sign-ups and hours of gameplay boosted by stay-at-home measures. GFN expands our market reach to the billions of gamers with underpowered devices. It is the most publisher-friendly, developer-friendly game streaming service, with the greatest number of games and the only one that supports ray tracing. Moving to pro visualization. Revenue was $307 million, up 15% year on year and down 7% sequentially. Year-on-year revenue growth accelerated in Q1, driven by laptop workstations and Turing adoption. We are seeing continued momentum in our ecosystem for RTX ray tracing. We now have RTX support for all major rendering, visualization and design software packages, including Autodesk Maya, Dassault's CATIA, Pixar's RenderMan, Chaos Group's V-Ray and many others.
Autodesk has announced that the latest release of VRED, its automotive 3D visualization software, supports NVIDIA RTX GPUs. This enables designers to take advantage of RTX to produce more lifelike designs in a fraction of the time versus CPU-based systems. Over 45 leading creative and design applications now take advantage of RTX, driving a sustained upgrade opportunity for Quadro-powered systems while also expanding their reach. We see strong demand in verticals including healthcare, media and entertainment, and higher education, among others. Higher healthcare demand was fueled in part by COVID-19-related research at Siemens, Oxford and Caption Health. Caption Health received FDA clearance for an update to its AI-guided ultrasound, making it easier to perform diagnostic-quality cardiac ultrasounds. And in media and entertainment, demand increased as companies like Disney deployed remote workforce initiatives. Turning to automotive and robotics autonomous machines. Automotive revenue was $155 million, down 7% year on year and down 5% sequentially. The automotive industry is seeing a significant impact from the pandemic, and we expect that to affect our revenue in the second quarter as well, likely declining about 40% from Q1. Despite the near-term challenges, our important work continues. We believe that every machine that moves will someday have autonomous capabilities. During the quarter, XPeng introduced the P7, an all-electric sports sedan with innovative Level 3 automated driving features powered by the NVIDIA DRIVE AGX Xavier AI compute platform. Our open, programmable, software-defined platform enables XPeng to run its proprietary software while also delivering over-the-air updates for new driving features and capabilities. Production deliveries of the P7 with NVIDIA DRIVE begin next month. Our Ampere architecture will power our next-generation NVIDIA DRIVE platform, called Orin, delivering more than 6x the performance of Xavier solutions and 4x better power efficiency.
With Ampere's scalability, the DRIVE platform will extend from driverless robotaxis all the way down to in-windshield driver-assistance systems sipping just a few watts of power. Customers appreciate the top-to-bottom platform, all based on a single architecture, letting them build one software-defined platform for every vehicle in their fleet. Lastly, in the area of robotics, we announced that BMW Group has selected the new NVIDIA Isaac robotics platform to automate its factories, utilizing logistics robots built on advanced AI computing and visualization technologies. Turning to data center. Quarterly revenue was a record $1.14 billion, up 80% year on year and up 18% sequentially, crossing the $1 billion mark for the first time. Announced last week, the A100 is the first Ampere-architecture GPU. Although just announced, the A100 is in full production; it contributed meaningfully to Q1 revenue, and demand is strong. Overall, data center demand was solid throughout the quarter. It was also broad-based across hyperscale and vertical industry customers, as well as across workloads including training, inference and high-performance computing. We continue to have solid visibility into Q2. The A100 offers the largest leap in performance to date over our eight generations of GPUs, boosting performance by up to 20x over its predecessor. It is exceptionally versatile, serving as a universal accelerator for the most important high-performance workloads, including AI training and inference as well as data analytics, scientific computing and cloud graphics. Beyond its leap in performance and versatility, the A100 introduces new elastic computing technologies that make it possible to bring right-sized computing power to every job. A multi-instance GPU capability allows each A100 to be partitioned into as many as seven smaller GPU instances. Conversely, multiple A100s interconnected by our third-generation NVLink can operate as one giant GPU for ever-larger training tasks.
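The elastic-computing arithmetic described above can be sketched in a few lines. This is a purely illustrative back-of-the-envelope calculation, not NVIDIA tooling; the function name and constant are assumptions made for the example.

```python
# Illustrative sketch of the A100 elastic-partitioning arithmetic (not NVIDIA code).
# Per the remarks above: each A100 can be split into as many as 7 multi-instance
# GPU (MIG) instances, while NVLink lets multiple A100s act as one larger GPU.

MIG_INSTANCES_PER_A100 = 7  # maximum smaller instances per A100

def total_instances(num_gpus: int, per_gpu: int = MIG_INSTANCES_PER_A100) -> int:
    """Maximum number of independent GPU instances across num_gpus A100s."""
    return num_gpus * per_gpu

# An 8-GPU A100 server therefore spans the range from one pooled "giant" GPU
# up to 8 * 7 = 56 small, right-sized instances.
print(total_instances(8))  # -> 56
```

This is the same arithmetic behind the 1-to-56 configurability of the DGX A100 mentioned below: eight GPUs, each partitionable up to seven ways.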
This makes the A100 ideal for both training and inference. The A100 will be deployed by the world's leading cloud service providers and system builders, including Alibaba Cloud, Amazon Web Services, Baidu Cloud, Dell Technologies, Google Cloud Platform, HPE and Microsoft Azure, among others. It is also being adopted by several supercomputing centers, including the National Energy Research Scientific Computing Center, the Jülich Supercomputing Centre in Germany and Argonne National Laboratory. We launched and shipped the DGX A100, our third-generation DGX and the most advanced AI system in the world. The DGX A100 is configurable from 1 to 56 independent GPU instances to deliver elastic, software-defined data center infrastructure for the most demanding workloads, from AI training and inference to data analytics. We announced two products for edge AI: the EGX A100 for larger commercial off-the-shelf servers and the EGX Jetson Xavier NX for micro edge servers. Supported by full AI-optimized, cloud-native and secure software, the EGX platform is built for AI computing at the edge. With the EGX, hospitals, retail stores, farms and factories can securely carry out real-time processing of the massive amounts of data streaming from trillions of edge sensors. NVIDIA EGX makes it possible to securely deploy, manage and update fleets of servers remotely. EGX is also ideal for the massive computational challenge of 5G networks, which we are working on with partners like Ericsson and Mavenir. Additionally, we announced CUDA 11 and other important software harnessing the A100's performance and universality to accelerate three of the most complex and fast-growing workloads: recommendation systems, conversational AI and data science. First, NVIDIA Merlin is a deep recommender application framework that enables developers to quickly build state-of-the-art recommendation systems leveraging our pre-trained models.
With billions of users and trillions of items on the Internet, deep recommenders are the critical engine powering virtually every Internet service. Second, NVIDIA Jarvis is a GPU-accelerated application framework that makes it easy for developers to create, deploy and run end-to-end, real-time conversational AI applications that understand terminology unique to each company and its customers, using both vision and speech. Demand for these applications is surging amid the shift to working from home, telemedicine and remote learning. And third, in the field of data science and data analytics, we announced that we are bringing end-to-end GPU acceleration to Apache Spark, an analytics engine for big data processing used by more than 500,000 data scientists worldwide. Native GPU acceleration for the entire Spark pipeline, from extracting, transforming and loading the data to training to inference, delivers the performance and scale needed to finally connect the potential of big data with the power of AI. Adobe has achieved a 7x performance improvement and a 90% cost savings in an initial test using GPU-accelerated data analytics with Spark. Our accelerated computing platform continues to gain momentum, underscored by the tremendous success of GTC Digital, our annual GPU Technology Conference, which shifted this spring to an online format. More than 55,000 developers registered for the online event, which includes hundreds of hours of free content from AI practitioners and industry experts who leverage NVIDIA's platforms. Our ecosystem is now 1.8 million developers strong. Times like these truly test a computing platform's mettle and the utility it brings to scientists racing for solutions. Researchers around the world are deploying our GPU computing platform in the fight against COVID-19. Scientists are combining AI and simulation to detect changes in pneumonia cases, sequence the virus and seek effective biomolecular compounds for a vaccine or treatment.
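The Adobe figures above (7x performance, 90% cost savings) imply a simple relationship between speedup and hourly price, which a short, purely illustrative calculation can make explicit. The function and price ratio here are assumptions for the sketch; the only inputs taken from the call are the two quoted numbers.

```python
# Illustrative speedup-versus-cost arithmetic for the GPU-accelerated Spark
# example quoted above. A job that runs `speedup` times faster needs
# 1/speedup as many machine-hours, so its relative cost is
# (hourly price ratio) / speedup.

def relative_cost(speedup: float, price_ratio: float) -> float:
    """Cost of the accelerated run relative to the CPU baseline."""
    return price_ratio / speedup

# A 90% cost saving means a relative cost of 0.10. At 7x speedup, that is
# consistent with accelerated hardware priced up to 0.7x the baseline per hour:
implied_price_ratio = 0.10 * 7
print(round(implied_price_ratio, 2))    # -> 0.7
print(round(relative_cost(7, 0.7), 2))  # -> 0.1
```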
The first breakthrough came from researchers at the University of Texas at Austin and the National Institutes of Health, who used a GPU-accelerated application to create the first 3D atomic-scale map of the virus using NVIDIA GPUs. This was followed by Oak Ridge National Laboratory, which screened 8,000 compounds to identify 77 promising drug targets using the world's fastest supercomputer, Summit, which is powered by more than 27,000 NVIDIA GPUs. The V100 GPUs at Oak Ridge are in high demand, as they can analyze 17 million compound-protein combinations in a day. To help understand the virus' spread pattern, researchers at the University of California at San Diego ported their microbiome analysis software to GPUs on the San Diego supercomputing cluster, achieving a 500x analysis speedup in studying why some people are more susceptible to the virus. Okay, moving to the rest of the P&L. Q1 GAAP gross margin was 65.1% and non-GAAP was 65.8%, up sequentially and year on year, primarily driven by GeForce GPU product mix and higher data center sales. Q1 GAAP operating expenses were $1.03 billion and non-GAAP operating expenses were $821 million, up 10% and 9% year on year, respectively. Q1 GAAP EPS was $1.47, up 130% from a year earlier, and non-GAAP EPS was $1.80, up 105% from a year ago. Q1 cash flow from operations was $909 million. Before I turn to the outlook, let me make a few comments on our Mellanox acquisition. Beyond the strong strategic and cultural fit that Jensen has discussed, Mellanox has an exceptionally strong financial profile. The company reported revenue of $429 million in its March quarter, accelerating to 40% year-on-year growth, with GAAP and non-GAAP gross margins in the mid- to high-60% range. We expect the acquisition to be immediately accretive to non-GAAP gross margins, non-GAAP earnings per share and free cash flow.
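As a quick consistency check on the quarterly figures just quoted, the year-ago baselines implied by the stated growth rates can be recomputed. This is illustrative arithmetic only; the rounded results are what the growth rates imply, not figures taken from NVIDIA's filings.

```python
# Back out the year-ago baselines implied by the Q1 FY2021 figures quoted above.

revenue_q1 = 3.08e9                    # $3.08B, up 39% year on year
prior_year_revenue = revenue_q1 / 1.39
print(round(prior_year_revenue / 1e9, 2))  # -> 2.22 (implied ~$2.22B a year earlier)

gaap_eps = 1.47                        # up 130% from a year earlier
print(round(gaap_eps / 2.30, 2))           # -> 0.64 (implied year-ago GAAP EPS)

non_gaap_eps = 1.80                    # up 105% from a year ago
print(round(non_gaap_eps / 2.05, 2))       # -> 0.88 (implied year-ago non-GAAP EPS)
```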
We aim to retain the full Mellanox team and accelerate investments in our combined roadmap as we jointly innovate on our shared vision for the future of accelerated computing. With that, let me turn to the outlook for the second quarter of fiscal 2021, which includes a full quarter contribution from Mellanox. We have assumed in our outlook the potential ongoing impact from COVID-19. We expect our automotive platform sales to be down 40% on a sequential basis and ProVis to decline sequentially. In gaming, while we will likely see ongoing impact from the partial operations or closures of iCafes and retail stores, we expect that to be largely offset by a shift to e-tail channels. Overall, the precise magnitude of the impact is difficult to predict given uncertainties around the reopening of the economy. We expect second quarter revenue to be $3.65 billion, plus or minus 2%. The contribution of Mellanox revenue is likely to be in the low-teens percentage range of our total Q2 revenue. We are providing this breakout to help with comparability between Q1 and Q2, but going forward, it will become an integrated part of our data center market platform. GAAP and non-GAAP gross margins are expected to be 58.6% and 66%, respectively, plus or minus 50 basis points. The sequential decline in GAAP gross margin primarily reflects an increase in acquisition-related costs, most of which are nonrecurring. GAAP and non-GAAP operating expenses are expected to be approximately $1.52 billion and $1.04 billion, respectively. The sequential change in GAAP operating expenses reflects an increase in stock-based compensation and acquisition-related costs. GAAP and non-GAAP operating expenses for the full year are expected to be approximately $5.7 billion and $4.1 billion, respectively. For the full year, stock-based compensation and acquisition-related costs also influence the difference between GAAP and non-GAAP operating expenses. GAAP and non-GAAP OI&E are expected to be an expense of approximately $15 million and $45 million, respectively.
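The guidance ranges stated above reduce to straightforward arithmetic. The sketch below is illustrative only; in particular, interpreting "low teens" as roughly 11%-14% is an assumption for the example, not a figure from the call.

```python
# Implied dollar ranges from the Q2 FY2021 guidance quoted above.

midpoint = 3.65e9                      # $3.65B revenue guide, plus or minus 2%
low, high = midpoint * 0.98, midpoint * 1.02
print(round(low / 1e9, 3), round(high / 1e9, 3))  # -> 3.577 3.723 (in $B)

# Mellanox is expected to be in the "low teens" percent of total Q2 revenue;
# assuming that means roughly 11%-14% gives an implied contribution of:
for share in (0.11, 0.14):
    print(round(midpoint * share / 1e9, 2))       # roughly $0.40B to $0.51B
```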
GAAP and non-GAAP tax rates are both expected to be 9%, plus or minus 1%, excluding discrete items. Capital expenditures are expected to be approximately $225 million to $250 million. Further financial details are included in the CFO commentary and other information available on our IR website. New this quarter, we have also posted an investor presentation summarizing our results and key highlights. In closing, let me highlight upcoming events for the financial community. Next Thursday, May 28, we will webcast a presentation and Q&A with Jensen on our recent product announcements, moderated by Evercore. We will also attend a TMT conference on May 27, Morgan Stanley's Cloud Secular Winners Conference on June 1, BofA's Technology Conference on June 2, Needham's 4th Automotive Technology Conference on June 3, and the NASDAQ Investor Conference on June 16. Operator, we will now open the call for questions. Will you please poll for questions?

Your first question comes from Aaron Rakers with Wells Fargo. Please go ahead.

Yes. Thanks, and congratulations on the solid quarter. Colette, I'm curious about your commentary around visibility on the data center side. That's been a comment over the last couple of quarters. How would you characterize your visibility today relative to what it was last quarter? And how do we think about that visibility in the context of trends into the back half of the calendar year? Thank you.

Thanks, Aaron, for the question. You are correct. We indicated a couple of quarters ago that we were starting to see improved visibility after we came out of the digestion period in the prior fiscal year. As we move into Q2, we still have solid visibility into our Q2 results for our overall data center business. So at this time, I'd say it is relatively about the same as what we had seen going into the Q1 period.
And we think that is a true indication of the excitement about our platform, and most particularly the excitement regarding the A100, its launch and its additional products. Now, regarding the second half of the year: as you know, we have seen broad-based growth in both the hyperscale and the vertical industries, both of them at record levels in our Q1 results. We see inferencing continuing to grow as well, and we are also expanding in terms of edge AI. Strong demand for the A100 products, including the Delta board but also our DGXs, is just starting its initial ramp. However, we do guide only one quarter at a time. So it's still a little bit too early for us to speak with true certainty about the macro situation that's in front of us. But again, we feel very good about the demand for the A100.

Your next question comes from Stacy Rasgon with Bernstein Research. Please go ahead.

Hi, guys. Thanks for taking my questions. I first wanted to follow up on your gaming commentary. You mentioned a couple of offsets: COVID potentially still a headwind, e-tail a tailwind, and maybe offsetting each other. Were you trying to suggest that those did offset completely and gaming was kind of flattish into Q2? Because I know it has a typical seasonal pattern, and Switch is typically up. I guess, what were you trying to say with those factors? And what are the kinds of things we should be thinking about when it comes to seasonality, Colette, into Q2 around that business segment?

So let me start, and I'll see if Jensen also wants to add on to it. I think you're talking about our sequential between Q1 and Q2. Some of the pieces that we had seen related to COVID-19 in Q1 may carry over into Q2. COVID-19 did in fact have an impact in terms of our retail channels as well as our iCafes. However, as we discussed, demand efficiently moved to overall e-tail.
We have normally been seasonally down in desktop between Q1 and Q2, and that will likely happen. But we do see strength in terms of laptops and overall consoles as we move from Q1 to Q2. So in summary, we do expect growth sequentially between Q1 and Q2 for our overall gaming business. And I'll turn it over to Jensen to see if he has additional commentary.

That was great. That was fantastic. I guess just to follow up on that, though: if it's growing, I mean, in prior years, we've seen it grow very strong double digits. Obviously, the mix of the business was different back then. But is there any chance that it could be up in line with the typical levels we've seen in the past? Can you give us any sense of magnitude? That would be really helpful.

Yes. I think when we think about that sequential growth, we'll probably be in the low, moving up to probably the mid-single digits. That's what's in our guidance right now, and we'll just have to see how the quarter goes.

Yes, that's very helpful.

Stacy, the thing that I would add is this. I think the guidance is exactly what Colette mentioned. But if you look at the big picture, there are a few dynamics working really well in our favor. First, of course, is that RTX and ray tracing is just a home run. Minecraft was phenomenal. We have 33 games in the pipeline that have already been announced or are shipping. Just about every game developer has signed on to RTX and ray tracing, and I think it's a foregone conclusion that this is the next generation; this is the way computer graphics is going to be in the future. So I think RTX is a home run. The second is that the notebooks we've grown are just doing great. We've got 100 notebooks in gaming. We have 75 notebooks designed for either mobile workstations or what we call NVIDIA Studio for designers and creators. And the timing was just perfect.
With everybody needing to stay at home, the ability to have a mobile gaming platform and a mobile workstation, it was just perfect timing. And then, of course, you guys know quite well that the Nintendo Switch is doing fantastic. The top three games in the world today are Fortnite, Minecraft and Animal Crossing, and all three games are on NVIDIA platforms. And so I think we have all the dynamics working in our favor, and then we just have to see how it turns out.

Got it. That's helpful. Thank you, guys.

Yes. Thanks.

Your next question comes from Joe Moore with Morgan Stanley.

Great. Thank you. I wanted to ask how quickly the A100 is ramping, between hyperscale as well as on the DGX side as well as on the HPC side. And is it a smooth transition? I remember when you launched Volta, there was a little bit of a transitional pause. Can you tell us how you see that ramping with the different customer segments?

Yes. Thanks a lot, Joe. So first of all, taking a step back, accelerated computing is now common sense in data centers. It wasn't the case when we first launched Volta. If you went back to Volta, Volta was the first generation that did deep learning training in a really serious way. And it was really focused on training; it was focused on training and high-performance computing. We didn't come until later with the inference version called T4. But over the course of the last five years, we've been accelerating workloads that are now diversifying in data centers. If you take a look at most of the hyperscalers, machine learning is now pervasive, deep learning is now pervasive. The notion of accelerating deep learning and machine learning using our GPUs is now common sense. It didn't used to be; people still saw it as something esoteric. But today, data centers all over the world expect a very significant part of their data center to be accelerated with GPUs.
The number of workloads that we've accelerated in the last five years has expanded tremendously, whether it's imaging or video or conversational AI or deep recommender systems, which are probably, unquestionably at this point, the most important machine learning model in the world. And so the number of applications we now accelerate is quite diverse. And that's really contributed greatly to the ramp of Ampere. When we started to introduce Ampere to the data centers, it was very common sense to them that they would adopt it. They have a large amount of workload that's already accelerated by NVIDIA GPUs. And as you know, our GPUs are architecturally compatible from generation to generation; we're forward compatible, we're backward compatible. Everything that runs on T4 runs on A100. Everything that runs on V100 runs on A100. And so I think the transition is going to be really, really smooth. On the other hand, V100 and T4, by the way, had a great quarter; they were sequentially up, and then on top of that, we grew with the A100 shipments. V100 and T4 are now quite broadly adopted in hyperscalers for their AI services, in cloud computing, and in vertical industries, which is almost roughly half of our overall HPC business, all the way out to the edge, which had a great quarter. Supercomputing is important, but it's a much smaller part of high-performance computing; that said, we also shipped the A100 to supercomputing centers. And so the summary of it is that the number of workloads for accelerated computing has continued to grow, the adoption of machine learning and AI in all the clouds and hyperscalers has grown, and the common sense of using acceleration is now a foregone conclusion. And so I think we're ramping into a very receptive market with a really fantastic product.
Your next question comes from Vivek Arya with Bank of America. Please go ahead.

Thanks for taking my question, and congratulations on the strong growth and execution. Just a quick clarification, Colette: is 66% kind of the new baseline for gross margin? And then the question, Jensen, for you: give us a sense for how much inference as a workload and Ampere as a product are expected to contribute. I'm just curious where you are in terms of growing the inference and edge AI market, and where we are in the journey of Ampere penetration. Thank you.

So let me start on the first question, regarding gross margin as we look into Q2. We are guiding Q2 non-GAAP gross margin at 66%. This would be another record gross margin quarter, just as we finished an overall record level, even as we are continuing right now to ramp our overall Ampere architecture within that. The Q2 guidance also incorporates Mellanox, which has had very similar overall margins to our data center margins. We see this new baseline as a great transition and are likely to see some changes as we go forward. However, it's still a little early to see where these gross margins will go, but we're very pleased with the overall guidance right now at 66% for Q2.

Accelerated computing is just at the beginning of its journey. I would characterize it as several segments. First is hyperscale AI microservices, which is all the services that we enjoy today that have AI. Whenever you shop on the web, it recommends a product. When you're watching a movie, it recommends a movie, or it recommends a song, or it recommends news, or a friend, or a website, the first 10 websites that they recommend. All of these recommenders that are powering the Internet are based on machine learning today. It's the reason why they're collecting so much data: the more data they can collect, the better they can predict your preference.
And predicting your preference is the core of a personalized Internet. It used to be largely based on CPU approaches, but going forward it's all based on deep learning approaches. The results are much superior, and a few percentage points of change in preference prediction accuracy can result in tens of billions of dollars of economics. So this is a very, very big deal, and the shift towards deep learning in hyperscale AI microservices is still ramping. Second is cloud. As you know, cloud is a $100 billion market segment of IT today, growing about 40% into a $1 trillion opportunity. Cloud computing is the single largest IT industry transformation that we have ever seen. The two forces that are really driving our data center business are AI and cloud computing, and we're perfectly positioned to benefit from these two powerful forces. So the second is cloud computing, and that journey has a long way to go. Then the third is industrial edge. It's not the case today, but in the future the combination of IoT, industrial 5G and artificial intelligence is going to turn every single industry into a tech industry. Whether it's logistics or warehousing or manufacturing or farming or construction, every single industry will become a tech industry. There'll be trillions of sensors, and they'll be connected to little micro data centers. Those data centers will be in the millions, distributed all over the edge. And that journey has just barely started. We announced three very important partners in three domains. They're the lead partners that we felt people would know, but we have several hundred partners working with us on edge AI. We announced Walmart for smart retail. We announced the U.S. Postal Service, the world's largest mail sorting and logistics service.
And then we announced this last quarter BMW, who is working with us to transform their factory into a robotics-automated factory of the future. These three applications are great examples of the next phase of artificial intelligence and where Ampere is going to ramp into, and that is really at its early stages. So I think it's fair to say that we're really well positioned in the two fundamental forces of IT today, data center scale computing and artificial intelligence, and the segments where it's going to make a real impact are all gigantic markets: hyperscale AI, cloud and edge AI. Your next question comes from C.J. Muse with Evercore. Please go ahead. Yes, good afternoon. Thank you for taking the question. I guess if I could ask two. Colette, can you help us with what you think the growth rate for Mellanox could look like in calendar 2020? And then Jensen, a bigger picture question for you, really not specific to healthcare, more broad based: how do you think about the long lasting impact of COVID on worldwide demand for AI? Thank you. C.J., can you help me? You cut out in the middle of your sentence. Can you repeat the first part of it for me? Thank you. Sorry about that. I'm curious if you could provide a little hand holding on what we should think about for growth for Mellanox in calendar 2020? At this time, it's a little early for us. As you know, we generally just guide one quarter out, and we're excited to bring the Mellanox team on board so we can begin the future of building products together. We've seen their overall performance over the last couple of quarters. They had a great last year, and they had a great March quarter as well. We're just going to have to stay tuned to see with them what the second half of the year looks like for them. This pandemic is really quite tragic, and it's reshaping industries and markets. I think it's going to be structural. I think it's going to remain.
And I think your question is really good, because now is a good time to think about where to double down. There are a few areas that I believe are going to be structurally changed, and I think that once I say it, it will be very sensible. The first is that the world's enterprise digital transformation and move to the cloud is going to accelerate. Every single company can't afford to rely just on on-prem IT. They have to be much more resilient, and having a hybrid cloud computing infrastructure is going to provide them the resilience they need. So that's one. And as the world accelerates into this $1 trillion IT infrastructure transformation, which is now $100 billion into that journey and growing 40% a year, I wouldn't be surprised to see that accelerate. And so cloud computing and AI are going to accelerate because of that. The second is the importance of creating a computational defense system. The defense systems of most nations today are based on radar, and yet in the future, our defense systems are going to have to detect things that are unseeable, like infectious disease. I think every nation, government and scientific lab is now gearing up to think about what it takes to create a defense capability that is based on computational methods. NVIDIA is an accelerated computing company. We take something that otherwise would take a year and compress it: in the case of Oak Ridge, they filtered a billion compounds in a day. And that's what you need to do. You need to find a way to have an accelerated computational defense system that allows you to find insight and detect early warnings as soon as possible. And then of course, that computational system has to cover the entire range from mitigation to containment to living with it and monitoring. And so scientific labs are going to be gearing up, national labs are going to be gearing up. The third part is AI and robotics.
We're going to have to have the ability to do our work remotely. NVIDIA has a lot of robots that are helping us in our labs, and without those robots we'd have a hard time getting our work done. So we need to have remote autonomous capability to handle dangerous circumstances: to disinfect environments, to fumigate environments autonomously, to clean environments, and to interact with people as little as possible in the event of an outbreak. All kinds of robotics applications are being dreamed up right now to help society in the case of another outbreak. And then lastly, I think more and more people are going to work permanently from home. There's a strong movement of companies that are going to support a larger percentage of people working from home. And when people work from home, it's clearly going to increase the single best home entertainment, which is video games. I think video games are going to represent a much larger segment of the overall entertainment budget of society. So these are some of the trends, I would say. Cloud computing and AI, national labs and a computational defense system, robotics, and working from home are structural changes that are going to be here to stay. And these dynamics are really good for us. Your next question comes from Toshiya Hari with Goldman Sachs. Please go ahead. Hi guys, good afternoon, and thank you very much for taking the question. I had one for Colette and then one for Jensen as well, if I may. Colette, I wanted to come back to the gross margin question. You're guiding July essentially flat sequentially, despite what I'm guessing is a better mix, with Mellanox coming in and automotive guided down 40% sequentially. I guess the question is, what are some of the offsets that are pulling down gross margins in the current quarter?
And sort of related to that, how should we be thinking about the cadence of OpEx going forward, given the 6 month pull-in that you guys talked about on the compensation side? And then one quick one for Jensen. Hoping you'd comment on the current trade landscape between the U.S. and China. I feel like you guys shouldn't be impacted in a material way, directly or indirectly, but at the same time, given the critical role you play in scientific computing, I can sort of see a scenario where some people may claim that you guys contribute to efforts outside of the U.S. So if you can speak to that, that would be helpful. Thank you. Thanks, Toshiya, for your question. So regarding our gross margins in Q2, our Q2 guide at 66% is up sequentially from even a record level in Q1. This next record that we hope to achieve with our guidance includes the ramp of our overall Ampere architecture. Typically, when we transition to a new architecture, margins can be a little bit lower at the onset, but tend to move up and trend up over time. Additionally, as you articulated, our automotive is lower, but we're also going to see growth in some of our platforms in gaming, such as consoles, which may offset that. But overall, there's nothing structural to really highlight other than our mix in business and the ramp of Ampere and its transition. Let's see, the trade tension. We've been living in this environment for some time, Toshiya. As you know, the trade tension has been in the background for coming up on a year, probably longer. And China's high performance computing systems are largely based on Chinese electronics anyhow. So I think our condition won't materially change going forward. So, Toshiya, let me respond to the second question that you had for me, which was regarding our OpEx and our decision to pull forward our overall focal review into Q2.
This is something that we've normally done later in the year. We felt it was prudent during the current COVID-19 situation. Although our employees are quite safe, we just wanted to make sure that their family members were also safe and had the opportunity to have cash upfront. It is about 4 months earlier than normal, and it is incorporated in our guidance for Q2. Your next question comes from Mark Lipacis with Jefferies. Please go ahead. Hi, thanks for taking my question. A question coming back to the A100. I'm trying to understand how this fits into the evolution of your solution set over time and the evolution of the demand for the applications. I guess if I think about it going back, you had a solution which was largely training based, and then you introduced solutions that were targeted more at inferencing. And now you have a solution that, to my understanding, solves both inferencing and training efficiently. So I guess I'm wondering, 3 years, 5 years, 10 years down the line, is this part of the general purpose acceleration framework that you had talked about in the past, Jensen? Or should we still expect to see inferencing-specific solutions in the market, and training-specific solutions, and then an Ampere-class solution for a different class of application? If you could provide a framework for thinking about Ampere in those contexts, I think that would be helpful. Thank you. Yes, thanks a lot, Mark. Good question. I think if you take a step back, the current setup in data centers started probably all the way back 6, 7 years ago, but really accelerated in the last 5 years, and then really accelerated in the last couple of years; we learned our way into it. There are three classes of workloads, and they came into acceleration over time.
The first class of workload that we discovered, the major workload, was deep learning training. And the ideal setup for that, prior to Ampere, is the V100 SXM with NVLink, 8 GPUs on one board. That architecture is called scale up. It's like a supercomputer architecture, like a weather simulation architecture: you're trying to build the largest possible computing node you can for one operating system. So, scale up. The second thing that we learned along the way was that cloud computing started to grow, because researchers around the world needed access to an accelerated platform for developing their machine learning algorithms. And because they have different budgets and want to get into it a little bit more lightly, with the ability to scale up to larger nodes, the perfect model for that was actually the V100 PCI Express, not SXM but PCI Express, which allows you to offer 1 GPU all the way up to many GPUs. That versatility of the V100 PCI Express, not as scalable in performance as the V100 SXM, was much more flexible. For renting, it was really quite ideal. And then we started to get into inference, and we're on our 7th generation of TensorRT, TensorRT 7. Along the way, we've been able to accelerate more and more, and today we largely accelerate every deep learning inference computational graph that's out there. The ideal GPU for that was something with reduced precision, 8-bit integer, with electronics focused more on inference. Inference is a scale out application, where you have millions of queries and each one of the queries is quite small, versus scale up, where you have one training job, and that one training job could be running for days and sometimes even weeks. So a scale up application is for one user that uses a very large machine for a long period of time.
Scale out is for millions of users, each with a very small query that could last hundreds of milliseconds, where ideally you'd like to get it done in hundreds of milliseconds. So notice I've got three different architectures in the data center today. Most data centers today have storage servers, CPU servers, scale up acceleration servers with Voltas, scale out servers with T4s, and then flexible cloud computing servers based on the V100 PCI Express. And the ability to predict workload is so hard that the utilization of these systems will be spiky. So we created an architecture that allows for three things. The three characteristics of Ampere are: number one, it is the greatest generational leap in our history. I don't remember a generation where we increased throughput for training and inference by 20x. For training and for inference, it is a gigantic leap forward. Second, it's the first architecture that is unified. The computation engine of Ampere accelerates from the moment the data comes into the data center, starting with data processing, which is called ETL. The single most important computational engine in the world today for big data used to be Hadoop, but now it's Spark. Spark is used all over the world, with 16,000 customers, and we finally have the ability to accelerate it. Ampere is also good for training, machine learning such as XGBoost as well as deep learning, all the way out to inference. So we now have a unified acceleration platform for the entire workload. And the third thing is that it's the first GPU ever, the first acceleration platform ever, that's elastic. You could reconfigure it: you could configure it for scale up, or you could configure it for scale out.
When you configure it for scale up, you've got a whole bunch of GPUs together using NVLink, and it creates one gigantic GPU. When you want to scale it out, that same computation node becomes 56 small GPUs, and each one of those 56 partitions is more powerful than Volta. It's really quite extraordinary. So Ampere is a breakthrough on all of these fronts: for performance; for the fact that it unifies the workload, so you can now have one acceleration cluster; and number three, it's elastic. You could use it in the cloud, you could use it for inference, you could use it for training. The versatility of Ampere is the thing that I'm most excited about. Now you can have one acceleration cluster that serves all of your needs. Thank you, that's very helpful. Yes, thanks a lot, Mark. Your next question comes from Timothy Arcuri with UBS. Please go ahead. Thanks a lot. Actually, I had two. I guess, Jensen, first for you, on the data center business. Things have been very strong recently. Obviously, there are always concerns that customers are pulling in CapEx, but it sounds like you have pretty good visibility into July. But last time, most folks also thought that your penetration was so low that you would be immune to any digestion, and that wasn't the case. So I'm wondering, things are different now with A100 and whatnot, but my question is how you handicap your ability to get through any digestion on the CapEx side this time? And then my second question, Colette: stock comp had been running at like $220 million a quarter, and the guidance implies that it goes to like $460 million a quarter, so it goes up a lot. Is that all executive retention? And is that sort of the right level as you look into 2021? Thanks. Colette, did you want to handle that first, and then I'll take the other? Sure. So let me help you on the overall GAAP adjustments, the delta between our GAAP OpEx and our non-GAAP OpEx.
If you look at the full year and what we guided, we probably have about $1.55 billion associated with GAAP-only expenses. Keep in mind, there is more in there than just our stock-based compensation. We have also incorporated the accounting that we will do for Mellanox, and a really good portion of those costs are associated with the amortization of intangibles, as well as acquisition-related costs, deal fees and one-time items. So our stock-based compensation includes what we need for NVIDIA and also the onboarding of Mellanox. There is some retention with the onboarding of Mellanox, but for the most part, it is just working them into the year for three quarters, which is what's influencing the stock-based compensation. Tim, there are several differences between our condition then and our condition today. The first difference is the diversity of workload we now accelerate. Back then, we were still early in our inference, and most of the data center acceleration was used for deep learning training. Today, the versatility spans from data processing to deep learning, and the number of different types of AI models being trained is growing tremendously: from training a model for detecting unsafe video, to natural language understanding, to conversational AI, to now a gigantic movement towards deep recommender systems. So the number of different models being trained is growing, and the size of the models is gigantic. Recommendation systems are gigantic: they're training on data sets that are hundreds of terabytes, and it would take tens to hundreds of servers to hold all of the data that is needed to train these recommender systems.
So the diversity spans from data analytics, to training all the different models, to the inference of all the different models. We didn't inference recurrent neural nets at the time, which are probably the most important models today: text language models and speech models are all recurrent neural net models. Those models were early for us at the time. So number one is the diversity of workload. The second is the acceleration of cloud computing. I think that accelerated cloud computing is a movement that is going to be a multi-year, if not decade-long, transition. Today it's only a $100 billion segment of the IT industry; it will be $1 trillion someday, and that movement is just starting. We're also much more diversified beyond the clouds. At the time, cloud was largely where our acceleration went for deep learning. Today, hyperscale only represents about half. So we've diversified significantly, not out of cloud, but to include vertical industries. A lot of that has to do with edge AI and inference. As I mentioned earlier, we're working with Walmart and BMW and USPS, and that's just the tip of the iceberg. So I think the conditions are a little different. And then what I would say lastly is Ampere. Even though we've only been ramping for a few weeks, it was quite significant; it was a great ramp. The demand is fantastic. It is the best ramp we've ever had, and the demand is the strongest we've ever had in data centers. And we're at the start of a multiyear ramp. So those are some of the differences; I think the conditions are very different. Thank you, gentlemen. Yes, thanks a lot, Tim. Your next question comes from Harlan Sur with JPMorgan. Please go ahead. Good afternoon. Thanks for taking my question. Jensen, the team has shown the importance of networking and the networking fabric with the Mellanox acquisition.
For example, when you guys moved from the Volta DGX-1 to the Volta DGX-2, you didn't change the GPU chipset, but by adding a custom networking fabric chip and more Mellanox network interface cards, among other things, you drove a pretty significant improvement in performance per GPU. But now, when we think about scaling out compute acceleration to data center scale implementations, how do Mellanox's Ethernet switching platforms differ from those provided by other large networking OEMs, some of whom have been your long term partners? And how does the Cumulus acquisition fit into the switching and networking strategy as well? Yes, great. Thanks a lot, Harlan. I appreciate the question. So DGX: this is our third-generation DGX, and it's really successful. People love it. It's the most advanced AI instrument in the world. If you're a serious AI researcher, this is your instrument. And in the DGX A100, there are 8 A100s and 9 Mellanox NICs, the highest speed NICs they have. And so we have a great appreciation for high performance networking. High performance networking and high performance computing go hand in hand, and the reason is that the problems we're trying to solve no longer fit in one computer, no matter how big it is. So the work has to be distributed, and when you distribute a computational workload of such intense scale, the communications overhead becomes one of its greatest bottlenecks, which is the reason why Mellanox is so valuable. There's a reason why this company is so precious, really a jewel, one of a kind. And it's not just about the link speed. It's a combination of architecture, software and chip design, and we have a deep appreciation for software. That combination at Mellanox is just world class. That's the reason why they're in 60% of the world's supercomputers, and why they're in 100% of the AI supercomputers.
And their understanding of large scale distributed computing is second to none. Now, I just talked about scale up, and you're absolutely right. The question is, why scale out? The reason is this, and it's the reason why they're doing so well: the movement towards disaggregated microservice applications, where microservice containers are distributed all over the data center and orchestrated so that the workload can be spread across a very large hyperscale data center. The three most important applications in the world today, in my estimation: number one would be TensorFlow and PyTorch, number two would be Spark, and number three would be Kubernetes. You could rank them however you desire. In the case of Kubernetes, it's a brand new type of application, where the application is broken up into small pieces and orchestrated across an entire data center. And because it's broken up into small pieces and orchestrated across the entire data center, the networking between the compute nodes becomes the bottleneck again. That's why they're doing so well: by increasing the network performance and offloading the communications off the CPUs, you increase the throughput of a data center tremendously. It's the reason why they had a record quarter last quarter, and the reason why they've been growing 27% per year. Their software stack, their integration into the hyperscale cloud companies, and the incredibly low latency of their links make them really unique, whether it's Ethernet or InfiniBand. It's a really fantastic stack. And then lastly, Cumulus. We would like to innovate in this world where the world is moving away from just a CPU as the compute node. The new computing unit: a software developer is writing a piece of software that runs on the entire data center.
Going forward, the fundamental computing unit is an entire data center. It's just utterly incredible: one human could write an application, and it would literally activate an entire data center. In that world, we would like to be able to innovate from end to end: networking, storage, security. Everything has to be secure in the future, so that we can reduce the attack surface down to practically nothing. So networking, storage and security are all completely offloaded, all incredibly low latency, all incredibly high performance, all the way to compute, all the way through the switch. And then the second thing is, we'd like to be able to innovate across the entire stack. You know that NVIDIA is just supremely obsessed with software stacks, and the reason is that software creates markets. You can't create new markets like we're talking about, whether it's computational healthcare or autonomous driving or robotics or conversational AI or recommender systems or edge AI, without software stacks. It takes software to create markets. And so we're obsessed with software and with creating open platforms for the ecosystem and all of our developer partners, and Cumulus plays perfectly into that. They pioneered the open networking stack, and in a lot of ways they pioneered software defined data centers. So we're super, super excited about the team, and now we have the ability to innovate in a data center scale world from end to end, and from top to bottom across the entire stack. Okay? Yes. Thank you, Jensen. Okay, thanks a lot. Your next question comes from William Stein with SunTrust. Please go ahead. Great, thank you for taking my question. Jensen, I'd like to focus on something you said, I think in one of your earlier responses. You said something about a very significant part of data centers now being accelerated with GPUs.
I'm sort of curious how to interpret that. If we think about the evolution of compute architecture, going from almost entirely racks and racks of CPUs to some future day where we have many more accelerators and maybe a much smaller number of CPUs relative to those, maybe you can talk to us about where we are in that architectural shift and where you think it goes longer term? Yes, I appreciate the question. For computer architecture geeks and people who follow history, you know well that in the entire history of time, there are only two computing architectures that have made it in any reasonable way: one of them is x86, the other one is ARM. Whether you get an ARM computer or an x86 computer, you can program it. And in fact, there was no such thing as an accelerated computing platform until we came along. Today, we're the only accelerated computing platform that you can really broadly address. We're in every cloud, we're in every computer company, we're in every country, we're in every single size of computer. And we accelerate applications from computer graphics to video games to scientific computing to workstations to machine learning to robotics. This journey took 20 some odd years. Inside our company, it took 20 some odd years, and we've been focused on accelerated computing since the beginning of our company. And we made it general purpose, really starting with Cg, C for graphics, and then it became CUDA. We've been working on accelerated computing for quite a long time, and I think at this point it's a foregone conclusion that accelerated computing has reached the tipping point, and it's well beyond it. The number of developers that we supported this year was almost 2 million around the world, and it's growing what appears to be exponentially.
And so NVIDIA accelerated computing is now well established. It's common sense, and people who are designing data centers expect to put accelerated computing in them. The question is how much: how much accelerated computing do you use, and in what part of your data pipeline do you use it? And the gigantic breakthrough, of course, we know well now: NVIDIA is recognized as one of the three pillars that ignited modern AI, the big bang of modern AI. The other two pillars, of course, are the deep learning algorithms and the abundance of data. These three ingredients came together, and people used NVIDIA accelerated computing largely for training. But over time, training expanded to a lot more models. And as I mentioned earlier, the single most important model in machine learning today is the recommender system. It's the most important model because it's the only way that you and I can use the Internet in any reasonable way: the only way that you and I could use a shopping website or a video app or a music app or books or news or anything. It is the engine of the Internet from the consumer's perspective. From the company's perspective, it is the engine of commerce. Without the recommender system, there's no way they could possibly make money. And so their accuracy in predicting user preferences is core to everything they do; you can just go up and down the list of every company. And that engine is gigantic. The data processing part of it is the reason why we spent 3 years on Spark and RAPIDS, which made accelerated Spark possible, and all the work that we did on NVLink; all of that was really focused on big data analytics. The second is all of the training of the deep learning models and the inference.
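The stages Huang keeps returning to, data processing (ETL), model training, and inference, can be sketched end to end. This is a toy in plain Python with made-up click records and a one-feature logistic model; the names and data are all hypothetical, and the point is only the pipeline shape that Spark, RAPIDS, CUDA and TensorRT accelerate at data center scale.

```python
import math

# Stage 1: ETL. Aggregate raw click logs into (feature, label) pairs.
# This is the step Spark/RAPIDS accelerate; these records are made up.
raw_clicks = [
    {"user": "u1", "category": "sports", "clicked": 1},
    {"user": "u2", "category": "sports", "clicked": 1},
    {"user": "u3", "category": "sports", "clicked": 0},
    {"user": "u1", "category": "news",   "clicked": 0},
    {"user": "u2", "category": "news",   "clicked": 0},
    {"user": "u3", "category": "news",   "clicked": 1},
]

def etl(records):
    """Feature = 1.0 for a sports impression, 0.0 otherwise."""
    return [([1.0 if r["category"] == "sports" else 0.0], float(r["clicked"]))
            for r in records]

# Stage 2: training. Fit a one-weight logistic model by full-batch
# gradient descent (deep recommenders do this at vastly larger scale).
def train(examples, lr=0.5, epochs=500):
    w, b = 0.0, 0.0
    n = len(examples)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in examples:
            p = 1.0 / (1.0 + math.exp(-(w * x[0] + b)))
            gw += (p - y) * x[0]
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Stage 3: inference. Score a new impression with the trained model.
def predict(model, x):
    w, b = model
    return 1.0 / (1.0 + math.exp(-(w * x[0] + b)))

model = train(etl(raw_clicks))
print("P(click | sports impression):", round(predict(model, [1.0]), 2))
```

In production, each stage maps to the hardware discussed in the call: ETL and training run on scale up nodes, while inference fans out across many small, low-latency partitions serving millions of queries.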
So the number of applications and the footprint of accelerated computing have grown tremendously, and its importance has grown tremendously, because these applications are the most important applications of these companies. When I said that acceleration is still growing, it is. But the major workloads, the most important workloads of the world's most important companies, now solidly require acceleration. And so I'm looking forward to a really exciting ramp for Ampere, for all the reasons that I just mentioned. Your next question comes from John Pitzer with Credit Suisse. Please go ahead. Hi guys, thanks for letting me ask the question. Just two quick ones. Colette, I hate to ask something as mundane as OpEx, but given the full year guide, there's a lot to unpack. You talked about some of it, like the raises. I think you also probably have some COVID pluses and minuses in there, and I think there's an extra week this year as well. And then of course, there's Mellanox and how you're thinking about investing in that asset. So I'm just kind of curious: when we look at the full year guide, is there something structural going on in OpEx as you try to take advantage of all these opportunities? Or can we use it as sort of a guidepost to how you're thinking about revenue for the back half of the year as well? How should I understand that? And then, Jensen, just a quick one for you. It makes sense to me that COVID is accelerating activity in HPC and hyperscale, and maybe even in certain verticals like healthcare. But in the other verticals, has the shelter in place kind of hurt engagement? And could we actually come out of COVID with some pent up demand in those vertical markets? Thanks, John, for the question. Let's start with the first one, on the overall OpEx for the year. We've guided the non-GAAP at approximately $4.1 billion for the year. Yes, that incorporates three full quarters of Mellanox.
Mellanox and its employees, we have close to 3,000 Mellanox employees coming on board. You are correct, we have a 53rd week this year, and that has been outlined in the SEC filings, so you should expect that as well. We've pulled forward our focal by several months in order to take care of our employees. And then lastly, we are investing in our business. We see some great opportunities. You've seen some great results from our investment, and there's more to do. We are hiring and investing in those businesses. So there's nothing different structurally; it's just the onset of Mellanox and our investing together, which I think will produce great long term results. And as usual, John, you know that we're investing into the IT industry's largest opportunities, cloud computing and AI. And then after these two opportunities is edge AI. And so we're looking down the fairway at some pretty extraordinary opportunities. But as usual, we're thoughtful about the rate of investment, and we're well managed. NVIDIA's leadership team are excellent managers, and you can count on us to continue to do that. Hey, Simona, what was John's question? Could you just give me one hint? I had it at the tip of my mind. Just the idea of engagement levels in verticals with shelter in place, has that hampered them? Yes, right. Some of the industries have been affected. We already mentioned the automotive industry. The automotive industry has ground to a halt. Manufacturing has largely stopped. And you saw that in our guidance. We expect automotive to be down 40% quarter to quarter. It's not going to remain that way; it's going to come back. And nobody knows what level it's going to come back to and how long it will take, but it's going to come back. And there's no question in my mind that the automotive industry, they're hunkered down right now, but they will absolutely invest in the future of autonomous vehicles.
They have to, or they'll be extinct. It's not possible not to have autonomous capability in the future of everything that moves. Not just so that it could completely drive without you. That's a nice benefit too. But mostly because of safety and comfort and just the joy of what seems like the car is reading your mind. And of course, you're still responsible for driving it. But it just seems to be coasting down the road, reading your mind and helping you. And so I think the future of autonomous vehicles is a certainty. People recognize the incredible economics that the pioneer Tesla is enjoying. And the industry is going to go after it. The future car companies are going to be software-defined companies, and they're going to be technology companies. And they would love to have an economic model that allows them to enjoy the installed base of their fleets. And so they're going to go after it. And so I'm certain that this is going to come back. I have every confidence it's going to come back. And let's see, the energy sector has been impacted. The retail sector has been impacted. Those aren't large industries for us. The impact in some of the industries is accelerating their focus on robotics. For example, on the one hand, BMW is obviously impacted in manufacturing, which is the reason why they're moving so rapidly towards robotics. They have to figure out a way to get robotics into their factories. And same thing with retail. You're going to see a lot more robotic support in retail. You're going to see a lot more robotic support in warehouses, in logistics. And so during this time, when the industry is disrupted and impacted, it allows the market leaders to really lean into investing in the future. And so when they come back, they'll be coming back stronger than ever. Thank you. And your next question comes from Matt Ramsay with Cowen. Please go ahead. Thank you very much. Good afternoon. Two different topics, Jensen.
Well, first of all, congrats on Ampere, it's a heck of a product. Thank you, Matt. I'm so proud of it. The first question is, it might have been a little bit hard to talk about this topic while the deal was pending. But now that it's closed, maybe you could talk a little bit about opportunities to innovate on and customize the Mellanox stack, and the balance of that against keeping an industry standard? And the second one is, E3 was canceled and Computex moved around. At the same time, there's obviously stay at home gaming demand. Just how you think about gaming product launch logistics, and any comments there would be really helpful. Thank you. Yes. Thanks a lot, Matt. Appreciate the questions. I'll go backwards because it's kind of cool. On the one hand, I do miss that we can't engage the developers face to face. It's just so much fun at GTC, seeing all their work and the hundreds of papers that are presented. I learn so much each time. And frankly, I really enjoy the analyst meetings that we have. And so there's all kinds of stuff that I miss about the physical GTC. But here's the amazing thing. We had the GTC kitchen keynote. I did it from my kitchen, just right behind me. And the kitchen keynote has been viewed almost 4,000,000 times. And the video is incredible. And so I think our reach could be quite great. And so I'm not too worried. We've got an amazing marketing team, and we've got great people. They're going to find a way to reach our gamers. And whenever we launch something next, the gamers and our customers and our end markets are going to be really excited to see it. And so I'm very confident that we're going to do just fine. Matt, what was the question before? I should never do it backwards. Just the industry standard versus customization of the Mellanox opportunity. I see. Okay. Yes. We worked so closely with Mellanox over the years. And on the day that we announced at GTC, you could see the number of products that we have working together.
The product synergies are really incredible. And the product synergies include a lot of software development that went in and a lot of architectural development that went in. DGX comes with 9 Mellanox NICs, as I mentioned. If you look at our data center, before we shipped DGXs to customers, we shipped them to our own engineers. And the reason for that is because every single product in our company has AI in it, from Jarvis to Metropolis to Merlin to Drive to Clara to Isaac, all of our products have AI in them. And we're accelerating frameworks for all of the AI industry. And Ampere comes with a brand new numerical format called TensorFloat-32, or TF32. And TF32 is just a fantastic new numerical format, and the performance is incredible. And we had to get it integrated with the industry standard frameworks. And now TensorFlow comes standard with NVIDIA TF32, and PyTorch comes standard with TF32. And so we need our own large scale data center. And so the first customer we shipped to was ourselves. And then we started shipping as quickly as we could to all of the customers. You saw that in our data center, in our supercomputer, we have 170 state of the art brand new Mellanox switches and almost 1,500 200-gigabit-per-second Mellanox NICs and 15 kilometers of fiber optic cables. And that is one of the most powerful supercomputers in the world today. And it's based on Ampere. And so we have a great deal of work that we did there together. We announced our first edge computer between us and Mellanox, this new card we call the EGX A100. It integrates Ampere, and it integrates Mellanox's ConnectX-6 Dx, which is designed for 5G telcos and edge computing. It has incredible security, with a single root of trust, and it's virtualized. And so basically, this EGX A100, when you put it into a standard x86 server, turns that server into a cloud computer in a box.
The entire capability of a state of the art cloud, which is cloud native, is secure, has incredible AI processing, and is now completely hyper-converged inside one box. The technology that made the EGX A100 is really quite remarkable. And so you could see all the different product synergies that we have in working together. We couldn't have done Spark acceleration without the collaboration with Mellanox. They worked on a piece of networking software called UCX. We worked on NCCL. Together, they made possible the infrastructure for large scale distributed computing. I mean, the list goes on and on and on. And so the 2 teams have great chemistry. It's a great culture fit. I love working with them. And right out of the chute, you saw all of the great product synergies that are made possible because of the combination. That is all the time we have for questions. I'll turn the call back to Jensen Huang for closing remarks. Thank you. We had a great and busy quarter. With our announcements, we highlighted several initiatives. 1st, computing is moving to data center scale, where computing and networking go hand in hand. The acquisition of Mellanox gives us deep expertise and scale to innovate from end to end. 2nd, AI is the most powerful technology force of our time. Our Ampere generation offers several breakthroughs. It is the largest ever generational leap, 20x in training and inference throughput, the 1st unified acceleration platform for data analytics, machine learning, deep learning training and inference, and the 1st elastic accelerator that can be configured from scale-up applications like training to scale-out applications like inference. Ampere is fast, it's universal, and it's elastic. It's going to re-architect the modern data center.
3rd, we are opening large new markets with AI software application frameworks such as Clara for healthcare, Drive for autonomous vehicles, Isaac for robotics, Jarvis for conversational AI, Metropolis for edge IoT, Aerial for 5G, and Merlin for the very important recommender systems. And then finally, we have built up multiple engines of accelerated computing growth: RTX computer graphics, artificial intelligence, and data center scale computing from cloud to edge. I look forward to updating you on our progress next quarter. Thanks, everybody. This concludes today's conference call. You may now disconnect.