Investor Day 2018
Mar 27, 2018
I think we're just about ready to get started. Thank you, everybody. Welcome to NVIDIA's 2018 Investor Day. I'm Simona Jankowski with Investor Relations, and I'd like to welcome those of you in the room today here with us in San Jose, as well as all of those people who are watching this from the webcast. Before I go through the agenda, I do need to read our safe harbor statement, and you can take a look on the screen as well.
We will make forward-looking statements in today's program regarding our expectations and other future events, which may differ materially from NVIDIA's actual results. I'd like to refer you to our SEC filings for a description of our businesses and associated risks and other factors which could cause our results to differ materially from these statements. All our statements are made as of today, March 27, 2018, based on information currently available to us. Except as required by law, we assume no obligation to update any of these statements. Also, if we use any non-GAAP financial measures, you'll find a reconciliation to GAAP on our IR website.
Now today's agenda is going to include about an hour and a half of presentations from 5 speakers. We're going to kick off with our CEO, Jensen Huang, followed by Shankar Trivedi on data center, Rob Csongor on automotive, Jeff Fisher on gaming, and then finally, Colette Kress, our CFO. After all of that, we're going to have all of our executives come up on the stage for about an hour of Q&A. We'll aim to catch up on time and finish close to 3 p.m., and that's going to be followed by a 1-hour reception just outside to your left.
Just finishing up with a couple of logistics items. If you need anything in the course of today's presentation, please reach out to me or to Sean Simmons from the IR team, or you can e-mail either of us. At this point, I'd like to have you all silence your phones, please. And please join me in welcoming to the stage our CEO and NVIDIA's Founder, Jensen Huang.
Ladies and gentlemen, our great Simona.
Did I just finish talking?
My goodness. Okay, amazing times. There was no rehearsal. So what do I do? Do I grab something?
Rip it and rip it. There you go. No, that's not it. Okay. Okay.
So,
let's see.
Amazing times. Amazing times. I think that some great decisions that we made, some great decisions. There are 2 I would say that there are a couple of things. Thanks a lot, Fish.
Jeff Fisher. Thanks, Fish. Always watching my back. Okay. So, I would say there are 2 things that are just really, really big deals.
They're really big deals. And the reason why is because when a couple of dynamics happen to an industry or to a particular strategy, the momentum is incredibly powerful. And you see this in almost any company that you cover that ultimately reaches some kind of a tipping point, and all of a sudden the forces that drive the business grow incredibly. And in the conversation I had with you guys today, there were a couple of things that I said.
The first, the first was an observation we made 25 years ago that computer graphics was going to be the driving force of the future of computing. And the reason for that is because computer graphics is ultimately about virtual reality, and virtual reality is about simulation. You can't create virtual reality without simulating the laws of physics. If I dropped a ball, if I poured water, if something were to break in front of me, if something goes up in smoke, all of that is laws of physics. You can't recreate the laws of physics without simulation.
And so the future of computer graphics was going to become one of the pillars of the future of computing. We made that observation 26 years ago. And we made the observation that computer graphics was unique as a technology because it is not only technologically insatiable, it is also gigantic in scale. Now at the time, it was 0 in scale. But we made the observation that if we were successful in creating these virtual reality worlds, otherwise known as video games, everybody would be a video gamer.
That was my business plan. They said, how big is the market? And I said, everyone. That's where you lose them. I also had a hard time writing the first business plan because the numbers were either going to be big or 0.
And there's nothing in between. And in fact, there are many business plans like that. We now know that that impossible to write business plan, there are many examples of them. There are many examples of them. They're either going to be infinitely large or 0.
And so we came to the conclusion that we would go do this. Now we had fortunately, some great backers and they believed in what I just said. And that ultimately long term, computer graphics was going to be one of the fundamental pillars. First great decision, we never gave up on it. And we just kept leaning into it.
When people said, how big is the next GPU? We just made it twice as big. Nobody asked for it. We just did it. And we just kept making it more and more great.
And we kept making reality, virtual reality, better and better and better. And we did it completely self-propelled. No customer asked for it. No analyst said we should do it. No industry standard suggested it.
We just did it because we believed in it. And we were right. Today, there are hundreds of millions of gamers. I believe there will be billions of gamers. Everybody that's coming into society today is a gamer.
Of course, they're going to be. The question is what kind of games are they going to play? Of course, they're going to be gamers. We're going to have many virtual realities that we engage with, that we enjoy. Number 1: computer graphics is one of the driving forces of computing.
The second great observation we made, this was 15 years ago, was that this processor we created called the GPU, which at the time was an accelerator, does one thing. It ran DirectX or OpenGL, basically the same thing. It does one thing, and we would take the chance to make it more programmable. When you do more than one thing, you run the risk of doing that one thing poorly, especially when there were so many competitors. We found our balance, this incredible balance, this careful balance, and it's subtle, but it's incredibly vital in our company: we somehow find the balance between always accelerating everything we're known for, continuing to propel the company forward in the things and the applications that the world uses today, while opening the aperture, expanding the aperture of our company slowly, meticulously, but wisely, making our GPU more and more programmable over time, while the market supported us.
It took 15 years. This is our 10th GTC. Spencer and Madison both sent me texts this morning. Congratulations, dad. Do a good job.
They didn't say, I know you're going to do a good job. They said, do a good job. That's the kind of kids that Jensen would have. Yeah, do a good job. Okay.
And so we opened up the aperture of our company, of our programmability, step by step by step by step, until one day, all of a sudden, a critical tipping point. Whether it's biological science or computational chemistry or fluids or weather simulation or imaging, we just kept opening up the aperture until one day, every single field of science that we know was in it, including cryo-EM, cryogenic electron microscopy, which just won the Nobel Prize; RELION, this incredible cryo-EM reconstruction program, one after another. And then one day, deep learning came into the aperture. And then we focused on it, we accelerated it, and we accelerated it at the speed of light. And in the course of literally 5 years' time, it increased by 500x.
5 years, 500x. Moore's Law is 10x in 5 years. Really, really crazy stuff. And so those were the 2 great observations that we made, the fundamental forces. And as a result, we have this multibillion dollar industry that's fueling this one thing, the NVIDIA GPU.
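To put the two quoted growth rates side by side, here is a quick back-of-the-envelope comparison as compound annual growth factors (a sketch using only the 500x and 10x figures quoted above):

```python
# Convert the quoted 5-year speedups into per-year compound growth factors.
# 500x in 5 years (the deep learning stack) vs. ~10x in 5 years (Moore's Law).
gpu_5yr = 500
moore_5yr = 10

gpu_annual = gpu_5yr ** (1 / 5)      # ~3.47x per year
moore_annual = moore_5yr ** (1 / 5)  # ~1.58x per year

print(f"GPU deep-learning stack: ~{gpu_annual:.2f}x per year")
print(f"Moore's Law pace:        ~{moore_annual:.2f}x per year")
```

In other words, the claimed pace is more than twice Moore's Law per year, compounded over the 5-year span.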
It's not a generic GPU; you can't use some random GPU to do this. We call it a GPU because we just decided not to change the name. It's hardly a GPU anymore in the traditional sense. I should change the letter G to, no, let's keep G, but let's call it a great processing unit. Ladies and gentlemen, today I want you to remember PLASTER and I want you to remember GPU, and the G stands for great, the great processing unit.
It does these things the NVIDIA way and it runs CUDA; it runs all these things that we talked to you about. We made those 2 fundamental good decisions. If I could add a 3rd, what is so confusing to everybody is: why is it that other people have chips and they don't do this? The reason for that is because accelerated computing is a field of computing that's radically different from microprocessor computing. You don't compile code and run it on this thing.
Our processor has a JIT. It's compiled just in time. And this JIT runs this entire stack of software that we've been tuning and optimizing and changing, because you're going to compile it just in time anyway, so you can change it every time you run it. This revolutionary new computing approach we call accelerated computing. I illustrated the entire stack today in the keynote, and it's just daunting.
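As an illustration only (a loose Python analogy, not NVIDIA's actual toolchain), the idea is that the application ships one portable form of its kernel, and the "driver" specializes it at run time for whatever hardware generation is present, which is why the software stack can keep changing underneath old programs:

```python
# Loose analogy for just-in-time compilation: one portable kernel source
# (a stand-in for something like PTX) is compiled at run time for the
# target architecture, so the same program runs on old and new hardware.
PORTABLE_KERNEL_SRC = "lambda xs: [2 * x for x in xs]"

def jit_compile(portable_src, arch):
    """Specialize the portable kernel for a target architecture.

    A real driver would lower an intermediate representation to machine
    code for `arch`; here we simply compile Python source at run time.
    """
    kernel = eval(portable_src)  # "compilation" happens at run time
    return kernel

# The same portable source works regardless of the installed generation:
for arch in ("sm_35", "sm_70"):  # illustrative Kepler- and Volta-era names
    kernel = jit_compile(PORTABLE_KERNEL_SRC, arch)
    assert kernel([1, 2, 3]) == [2, 4, 6]
print("same portable kernel ran on both targets")
```

The design point being illustrated: because the final compile step happens at run time, the stack above it can be retuned continuously without breaking already-shipped programs.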
Thousands and thousands of engineers are working on it over all this time. The reason why we care about the CUDA architecture is 99% for us. Because if we have 10,000 software engineers working on CUDA, and they worked on it for 1 year, and next year's GPU was not CUDA compatible, I've lost 10,000 human-years of engineering. And if they worked on it for 10 years and the GPUs are not architecturally compatible, then I've lost thousands and thousands and tens of thousands of human-years, not to mention the investment of the rest of the industry on top of this. That's the reason why architectural compatibility and architecture are so vital to our company.
It is the reason why architecture is so vital to all serious computer companies. The body of work that grows on top of this architecture over time is just incredible. And now, it's been almost 15 years. 1 architecture, and we're on CUDA 2 to 9, all backwards compatible.
And so, we find ourselves in these extraordinary times, where our computing architecture has made it possible for us to extend the computing law called Moore's Law, and made it possible for the industry to have a computing approach just as all of these new demanding software applications are starting to show up, such as artificial intelligence, such as deep learning. We serve 4 markets today. Gaming, of course, as you know, is a gigantic market. ProViz is a gigantic market. It's just that half of it is in workstations for interactive use, but the vast majority of ProViz rendering is actually done in data centers.
And we've just never been able to address it, based on all the things I told you today. And finally, after 10 years of work, literally 10 years of work. Steven Parker came to our company for this fundamental reason; we started NVIDIA Research to solve this problem. It took 10 years. We introduced today NVIDIA RTX.
I'm so proud of you guys. It wasn't something like, gee, there's a doohickey you introduced and that was it. It wasn't like here's a software algorithm we were missing. The entire computing stack had to be fundamentally changed. And not to mention, if it weren't for deep learning, we wouldn't be able to introduce this new capability for another 5 years.
AI made it possible for us to bring it in 5 years early, really exciting. And now, for the large industries that render billions of images, we can make it possible to use our GPUs to render these beautiful images. And as we know, the more you buy, the more you save. There were 2 messages today. 1, PLASTER: inference is much more complicated than people think.
And 2, the more GPUs you buy, the more money you save. And then 3rd, our data center business; you're going to hear lots and lots about it. And autonomous vehicles. Autonomous vehicles, let me make a comment about that.
Obviously, we're really serious about autonomous vehicles. We've been talking about autonomous vehicles for some time. And last week's incident, a fatal incident, is just incredibly sad. But it just reminds us of the importance of this work. We know the reason why we started all this is so that we can make transportation easier and safer.
Safety is one of its primary goals, if not the ultimate goal. And if we could solve that problem, wouldn't it be amazing? It would be one of the greatest contributions we've ever made to the industry and to society. Our strategy is to engage this industry to help everybody create the future of autonomous vehicles. This isn't going to be something that one car company does.
This is something that every player in the transportation industry will have to get involved in. And that's the reason why we created an open system. I think there was a news story that came out today saying that, apparently, we've suspended testing. We suspended testing not because we know that we're doing something wrong. We are already using all of the best techniques we know to keep things safe.
I've got my own engineers in the car, and they're testing in our neighborhoods. This is safety at its highest level. We suspended for a simple reason: good engineering. Somebody just learned something. There's been an incident of great importance. We should stop and see what we can learn from them.
Until we learn something from them, let's pretend for a second there's something to learn. Did I just say something logical? Some important incident just happened, and not because it affects us directly, but because we might be able to learn from them indirectly, we should take a pause, see what we can learn. There are some really great engineers at Uber.
I know they're all over the thing. And the moment they learn something for us to all benefit from, we'll reassess and see what to do. Okay. It's not because there's anything about our self-driving cars and our test cars that somehow we need to suspend. Use an abundance of caution, practice great engineering, and we'll take it from there.
Now, our strategy is not just to build a chip for self-driving cars. As you know, self-driving cars are a completely new software stack, a completely new type of computer with new types of algorithms, new types of sensors, operating in conditions that no computer in this room has ever known. This is really complicated stuff. And so we have to start this from first principles. How do we create a software development system that allows us to continuously improve and enhance and be productive as an organization, at a large scale, for as long as we shall live?
And I talked about the NVIDIA perception infrastructure. That's a big deal. And we're opening that up and sharing with our partners. 2, all of the deep learning systems that are on it, we're opening it up, sharing with our partners. 3, if you're an engineer, you've got to simulate.
If you're an engineer, you've got to simulate. We already build some of the most complex computers in the world. In fact, as you know, we build the most complex computers in the world. That computer doesn't work by accident. It works because of the intense amount of simulation technology we deploy.
That level of simulation, that level of rigor, from formal verification, architectural validation to C-model simulation, gate-level simulation, timing simulation, all the way down to SPICE-level characterization of wires, hardware in the loop, emulation of our system with software. The number of engineering years that goes into developing these systems is unreal, and most of it is verification. Most of it is verification. We take that entire discipline, that entire sensibility, and we're going to bring it to this incredibly valuable endeavor and see if we can help the largest industry in the world transform itself. And then lastly, of course, build a computer: energy-efficient, automotive-grade, functionally safe to the highest level of safety, for the entire software stack and all the tools that go along with it.
And so our automotive opportunity is large. And then I talked about 2 industries that we are starting to go into. The next 2 of the world's largest industries, as you know, are healthcare and manufacturing. Our healthcare strategy starts with Clara, and our manufacturing strategy starts with Isaac Robotics. So I introduced 2 new platforms today that will be the future of our company.
New platforms that I'm going to be super excited to tell you about. Okay. So, number 1, as a company, our business model: you can see that the first thing that we do is we advance this computing approach. We pioneered this computing approach called GPU computing. We advance it at the speed of light, and we make it available to computer companies, OEMs and clouds.
Then we apply this capability ourselves to serve 4 large vertical industries, completely open-ended vertical industries: gaming, transportation, high performance computing and AI, and visualization. And that's it? Did I just do this one? Patrick, I thought I had 2 slides. I'll do whatever Simona wants.
You guys know this company is not a top-down company. Okay. I'm just a puppet. And so, Simona says okay, thank you. Ladies and gentlemen, Shankar Trivedi.
Good morning. Head of our Enterprise Business. Stay on target. Stay on target.
Okay. Thanks a lot, Jensen. I'm Shankar, and I just want to share with you that data center is a very large opportunity for our company. The opportunity is in 3 very large segments. I'll cover for you what our overall data center strategy is, and I'll go into each of these segments and explain the size of the opportunity, the leadership that we have and the momentum that we've generated for future growth, okay?
So let's start with 2018. 2018 was a great year. We grew our business amazingly. We reached nearly $2,000,000,000, up from $800,000,000 or so a year ago. And Volta, which is our new architecture, has been chosen by every major server OEM and ODM and by every major public cloud. We now have over 2,000 customers already working on the NVIDIA inference platform.
And in fact, that was before we did TensorRT 4, which is a huge leap forward, today. On the Top 500, which is probably the best metric of high performance computing, over 100 systems are now accelerated, and 86 of those are on the NVIDIA architecture. So, a very, very high market share. Accelerated computing has really taken off and become established in high performance computing, and we have a very high market share in that particular segment. And continuing with high performance computing, last year I shared with you that the top 10 HPC applications were accelerated.
Now the top 15 applications are accelerated. And our total accelerated applications grew from about 400 to over 550 in this last year. And of course, our developer momentum continues. You can see it in the posters, you can see it in the papers, in the scientific papers, not the newspapers. And we have more and more developers embracing the CUDA platform.
So it's a great year. But moving forward, we have a really large opportunity. Our opportunity in data center is of the order of $50,000,000,000. And I think of it in these 3 segments. First, high performance computing: scientific and technical computing, modeling and simulation, and now, today, AI is part of the high performance computing workload. Almost every HPC paper, every developer is thinking about adding AI into their workflow. Then we have hyperscale and consumer Internet.
And in the past, I shared with you what we're doing with the larger companies. Here's what's new: beyond the larger companies, there are a lot of smaller companies, which are still quite large in size and scale; they have very large data sets, and they are part of the same segment. And this is where AI is predominant. And so it's a very large opportunity for us.
And then lastly, I think of cloud computing and traditional industries. And obviously, these companies, the traditional industries, are adopting HPC and AI a little bit later. But you will see that this itself is a large opportunity for NVIDIA. So I'll share details with you in the next couple of slides. But first, let me explain to you what our data center strategy is. And it's very, very simple. We have a platform, and that platform is used by developers. Developers create applications.
Developers create value. And then those applications are used by users and engineers either in the public cloud or on premise, in data centers on OEM and ODM platforms. On the developer side, I've listed the top 15 HPC applications for you there. What's interesting is to observe how many of them are in computer-aided engineering: LS-DYNA, Abaqus from SIMULIA, ANSYS Fluent, ANSYS Mechanical. And what we just did today with the V100 32GB, doubling the memory, creates even more momentum for those application developers in the computer-aided engineering space.
And I think you'll start to see some other interesting things in genomics and those sorts of bio areas. In the middle, I've laid out for you the AI frameworks, with all the developers associated with them: Caffe, Chainer, the Cognitive Toolkit, TensorFlow obviously, and the like. And then observe on the right-hand side of the box that there are 2 new categories of developers emerging. 1 is these GPU-accelerated databases, like MapD and Kinetica and so on, and the other is traditional enterprise apps. So right now SAS, the leader in data science, this big analytics company used by enterprises, is doing a presentation on the way SAS is using GPUs to accelerate the traditional machine learning and traditional statistics applications used by enterprises. IBM is a very large enterprise software company. GE is actually a large enterprise software company. Schlumberger is a large enterprise software company serving the oil industry, and so on. So we've got lots of momentum.
SAP has been speaking quite extensively about how they're using GPUs. Last year, Hasso Plattner talked in the SAPPHIRE keynote about how SAP is using GPUs. So I see more and more momentum behind the developers. And the more developers you have, the more users you have, which means more deployment in the public cloud and more deployment on OEM servers. And all of that is because of the platform.
So let's just take a look at the platform in a little bit more detail. You can see the foundation of our platform is the CUDA GPU architecture. And then above that, we have a variety of data center servers: inferencing and edge servers based on our P4, very large, highly connected nodes based on the V100 and NVLink, and then systems such as DGX and the new DGX-2. And then above that, I've shown you a selection of the software. Software is really, really important for the platform.
And I loosely characterize it into high performance computing software, deep learning software and data analytics software, and then, on the right-hand side, data center management software. And so you can see how all of these types of software get stacked up as you deliver, for example, an industry vertical solution. And more and more, as Jensen showed, we are going into industry vertical solutions: DRIVE for autonomous vehicles, Clara for healthcare, Metropolis for AI and smart cities, and overall NVIDIA AI and vertical platforms built on these middlewares. And then in the data center, it will be managed using the NVIDIA Data Center Manager, orchestrated using Kubernetes on NVIDIA GPUs, cluster management, and so on and so forth.
One other thing on the data center management side: the new vGPU 6.0 is even more universal. We support a very wide variety of hypervisors, from KVM all the way through to VMware and Citrix. So essentially a very, very good data center middleware stack of software that allows us and our developer ecosystem to deliver very high quality applications to their customers, whether they're internal to their enterprises or outside customers. And then last but not least, to simplify things, we package this into containers, which sit on the NVIDIA GPU Cloud, the container registry. Last year, we had just 4 containers.
Now we have 30 containers. Last year, we only had deep learning containers. Now we have deep learning containers, inferencing containers, high performance computing containers, data analytics containers and visualization containers. And you can expect to see over the coming years more and more of these performance-optimized containers that people can deploy anywhere, on premise, in the cloud, on a PC, in a nice, easy way. Users don't need to worry about all the complexity of fitting these things together.
They just take it from the container and run it wherever they want. So, development ecosystem: fantastic momentum. I already shared with you high performance computing; we grew the number of applications. Notice that even in high performance computing, we grew by over 350,000 developers last year. So that's amazing momentum.
For hyperscale and consumer Internet, I've given you 2 indicators of future success. The number of cuDNN downloads: cuDNN is core to the training workload. And the TensorRT downloads, 30,000, which grew infinitely from last year, because last year we had no TensorRT. And now with TensorRT 4, that points to even stronger growth. And then on cloud computing and industries, you can see there are around about 2,800 startups in our Inception program.
These startups cover a wide range of industries, and I'll talk a little bit more about that in a moment. And then last but not least, I've shown you that 550 enterprises are today DGX customers. These 550 do not include any universities, any national laboratories, or any cloud computing companies like Facebook, which are using DGX. These are mainstream enterprises using DGX. And think about it, that's what we built DGX for.
It's an appliance for deep learning. For the first time, people can get hold of a compact, well-provisioned appliance and run their data science models on a computer that's easy to deploy and easy to get going. That shows you how much momentum we have in the industry segment. And then lastly, the value. The way I think about it: you speed up the application, which means you need fewer servers.
Those fewer servers cost you less money, less money to buy and less money to operate. That last OpEx savings line that you see, that sometimes is really, really important for our customers, because the power and cooling costs of some of these deployments far exceed, far exceed, the purchase price of the traditional form of computing. So what we've seen consistently, time and again, is 70%, 80%, 85% TCO savings by deploying accelerated computing, whether it's for an HPC workload, a deep learning training workload or an inferencing workload. And today, Jensen shared with you how, for a graphics workload in the data center like rendering, it's the same value proposition. The more you buy, the more you save.
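The arithmetic behind that claim can be sketched with illustrative numbers (all figures below are hypothetical, chosen only to show how a per-server speedup translates into capex plus opex savings; they are not NVIDIA's actual pricing):

```python
# Hypothetical TCO comparison: a workload that would need 1,000 CPU
# servers vs. an accelerated deployment with a 10x per-server speedup.
cpu_servers = 1000
cpu_capex_per_server = 5_000    # purchase price (illustrative)
cpu_opex_per_server = 10_000    # lifetime power + cooling (illustrative)

speedup = 10                    # one accelerated server does 10x the work
gpu_servers = cpu_servers // speedup
gpu_capex_per_server = 30_000   # pricier per server (illustrative)
gpu_opex_per_server = 15_000    # lifetime power + cooling (illustrative)

cpu_tco = cpu_servers * (cpu_capex_per_server + cpu_opex_per_server)
gpu_tco = gpu_servers * (gpu_capex_per_server + gpu_opex_per_server)
savings = 1 - gpu_tco / cpu_tco

print(f"CPU TCO: ${cpu_tco:,}")   # $15,000,000
print(f"GPU TCO: ${gpu_tco:,}")   # $4,500,000
print(f"Savings: {savings:.0%}")  # 70%
```

Note how the opex term dominates the CPU-side total in this sketch, which is the point made above about power and cooling exceeding purchase price.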
So let me now just spend a little bit of time on each of those 3 large segments. High performance computing is a very large segment. Supercomputers are everywhere. We believe every single supercomputer will be accelerated. I repeat, every single supercomputer will be accelerated.
The graph that you see in dark green is the Top 500, and every computer is going to go towards exascale. The world's first exascale computers will probably be deployed in 2021 or 2022. We're currently at hundreds of petaflops, and exascale is coming very soon. So you can see that if every computer in the Top 500 were to go to exascale and were to be accelerated, that dark green line constitutes about half of our TAM. The rest of the TAM is in traditional industries, for fluid dynamics simulations, for bio simulations, for auto simulations and many other kinds of simulations.
So I've taken the traditional HPC industries and then added the Top 500 simulation TAM, not deep learning, simulation TAM, and that's the opportunity for us here. And that itself is a $10,000,000,000 opportunity. The leadership is very, very clear. Our nation's supercomputers, Summit and Sierra, at Oak Ridge and Lawrence Livermore, are being built on Volta. That's what we created Volta for.
The biggest computer in Europe, at CSCS in Switzerland, is based on the Volta architecture. The 2 biggest computers in Japan, one for deep learning and AI and the other for scientific computing at Tokyo Tech, are built on the NVIDIA Volta architecture. So I think we've got a lot of momentum here. And one thing I want to share with you, actually 2 things. 1 is every government, every public sector organization is investing more in HPC and AI.
The European Union put out a $14,000,000,000 investment program. Japan has a huge investment program for industrial AI. The government of Taiwan has an AI program. The United States has an AI and exascale program. The Canadian government has an AI and HPC program.
India has a supercomputing mission. So the list goes on. All the nations of the world are investing more in HPC plus AI. And then you can see the momentum in terms of developers. The number of scientific papers that cite GPUs almost doubled.
It almost doubled in 1 year, from around 1,200 to around 2,300 scientific papers. So, huge momentum in high performance computing. Hyperscale and consumer Internet: these applications are also powered by AI. And so I've presented to you the 2 parts of the TAM, the deep learning training TAM and the deep learning inferencing TAM. And of the order of $10,000,000,000 to $12,000,000,000 is what we estimate will be the size of the opportunity for deep learning training.
And around $10,000,000,000, maybe more, will be in deep learning inferencing. We don't quite know; I'm not 100% sure exactly what the inferencing TAM is. I'm pretty confident it's of the order of at least $10,000,000,000. So let me explain to you what actually happened. Last year, I explained to you that the hyperscale companies all have AI assistants; they all have image search, video, speech recognition and translation, and they're all using neural networks. Now here's the interesting thing that happened.
During the course of the year, we discovered that many other companies are also using the same sort of architecture. So for example, SK Telecom has an AI-enabled speaker called NUGU. China Mobile has an AI-enabled assistant called MIGU. So these kinds of consumer and traditional telecom companies are also using AI assistants. And then when you think of the consumer Internet, think of companies like Pinterest, think of companies like Snap, think of companies like PayPal, like eBay, like JD, like Paytm, like Ola and Flipkart and so on.
And all of these companies have, depending on their business model, a recommendation engine, a credit scoring algorithm, fraud detection training and inferencing, and indeed ad insertion. There are 48 talks at this GTC on these subjects from leading consumer Internet companies. For example, JD is talking about advertising insertion. 2 years ago, JD talked about how they were using it for equal monthly installments for payments. So this is a large opportunity.
And then, just to share with you, as Jensen said, just for training, the neural networks are becoming much more computationally intensive and much more complex. The number of layers is increasing, the number of parameters is increasing, and the amount of computing needed to train deep learning networks is increasing. If you look at ImageNet, it's grown by 350 times. For neural machine translation, the growth from the OpenNMT algorithm to the new mixture-of-experts neural network has been 10 times in just a couple of years. So these areas demand more and more computation and highly complex neural networks.
And the graph on the right shows you the exponential growth of our shipments of exaflops to our top 10 customers. So you can see how the shipments have started to grow exponentially for deep learning training. In this, I took out all of the public cloud shipments; there's no inferencing in here. This is only deep learning training, and these are the shipments. And likewise on inferencing, I didn't get an opportunity to use the word PLASTER, so I'm still working on it: programmability, latency, energy and so on.
So sorry, Jensen, I will. But you can see the clear value proposition: between 36x and 190x faster on 5 different inferencing algorithms, the CNNs, the RNNs, the deep generative networks, the GANs, the RNN++s for language translation, all of which are highly complex neural networks. And notice, Kaldi takes us into speech. TensorRT is now part of TensorFlow: the new TensorRT 4 is integrated into TensorFlow, which is fantastic news for all the TensorFlow developers out there, much, much faster inferencing. And then the standard graph format, ONNX, which is everywhere in all of these frameworks, is accelerated using TensorRT 4.
So I think that based on this, our TensorRT inferencing business is poised for acceleration. We're at that tipping point where now it's a no-brainer to start to use the GPU for inferencing in hyperscale and consumer Internet applications. So finally, the 3rd segment, cloud and industries. And here's the interesting thing: why did I combine them? Because I've noticed that large enterprises more and more are deploying on the public cloud, or a version of the public cloud that's compartmentalized and dedicated to them.
For example, the U.S. Government cloud, or a cloud for GE's internal deployment. So public cloud goes hand in hand with industry. The other interesting thing is that all of the startups that are out there, those 2,800 Inception partners that I talked about, are operating in these traditional industries.
And invariably, they deploy their applications to the public cloud and they are big users of public cloud infrastructure. So in terms of estimating the TAM, what I've done is I've taken all of the startups that we are engaged with, which got about $5,000,000,000 of funding from various companies. Obviously, they use a lot of that funding for their people and other things, but a significant amount of it goes into computing, and then a significant piece of that becomes an NVIDIA data center opportunity. So that's on the left. And then in total, I worked out that the TAM is on the order of $20,000,000,000 to $21,000,000,000 right now.
And the 3 biggest areas are healthcare, transportation and manufacturing. We can see a lot of momentum. We're in all of the public clouds. One other interesting thing there is you can see the types of companies that are starting to use us. So maybe I'll spend 1 minute here.
Baker Hughes is an oil and gas company. I was at their keynote, at their customer conference with over 1,000 oil companies, and they were demonstrating using our DGX architecture to train a neural network for the drilling that goes on deep sea. This drill is about 10 feet tall, and the drill gets micro-adjusted using inferencing as it goes underneath the ground, extracting the oil that comes up to the top of the drilling platform. An amazing deep learning training and inferencing application using GPUs from Baker Hughes GE. Let me share a few highlights of where I think the opportunity is for us in healthcare, specifically in medical imaging.
So the 3 boxes that I put up at the top are kind of the method that we use. Things start in research in a clinical hospital or a university with papers. And then a lot of those people go into either companies or startups, and that's where they use their life's work to deliver to their customers. What was interesting was that the number of papers using deep learning at the leading 3 medical imaging conferences, SPIE, IPMI and MICCAI, has gone up by 60% in 1 year. That's an early indicator of what's happening in this market. Traditionally, medical imaging used traditional computer vision or just normal human beings to identify images. And in the work that we are doing with Massachusetts General Hospital, for example, there are 40 AI projects going on right now at the Center For Clinical Data Sciences in Boston. And those 40 AI projects are based on 10,000,000,000 images that MGH has gathered over the years.
So now you can start to figure out, wow, that is a big opportunity. And the opportunity is not just in traditional medical imaging. The traditional modalities are x-ray, CT, PET, ultrasound, MRI and so on. The new modalities include pathology and chemistry: people are looking at images of biopsies, of cells, and using deep learning to do better diagnosis of things like prostate cancer or breast cancer, with fewer false positives, for all of humanity. It's a big, big deal.
And we already have 300 startup partners in our Inception program in this space. Take GE, for example. There are roughly 3,000,000,000 instruments out there, and GE has about 500,000 of them. They are planning to AI-enable, GPU-enable all of these instruments. So think about the opportunity for NVIDIA in terms of our data center opportunity, using DGXs and Tesla for deep learning training and then inferencing based on the things coming out of those instruments.
And my estimate of this is approximately on the order of $1,000,000,000 to $2,000,000,000 just for traditional medical imaging. The opportunity for pathology is even bigger because there are more of these chemical images. And then there are further opportunities in genomics, in drug discovery and in pharmaceutical companies. Two interesting examples. One interesting thing that's happening is that the FDA is accelerating the approval of certain new types of instruments and technologies. The FDA has traditionally been known for taking a very long time to give approvals, but they've shortened the time. So Arterys recently got their FDA approval, and they are using deep learning and AI for lung and liver cancer detection. And the earlier we detect cancer, as Jensen said in the keynote, the better your chances are for successful remission. Paige is a small AI company. They have installed a dedicated supercomputer just to do pathology, just to do images around cells and chemistry.
And then finally, the AI data center opportunity for autonomous vehicles. Rob will share with you the full workload, but here's the simple version. We estimate that there are probably 10 DGXs required for every car that is out there just for training the neural networks, not even counting the simulation part around the 10,000,000,000 to 11,000,000,000 miles that need to be simulated. It's about 130 companies, each of them having a certain number of cars and each of them having a certain number of DGXs or DGX equivalents. It turns out that the TAM just for training is in excess of $1,000,000,000, and my best estimate at this point in time is around $2,000,000,000. So in conclusion, data center is a very large opportunity. I want to thank all of our customers who contributed to our success last year. We had revenues of nearly $2,000,000,000, and we are very well positioned for future growth: future growth in these three segments, in high performance computing, which is a very large opportunity, in hyperscale and consumer Internet, and in cloud and industries. We feel confident because we have a compelling platform-based data center strategy.
We know that there is a very large incremental AI inferencing opportunity ahead of us with TensorRT 4 and all of the other goodness that was announced today. We're seeing more and more enterprise adoption. And I think with NGC, the number of containers is now getting to critical mass with 30-odd containers, and with the 32-gigabyte Volta V100 now shipping in volume, the incremental memory will increase enterprise adoption, not only in traditional HPC, but also in AI and deep learning. And then we continue to grow our developer ecosystem into new areas, which you'll see on the show floor.
And all of this is based on an extensible and flexible CUDA GPU architecture, which is a great way for me to bring on Rob. Rob will talk to you about how we're using this end-to-end architecture for autonomous vehicles. Thank you.
Hi, everyone. I'm Rob Csongor, GM for Automotive. Let me jump into the material. This past year was a good year for automotive at NVIDIA.
As you guys know, we transitioned from a focus on infotainment. And while you'll still see announcements from customers like Daimler, which announced an AI cockpit, the future evolution of infotainment, you know that our focus is predominantly on self driving cars and autonomous vehicles. And the best way to see the incredible momentum and all of the work that's happening to build these autonomous vehicles is to look at the development work that's happening. And GTC is probably the best place for you to get insight into all the development work going on.
If you look at autonomous vehicles, they break down into a number of different problems. I'm going to spend some time sharing with you the kinds of problems that automakers face in developing these autonomous vehicles, the kinds of problems that mapping companies face, sensor companies, LiDAR, cameras, all of these things coming together to form an autonomous vehicle. And then, aside from the computer in the car, there's the system that's required to develop it, maintain it, test it and deploy it, and do it over fleets of cars. This amount of work is just extremely difficult, and that's one of the reasons why NVIDIA focuses on it. Complex computational problems are what we focus on, and we believe it has a lot of potential.
If you look at GTC today, just to give you some of the numbers, there are 25 carmakers here, 8 trucking companies, 7 mobility service companies, 16 Tier 1 companies, 8 mapping companies. Automotive is the single largest industry represented here at GTC. So all of the work, all the problems that are being addressed and all of the momentum are the things that consume our work, and it is for that that we do all of our investment. Drive partners: right now, as you know, our revenue has already transitioned, and part of what we are doing is focusing on the development phase of autonomous vehicles, which will be shipped in 2 or 3 years. So during this time, we are selling thousands of development systems into the ecosystem, to all the different kinds of companies that I just mentioned.
We're also doing development NRE, and then we're selling systems that will be used for test and validation. One of the things that's also important to look at is that we know that autonomous vehicles fundamentally require a different computing model. The ADAS market, which started with conventional computer vision, has given way to a recognition that you simply cannot manually write the code for a self driving car and then deploy it into the world. For that, you need a new computing model. So given our expertise in AI and leveraging it into automotive, you can see that the number of AI engagements for all sorts of things has just skyrocketed.
And you can actually track the jumps in the number of AI engagements specifically to when we release something important to those people, like TensorRT or like the discrete GPU in the Pegasus system. So these are all important indicators to watch as our momentum grows. The reason we're doing all of this work, of course, is because there is a massive transportation industry that is suffering from lots of issues and problems that can directly be solved by autonomous vehicles. For self driving cars: 3,000 people are killed every day by human drivers in the world. It's a staggering number when you think about it.
Aside from the lives, it's over $500,000,000,000 of lost productivity. Mobility services, ride sharing, utility services, all of these things offer the promise of everything from smaller parking lot footprints to lower cost of ownership for cars. I think you're familiar with all of the work that's going on there, but the implications for society and productivity are significant. And as you know, trucking is right now experiencing a severe crisis. There's a shortage of truck drivers driven by the Amazon age and the implementation of electronic logging devices on trucks, which require truckers to monitor the amount of time that they're on.
They have to be 11 hours on, 10 hours off. As soon as these logging devices were implemented, it resulted in a 10% drop in productivity, because these truckers had obviously been spending more time driving than they should have been. So a 10% drop of $750,000,000,000 of goods sold in the United States is a $75,000,000,000 drop. These are real problems, very significant, and they affect all of us in a way that's very deep. And the other side of all of these challenges is, of course, an enormous opportunity.
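The productivity arithmetic above is a one-line calculation; here is a quick sketch of it, using the figures quoted in the talk (the $750 billion goods figure and the 10% drop are the speaker's numbers, not independently sourced):

```python
# Back-of-envelope check of the trucking productivity claim.
# Both inputs are figures quoted in the talk, not independently verified.
goods_shipped_usd = 750e9    # annual goods moved by truck in the US
productivity_drop = 0.10     # drop attributed to ELD enforcement

lost_output_usd = goods_shipped_usd * productivity_drop
print(f"${lost_output_usd / 1e9:.0f}B lost")  # $75B lost
```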
The autonomous vehicle market and our TAM opportunity are based on a fundamental belief, which is that every vehicle will be autonomous. In the short term, our opportunity and our growth vision start with development first: development Drive systems, development NRE as I mentioned, and then the testing and validation systems that people need in order to test their vehicles before putting them out onto the road. Today's introduction of Drive Constellation is exactly that kind of response. From there, the way that people have engaged NVIDIA: as we talked about, NVIDIA is an open platform. In every one of NVIDIA's businesses, our strategy is to provide a unified, single software architecture and then create the means by which the industry can develop on top of our platform.
And those engagements take the form of everything from Level 2 cars up to Level 5 robo taxis. Level 5 robo taxis, starting with a trunk full of PCs, have the requirement for massive amounts of performance, and then need to find a way to get to an auto-grade solution, one that is functionally safe and consumes far less power. These are real problems with real implications, and they have real financial impact for our customers. So if you break down everything between all of the sensor makers and the development work going on, the engagements that we have right now range everywhere from Level 2 cars, which is taking a step up from ADAS. They want to take advantage of AI.
They want to have the processing capability of a Xavier, but do it in a way where they can improve the quality of the system that they have, all the way up through Level 3, Level 4 and then Level 5. Level 5 engagements, which I was asked about a lot at the reception last night, are actually starting much sooner. We believe that this market will actually grow and start much sooner, because you can geofence a Level 5 robo taxi. You can take robo taxis and deploy them onto campuses.
You can use them for utility services and you can keep them within a geofenced area. And as a result, they can come to market much sooner. Ultimately, a world where every vehicle is autonomous, we believe, is a $60,000,000,000 TAM opportunity. In roughly a 2035 timeframe, this would be roughly 100,000,000 self driving cars and roughly 10,000,000 robo taxis. And as a result, a very significant opportunity to change society, and also a significant growth opportunity for us.
As I mentioned, all of this starts with a very fundamental issue, which is that there are enormous problems to solve to get to a self driving autonomous car. Take the case of, for example, a Level 5 robo taxi, just to illustrate some of the problems that our customers have; in order for us to provide solutions for our customers, we have to place ourselves in their shoes and be empathetic to the problems that they're experiencing. It starts with the need for a tremendous amount of processing. Processing requirements for up to a Level 5 car would be hundreds of trillions of operations per second, tens of teraflops and thousands of DMIPS of CPU performance. The only way to do that today is a trunk full of PCs.
And in this situation, it's running at thousands of watts, tens of thousands of dollars of cost, and obviously a lot of complexity. And then in conjunction with that, if you look at the RAND report, they calculated the number of miles you would have to drive in order to reach a reliability level that was better than a human, which is the bar here. The conclusion was that you would have to drive hundreds of millions or even hundreds of billions of miles to get to that reliability, and it would take hundreds of years to do it; in other words, it's practically impossible just by driving. So clearly, all of these problems require a solution. And as a result, NVIDIA has focused its investments, its manpower and its will towards solving them.
There are 4 fundamental game-changing strategies that we've introduced to our customers and that they value in solving these problems. First, as I mentioned, AI is a game changer and is the computing model for an autonomous vehicle. I don't think we have that argument anymore. We don't have discussions with people on whether or not it's useful. At this point, the question is simply how do we use it, how do we make progress, what's the fastest way to solve the problems that we need to solve.
Secondly, we introduced the Xavier processor years ago, and at CES we made it public that Xavier is now available. This was extremely important news to customers who have been developing on our platform. I'm going to share with you on the next slide our strategy for providing that roadmap to the customer, so that they're able to begin their development. And then through software architecture compatibility, they know that NVIDIA will be coming with a solution that allows them to go to production. And as a result, NVIDIA is a time machine for them.
We can give them their production solution ahead of time, and they know that the software is compatible. As a result, they can have a single-architecture driving solution for Level 2 through Level 5. But the driving computer is not enough. You need to have an end-to-end system. Aside from the driving, you have to have a way to train your fleet of cars, to train and develop the algorithms that you're going to need for the car, and then to do the testing and validation of those solutions.
Finally, a fundamental strategy for NVIDIA is to have an open platform and, as a result, an open ecosystem. It's very easy to say, but it's an enormous amount of work. It's something that requires significant effort: well-defined SDKs, APIs, a fleet of people to assist, conferences like GTC. All of the work that we do to foster and build this ecosystem means that our customers can use as little or as much of the software from us as they want, or they can go to an ecosystem with lots of experts who develop solutions for them that they can use, and the architecture is consistent across all of it. So this is a powerful game-changing strategy that our customers take advantage of. We started our automotive roadmap with a chip called Parker.
This was our first automotive solution. What we did is we took this Parker and we put 2 of them plus 2 discrete GPUs into a box, and we called it Drive PX2. The Drive PX2, which I mentioned earlier, has now shipped in the thousands and allowed us to scale our performance from 1 to 20x, and customers knew that if they developed on this platform, their software would run without rewrite, without recompile, on the Xavier chip, which was the next step of what we delivered. From Drive PX2 to Xavier, we were able to go from 4 chips to 1. At this point, a Level 2 solution, Level 3 and Level 4 can run on a Xavier chip delivering 30 trillion operations per second.
The Xavier chip is now already out at customers, and Drive PX2 is now shifting to Drive Xavier. From Xavier, we then put 2 Xaviers and 2 discrete GPUs onto a product we call Pegasus, and the same architecture solution is able to run there. By the way, the 370 partners that we talk about do not represent the total number of engagements; they are separate, unique partners. At car companies, for example, we may have multiple engagements where people want to leverage the same architecture: they might have a Level 5 development effort where they use Drive PXs, a Level 2 effort, a research group, an AI cockpit group, and in those places we'll have multiple engagements across one company.
And we're doing the same thing with Orin: multiple Drive Pegasuses are now going to the Orin chip, and people know that this roadmap will continue. On top of this architecture, people can build; they can use their own software, and some customers will use NVIDIA software. Our software is called Drive AV. Just to give you an idea, we showed in the keynote today some of the performance we're getting.
But the level of improvement in perception that's accomplished using AI and the Xavier processor is just stunning to our customers, right? This picture illustrates just how many different objects we're able to detect. For cars, we're able to detect the front, the back, the intersection point. There are 4 deep neural networks running in this scene right now, detecting the signs, pedestrians and so on. And this level of perception performance is a fundamental ingredient in creating a car that's safe and that can accurately perceive the world around it.
And that's why this work is important. Aside from the performance requirements to build a self driving computer, today's carmakers are in many cases working on a car that has evolved over time to have multiple ECUs, each handling a single function. This is a very representative use case, where a carmaker might have 4 different boxes to do 4 functions: a surround view function, a parking computer, a driver monitoring computer, and then a driving computer. One of the value propositions of Xavier, and our strategy with Xavier, is that we can take an architecture like this and reduce it to a very simplified single computer. And the awareness within customers is that the cost savings is not so much on the BOM, but on the software cost.
The implications of having 4 separate boxes, each with its own OS, each with its own versioning software, each with its own maintenance requirements, which have to be supported forever: carmakers are starting to realize the incredible cost that multiple architectures impose on them. And as a result, the value proposition of a powerful centralized computer goes far beyond just the BOM cost. When you take the value proposition from an ADAS car, which is going to become an AI AV car, and take it to the other side, to a robo taxi, it's even more. In a robo taxi, the trunk is simply filled with computers, and the value proposition of the compute density that we created with Pegasus is to reduce it to something far smaller.
Now you guys know that we introduced Pegasus at GTC in Europe, but let me walk you through some of the value that's created and translate it for you in a way where you understand the business implication to the carmaker, why this performance is important and how it translates to things that matter to them on the business side. The use case for a robo taxi can range from 1 or 2 Drive Pegasuses up to multiple Drive Pegasuses. But in this case, this is just a representation: 2 PCs or 2 boxes in the trunk, which might have multiple CPUs and multiple GPUs, versus 2 Pegasuses. The Pegasuses are providing roughly 2x the TOPS performance, 2x the CPU, and roughly the same amount of flops.
In this case, we're assuming that these boxes are actually using V100s versus the Pegasus. But look at the power, the size and the cost. The power is one third, the size is about one tenth, and it's about one fifth of the cost. And by the way, aside from just those numbers, let me give you some idea of what one-third power actually means to a carmaker. The difference between 3 kilowatts and 1 kilowatt, obviously, is 2 kilowatts.
2 kilowatts, if you assume a car with a 100 kilowatt-hour battery, operated over 10 hours, is 20 kilowatt-hours, a 20% reduction in the range of the car. If you had to replace that range in the form of battery, it would cost you several thousand dollars at roughly $125 per kilowatt-hour. The result is something on the order of a couple of thousand dollars of implications just for power. So these are important. These are very important issues.
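As a sanity check on that arithmetic, here is a short sketch. All inputs are the figures quoted in the talk; the $125-per-kilowatt-hour battery cost in particular is the speaker's assumption, not a measured pack price:

```python
# Back-of-envelope battery-cost math from the talk.
power_saving_kw = 3 - 1      # trunk of PCs (3 kW) vs Pegasus (1 kW)
hours_per_day = 10           # robo taxi operating time per day
battery_kwh = 100            # assumed pack size
usd_per_kwh = 125            # assumed battery cost (speaker's figure)

energy_kwh = power_saving_kw * hours_per_day    # 20 kWh consumed per shift
range_hit = energy_kwh / battery_kwh            # 20% of the pack
battery_cost_usd = energy_kwh * usd_per_kwh     # $2,500 of replacement battery

print(energy_kwh, range_hit, battery_cost_usd)  # 20 0.2 2500
```

So the 2 kW of extra compute power translates into roughly a 20% range hit or on the order of a couple of thousand dollars of additional battery, matching the figure in the talk.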
And of course, the Drive Pegasus, as opposed to a trunk full of PCs, is auto grade. It has functional safety and all of the things that you require to actually get to market. So aside from the computer that's in the car, I mentioned the need for the end-to-end system, and Shankar showed this. The amount of data that you have to collect is enormous: 1 petabyte per car per year.
So imagine that you're collecting data from 5 cameras, and you have to decide which of this data is useful. You then have to label the data. We're running 10 deep neural networks in the car. So you guys know we're running deep neural networks, including DriveNets to detect objects, LaneNets to detect lanes, OpenRoadNets to detect where to go, PilotNets to tell us where to drive. Within the cockpit, we're doing face recognition.
You could be doing speech, head tracking, gaze detection, and this is just the beginning. So for all of those, you need to collect data. Based on the RAND report, you would need to drive on the order of 10,000,000,000 miles to ensure safe driving. And then resimulation, taking our sensor data and re-inputting it into the computer, is yet another form of testing and validation that needs to be done. Ultimately, the end result of all of this is an enormous amount of driving that you would need to do, and the testing and validation can cover corner cases, difficult and rare conditions, and then we can create scenarios and do regression tests.
Let me walk you through a little bit of the business side and show you some of the value and why this matters to customers. If you had to drive 10,000,000,000 miles, it would require roughly 50,000 drivers. If you took a driver driving for 8 hours a day, at an average of 35 miles per hour, 220 days a year, and multiplied it all out, it would take an enormous number of drivers at a $10,000,000,000 cost over 3 years to drive this number of miles. The same path using Drive Constellations would take roughly one fifth of the time, so 7 months. Obviously, Drive Constellations can run 24/7.
They can run 365 days a year. And as a result, we can test more conditions and more scenarios, be more efficient with the time we have, and obviously do it at a fraction of the cost. So now the overall task and challenge of driving all of these miles, together with simulation, becomes something much more manageable and approachable. I've talked quite a bit about the NVIDIA platform: 370 partners. I've already mentioned a lot of the different ones involved here, from cars, trucks, startups and research.
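The driver-fleet arithmetic above can be worked out as follows. This is a sketch; all the per-driver assumptions (8 hours a day, a 35 mph average, 220 working days a year, a 3-year program) are the ones quoted in the talk:

```python
# Back-of-envelope check of the "roughly 50,000 drivers" figure.
total_miles = 10e9                       # 10 billion miles from the RAND bar
miles_per_driver_year = 8 * 35 * 220     # 61,600 miles per driver per year
years = 3                                # program length quoted in the talk

driver_years = total_miles / miles_per_driver_year
drivers = driver_years / years           # ~54,000, i.e. "roughly 50,000"

# Simulation runs 24/7, 365 days a year, so it packs the same coverage
# into roughly one fifth of the calendar time: 36 months / 5 ~= 7 months.
sim_months = 36 / 5

print(round(drivers), sim_months)
```

The exact result is about 54,000 drivers, which the speaker rounds to "roughly 50,000," and one fifth of the 3-year timeline is about 7 months, consistent with the claim.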
But one of the things that is important to us, and I invite you to walk around GTC and take a look at all of the amazing work that's happening here: we believe that the way we track our progress and our milestones against the vision of fulfilling the autonomous vehicle business is, number 1, are the products that we're introducing important? Do they solve real problems for our customers? And based on that, what kind of partnerships, what kind of companies come to NVIDIA to solve those problems? Over the last year, we believe we've introduced critical product announcements on compute, solving the problems of processing, AI, and testing and validation.
And the kinds of announcements that we've made this year, I think you can see that they cover a wide breadth of areas, not just self driving cars, but China, AI cockpits, trucking. We announced Deutsche Post DHL, the world's largest mail and parcel service; they're actually implementing our products into multiple segments of their commercial trucking, utility services and last-mile services. And then beyond trucking and utility vehicle applications, there are also important geographies like China, where the requirement to have a platform to build on is especially important. So in summary, we believe we have an enormous opportunity and a lot of work to do, but this is an enormous industry that will be transformed by AV, and I think it translates to an enormous opportunity for NVIDIA. We have 4 game-changing strategies that we believe matter deeply to solving these problems.
AI, Xavier, end-to-end systems and our open platform. Testing and validation for autonomous vehicles is more important than ever. And in the next 2 to 3 years, if customers want to ship, they need a serious approach and a serious platform that can let them get there and know that they've tested it. With the open ecosystem and the platform that NVIDIA has developed, the openness of the platform allows a diversity of solutions and a diversity of computing models, and as a result, a much richer and safer solution for our customers to deploy on. All right. With that, I'm going to introduce Jeff Fisher to talk about gaming.
Thank you.
All right. I'm pretty certain I can't get us back on schedule since we're
already over, but I will do my best to give you guys an update on gaming, so we can get on to some Q and A. First of all, gaming has had a great year. Hopefully, all of you have taken note of that. The total gaming business is up 36% year on year. Earlier in the year, we refreshed our Pascal lineup, our 10 series, with the 1080 Ti, 1070 Ti and Titan Xp.
Also, last time we got together, I think I talked to you guys about notebooks and how they're the fastest growing segment of the gaming market, and that continues to be the case: in all regions, gaming notebooks continue to grow. But one of the most exciting things that I talked about at the time was this new trend of student gaming notebooks, backpack-ready notebooks, thin and light gaming notebooks. And I noted that you were probably going to hear some exciting news about it coming up. We hadn't yet announced at the time our Max-Q technology, which is a new technology that enables OEMs to put a high performance gaming GPU in a thin and light notebook, a 20 millimeter or less notebook. Max-Q design: we announced that at Computex last year.
And over the course of the second half of last year, over a dozen Max-Q notebook SKUs were launched. It's something we're super excited about. And over the course of this coming year, you're going to see many, many more Max-Q notebooks ship that deliver that promise of a backpack-ready gaming notebook. So we're super excited about the notebook business this year as well. The Nintendo Switch launched last year, was an instant success, and rapidly became the fastest selling console in the U.
S. in history. And as you know, the Nintendo Switch is powered by our gaming SoC. GeForce Experience: we surpassed 100,000,000 clients. I'll talk about GeForce Experience in a little bit, but it's an essential client for any gamer using a gaming PC powered by the GeForce platform.
Also, GFN launched. We talked about our cloud gaming service, GeForce NOW, at CES last year. Around the middle of the year, we launched GeForce NOW in beta. We've got thousands of customers that are playing PC level games on Macs and portable notebooks around the world. We're learning a lot from them about how to deliver an awesome service.
They love the service. This is a model that's going to take us a long time to grow and deploy, but we're excited about the prospects of cloud gaming over time. And finally, not to be ignored, crypto came to town this year; it entered the GPU space. We know gamers have done some mining.
We know miners have bought some GPUs, but it also had an influence on the total gaming ecosystem this year. So considering we had such a great year, it's of course no surprise that gaming is also huge and gaming is growing. I'd ask you guys as an audience whether or not your kids game. I think, generally speaking, it's true that every human born today is a gamer. Gaming is social.
Gaming is competitive. Gaming has displaced the traditional media and entertainment types around the world. Think back 10 years to 2007: what were gamers doing? Well, let's see. Gaming was primarily a 1- or 2-player affair; it wasn't that social.
There was no Twitch at the time. Mobile gaming had maybe just gotten started. Esports was certainly still pretty much a Korean thing; it hadn't really moved outside the Far East. And since that time, gaming has exploded. All of those aspects of gaming have come about: social, massively multiplayer, esports.
And as a result, the revenue generated by gaming on the software side has about tripled, to over $100,000,000,000 today. The number of gamers in the world has also about tripled. By some estimates, well over 2,000,000,000 gamers, about 25% of the world's population, are gaming. And when gamers today are surveyed and interviewed, 80% of them respond that they will be gamers for life. It's now a part of the mainstream.
It's a part of their culture. It's a part of what they do on a daily or weekly basis. Now turn the clock back to 2003. In 2003, let's see, the Xbox had just come out. PC gamers were playing on GeForce FX, on the GeForce FX 5600, and gamers were primarily younger.
About 75% of gamers were under 35 years old. Well, gamers for life means that these gamers are going to continue to game as they get older, and that in fact is the case. Today, about 50% of gamers are over 35 and about 50% are under 35. This increases the TAM of overall gamers as new kids enter the ecosystem and, I'll say, older gamers, myself included, continue to game as they get older. And this is a dynamic that I believe we're going to continue to see over time, as everybody who is born becomes a gamer and stays a gamer.
But PC is where the action is. I continue to believe that, and there's definitely evidence of it. PC is an open platform. It's not a closed platform where you're told what performance level to buy or what games you can own. It is an open platform.
And it allows gamers to pick what class of hardware they want. They can scale from the low end to the very high end. It allows game developers to innovate. They can target games that scale all the way up with the most beautiful realism or play on lower-end systems. But more importantly, it allows new business models for developers.
It's an ecosystem, a kitchen where developers can bring new ideas out to gamers and see what sticks. In fact, the ecosystem itself is what has enabled games to come almost out of nowhere and turn into phenomenal successes overnight. I think everybody in the room has heard of at least 2 recent games. PlayerUnknown's Battlegrounds, which has become the fastest-selling game in history on Steam, introduced a new genre, a new way of playing called battle royale. Battle royale is a Hunger Games type of last-man-standing mode: 100 people go into the game, and after 20 minutes, there's one left.
Totally fun. And then another game, Fortnite from Epic, also picked up the battle royale genre and instantly became the most viewed game in history. And it is now pacing PUBG in number of gamers. On the most viewed game in history, by the way: there's a famous streamer called Ninja, I know some of you have heard of Ninja, who invited a friend of his, Drake, the superstar singer, to play Fortnite on Twitch online. And in just that one session, 700,000 people tuned in to watch them playing Fortnite.
As a result, new people learn about the game, they invite their friends, and these games really take off. The Game Developers Conference was last week. I don't know if you pay much attention to it, but about 25,000 game developers converge on San Francisco every year to learn about the latest techniques and to share ideas. And they're surveyed every year. One of the questions asked is, what platforms are you focused on?
What platforms are you targeting with your next games? And again this year, at an even higher percentage than last year, game developers said their next project is targeted at PC. It's a big validation of the strength of the PC, the openness of the PC and the excitement it brings, not just to gamers but to game developers. And as a result, the PC has tons of games coming into the ecosystem that are offered to gamers. Just last year, there were about 8,000 games released on Steam.
There are over 20,000 games available for the hundreds of millions of PC gamers worldwide to pick and choose from. Compare that to PS4, the best-selling console: there are about 1,500 games total on that console, not last year, 1,500 games total. So the PC is vibrant, it's alive, it's open, and it will continue to grow. GeForce, our gaming platform, is at the center of the PC. And with 100,000,000 GeForce PCs in the world, you could say it's the world's largest gaming platform.
Nobody invests, nobody innovates, no one has as much energy focused on PC gaming as NVIDIA. We have an entire team working with game developers, inventing new ways to render, creating new technologies. We've got an entire team in the end market holding events, building energy, educating PC gamers. We are really at the center of this market. It's our core business, and we're incredibly focused on keeping this a vibrant gaming platform.
Last year, our gaming GPU business grew 21% year on year. As with prior years, we continue to see growth in our units and growth in our ASP, both driving this business. In terms of sell-out in the end markets, the developed markets are still very important for us, but we continue to see a big opportunity in the emerging markets. They're a smaller percentage of our total revenue, but a very high percentage of the total population.
And with broadband penetration continuing to grow in the emerging markets, and with the social norms, which is really important as new kids take up gaming as a part of their everyday life, the social norms are also becoming much more positive toward gaming, we see a big opportunity for continued growth in the emerging markets. It was about 2 years ago, actually a little less, that we launched Pascal, our 10 Series, and started ramping it through the end of that year. About 30% of our installed base is now on Pascal. There are still a lot of GPUs out there to upgrade as we move forward.
I also want to talk about our positioning in gaming. I don't think there's really much dispute about our leadership, but look at our share among gamers. Look at Steam, for instance; this is a public stat. GeForce GPUs have an 85% share of the installed base of Steam gamers, and I believe that will continue to grow. That includes integrated graphics; there are Steam gamers still on integrated. If I look at discrete GPUs only, we have about a 90% share. In the VR space, VR is still a nascent technology. I happen to believe that VR and AR are going to become incredibly important to gaming and gamers over time. It will be a new way that people interact in games.
I think it's taking longer, but I believe it is definitely going to get there. If any of you haven't seen it, or I guess it's coming out, see the movie Ready Player One, which is coming out next week.
You'll see a really sort of dystopian view of what VR can do to the world, but I think it paints a picture of what is possible. Oculus just published their installed base of HMDs. The GPUs powering Oculus HMDs are pretty much all NVIDIA GeForce GPUs; about 91% of the GPUs powering Oculus HMDs are GeForce. So we're at the center of this market, and we're continuing to invest and see a huge, sustained opportunity in the future.
So why is that? Well, I believe the fundamentals of PC gaming continue to remain very strong. And an important one is esports. I won't argue, and I assume you won't argue anymore, that esports is in fact a real sport. If you don't believe that, you can ask the 100,000,000 or so new members of the audience who will join in watching esports online from 2016 through the end of this year. 100,000,000 more; it's estimated that we're getting close to 400,000,000 people watching esports online by the end of the year.
A real sport: there's discussion about esports becoming a sport within the Asian Games starting in 2022. You just can't stop the momentum of this new generation competing in playing video games and the impact it's going to have on society. Last year was a huge year for esports. It started early in the year in Katowice, Poland, at the Intel Extreme Masters tournament, where a record 175,000 people attended this esports event. Unbelievable.
About 40,000,000 people watched it online. Later in the year, the legendary Dota 2 International was held in Seattle with a record prize pool, a $25,000,000 prize pool for 1 esports tournament, which helped drive total prize money for esports tournaments worldwide in excess of $100,000,000 over the course of last year. And it's no surprise that leagues are now formalizing. I'm sure you've all read that Blizzard is now creating a formal league for Overwatch, and it's actually started up. They fielded 12 teams that are regional, like traditional sports leagues, in different parts of the country and a few Asian countries.
These 12 teams are financed, sponsored and owned by experienced sports professionals. Owners of major league teams are now funding these teams as part of the Overwatch League. The more esports becomes formalized, the more gamers get interested and the more people come onto the platform. And finally, League of Legends, the granddaddy of esports, held its tournament finals in the famous Bird's Nest arena in Beijing, China toward the end of the year. That drew 40,000 people into the arena itself, watching the final event live, and about 60,000,000 people streaming the League of Legends finals online.
Newzoo has an estimate, surveying gamers through a client of theirs on gaming PCs, that 86% of esports players, players who are playing esports games, are powered by GeForce GTX. Of the gamers playing with discrete GPUs, 86% are on GeForce GPUs. Esports is super important, a foundation driving PC gaming. Over 70% of gamers who are surveyed have also said that they play games to be social. Games have become a social phenomenon.
And people learn about new games by word of mouth. Gamers tune into Twitch to see their friends play games and to see celebrities play games. Twitch now has more concurrent viewers than CNN, people tuning in to watch video games. There were about 700,000 years of gaming streaming video watched on Twitch last year. 700,000 years watched, and that's just Twitch. Twitch doesn't make its way into China.
Imagine all the streamers watching content in China and other parts of the world. And the number one content on YouTube is gaming: people watching games, live gaming streams, how-to-play videos, unboxings. Gamers want to connect. If you want evidence of that, you just have to look at this brand-new app that's been out for about a year and a half called Discord. I don't know if you've heard of Discord, but Discord is a voice and text sharing app that is cross-platform and cross-game.
Regardless of what game you're playing, you're not tied to just talking to other people playing that game. You can talk to all your friends regardless of what they're playing, invite them in, invite them to see your latest achievement. And Discord, as a result, has grown from 0 to almost 100,000,000 users; it's expected to hit 100,000,000 users sometime this year. And that's just because gamers want to be social; they want to connect, they want to share, they want to tell their friends about what they're playing and bring new people into their games. Now go back and look at Fortnite and PUBG among weekly GeForce gamers.
This is our ecosystem, weekly GeForce gamers. I mentioned PUBG is the most played and Fortnite the most shared. And it's obvious now that when games have these characteristics, they go viral. PUBG, an instant success. Fortnite Battle Royale launched, an instant success. 100 people playing at once, telling their friends, and they're all coming onto the platform.
Goldman Sachs recently had a report that said the rise of PUBG and Fortnite is growing the total gaming community. That is to say, the people playing PUBG and Fortnite aren't just cannibalizing other games; they're bringing their friends onto the platform to play as well. And we have certainly seen that. Look at the top chart; these are total weekly GeForce gamers. When PUBG launched and then Fortnite launched, the total number of gamers playing on GeForce GPUs went up by about 20%, which is really awesome.
The viral nature, the social nature, the community nature of gaming is bringing more gamers into the PC. As a result, we've been evolving GeForce Experience. I mentioned 100,000,000 clients. GeForce Experience's main mission in life is to keep a gaming PC updated and optimized to play the latest games. About every 2 to 4 weeks, we have a new driver coming out that prepares your PC for the newest game.
GeForce Experience has optimal settings regardless of your configuration. It will configure a new game to run on your PC in the best possible way. That's the core mission of GeForce Experience. But we've also been adding social features to GeForce Experience for the creatives and the people who want to share. NVIDIA Ansel, which I know we've talked to you about before, has now been integrated into GeForce Experience.
It's an in-game camera that lets you roam around anywhere inside these beautiful games and snap pictures. Take a picture and you can upload it. In fact, there's such demand for uploading pictures that we created a site, announced at the Game Developers Conference, called Shot with GeForce: shotwithgeforce.com. You can snap a picture inside your game and instantly upload it.
It's a curated gallery of in-game photographs. If you have a chance to go there, you'll see some amazing, unbelievably beautiful captures that our gamers have shot with NVIDIA Ansel and uploaded to Shot with GeForce. NVIDIA Highlights: when a gamer is playing a game, boom, boom, boom, you have a great kill, maybe a great death, maybe you flipped off a building and did something amazing, and you kind of wish you had captured that moment. NVIDIA Highlights automatically records those special moments in your game so you can come back later and edit them, clip them, upload them to Facebook or Twitter or your favorite site, or just email them to a friend. It's a way to capture those highlights and quickly share them with each other.
Developers love this feature because it makes games more viral. The more people see their friends doing things, the more they're going to want to play the games. All told, last year there were over 1,000,000,000 in-game captures using GeForce Experience, and we see that continuing to grow. This year alone, there have been about 500,000,000 in-game captures to date using GeForce Experience. So we're super excited: more engagement with our GeForce Experience gaming client gives us more of a direct relationship with our gamers. And we're super excited about social in general and how it's driving the whole PC gaming ecosystem.
Okay. Social and esports are foundational, fundamental drivers bringing new gamers into the PC ecosystem and keeping it vibrant and alive. But gamers, when they come, or when they're here, are also super excited about developers' commitment to keep pushing cinematics and realism in games. And this is also super important to our business, not just to keep the ecosystem alive so gamers continue to be interested in new games because they're more beautiful and tell better stories, but also because it helps us innovate on the GPU side as well.
Last week, we announced NVIDIA RTX. You've heard a bit about it today. It's been 10 years in the making. We've talked about it as the Holy Grail. This is really where game developers, where anybody interested in graphics, wants to go.
The challenge has been delivering it in real time. Rendering offline for movies is one thing, but running at 60 frames per second at 1080p or 4K requires a great deal of compute. Well, we thought the time was now right to start to deliver real-time ray tracing. A couple of years ago, we started collaborating with Microsoft to develop their new standard API, DirectX Raytracing. They call it DXR.
And what's important about that is that in order for ray tracing to be made available to all game developers, any game developer who creates a game that uses ray tracing and wants it to run on any PC needs a standard API. So Microsoft was interested in doing DXR. We were well ahead in our research and our development of ray tracing code. So we worked with them to develop DXR. Now what is NVIDIA RTX?
NVIDIA RTX is our technology that accelerates ray tracing. It's a combined software and hardware stack architecture that currently runs on Volta. And NVIDIA RTX lives underneath DXR. Any game that is written for DXR, running on a GeForce platform on NVIDIA Volta, will instantly be accelerated using RTX. So they're not separate; they're complementary.
And DXR is written in a way that takes advantage of the acceleration in NVIDIA RTX. In addition, we're investing in our GameWorks libraries. GameWorks is our library of accelerated code that helps developers deliver ray tracing quicker. We can create ray-traced shadow libraries that developers can integrate into their code. So we invest to make it easier for them to bring ray tracing to games.
It's not just about ray tracing; as we've discussed, games get more demanding generation after generation in general. Developers want to deliver more realism. As a result, the GPU load required to play games at high settings at 60 frames per second or 90 frames per second goes up every year. Every year, the games that are released require a bigger GPU to deliver a smooth, buttery experience. In fact, the games released last year are starting to exceed the capabilities of consoles.
And they have to dumb down the games, I'll use that word with all due respect, visually: either lower resolution, lower frame rate or much less graphics realism, in order to deliver a smooth experience on console. But of course, they can benefit from the scalability of the PC, and PC gamers can get the full experience. As you add ray tracing to games, as you play games at 4K, and when VR really has an impact on gaming, the load required for a smooth experience is going to keep going up. And I believe this has been accelerated by the introduction of ray tracing last week at the Game Developers Conference. I believe you'll start to see real-time ray tracing coming in games before the end of the year, and the pipeline will build into next year and beyond.
And gamers have taken notice. Gamers want the best experience, and they've been buying up, and we've seen it again this year with an increase in ASP. The ASPs I'm showing here are our estimate of MSRP. I won't say today's price on Newegg, but an MSRP based on our GPU price. This is not our GPU ASP; it's a card price, what a gamer would pay on average for a GPU over the course of time.
And when gamers buy our GPUs, GeForce GTX, they think of it as a game console. They buy a GPU and they're upgrading their PC. So think about it: game consoles range from maybe $249 all the way up to $500 today. If you're a gamer, you're buying a GeForce GTX product that has higher performance and a much lower price than a game console, and yet you get all the benefits of an open ecosystem. So we still see a lot of headroom and opportunity for gamers to continue buying up over the course of time.
Okay. So to wrap up quickly, this is our family picture, the GeForce gaming platform family picture. Gaming is big and growing; I firmly believe it. All the evidence is out there: an entire generation.
GeForce is the number one gaming platform. We're at the center of PC gaming. We invest with a great deal of passion, our people, our money, our best ideas, into growing the PC gaming platform and keeping it vibrant. Esports and social are fueling new gamers into PC gaming. Cinematics are going to keep increasing.
Developers love to deliver better experiences. They love NVIDIA RTX and ray tracing; they're super excited about it. That needs bigger GPUs. And we see a really exciting future in gaming and in PC gaming. So I thank you all for your time today.
Good afternoon. I'm Colette Kress, CFO, and I'm going to run through our financials quickly. I want to leave you with a couple of key things to remember about the company and about some of the things you've seen over the last couple of years, which set the stage for going forward. We're then going to open up for Q and A. So we'll make this short and sweet to make sure we hear the questions you have from what you've seen today.
So let me first start off with our records and how we ended fiscal year 2018. A phenomenal year for us any way we try to look at it. In terms of the overall size of our revenue: revenue approaching nearly $10,000,000,000 and growing 40% in 1 year alone. Our gross margin also reached a record level, exceeding 60% for the full year. It grew 100 basis points over last year, all fueled by our value-added platforms, which we'll discuss in a little bit.
Whatever view of profitability you want to look at, whether that is operating income, EPS or net income, all grew tremendously faster than our overall revenue, growing more than 60% as you can see here, and all reached records. We're pretty proud of that success. I've come up here many times and talked about the transformation the company has gone through, from the PC industry to platforms. You can now see that coming through in the financials and where we are. Let's dive a little deeper into our individual market platforms.
Our growth was fueled by each and every one of our market platforms. Each of them grew double digits in total revenue last year, looking at both a 3-year historical CAGR as well as the full year. There is growth. You've seen each of the leaders come up here and talk about the size of the TAMs and the opportunities still in front of us. Gaming: each year we come in here and you ask, can it still grow?
Is it still a growth business? Will you still be able to do it? Time and time again, we still do. And the reason why is that it's an entertainment sport. It's moved past the early days of gaming.
And you've seen the discussion about the size of the gamer base and the social platform it has fueled: 36% growth overall in gaming last year alone. Pro visualization has for many years been a business moving along quite nicely, with enterprises looking at visualization, but we're starting to see even more of a transformation of this business as people rethink how they do rendering going forward. This business has now grown at a 6% CAGR over 3 years, but as you can see, it grew more than 12% in this last year, double digits, which has been great growth, along with some of the products we have coming to market this year.
Our data center business: you can look at its 85% growth, excuse me, 85% growth over the last 3 years, fueled by more than 130% growth in the last year alone, reaching almost $2,000,000,000 in revenue. We've talked about the many different businesses within data center. It is not just one thing; think about accelerated computing from end to end and how it matters to high-performance computing, what we are seeing in hyperscale, hyperscale for AI, but also what we've seen in cloud, in cloud instances and their availability. And then lastly, automotive. Over the last 3 years, it's grown 45%.
Keep in mind, this is all before we hit the new wave of autonomous driving going forward. Most of this, as you know, is still the infotainment modules we are selling, and we still had very good growth this last year, growing 15%. Another way to look at our overall growth is to think about the diversification of our customers and what we've seen. So I pulled a couple of key highlights that you may have seen in some of the presentations today. One: our overall gamers.
In 1 year alone, we've reached and surpassed more than 100,000,000 users coming on board our GFE, GeForce Experience. We're touching more and more gamers every single year. Additionally, in pro visualization, one of the highlights we talked about at the end of Q4 was the fueling of new use cases, using the workstation for some of the emerging technologies we're seeing, whether that be virtual reality, rendering or AI. And we're seeing that now in much of the pro visualization revenue we are selling today. Additionally, data center.
There's been a lot of discussion about our hyperscale growth. Here we have the hyperscale 7 and what we've seen in their growth in a year alone, growing faster than the overall data center revenue as a whole. We have tremendous growth opportunity through the availability of the cloud to reach more of the startups, more of the researchers, as well as the future enterprises. But this is a good indication of the diversification we're already seeing. And then lastly, our automotive partners.
In 1 year alone, we've moved from building a great understanding of our platform's capabilities to having more than 320 different partners across the world thinking about how to implement autonomous driving, working with many different types, from the startups to the Tier 1s to the OEMs. So let's move to gross margin. This last year was a very important year; probably something we've been talking about for more than 4 to 5 years was the end of the IP licensing agreement with Intel. For many, many years, people were always concerned about what would happen to our gross margin, what would happen to our profitability. We grew through it.
So even in reaching a record level of 60%, keep in mind that's actually about 200 basis points of true product-driven growth in overall gross profit, because we lost a little more than 100 basis points with the absence of the Intel royalty agreement. As you can see, the profit from our value-added platforms is still a material percentage of the growth we've seen. We always get a lot of questions about what we're seeing by individual platform and what is driving our overall gross profit. I want to remind everybody of what we have put together. Our value-added platforms are the key driver of the growth we see in gross margin.
Sure, there are other things that we do tremendously well: the manufacturing, the yields, all the costs of putting all of that into the market. But the number one most important thing that drives us is our value-added platforms becoming a larger mix of our overall portfolio, as well as the mix within those platforms. It matters in terms of their volume and it matters in terms of their growth. As you can see here, gaming GPUs are probably close to the company average. And the reason why is that they're probably more than 50% of our overall business, and they therefore drive the underlying base of our overall gross margin.
On top of that, with all the additional value-added platforms that we have, including the full stacks that we sell as combined platforms, we're able to realize higher gross margins. As you can see, I put automotive off to the left. Those are infotainment modules, the modules we've been selling for several generations. And as you know, as we move toward autonomous driving, we'll see a shift in the value we are providing inside those platforms, a full stack that we'll be able to monetize.
So we have yet another upcoming value-added platform in automotive fueling our overall gross margins. OEM and embedded is still important for us, to be part of many of the embedded devices around the world, but those tend to be a little lower in gross margin than our company average. Operating expenses: over this period of time, we have continued to grow our operating expenses, growing more than 19% year on year this last year, all extremely important investments given the large market opportunities we have in front of us.
Focused primarily on the growth drivers of gaming, AI and automotive, you'll see time and time again our investments to ensure that we can be ahead of the overall market and realize the TAMs in front of us. Yes, that has reduced our overall operating expenses as a percentage of revenue, but we'll continue to focus on the right, efficient investments going forward. This is nothing new to us. This is something we have been investing in for many, many years: cumulatively from the start, more than $15,000,000,000 in R and D alone, which is the main area of our investment, a 22% CAGR over that period of time.
So we're not under-investing by any means. This is a place where we will continue to invest going forward. But here's what's important to understand. Gross margin, I feel, is kind of a semiconductor metric. What we really think about for the success of our business is how our leverage model works below the gross margin line.
We've talked a lot about our leverage model. We build one product. We build a GPU. We've talked a lot about the question of whether somebody else could really come in and do exactly what we're doing. It's actually quite difficult.
We're able to address 4 very large markets with one unified engineering workforce. If we were a startup, we would probably have to invest a couple of billion dollars in each one of those markets. But we're able to combine that into one model. We have more than 8,000 engineers at the company focused on one thing: the GPU. 5 years ago, we actually still had more hardware engineers than software engineers.
In the last couple of years that has tipped, to where we have more software engineers than hardware engineers; we're probably up to about 50% to 53% software. Looking at the right side, when we talk about our overall platform: our engineers work on the underlying GPU architecture, and on top of that they build the key components of our software. As you know, CUDA is available on every single one of our GPUs, along with much of the middleware that we have.
Those two pieces make up the majority, probably close to 90%, of our overall engineering costs. Then the verticalization happens as we address gaming, AI and auto. So I think it's important to help people understand that our success has been fueled by this model, and by its continuation going forward. And you can see this in probably one of the very key metrics, our operating margin and its expansion over this time. We have nearly doubled our overall profitability in 3 years, now approaching close to 37% operating margin.
So all of these things continue to grow, but this is where you see the basis of our investment: that engineering fuels the revenue growth that we have and the profitability that you see and desire. We get a lot of questions about our cash and our overall cash flow. Our cash flow has also grown significantly, doubled, very similar to what we have seen in our overall profitability. And we're using it carefully, making sure we understand the best use of that cash and where we stand on net cash. As you saw at the end of the fiscal year, we continue to revisit and rebalance our debt offerings and our long-term bonds, and we determined that ownership of our facilities was a better use of that cash than refinancing.
Therefore, you see our overall net cash not growing as fast as operating cash. You can also see that our total cash balance as a percentage of our market cap has continued to go down, and we're probably at some of the lowest levels among companies of our market-cap size. Now, looking at our capital return program: it is still a very important part of our overall business and of what we provide to our shareholders. Since 2013 we have returned more than $5 billion to shareholders, or approximately 70% of our free cash flow. We'll continue to balance that year to year, but it remains an ongoing part of our program, through both stock repurchases and dividends.
But now let's talk about what we see as the key uses of our cash and how we prioritize them: investment back into the business, in both OpEx and capital expenditures. When you think about our OpEx, clearly that is engineering; clearly that is the sales reach into the markets we need long term; and it is the infrastructure that each and every one of our employees needs to conduct their work, which brings us to our capital expenditures. Our capital expenditures fuel the infrastructure our engineers need. Our engineers don't work on the laptops we in finance have.
They work on supercomputers, very large supercomputers, to make sure what we are building is available and well tested for all of the different markets we're going after. Additionally, we need facilities worldwide to support our headcount growth, and we need ongoing software and software licensing. Those are the key components of our capital expenditures.
We're not a large M&A company. Historically we have done a relatively small amount; more than 90% of the growth you have seen is organic. Going forward there might be opportunities for M&A, but history has served us quite well with key tuck-in acquisitions, and that will probably continue.
But always remember, capital return is still very important to this room, as well as to all of our other investors who may not be here, and we'll continue to have that as a good portion of our program. Okay, a short wrap-up to close. You've heard a lot about the overall organization and the size of the TAMs. We're now going to take this opportunity to invite the executives back up, and we'll go through a Q&A session.
I've got Jensen and all of the leaders here today. We will have mics in the room, so if you have a question, raise your hand and Simona and Sean should be able to get to you. So I'll invite the executives back up, and we'll see if we can get started.
Okay. I think we can take our first question.
Hi, Atif Malik from Citigroup. I have a question on gaming. You shared a number that Pascal penetration of the installed base is about 30%, which is fairly low. Can you walk us through the different variables you're juggling this year, including the supply constraints you talked about on your last earnings call? How should we think about the landing of the Pascal cycle this year into next year, and the potential launch of your next-generation gaming platform?
Well, the last part was definitely no comment.
Yes, no comment.
Look guys, we don't want to ruin your surprise. Pascal is the world's best GPU for gamers. We're selling a lot of them. And you also know that because of the computing model we created, our GPU is the single largest distributed installed base supercomputer the world's ever known. And so its utility in all kinds of computing models, including blockchain is quite interesting.
And so the utility and functionality of our GPUs continue to grow. We built our GPUs for 4 main markets. And then one day, of course, Ethereum discovered our GPUs as well, and it's been consuming some of the supply. We're going to work as hard as we can to catch up to that.
And our supply chain is very, very flexible and our partners are really terrific and we're flying as fast as we can. Okay. And so I think as for the next generation, I can only tell you that it's going to be great.
I have a follow-up question on gaming. Last year, you guys talked about gaming as a service, the cloud offering. And can you just talk about how that market is doing, whether it's a threat or an opportunity? Because we hear from companies like Microsoft who are still very serious about using their cloud in the gaming market, so gaming as a service in the cloud.
May I? Yes, please. Cloud gaming has the opportunity to expand our reach well beyond the PC. One of the things that's really great about the PC is that it's really cost-effective. With just a couple of hundred dollars, you turn the PC you already have, that you already use, into a powerful game machine.
And the performance will always be better than anything you can do with cloud gaming. Not just on first principle reasons, it's actually literally right there. The latency is 0. And so I think that and it's really super affordable. And so I think PC gaming is going to be around a long time, but there are a lot of markets.
And even as it's growing, whether it's 25% per year or whatever it is, even as it's growing, we believe that there are segments in the market we just can't reach that we would love to be able to reach. For example, there's a whole installed base of thin Macs and MacBooks that are bought for really good reasons because they're incredibly good notebooks. But for gaming, it just doesn't have the support of the gaming industry. And so we could go through the cloud to reach the Mac. There are really low end integrated graphics notebooks that are built for thin and light notebook PCs.
We could reach those markets. We could reach Android devices, whether Android television or tablets, in a way that we otherwise couldn't. And so I would say the first thing to think about for GeForce NOW, our cloud gaming service, is that it allows us to reach beyond our core market of PCs. The second: it's going to take a long time. We know the technology better than anybody.
There's no close second. I mean, we understand every line of code and every millisecond of latency that is introduced from end to end. We know it better than anybody. I can tell you for certain: it is hard.
It is really hard. And unlike movies, you can't buffer it up. There's no tolerance for mistakes. The reason why movies are so smooth is that you can pause them, you can break the network, and you can still watch video for a while. You can try that.
Just disconnect, and the video will still run for, gosh, who knows how long, but a couple of seconds at least. You can't do that for video games, because of the interactivity. So it's a very, very hard problem. But if anybody's going to crack that nut, it's going to be us.
Hi, Mark Lipacis from Jefferies. Thank you very much for the great and informative presentations today. I had two questions and I think they're related. I guess one of the takeaways for me today is how impressive the ecosystem is that you built. And I wonder the first question is, your operating margins have expanded from 20% to 37% over the last 4 years.
Is there a debate about potentially taking some of that operating margin back and putting it into OpEx to extend or press your lead in the market with the great ecosystem that you have? And related to that, Jensen, can you talk about how you think about the competitive environment? To what extent...
Thank you. Yes, thanks for the question. I can tell you about our architecture model: there's a reason why there are 4 markets, not 19 and not 2. It turns out we selected 4 markets that have, at the physics level, very similar computing constraints.
There is a fundamental reason why we selected these 4 markets to pursue over the course of a decade. You've watched us nurture these strategies and pursue these markets for coming up on a decade. We've known for some time that these four markets have several characteristics. The first is that the computing model we pioneered is perfect for them, and that whatever dollar of investment I make into one, I can leverage 85% of it across all 4.
We also selected 4 markets that are arguably unbounded. From where we are today, they might as well be infinitely large. There's no question at this point that the future autonomous vehicles will be large. There's just no question about it. Every EV will be an AV.
There's just no question about it. There's no question that the future of AI, the future of software is AI. There's no question now. It is a fundamental way of doing software and the industries that it will serve and touch is broad. There is no question now that every supercomputer in the world will be accelerated.
I don't think that that's a question anymore. 10 years ago was a question. 9 years ago was a question. 8 years ago was a question. 3 years ago was starting to get a little, it's not a question today.
Nobody's questioning us. Everybody's simply asking not if, but how. There is no question that video games is going to continue to grow. And all we have to do is just survey population today. And I did this exact same survey 25 years ago to get funding.
I asked the investors, and Don Valentine, he wasn't old then, but he was older. Okay, yeah, he was old. His kids played video games, but he didn't, and his parents didn't. And I used the same example for myself.
Like both my kids play. And their kids will play, 100% of them. And so it stands to reason that this form of entertainment is not a fad. It's enabled by all of the technology that is now commonplace and it's so enjoyable. It of course has to be large.
We selected 4 markets that, number 1, require the computational model that we pioneered and are very good at. Number 2, every dollar of investment is 85% applicable to all of them. And number 3, they are large markets, so that we can reinvent ourselves, so that we are not a PC graphics chip company anymore, as we were 25 years ago, but will reinvent ourselves into a new kind of company. Now, that new kind of company was kind of hard to explain 5 years ago. What kind of company does AI and self-driving cars and video games and builds graphics chips?
And that was kind of a hard thing to explain a few years ago, but I think it's relatively easy to explain today that company is called NVIDIA. This is just what we do. And so I think directly to your question, I'm not holding anything back. It's just the way we invest is so highly leveraged and we're so disciplined about working across all of that to ensure leverage that it appears that we could invest more because based on the keynote that you saw, they must be investing 4 times more, but we're not. It's just highly leveraged, highly thoughtful architectural investment.
And then number 3, we can grow faster than our investment because after 10 years, these markets are large. They're much larger than any semiconductor company, if that's what we even are, any semiconductor company can be. They're just large, large markets. And it's taken us 10 years to get here. So those are the three reasons.
Really strategic selection; one singular architecture and a super, super thoughtful, highly leveraged way of investing; and, third reason, 4 large markets.
Toshiya Hari, Goldman Sachs. Shankar, you updated your long-term TAM forecast from $30 billion last year to $50 billion. Was that primarily a function of you guys identifying manufacturing, healthcare and some of those verticals as opportunities, or were there other developments that drove the change?
I believe he's going to be much more polite in answering it. Let me give it a try. They were just wrong. And the reason for that is because in large markets that are being created, you often are wrong in the beginning. It's bigger than we thought.
Our opportunity for inference is bigger than we thought. Our opportunity for inference is bigger than everybody thought. How hard inference is for hyperscale is harder than anybody thought. But you just have to apply some common sense. Hyperscale computing is the hardest form of computing the world's ever known.
That's why it took so long to get here; otherwise it would have been done first. This way of doing computing is so hard because the number of people using that fungible resource is in the billions, sending in queries they want instantaneous results for, and everything is moving to AI. And all of these models are evolving and growing and changing and getting more complex and more amazing, all at the same time. Inference turned out to be a much, much harder problem than, quite frankly, we all realized 5 years ago.
We knew it was hard. We knew it was much harder than people thought it was. But I got to tell you, we didn't realize it was that hard. It's super hard and it's super fun. And it's exactly the type of problem that our company ought to be working on.
You train on our architecture, of course it's going to run on our architecture. Of course it's going to run on our architecture. With complete certainty, I can tell you that if you train on our architecture, it will run on our architecture. We're the only company in the world that can say that. These problems are complicated.
The rate by which all of these Internet service companies and all these Internet companies are moving to AI is much faster than we thought. And the scale of it, of course, as you know, is quite large. And so I would say, if we simply applied the two statements we just said, ignoring the size of the training market, ignoring the size of the training market. If we just simply said, we believe every single supercomputer going forward from this day forward will be accelerated. Number 2, we believe that every single hyperscale data center will be accelerated.
And our architecture is on both the training and the inferencing side, and there's every evidence we're incredibly good at this now. The throughput is incredible: a 100x throughput improvement from acceleration is going to be tough to beat. These two value propositions, these two statements, would suggest the market is much bigger than we thought. Now, the one area that Shankar didn't include last year, and we could have, we could have: we just didn't know exactly how big it was going to be, but we now have a feeling for it.
Anybody who's working on autonomous vehicles will not just have one deep learning network in their car, they will have tens of deep learning networks in their car. Just as every company who's working on software and it's a large body of code and it's complicated code, you're not going to write one single file. There's not one object file. There's a whole bunch of libraries and modules and all of that is happening in deep learning as well. And so the first thing is that the number of networks is a lot more than people thought.
Every single network, because of the exploration and parallel experimentation you have to do to find the right answer, is so large. Every one of our networks is essentially backed by 10 DGXs. We have 300,000 images per network today; by the end of the year we will have 3,000,000 images per network. So if one network is 10 DGXs, and we're now going to increase that by another factor of 10.
Okay, so one network is 100 DGXs. Let's say a company has 10 or 20 networks; 1,000 DGXs sounds about right. And I know exactly what the engineers are working on. So if you were to say that every single car company, every single company working on autonomous vehicles, needs the equivalent of 1,000 DGXs.
It is not out of reason. And so the question is, how many car companies are there? How many people are working on autonomous vehicles? And are they investing in deep learning infrastructure? The answer is absolutely yes.
I would say that's the one new piece we didn't consider last year, and that we revealed more of this year because we understand it better this year.
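The back-of-envelope arithmetic in that answer can be checked in a few lines. This is a sketch using the rough figures as transcribed; the counts are Jensen's estimates from the stage, not reported data:

```python
# Back-of-envelope from the answer above: training infrastructure
# per autonomous-vehicle program, using the figures as stated.

dgx_per_network_today = 10   # "every one of our networks essentially backed by 10 DGXs"
data_growth = 10             # images per network growing roughly 10x by year end
networks_per_program = 10    # "tens of deep learning networks in their car"

dgx_per_network = dgx_per_network_today * data_growth      # 100 DGXs per network
dgx_per_program = dgx_per_network * networks_per_program   # 1,000 DGXs per program

print(f"DGXs per network: {dgx_per_network}")   # 100
print(f"DGXs per program: {dgx_per_program}")   # 1000
```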
Great. Thanks for the color. And if I may, I had a follow-up on the gaming side. Jeff, you had a chart showing the GPU performance requirements growing exponentially over the next couple of years as things like RTX and VR and 4 ks come into play. Is it fair to assume given those underlying assumptions, is it fair to assume your ASPs grow exponentially or grow faster over the next couple of years in relation to what we've seen over the past 3 years?
Or would it be more prudent to assume something more in line with the past 3 to 5 years? Thank you.
I don't have a projection on how fast our ASPs will increase. I guess I wanted to illustrate that gamers are definitely buying up, and the price of a GeForce GTX is a real relative bargain compared to the other hardware they buy to play games on. So I continue to see a trend of gamers choosing higher-end GPUs to put in their PCs, and with the demands of games, RTX is certainly going to be very important.
They'll want the best possible experience.
Hi, thanks. Vivek Arya from Bank of America Merrill Lynch. Thanks for the very informative presentations. One more on the data center. Jensen, what's your rough sense of what your market share is today?
How big is that market today? We understand the long-term opportunity, so that's part A of the question. And then part B: if this market is really going to be $50 billion, doesn't that provide enough incentive for your customers, in many cases the large cloud companies, to develop their own solutions, because then it is worth developing their own solutions? And is it difficult?
Is it easy? How do you put that into consideration as you look at the addressable opportunity?
Yes. So I believe that our market share today is approximately 90-plus percent of accelerated data centers. The reason why I say 90-plus, even though I feel it's probably 100%, is that I'm just not sure. In fact, I just said something that's consistent across every single market we serve. Our market share of gaming GPUs is 90-plus percent of revenue share, probably 100% of profit share. Our revenue share of workstations today is 90-plus percent, 100% of profit share.
Our market share of accelerated computing is 90 plus percent, 100% of profit share. In fact, almost every single market we have selected started out at 0. That's why we have 90 plus percent. We only select markets with no customers. In fact, I just said something that sounds ridiculous, but it is true.
We only select markets that have no customers. If we think of a new business opportunity and there are already all these customers interested in it, it's already over. I'm not interested. If you tell me there are no customers, oh, you've got my attention. Let's go make something. And the reason for that is because we want the market to not exist, so that we can make a contribution; not squander the incredible talent of our company, not squander our life's work on markets that already exist, where people are already buying and making things and our only contribution is to drive the price down.
We want to go create something new. And when we do that, when we do that, our market share is very high. I believe that this market is going to be very large. Back in the good old days, you guys won't remember this, but back in the good old days, IBM competed with us, building graphics technology for workstations. Silicon Graphics competed with us, building workstation graphics technology. HP built workstation graphics technology that came through Ardent Computer and Dana Computer, which I think became Ardent and then became part of HP, and competed with us.
Everybody. They were giant companies, and NVIDIA was $300 million large. They were giant companies. The reason why I believe, long term, it made no sense for them is that, in the end, that's not what their companies were about. Our company is about building computing technology. That's what our company is about.
Their companies are about helping people deliver goods faster or find information faster. Their companies are about trust, or delivery, or quality of service. Their companies are not about building computing technology. So long as we are able to do it better and faster and more cost-effectively, eventually it just makes no sense for them to do it. Why would they want to worry about that?
And even in the case of HP and SGI and IBM, that's not what they were about. Their companies were about helping customers realize their imagination faster, solve a business problem fast. They came to the conclusion that frankly it makes no sense for them to build and they relied on us to build. And so I'm not exactly sure what's going to play out, but here's what I do believe. I believe that very few will be able to invest at the level of the rate of the velocity and magnitude of capability that we're going to bring to bear.
And very few will be able to do it as cost-effectively as we can. I wasn't kidding today: DGX-2 cost a few hundred million dollars to make. DGX-1 cost $1 billion to make. And I'm going to sell it for $400,000. No company, no matter how great, can build a DGX-2 for $400,000. It makes no sense, but we can.
And we can sell it for $400,000 and help people save millions. So that's basically the logic, but we have to go earn it. We have to go earn it. And that's why we're moving so fast. We're investing at a rate that, as you...
[Indiscernible]
I'm sorry. Simona, is it time to cut? Oh, okay. Simona, Shankar, they control everything I do. So nobody, I really don't believe anybody, will be able to innovate at the rate that we have.
500x in 5 years. Just do another 5 years. And everybody who's shooting at our curve, as you know, keeps moving out their schedules. And the reason for that is because by the time they get to where we were, we're not there anymore. We're now 10x more.
And so that's our job. We've got to earn it. Okay, we've got to earn this, and you've got a company that is just so intensely focused on it. Simona, yes, please.
Thanks. Blayne Curtis of Barclays. A question for Jensen or Shankar on the data center and the DGX-2. Just curious: a lot of the hardware specs doubled, I think there's some new technology, and then you showed some performance that was 10x to 60x better.
So maybe you could talk about the areas that drove that performance gain? And then I'm kind of curious whether any of that is transferable to DGX-1. The comparison was versus DGX-1, and I was a bit confused whether that was the current generation. You did double the memory, and I'm curious whether any of those optimizations transfer over as well.
The DGX-2 benchmark that we showed versus the DGX-1 demo: both used the 32-gig Volta. We showed what DGX-1 did 6 months ago, and we showed what DGX-2 does now. The reason why DGX-2 is so incredible is this. When you run a simulation across a large number of processors, and there are some 80,000 of them, there's a whole bunch of cores and a whole bunch of chips. You run for a while, you run for a while, and then you have to synchronize, synchronize the parameters.
It's just like if we took a large job and split it across the 100 people in this room. Every so often we go off and work on our own, but every so often we have to stop and synchronize, share our information to checkpoint each other, and start again from that point forward. Okay? Our GPUs basically have to do the same thing. They run like crazy, and then they have to synchronize. Well, it turns out the faster you run, the more often you have to synchronize, because everybody gets their job done fast.
And you run so fast with a Volta that all you're doing is synchronizing. You simulate for literally one unit of time, and then you spend many units of time synchronizing, whether across PCI Express or InfiniBand or 100-gigabit Ethernet, it doesn't matter. So we created a fabric that allows every single GPU to communicate with every other GPU without being blocked. They can all talk simultaneously. We can all communicate simultaneously without bothering each other.
And it's called non blocking. And the non blocking transactions and the fact that the switch is not a network, it's a switch, it's a memory fabric that's an extension of our language. Whatever that secret language is inside our chip, that protocol, it's extended out into that natively. As a result, every single GPU can communicate with every single other GPU at the rates that they communicate. As fast as you can talk, we can synchronize.
And so now what we've done is turn 16 GPUs into one super GPU. Isn't that switch amazing? That switch took us 2.5 years to do. It is such a great breakthrough: 12 of them connect 16 GPUs, and the bandwidth is just shocking. And that's the reason why the speedup appears supernatural. Okay.
It's just that the GPUs themselves were bound up, these thoroughbreds being held back, and then all of a sudden NVSwitch set them free.
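The compute-versus-synchronize tradeoff described here is essentially Amdahl's law. A minimal sketch, with hypothetical step times chosen purely for illustration (not NVIDIA's numbers), shows why a 10x faster GPU on an unchanged interconnect ends up spending half its time synchronizing, and why a matching non-blocking fabric restores the balance:

```python
# Illustrative model: each training step is a compute phase followed by
# an all-to-all parameter synchronization. Speeding up compute without
# speeding up the interconnect makes synchronization dominate.

def sync_fraction(compute_ms: float, sync_ms: float) -> float:
    """Fraction of each step spent synchronizing."""
    return sync_ms / (compute_ms + sync_ms)

# Baseline GPU: 100 ms of compute, 10 ms of synchronization per step.
baseline = sync_fraction(100.0, 10.0)

# A 10x faster GPU on the same PCIe/Ethernet fabric: compute shrinks,
# the synchronization cost does not.
faster_gpu = sync_fraction(10.0, 10.0)

# The same 10x GPU on a 10x faster fabric (NVSwitch-style): the balance
# between computing and synchronizing is restored.
faster_fabric = sync_fraction(10.0, 1.0)

print(f"baseline:      {baseline:.0%} of time in sync")       # 9%
print(f"faster GPU:    {faster_gpu:.0%} of time in sync")     # 50%
print(f"faster fabric: {faster_fabric:.0%} of time in sync")  # 9%
```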
Stacy Rasgon with Bernstein. Wanted to get your view on how you guys are evaluating what's turning into, at least optically, a fairly robust startup environment in the AI space. I guess we're seeing some innovations coming out of there, at least on paper at this point, things like dataflow processing, sort of things that have not yet been delivered as part of CUDA. Is that something that's going to be important? Are you seeing anything in that environment that you're at this point not capable of doing?
Are you seeing anything, I guess, on the roadmap that would be something you would need to develop? And can you talk a little bit more about the role that the startups can play within your ecosystem, given that on the other side you seem to have developed an ecosystem that is underpinning the development of the industry at this point?
This is a company that has competed with 150 computer graphics companies, and we're the last one standing. We love competition. The reason why we love competition is because it makes us intensely honed. We're so alert.
We have systems in the company that are designed to learn everything we can from everybody we can. We don't suffer from a disease called NIH. We just love learning. And so the fact that startups are creating new ideas, I think it's fantastic. It allows us to test against ours constantly.
We recently had questions from a lot of people asking: what about these ASICs, these deep learning accelerators? We could build ASICs; we know how to build ASICs. So 14 of our guys, really great, great engineers, went off and cranked out an ASIC. It's called NVDLA.
We built it. It is completely awesome. It is not as awesome as our GPU. And we decided to give it away. We did it.
We tried it. Fantastic. Gave it away. And now we've put it inside the ARM ecosystem, and that IP is going to be available to everybody, and it will be supported with TensorRT and it will enrich our ecosystem. We try all kinds of stuff.
I have a joke inside the company: all we do is type. We're just typing all the time; 10,000 engineers are typing all the time. And what's amazing is, if you measure the typing rate of our engineers against what is ultimately shipped, apparently we throw away 99.9% of our code, which we do. And the reason for that is because we're trying all these ideas.
And so what remains in our body of work is simply the best that can be. We're not ignoring any of the architectures. Bill Dally, Jonah, Brian Kelleher, these are some of the world's finest computer architects. We're not ignoring any of those ideas, nor are they new. They've been around a long time. Data flow machines have been around a long time.
Systolic arrays have been around a long time. We're not confused about what they can and can't do. I will say this, though: a computing model doesn't come around very often. Just think about it. Aside from x86, CUDA and ARM, what are the other computing models on the planet? The body of work that software developers put on top of it is what enriches a computing platform.
It's not the chip. It's all of the body of work that the industry has put on top of it. We are grateful for the people that are here. They created the tools, the libraries of systems, the software and interconnect, all of the management software that are now connected into this ecosystem. Those are the people that are here.
This is the computing platform we all built together. Another one isn't going to get created overnight. I really doubt it.
C. J. Muse with Evercore ISI. I guess first question regarding your announced partnership with ARM. Can you talk about the revenue model there?
Is the goal there licensing on some of these jelly bean parts? Is it to sell Xavier? Or is it larger than that? Is it really more focused on driving your AI platform everywhere? We'd love to hear your thoughts.
Yes, great question. We're giving it away. It's open source. It was that trivial for us. It is the single best deep learning accelerator the market has.
Other people call it TPUs, but we're going to give it away.
So the goal there
is to drive your moat on the training center?
Yes. But everything on top of that accelerator is supported by our software stack. So TensorRT will target it, everything will target it. And we hope that we make it easy peasy for everybody to make these AI IoT SoCs. Only we would love a term like that.
And as a result, these AI devices, these smart, intelligent little doohickeys, will just be literally everywhere. Every sensor will have some intelligent doohickey in it. And these doohickeys will be sold by the trillions, and they'll be little tiny pieces of dust, and they'll literally be everywhere. And so there'll be these IoT sensors all over the world. And that's what I think it's great for.
As a quick follow-up, as you think about requirements for data center and gaming diverging, how do you think about, I guess, especially purpose built silicon while maintaining a singular architecture? I'd love to hear your thoughts on that as well.
If I understand your question correctly, how do I maintain a singular architecture across the different workloads? Is that right?
Yes. As you think about divergent requirements from both gaming and data centers, can you push forward and maintain a single architecture?
Did you guys all hear that? Can we maintain the same architecture as we think about the diverging requirements? Now, I said something earlier that was just super, super important. We are only engaged in markets where the requirements are virtually the same. The laws of physics are the laws of physics.
Linear algebra was created so that we could solve large multi-variable systems of equations, otherwise known as physics. We are largely a simulation company. We are a world simulator. Think of it that way: a physics simulator of extreme proportions. We can apply the simulator to simulate virtual reality, otherwise known as video games.
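The linear-algebra point above, that physics problems reduce to solving multi-variable systems of equations, can be sketched with the textbook algorithm. This is an illustrative toy (naive Gaussian elimination in pure Python), not NVIDIA's implementation; at scale this is the kind of dense linear algebra that GPUs accelerate.

```python
# Illustrative only: solve a small multi-variable linear system A·x = b
# with Gaussian elimination plus partial pivoting and back-substitution.

def solve_linear_system(A, b):
    """Solve A·x = b for x. A is an n×n list of lists, b a length-n list."""
    n = len(A)
    # Build the augmented matrix [A | b] so row operations hit both sides.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring up the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate this column from all rows below.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back-substitution from the last row up.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Three equations, three unknowns; the exact solution is x=2, y=3, z=-1.
print(solve_linear_system(
    [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]],
    [8, -11, -3],
))
```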
We can use the simulator to simulate physics, otherwise known as scientific computing. We can use the simulator to simulate surroundings, otherwise known as autonomous machines. We selected markets very specifically to be in the shape of the requirements that we are supremely good at. And we reject and avoid and give away all of the things that are not in that envelope. For example, it was easy for me to give away NVDLA because we're not in the business of making IoT devices.
That's not in the domain of the problems that we solve. Of course, one little doohickey is very useful for them. I don't mean to say that the deep learning processor we worked on is just a doohickey. But in the context of all the things that we do, it's a bit of a doohickey. And so we selected businesses that have very, very large conformance to the core work that we do.
So I don't see a separation, if you know what I mean. Yes, sure, in order to be in the data center, the software stack has to include things like Kubernetes, but that's okay. It turns out that I also need Kubernetes in a lot of different applications, because in the future, the idea that one application and service can run in a cloud, in a data center, in other clouds, it makes a lot of sense to me. And so that's a great investment to make and it will help everybody.
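The Kubernetes point above, a GPU-aware scheduler letting the same service run in any cloud, comes down to containers declaring GPUs as a schedulable resource. A minimal sketch, assuming the standard NVIDIA device plugin resource name `nvidia.com/gpu`; the pod name and image tag here are hypothetical examples, not from the talk, and the manifest is built as a plain Python dict so the sketch is self-contained.

```python
# Sketch of the kind of pod spec that makes Kubernetes "GPU aware":
# the NVIDIA device plugin exposes GPUs as the resource "nvidia.com/gpu",
# so a container requests them the same way it requests CPU or memory.
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "trt-inference"},              # hypothetical name
    "spec": {
        "containers": [{
            "name": "inference",
            "image": "nvcr.io/nvidia/tensorrt:latest",  # illustrative image tag
            "resources": {
                # Ask the scheduler for one GPU; pods without GPUs on the node
                # stay pending until one frees up.
                "limits": {"nvidia.com/gpu": 1}
            },
        }]
    },
}

print(json.dumps(pod, indent=2))
```

Because the GPU request is just another resource limit, the identical manifest can be submitted to any conformant cluster, which is the cross-cloud portability being described.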
I wanted to ask about the Pro Visualization business. You Colette mentioned the 6% CAGR in the last 3 years, but it grew faster last year. And I know you started your keynote talking about the importance of ray tracing the new Quadro products that you're introducing. Can you talk about could we be looking at more of an extended product cycle where that business grows faster?
Yes. The guys here on stage, everybody, they know I never think about growth rates. I only think about the size of the market we help to serve. And the reason for that is because it's hard to figure out the middle points. But if you think about how many people we can help with the solution that we've created, in the case of visualization, it turns out that most of the really, really great computer graphics that's being done is being done on CPUs, in data centers.
And there are probably something along the lines of a few billion dollars of servers that are dying to be refreshed. And so that's kind of the market opportunity. Let's say $1 billion resulted in about $4 billion worth of installed base. The amount of rendering that's going to happen over time is going to increase, not decrease. And the richness of the images that people generate is going to increase, not decrease.
If I simply took everything that is generated in film and raised it to the level of Jungle Book, the installed base would probably have to grow by 50x just to support it. And I think that that's the future. The richness of the visuals being created is the thing we want to see, and it makes sense to see. And so I think that this is a multibillion-dollar market opportunity that we're reaching in to serve.
And for the very first time, we can actually serve it. For the very first time, we can serve it. So I think our workstation business remains very robust with CATIA and Adobe and Autodesk and SOLIDWORKS and all the things that we do there. And now something new has been added.
That's a multibillion-dollar market opportunity that we add a lot of value to, and that I think we can grow into. And what's the rate? Where's the dividing line? It's kind of hard to tell, but it's going to happen.
We'll have time for a couple of more questions.
Thanks. Will Stein from SunTrust. I want to revisit some of the market share comments you made earlier and maybe recall that as recently as maybe 6 months ago or so, I think you used to talk about how NVIDIA has nearly 100% of the market in training, but nearly 0 in inference. I think that's certainly changed in the last year. Maybe you could talk about what you think your share is in that market now?
And while just to sort of pair with that, you talked about not going after these very small IoT applications. But I think you do have a product called Jetson, which is for inference on the edge that's a more robust platform. Do you see that growing faster or inference in sort of batch in the data center being a bigger part of the growth?
Our near term, call it the next 5 years, our near term growth opportunity, the largest one is surely training. The second one is surely inference. Our market share in inference today is about 0.5%. And the reason I say 0.5% is because it's approximately 0. And the reason for that is because the world's installed base of hyperscale data centers is, call it, 20 or 30 some-odd million nodes.
And I know that we don't have that many GPUs in servers. And yet there's no question in my mind that when you add one of our GPUs, it will accelerate that data center by 100 times. Now, back a year ago, we could only accelerate images by 100 times. Everything else still had to run on the CPU. But today we can run images and videos and natural language understanding and recommenders and speech recognition and speech synthesis, basically most of the network workloads that are in these data centers.
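The back-of-envelope math behind the 100x claim above is simple: a workload occupying N CPU nodes needs roughly N divided by the speedup in accelerated nodes for the same throughput. A sketch using only the talk's round figures (20-30 million nodes, ~100x), not guidance:

```python
# Back-of-envelope only: how many accelerated nodes match the throughput
# of an installed base of CPU nodes, given a uniform speedup factor.

def accelerated_nodes(cpu_nodes: int, speedup: float) -> float:
    """Nodes needed to match the same throughput after acceleration."""
    return cpu_nodes / speedup

# The talk cites an installed base of 20 to 30 million hyperscale nodes
# and a roughly 100x acceleration for the covered workloads.
for installed in (20_000_000, 30_000_000):
    needed = accelerated_nodes(installed, 100)
    print(f"{installed:,} CPU nodes -> ~{needed:,.0f} accelerated nodes")
```

Real workloads are not uniformly 100x faster, so this only frames the size of the replacement opportunity, not a forecast.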
And so I think we have an opportunity to grow into inference in a very big way. And our market share is approximately 0%. Okay. Now, inference has a lot of different types. We say it's inference in the data center, but the data center is not as simple as a smart microphone.
It's not as simple as a smart temperature gauge. That data center is a computer. It's running software. In order to run software, containers have to run, Kubernetes has to run, okay.
I mean, all kinds of different software has to run. And you have to build a world class computer to run all that. You can't take a doohickey and put it in there and do inference. The hyperscale data center is a computer. That's the first thing.
Our focus is to go after the areas where high performance computing, AI and computing come together. A self driving car is a perfect example. Jetson was created for that. The reason why I called it Jetson is because of that robot, yes, Jetson. It was our early seed into the robotics industry.
It has become the dev kit of roboticists. And as we bring Jetson 2 and Jetson 3 to the marketplace, the roboticists are going to love Jetson. And we created SDKs on top of it, called Isaac, and a simulator, also called Isaac, to make it easier for people to create these robots. That's what that's for. In order to create a robot, it's not like a smart camera or a smart doorbell.
It's a computer. It's a full out computer with really rich software and software is quite arguably even more complex than the self driving car. And the reason for that is because there's more articulated limbs and the world is more complicated. There are very few rules. And so that robotics environment is very complicated and we create Jetson for that.
And that's very interesting to us. Smart doorbells and smart little cameras, those are all very important things. They'll connect into the cloud, where our computers will be, but we probably won't be that interested at the edge. Is that helpful? Yes.
Thank you.
Thank you. Srini Pajjuri from Macquarie. I have a cash return question. Obviously, you've done a great job in terms of returning a significant portion of your cash flow. But if you look at the consensus, I think we are forecasting close to $4 billion this year.
And given the balance sheet that you have and the flexibility from the recent tax bill, my question is why not return more cash? And is it simply because you want to keep some dry powder in case M and A opportunities come along or any other reason for that? Thank you.
Yes, I think we have a good balance of how to leverage and use that cash flow effectively. We have to think about what we'll need in terms of investment. Investment has so many different areas: yes, people and payroll, but you also have to think about the significant amount of resources necessary for the data centers to support this, as well as the facilities. I think our overall cash balance is reasonable against many others in the industry as well. That doesn't mean there's not an opportunity going forward that we may increase that amount.
We've announced our intention for the current year, and I think we're very satisfied with that.
Okay. I think this is a good time to wrap up.
Right. We're going to wrap up on a question on cash.
Do you have any closing?
Simona, I think timing is a skill. Yes, I would like to throw that in there. Guys, first of all, I want to thank all of you for coming to GTC. It is incredibly fun for us to do this. And as you could feel in the energy of all the developers and all of our partners that are part of this, the energy around the work that we're doing, the energy around our platform, is just really, really vibrant. If you could take away a few things: we announced several groundbreaking pieces of work.
RTX, keep an eye on that. That's a big deal. It is the biggest breakthrough we've made in computer graphics in the last 15 years. And I say 15, even though it's only taken us 10 to do it, because the last great contribution we made to humanity was the programmable shader. It modernized computer graphics.
It defines everything you see today. It is the reason why NVIDIA is here. We just introduced RTX, a brand new thing. The implication to our business is quite exciting, from workstation to rendering to even gaming in the future. The second thing we talked about was of course training.
Anybody who had any questions about the rate at which we're moving surely doesn't have them after today. We also made it very clear that in order to do the accelerated computing work that we do, the software stack is really complicated. It is, in fact, largely a software stack problem. And the investments that we've made, the expertise that we have, the large body of expertise we have in this area of accelerated computing, is second to none. And that's the reason why we're able to support so many frameworks, accelerate them so greatly, and in literally 5 years' time accelerate them 500 times.
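The "500 times in 5 years" figure above implies a compound annual speedup of 500^(1/5), roughly 3.5x per year. This just unpacks that arithmetic; the inputs are the talk's round numbers, nothing more.

```python
# Unpack the compound rate implied by "500x faster over 5 years":
# total_speedup = annual_factor ** years, so annual_factor = total ** (1/years).

def annual_factor(total_speedup: float, years: float) -> float:
    """Per-year speedup factor implied by a cumulative speedup over `years`."""
    return total_speedup ** (1.0 / years)

print(round(annual_factor(500, 5), 2))  # ≈ 3.47x per year
```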
The third thing is inference. The investments we made these last several years have been rolled out today in a big way: TensorRT 4.0, so that we can handle recurrent neural networks, or sequence models; and our deep integration into TensorFlow. Google wanted to integrate TensorRT deeply into TensorFlow. They did a great job with that. I'm so pleased that they did it.
The acceleration we did with Kaldi, the world's leading speech recognition framework. And then lastly, the industry standard that we helped to create, called ONNX. It's integrated into PyTorch, MXNet and Windows, and it allows us to address inference irrespective of the market or the platform or the framework that you choose. And of course, deployment has to be easy, and by making Kubernetes GPU aware, we can now deploy easily, as I showed you during the keynote. So the third growth opportunity, which is right in front of us, is inference.
And then lastly, we spent a fair amount of time talking about safety, the importance of doing that work. Now, from an investor's perspective, the way to think about it is this: long before people create and ship self driving cars, they will need to develop and create self driving car computing infrastructures to develop their software, to train their models, to simulate their cars, and all of that. Who is better at creating that than we are? We are the world leader in doing this. And so those new opportunities in autonomous vehicles were some of the things that I talked about today.
And then don't forget the 2 takeaways. The more you buy, the more you save. And that's why people are buying so much. And then lastly, it is really, really hard stuff. We know it's hard.
It's hard, and that's why it's so hard. Okay. Ladies and gentlemen, thanks for coming.