NVIDIA Corporation (NVDA)
Investor Day 2016
Apr 5, 2016
All right. Well, I'd like to welcome you all to our Annual Investor Day. We hope to accelerate your understanding of virtually all our businesses. No one's laughing. When Jensen makes jokes, you've got to laugh.
It's okay if you don't laugh at me, but that would be good. It will help me a lot. Thank you. All right. So anyway, welcome.
The most important slide in the presentation, please read closely. So I just wanted to quickly go through the logistics today. We will start our day with Jeff Fisher, company stalwart, talking about our biggest business, gaming. Then we have Bob Pette making his maiden appearance, talking about professional visualization, followed quickly by Jim McHugh, who's also making his maiden appearance, talking about virtualized GPUs in the enterprise. We have, making an encore appearance from last year, Shankar Trivedi talking about data center and, of course, perennial crowd favorite Rob Csongor, who might talk a little bit about his alma mater, but we'll see.
And then if you missed the first two hours on the new computing model, you will get it by now. We gave Jensen an hour this time. And then we will end with Colette, who will give you a lot of good data. Hopefully, you brought your protractors and your compasses, everything. And then we'll end with Q&A.
The bathroom is outside. There are no breaks. And this is much shorter. So if you guys have a red-eye to catch, hopefully, we can do that. So without further ado, I just want to introduce Jeff Fisher, who's going to talk about gaming.
Thank you.
Thanks, Arnab. Welcome, everybody. I'm not sure why, but this year I was volunteered to lead off. I appreciate the conversations we had with some folks last night. I think there's a lot of questions about gaming.
Hopefully, for those new to our story or returning, I can catch you up on what's going on in our business. Looking forward to it. First of all, I am inspired a bit by watching CNN and the presidential campaigns, and I just got to say that gaming as a market is huge. It is huge. It is really big, and it's growing.
Gaming worldwide is about a $100,000,000,000 market. I think by this time, it's probably closer to $120,000,000,000 all told, software and hardware. It's vibrant around the world. Gaming is across platforms: mobile, console, PC. Everybody — if you have kids, your kids are gaming. It's not just in New York or California, it's all over the world.
In fact, gaming is growing fastest in emerging markets and APAC. So we collectively are raising a new generation of humanity that is growing up gaming. It is a big market, and it's growing. Not only that, but at the Game Developers Conference last month in San Francisco, the largest event for game developers and gaming pros, about 30,000 people converged on San Francisco. They survey every year what platform these game developers are targeting — what is the most important platform for you going forward.
And for the last several years, and again this year, the PC is the most targeted platform, followed by mobile, then PlayStation 4 and then Xbox. So PC is a very important platform for gaming, and that's what I'm going to talk about. But as we've talked about in the past, and as you know, gaming is very scalable. PC gaming is very scalable. PCs are open.
You have top-to-bottom price points, top-to-bottom performance points. People build their own, they buy their own. Keyboard and mouse is a very interactive way of gaming. So PC is a very important part of gaming going forward. And as you know, I don't have to restate this, but one of the reasons you're all here is that NVIDIA is the world leader in PC gaming.
We're number one on Steam. Steam is the destination for somewhere between 120,000,000 and 150,000,000 gamers worldwide. It's a community, it's a store. The number one GPU among Steam gamers is NVIDIA GeForce. We're also number one in iCafes.
Many of you are familiar with iCafes, some of you maybe not, but there are roughly 150,000 licensed iCafes in China, and probably many, many more that aren't licensed. iCafes, or PC bangs, really got started in Korea. It was a place for kids to go with their friends, get together and game — a place where they have access to PCs, access to high-performance PCs. It's a destination for young gamers to go in China.
And Shunwang is the top middleware provider for all iCafes in China. Shunwang has a survey of all the PCs in these 150,000 iCafes, and GeForce GPUs power the vast majority of them — I would say upwards of around 90%. Number one in iCafes. Esports — we're going to talk a little bit more about esports. Esports is a very important part of the dynamics going on in PC gaming.
ESL, which is the longest-running professional gaming league and also hosts some of the largest tournaments in the world, has standardized on GeForce GPUs for its pro teams. In addition, it's anecdotal, but I would speculate that a majority of the pro teams train and game on GeForce GPUs. Number one in pro esports. Number one GPU in VR. Just last week, Oculus started shipping the Rift head-mounted display.
This week, HTC is starting to ship their Vive head-mounted display. They both offered reference platforms bundled with their head-mounted displays to make sure their gamers have the best possible experience out of the box. All of the Oculus reference PCs that you can buy with their HMD are GeForce GPU powered. All but one of the HTC reference PCs that you can buy with the Vive are GeForce GPU powered. The most stable, highest-performance solution possible — GeForce NVIDIA GPUs are number one in VR.
So the results speak for themselves. And of course, we had great growth last year. Over the last 5 years, we've had a revenue CAGR of about 20%. Our ASPs have grown about 11% over the same period of time, and our units have grown close to 10% over the same period of time. Units and ASPs are both drivers of that growth.
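[Editor's note: as a rough sanity check on those figures — treating the ~11% ASP and ~10% unit CAGRs as independent annual growth rates, which is an assumption — the compounded product lands near the stated ~20% revenue CAGR:]

```python
# Rough check: revenue growth ≈ ASP growth × unit growth, compounded annually.
# The quoted CAGRs are approximate, so the product only needs to land near
# the ~20% revenue CAGR, not match it exactly.
asp_cagr = 0.11
unit_cagr = 0.10
implied_revenue_cagr = (1 + asp_cagr) * (1 + unit_cagr) - 1
print(f"implied revenue CAGR: {implied_revenue_cagr:.1%}")  # 22.1%
```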
Most of our footprint, our installed base — say about 60% to 65% — is in the developed world, and 40% is in emerging markets. I'm going to talk a little bit more about the growth opportunity outside the developed world, in the emerging markets. So how did we get here? NVIDIA has had a dedication, for many years, to delivering the best possible experience to PC gamers. As a result, we transitioned from selling a GPU — a component, a chip — into a channel, to building a complete platform.
Of course, this platform is built on our GPUs. NVIDIA GeForce GPUs are the most advanced GPUs in the world. And I think we're all pretty excited about GP100 as well — not a flagship gaming device, but leading the way for Pascal. These GPUs are sold worldwide through a channel — we have somewhere around 10,000 resellers around the world — sold through a network of partners.
You can buy an NVIDIA GPU anywhere in the world. We have a stack of products for gaming GPUs from an entry $99 up to $1,000, so you can find your price point. And on top of this base of GPUs and our strong channel, we have built software platforms. One is called GameWorks. Jensen touched on it at the keynote.
GameWorks is really our investment in the game developer community, the way we attach to that part of the ecosystem. We have a large team, over 300 engineers, who figure out how to render real-world effects — smoke, fire, water, the physical interactions of all three of these — on a GPU, how to render these in games. They create self-contained software libraries that we integrate into the top engines in the world: Unity, the most popular game engine in the world, and Unreal 4, which is used by all the high-end games. And even Amazon's new Lumberyard — they just launched Lumberyard, they're getting into the game development space.
Lumberyard also integrates our libraries and GameWorks technology. And that finds its way into the top games in the world. The three most awarded games last year — The Witcher 3, Metal Gear Solid V: The Phantom Pain and Fallout 4 — all won Game of the Year awards from different bodies in PC gaming last year. All three of those games featured NVIDIA GameWorks technology. So that's GameWorks, an important part of our platform connecting with game developers.
The other part is our investment in the gamers themselves. We have a client — we consider it an essential client for a gaming PC — called GeForce Experience. When you install a game, GeForce Experience will configure that game for your specific hardware: one-click optimization. Another important feature of GeForce Experience is that it keeps your PC always updated. With every major game that comes out, our software engineers work for months in advance to update the software driver to make sure that when you install that game, your PC is tuned and ready to go for that latest game.
We call those Game Ready drivers, and GeForce Experience will automatically keep your PC updated, ready for the latest game. And that is the platform that we've built for gamers. Okay. So let's explore now a bit about the market itself. If you look at growth — a simple way of looking at growth — it's a function of ASPs, units and, more importantly in many ways, new ways to play: the ways PC gamers are engaging with their PC, how exciting the platform is.
So let me talk a little bit about that. First of all, relative to ASPs — and it's a story I've talked to several of you about before; we talked a bit about it last year — it's about the production value of content. Gamers buy GPUs to play games, and they play games on the best GPU that they can afford or that fits their use case. And what's happened over the last couple of years is that the latest round of consoles launched a little over 2 years ago — PS4, Xbox One — and 2 things have happened as a result.
One is that they are basically PC architected — x86 plus GPU, basically PC architecture. So it's created a very wide footprint across Xbox, PS4 and PC, a very wide footprint for developers to target. That's very good — a high ROI on their investment in a game. The other is that the latest round of consoles raised the bar, raised the baseline performance about 7x to 8x from the prior consoles. So now, when game developers are going after a wide footprint, they're not going to program to a 10-year-old console; they're programming to the latest console, which has a lot more horsepower than the prior generation.
They can throw a lot more visually rich content on top of it, taking advantage of that horsepower. Well, that console baseline now becomes kind of the least common denominator for game developers. And from my perspective, that's about 1080p at 60 frames per second with high visual quality, and that's roughly a $199 GTX 960 GPU. So I'll call that the baseline for new content coming to market. And of course, game developers targeting the PC scale up from there. So we'll call that the minimum for a good gaming experience. In fact, the recommended GPU for all the latest AAA games that have come out, starting early last year and through this year, is the GTX 960. In fact, the majority of these games are targeting a 970 or 980 as what they recommend you play their game on. So this gives a lot of motivation for gamers: when they're coming into the market, they're upgrading their PC, or they want to play the latest game and they're buying a PC, there's a certain class of graphics horsepower they need to really enjoy that game.
Production value: if we look at our installed base, the NVIDIA installed base is roughly 100,000,000 gaming GPUs. If I look into that installed base, about 80% of those systems are below that recommended GPU class. So as more AAA games come out and more gamers upgrade, we see that as a tailwind — a strong incentive for gamers to upgrade to higher-end GPUs. And as with every console cycle, as games keep coming out, they're going to push the performance even harder. So we see that lifting even more over the next several years.
I'll also comment that the 100,000,000 GPUs in my installed base don't represent the entire installed base of gaming PCs. Of course, I have some discrete GPU competition out there that is also going to need to upgrade. And if you look on Steam, you'll see there are also PC gamers out there using much older PCs, outside of my ecosystem, who are going to want to upgrade. All told, I would guess there are about 200,000,000 gamers out there gaming on PCs that I would consider targets for my business — upselling to recommended GPUs. Okay.
And let's talk about ASPs. So there are some things going on in the world that may not be obvious to you, but we feel them in a very big way. First of all, the gaming demographic is about 18 to 40 years old. And while I mentioned before that about 60% of our installed base is in the developed world, in fact, a vast majority of the 18-to-40 population is not in the developed world. About 80% to 85% of it is in emerging markets.
And I count China as an emerging market. The population of these markets is about 2,800,000,000. And I've excluded the countries below emerging-market status — the developing nations. Then look at broadband penetration as a proxy within emerging markets: you're not going to buy a PC, you're not going to be a PC gamer, if you don't have access to broadband.
On the other hand, you're not going to sign up for broadband if you don't care to get online with a PC. So use broadband penetration as a proxy for our expanding TAM in emerging markets. And broadband there, of course, is growing much faster — about 3 times faster than it is in developed markets. So the number of individuals in this demographic that we can touch is growing rapidly, and the demographic is huge. And you can see it showing up inside of our results as well.
GeForce units, if you look at our units, are growing about twice as fast in emerging markets as in developed markets. Our ASPs there are growing about 1.5x as fast as in developed markets. And at the macro level, our penetration — our installed base versus the population; these are big numbers — is about 20% in developed markets and virtually nothing, about 2%, in emerging markets, considering their size. So we're investing heavily in engaging gamers in the emerging markets, and we expect to see a continued tailwind and sustainable growth in that part of the world. Further, just to give you a snapshot of the upgrade state of the market, about 30% of the installed base has moved to Maxwell.
We still have a lot of gamers on older architectures. And as we roll to our next generation later this year, I think we have an opportunity — not just because of recommended GPUs, but because of the architectural advancements — to move a large part of our installed base forward to our next-generation GPUs. Okay. So what's fueling this growth? Nothing is more important in emerging markets today than esports.
I love to talk about esports. It's truly engaging for our gamers. It's social — they get together with friends, 5-on-5, they get together online and game. Esports pros have become celebrities in this world. I don't know if any of your sons or daughters follow or worship any of the esports pros, but they have a strong following.
The largest tournament in the world by payout happened last August. It's an annual tournament, for Dota 2, run by Valve, with an $18,000,000 prize pool. Evil Geniuses, who took first place, took home $6,000,000. These games — esports, the pros — inspire a whole class of gamers to come into PC gaming, play for themselves and ultimately aspire to compete. But it's not just competing, it's also watching.
There's a much larger base of people who watch esports than just play. There are about 250,000,000 viewers of esports online. And where do they go? Twitch is a huge destination in the U.S. and Europe. Twitch had about 240,000,000,000 streaming minutes of watching last year — 240,000,000,000 minutes, all gaming. That's about 450,000 years of watching in one year. It's pretty amazing. But it's not just Twitch; every country has its own destination for watching esports.
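[Editor's note: the minutes-to-years conversion above is a straight unit conversion from the 240 billion minutes cited; here is the arithmetic:]

```python
# Convert Twitch's ~240 billion streamed minutes into viewer-years.
minutes_watched = 240_000_000_000
minutes_per_year = 60 * 24 * 365  # 525,600 minutes in a non-leap year
viewer_years = minutes_watched / minutes_per_year
print(f"{viewer_years:,.0f} viewer-years")  # ~456,621, roughly the 450,000 cited
```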
Niconico in Japan, for instance, is the largest destination. YY in China is the largest destination. And as you know, media outlets have also picked up esports: ESPN has an esports page now, as do Bleacher Report and Yahoo Sports. So it is definitely becoming more mainstream. And if you were reading the news recently, even UC Irvine has built an esports arena, and they're offering competitive gaming scholarships for new students.
Esports is helping drive growth in emerging markets. But gamers don't just want to play, and they don't just want to watch. One of the most vibrant aspects of the PC gaming community — and this is unique to PC gaming — is gamer-created content, also called user-generated content: how do I take this game and express myself, interact with it more deeply, or tell my own story with the game? And one of the most vibrant areas of gamer-created content is mods.
And mods have been going on for years and years. The developers who create games want gamers fully committed to those games, so they give them access to get in and change the game, tell their own story with the game. And it is very popular. Nexus Mods is the number one destination for modded game files — there's a game, I modified it, I want others to enjoy it, I upload it. Nexus Mods has about 150,000 different modded game files available, and they've had about 1,500,000,000 downloads from gamers who want to experience others' creations based on the games. And these game mods could be anything — the bottom pictures here are from a fully cinematic mod for Star Wars Battlefront. It looks as good as or better than the movie while playing the game.
We love mods like that because they drive GPUs hard. The original game has a certain recommended GPU; the mod needs something much greater. The mods could be new weapons in a game — this is actually a screenshot of weapons that were created by users to go into a game. Or you could put Superman in Grand Theft Auto V, if you care to, and let others play Superman in that game as a mod. Video is another form of gamer-created content.
Some of the most engaging content on YouTube is gaming content — a gamer captures an exploding helicopter, landing on the ground and saving the day, and posts his video up to YouTube for everybody else, all of his friends and other gamers, to enjoy. And then there's something as beautiful as photography: screenshots. Photographers and artists go into games, find the right angle, edit the shot, and post and share it. Dead End Thrills is one of the main online destinations for gaming-based photography. User-generated content is such an important part of PC gaming culture; it helps to sustain and keep PC gaming very alive and powerful. Okay.
So the last thing I wanted to cover — I'm sorry, I feel like I've got a lot to say, and I'm going a little faster than I'd like, but we've got such a great story — the last part of this is virtual reality. And I am so excited about VR. I know some of you have seen demos. I really hope you get to the VR village and experience not just Mars, but some of the other great demos that we've got set up.
VR will give us a new way to interact with content, a new way to interact with our PCs, with each other. And for developers, it's a new way of telling stories. It's not just going to be Battlefield in a head-mounted display. It's going to be a new way to interact with characters in the environment. And it's not just for gaming.
Very importantly — and I know Bob is going to talk about it later — VR is coming to applications in the professional space. In addition to entertainment, you're going to experience movies in VR. It's going to be a very important paradigm for interacting with content going forward. In fact, I got an e-mail today that IKEA just put up a VR app for designing your rooms and homes with IKEA furniture. It's on Steam, it went up today, and it's beautiful.
You can paint everything, you can set up all the equipment, see what it's going to look like inside your room. I don't know if that's a professional use or a consumer app — I think it's consumer. And they posted the recommended GPU for the IKEA app: it's a GTX 980, which is pretty amazing.
I mean, who would have thought? IKEA is driving GPU horsepower so you can design your room with an IKEA app in VR. It's very cool. As for the VR opportunity — you'll read analyst reports projecting the growth of VR, and they're kind of all over the map. I happen to align personally more with BI Intelligence.
I think they're one of the more conservative sets of numbers that I've seen. But no matter which number you look at, everyone is projecting a hockey stick in VR penetration over the next 4 to 5 years. This year and next year, the ramp is a little harder to predict. But ultimately, everyone believes VR is here to stay. And when you experience it yourself, you will see that this first generation of VR is ready for the mainstream.
It's good, it's great, and I think you'll love the experience. More important from my perspective is what the future of VR will look like. This first experience in VR is 1080p per eye — you've got a full HD panel per eye. And as great as it is, I think you can all imagine what it will be like when you're at 4K per eye or 8K per eye.
That's 4x, 8x, 16x the number of pixels, times 2 for each eye, that need to be driven. And the frame rate has to be much higher than the standard gaming frame rate. You can game at 30 frames per second, which some consoles do, but you cannot do VR at 30 frames per second. You need VR at 90 to 120 frames per second. 4K, 8K, 16K — the headroom required to drive the future of VR is not here today. It's in our roadmap. So there is a lot of innovation to come, and a lot more horsepower that's going to be demanded by gamers for VR going forward. Okay. Well, that's the end of my story for gaming. I'll be around afterwards if you have questions, but I wanted to give you a snapshot of what's going on in terms of growth.
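[Editor's note: the pixel-throughput arithmetic behind that headroom claim can be sketched as follows. The per-eye resolutions and the 90 fps target are from the talk; treating single-display 1080p at 60 fps as the gaming baseline is an assumption for the comparison.]

```python
# Pixel throughput (pixels per second) = resolution × number of eyes × frame rate.
def throughput(width, height, eyes, fps):
    return width * height * eyes * fps

baseline = throughput(1920, 1080, 1, 60)   # single-display 1080p gaming at 60 fps
vr_now   = throughput(1920, 1080, 2, 90)   # first-gen VR: 1080p per eye at 90 fps
vr_4k    = throughput(3840, 2160, 2, 90)   # future VR: 4K per eye at 90 fps

print(f"first-gen VR vs. baseline: {vr_now / baseline:.0f}x")   # 3x
print(f"4K-per-eye VR vs. baseline: {vr_4k / baseline:.0f}x")   # 12x
```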
So the next speaker is Bob Pette. Bob is our VP of ProVis. He'll add a little bit more on VR and tell you what's going on in this space. So thank you very much.
Great. Thanks, Jeff. Sounds good. Is this on? Good deal.
So again, my name is Bob Pette. Happy to be here on my maiden voyage, Arnab — hopefully, we'll earn an encore performance. I think you all know that Quadro and the ProVis platform have been the leader year after year after year. I don't want to go through where we've been, but where we will be and what our growth areas are.
Jensen did a great job today talking about a couple of those: VR and rendering. We see those not only as expanding use cases within our current markets, but actually opening up new markets in the pro space. So, a little bit about those markets. Our volume markets are primarily in CAD and design, manufacturing, CAE, and media and entertainment. That's where you historically see Quadro technology.
And you can see the number of users up there. Those users are based on the top apps that we know we have Quadro embedded in. I think there's an opportunity to change those workflows. There's an opportunity to add physically based rendering. There's an opportunity to add VR to just about every app that we're currently working with, and that will cause us to not only expand the number of users in each of those markets, as well as open up new markets.
The customers in those fields are the Pixars, the Disneys, Ford, GM, Boeing and the like. In the scientific business, those are the SpaceXs, the Lockheed Martins, the Shells and the Exxons. They all tend to have a very high appetite for both power and speed, which we like: bigger GPUs, bigger frame buffers, faster systems. I think the predominant effect you'll see here is an evolution of each of these application areas, again, to take advantage of the promise of VR. As for pro VR — Jeff alluded to VR and its ramp — pro VR will lag gaming VR because it requires considerably more computing power than gaming VR. You're dealing with real models, real content that needs to be connected to real databases, and I'll touch on that in a bit. But first, I wanted to talk a little bit about the platform that we use to address the opportunities in our growth areas of rendering and VR. So Quadro is not a chip — I mean, first of all, ProVis is not just Quadro, and Quadro is not just a chip.
Jensen, when he was interviewing me, said he did not want me to come here and just build graphics cards. We've got to build the best graphics cards in the world — and we do that — but we've got to build experiences. We've got to build experiences for our users in all of our markets. And we do that with a platform that serves our end users very well and serves the developers. It's a 2-sided platform.
Fish talked a little bit about GameWorks. In the professional space, we have DesignWorks. That's a collection of SDKs and APIs for video ingest, processing, stitching, rendering, and multi-display. It's part of NVIDIA's uber SDK. And the cool thing about it is you can take elements of GameWorks, elements of ComputeWorks, elements of DesignWorks and put those together to create the most compelling apps possible.
We'll talk a little bit more about VRWorks — a collection of tools that we're putting together to ensure developers can actually hit 90 frames per second per eye with content that far exceeds what people are normally able to deliver even at 60 frames a second. We'll get to that in a second. On the end user side, on the customer side: we've always provided good drivers, we've always provided great support. That's what people expect from the Quadro brand.
Now, for the first time, we're actually selling software to end users — providing applications that help them communicate and collaborate, both within their company and across companies — as a way to build loyalty and make them more productive. So this is a brand-new initiative. The first set of products that we're rolling out, and have rolled out, are around our rendering and our rendering plug-ins. You can expect more offerings on the end user side, so that we continue to build products and solutions that drive GPU adoption with end users while enabling developers with new SDKs to take advantage of the latest features of our GPUs. So, the 2 growth initiatives again: rendering — physically based rendering in particular — and VR. Iray is our technology for physically based rendering.
It truly enables predictive design. It's ray tracing — Jensen talked about it this morning. Ray tracing is a way to measure how every light, and every path from every light, contributes to a single point in the room. It's a very computationally intensive process. I could probably put 20 photos up here if we had the time, some of them real and some of them rendered, and most people would not be able to tell the difference between the real and the rendered photographs.
Why is that important? Because people can render these things, and do today. They may render them overnight, or over a weekend, on a CPU cluster.
It's becoming more and more important to shorten that turnaround time — to see if the design is correct, and not just aesthetically correct but physically correct, from a heat standpoint, from an audio standpoint. Gensler is designing the new campus that Jensen was showing. They've saved millions of dollars of potential engineering change reviews by finding problems with heat — with all the windows in the office — with noise and the like. So using ray tracing for design, for enabling predictive design, is becoming very big, and we're riding that wave with our Iray suite of products. So you can get Iray in a number of ways.
SDKs are the core of it. We want to enable as many developers as possible to build new applications that incorporate physically based rendering, and so we'll continue to offer SDKs to those developers. We've worked with our largest ISVs — independent software vendors — to integrate Iray into their products: the CATIAs of the world from Dassault, Autodesk Maya, and Mental Ray. We just signed up Siemens, which has one of the largest installed bases for CAD and for design, and we'll continue to expand the reach of Iray via that mechanism.
Sometimes applications don't change as fast as we would like. Sometimes you've got to wait for a new release from Autodesk or Adobe to get the latest features. And in the meantime, we may have a new GPU or a new feature in our SDK that users want. So what users have asked us to do is give them a more immediate and direct path to those technologies, without having to wait for the ISV to update their code. So this year, we've rolled out Iray plug-ins.
The first two are Iray for 3ds Max and Iray for Maya, both from Autodesk. You'll see more announcements over the coming year. For anybody that offers a plug-in API for rendering, we want to continue to offer those plug-ins to customers. They demand the latest features, they demand the latest performance, they're going to want access to DGX-1 for rendering — and while we love our ISVs and will work with them to ensure the integrations are great, we don't want our customers to have to wait for the ISVs to roll the code.
So this is huge for us: software directly to the end user, connecting directly with the user from a loyalty standpoint. And we'll continue to look for ways outside of rendering — like multi-user collaboration — to sell software directly to end users. On the material side: materials are at the core of physically based rendering. Gold should look like gold. It should behave like gold — not just look, but behave like gold.
The emissive properties of materials — whether they're leather, carbon or diamond — need to be correct. That's an intensive process for people to do. Artists will spend days and days writing custom GL shaders to get those materials to look correct. So we are offering 600 free materials — and that's just the beginning — so that users can easily grab a material and put it into their app. As a model or asset — whether it's a car, a truck or a house — goes from app to app to app, we guarantee that it will look the same and behave the same as light hits it.
So we're weaving our way into the fabric from a rendering standpoint, and then from a material standpoint, more and more people are adopting this as a way to ensure consistency in their designs. I mentioned we just signed up Siemens on the Iray side; we just signed up Adobe on the MDL side. As with Iray, we have an SDK for our Material Definition Language, and Adobe has tens of thousands of users that will begin using NVIDIA's MDL. And why is this important?
Why is it important for more people to have these plug-ins, so that if they have a GPU, they can render? It's important because anybody with a plug-in — anybody with an app that has Iray embedded and integrated into it, anybody that's written an app with the Iray SDK — can accelerate their app beyond their own box by grabbing hold of any GPU in the office, any GPU in the data center and any GPU in the cloud. Iray scales nearly linearly across multiple GPUs and across the network. So everybody with a connection to a GPU via these Iray plug-ins can dramatically accelerate their workflow — time to decision, time to final design — by running the Iray Server software either on their own rendering appliances or on other GPUs in the office. We're working with the OEMs.
The OEMs will be rolling out Iray-ready workstations, and the OEMs will be rolling out certified Iray Server platforms. So they can drop in a 4-GPU or 8-GPU Iray-certified platform, and all the designers and the artists in the office can immediately take advantage of that power. So it will be a way to reach a large number of designers and give them access to GPU rendering that far exceeds what they may personally have available at their desk. So, VR. I love VR, but I actually really love AR, augmented reality. Most of the analysts will say that augmented reality will dwarf virtual reality.
And you have to have virtual reality before you can have augmented reality. So imagine, and if we could start that video, please. VR and AR are not just for the designers. We've done VR and AR for a long time at NVIDIA and at past companies, building CAVEs. The price point of the HMDs has now made these immersive environments accessible to many more people and many more apps.
So in this case, we're looking at an Audi showroom demo. This, I think, will be the showroom of the future. You put on a headset, you go in, you design your car: change the paint, change the wheels, perhaps get into the car, change the leather, change the steering wheel, check out the reach. Can I reach the cup holders? Are they in the right place? Perhaps in the future, close the door and drive away in that car. We will be rolling out several hundred of these Audi showrooms around the world, and we're working with GM, Ford and all the other automakers to do the same thing: allow their customers to put on a headset and try every single option on every single car without the dealership having to have that car on the lot.
And so, as Jeff alluded to, this is a way to reach consumers. The design of that app and the design of those cars is a pro use case, and dealers need access to it 24 hours a day, 7 days a week. So I think the growth for VR and AR will extend down to the consumer play, not just on the gaming side. It will be the IKEAs of the world, it will be the auto dealers of the world, and it extends into a number of other places as well, both for the content creators and for the consumers of the content that gets created. We talked a lot about entertainment: the ability to take tangents in a movie and explore items that may not be visible in the theater, which you can later go examine in a number of ways in a VR environment. You'll see a slew of that coming.
We talked about the design space. Manufacturing, again, I think the bigger opportunity will not be the use of VR to design a better product. I think it will be the use of AR to maintain and train. Imagine a Boeing 787 built in 38 different countries. How do you train people to work on a new engine or a new part?
Do you fly everybody in? Do you send parts around the world? Or do you give them a VR headset? If we could roll that second video: do you give them a VR headset and actually augment their experience? They can point to a part, they can identify the part, they can take the part apart.
Now take that to surgery, take that to doctors. How do you train more doctors? Virtual surgery is one way. This is a virtual cadaver demo, and I promise to stop it before it gets too revealing or grotesque. There's actually another one that was a little too revealing, where you actually get haptics as you're slicing into the body.
So as you slice through skin and through sinewy tissue and into bone, you get the proper response. Students are getting trained with virtual cadavers. I like to tell people that there is actually a shortage of cadavers, and you want doctors to get as much time as possible knowing what's what inside of a body. So it's not just for training, it's also for augmenting surgery.
With cameras and a headset on, they can see what's inside your body and see what a normal vessel or a normal brain should look like. So as they go in to make an adjustment on an aneurysm or a heart valve, they get that augmented view, and it really helps them make a difference. Architecture. Jensen covered a good bit of this in the keynote; if we could roll the second one. There we go. So again, we know that people will use VR to design homes, to test ceilings.
Jensen is a customer of VR, to see what the new building is going to look like. So whether it's a hotel or a firm trying to sell a condo, whether it's travel, whether you're building a home, building a new office building or expanding your city, you need the ability to recreate that and have it look real. And this is just a movie clip, but with the power of Iray, I could stop at any point and change the facade of that building from stucco to stone, change the windows, change the furniture in the apartment. As the computing power increases, I'll be able to do that in real time. Today, we're able to bake that scene and look at it, rotate our heads, tiptoe to get the right parallax. Imagine being able to do that in real time as you walk through the building, and then change what you're seeing.
If you don't like the floor, you change it. You don't like the paint color, you change it. That's where it's going, opening up opportunities not just for the pros, those architects, but for the consumers of the products they create. To me, the Holy Grail is actually collaboration. Imagine you didn't have to get on a plane every time to go do a design review on a part or on a car.
Imagine you could actually sit there in something other than WebEx or GoToMeeting and actually see each other. You look like more than a gaming avatar. You can talk to somebody as if you're really talking to them. You see their gestures. They may not say in words how they're feeling, but you can tell from a frown or an expression on their face.
We have to create, and we will create, a very natural way for people to interact with one another in the VR and AR space. We have shared whiteboards, so people can brainstorm and communicate, and people who are remote won't be penalized; you save money on travel and get more done by having a collaborative work environment. That's going to take more than just a GPU. It's going to take more than just software. It will probably take a cloud service to coordinate where everybody is in their virtual world, what they're saying, what they're writing.
But it is the future. People like people. The reason we built CAVEs in the past is that people wanted to get together and walk through seismic data. They wanted to look at DNA strands. They want to do the same thing here, but knowing that we're making eye contact, and knowing that when we're pointing, we're both pointing at the same thing. And when audio comes in from over here, it should come in from over here, not from the computer speaker in front of me.
And so there's a lot of work to be done, but a lot of opportunity in the VR space. And I should have brought water on stage. So, a couple of things we're doing to address both the computing power required for VR as well as, again, enabling the developers. I talked a little bit about DesignWorks. VRWorks, which we just released as well, has several tools that will assist VR app developers in creating compelling experiences.
Again, it's not just about things running faster. It's about — thank you so much. Parched. Now I will have to bring water to Jensen, probably for the next 10 years, but thank you very much. So VRWorks enables developers to create realistic scenes. It's not just about speed.
Yes, we've got to draw at 90 frames a second per eye. That's a given. That requires a lot of computing power. It's going to require big GPUs. But you've got to handle the audio correctly.
You've got to handle text correctly. You've got to handle the stitching of video correctly. Imagine trying to recreate a 3D scene in which we were all watching a basketball game as it was happening, not after it happened. We'd be able to watch it in VR while it's happening because we're reconstructing that 3D scene in real time. That's a lot of compute power.
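To put "90 frames a second" in perspective, here is a minimal sketch of the frame-time budget and raw pixel throughput; the 2160x1200 default reflects the combined-panel spec of 2016-era consumer HMDs, and real engines render even more pixels to compensate for lens distortion:

```python
def vr_budget(fps: int = 90, width: int = 2160, height: int = 1200):
    """Per-frame time budget (ms) and raw pixel throughput (pixels/s)
    for a head-mounted display. Defaults reflect 2016-era HMD specs."""
    frame_ms = 1000.0 / fps                  # ~11.1 ms to draw a frame
    pixels_per_second = width * height * fps # raw fill-rate demand
    return frame_ms, pixels_per_second
```

At these numbers the GPU has roughly 11 milliseconds per frame and must sustain over 200 million pixels per second before any distortion oversampling.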
It's a lot of compute power, and it's something that's definitely possible, something that is a dream of mine that I think is on the horizon. On the system side, yes, big GPUs are required, especially as you do things like the light field holographic VR work that you saw. We will continue to push that envelope. All of the OEMs have worked with us, and all the OEMs today have announced an NVIDIA VR Ready program. It seems like everybody has a VR-ready program.
I'll just say that if you can't draw fast enough, if you can't do 90 frames a second from your system, it's not VR ready; we know that. And the core of that is NVIDIA. We love all of our HMD manufacturers, but at the end of the day, we're going to be the ones that help drive the number of HMDs out there with our GPUs. On the pro side, it's not just 1 GPU, it's most likely 2 GPUs, and we'll continue to work on that. And the demo that Jensen showed with the light field was multiple, multiple GPUs.
The cloud will help us do that and stream VR to the headset. That might be hard to believe, but that's where I believe the future will go. And that's where the growth will be: enabling more experiences, whether rendering or VR, from the data center and from the cloud. So, VR-ready systems, both for desktops and for data center appliances. And then on the VR side, there's VRWorks, and the beauty of it is you can combine VRWorks with the other Works, with the other SDKs that we have.
So combining VRWorks with the Iray SDK, you get photorealistic VR. That's kind of the Holy Grail, photorealistic VR: if you're going to do virtual reality, it should look real. Why would you want unreal virtual reality? And so the models are more complex. The models are real.
Someone said one time in a demo that that dragon looked really real. I don't know what a real dragon looks like, but I guess it looked like that. But people do know what a car looks like. They know what their house looks like. And the models from which those scenes are built come from real content with real data sizes, connected to real databases.
So photorealism is going to be what people require. The car needs to look exactly like the one they want to buy. The house needs to look exactly like the one they want to buy. Photoreal VR is where it's at, and we're on top of that. So everything I talked about, whether it's our core apps, whether it's physically based rendering, whether it's VR or the combination thereof, all of that has to run with our customers in mind.
And customers have changing workflows. They're not just running local workstations. They're not just running on laptops. They're running in data centers. And so again, ProVis is not just Quadro and Quadro is not just a chip.
It is a platform, from HMD to desktop to data center to the cloud. And whether they're running a native Quadro app, a GRID-enabled virtual desktop or a GRID-enabled virtual app, the experience has to be the same. That's our job. That's what we're going to do: make sure that experience is the same, and that the power of Quadro is available from the cloud, from the data center and from local devices. And that's probably a good cue to bring up Mr.
Jim McHugh, our Grid VP of Product Management.
Great. All right.
Thanks, Bob. It's actually fitting that I follow Bob, because he was talking about how we're a leader in professional visualization. And now I get to say how we extended that to be the leader in graphics virtualization. And we haven't done this just by being a key component of the solution. We've done it by actually contributing to the growth of the market itself.
And I'm going to walk you through why that is. So these are two customer examples. And I actually really like the Bell Helicopter one because, after being in this industry for quite some time, I know that we've delivered value in desktop virtualization and compute virtualization to the IT department. But what Bell Helicopter is pointing out is that we've delivered value to the employee. Beyond just the flexibility and the productivity, we've increased morale.
And that's been the Holy Grail that we've been shooting for. When you get to that level, as Bob was pointing out, where you're giving the same user experience whether they're sitting at the workstation or accessing it remotely, that is a game changer. That is when you start transforming business. Populous, a leading company in event management — they built the Olympic Stadium in the UK; that's the picture you're seeing in the background.
They had a similar scenario, where they were stuck going back and forth between the site and their office so they could work on the workstation. When we removed the restriction of where they worked, they were able to cut their design and deployment time down significantly. And that is not only a transformation of their business and how they do business; it saves them a lot of money and makes them more money. So how are we going to leverage this?
How are we going to actually scale this out and bring this to many people? Well, we know we need to focus on a few key things. First, the technology needs to work. The products have to be solid. And with our latest releases, we've moved latency to a point where we're giving these end users the same user experience: the same as a PC, the same as the workstation they're used to.
In a world where bandwidth is becoming a commodity, available just about everywhere, latency is the new king. If you can reduce latency, that is what is going to drive that remote experience and make it a really great one. On top of these great products, we need great relationships with partners. We probably have relationships with about 80% of the virtualization partners out there, but we do have all the leading ones: Citrix, Microsoft, Red Hat. And our relationship with VMware is going so well that we were just named Technology Partner of the Year at their recent partner conference.
So we're building upon that. But the reason we have great products, the reason we have great partnerships, is that we need to deliver great customer value. Taking all the practices customers are used to in virtualization, especially around servers and taking advantage of those free cycles, and doing the exact same thing for PCs and workstations, we're saving our customers money and increasing their productivity. And as you saw from customers like Bell, we're improving their employees' morale. Now what do we do?
How do we extend this? How do we continue to take advantage of the great products we have? Well, it's really two-pronged. Tesla is the base this is all built on; it's the hardware foundation for everything we do in the data center. We work with our OEM partners, we get it incorporated into hundreds of systems, and we make it readily available that way.
At the same time, in the grid business, what we're doing is we're proactively working with our channel resellers, educating them, working with them, training them, so they understand the value. They understand how to do these deployments. They understand the benefits that we can bring our customers together. And then that expands our reach greatly. That gives us a sales force that can go out and help us expand into our key markets.
And these are the key markets we're working with them in: manufacturing, AEC, education, government, energy. Again, a lot of the examples you saw from Bob that you can do on the workstation, a lot of the great benefits of Quadro — when customers need to do that remotely, that's where graphics virtualization, that's where GRID comes in and enables them. But there's one other component that's really important, which I'm using as a metric to watch how we're doing and how this is being adopted and accepted in the marketplace. And that's Fortune 100 installations. And why is that important?
Because that says the enterprise sees this approach, they buy into this approach, they understand the value it can bring to their business and to their employees. So I'm going to be watching this space closely, because we've landed, as you can see, quite a variety of companies inside the Fortune 100. But landing is just the beginning. Now we need to expand. If we can just continue to penetrate these 14 that I'm showing up here, it will be a significant increase in our revenue.
So that's going to be the metric we use to look at the business: how much are we moving in, is it 5%, 10%, and then watching revenue continue to grow from a software standpoint. So, just a real quick summary: Tesla is the base that we build this all on.
Tesla is the hardware platform, and Shankar is going to come up in a few seconds and walk you through how Tesla is a key component of our enterprise business. On top of that, the GRID software and the GRID software model that we built out enable the graphics virtualization of all the applications you're seeing, whether on the workstation side, with the Quadro benefits Bob pointed out, extending that reach so they can get access to it remotely, or on the PC side. A lot of people don't realize that some of the desktop virtualization challenges out there are because of the PC. A really active power user, someone with multiple browsers open, Excel spreadsheets, PowerPoint, can be just as demanding on a desktop virtualization solution.
So by bringing the power of Grid to them, we're going to be improving their user experience as well. So with that, I'd like to welcome up Shankar, who's going to walk you through the data center components.
Thank you, Jim. Hi, everyone. I'm Shankar, and I'm responsible for our enterprise business. Last year, I shared with you how the accelerated data center was about a $3,000,000,000 to $5,000,000,000 opportunity in HPC and hyperscale cloud. So today, I'll share with you where we are, how we got there, what our core value proposition is, and our 3 simple strategies to grow in the future. And as part of that, I'll show you that, because of the progress we've made, our opportunity is more than $3,000,000,000 to $5,000,000,000, and you'll see why.
So before we get started: data center is a $50,000,000,000 to $60,000,000,000 market, but we are not in all of data center. What we specialize in is the accelerated data center. And within the accelerated data center, we focus on 3 large markets: the HPC market, the hyperscale cloud market, and an emerging market in enterprise artificial intelligence. These are niche markets, but they are large markets. And we serve all of them with one Tesla accelerated data center platform.
So we get great leverage by doing that. And today, quite simply, we are the leader in accelerated computing. We created this category. In high performance computing, our promise is to help researchers and the academic research community accelerate their discoveries, to speed up their scientific discovery and drive further innovation. Some years ago, we had a seminal moment when the HIV virus capsid was entirely understood as a result of using a Tesla supercomputer at the University of Illinois at Urbana-Champaign, using molecular modeling software.
The Higgs boson was found faster by using accelerated computing technologies. The bird flu virus — if you remember, a few years ago there was this big scare in China with the bird flu, and as a result of the work done by supercomputing centers in China and elsewhere, they were able to provide GlaxoSmithKline with an improvement on Tamiflu so that this potential pandemic could be controlled. And another great discovery ahead is the Square Kilometre Array, which is going to discover what happened only 100,000 years after the Big Bang by reading very faint cosmic signals with a very intelligently designed telescope array, and that discovery will be accelerated using Tesla accelerated data centers. In high performance computing, 75,000 papers have been written by academic and corporate researchers using the power of the Tesla GPU. 900 universities are teaching accelerated computing, to kind-of-old guys like me, but more importantly to young researchers who are out there, keen and eager to accelerate the pace of their scientific discovery.
And high performance computing is a $15,000,000,000 portion of that $50,000,000,000 data center market. In hyperscale, the seminal moment was the Google Brain experiment done by Andrew Ng, at that time at Stanford University, where they took 1,000 normal CPU servers and the computer automatically recognized cats in YouTube videos. They repeated that same experiment just 1 year later with just 16 Tesla accelerated servers, showing 6x better performance with just 16 servers. And that set off the whole hyperscale and cloud community, so that everybody now is adopting the Tesla accelerated data center for deep learning. Now, there's lots of debate about how big the hyperscale cloud opportunity is.
My estimate is that at least $10,000,000,000 to $15,000,000,000 of that data center market is in the hyperscale cloud. So that made things interesting in Silicon Valley, among people who understood the consumer Internet and were doing bleeding-edge work using deep learning. But in the enterprise, in traditional business — manufacturing, retailing, medical, industrial and so on — what really set things off is this AlphaGo moment. When Deep Blue beat Kasparov, people said it would take about 100 years for a computer to beat a human being at Go, because Go is so much more complicated. But by using these deep learning artificial intelligence algorithms, all of a sudden the machine was able to beat the best human being.
And what that set off is that when I go and talk to the Chief Innovation Officer or the Chief Information Officer or the Chief Data Scientist in enterprises, all of them are looking at artificial intelligence. And I believe this will be a new market opportunity for NVIDIA in the accelerated data center. So how did we get to where we are today? I know our GTC advertising says it is rocket science. And when you walk around and look at all the 400 poster boards outside and study the 800 sessions, believe me, it is rocket science.
But the data center value proposition is really very, very simple. The most important thing is that there needs to be an application that's accelerated. And if you accelerate the application and make the customer happy, the cost of deploying it in a data center drops dramatically. In IT speak, we call it lower data center TCO. Simple.
So if you accelerate an application 5 times — I'm not talking about 5% or 15%, which is a normal Moore's Law data center improvement; I'm talking about 500% faster applications — you will experience as much as a 60% reduction in your data center TCO. Last year, I gave you 2 very good examples: the Amber example for the HIV capsid, where we took a deployment cost of $200,000 on normal unaccelerated servers down to $14,000 with Tesla accelerated servers, and the Google Brain example, where you take $5,000,000 of data center cost and reduce it down to $200,000. And what's interesting is that not only does the cost of acquisition go down, in that case from $200,000 down to $14,000, but at the same time your electric power consumption in that example goes down from 20 kilowatts to 1 kilowatt.
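The cost reductions quoted above can be checked with simple arithmetic; here is a sketch using the talk's own numbers:

```python
def pct_reduction(before: float, after: float) -> float:
    """Percent reduction going from `before` to `after`."""
    return 100.0 * (1.0 - after / before)

# The talk's examples:
amber_cost = pct_reduction(200_000, 14_000)     # Amber/HIV capsid acquisition cost: 93%
brain_cost = pct_reduction(5_000_000, 200_000)  # Google Brain redo: 96%
amber_power = pct_reduction(20, 1)              # power draw, 20 kW -> 1 kW: 95%
```

Both acquisition-cost examples come out north of 90%, which is how a 5x application speedup can translate into the 60%-plus TCO reductions claimed once power and facilities are counted.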
And those of you who study the data center market know that electricity, power, is the highest cost of operating a data center. So in addition to reducing your acquisition cost, accelerated computing with Tesla reduces your operating cost, bringing down your data center TCO dramatically. Now, by making one customer successful this way, what happens is you get rapid deployment in vertical market segments that use the same application. So what we're seeing in high performance computing is rapid adoption, not only in the national laboratories like Oak Ridge National Laboratory, where we have the world's 2nd fastest computer, or CSCS, Switzerland's national computing center, which has Piz Daint, Europe's fastest computer. We also democratized high performance computing, so that all of the universities, whether it's Stanford or Harvard or my alma mater, IIT Delhi, which has India's fastest supercomputer, can now build their own supercomputers.
And of course, once you get adoption in national laboratories and in higher education institutes, industry starts to adopt. So in energy and oil and gas, we saw Schlumberger adopt it, and they now have many, many petaflops of computing; because Schlumberger adopted it, Chevron adopted it; ENI has the 7th fastest computer on the planet; Statoil has deployed extensively; and so on and so forth. I'm seeing the same in financial services with companies like JPMorgan Chase, ING Bank and Barclays, and even insurance companies, where Aon Benfield has put out a service for insurers. And what's interesting is that this year we found that 7 of the top automotive companies have now deployed Tesla for computer aided engineering, using ANSYS, Abaqus and other CFD and CAE applications in their data centers. So it's moved from adoption in the research, higher education and national laboratory community into industries, because the application is faster and the cost of deploying it in the data center is lower.
And likewise, because of deep learning and deep neural networks and that amazing value proposition, not only did all the cloud providers in the United States like Google, Amazon and Facebook adopt it; the cloud companies in China like Baidu, Ali, Tencent, Sogou, Qihoo and so on all started adopting deep neural nets in their data centers. And it's not just the service providers. What's interesting is communications. iFlytek is China's voice communications company. Skype Translate is done using NVIDIA Tesla processors.
So you get simultaneous translation from one language to the other in real time using accelerated computing. And we're starting to see some very interesting things happening in retailing, so that your recommendation for what to buy is dictated not by people who are like you; you get a recommendation for you, the individual, the person, because a neural net has been trained to understand who you are and make the best possible recommendations for retailers. So we're seeing amazing adoption both in HPC and in hyperscale. And all of this time, we are leveraging the one data center platform that we have, the Tesla accelerated data center platform. So what is this platform?
It consists not only of hardware, though we have hardware that's tuned to these applications: the M40 and M4 for hyperscale cloud; the K80 and now the P100 for HPC, multi-app HPC with the K80 and strong-scaling HPC with the P100; and we also put out the DGX-1 appliance, an AI supercomputer in a box for the researchers and early adopters. But the hardware is just one part of it. What's much more important is the software: what Jensen talked about, the NVIDIA SDK.
For the data center, we have a Deep Learning SDK and we have a ComputeWorks SDK. This amazing software adds value to our developer community and to our end customers, as well as to the hardware partners, to really bring out the maximum benefit of acceleration in the data center. And then deployment happens one of 2 ways. One is through our traditional hardware OEM partners — we call them NPNs — companies like IBM, Cray, HP, Dell, Supermicro and so on.
They design Tesla into their data center servers, into their racks. We now have more than 400 server models built on Tesla, available to all kinds of customers in every single market. And what's interesting this past year is that Tesla started to be deployed in the cloud. Microsoft is now previewing their N-Series Azure service, and the previews are sold out. Amazon has G instances, GPU instances, in their EC2 cloud service.
IBM SoftLayer is hosting lots and lots of startup companies and enterprises using their bare metal and Tesla GPUs as a cloud service. And I was recently in China, and Xiaoming Hu, who's the overall head of Aliyun — Aliyun means Ali Cloud, Alibaba's cloud — they launched their Tesla GPU as a service in the cloud. So we're starting to see deployments not only in customers' on-premise data centers, but also in cloud data centers for enterprise computing. So what are our 3 simple strategies for continued growth? First of all, in high performance computing, it's all about the path to exascale.
And the way I measure the path to exascale is our penetration of the top 500. Over 100 of the top 500 systems are already accelerated, and the vast majority, 96%, of the newly accelerated systems in the top 500 were accelerated by Tesla. When I first started this business, we had no systems in the top 500. And you can do the math: if you have 500 systems and we are on a path to exascale, and we'll get there roughly in 2022 maybe, that means there's going to be something like 750 exaflops of computing in just the top 500.
I'm sure you all have very sophisticated models, but you can do the math: assume an ASP in 2022 of whatever number, estimate what the flops per processor might be, and work it through in a number of different ways. Ultimately, it boils down to this: the top 500 itself is a $1,000,000,000 opportunity for NVIDIA in HPC. And there are many organizations that are not in the top 500.
Our own nation's National Center for Supercomputing Applications does not put in a top 500 entry. Our nation's security services use Tesla and don't put in a top 500 entry. The oil companies don't typically put in top 500 entries. So the top 500 is at best 20% of the HPC market. And if the top 500 is a $1,000,000,000 opportunity for NVIDIA, and if you agree with my thesis that ultimately all of the top 500 systems will be accelerated, then our overall opportunity in high performance computing is on the order of $5,000,000,000. And the way we get there is by accelerating, as I said, the applications.
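The sizing logic above is simple enough to write down; here is a sketch of the speaker's arithmetic, where the 20% share is his stated upper bound for how much of the HPC market the top 500 represents:

```python
def hpc_opportunity(top500_usd: float = 1e9, top500_share_of_hpc: float = 0.20) -> float:
    """If the top 500 is worth `top500_usd` and represents at best
    `top500_share_of_hpc` of the HPC market, scale up to the whole market."""
    return top500_usd / top500_share_of_hpc

# $1,000,000,000 / 20% -> a roughly $5,000,000,000 total HPC opportunity
```

The $5,000,000,000 figure is just the $1,000,000,000 top-500 opportunity divided by its assumed 20% market share.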
Last year, I told you our catalog had 300 applications accelerated. This year, we already have 400 applications accelerated. And my prediction is that more and more applications will continue to be accelerated using the NVIDIA SDK, thereby speeding up those applications and reducing data center cost. So that's our first strategy: the path to exascale in high performance computing. Our second strategy is to accelerate all the hyperscale data centers.
So all of you already know that Google uses Tesla and Baidu uses Tesla, and Facebook put out an open design through the Open Compute Project: a new type of server called Big Sur, with 8 GPUs in one 4U unit. That was another seminal moment. So we are already in a large number of these hyperscale data centers, and hyperscale data centers are about an $11,000,000,000 to $15,000,000,000 opportunity. Having established a presence in these data centers, our job now is not just object recognition, image detection and so on.
There's a huge opportunity in other areas. So Bryan Catanzaro of Baidu spoke earlier in the keynote, and he explained that voice and other problems have a temporal nature. And temporal means that as time changes, the state of the neural net changes. And so instead of using CNNs, he's looking at using RNNs, and there are other techniques like this. So our job is to address that market opportunity.
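For intuition only, a minimal vanilla RNN step looks something like the following sketch (toy dimensions and random weights, nothing from Baidu's actual speech models): the hidden state carries information forward as the sequence unfolds in time, which is exactly what a plain CNN lacks.

```python
import numpy as np

# Toy vanilla RNN step. Unlike a CNN, the hidden state h carries
# information forward as the input sequence unfolds in time.
rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden -> hidden (the recurrence)

def rnn_step(h, x):
    # The new state depends on both the current input and the previous state.
    return np.tanh(W_xh @ x + W_hh @ h)

h = np.zeros(n_hidden)
for x in rng.normal(size=(10, n_in)):  # a 10-step sequence, e.g. audio frames
    h = rnn_step(h, x)
print(h.shape)  # (8,)
```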
And you just think about it, Facebook has 8,000,000,000 video views a day, right? Baidu has 6,000,000,000 searches a day. 10% of those searches are voice searches. Nobody does voice search better. Nobody trains voice search better than a Tesla accelerated data center.
Okay? So that's our job: to have deeper and deeper penetration, to be more and more relevant in this hyperscale space. Last year, we had just our K80 and our M40 GPUs. As Jensen pointed out, we now have a wonderful new processor called the Tesla M4, which fits into the skinniest of servers, those 1U, less-than-1-kilowatt skinny servers, and can do inference processing at the edge or in an appliance. And our value proposition is that we not only accelerate deep learning inference, we also accelerate video processing, transcoding, encoding and a whole lot of general applications.
It's a great benefit of having a more general purpose processor such as a Tesla M4. And it gives you an idea of the depth of use. Google, if you look at their TensorFlow announcement, there's this chart in there from Jeff Dean, which shows that less than a year ago, they had fewer than 300 applications calling their deep learning libraries. Today, they have 1,200 applications calling these deep learning libraries. So it gives you a sense of how much adoption there has been in less than 1 year just at Google in terms of using GPUs.
So the last strategy, and this is where we take artificial intelligence to the enterprise. Just like last year, we had the DevBox with the 4 Titans, if you remember that, and that got adopted by these hyperscale and cloud providers to make the first breakthroughs for deep learning. The notion is very similar here. With DGX-1, we give people an opportunity to relatively easily try out artificial intelligence in the enterprise. So my prediction is, right now, people are just starting to use it.
They're going to start, but think about it. Some enterprises will deploy artificial intelligence in the cloud, and that may be part of our hyperscale cloud opportunity. But others may want to deploy it themselves. So for example, you have to ask the question, will Walmart move all of their data to Amazon Web Services? I think probably not.
Now they may put it on Google, so maybe that opportunity is actually in cloud hyperscale, but I'm pretty confident in medical, for example, there are HIPAA regulations. You don't want all of everyone's medical data being put on a shared service of some sort. There needs to be HIPAA type of protection for medical data, for example. So that's our strategy for artificial intelligence in the enterprise. It is a large opportunity.
I have no idea, frankly, how big it is. What I can share with you is that over 3,500 enterprises in advertising, medical, transportation, retail, manufacturing, aerospace, defense and services are engaged with us on artificial intelligence in some way, shape or form. So I do believe that we can leverage the Tesla platform, whether on premise or in the cloud, to address this market opportunity. So in summary, we are the world leader in accelerated computing. We are addressing the data center market.
Data center is a $50,000,000,000 market, but we address only 3 large markets within that data center: HPC, which is a $15,000,000,000 market and for us represents at least a $5,000,000,000 opportunity; hyperscale and cloud, which is around a $12,000,000,000 to $15,000,000,000 data center opportunity, growing very fast as more people adopt the cloud, where our opportunity is of the order of $3,000,000,000 to $5,000,000,000; and a new emerging opportunity in enterprise AI, which is a pretty significant opportunity in my view. Hopefully, I'll come back next year and tell you how we did.
And so thank you very much. And without further ado, I'd like to introduce our Head of Automotive Business, Rob Csongor.
Thanks, Shankar. The last 24 hours have been pretty exciting for me. First of all, for all of you at the investor reception last night, thank you very much for keeping me informed of the Villanova game score. I went to Villanova, and that's my first exciting news. The other exciting news is, of course, all the automotive announcements we made this morning: DGX-1, end to end mapping and the world's first autonomous race car.
So all of these things are kind of highlighting reasons and indicators of why automotive is an enormous opportunity for NVIDIA. So what I'd like to do is I'd like to walk you through some of the things we're working on, some of the indicators, what our strategies are and then how big we think this is and what the growth potential is, okay? First of all, I think as Jensen indicated this morning, automotive is a very fast growing business for NVIDIA. We grew 75% year on year. But aside from the revenue, which shows past growth, we feel like we've barely scratched the surface.
Every one of NVIDIA's businesses, as you know, starts with the concept of an open platform. And fundamental to creating the opportunity is to give people the platform with which they can innovate, with which they can solve problems on top of our platform. And the first indicators that we look for are what are people doing? Are they developing applications? Are they solving problems?
Are they doing things with our platform? So some of the metrics we look at are, for example, what is the rate of adoption of our software developer kits, and what kind of engagement is happening on autonomous driving. And of course, if you walk around here at GTC, you'll see that the impact of deep learning on autonomous self driving cars as well as car cockpits is just profound, and that's at the heart of driving our opportunity. So aside from these metrics, the other thing you can do is just walk around. If you just walk around GTC here, first of all, the numbers, I mean, I don't know if you remember the numbers from last year, but this is more than double, more than double the representation.
So the epicenter of GTC has become all about deep learning, self driving cars and how you develop solutions for them. So if you look at, for example, carmakers, there are 21 carmakers here this year. I think last year, there were like 9 or 10. This is everybody. From Toyota, Gil Pratt of the Toyota Research Institute is speaking in the keynote on Thursday.
You can go to a session here and listen to Volvo talk about the processing requirements for those autonomous cars in the Drive Me project that are going to be driving around Gothenburg, Sweden. You can see a number of presentations from Audi and Audi Research on everything from how to build virtual cockpits to how to build collision detection systems in cars. So just a wide variety of different talks and sessions, not just from the car makers, but also from the ecosystem. So all of the developers, there's 39 technology and software companies here at GTC this year. This is compared to 10 last year, almost 4x.
So for example, all of the complexity that Jensen was talking about this morning in something like, for example, path planning. How does a car drive? How do you calculate or generate trajectories? How do you program in essentially the soul of a car? What is a BMW going to do?
Is it going to just follow behind a car that's driving at the speed limit? Is it going to overtake? How aggressively will it turn? How aggressively will it go? And I think at the heart of that, you also start realizing and you come to the realization that carmakers are not going to outsource that.
They want to understand it, they want to control it, and then they're going to involve a bunch of people to help them. And in order to do that, you need an open platform and you need a forum to do it, and that's what GTC is all about. So to get a good idea of the growth opportunity, I think, in automotive, one of the best ways I would encourage you is to just go listen to some of these sessions. Deep learning is not just about self driving cars. There are sessions here where they're talking about deep learning and how to do face recognition, how to study behavior in the car, how do you have a conversation with the car.
There's a company here called SoundHound, who's using our technology in deep learning to have full contextual conversations just like you would with a person. Show me Asian restaurants, excluding Chinese, open after 5 p.m., that take Mastercard. And then have a conversation with the car where it understands the context.
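As a toy illustration of what that kind of query resolves into once the language is understood, here is a sketch of the structured constraints behind it (the restaurant data, field names and category set are all made up for the example, not SoundHound's actual representation):

```python
# Hypothetical structured form of: "Asian restaurants, excluding Chinese,
# open after 5 p.m., that take Mastercard." All data here is illustrative.
restaurants = [
    {"name": "Thai Basil", "cuisine": "Thai", "closes": 22, "cards": {"Visa", "Mastercard"}},
    {"name": "Golden Wok", "cuisine": "Chinese", "closes": 23, "cards": {"Mastercard"}},
    {"name": "Sushi Go", "cuisine": "Japanese", "closes": 16, "cards": {"Visa"}},
]
asian = {"Thai", "Chinese", "Japanese", "Korean", "Vietnamese"}

matches = [r["name"] for r in restaurants
           if r["cuisine"] in asian          # "Asian restaurants"
           and r["cuisine"] != "Chinese"     # "excluding Chinese"
           and r["closes"] > 17              # "open after 5 p.m."
           and "Mastercard" in r["cards"]]   # "that take Mastercard"
print(matches)  # ['Thai Basil']
```

The hard part, of course, is the deep learning that turns free-form speech into constraints like these; the filtering itself is trivial.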
So all of these different applications are being built on deep learning. And GTC is again, a great place for you to get a feel for this. Existing customers, existing technology customers, carmakers, Tier 1s, start ups. I think you see all of it right here. All of this leads to what we believe is a sizable opportunity.
And it's hard to say exactly what the opportunity is, but if you wanted to get an idea of how we look at it, how we frame it, we see our opportunity basically breaking down as follows. Digital cockpit: all of the revenue that you see so far from NVIDIA, all of that growth, comes purely from digital cockpit, from infotainment. And we look at it as roughly 100,000,000 cars sold in a year, 20,000,000 of which represent what we would call premium cars, right? Premium carmakers plus, let's say, the top 10% of the volume carmakers. So for example, Toyota would have Lexus, or Nissan would have Infiniti. And we believe our opportunity in there is roughly $2,000,000,000. This is an annual opportunity.
Then there's the self driving car market, which we break into self driving cars, meaning ADAS, and then transportation as a service. For self driving cars, we believe that ADAS capability is going to be driven by the desire to have, for example, 5 star safety ratings for cars. So ADAS will become commonplace in many cars by 2020, by 2022, as close to 100% as possible. But within that space, we believe that one-fourth of that ADAS opportunity is something other than the solution that exists today, which is just forward camera detection. Forward camera detection, we believe, will rapidly become a commodity, and we believe that can be brought into the cockpit computer.
But there's lots of ADAS opportunity out there that requires deep learning, significant processing, all of those things that we believe we can add value to. So that's roughly a $15,000,000,000 SAM, and of that, we believe roughly a $2,000,000,000 opportunity for us. Transportation as a service: Jensen showed you this morning the Baidu self driving car. These guys are not operating a small little box that goes on the windshield with a single camera. These are very complex cars, completely designed to be self driving cars, meaning they are doing everything.
They are fusing sensors, they're perceiving, they're seeing, they're localizing, they're doing path planning, and then they're driving the car. So we don't know what that size is. Let's assume it's a couple of million cars. And based on that, we believe it's a sizable opportunity, and a very significant percentage share for us, because there are very, very high processing requirements and significant deep learning requirements. So we believe that's a unique opportunity for NVIDIA.
So just roughly, we look at it as about an $8,000,000,000 market right now, give or take. Either way, a very big opportunity for us. Now, there are a couple of problems. You guys know that NVIDIA's strategy is always to focus on segments of the market, vertical segments where visual computing matters. And in those cases, there's always got to be a significant problem that we're trying to solve.
So the first one is actually one that we've been talking about for a long time, but you're now starting to hear automakers talk about it. And it's not just being driven out of engineering and development. This conversation is now coming from the purchasing departments. This is very significant news. Purchasing is now deciding that you need more processing in the car.
And the reason is the car has evolved into a massive collection of boxes, modules. In the automotive industry, they refer to them as ECUs, electronic control units. So let me give you an example. If you have surround view in your car, today that is a box. It's a $200 box.
It was integrated incrementally, okay? So that's a problem. You have a car now with 65 different modules in it. Each one of them has its own processor. Each one of them has its own software stack.
Each one of them has its own maintenance, its own versioning, its own control code requirements. The cost of maintaining the software is out of control, and the cost of the car is out of control. And if you had to start over, if you were a start up car company, you would never design the car like this, of course. If you were a start up company, you would pull out a piece of paper and design a car with 1 or 2 computers in it, all based on a common software interface. So this is a problem that we believe is not unique to the automotive industry; computing has faced it before, and it's an area where NVIDIA can leverage experiences from our other industries and bring them into automotive.
The other problem is, of course, one that we talked about already. Self driving is just very hard. For all of those things that we talked about, just go and listen in to some of the sessions, and you can look at all of the detail, all of the thought, all of the engineering, all of the innovation that's going into things. I mentioned FKA. That trajectory planning, that trajectory generator algorithm that FKA is working on, is actually extremely complex, right?
It takes a lot of work, takes a lot of processing. So all of those different components or all of the different parts of a self driving loop require enormous thought, enormous processing, lots of algorithms and lots of expertise. So based on these two problems, we have a vision. So NVIDIA's vision is that the future of car computing will be that 2 computers will replace many ECUs. 2 computers will replace many boxes.
And as a result, of course, it will significantly reduce the cost of the car. Both of these computers would have access to cameras and sensors, something that doesn't exist today. But if you think about it, the reason why both computers would have access to the cameras and sensors is so that you could load upgradable software and have it run on either one of the computers. Of course, this computer system would have multiple OSs and many displays. Many of the applications in these cars would be powered by artificial intelligence. Today, they use traditional techniques, traditional speech, traditional computer vision, which have hit a ceiling, and it's difficult to progress further.
So artificial intelligence and deep learning is coming to the rescue. I mentioned upgradable software. Upgradable software replaces the hardware ECUs. All of this, of course, would have to be 1 architecture because the cost of software has become the most significant cost in developing a car. And then finally, you have to have the performance headroom in this kind of a computer so that you can load software applications later.
And in giving that higher performance, you can reduce the overall total cost of the car by a significant, significant amount, okay? So this is what we believe. And our implementation of our strategy to address this vision is an architecture around our DRIVE computer systems. So we have 2 DRIVE computers. One is called DRIVE CX, the other is called DRIVE PX.
DRIVE CX is focused on the cockpit, and the idea here is that this cockpit computer is scalable. You can use it to run the infotainment. You can add GPU virtualization software to it and now run the cluster too. But you're running the cluster using a safety OS, right? For the cluster and the instrumentation, you have to have guaranteed frame rate; it's considered a safety application.
If you lose your music, nobody dies. If you lose your cluster, that's a problem. So different OSs. And if the cockpit computer has access to the cameras and sensors, why not just bring in the ADAS for free? Do object detection, do classification, do surround view, do a surround mirror application, do a visualization system that replaces the side mirrors on the car, put in a system where you can have surround view and you can detect if children run behind the car or if a dog is running around the front.
All of these things now become upgradable software instead of boxes that cost $200 each. We believe this kind of architecture makes sense because you've seen it, of course, in many other industries. That's the fundamental principle of iPhone, okay? So we think this architecture makes sense, and we believe that there's a lot of people in the automotive industry who see this also. So this is our strategy on the cockpit side.
On DRIVE PX 2, as Jensen mentioned this morning, this is the world's first AI supercomputer for self driving cars. And I think one of the questions I got the most last night, people were saying, hey, this box, isn't it kind of expensive? The DRIVE PX 2 is a development platform, but it's scalable, right? In other words, you would develop your algorithms on a DRIVE PX 2. And then if you chose to, you could just start with 1 chip, that chip that Jensen held up this morning, and design a feature with it.
Or you could add a second chip and do more algorithms, or you could take that chip and add a discrete GPU. So it's a scalable architecture, all right? Now, if you're doing ADAS, then the chip solution is the way you want to go. If you're looking at that collection of computers in that picture this morning from Baidu, you look at a DRIVE PX 2 and you go, wow, that's really small. That's wonderful.
So now I can take a trunk full of PCs and compress it down into a DRIVE PX 2. So it's intended to be scalable. Our strategy is scalable, from 1 chip to many systems. Okay? Makes sense?
All of these innovations and everything that we do is based on a fundamental assumption, something that has been wired into the DNA of NVIDIA from the beginning, which is that we are an open platform for all developers. It is a fundamental part of our strategy. We believe that for self driving cars, you do not want to outsource the behavior of your car, the soul of your car, how it drives, to a third party and not have control over it. You want to be able to own the code and control the code. You can work with third party developers, but above all, you take advantage of an open platform, right? So you can use the best of what's out there.
If you walk around GTC, you'll see numerous examples of this. I'm highlighting a couple of them here, but please go and look at them and just see the kind of innovation that's happening. For open platforms here, I'm highlighting kind of a cross section. I mentioned that application SoundHound. This is artificial intelligence speech, natural language understanding.
Drive.ai and AdasWorks are using object detection and classification to do lane tracking, segmentation, free space calculation. Daimler has developed on our platform to do extremely precise segmentation, the image that you see there. Segmentation meaning: where can the car drive? Of all the pixels that are on the screen, make sure that you understand what the drivable space is. This is a curb, this is a car, this is a pedestrian, this is the road, this is where you can drive, okay?
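To make "per-pixel segmentation" concrete, here is a toy sketch of what the output of such a network looks like: each pixel gets a class label, and drivable space is simply the set of pixels labeled as road. The class ids and the tiny label map are invented for illustration, not anyone's actual output format.

```python
import numpy as np

# Illustrative class ids; real segmentation networks use their own label sets.
ROAD, CURB, CAR, PEDESTRIAN = 0, 1, 2, 3

# Pretend 4x6 per-pixel label map produced for one camera frame.
labels = np.array([
    [CAR,  CAR,  ROAD, ROAD, CURB, PEDESTRIAN],
    [ROAD, ROAD, ROAD, ROAD, CURB, CURB],
    [ROAD, ROAD, ROAD, ROAD, ROAD, CURB],
    [ROAD, ROAD, ROAD, ROAD, ROAD, ROAD],
])

drivable = labels == ROAD  # boolean mask of where the car may drive
print(f"{drivable.mean():.0%} of pixels are drivable")  # 71% of this toy frame
```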
So these are just examples of the open platform. So in summary, our strategies, I think, are pretty straightforward. I think I covered them in order. Our fundamental strategy for self driving cars and our opportunity in self driving cars is driven by a powerful need to solve problems that can't be addressed with traditional approaches. Artificial intelligence and deep learning is at the heart of the contribution that NVIDIA brings to the self driving market.
We believe that the execution of this into a car, in the form of a computer architecture, should be one scalable architecture. You should be able to scale that architecture to run infotainment, cluster, mapping, ADAS, autonomous driving, all of those on one architecture, with exactly the same software interface. And in fact, it shouldn't even matter where the software actually sits or runs. Ultimately, this is not just about new features and functionality; we believe that the automotive industry is in a fundamental crisis over how to architect the car.
At the end of the day, putting more processing into the car is going to significantly reduce the overall cost of the car. And the way to do that is going to be that you can increase the functionality and eliminate the need for incremental boxes through upgradeable software. And then finally, in order to do that, we believe the best way to do that is with an open platform for all developers, one where every developer can innovate and create unique things, so carmakers and Tier 1s benefit and have the best choice of what's out there. Okay. That's a summary of our strategies.
And with that, I'd like to introduce the CEO and Founder of NVIDIA, Mr. Jensen Huang.
Good job, Rob. Good job. Thank you.
Did you just call me a flipper? You called me a lot of things over the years, Rob. Flipper, you haven't. Hi, everybody. Welcome to GTC.
GTC is about developers, and developers need platforms because they want to develop applications that are inspired by your technology so that they could solve problems they otherwise can't. Our company, our company, our craft, our thing, what we're all about is building computing technologies for the world's most computing performance demanding customers, the people who need it most. Well, it turned out the customer that we did it for first were gamers. And gamers needed cutting edge technology. They needed the best technology in the world they can get their hands on, so they could have fun.
And as you know, a computer game is essentially a physical simulation of the world. It's a very complicated piece of software, and some of the world's best computer programmers are game developers. It combines, of course, programming, but it also combines physics, it combines AI, it combines art. And you can't just do it so that it's right. You have to do it so that it's right, it's beautiful, it's fast and it's fun. Combining all of those things is incredibly hard to do.
Gaming was one of our first customers and is the driving force of a lot of the things that we do today. But our customers range everywhere from designers to developers and now, of course, AI developers. We dedicate ourselves to this one singular field, one singular craft, and we want to be the world's best at it. And the entire company has dedicated itself to this one craft. That's all we do.
GPU accelerated computing. Now, a long time ago, when we first started talking to you guys about this, GPU accelerated computing was really for games, and it extended itself into design. But I think at this point, it's pretty clear that because of the work that we did with CUDA, because of the work that we did inventing this whole field of general purpose GPU computing, the expansion of our vision, the generalization of our architecture and all of the software and tools that we built on top of it have opened up huge market opportunities for us. It was the reason why we started GTC in the first place.
We had a vision that one of these days, this particular way of computing, which made it possible to take something as heavy, as computationally difficult as a video game and run it at 60 frames a second. Imagine if you could take this application, this capability, this craft, and figure out a way to extend this tool so that other people could use it. Imagine what doctors would do with it. Imagine what researchers would do with it. And then one day, AI researchers discovered it. They had this incredible problem they had to solve.
They were trying to create this new programming model called DNNs, deep neural networks. And they were running their deep neural networks on large clusters. And it was really just a couple of people's idea at Stanford, simultaneously a couple of people at NYU, simultaneously a couple of people at the Swiss AI lab, IDSIA. Somehow it kind of happened at 3 or 4 different places at once, because I think the pain was just too great. And they were all trying to win ImageNet.
Maybe that's kind of what happened. It all kind of simultaneously came together, and the big bang of modern AI happened. All of a sudden, it became true that you could actually train these networks that used to take months. At the time when they started training AlexNet, it took weeks. And now, you saw yesterday, it takes hours.
This new computing model is a big deal. Arguably, we might have discovered the killer app of GPU computing. This computing model is not just a killer app for GPU computing. This computing model is now the killer app for computer science. I don't know if you guys noticed or not, but during the keynote, I don't think I've ever seen any time in the past where the entire computer industry was present.
The entire computer industry was present. IBM was there, Microsoft was on stage with us, Google, Facebook, Baidu, HP, Dell, Cray. Literally the entire spectrum of the computer industry was there, directly on stage, excited about the platform, building a server, building a framework that we're accelerating together, using our GPUs in their cloud services somehow. This is really, really an exciting thing. We're just so excited about it. And quite frankly, 4 or 5 years ago, when we started talking about this, somebody was reminding me just now (it was Matsuoka-san, the head of the supercomputing center at Tokyo Tech, the first supercomputer in the world to use our GPUs).
He said, and he didn't even see it at the time, he didn't even see it at the time, he said: Jensen, some 4 years ago, you showed this one application at GTC, and you were detecting red cars. And quite frankly, to the audience, it looked kind of gimmicky. Well, that detector today turned into something that everybody is using for inferencing, and we're starting to use it with MGH, Mass General, to hopefully advance life sciences.
I mean, that basic approach, we were so fortunate to have seen it and realized that the extrapolation of that computing model would somehow become incredibly powerful. And I said to him that it kind of reminded me of the Carver Mead moment. Now, I don't know if any of you guys were studying computer science during that time in chip design. But about 35, 40 years ago, I came upon this book written by Carver Mead; it was Introduction to VLSI Systems. And what Carver Mead did was a big deal, was a big deal.
Everybody thought of it as kind of a strange book, and they were wrong. It was a big deal. What it did was take chip design, VLSI design, which at the time was a form of art, designed by people who had spent 20, 30 years designing chips at TI and companies like that. You let them craft these chips, and every transistor was different, every transistor was different and was crafted. And Carver Mead came along and said something that was just incredible.
Let's make every transistor the same. Don't craft it, just make it all the same. And then use design tools. If you made it the same, then you could use design tools to optimize VLSI design. That insight inspired EDA, that insight inspired logic synthesis, that insight inspired high level design, that insight enabled this entire industry, chip designers all over the world.
Until then, chip design was really the realm of expert chip designers. Now, every systems engineer on the planet can design chips using the EDA tools that are available, because the transistors are all largely the same. They're all laid out in the same single direction, largely the same size, all controllable and optimizable by software. Well, when DNNs came along, it kind of looked the same to me. And when you extrapolate that, when you extrapolate that, it's very clear the type of problems that we'd be able to solve over time.
All the types of problems that we can't solve today. When we have time someday, we'll sit down and talk about the nuances of deep neural nets; it's really wonderful. The dimensionality that it is able to capture in the data that you simply present to it, the way it extracts these features: really wonderful stuff. There's no way you could write a software program for that. There's just no way.
We discovered a new computing model. We learned about this early on because we had access to all the researchers and because we're using GPUs to accelerate their work. We're fortunate to have seen it. But the decision that ultimately made every difference was the company's decision to go all in on deep learning. We just went all in because it was such a big deal.
It was such a big deal. If you extrapolate it some 10, 20 years forward, the type of things that you'd be able to solve, the type of problems you'd be able to solve is really quite profound. And so we decided to go all in. Today, you saw some of the results of that work. I absolutely think that this is going to be a very big deal.
This will very likely be NVIDIA's largest market. Now this largest market, when I say that, I don't mean that there's actually going to be a market called deep learning. It's going to be the largest business for us simply because it's going to affect every single business. We have quickly applied deep learning to self driving cars because it's very clear. You're not going to write software programs, you're not going to have a bunch of computer vision scientists sitting around trying to detect every tree, every dog, every fire hydrant, every sign, every lane, every lack of lane, every dirt road, you're simply not going to do that.
It makes more sense to collect a bunch of video in your car and just pound that network with it. Computers work 24 hours a day, they never get tired and they get faster and faster and faster. And so we decided to use that approach. But you're going to see us use deep learning approaches now in all kinds of fields. And we're just so excited about the work that we're doing here, very different computing model.
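The "collect data and pound the network with it" loop can be sketched in a few lines. This is a deliberately tiny stand-in: a toy logistic model in place of a deep network, synthetic features in place of video frames, and made-up labels, just to show the shape of the training loop.

```python
import numpy as np

# A minimal sketch of "pound the network with data": collect frames, label
# them, and repeatedly nudge the weights down the loss gradient.
rng = np.random.default_rng(0)
frames = rng.normal(size=(1000, 16))       # stand-in for video frame features
labels = (frames[:, 0] > 0).astype(float)  # stand-in for human-provided labels

w = np.zeros(16)
for epoch in range(50):                    # computers never get tired
    for i in range(0, len(frames), 100):   # mini-batches of 100 frames
        X, y = frames[i:i + 100], labels[i:i + 100]
        p = 1 / (1 + np.exp(-X @ w))       # model predictions
        w -= 0.1 * X.T @ (p - y) / len(y)  # gradient step on the log loss

accuracy = ((1 / (1 + np.exp(-frames @ w)) > 0.5) == labels).mean()
print(f"training accuracy: {accuracy:.0%}")  # well above 90% on this toy data
```

A real network adds many nonlinear layers and vastly more data, which is exactly why the GPU matters, but the loop itself looks the same.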
And as you know, a different computing model also encourages a different computing architecture. One of the things that really drove multicore processors was Java. The fact that you could download from a website a bunch of little Java apps and launch them simultaneously on all these different cores was really one of the drivers of a better experience for your laptop. A new computing model drives a new computing architecture. Deep neural networks: the best way to accelerate them is with GPU computing.
Okay? So that's the big story of this conference. You're going to see a lot of AI stuff, you're going to see a lot of deep learning stuff. So if you have a chance, go through the 500 talks. This is really, really great or even just look at the titles.
The type of work that people are doing, who's doing the work, what that work is doing; read the abstracts, really fantastic stuff. We'll make it all available to you. Now, there are 2 things that I wanted to do today. One was just to introduce this new computing model and why I'm so excited about it. The second thing is to explain, probably for the first time ever, the NVIDIA strategy as it looks from the inside. The NVIDIA strategy as it looks from the inside.
It basically is 2 virtuous cycles. Now we've explained it many, many times as it looks from the outside, but from the inside it kind of looks like this. The first part of our strategy is platforms and networks. As I mentioned earlier, we are a GPU computing platform. We are a computing platform.
The fundamental core of our business is about GPU computing. And this processor, the nature of this computing architecture, the nature of this approach, has very special characteristics. And if we were smart and thoughtful, we'd figure out which industries would benefit most from this computing architecture, and we would select the right markets. Once you select the markets, any computing architecture just needs applications. Otherwise, what we make is just a brick.
We need applications, and the way to create an application, of course, is to have developers. However, a developer will only use your platform if 3 conditions are met, and it's really tough. It's really, really tough to win any developers. The first condition, of course, is that this computing architecture has to add value. If it doesn't make anything better, why would they program to it?
Number 2, it has to actually be easy to use. Even though you can promise all kinds of wonderful things, it has to be relatively easy for them to reveal, to extract, to actually realize the potential of this platform, to do that magical thing that your platform somehow uniquely does. And the third condition: ultimately, their goal isn't just to make a great app. Their goal is to make a great app that millions of people can use. And so it turns out that in order to be successful with a platform, you also have to have large reach.
Well, it turns out that you can't get large reach if you don't have any apps. And you can't get any apps if you don't have any reach. This is that classic chicken and egg problem. You have to start somewhere. GTC was our way of starting that flywheel.
It does take time. Of course, we also had a laser beam approach to go get more content, to get the right content in the right industries, to somehow turbocharge it ourselves. Notice we created physics simulation: GameWorks basically utilizes our GPU computing architecture to bring value to a video game industry that already loved us. So we could take video games to the next level, using that ability to drive our innovation in parallel computing, in physics simulation, in accelerated computing.
We also used it for ray tracing, because we were already very strong in workstations and we could take design to the next level with ray tracing. So we used the architecture in smart ways to turbocharge our existing business as we cultivated other markets. The other markets are starting to show up. It started with oil and gas, because seismic processing was important. It started with molecular dynamics, because it turns out that molecular dynamics simulation, a little bit like deep learning, requires enormous supercomputing: supercomputer after supercomputer after supercomputer. Before you know it, we've got a flywheel going, okay? So number 1, we have to create a platform approach. Now in order to get the exposure, in order to get the footprint, we have a network of partners to take our architecture out to the larger world. We call it NPN, the NVIDIA Partner Network. And I think Shankar was talking a little bit about that.
Now our GeForce platform is really a good illustration of that. And so what I simply did was take the stacks that you've heard us talk about many times in the past and break them down into how we think about it internally as a strategy. GeForce is the world's best GPU. But it wasn't until GameWorks came along, and then GFE on top of that, that it turned into a living, breathing, vibrant platform.
And as a result, developers all over the world now use GameWorks. It makes their games better, which encourages them to enjoy our platform, which encourages them to always have a GeForce in their system. And just about every game developer on the planet has a GeForce in their system. Every gamer gets the benefit of enjoying our platform better using GFE, and we go to market with GeForce through our network of partners. You can get GeForce on the day of launch in just about any country on the planet.
And so this 3 pronged approach, GeForce through a network of partners, GameWorks through game developers, GFE turning the PC into a gaming platform for gamers themselves, these three approaches allow us to start our flywheel: our platform approach and our network approach. Okay? Schematic number 1. Schematic number 2.
Well, it turns out the work that we do is really quite expensive. I know this doesn't sound very good as a shareholder. It is expensive. Now, that's good news and of course difficult news. The fact of the matter is, the higher the level of investment necessary to build the next generation, the greater the barriers to entry.
So that's good. That is good. However, no single market anymore is large enough to sustain the level of investment necessary to bring the next major architecture to bear, which is the reason why our business model is so leveraged and so high in scale. This is a really important part of our strategy. The question is: if you were to build one chip to compete against NVIDIA, what one chip would you build?
Well, it turns out it would take a lot of chips. You would have to build a $50 chip, a $60 chip, a $70 chip, all the way up to a $5,000 chip and apparently today much higher than that. It just takes too many chips to serve all of the markets. Now, of course, building that many chips does cost a lot of money, but not if you leverage one singular architecture. We leverage 1 architecture across all those chips, all those GPUs.
And what specializes each and every one of them is the last slide. Every single one of the vertical markets that we engage, whether it's design, oil and gas, life sciences, cars or games, every one of those ecosystems essentially has a schematic like that. And each has its own team of people who take that platform to engage the ecosystem. And it's taken us a decade to build up these ecosystems. And as a result, we now have multiple platforms that are delivering a great deal of value.
We have an enormous amount of ecosystem expertise. We have an enormous amount of domain expertise. They all leverage 1 architecture. And as a result of that strategy, leveraging 1 architecture while having multiple markets to deliver scale, it is now very, very difficult for anyone without this to invest at the level that we invest and generate the type of returns that we generate. Okay.
So you guys understand: a great business model has leverage and scale, and it's taken us some 10 years to create that leverage and scale. And today, I think you can safely say NVIDIA has quite good leverage and scale. Now, if we were successful in building these 2 things, 1, the platform and the network, and 2, the ability to serve multiple markets using 1 architecture so you can have leverage and scale, then you have 2 feedback systems. Now the chicken and egg problem goes away.
You have 2 feedback systems that are now creating the virtuous cycle. This was, if you will, the fantasy from about 10 years ago, the company that we wanted to reinvent ourselves into and that we had to slowly, piece by piece, build up into the NVIDIA you know today. And as a result of that, there are several characteristics of our company that I think are quite special. 1, we are the world leader in what we do. This is a company that has dedicated itself to one craft.
It is so good at it. The whole company has dedicated itself to simply mastering this one craft. And this one craft is called GPU accelerated computing. It started with graphics and games, but it's much, much more than that now.
It's dedicated to one craft called GPU accelerated computing. The second thing is, we have a platform approach, a multi sided platform approach. We have developers on one side that benefit from our platform, and you saw today that the first thing announced was a product for them: the greatest tools in the world for GPU computing. It codifies and encapsulates the mathematics that computational scientists all over the world are working on, embodied in the NVIDIA SDK.
The other side of the platform are customers. They're customers of ours, excuse me, partners of ours, who, only because we have such market reach, only because developers love our products, only because customers love our products, want to be part of that network so that they can distribute, so they can take our products to market and serve their own business reasons. A network of partners: a 2 sided business model, developers on one side, a network of partners on the other. And if we're successful in doing this 4 or 5 times in different markets, gaming, design, high performance computing, automotive, if we can achieve that level of success in each one of those markets, then our scale benefits, because we leverage 1 architecture into large ecosystems, and the other side of the virtuous cycle happens. We can therefore make greater investments, which allows us to build even better architectures, which allows us to have even more scale. And that's what great business models do: leverage and scale, okay?
So that's it. Those are kind of the 2 internal schematics. I get a lot of those questions and Arnab asked me if I wouldn't mind sharing with you guys and that's it. Relatively simple. It's hard to do and rarely done.
And it's rarely done because it's just very hard to create a new computing model. It just doesn't happen that often. A new computing model just doesn't come along that often. And I think that says something about the specialty of GPU accelerated computing: the results that we provide are not 20% better, they're 20 times better.
It doesn't work for everything, but it does work for some amazing things, and now we have discovered our killer app. It works for something that might very well be used by just about every single industry. We announced 5 things at this GTC. Very quickly, I'll read just one sentence on each. NVIDIA SDK: it's about the developers. It is the absolute best toolkit that you can get for GPU computing.
It's got everything from graphics to computing to VR, from cloud to PC to workstation to embedded to cars, robots and drones. If you're using GPU accelerated computing, this is your essential toolkit. You must have it. The NVIDIA SDK, amazing amounts of technology. VR, you guys know that we're in VR systems all over the world already for entertainment, for architectural walk throughs, but we wanted to push VR in all kinds of directions and you're going to see all kinds of stuff coming from us.
We think VR is a very big deal. This is a new platform: just like mobile was, just like laptops were, just like TV is now, it is a computing platform. Virtual reality is a new computing platform. This is a big deal. We're very excited about that.
Our graphics heritage gives us a home court advantage. Tesla P100: this is the work of several thousand people. This is the greatest endeavor in processor design that I've ever known. Thousands of people working together. I mentioned 5 miracles; I wasn't kidding.
The good news is that we were able to tackle them all. The execution of the team is truly amazing. And we're now holding in our hands, in production today, the world's first GPU designed for hyperscale data centers. NVIDIA DGX1, our first system, a special system, a one of its kind type of system. There are routers made, there are servers made, there are security systems made.
Well, here you are: a deep learning appliance, that's what this is. It just happens to be incredibly fast. It might as well be a supercomputer, because it is based on supercomputing technology. One singular node architecture from top to bottom for deep learning, the entire software stack optimized for deep learning, and it runs every single framework on the planet. You saw Rajan on stage with me with TensorFlow, but CNTK, Torch, Caffe, Theano, Chainer, Preferred Networks' framework, NVIDIA's DIGITS, a framework of frameworks, all accelerated by cuDNN.
Just out of the box: plug it in, start training your networks incredibly fast. And then lastly, our scalable platform, an end to end platform, from the in car cluster, infotainment and drive computer all the way up to the cloud, bringing AI to cars. If there is a machine that desperately needs to be more intelligent, that would be it. And we think that applying deep learning, applying AI, could really make a difference.
And you can see at this show, there are so many startups here; the work that they're doing will surprise you. Using deep learning, using AI, using our platform, you can do magical things for cars. Five announcements here. So that's it. You've heard a lot of different slices, most of them from the businesses.
But the way I think about it kind of cuts across all the businesses. There are 4 dynamics happening in our company that are driving our growth, that we're super enthusiastic about. Number 1, gaming is going to continue to grow, for all kinds of reasons. And Fish already talked about that. VR: a new platform, like mobile, like laptops; new experiences will be possible.
Deep learning and AI, this is the biggest thing that we have ever seen. This is simply too important for us to ignore and we're all in on this. And I think at this point, it is very, very clear that not only was it a good decision, this is now driving industries. We're now in the epicenter of a lot of very important discussions, a lot of very important developments. And surely at this point, we can see that this will be our fastest growing business.
And then lastly, self driving cars: an end to end model, an end to end computing platform to bring AI to cars. Those are the 4 growth drivers. And with that, I would like to introduce Colette, who is going to tell you about all kinds of finance stuff. Here we go, young lady.
Good afternoon. I am your last presenter for today, and I'm going to wrap up what we've learned about our computing model and our business model in terms of how you see that in our financials. I appreciate all the enthusiasm throughout the year, both from our investors as well as our analysts. Today is probably one of the largest Investor Days that we have ever had. So I appreciate the overall support.
So with that, let's get started. Let's wrap up how we finished fiscal year 'sixteen. The title pretty much says it all: records. We hit quite a few records as we finished fiscal year 'sixteen. First, on the revenue side, we exceeded $5,000,000,000 for the first time, growing 7%.
It's probably a model which would surprise many of our peers in the market, those focused on mobile, those focused on chips, those focused on components, as we grew 7% year over year while we didn't actually see any growth in some of those underlying markets, particularly the PC platform. Number 2, a record in terms of gross margin. Our gross margin hit 56.8% and grew 100 basis points. I think about a year ago, sitting here on stage, there were a lot of questions in terms of: are you capable of growing your gross margins? The answer to that is yes.
We have many platforms right now that exceed our overall company gross margin, and customers really are buying into our higher valued, full end to end platforms. Our performance reached $1,100,000,000, growing 18% year over year, more than double the growth rate that we saw in revenue. Really a display of us managing through our investment portfolio: our revenue, our gross margin, holding our overall investment at the right levels, producing the overall profit that we have today. Let's walk back a little bit on our transformation. Just about 3 years ago, looking at the end of fiscal year 'thirteen in terms of our business mix and where we received our revenue.
The blue here, 42% of it, stemmed from our overall mainstream PCs, whether we were providing GPUs into mainstream types of PCs or providing SoCs to Tegra OEMs such as smartphones, tablets and other types of devices. That represented 42% of our business, where our growth platforms represented about 52%. These growth platforms being our gaming business, pro visualization, data center and automotive. But now, looking at the end of the 4th quarter of fiscal year 'sixteen, you see that transformation in terms of where we are right now. The growth platforms represent more than 86% of our overall revenue.
Now we still have bits and pieces of our PC OEM and Tegra OEM, and that's still going to be a place that we are in, but it's no longer any material part of our overall business as we've continued to transform to the platforms that we have today. Looking at our growth mix on the right hand side here, in terms of where we've received our growth over this period of time: over the last couple of years, our growth platforms have on average grown about 25% year over year. Our overall revenue growth is a little bit less than that, as we've continued to move out of our PC OEM and Tegra OEM business. Our market platforms.
This sums up what you've heard today from all of our different presenters in terms of their focus on the platforms that they represent. This is a platform approach; it's not selling underlying chips, underlying components. Each one of them has talked about the entire ecosystem around them, the overall development platform they have, the overall software that enables all of those different growth rates. Let's start with gaming at the top. Gaming is our largest business; it represents a little bit more than 50% of our overall revenue.
In this last year, about $2,800,000,000, growing 37%. Probably one of the key questions that I get from this room all the time is: let's break that down, help me understand, was that related to units, was that related to ASP, was that related to share, what is driving it? The answer is yes. As Fish told you this morning, all three of those are great opportunities. The gamers and their focus: they want to play games.
The higher the production value of those games that are reaching the market, the better that we have an opportunity to monetize. They've bought into our platform, our brand couldn't be stronger. We are number 1 in PC gaming. Our second one, pro visualization. We're also number 1 in terms of graphics in the enterprise.
Graphics that fuel much of the design, much of the manufacturing, much of what you see in oil and gas, almost any product in the world; we have probably more than 80% overall market share. We tend to be where enterprise buying is. It's probably a very mature business where we are, but we still have a strong leadership position. Data center. Now, this is a different type of focus: the top, gaming and pro visualization, is focused on our overall graphics business. Now we're talking about being the underlying computing platform.
You've seen so much of our discussion about artificial intelligence and the birth of deep learning over the last several years. We're probably at an inflection point where this will get much stronger. What's interesting about data center, in terms of a question that we hear, is: we know you've been the leader in high performance computing for some time, but how much of this is now really deep learning, and deep learning in terms of what you see in the hyperscales? If you actually look back several years on this chart, in terms of 2013, that was probably primarily our high performance computing business, all still growing as you've seen the supercomputer growth, the overall high performance computing market. But over the last couple of years, the inflection that we have seen, and will continue to see, is really about our leadership in deep learning and our leadership in artificial intelligence going forward.
Automotive. Automotive brings an interesting perspective: what we have today in cars, the high end infotainment systems in many of those luxury brands, brings together our graphics capabilities for the best cars right now. But what we're talking about in the future is the underlying computing platform for self driving cars. So again, a leadership position in terms of what will drive our cars going forward.
So it's pretty rare to walk into a room and see 4 completely different businesses, all talked about in 1 company, all talking about the same underlying technology. Key things to take away from this in terms of growth rates: you've got our gaming growth over the last 3 years at 30%. Our pro visualization is still a very great representation of our company, about 15% of our overall revenue. Our data center is growing at more than a 40% CAGR over this period of time, and automotive is growing more than 80%. Super, super strong performance from our market platforms.
So what does that mean for gross margin? Just 3 years ago we were at about 52%; we're now reaching the upper end at 56%. Can your gross margins grow higher? When we looked at those other market platforms on the prior page: absolutely. 3 of those 4 have gross margins today better than our overall company average, and they will continue to provide the opportunity for us in terms of gross margins as we go forward. So if we break down our gross margins over this period of 2013 to 2016, it's probably a 3 part piece.
1, our overall growth in gaming, moving high end gamers onto our high end platforms. Number 2, our overall data center and enterprise growth over this period of time. And number 3, less of our focus on chips and components for OEMs, overall driving our gross margin. The next piece we could call operating expenses; we could call it investment. Jensen walked you through our approach of a leverage model and the importance of a leverage model.
We have to go to market against each one of those 4 different markets on the prior page. However, our focus has been on an absolutely efficient model, with the exact same architecture underlying all of our products. That architecture starts to differentiate as we get to the development level, as we get to the software that we provide. But the great piece that we have is, from an engineering standpoint, we spend about $1,200,000,000 on gaming, $1,200,000,000 on ProVis, about $1,200,000,000 on data center and about $1,200,000,000 on automotive. That is our ability to leverage that exact same engineering across all four of those markets as we focus on the go to markets to enable each one of those markets.
As you can see here, over the last couple of years, 2014, 2015, 2016, the number that comes to mind is about $1,600,000,000 to $1,650,000,000. It's been about the same. That does not mean we're not investing. You saw us this morning talk about our future in terms of Pascal and the many years that we've been working on that. That's in there. But it's a leverage model, enabling us to really manage our overall OpEx in total.
So this will continue to be our focus. We still have investments to make to ensure we have the right go to markets for the new markets that we're working on. But focusing that investment in the most efficient way is what we're going to concentrate on. Operating margin expansion. What this has done, taking together all we have here, the overall revenue growth across our growth platforms, higher gross margin as we focus on value based platforms for the market, our continued focus on our investment levels, is operating margin expansion.
Over the last 2 years, we've grown operating margin more than 600 basis points. People ask: where is the future? What does the model look like? We've got great markets, with great and growing overall TAM sizes. We have great gross margins in terms of focus.
Now we'll think about how we can continue to expand our operating margins as we go forward. Cash and cash flow, a lot of questions. Remind me, how much cash do you have? We have about $5,000,000,000 of cash. Of that cash, we have parts of it here in the U.S. and parts of it located internationally. Our U.S. cash basis right now is about $1,400,000,000, and we continue to have extremely strong cash flow as we move forward.
I think last year when we were here, we said our last 3 years' cash flow was a little bit over $700,000,000 to $750,000,000 on average, and over this last year we're reaching record levels at $1,100,000,000 in overall cash flow. So we continue to focus on our balance of what is here in the U.S. and what is international, and we feel comfortable with every avenue that we have to make sure we have that right balance. Capital return. Capital return is not a 1 year thing.
It is a program. It's a huge focus of ours to make sure that we can manage returning the highest level of our free cash flow to our investors and shareholders. Of course, first, we're going to look at what types of investments we need to support the business and fund those first. Secondly, we'll think about any additional inorganic things we may need to add, small pieces to add to our overall investment level. And that leaves us the opportunity to think about what we have available for capital return. Over the last 3 years, since 2013, we've returned $3,000,000,000; 100% of our free cash flow has been returned to shareholders. As you know, as we started the beginning of 2017, we also announced our capital return program for the current year.
We intend to return $1,000,000,000 to shareholders. Year to date, 2 months in, we're halfway through: we're at $562,000,000. So it's a large focus of ours. We take a lot of pride in thinking of the shareholders and this entire capital return, and we appreciate your support. So, driving shareholder value. Let's put this all together in terms of what you've seen from everybody today.
1, driving revenue growth is bar none the most important thing that we're focused on: building very, very large market platforms for us to go and obtain that revenue. And you've seen that now for the last couple of years in terms of the CAGR of these growth platforms. This should be a great opportunity going forward. Improving gross margins as we go: there is a great opportunity, with 3 of those 4 current markets having higher gross margins than the company average, but also thinking about how we can improve that 4th one to a higher gross margin as well.
Leveraging our investments: we will make investments, but we'll be thoughtful and conscious about what we want to do in terms of looking at resources across the team and how we can add resources to improve our go to markets. And then expanding operating margins is probably the right way to think about our overall business model and our overall operating model. We do need the software. We do need the right go to market. We need the right marketing for every single one of these.
So the right focus is down at the operating margin, not stopping at the gross margin as you may have in a prior chip or component type of business. And then lastly, returning free cash flow to shareholders. That's what we'll do at the very end, after we've made sure we have the right investments going forward. I think we have a great value proposition for investors. You've seen our performance over the last couple of years.
And I think we have a great opportunity going forward as well. With that, I'm going to invite all of our presenters from today, as well as Jensen, back on stage. We're going to open up for Q and A. We're leaving at least this last hour to answer any questions you may have about what you heard this morning at GTC or about any single one of our businesses.
Could I have a less sticky one? Okay. Is there one that's less sticky? Here, you take this one. I've never been quite this comfortable before.
I usually like to be in an attack posture.
Hi, Vivek Arya from Bank of America. Thank you for the presentation and the very impressive long term vision. My question is more on the medium term model. I think we saw how you've grown over the last few years at about a 20% plus pace in your growth areas. Is that a sustainable pace of growth over the next 2, 3 years?
What would be the pluses and minuses? Like what would make you grow slower or faster than the rate? What are the key variables that we should think about? Because I think this is the key question investors have, Jensen, that they understand the very nice vision that you have laid out for the next few years, but we don't know when self driving cars will happen. We don't know when a lot of these AI projects will really translate into revenues.
And for good or bad reasons, we worry about the next 2 or 3 years. So is the growth rate over the last 2 or 3 years sustainable and what are the pluses and minuses to that?
I hope not. I hope we grow faster than in the last 2 or 3 years. Now, the reason for that: the only way to grow is to grow into a vacuum. In fact, most of the time, by the time you're looking at technology oriented growth rates in terms of 10%, 15%, you're probably not in growth mode.
And so I'm excited about our business, because over the course of the last decade, we've created the conditions by which we're growing into a vacuum now. If I were to think about growth, here's kind of how I would think about it. Number 1, in terms of percentage and of course size, I think gaming is still a very big deal. And I'm expecting, and hoping on good first principle reasons, that gaming is going to grow nicely both in percentage and in dollars, okay?
And it's growing into a vacuum. The reason for that is because, as Fish I'm sure already mentioned, our installed base needs to be upgraded to the current level of gaming capability. That difference is the vacuum that's going to pull us up. The second part is that in developing and emerging countries, one of the greatest forms of entertainment is games. It's largely free, right, free to play.
Esports is largely free. It's also a very social thing. And then, of course, there are many other reasons, as Fish already talked about, why we believe GeForce is going to continue to grow. Not to mention, VR came along as a super bonus. So that's number 1: the growth rate, and of course the magnitude, because gaming is already a large number for us.
The second thing, long before I would get to self driving cars, is that our data center business is likely to be our fastest growing business. I mean, you guys already know that it's a multi-hundred-million-dollar business already, and it could grow at the percentages that we know data center businesses grow at. And the reason for that is because our gross margins are high there, because we save enterprises so much money. The value prop, as Shankar was gracious in saying, is that the TCO is incredibly low. I mean, that's another way of saying you save people money.
I mean, we replace, with 1 GPU server, several hundred servers. And so as a result, you save millions of dollars. To be able to do deep learning at the scale that people are talking about now is really not possible if not for the acceleration, and also the amount of expense that we've saved people, okay? So I think you're going to see that Tesla has a really, really exciting growth opportunity. It's got 3 different vectors that it's growing into, and they're all growth.
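To make the TCO argument concrete, here is a minimal back-of-envelope sketch. The "1 GPU server replaces several hundred servers" ratio comes from the talk; the per-server prices, power draws, and electricity rate are assumptions chosen purely for illustration.

```python
# Rough TCO sketch for "1 GPU server replaces several hundred CPU servers."
# The replacement ratio is from the talk; all prices and power figures below
# are assumed for illustration only.

def cluster_cost(n_servers, price_per_server, watts_per_server,
                 years=3, dollars_per_kwh=0.10):
    """Capital cost plus electricity over the deployment lifetime."""
    capex = n_servers * price_per_server
    kwh = n_servers * watts_per_server / 1000 * 24 * 365 * years
    return capex + kwh * dollars_per_kwh

# Assumed: 250 commodity CPU servers vs. 1 GPU-accelerated server.
cpu_cluster = cluster_cost(250, price_per_server=5_000, watts_per_server=400)
gpu_server = cluster_cost(1, price_per_server=30_000, watts_per_server=1_500)

savings = cpu_cluster - gpu_server
print(f"CPU cluster: ${cpu_cluster:,.0f}")
print(f"GPU server:  ${gpu_server:,.0f}")
print(f"Savings:     ${savings:,.0f}")
```

Even with a deliberately expensive GPU server, the saving under these assumptions lands in the millions of dollars, which is the order of magnitude claimed.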
I mean, we're seeing growth in all of them. I mean, it's just real growth. And the thing I would take away is the success of M40 and M4. One of the things that people asked us a lot about this year, and I couldn't really address it until the success emerged, is that our GPU is not only great for training, it's also incredibly good for production. And that cute little M4, it fits into every server out there.
And so that combination, one architecture, incredibly flexible, the highest performance as well as the most energy efficient at the same time, that I think is a really great killer combination. So hyperscale, high performance computing, supercomputing, as well as now enterprise are all growth opportunities for our data center business. I think that's going to be our 2nd fastest growing business, both in percentage as well as in numbers, okay? And then, of course, there's the other matters that we talked about. But I think if you just focus on those 2, you wouldn't have to estimate, just like I would have to estimate, when self driving cars are coming along.
You and I both know it's going to come along. There's just simply too many companies involved. And all of those companies today, if they're not shipping production self driving cars, you got to ask yourself what are they doing? Well, they've got to be developing, right? And we've created the ultimate self driving car development platform and we're about to be shipping it.
And I think they could be big numbers. I mean, these are pretty robust development systems. And even though it's robust to us, you just need to know it saves them enormous amounts of money. I mean, each one of these cars, built with a data center inside the trunk, is a million-dollar endeavor. And for us to come along and offer something that is in the thousands, it just brought tears to their eyes.
They're so happy. And so every car company needs to have these self driving platforms, right? And they all want to have AI algorithms and development. They have lots and lots of software engineers. And software engineers need a coherent platform to program.
And who knows that better than we do? Who knows that better than we do? Okay. That's why Drive PX is designed as an open computing platform. And that's why all the APIs are there, the SDK is there.
That's something we know very, very well. Okay. So I think if you think about it from that perspective, start with gaming, then go to high performance compute or go to data centers, the overall Tesla business, which is fueled by high performance computing as well as deep learning. And then you go to self driving cars, you might find yourself much more able to compute some of these things. But I think the high level point is that we're growing into a large vacuum of pretty exciting dynamics that are going in the industry.
Okay. Thanks, Vivek.
Thanks for holding the Analyst Day and for all the great presentations. Jensen, if I think about 15 or 20 years ago, you could identify the graphics processor or math coprocessor companies in the market. And today, I struggle to think of companies who are making the kind of investments that you guys are making. And so I was hoping that you could talk about your view of the competitive landscape today. Who do you expect to be bumping into, or who do you think you're competing with in your different end markets?
Yes, that's a really good question. It turns out that we compete against inertia. We compete against inertia, and what I mean by that is, we're not a co processor anymore. We're not a co processor anymore. A video processor is a co processor, an audio processor is a co processor, a networking chip is a co processor. It takes a software stack, an API, a fixed amount of work, and you offload it into this thing.
That workload is fixed. When you're decoding Blu ray, that workload is fixed. When you're playing audio, that workload is fixed. When you're doing 10 gigabit Ethernet or Wi Fi, that workload is fixed. Do you guys understand what I'm saying?
That workload is largely fixed. You're limited by the bit rate of the medium. In the case of audio, unless you're a dog, there's a certain dynamic range that you enjoy. And come on, guys, that was fairly funny. I mean, I can't count on Arnab to tell any jokes.
I mean, I got to throw it in there. It's fairly funny. And so the medium itself limits certain things. And those things are offloaded functionality, and they become co processors. And over time, as Moore's Law continues to advance and the number of transistors grows, they get pulled into either the south bridge or the north bridge, or get replaced with software on the CPU.
GPU accelerated computing is a very, very different problem. There is no fixed workload that we run. We run an application that a developer writes on top of it. Weather simulation is not a workload, simulating viruses is not a workload, training a network is not a workload. It's never done. It's never done.
You'll never be limited by anything. It's never just, I want to simulate the El Reno tornado, and I'm done. It never happens like that. They want to understand more and more and more about it, and the computational limits are literally unbounded. So we're a computing platform.
We're like a CPU, if you will. That was the great insight, in fact, some umpteen years ago: that the GPU has a unique characteristic about it that's very different than co processors. It's not like an audio chip.
It's not like a video chip. It's a radically different thing. If we could general purpose this thing, generalize this thing. And we started generalizing it, starting with Cg. Many of you, if you go way back, we invented a language called Cg, C for graphics, C for GPUs. It then, a couple of iterations later, became CUDA. But if it wasn't for Cg, we wouldn't be down this road at all.
And so we've been pushing for this idea of generalizing this architecture so that it could be a new computing platform. And that insight, I think vision matters. For building businesses, I believe vision matters. I believe that when you want to build a good company, vision matters, strategy matters, perspective matters, focus matters, these things matter. They probably even matter more than anything else.
Of course, execution matters. But you guys know doing the right thing is far, far, far better than doing things right. I know it sounds strange, but it's true in life. And so if you could do the right thing and do it right, which is what we try to do as a company, I think you could discover these new things. And so GPU computing, is just a brand new thing.
So what am I up against? Well, I'm up against basically people who don't want to take on the work, or for whom it's not worth it, to port the code. Getting code ported, getting applications developed, getting new mathematics done, a new form of mathematics, the refactoring of the mathematics, getting that done is really hard work. And that's what GTC is all about: to get people to think differently about the computing platform, to think about this architecture as the starting point. Let me give you an example.
Bryan Catanzaro, he's such a nice guy. He's so understated. He's one of the great researchers in the world. His starting point is the next generation RNN, which is the fundamental fabric of speech, which is one of the most important things in the next generation of computing platforms. Microsoft calls it speech as a platform, right? And so this is such an important thing, natural language processing.
If you could figure out natural language understanding, just imagine this, if we can figure out natural language understanding, all we have to do is have a computer go to the Library of Congress and read every book that's ever been written. Read every single book and it will discover new knowledge. Just read it by itself and it will understand it and it will train its new network to read other things. And then before you know it with a little bit of imagination, it might be able to create new knowledge. And we just demonstrated to you guys the creation of new knowledge today.
The unsupervised learning from Yann's lab is the creation of new knowledge. By learning from a whole bunch of paintings, the program was able to generate new paintings never done before. It's the creation of knowledge. And so if we could figure this problem out. Now, here's Bryan's work: it doesn't start with a CPU, it started with a GPU. The entire algorithm is completely founded on the architecture of Pascal.
I think we've moved the needle, so that finally all of these researchers coming to GTC no longer start on another architecture and port to this one; now they start from this one, because they can count on this one. The GPU accelerated computing architecture that we've been promoting and evangelizing for a decade is now available in the cloud, in enterprises, in IT rooms, in laptops, in desktops, in cars, in embedded devices. This architecture is now everywhere. It is completely accessible. And so I think people can start to rely on it.
And that ultimately is the most important thing we do to deal with inertia, to deal with complacency, otherwise this architecture would have no benefit. Okay. It was a long answer, but it was a really, really important question. And in fact, it's ultimately the Uber question. Yes, sir?
Thank you. Ambrish with BMO. Jensen, I had a question, right over here.
Hey, Ambrish.
Hi. I did have tears in my eyes when you made that joke.
It's pretty fine to me.
My question was on deep learning.
Yes.
Two concepts you introduced today. I get the same feedback when I talk to potential customers or current customers, and my access is nowhere close to yours: that yes, GPUs are great for training, but when we go to deployment, we go to CPUs. So the question is, with the M4, it can't just be, hey, here's a chip, M4, that does it, so you can take training and then move to execution with us. What are some of the hurdles? And Shankar, you talked about a TCO 60% saving.
What's the right way to think about how the move from training to deployment gets facilitated by M4?
Thanks. There are several conditions that have to happen in order to be successful with a business. I mean, when you launch a new product, just because it looks good on paper, as you're probably inferring, it doesn't mean that it's going to be successful. There are several conditions that have to happen. Number 1, your customer's condition has to change.
In order for a customer to consider a new solution, their condition has to change. Otherwise, they would just keep buying what they used to buy. I buy largely the same products from Safeway as I always do. And so unless my condition radically changes, there's really no reason for me to change my brand of toothpaste or my brand of this and that, okay? And so I think the customer has to have a different condition.
Now, the condition that's changing in hyperscale is pretty dramatic. Number 1, the first condition is that their workload is changing. There are 2 types of workloads going into the data center that didn't used to be there before. The first one, of course, is, as you saw in Shankar's presentation and as you heard Rajat say on stage, Google's application of deep learning is going through the roof. The use of deep learning in applications obviously changes the dynamics of how they use the data center. It must, it must.
And so the workload is changing. The second workload that's changing is live video. You know that live video is probably one of the most sought after new forms of content in just about all the social platforms out there. It's just much more engaging to be communicating with somebody you know live. And now it's possible to actually have live video.
Let's see, Amazon bought Twitch. Amazon bought a startup company that came here before, called Elemental, for real time transcoding of video. Sam Blackman's company, it was fantastic. Real time transcoding is something our GPU does incredibly well. M4, wow, it's so killer. It's so killer, another workload. And these two workloads are consuming a large part of the data center.
So number 1, the workload is changing. Number 2, the alternatives also have to change. If you want to 10x the amount of workload of these types of problems in a data center, but your CPUs aren't increasing in performance by a factor of 10 over any near horizon while keeping power and cost the same, then you've got to find another solution. And nothing is better than having a GPU accelerated data center, because you offload all of those things. And as a result, by adding 1 GPU into all of the servers, you can keep your data center cost approximately the same while the workload increases dramatically, and their IT cost, their capital cost, is controlled.
And so we save people money at a time when they have very few options. And then 3rd, your solution has to work. The criticism of GPUs, up until we announced GIE, the GPU Inference Engine, which is the production side of the network, was that the performance wasn't that great. It was 4 images per second. Four images per second is faster than a CPU, but it's not that much faster than a CPU.
Well, 24 images per second is many times faster and much, much more energy efficient. It's like 5 times the energy efficiency, it's 5 times the performance. And so that's a big deal. You can now load your data center with 5 times as much workload and really doesn't add that much more cost. You don't have to build another data center, for example.
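The throughput figures quoted here, roughly 4 images per second before GIE and 24 with it, can be turned into a capacity estimate. A minimal sketch; the daily image workload is an assumed number for illustration, not a figure from the talk.

```python
# Back-of-envelope capacity math using the inference throughputs quoted in
# the talk: ~4 images/sec per accelerator before GIE, ~24 images/sec with it.
# The daily workload figure is an assumption for illustration.

SECONDS_PER_DAY = 24 * 60 * 60

def accelerators_needed(images_per_day, images_per_sec):
    """Sustained accelerators required to keep up with a daily image stream."""
    required_rate = images_per_day / SECONDS_PER_DAY
    return -(-required_rate // images_per_sec)  # ceiling division

workload = 100_000_000  # assumed: 100M images/day through one network

before_gie = accelerators_needed(workload, images_per_sec=4)
with_gie = accelerators_needed(workload, images_per_sec=24)

print(f"Accelerators before GIE: {before_gie:.0f}")
print(f"Accelerators with GIE:   {with_gie:.0f}")
```

Under these assumptions the same stream needs roughly a sixth as many accelerators with GIE, which is the "load the data center with several times the workload at roughly the same cost" point being made.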
So multiple conditions have to happen. And I think we're seeing that. We're seeing the workload is changing. We're seeing that the alternatives aren't emerging. Designing your own chips is a lot of work.
You're only going to buy 100,000 of them, 200,000 of them, 500,000 of them. That's not a big number. If you want to design chips, you really need to have millions and millions of them before it makes economic sense. And then lastly, an alternative solution has come along. Those three conditions all happened. Yes, sir.
Yes, sir. Sanjay, over here. Sanjay from Nomura. Hi, Sanjay. One question on automotive, Rob.
You talked about how the automotive industry is in crisis, and now they have to select a platform to move forward with self driving cars, and the choice of the platform. And you indicated that there are 2 ways to do that, a 2 part CPU solution: one for the cockpit and the other for self driving. And my question is, is that one automaker's particular view? Is it coming from just one automaker, or is there a consensus emerging among other automakers as well that this is the approach they want to go with? So a 2 part question: how long do you think this debate will take to resolve?
And second, why would automakers not think of this issue the way Facebook approached compute and networking via the Open Compute Project, versus selecting these choices from a vendor which is first in these markets?
Is this on? Yes. It's a great question. I think there are 2 different types of car companies. First of all, I think within start up car companies, there is no ambiguity, there's no question.
They're starting very much with the approach of a modern car computer, and that's an advantage of being a startup car company: you don't have 100 years of inertia and legacy and organization and momentum to overcome. So for start up car companies, I think the answer to your question is that this type of architecture happens very fast. And the concern about being left behind is a motivator that pushes the existing car companies, but they also have everyday issues to deal with. They have to ship existing solutions. So I think part of what we're proposing is an approach that's founded on small steps, big vision.
You don't have to completely reorganize the entire car company and shift people around and do massive reorganizations. Within a couple of minutes, you can have a vision of moving towards an architecture that starts by putting more processing headroom into a car and then taking small steps. You can provide infotainment as well as cluster, as well as bring in some basic ADAS, and start there. And this architecture allows you to do both. It allows you to build this supercomputer, and it allows you to take the first initial steps. Does that make sense?
Hi, David Wong, Wells Fargo. So we heard about the P100 today, a big, hefty new chip. What are you guys going to be bringing out to keep the gamers excited this year? Can you give us some feel for what's coming out this year on the new product front? And also, after Pascal, is there some other scientist we can look forward to, and what would that involve?
Yes. So may I take this one? The answer is no. I don't know which question that was for. We have some really, really exciting new products coming in the future.
And the products we're working on in the future are better than the ones we made in the past. But they're not available today. And the products that we have today are the best in the world and customers should continue to enjoy them. And then when the new ones come, they should buy those and quickly dispose of the old ones. In terms of the next scientist, Volta,
Jensen, hi. It's John Pitzer with Credit Suisse. I might just be describing something you said earlier in a different way.
I felt I didn't get the necessary reaction. Volta is actually a very important scientist. I'm sorry.
No, it's fine. Finish.
Go ahead. Well, I felt that people didn't appreciate the name nearly as much as we appreciate the name. I mean, Volta is actually pretty close to our heart.
Yes. I might just be describing something that you described earlier in a different way. But one of the advantages you have: a lot of the markets you go after don't suffer from the "it's good enough" issue. And you can argue in hindsight that one of the problems for NVIDIA in the handset market is that it was a form factor where things got good enough pretty quickly. Given that a lot of the markets you're going after are not performance saturated, can you help us understand what that means for average selling prices going forward?
If you don't want to give us absolute numbers, do you think ASP growth is going to be a larger component of your top line growth going forward than it has been over the last couple of years? Or how do you think about that dynamic given that as you bring out faster and faster performing GPUs, it's likely that your customers are going to buy up in the stack?
I really do hope that our unit growth is going to continue. I mean, there's no question that the applications that we serve today, like data centers... hey, Arnab, the gentleman is right in front of you. I thought that was just one of your jokes. He kept leaning this way and you kept rocking in sync with him.
I thought it was very clever.
The jokes you don't make are better than the ones I try to make. That's right. That was very clever. I thought you had eyes in the back of your head. It was incredible, perfectly synchronous.
It is true that many of the applications we serve today have much, much higher ASPs, and that's because data center applications are very software rich. And so you can't really use semiconductor cost of goods sold anymore as a way to think about the value of the product. There's simply too much software involved. And so I think increasingly you are going to find that our company is going to become more and more of an algorithm company, a software company, and much more of a platform company. And we are, of course, enabled by the processors that we use.
And if it weren't for the processors, we wouldn't be able to reveal and provide these new capabilities. And so surely, surely, the ASPs will grow and the margins will grow, but because of the different nature of our business: not because we are now just selling servers, but because we're selling into servers in a very different way, one that in fact most companies have never experienced. If you look at our GRID business, as Jim was saying earlier, it's now separated from our processor business. The processor business is Tesla, and the software business is GRID. And so the business model for the company is going to increasingly continue to change, and we start from first principles, always start from first principles.
What is it that you're making? For whom are you making it? What is the most efficient way that we can provide that capability to customers, so that they can enjoy it in the easiest way, and for you to economically benefit from it and continue to invest? And so we think about those businesses from first principles. Now, on the second part about units, there is nothing about the businesses that we're engaged in that's going to reduce units.
For example, I happen to believe that the car industry will have multiple computers in it and it's a volume business, it's a volume business. I happen to think that our gaming business is going to continue to grow, it's a volume business. And so I think there is a fairly good first principle reason why I think we're going to continue to grow in units as well.
Dave Lachman from Stanford Research. I had a question about the growth in pixel count of displays in gaming, versus the fact that the consoles are now frozen at 1080p, 60 frames per second. So is there a cyclical nature to the graphics business that helps you right now, because the consoles are out and frozen, but displays keep getting better?
The truth is actually both good and bad. If the consoles were 640, let's use that as an example, and most game developers were targeting 640, it would be rather hard for us to continue to drive higher end content, because the content would have been created for 640. 1080p is full HD, and it gives us plenty of headroom to really add a lot of content to it. And so, for example, when you're looking at 1080p television, even on a 68 inch LCD display, 1080p looks pretty darn good. And so there's still quite a bit of work that we can do to enhance 1080p.
Going to 4K is of course even better than that, but there is so much we could do. One of the big initiatives, of course, is moving to HDR, to be able to process color that is at the extremes of the fidelity that your eyes can pick up. Right now, if you just take a high dynamic range picture on your phone, look how much richer it is. So imagine doing that in real time: instead of processing that photo over the course of, call it, 1 or 2 seconds, we are doing it at 60 frames a second in 3D. And the other thing, of course, is that VR needs much, much higher resolution.
I think Vishal already said it earlier, but today you have 1080p to each eye. 4K is barely enough, and I think it would be kind of nice if we had even more than that. And so I think we have plenty of headroom in the work that we want to do, but resolution: the higher it is, the better.
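The resolution point can be made concrete with raw pixel-rate arithmetic. A minimal sketch; the refresh rates (60 Hz desktop, 90 Hz VR) are assumptions, while the resolutions follow the figures mentioned in the talk.

```python
# Pixel-rate comparison behind "VR needs much higher resolution."
# Refresh rates are assumptions (60 Hz desktop, 90 Hz VR); the resolutions
# follow the figures mentioned in the talk (1080p per eye today, 4K per eye).

def pixels_per_second(width, height, hz, eyes=1):
    """Raw pixels the GPU must shade per second for a given display setup."""
    return width * height * hz * eyes

desktop_1080p = pixels_per_second(1920, 1080, 60)
vr_1080p_per_eye = pixels_per_second(1920, 1080, 90, eyes=2)
vr_4k_per_eye = pixels_per_second(3840, 2160, 90, eyes=2)

print(f"1080p desktop:  {desktop_1080p / 1e6:,.0f} Mpix/s")
print(f"VR, 1080p/eye:  {vr_1080p_per_eye / 1e6:,.0f} Mpix/s")
print(f"VR, 4K/eye:     {vr_4k_per_eye / 1e6:,.0f} Mpix/s")
print(f"4K VR vs 1080p desktop: {vr_4k_per_eye // desktop_1080p}x")
```

Under these assumptions, 4K per eye at VR refresh rates is 12 times the pixel rate of a 1080p desktop game, which is why the headroom claim holds.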
Over here in the middle. Erik Linde from AllianceBernstein, now known as AB. So "E. L. from AB" is my joke of the day.
EL to AB. Yes. That's hilarious.
So, I wanted to ask about VR versus AR. And I know that in VR, it's pretty obvious what you would buy today. It would probably be like a GTX 980 Ti or 980 or 970. But for AR, what kind of card would we need? And looking at the prototypes of today, such as HoloLens and some of the drawings of what will be Magic Leap, it doesn't look like there is a whole lot of room for like a beefy GPU there.
So what kind of hardware would they use and what's your angle there?
Yes, there are 2 types of AR. Well, there are more than that, but let me just highlight 2. In fact, you already know that there are 2 types of VR as well: one is the Samsung Gear VR, and the other one is the Oculus Rift. The Oculus Rift is connected to a supercomputer.
The Samsung Gear VR is connected to a little mobile chip. And so there are different types of experience you can have with each one of them. One of them is much more, if you will, pre rendered or pre authored content, which is like a Samsung Gear VR. The other one is interactive content and is generated in real time, much more like Oculus Rift. In terms of AR, you're going to see a few different forms.
And one of my favorite forms, the one that I'm super excited about: what you described is HoloLens, and I'm super excited about HoloLens. I think the work that they do is really quite amazing, and there is a lot of real time computer vision work being done. You can just imagine what it takes to register where the computer graphics is in 3D space relative to the environment, okay? That's just the first step. The second step, and this is the part that they'll have a hard time with unless you really start to add a lot of very beefy graphics: if you want the object to essentially melt into the environment, so it doesn't poke out like some computer graphics but is seamlessly integrated into the environment, you have to relight. It's called relighting.
Relighting is super hard. First, you have to figure out what the lighting condition in the room is, and then you have to go and relight the 3D object that you put into the room, and then register and do all that stuff in computer vision and essentially hide that object in the room. Well, that's very possible to do, but it's very hard to do. Nonetheless, HoloLens today is really about putting menus up, a little bit of the Iron Man effect, if you will, which I think is very cool. The AR that we think about: think of it as starting from VR, but your head mounted display is translucent.
Imagine VR, just like your Oculus Rift, but there are cameras in your head mounted display as well. And so you start with computer graphics and you mix in video, instead of starting with a piece of glass and mixing in computer graphics; you go the other way around. That form of AR is going to be much better for design, for example, architectural walkthroughs, that kind of thing, where the 3D is more important and you could add a live person into it, for example, instead of the other way around. Okay.
So the answer is you're right. Most AR stuff today is rather basic. But when we think about AR, it's really VR plus. So it's actually much harder. Good question.
Fun question. Nothing gives me more joy than to answer those questions. Hi. Thank you, A. B.
Hi, this is James Wang with ARK Invest. Over here, Jensen. Okay. Tell Sheryl her team did a great job on the show. It's beautiful.
Well, thank you. Thank you, James. She did do a good job. She is amazing.
My question is on data center. I am a big believer. I saw Tesla right from the early days. But my trouble is every time I try to do the CEO math so to speak on the opportunity, I get a small number. So just an example.
Well, who taught you CEO math?
No comment.
You can't use my trademark phrase.
Well, maybe you can do so.
After you leave NVIDIA, you can't use the phrase CEO math. You can only use it inside the walls of our company. There's a phrase at NVIDIA that's called CEO math. Now, let me just explain, for all of you guys who would like to understand this. CEO math is founded on understanding first principles. And in a world with a lot of ambiguity, you can very easily understand the major variables and estimate the future.
I can estimate a lot of things very, very quickly based on that. Well, over the years, for the less inclined, CEO math has become known as basically what you use when you don't know the answer. And I think, James, you're applying it in that way. That was funny. Okay.
CEO math, yes. In fact, I'll just disagree with you right off the bat, so that you don't have to finish your thought. Since you used to work for me, I can do that. The reason why I would disagree with it right off the bat is because all you have to do is do the CEO math that Shankar did. In fact, that was really quite perfect.
All you have to do is ask yourself if we are going to get to exaflops. That's the first question: if. If we're going to get to exaflops, the next question is when. Now, can you conclude that there's enough workload to get to exaflops? It's very clear there is. It's just abundantly clear that there is, okay? If you conclude that, then the next thing is when, and he said 2022. Okay, I think it's a year late, but that's okay, right? Then all you have to do is extrapolate backwards.
How many of those computers are going to be GPU accelerated? Well, it turns out, unless you're GPU accelerated, you won't get there. It's actually theoretically impossible at this point. There is literally no one on the planet that I know of who believes that you can take off the shelf processors, add them all together connected with InfiniBand, and get to exaflops. Nobody believes that, not one person, or not one sane person, okay?
And so, therefore, every one of them is going to be accelerated. That's the reason why CORAL is accelerated. The next 2 largest supercomputers on the planet, and I'm just really delighted to say that they're built by our nation, a nation that cares about science, both of those are GPU accelerated. The work that we do with IBM, it's architected a lot like Watson, and it has NVLink in it, NVLink 2.0, and it has a scientist in it called Volta, and it's going to take us to exascale. You just draw that straight line.
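The "you can't get to exaflops with off-the-shelf processors" claim is easy to sanity-check with node-count and power arithmetic. A minimal sketch; the per-node flops and watts below are assumptions for illustration (roughly period-appropriate ballparks), not figures from the talk.

```python
# Why "off-the-shelf processors plus InfiniBand" struggles to reach an
# exaflop: a node-count and power sketch. Per-node flops and watts are
# assumptions chosen for illustration only.

EXAFLOP = 1e18  # floating-point operations per second

def nodes_and_megawatts(flops_per_node, watts_per_node):
    """Nodes required for an exaflop, and the resulting power draw in MW."""
    nodes = EXAFLOP / flops_per_node
    return nodes, nodes * watts_per_node / 1e6

# Assumed: ~1 teraflop, 300 W per CPU node; ~20 teraflops, 1 kW per GPU node.
cpu_nodes, cpu_mw = nodes_and_megawatts(1e12, 300)
gpu_nodes, gpu_mw = nodes_and_megawatts(20e12, 1000)

print(f"CPU-only:        {cpu_nodes:,.0f} nodes, {cpu_mw:,.0f} MW")
print(f"GPU-accelerated: {gpu_nodes:,.0f} nodes, {gpu_mw:,.0f} MW")
```

Under these assumptions, the CPU-only route needs around a million nodes and hundreds of megawatts, which is why the argument concludes that every exascale machine will be accelerated.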
I believe the HPC story. No, I believe the HPC story. The story I'm having trouble with is the hyperscale data center problem, right?
I'll just go this way. Yes. Well, I just did one piece of math for you. It's already multiple billions of dollars; it's not small.
Sure. That is totally vetted out and your revenues show it, right? But just for, let's say, hyperscale right now: Facebook. Yann LeCun said that Facebook uploads about 600,000,000 photos a day. They run 2 nets against that, and it takes 2 seconds to process it across the 2 nets. So that's just on the CPU.
If you do the math on that, it only takes about 1,000 CPUs to support the real time, everyday upload of Facebook's photos. So 1,000 CPUs; call that 100 GPUs. So that's not a large number. The AlphaGo system used 50 GPUs in training and 200 to 300 or so in deployment, and Baidu's cluster uses about 800 for training. So everywhere I'm looking, bottom up wise. I understand the top down approach, but bottom up wise.
But that bottom up is not CEO math. That's true. So, these are large companies, right? These are the largest companies out there.
But not that
Instead of just counting beans on the ground, I'm going to help you understand the future, right? And so the fact of the matter is
that bottoms-up approach is
not CEO math. Let's just agree on that. I agree on that. Okay, all right. And so that's why you're not working at NVIDIA anymore.
And so I had to let you go, all right. And so that's the basic math, okay? And so, James, I love you, you know that. Okay. So the way that you should think about it is, if you want to project into the future, you can't use the present.
That's just kind of first principles. And the reason for that is because if we use the present, all you have to do is go to yesterday's present: it was 0. It was 0 the year before that too, and 0 extrapolated from 0 never gets you anything more than 0. And so the question after that is just belief system.
It's a belief system. It's a belief system. It's not what people tell you, it's what you believe. Now, if you want to invest in the future, you got to ask yourself what do you believe. If you want to invest in a start up, you got to ask yourself what do you believe?
Do you believe that DeepMind is going to be a company of importance? Do you believe that Google made the right decision investing $500 million in a company that literally had no products and no intentions of having a product? I thought it was one of the best investments in the history of mankind. It's a belief system, it's simply a belief system. And so the question is, what do you believe?
Now I happen to believe, I happen to believe this. I happen to believe that most data centers, most hyperscale data centers, will consume a large part of their data center just training, just training. I wouldn't be surprised if half of the data center will just be training. We have supercomputers in our company, and the number of supercomputers we're dedicating now to training is just growing, and they're running 24/7. They're running 24/7 for good reason.
You take a week to train a network, and once the network is fairly robust, you deploy it. The moment you deploy the network you start collecting new information. You have to take that new information, which is different from the old information, and you have to train the network on it. And so we're in a 24/7 training cycle. We're not unique.
The number of networks and the amount of data that people are training on is increasing. Number 2, inferencing, recognizing images, searching images, is really not that big of a deal. It's true. You upload the images, you don't even have to categorize them, classify them, run inference on them in real time; you could just do it offline. In fact, on YouTube, you just upload a video and it takes half an hour before it shows up.
So you can run that offline in spare CPU cycles. The type of application that I wonder if you can run offline is when billions of people are streaming video live, and the social network would like to share that uploaded video with the relevant people who would enjoy it. I'm just filming my kids playing soccer. Everybody on Facebook probably doesn't want to watch it, but some people would like to watch it. And I have to do analytics on the social network to figure out who would enjoy watching my kids playing soccer, and I should send them a text to let them know that this particular scene just happened, and we recorded the last 30 seconds, go and watch it. Well, you're going to have to do classification inferencing on real live video for billions of people.
I'm not making up some kind of use case scenario, but imagine that network, imagine that workload; I think it's going to be a few more than 2 CPUs. So that's just a theory. And so you guys just have to keep thinking forward. Now after that, do we really believe that all our brain does is recognition of objects all day long: exit sign, door, handle, water, cup, coffee? That's not really what our networks do.
Our networks apply that information to do something. And so do we really believe that unsupervised networks of CNNs are the end of the story? And then lastly, do we really believe that 60-layer networks are the end? Or do we think that maybe Microsoft is on to something? That if you believe that a deep network such as the network they just revealed, 1,000 layers deep, is able to achieve accuracies not at 96%, not at 97%, which is superhuman, but 99.999%, which is our goal.
That feels like it would take more cycles. And so the question is simply belief systems. Do you see what I am saying, James? No. And that's the only way that you can talk yourself into investing multiple billions of dollars into doing some of these things, which obviously Microsoft is doing.
If you get a chance to catch up with Harry Shum, who is the Head of Microsoft Research, I mean, they are all in on deep learning. You don't hear Bill say this often: the next person who figures out AI will be the next giant company. I think it's true, and I don't think it's 20 CPUs, I'm just guessing. Thank you. Okay.
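The live-video thought experiment a few paragraphs up can be roughed out the same way as the photo math. Every number here is a placeholder of mine; none come from the talk. The point is only how quickly "classify live video for everyone" outgrows the offline photo workload:

```python
# Illustrative only: all numbers below are my assumptions, not from the
# talk. Compare the result with the ~14,000 cores of the photo workload.
VIEWERS = 1e9                    # hypothetical audience streaming video
MINUTES_PER_VIEWER_PER_DAY = 10
FRAMES_SAMPLED_PER_SECOND = 1    # classify one frame per second of video
SECONDS_PER_INFERENCE = 2.0      # same per-inference CPU cost as the photo case
SECONDS_PER_DAY = 24 * 60 * 60

frames_per_day = VIEWERS * MINUTES_PER_VIEWER_PER_DAY * 60 * FRAMES_SAMPLED_PER_SECOND
cores_needed = frames_per_day * SECONDS_PER_INFERENCE / SECONDS_PER_DAY
print(f"{frames_per_day:.1e} frames/day -> about {cores_needed:,.0f} CPU cores")
```

Even with these modest placeholder rates, the real-time video case comes out about a thousand times larger than the photo-upload case.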
I had a question for Colette because she's been so quiet. I want to talk about operating margins and just kind of looking forward. How comfortable are you with the company continuing to expand operating margins? A couple of headwinds worth thinking about, which I'm sure you'll have offsets for. 1, you're going through a manufacturing transition.
Typically early stages of that, you have higher cost, potentially lower yields than you had on 28 nanometer. And then the obvious would be the Intel headwind with the loss of that royalty revenue stream. So could you talk about what will offset that enough to continue to expand operating margins? Thanks.
Yes, I'll start and Jensen will probably finish up a bit on this. At any one point in time or one day, there's a difference between what we realized the day before and what we realized the next day. We have been working through product transitions for quite some time, even through the last two and a half years, even on Maxwell. There's probably been a new product out each quarter, each month, as we walk through this. But we're going to walk through the next transition as we move to Pascal.
The team is well focused on it, and there may be something different one day later. But over the long term, this is absolutely the right process change for us. And we do believe we'll get through it on that side. In terms of Intel, there's been a lot of focus on how much Intel is as a percentage of your business, what can we expect, will it be renewed, how do you want to focus on that piece. We are comfortable with a lot of people modeling what if it didn't renew, and not because of any type of signal.
It's a binary decision. It's either going to renew or it's not. But we are running the company to be assured that we can continue to grow our top line, and we believe it will produce operating margin growth as we go forward. That current deal runs through Q1 of next year in terms of revenue. But I can say that the day the agreement stops, if it didn't renew, our gross margin would be different.
I think that's a pretty safe thing to be able to model for that next day. But again, it's about the long-term growth models that we have on these platforms and the long-term growth that we see in the value that we're providing, including from the software perspective, to produce operating margin and focus on operating margin growth.
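The "model it as if it didn't renew" exercise Colette describes can be sketched in a few lines. All dollar figures below are illustrative assumptions of mine, not company guidance; the roughly $66 million per quarter simply spreads the $1.5 billion Intel license over its six-year term:

```python
# What-if model in the spirit of "assume the Intel license does not renew."
# All dollar figures are illustrative assumptions, not guidance.
quarterly_revenue = 1300.0   # $M, hypothetical
quarterly_cogs = 560.0       # $M, hypothetical
intel_royalty = 66.0         # $M per quarter, essentially 100%-margin revenue

def gross_margin(revenue, cogs):
    return (revenue - cogs) / revenue

gm_with = gross_margin(quarterly_revenue, quarterly_cogs)
gm_without = gross_margin(quarterly_revenue - intel_royalty, quarterly_cogs)
print(f"GM with royalty {gm_with:.1%}, without {gm_without:.1%}, "
      f"delta {(gm_with - gm_without) * 100:.1f} points")
```

With these placeholder figures, the royalty rolling off costs a couple of points of gross margin, which is the kind of headwind the question is probing.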
Yes, I agree with Colette. I think the answer on Intel is we would really, really like and I've said this last year, I've said this repeatedly, we would really, really like all of our investors to invest on the basis that the Intel license does not renew. And if it does renew, let's just have a nice cocktail party and then it will be a nice bonus. But we should invest on the basis that it does not. And then this way you don't have to guesstimate every single day.
And then we can focus on the first principles of our business, which I think is really fantastic. It's very, very rare that we're looking at this many exciting growth opportunities at the same time. And a lot of it, of course, is all because of our investment in one singular area, not because we invested in 4 different areas. And I think the day for GPU computing is finally here. With respect to process, we systematically, methodically, purposefully decided on our current manufacturing strategy.
We did build the world's largest 16 nanometer FinFET chip. However, we are far from being the first to build one. Our strategy is to lag behind the giants by one click. I mean, we're really fortunate that now there are companies like Apple that are going into new process nodes fast. We're really fortunate, because when they go into new process nodes fast, they leave those process nodes fast.
And they leave behind really wonderfully pristine and well organized and well tuned manufacturing line. And so I think it's fantastic. So that's our strategy. I think 16 FinFET is not an insignificant endeavor. However, we're yielding fantastically and it's all because it is the process that's been left behind by the giants.
They are moving so fast, and we could benefit from the yield improvement that's already happened, the capacity that has been left behind, etcetera, etcetera. I mean, it's really, really a good posture. And we did that very much on purpose, as fast as possible, just one click short. Hi, it's Ian Ng of MKM. Can you talk more about the benefits of moving with the P100 to multi-chip and chip-on-wafer-on-substrate versus a monolithic die? Are there benefits in terms of yield?
Is that something we can expect across Pascal, and can that apply to gaming also? Thanks. Good question. A fun question. The answer is no, I can't tell you the second part, but I will tell you the first. And the reason for that is because, as you know, Pascal is an architecture; Pascal is not a product.
P100 is a product. The manifestation of P100 has everything to do with the applications for which we built it, okay. And so it has nothing to do with all the other products. One of the things that I was trying to say earlier in my business model talk is the benefit of having scale, scale and leverage. We leverage 1 architecture, and therefore all of the software that's done works across everything. It's the same CUDA that runs across all of it, the same OptiX that runs across all of it, the same Iray that runs across all of it, the same OpenGL that runs across all of it, the same PhysX that runs across all of it, okay.
Exactly the same software stack. That's incredibly powerful when it comes down to leverage. The same architecture, same processors, the same process tuning, the same layout for a lot of things. However, the scale allows us to create multiple products for workstations, for data centers, for enterprise virtualization, for cars, for using the same architecture, leverage and scale. Because we have the scale, we have the ability to build a very purposeful product for data centers.
There are many features that we design into our data center products that I didn't mention, because there is just too much to say. But our data center customers, the high performance computing customers, the supercomputing customers, need those architectural features that we've designed in. And we have the ability to do that, because we have the scale. Now with respect to the stacking, I'm going to give you a couple of statistics; it's really fantastic. On the P100, you saw a whole bunch of those inductors; they're the thinnest-profile inductors in the world, and it's a 16-phase power supply.
It takes 300 watts at 18 volts and steps it down into a perfectly, perfectly wonderful DC power supply at 1 volt and 300 watts, which would suggest an RMS current draw of 300 amps. Passing 300 amps RMS through otherwise monolithic components stacked together and connected by PCBs is just not a very effective way to do things. And so we would have wasted too much energy, too much power, in the long traces, the resistivity, the cross-coupling capacitance, just a lot of junk that we would have to deal with. And so the best way to miniaturize something that otherwise would have been this big into something this big is really to stack it. And as a result, our bandwidth is really high, our energy efficiency is incredibly good, and the density of supercomputers, of high performance computing clusters of DGX-1, is off the charts.
The density is just off the charts. Okay, so that's one of the advantages of CoWoS. But it is very challenging. Yes, sir?
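The current-delivery arithmetic above can be checked in a few lines: step 300 watts down to roughly 1 volt and about 300 amps must flow. The parasitic trace resistances below are illustrative assumptions showing why long PCB runs waste power:

```python
# Why stacking wins: I^2 * R loss through even tiny parasitic resistance
# is enormous at ~300 A. The resistance values are illustrative assumptions.
POWER_W = 300.0
CORE_VOLTAGE_V = 1.0

current_a = POWER_W / CORE_VOLTAGE_V               # ~300 A RMS
for r_milliohm in (0.1, 0.5, 1.0):
    loss_w = current_a ** 2 * (r_milliohm / 1000.0)
    print(f"{r_milliohm} mOhm of trace -> {loss_w:.0f} W lost as heat")
```

Even a single milliohm in the delivery path would burn 90 watts, which is the motivation for the short vertical connections of the stacked package.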
Yes, it's Matt Ramsay from Canaccord. Jeff, I wanted to ask you a question about VR. Thanks for the data. I think well informed perspectives are helpful on VR since there's so much noise out there. I guess a couple of things.
1, given the installed base that you talked about being primarily in emerging markets that needs to upgrade, how big of a driver is VR in upgrading your installed base? And I guess second, when are we going to see eSports on VR and how big of a driver is that for your business potentially? Thanks.
On the first question, installed base: the numbers that I had shown were based on that baseline console number, and 80% is a worldwide number. It's not just an emerging markets number. It was on the same chart, I think, but it's a worldwide number. For VR, it's a higher bar. So I mean, there's a much smaller percentage of gamers who are VR ready.
From my view, I think some of those gamers who are ready will naturally gravitate to VR, they're early adopters, but there's a vast majority of other users who are attracted to VR that will want to upgrade with new GPUs. And VR content is also going to push GPUs a lot harder. Today, the recommended GPU is a 970, but to really enjoy content, as I mentioned, IKEA is recommending a 980. I think you're going to want a higher performance GPU and ultimately a next generation GPU for VR. With regard to eSports, I think you're going to see use cases that maybe are watching eSports in VR.
I'm not aware of content on the short horizon that's going to be eSports gaming in VR, but I expect it will come.
Hi, this is Steven Chin from UBS. A question for Shankar on the data center side. Last year, the business grew in line with the overall company top line. But just given the big TAM projections going forward, I was wondering, from a technical enabling standpoint, are there certain ecosystem partnerships that still need to mature further, or other technical enabling milestones that need to be reached, before we see some further, more heady growth to come?
In all cases, it's application adoption. So as we add more applications, both in hyperscale as well as in HPC, the adoption rate increases. And so that's all I'd like to say, application adoption.
I would say though, Stephen, that, and Shankar, you said this during your talk, if you guys remember last year's slides, you will discover there were 0 hyperscale customers.
Yes. 0.
And so when you asked about the partners that we need in order to enable use of Tesla in providing services: of course, this is one of the channels that we never had. We never really even had a product for hyperscale before. Remember, the Tesla M40 is a brand new product this year. Right, a year ago it didn't exist. And the reason for that was mostly people were using our K80s to train their networks, and we created a much better product for them to train their networks, called M40.
It's significantly, significantly better for network training, if that's all you want to do. And so 1 year ago, we had no hyperscale customers. And I think I said pretty clearly this is our fastest growing business. I think hyperscale alone will likely help Tesla become the fastest growing business in our company.
We can get into every OCP 1.0 server; that was the market we wanted to address. It did not exist. And so Tesla M40 and Tesla M4 are pretty important. These are 2 very, very important products. They've proven to be very, very successful, and they came out at exactly the right time, when deep learning is being adopted by every Internet company, every hyperscale company, every platform company on the planet.
Jensen, over here. Back to data center: where do you expect the role of FPGA acceleration will sit relative to GPU acceleration? Clearly, there are the benefits of the open programmability. But where do you think FPGAs find their mark, and their size, relative to what GPU compute will be? On a technical basis, I think from a product basis, there's just never been anybody who said, you know what, nothing makes me happier than having to program in Verilog.
That's the only language, I mean, if any of your children were to become programmers, I don't think Verilog is it, okay, try a different language. But that's what chip designers do and that's what FPGA designers do, they program in Verilog. And so it's not a joyous experience; it's really frustrating designing chips, and that's why it takes us so long. We have thousands of people doing it and you have to craft it, because getting software to work is hard enough, but getting a chip just to work is not enough; you have to get it to be performant, otherwise you just waste a lot of energy. And so with FPGA design, nobody enjoys the user interface of it, nobody said nothing gives me more joy than designing FPGAs, okay.
So it's not the usage case, it's not the programming case, it's not the efficiency case, it's not the convenience case. There were two things that FPGAs were rather good at. 1, a lot of on-chip memory as a percentage of flops. And you heard Brian today say very clearly, and I described very clearly, Pascal has more on-chip register file storage, its first-tier memory, than any processor out there, and the throughput of it is 80 terabytes per second. I mean, it's just an abnormal number: 80 terabytes per second of on-chip streaming register file storage. 14 megabytes at that throughput basically says you're going to be flops limited all the time. And so that's a big deal for Pascal. That was a very, very big deal.
And you heard what Brian said; he thinks this is going to be a complete renaissance in how certain algorithms are developed. The second thing is the efficiency part, because if you crafted the chip meticulously, and it does take a lot of work, I mean, you can't just slop a design into an FPGA and hope to come close to something we spent several thousand man-years to architect. So if you meticulously designed an FPGA and you were successful in doing so, then the benefit could have been energy efficiency. But unfortunately, there are a lot of wires in FPGAs, and they weren't designed in that particular way just to do deep learning. And over the years, we adapted our architecture so that it is better and better at deep learning.
And one of the things that we did is introduce a new engine designed just for deep learning, GIE. This is a compiler and a runtime engine for taking deep neural nets and running them in a really energy-efficient way. We process deep neural nets now at 24 images per second per watt. And that is for a floating point deep neural net; this isn't a binary deep neural net, this is a floating point deep neural net, which is the version largely used by everybody doing production work right now. At that level of performance, there's just no reason to go design a chip.
Just use our chip. There's just no reason to design a chip. And deep learning, as James was alluding to earlier, you could argue is not the only thing that's run in the data center. I mean, they also do transcoding, they also do image resizing, they also do video decoding or up-resolution, they do all kinds of stuff, and you could do all of that on a GPU; it just does it fantastically. And so I think we announced something this time that is a very big deal for production.
M4 and GIE, those two things in combination are a very big deal. I lobbed it out there, not in a very big way, but most of the hyperscale data centers that are working with us are already super, super excited about it.
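The quoted 24 images per second per watt can be scaled up to whole boards with a two-line sketch. Only the 24 img/s/W figure comes from the talk; the board power numbers are my assumptions, roughly the Tesla M4 and M40 class:

```python
# Scaling the quoted inference efficiency (24 images/sec/watt) to boards.
# Board power figures are assumptions, not quoted in the talk.
IMAGES_PER_SEC_PER_WATT = 24

for name, watts in (("M4-class board", 50), ("M40-class board", 250)):
    print(f"{name} at {watts} W -> {IMAGES_PER_SEC_PER_WATT * watts:,} images/sec")
```

Under those assumptions, a single low-power inference board handles on the order of a thousand images per second, which is why the combination is pitched at hyperscale deployment.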
Let's take a couple more questions maybe. Yes.
Hi, thank you for taking my question. Right down front here, Mark Bachman from ITG.
Hi Mark.
I had a question on the automotive opportunity. I believe that there's been a figure out there of about a 10 million car installed base, and I'm just hoping that you could give some more color around that. Can you talk about maybe: 1, over what timeframe did you achieve that 10 million; 2, where was that number a year ago; and 3, where do you see it a year from now?
So I'm going to let Rob answer the infotainment question, since I've taken too many of the questions already. But let me just tell you one thing about looking forward. I think part of success is about looking for early indicators of future success. It's not about drawing a straight line. Most dynamics don't happen through a straight line, nor a Bezier curve.
They happen through a couple of weak signals and early indicators that tell you something might be changing. I think there are a couple of things that are overlooked about the future of cars. I think the first one is, don't forget to consider the importance of, and Rob mentioned this, the startups, the companies who are prototyping self driving cars, the incredible amount of energy that is being put into AI recently in self driving cars as people discover how hard it is. For example, Toyota setting up a $1 billion lab in Silicon Valley; for example, GM just buying Cruise, which by the way uses NVIDIA GPUs in their cars. All of these startups that are all over the place don't necessarily have to become successful companies like Tesla.
They may very well be doing R and D for large companies who would be overjoyed to take this architecture into their mainstream cars. And so there are many new ideas happening in the automotive industry that I think are worthy of some attention. The second thing I'll say is, I don't remember the last time that 300,000 people lined up to buy a car. I'm just super delighted that I placed my order right off the bat, sight unseen. And I can tell you that I can't wait, okay.
I just can't wait. And the reason for that is once you drive a computerized car, you can never drive a not computerized car. I mean that's what everybody is in love with. The fact that it's electric is fantastic. I mean and Elon builds a great car.
But the thing that's really, really awesome is that it's a computerized car. Don't think for a second that the other automobile companies didn't notice that 300,000 people lined up to buy a car. It's never happened. I don't remember the last time. 300,000 people didn't line up to buy a Ford Mustang, and that's a cool car.
300,000 people didn't line up to buy a Camaro, and that's a cool car. And so the fact of the matter is, something is going on, and I think there is just a global vote that it's time for cars to become computerized, because we're just tired of all the other stuff. And so I think that those two conditions are pretty big deals. There are all these startups: why are they starting companies? Do they all think that they are going to be GM? What is it that they see? 30 smart companies founded, 4 in China, what do they see? And then number 2, these 300,000 people lining up to buy cars is really quite a phenomenon.
Rob?
Yes, if you've looked at any of our past presentations, we talk about our past business, we talk about the numbers, and the answer to your question is: today the number would be somewhere over 10 million cars on the road; I think before it was roughly 8 million, and 2 years ago it was roughly 6 million. But all of that growth has really come out of one fundamental value proposition we delivered, which was visual craftsmanship in the car, essentially graphics, right? The carmakers that want graphics in the car want a level of craftsmanship, they love that word, craftsmanship, at the same level that they pay attention to the upholstery inside the car, or bring jewelers in to design the headlights; they want to bring visual craftsmanship into the car. You notice today we didn't talk about that at all. Visual craftsmanship of course is something that NVIDIA can provide, but all of the new opportunity that we're talking about now is born of bringing, first of all, consumer experiences into a car that consumers would expect.
So deep learning is driving self driving cars, it's also driving killer apps for consumers. Why wouldn't those go into a car? Now the automakers know that. So they're looking at how do I build a computer that's going to not just deliver visual craftsmanship, but I'd like to have natural language understanding. I'd like to have ADAS.
I want to eliminate my $200 surround view box. These are all capabilities and functions that the new cockpit is going to have. So it's a little bit of a different dynamic than just how many models of cars we have on the road that deliver graphics and whether it's linear or whatever. The growth opportunities that we are talking about articulate a couple of fundamental facts. The cockpit is moving beyond graphics.
Visual craftsmanship is part of what we deliver, but it is now a whole lot more. It's about these deep learning applications and the experience you have inside the car and then on top of it self driving.
Was there one more question? Let's take one more and then let's wrap up real quick. Yes,
sir. In terms of the Pascal products that you announced today and the ones you didn't talk about, is there any reason to think that you would have different segments adopting a new process node and a new architecture at different times? Or would they be synced up? In terms of gaming, should those products be out around the same time as the ones you announced today?
I think that's a very clever way of getting me to answer when we're launching our next generation products. And I just want to let you know that I'm going to tell you the answer fully recognizing that I'm walking into this trap, okay? Just want to make sure that you and I both know that you didn't trick me into this, okay? I'm doing this completely in a self-aware way. So, we're going to use 16 nanometer for some time, because 16 nanometer is a really good process, it's a good node.
And there is no immediate reason why we would jump to 10 or 7, but we are doing test chips on 10 and 7, and when the appropriate time comes, we'll go. But we'll use 16 nanometer for a little bit. I appreciate that question. And so you could imagine that there are many Pascals in flight, but we're not announcing anything today. Today we're just announcing the Tesla P100 and the DGX-1.
I want to thank all of you for the time that we spent together. There are, I think, several messages that you probably got in listening to the presentations from NVIDIA's management team. I think the number one thing is that our business model is fundamentally based on a platform approach that leverages 1 architecture. It's a platform approach that leverages 1 architecture. The 1 architecture gives us leverage, operational efficiency, the ability to invest and yet drive operating income increases at the same time.
The platform approach gives us the ability to offer not a chip to a market, but a solution to a market, a platform to a market, an answer to a market. And so by offering a platform approach, our solution is much stickier in business terms. However, the way we think about it, it adds more value. It makes it easier to reveal the wonderful capabilities that our platform offers, this 2-sided platform approach. And so the number one thing to realize is our business model is a platform approach that leverages one architecture.
The second thing is we selected markets based on this new computing model that we care so much about, called GPU accelerated computing. GPU accelerated computing is a brand new computing model whose day has come. Its day has come. And this is what GTC is about. You see all these people here; they come because they want to know about GPU accelerated computing, they're doing work in GPU accelerated computing, they want to do more in GPU accelerated computing, they want to learn from other people who are doing GPU accelerated computing.
GPU accelerated computing is a brand new computing architecture for a computing model whose time has come. That's the second thing I think is really important to take away. The markets that we selected are of course nascent markets that rely on GPU accelerated computing, that need GPU accelerated computing in order to flourish. And that's partly the reason why we're in the middle of several very important growth dynamics, which is pretty exciting. Without GPU accelerated computing there is no VR.
Without GPU accelerated computing there is no deep learning, we know that. And without GPU accelerated computing we know we're not going to be able to bring AI to cars and we think cars need AI. The world is just too complicated without it. And so we selected these markets partly because of the capability of GPU accelerated computing, partly because they found us. And that's exactly what you want, you want customers to find you.
You want customers to find you for the capabilities that you offer in a significant way. And so now we find ourselves with 4 growth drivers: gaming is a very big one, AI is a very big one, VR is a very big one, and autonomous machines, our favorite one being the self driving car, is a very big one as well. And so we have multiple growth drivers. And I appreciate the question about how to think about growth in the short term. The way to think about growth in the short term is to go right to gaming, plus some sense about accelerated computing for data centers, the Tesla accelerated computing business.
And the Automotive business is going to continue to grow at the pace that we've told you about. The design wins take a long time to happen. And the only thing that we'll add on top of that is all of the self driving cars that are going to be driving all over the road, and the mapping cars, which will be in large quantities; it won't be in the millions, but it will be large quantities of mapping cars all over the world. Those kinds of early superchargers, if you will, for autonomous driving, I think will show you the way to short term growth. And as Colette says, it's really, really important for us to invest in these growth opportunities, but to do so in a way that's balanced, so that our operating income percentage continues to grow.
And if our top line grows nicely and our operating income percentage grows nicely, hopefully we'll continue to delight shareholders. Okay. Thank you very much for coming today.