Hello! Can you guys hear me? Do we have the mic working? Yeah, I can hear it now. I am not AI, that's why I could not hear that. All right. I'm so happy to be here in DC. I like to walk a lot, and I feel like those people in the back aren't gonna get to see me, so I'm gonna come say hi. Hi. I wanna go over there at some point to say hi to you all. Happy to be here in DC for the first time since twenty nineteen, that horrible, horrible pandemic that changed our lives. Great program here. What we're doing in this special address is just giving you a glimpse of all the great work that you have done with us.
You know, it's really about the work that you are doing, the work we're ideally helping you enable. So it's a little bit different. In our normal keynotes, we've got platform announcements. This is really about celebrating the work that you're doing. This is about working with companies, working with government agencies, to help the US lead in AI, to help democratize AI. And perhaps most importantly, a rising tide should lift all boats. All boats. All of our workforce should be lifted to benefit from what AI is doing, because we are at the dawn of a new industrial revolution. All right, so lots of partners here. I'm gonna read every one, so just sit back and wait. Please check out the exhibit floor.
There's a lot of great work that I'm not gonna be able to get into, and even where I do get into detail, it's not gonna be enough intel for you to really learn from it. I'm here to give you a teaser to go talk not only to NVIDIA, but to all of our partners. First, let me thank our sponsors. A lot of people are here, but sponsors, it wouldn't have been possible to do this if it wasn't for you all. Dell, Hewlett Packard Enterprise, Microsoft Azure, Oracle, CenturyLink Federal, thank you. AWS, Deloitte, Lockheed Martin, and all of our gold partners. I'll be honest, though, I've been doing this for forty-plus years. Started when I was two.
In those forty-plus years, I've learned to live vicariously through the work that our customers do and the work that our partners do. You know, I get emotional when an "I Am AI" video plays. I don't know about you, but it's just uplifting to hear about the potential. To be honest, it's not about the AI, it's about what you all do with the AI. So I live vicariously through those of you who are much smarter than me, who take these tools and help improve humanity, help improve our planet. But first, you know, let's... Jensen actually drew this up. He actually is a very good artist. Let's just take a look back and figure out how we got to this moment.
I've been told by my friends over there not to carry a drink around, so I'm gonna set it here. Is that okay? So, we are in the greatest computing transformation of our times. You'll hear us say that again and again. Sixty years ago, IBM revolutionized computing with the general-purpose CPU, putting a computer in the hands of every company and eventually every home. In the same vein, the iPhone put computers in everybody's pocket, and now Samsung and other partners have as well, but the iPhone was really the first moment where we had computers in our pocket. And what we're seeing over these last sixty years, what has caused some of these tectonic shifts, is two forces that have collided. One is CPU scaling.
CPU scaling just couldn't continue at the pace it had been on. Moore's Law is essentially dead, and so while we were used to getting, you know, a 50% improvement every couple of years, that's not happening anymore. That was the birth of accelerated computing. With general-purpose computing alone, you got the answer when you got the answer. If you grew up when I did, you submitted a stack of punch cards that basically printed out, "Hello, world." It was very exciting. And now we're in a different moment, where we're looking for more real-time information and have much harder problems to solve. That requires accelerated computing. At the same time, CPUs are still improving, but not at the rate they once did.
We're seeing the amount of data grow, and not just from those that are tweeting on a regular basis, but from all the papers that you all write, all the computations that you all perform, all the papers that we write back and forth, all of the research that's going on. All of that is information. Intelligence in this world is basically you all plus that data, and that's growing exponentially.
So with those two things, you know, I can't say the S-H word, but you're SOL, and not the NVIDIA speed-of-light SOL. If you're trying to do more compute with data growing exponentially, but your compute infrastructure is not growing to keep up, you're gonna be increasing the number of data centers, increasing the time it takes, and some of these are real-time needs that we have, especially in today's world. And we just don't have the power. We don't have the land, we don't have the infrastructure to keep going along the old path. That's why NVIDIA invented accelerated computing. So we started in 1993 and invented a chip for computer graphics, gaming. I was at Silicon Graphics at the time, SGI.
We had a competing chip, and we decided to go down the HPC route. We sold our technology to NVIDIA. Thank God! They really delivered on the promise of not just visualizing cool things, but the interactive computation and visualization of those things. The human brain understands when you can see something. Yeah, you can read a report, but the fastest pathway is between our eyes and our brain. So 1993 was the computer graphics work. In 1999, we invented the GPU, a parallel processor, mainly to accelerate computer graphics, but, you know, a lot of algorithms, those in the HPC community, benefit from matrix math, matrix multiplication, parallel computing. We had our first contact with AI in 2012. AlexNet.
We didn't yet have Tensor Cores in our GPUs, but researchers discovered CUDA, which we'll talk about in more detail, and applied it to an AI use case. After that moment, we started designing and thinking about what was needed to accelerate AI, and we added Tensor Cores to our design, releasing them in 2017. Jensen later introduced the first RTX processor, Turing, the first GPU to have graphics shaders, ray-tracing cores, and Tensor Cores for AI.
Some of that may be more detail than you'd like, but those Tensor Cores are essential to accelerating AI, 'cause it turns out, in the old days, you needed sixty-four-bit computing to be absolutely sure that that building would go up and not collapse, that that bridge would work as advertised, that the aerodynamics of that car would work as designed. We still need that in certain situations; it's still a good validation of structural integrity and performance. But AI can operate not just at sixty-four bits, but at thirty-two bits and sixteen bits, at eight bits and six bits, at four bits. That addition of Tensor Cores really amped up the system, and in 2022, because of that, we had ChatGPT. That was the big bang. That was the big bang when people said, "Oh, sh...
Stop!" Come on, I almost said it. I did say it. "Oh, great. Look at what this can do. Look at what this can do." And because of that, we started thinking about how to improve that ChatGPT interaction, how to produce those large language models more efficiently, and how to provide access to those large language models, and that's how we ended up with the AI factory. So what has been about a $3 trillion IT industry, which you see on the slide in front of you, is really gonna be driving, and already is driving, accelerated growth across the $100 trillion of industries that we have on this planet.
I mean, essentially, AI is a new product, and the product it produces is tokens. The tokens are pieces of intelligence. And what is intelligence? Usually, it means money. You're improving your supply chain, you're curing cancer at a faster rate, you're improving your employee productivity and employee retention, you're improving customer revenue. That's what it's all about. So let's talk a little bit about how we got to these AI factories. It really started a long time ago, as we were working on the parallelization use cases that I mentioned outside of computer graphics. I remember personally installing one of the first HPC supercomputers back in 2009. We were physically installing 50,000 PCI cards into a variety of racks.
We were doing that because we had looked at the problems that people were trying to solve, and we started an initiative: we invented CUDA. We invented CUDA, a platform to accelerate some of the most often used libraries in some of the toughest problems that we have. We've been doing that work for over twenty years. We've got over 900 libraries and 4,000 applications. I think it's very reasonable to say that if we hadn't started work on CUDA back in 2006, there may not have been the big bang of AI. AlexNet started with CUDA. CUDA started well before then, and what actually surprised us in terms of our ability to make progress was not just us at NVIDIA and the tools, but how you all used them.
And it was an amazing thing to watch. You know, the CUDA libraries accelerated algorithms that we thought weren't, you know, acceleratable (if that's a word, it is now), at rates of 10X, 20X, 40X. As we accelerated those libraries, we attracted more developers. And no one's gonna start developing on something unless they know there's a payoff, right? We proved that there was a payoff to accelerating certain pain points. Pain points that were very real in drug discovery, in citizen protection, in the manufacturing of new equipment. That recruited a bunch of new developers and a bunch of new applications, which then recruited a bunch of new users of those applications.
Those new developers continued to improve this performance and scale it. When you reach a certain tipping point, the number of customers that want to use it attracts an ecosystem, attracts OEMs, attracts CSPs. And the work that then begins to happen because of the widespread availability of this open platform, this very, very broad ecosystem, is that you get bigger and bigger speedups, and I'll talk about those in a second. So in essence, CUDA's really achieved what we call a virtuous cycle that's driving not just performance, but understanding, intelligence, and economic development. So today, as you can see (I always want to look behind me, but it's right in front of me), we've got over five million developers, five million developers and growing.
A lot of you in the room are CUDA developers, or your organizations or agencies have CUDA developers. We're gonna continue to see bigger and bigger speedups across every industry. What we'd like to learn from you at this conference is: what else can we do? What are your biggest pain points? What's the most important problem you need to solve, by industry, by agency? It's a very broad ecosystem, twenty years of work. We're not stopping, and we're doing it for you, so we need your feedback. What should we work harder on?
If we just got that from this conference, that would be wonderful news for the man in the front row and for all of us at NVIDIA, in terms of where we pivot our research. Jensen Huang is the one in the front row; he's probably hidden behind the screen. So again, I don't think we would be at this dawn of a new era, a new computing paradigm, if it wasn't for our CUDA ecosystem. So CUDA enables acceleration. We have to be concerned about energy. We have to be concerned about climate. CUDA is not just about enabling acceleration; it also turns out to be one of the most impressive ways to reduce energy consumption.
What you see here is kind of a canonical example of CPU general-purpose computing: the number of data centers that have racks and racks of CPUs in them. By having this tight connection between the CPU and the GPU, accelerated computing, running CUDA on those GPUs, we get significant speedups, as I mentioned. So just take T as some unit of time: take it as a second, take it as a year, whatever the unit is. Time and time again, we're proving that in most cases, what ran in 100 units of time in a CPU-only environment runs in one unit of time. On average. Are there 20X cases? Yeah. Are there 200X? Absolutely.
But call it one unit of time, a 100X speedup in performance. And you might expect, logic would say, that if you're gonna increase performance by 100X, you might increase power by 100X. Makes sense. That's the way we did it with CPUs: we kept ramping up the clock and making other design changes. But it turns out that we're only increasing power by 3X. That's roughly a 30X savings in energy per unit of work. 30X per watt. If you can do a hundred times more work in the same amount of time, or with one-hundredth of the infrastructure, you're saving energy. It is beneficial to the climate. And you could say, "Well, why don't you just reduce the power of the GPUs?" People have asked that.
That would also mean reducing our expectations of helping you cure cancer, of understanding what Hurricane Helene was really gonna do, what COVID was really gonna do. We can't stop trying to find ways to improve our planet and humanity. Performance is gonna need to continue to go up. I'm just here to tell you that we're designing our systems not just with performance in mind, but with energy efficiency in mind. It's going to mean that we're gonna need data centers that understand this accelerated computing infrastructure. It's going to mean that we approach things not just from a chip level, but from a full-stack level: what we're doing on the chip to maximize performance and energy efficiency, but also how we're applying the software stack to do that.
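For those keeping score, here is the rough arithmetic behind that 30X claim, as a sanity check under the stated assumptions (a 100X speedup at 3X the power draw):

```latex
% Energy for a fixed workload is power times time: E = P \cdot t
E_{\mathrm{CPU}} = P \cdot T, \qquad
E_{\mathrm{GPU}} = (3P) \cdot \frac{T}{100}
\quad\Longrightarrow\quad
\frac{E_{\mathrm{CPU}}}{E_{\mathrm{GPU}}} = \frac{100}{3} \approx 33
% i.e., roughly 30X less energy for the same work, quoted conservatively.
```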
And just as an example: I think seven of the top 10 supercomputers on the Green500 list around the world are NVIDIA computers. So yeah, do they draw power? Absolutely. But look at the amount of intelligence they produce; they wouldn't be on the green list if they weren't efficient. And they're efficient because of the libraries that you see on the screen behind me. These CUDA-X libraries are critical to that energy efficiency, getting more work done in the same amount of power. And we're working across every industry. These are some of the most downloaded and utilized CUDA-X libraries that we have. If you're a data scientist, you either are, or hopefully you are, using CUDA-X to accelerate Spark, to accelerate Pandas, and now to accelerate Polars.
If you aren't, it's a zero-code change. It's really a zero-code change, just an import line, to the point where you're getting Spark, Pandas, and Polars acceleration. Then there's RAG, retrieval-augmented generation, which is a way to accelerate and improve the understanding and performance of on-prem AI; I'll get to it in a second. cuVS is really the world's first accelerated vector search, up to 150X faster for these RAG workflows. I won't go into a lot of detail on retrieval-augmented generation. I know a lot of you know about it, but come talk to us about it. It is gonna be an essential part of this next wave of AI moving from the cloud to on-prem.
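On that zero-code-change claim, here's a minimal sketch of what it looks like with the RAPIDS cuDF accelerator for pandas; the data file and column names are invented for illustration:

```python
# Zero-code-change pandas acceleration with RAPIDS cuDF: load the accelerator
# before importing pandas, and existing pandas code runs on the GPU where
# possible, transparently falling back to the CPU otherwise.
import cudf.pandas
cudf.pandas.install()

import pandas as pd  # the same pandas API as before

# "sales.csv" and its column names are placeholders for illustration.
df = pd.read_csv("sales.csv")
summary = df.groupby("region")["revenue"].sum().sort_values(ascending=False)
print(summary.head())
```

Polars exposes a similar opt-in GPU engine, and Spark gets the equivalent treatment through the RAPIDS Accelerator for Apache Spark, again without rewriting the job.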
In manufacturing, we're making a big effort to move manufacturing, especially of semiconductors, back into the U.S. Thank God. I grew up in Pittsburgh, so I hope we still manufacture steel elsewhere, because I couldn't breathe growing up. But we must be able to do it in a more efficient way, with less impact on our climate. cuLitho accelerates the computational lithography used for semiconductor manufacturing, running at 45X improvements, and these are just averages. Time after time, there are significant improvements across every industry, and we'd like to know what's missing. There should be a long list all the way down; that next panel should be more cu's. They're all about improving energy efficiency.
They're all about improving the speed at which you gain insight, the speed at which you gain intelligence. Because we don't just try to optimize the chip. We're no longer just optimizing the GPU; we're optimizing the system. We take a full-stack approach to how the system operates, the environment in which that system operates, and the software stack that runs on the system. And with that in mind, that's how we spent years developing our Blackwell platform. So let's take a look at Blackwell. Holy stuff! Holy stuff. That was amazing, wasn't it? I'm embarrassed to say, when I get bored, I watch that video. This is mostly what you saw in that animation; when you look at the technology in here, we just left out the sheet metal.
Seven different chips, from GPUs to CPUs to NICs, to NVLink for fast communication between the GPUs, to Spectrum-X, our Ethernet network that was specifically built for AI computing, and Quantum InfiniBand. We've incorporated liquid cooling. I used to work for SGI, which then bought Cray. Back then, you weren't using standard liquid, you were using fluorinated coolant, and you had to be very careful about how you were cooling systems. This is standard liquid that comes in at 45 degrees C and exits at 65 degrees C, and that 65-degree water can then be used to heat radiators, to heat buildings. All of it is meant to be reused. The amount of technology in there is just... It's a marvel of engineering.
All of these components, you know, we purposely build as racks of computing, and we make sure those racks of computing work together. And on top of that, I mentioned a full-stack platform. This is not the platform; this is the AI supercluster. The real platform adds the CUDA-X libraries, our software acceleration libraries. And then, to further advance work in AI and physical AI, we add our two main platforms: NVIDIA AI Enterprise, with our microservices and our large language models, and Omniverse, our digital twin platform, where we use the virtual world to improve the physical world. That's what it's all about. It's about training our AIs to exist in the physical world, for the better.
As we look at each element of these, we'll continue to look for opportunities to improve both performance and energy efficiency, because that's the way we designed Blackwell: not just with an unprecedented improvement in performance for AI, but with energy efficiency in mind. If you take a look at this chart, it goes back over 10 years, from our Kepler GPU to our Blackwell AI supercluster platform. We've achieved a 100,000X reduction in energy over those 10 years. A 100,000X, and that's on the inferencing side, the generation of tokens. On the training side, I think we've reduced energy consumption by 2,000X. To put that 100,000X into perspective, you could drive a car for 300 years.
Every 80 years, you'd probably get tapped out and need a new driver, but you could drive that car for 300 years on a single tank of gas if cars were designed with the same energy-efficiency improvements that we put into our architectures. It's an amazing stat. It's an unbelievable stat, but it's a real stat. What's even more unbelievable: at the same time that we're improving the energy efficiency, if you were to mirror this curve and look at the performance gain, you would go off the top of the screen, because getting a 100,000X improvement in efficiency came with a 125,000X improvement in performance. More done with less energy. Sustainable computing.
People don't appreciate that enough. And it's not just about CUDA, it's not just about the way we design the chips; it's about liquid cooling, which will be an essential part of the data centers that we have to help the U.S. build and repurpose. It's what we're going to need to produce AIs that are really capable of helping the U.S. lead in AI, for so many different use cases that can benefit our citizens and mankind. It is an amazing feat of engineering. I'm a graphics guy, and we used to have this thing called Real or Rendered, where we'd put different pictures side by side and ask people, "Is it real or rendered?" This is not rendered.
These are all real pictures of Blackwell systems being brought up all over the world. We've got eight partners bringing up Blackwell as we speak. We're still slated to go into volume production ramp in Q4 of this year. And the way we're able to do that is that, at the same time we're developing these complex architectures, our group is responsible for building reference designs, reference recipes, if you will, blueprints for how to put these things together. These reference designs are how we enable all the partners that you see to build these things, and pretty much build them at the same time, so that those eight partners will all get to market at the same time. You have a choice. You have a choice in who you buy them from.
A lot of them feed some of the other companies that you might buy from. But Blackwell is coming, it's in production, and some of our biggest and best systems will be shipping in this next quarter. We will continue on our path not just to develop new GPUs, but to develop the entire platform. You know, it shows you how complicated this is. When you're developing something at this scale, the GPU is important, the CPU is important, storage is important, and software is incredibly important, but the networking is becoming more and more important. In training an AI, maybe latency doesn't matter so much: you're doing a bunch of checkpoints of your data, and you're running for months and months. You're not expecting an answer in real time.
But if you've got a customer service agent that needs to give somebody an answer on why they're having difficulty, or needs to give an employee a suggestion on how to improve the system they're working on or maintaining, latency matters, access to data matters, and not just the access, but the way that access is packaged back together in a flow that allows an AI to make a recommendation. So networking is one of the most important elements of this AI factory. If these things all operate independently, you just have a bunch of great AI jobs running side by side, but you can't easily bring them together to solve one problem.
So we are on that yearly cadence, that platform cadence, developing at data center scale, and we'll be updating various components: CPUs, where sometimes it's a clock rev and sometimes it's a new CPU, and similarly for GPUs and the rest. Our commitment is to continue to push the envelope. In fact, let me go back to that slide. Continue to push the envelope. Am I really going back? I am now. In every aspect of computing, we're not gonna update everything at the same time. We're gonna do it in a smart way, based on where we see pain points. The beauty of this is that the software we shipped back in twenty twenty-two still runs. It's compatible: backwards compatible and architecturally compatible.
It just runs. That's why accelerated computing is growing as fast as it is: you're not having to re-architect. Even when you get a new GPU or a new CPU, you're not having to re-architect for an FPGA or a different accelerator. So compatibility is at its core. But this AI factory that we've been building needs a new software stack. You know, we're moving away from instruction-led computing, where I tell the system what I think I want to know, and where you're already biased because you're programming it based on what you think you wanna know. What you expose it to, what you allow access to, is based on instructions, right?
We're moving from instruction-led computing to intention-led computing, and that software has to be new. We have to be able to talk to our computers, to ask questions. This is gonna be the second part of what drives this industrial revolution, and probably one of the most important parts. We've put years of skills and an enormous amount of energy into this; we have more software people working at NVIDIA than hardware people for this reason. Instead of a computer being a tool that you use to get something done, the computer is now going to be able to generate intelligence, generate skills.
Serving these industries matters: I think there's a stat that, by the end of the decade, this method of computing, accelerated computing, AI, GenAI, is gonna have something like a $20 trillion impact on this already existing $100 trillion group of industries. So it's really important to get this done. That impact is gonna come from productivity gains and business process automation. It's gonna come from consumer demand for new AI. It's gonna come from augmenting the workforce and improving its productivity, and it's gonna come from being able to provide solutions at a breakneck speed. I'm putting down my Diet Coke just so I don't get yelled at. All right. So let's now dive into what that software is. We talked about being in the middle of the next industrial revolution.
We also are in the era of Agentic AI. So what's Agentic AI? You know, we have DHS and FBI in here; those are agents, but not AI agents. These are AI agents that help us do our jobs in a better way. The first wave of AI was really around these foundational models, right? We talked about that. You run these massive systems in the cloud, you run them for days and months, and you basically get a massive model that regurgitates back everything that we all intentionally or mistakenly put on the internet. That can be useful, very useful. The first wave of LLM use got better and better at fine-tuning that into more useful cases, but it was always about you having to know the question you wanted to ask.
You prompted, and then you got an answer back. And the way you asked the question determined the way the answer came back. But what if you don't know the question to ask? And what if the question is broad, like: How do I improve my customer retention by 10%? How do I improve the training scores of my employees by 50%? How do I accelerate the creation of new drugs and vaccines by 200%? Those questions require more than a prompt-and-answer system. They require AI agents, built from a combination of NIMs, that are capable of perceiving, reasoning, learning, and taking action, and we have several examples on the screen right now.
These AI agents are very good at collecting a lot of data, synthesizing it, summarizing data from many different sources. More importantly, they can collaborate with each other. So let's take the example of a manufacturing floor. You might have an inventory AI agent that's constantly monitoring your inventory. It notices that you have excess capacity, but your floor is not taking advantage of that excess capacity; they're working to a plan to produce X number of widgets today. That inventory agent can communicate with the factory floor agent. The factory floor agent can decide to keep building, versus sending people home early, and then communicate with the purchasing AI agent to keep replenishing the materials at the right rate.
And you can do in minutes what used to take days, when it was often at the beginning of the week that you'd figure out how much you could really get done, and then you'd get supplies in and figure out how much you could do. You can accelerate the creation of every watch, every car, every plane, every train, every autonomous automobile. So this era of Agentic AI is really gonna transform a lot of different industries. We've already seen examples of some horizontal use cases, where digital avatars are used to improve the way we talk to customers and the way we enable our employees to be more productive. We've seen some great examples in content generation, whether it be text, images, and now video.
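Going back to that manufacturing hand-off for a second, here's a toy sketch of agents passing structured messages to each other. Every name, number, and decision rule here is invented for illustration; in a real agentic system, each decision would be made by an LLM with access to tools and live enterprise data.

```python
# Toy sketch of the inventory -> factory floor -> purchasing hand-off.
# All names, numbers, and decision rules are invented for illustration.
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    content: dict


class InventoryAgent:
    def check(self, stock: int, planned_units: int) -> Message | None:
        # Notices spare capacity: more material on hand than today's plan uses.
        if stock > planned_units:
            return Message("inventory", {"spare_units": stock - planned_units})
        return None


class FactoryFloorAgent:
    def handle(self, msg: Message) -> Message:
        # Extends today's build to absorb the spare capacity.
        return Message("factory_floor", {"extra_built": msg.content["spare_units"]})


class PurchasingAgent:
    def handle(self, msg: Message) -> None:
        # Replenishes materials at the rate they are being consumed.
        print(f"Reordering materials for {msg.content['extra_built']} extra units")


inventory, floor, purchasing = InventoryAgent(), FactoryFloorAgent(), PurchasingAgent()
alert = inventory.check(stock=1200, planned_units=1000)
if alert:
    purchasing.handle(floor.handle(alert))
```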
There's a lot of great work in product R&D, not just in healthcare, which we show there; the three you see there are things I want to talk about in a minute. And we can use these AI agents proactively. Since an agent isn't waiting for you to prompt it, we can use it to let you know you have an issue, that you have a vulnerability, that you have a hack. I'm tired of getting emails telling me how much of my personal data is on the dark web. It's mind-boggling. I think I've had my credit report locked for, like, the last 10 years. All right, so a little bit more about these AI agents.
The thing to remember is that they're really built on software that we've been working on for a while. At the core of these are NVIDIA NIMs. These NIMs cover a lot of different models and a lot of different modalities: speech, vision, text, imaging. And to chain these together into an AI agent, we've been working with a number of partners to make sure that we NIMify all of these different modalities and models. You don't have to use NVIDIA models. We have models; you can download them from Hugging Face, you can download them from Meta and build your own. Look at the companies across the bottom. The beauty of it is that we have a NIM factory, and so we're working with Meta.
We'll take Llama 3.2 and we'll optimize it in our NIM factory, and we'll now be publishing and republishing new NIMs every Monday. What we're seeing is that you get a 2X to 5X improvement over what you would have had just by downloading Llama 3.2. That might be informative for Meta to do more, but at a minimum, there's a slew of people working to make sure that the models we provide are fully tuned. Think about it: the number of GPUs we've developed over the last several years that support Tensor Cores, the number of configurations of those GPUs with differing amounts of memory, in workstation formats and server formats, one, two, four, eight GPUs, connected with NVLink or not connected with NVLink.
It's not just about the model; it's about the model, how it's gonna run, and what it's gonna run on, and that's where the NIM factory really comes in. We test them on a variety of systems to make sure that you're always getting state-of-the-art performance. Okay. These NIMs are an essential part of how enterprises will implement productivity changes. Why is there a camera, like, at every corner? Hi. Thank you. It's just like, I try to get away from one, there's another. These collections of NIMs, together with NVIDIA NeMo, our framework for customizing large language models, and the ability to fine-tune these NIMs with NeMo, are part of what we call the AI flywheel.
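As an aside on the mechanics: NIMs expose an OpenAI-compatible API, so consuming one is a few lines with the standard client. This is a minimal sketch; the hosted endpoint and model id below are illustrative, and a self-hosted NIM would use its own base URL. Check build.nvidia.com for the current catalog.

```python
# Minimal sketch of calling a NIM. The hosted catalog endpoint is OpenAI-API
# compatible, so the standard openai client works; the model id here is
# illustrative, and a self-hosted NIM would use your own base_url instead.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # hosted NIM catalog
    api_key=os.environ["NVIDIA_API_KEY"],
)

response = client.chat.completions.create(
    model="meta/llama-3.2-3b-instruct",  # illustrative model id
    messages=[{"role": "user", "content": "Summarize what a NIM is in one sentence."}],
    max_tokens=100,
)
print(response.choices[0].message.content)
```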
So, you know, change "enterprise" to agency, to educational institution, to whatever you want. But the NIMs, while we work to optimize them across any configuration, carry the data they were created with, right? We want to enable you all to incorporate your data, because you're not gonna optimize your supply chain on ChatGPT alone. You're probably gonna take a smaller model and fine-tune it, maybe for your particular industry, and then for your particular organization, because you have different semantics, different lingo, right? In training the AI, make sure you expose it to the way your company or your organization interprets things. That reduces hallucinations, right? That makes it tailored. But you still don't stop there. You don't just look back and train it once.
You've got to continually expose it to new data, new intelligence. That data is intelligence, and in companies, people are the intelligence: the papers they write, the emails they send, the customer feedback they get, the supply chain database. This should be a constantly running loop. This flywheel will continue to amaze you in terms of what it's able to predict, what it's able to produce, because you're not stopping at a moment in time. You're not asking ChatGPT a question of a large language model that was built a couple of years ago. You're asking your own model a question, a model that was optimized overnight, and the question that you ask and the answer that's given feed and inform that model for the next set of questions.
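As a sketch of what that overnight fine-tune step might look like in practice, here's a version using generic open-source tooling (LoRA adapters via Hugging Face PEFT) rather than NeMo specifically; the model id, dataset path, and hyperparameters are placeholders:

```python
# Sketch of the flywheel's nightly fine-tune step, using generic open-source
# tooling (Hugging Face PEFT / LoRA) rather than NVIDIA NeMo specifically.
# The model id and dataset path are placeholders; a real flywheel would
# re-run this on fresh data (tickets, docs, emails) on a schedule.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-3.2-1B"  # placeholder model id
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA: train small low-rank adapters instead of all the weights.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

data = load_dataset("json", data_files="todays_feedback.jsonl")["train"]  # placeholder
data = data.map(lambda row: tokenizer(row["text"], truncation=True, max_length=512))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="nightly_adapter", num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```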
So this AI flywheel concept, built on NIMs, available from all of our partners on-prem and in the cloud, applies across multiple industries. Please take this away: you may not need to remember NIMs, you may not need to remember NeMo, but remember the flywheel. Remember the need to continually improve the AI. That's how we learn. We don't stop kids going to school at fifth grade because they cannot yet read and write, at least not in some states. We continue to train them, and we need to do a better job of training them. We need to do a better job of implementing this flywheel in our education system and in our workforce, 'cause tools will be changing, processes will be changing. This is one of the most important things to take away from this:
The flywheel is the way you're going to keep your AI agents and your AI models up to date. So we've worked with a lot of different software developers. They like the idea of NIMs, they understand the idea of the flywheel and continually feeding it with new data, but they're looking for a little bit more. So I'm really happy to announce our NIM Agent Blueprints. It's a way to connect your data flywheel to a bunch of different reference workflows. We're not an application company. CUDA is not an application. The blueprints are not an application. We provide reference examples based on some of the most often requested workflows that you, our customers, bring to us, some of the biggest pain points and some of the ones we care about the most.
Improving customer satisfaction, improving employee satisfaction, getting smarter, better, safer products to market, helping companies cure some really, really horrible diseases, protecting our government IP and our personal IP, and having a strong AI ecosystem to lead the world. We've already introduced the digital avatar for customer service, which is obviously a horizontal use case, and the product design blueprint. And to be clear, these are not applications; they're workflows. Consider them actually more than a recipe. I don't even have time for things like HelloFresh, but it's like HelloFresh, right? You get the recipe, you get the ingredients, and if you do the right thing, you get a decent meal. But you can add to it. You can put more spice in it, right?
You can change the ingredients, and the intention is that you do change those ingredients. We are not trying to tell people how to design cars or how to do drug design. We are trying to give people a starting point from which they can build their own AI agents, build their own blueprints, based on the most important pain points that they have. One of those blueprints, as you can see, is James, and as a graphics guy, in the real-versus-rendered vein: he is not real. He looks real. He's not real. But James, the blueprint that's available on build.nvidia.com, is going to tell us a little bit more about all the blueprints that we have.
Hello, I'm James, a digital human. Feel free to ask me anything about NVIDIA and its products.
Hi, James. Tell me about NVIDIA NIM Agent Blueprints.
NVIDIA NIM Agent Blueprints provide customizable templates for deploying AI models in production, simplifying the process for developers and enterprises.
How is NVIDIA AI improving cybersecurity?
NVIDIA AI is enhancing cybersecurity by accelerating threat detection, improving incident response, and enabling real-time threat analysis.
All right. Thank you, James. I want to talk a little bit about the cybersecurity blueprint James mentioned. But first, let me note: you probably noticed a two-or-three-second delay as James thought, which is actually much better than a lot of humans I've spoken to. For the questions he was asked, that's a very appropriate amount of time. You're like: okay, I think I would say XYZ, right? But there are plenty of use cases where latency matters. Right now, we've trained James generically to answer questions. People can use these avatars and feed in more information about their products, the existing bugs, the known issues, the top customer complaints, so that the response comes in milliseconds, right? You're not waiting on the phone or at the computer to get a response.
And that's a way to improve customer satisfaction and customer retention. James mentioned cybersecurity, so I'm really excited to talk about the availability of our newest container. And speaking of containers, Homeland Security is probably worried about the other kind of container security, but we're worried about software container security. As we all deliver software around the world, you've got to worry about software mismatches that leave a backdoor, that leave a vulnerability. So yes, we're using this for cyber, but it can also be used for a software rollout that might create a cyber incident. Or, you know, it could just create a blue screen. It could create something that's not working.
So this triaging of a software rollout, which all companies have to worry about, can take hours or days for human analysts. This blueprint automates it: four different finely tuned LLMs and a bunch of NIMs take those hours and days down to minutes, to ensure that what you're about to ship is going to work on the target you're shipping to. And we're also happy to announce the adoption of this blueprint by Deloitte as part of their CyberSphere security service platform.
They do agentic analysis of open source software, specifically for the problem I was talking about: identifying threats and vulnerabilities before you ship an update, before you roll out from a sandbox into production. Definitely talk to Deloitte; they're seeing a dramatic reduction in the time to identify these vulnerabilities, these CVEs. Let's look at a little bit more detail of this particular blueprint in action.
The NIM agent blueprint for container security cuts vulnerability triage from days to seconds. While humans manually retrieve, integrate, and analyze hundreds of details, agentic systems can access tools and reason through full lines of thought to provide instant one-click assessments. This boosts productivity by allowing security analysts to focus on the most critical tasks while AI handles the heavy lifting of analysis, delivering fast and actionable insights.
I don't know if you saw it, but the counter on the left and the counter on the right were two orders of magnitude apart. Now, you may not need to deliver your software two orders of magnitude faster, but there's a hell of a lot of software out there that you might need to analyze two orders of magnitude faster. That's the value of this blueprint, and I encourage you to talk to Deloitte. Come to the booth and talk to us about what that software and that blueprint can do. It's part of a much wider ecosystem that you see behind me: a huge investment by NVIDIA in cybersecurity, which is very top of mind for the public sector.
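Mechanically, the core pattern the blueprint automates looks something like this toy sketch: give an LLM the CVE and the actual software environment, and ask for a contextual verdict. The endpoint, model id, CVE, and environment strings are all illustrative; the real blueprint chains several fine-tuned LLMs with retrieval and tool use rather than a single prompt.

```python
# Toy sketch of contextual CVE triage: is this vulnerability actually
# exploitable in this specific environment? Everything below is illustrative.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # illustrative hosted endpoint
    api_key=os.environ["NVIDIA_API_KEY"],
)

# Both of these strings are made up for illustration.
cve = "CVE-2024-00000: hypothetical buffer overflow in libexample < 2.4"
env = "Container image ships libexample 2.5; service has no network exposure"

verdict = client.chat.completions.create(
    model="meta/llama-3.1-70b-instruct",  # illustrative model id
    messages=[{
        "role": "user",
        "content": (
            f"Vulnerability: {cve}\nEnvironment: {env}\n"
            "Is this exploitable in this environment? Answer PATCH, WAIVE, or "
            "INVESTIGATE, with one sentence of justification."
        ),
    }],
)
print(verdict.choices[0].message.content)
```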
We don't just stop at the NIM level or the blueprint level, because this is a full-stack problem. We implement network security in hardware. We implement runtime security with our DOCA platform. And it certainly has NIMs and NeMo. One of the most important points is this: data at rest is encrypted, and when you transmit data, it's encrypted. Mostly safe. But LLMs are plain text. When you train an LLM, or a small language model, an SLM, it's plain text. Everything you've fed it is in plain text; that knowledge is in plain text. NVIDIA is the only provider to implement confidential computing to protect those LLMs, and that's the only way to protect your crown jewels and deliver secure and trustworthy AI.
People can try to attack the encryption and the data in motion, but I think we can all imagine that it's much easier to read a Word file than an encrypted file, no matter the level of encryption. NVIDIA is the only platform to implement confidential computing in hardware to protect the integrity of LLMs. So we're working with many partners to implement these zero-trust solutions. We trust no one. We trust no one. My family's probably watching and thinking, "Ah, shit, they don't even trust me." Ooh, crap! Sorry. I said it once. I knew I was gonna say it. I said it once. Oh, crap. All right. So, again, this is a one-hour special address.
We're not trying to dive deep, so please come talk to us at the booth, talk to our security experts about what we're doing in cybersecurity. Next industry. I'm just gonna hit a couple, not all of the industries that we have. Healthcare. I can't think of a more important focus area for us as a civilization: improving the lives of every child, every person, improving the lives of people with disabilities, enabling companies to ideally cure cancer, to come up with new vaccines for the new diseases that we know will happen. We've been preparing for this like we prepare for a lot of industries.
We build deep domain expertise, in manufacturing, in telecom, in cyber, and especially in healthcare, because we've been providing technology to healthcare companies around the world, several of which you see up there. Our GPUs are everywhere: if you had a CT scan for a new crown, that's NVIDIA. MRIs, that's NVIDIA. We really are working to improve the outcome. We're not directly affecting the outcome; we're working with the companies to help improve the outcome. Part of it is by improving early detection, and part of it is by improving the tools that we give to researchers. We have tools like MONAI, which you see on the screen, and BioNeMo, up there at the top, basically a gen AI platform for drug discovery. Parabricks for genomic analysis.
Nine out of ten, I think it is, nine out of ten genomics companies around the world use Parabricks. We have our Inception program for startups. Those Inception startups, the ones working to get in with some of the big companies out there, have received over two hundred national grants, from NIH, from NCI, from NSF, I think almost three hundred million dollars in grants. As all tides rise, it's not just about the workforce; it's about small businesses, it's about these startups. That's where innovation comes from. You get all these boats rising at the same time, you're not spending time pulling people out of the water; you're raising everything that's done.
So there's a lot of work that you'll see out on the floor in healthcare. We're also continuing to innovate and add to this full-stack solution. We're announcing two new NIMs, AlphaFold2-Multimer and RFdiffusion, and 90% of you are probably the same as me: "What the heck is that?" Well, you'll see. They're now publicly available on build.nvidia.com. In drug discovery, these help determine how proteins bind to their targets, and that helps us determine the therapies for different diseases. With AlphaFold, we can accurately predict what these structures look like within minutes, so scientists don't have to do all the trial and error of stepping into a lab.
And with the RFdiffusion NIM, we can do more accurate and accelerated design of de novo proteins. So we know what effectively binds; we know which are the most promising drugs, the most promising vaccines, something very relevant over the last few years. So let's take a look at one of the blueprints we have online, the virtual screening blueprint, and I'll try to do justice to what you're about to see. We start by choosing an amino acid sequence that we know plays a role in a specific disease, pick a disease. With AlphaFold2, we can quickly predict what that looks like in 3D. Then we pass it to MolMIM.
MolMIM is gonna generate the thousands of possible drug candidates that could be applied to that disease. We then pass that to a different NIM, DiffDock; this is all automated in the blueprint. DiffDock predicts the likelihood of those molecules binding to that target protein, and you'll see the generation of the molecules and the binding to the protein we're trying to target. We can do that at five or six times the speed at which it normally happens, with increased accuracy. So what used to be trial and error by scientists and researchers is now a very focused intelligence search through thousands of possibilities, accelerating the identification of candidates, whittling down, you know, separating the wheat from the chaff.
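For the curious, here is a rough sketch of what chaining those three steps can look like against hosted NIM endpoints. The endpoint paths and payload fields below are assumptions for illustration only; the actual request and response schemas are documented with each NIM's card on build.nvidia.com.

```python
# Sketch of the virtual-screening chain: fold the target, generate candidate
# molecules, then predict binding poses. Endpoint paths and payload fields
# are assumptions for illustration; check each NIM's card for real schemas.
import os

import requests

BASE = "https://health.api.nvidia.com/v1/biology"  # illustrative base URL
HEADERS = {"Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}"}


def call_nim(path: str, payload: dict) -> dict:
    r = requests.post(f"{BASE}/{path}", headers=HEADERS, json=payload)
    r.raise_for_status()
    return r.json()


target_sequence = "MKT..."  # amino-acid sequence of the disease target (placeholder)

# 1. AlphaFold2: predict the target protein's 3D structure in minutes.
structure = call_nim("deepmind/alphafold2", {"sequence": target_sequence})

# 2. MolMIM: generate candidate molecules for that target.
candidates = call_nim("nvidia/molmim/generate", {"smi": "CCO", "num_molecules": 1000})

# 3. DiffDock: predict how likely each candidate binds to the structure.
poses = call_nim("mit/diffdock", {"protein": structure, "ligands": candidates})
print("Top-ranked candidates:", poses)
```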
That's how we're gonna be able to continue to address new diseases in a much faster way. And it's not gonna take the 40 years that we thought. We know the human genome, we know how these things interact, and we can use AI on top of what's already been accelerated by humans, to further accelerate those humans in the development of these drugs. All right. I honestly think Jensen's backstage adding slides. Jensen? No. Okay. One more announcement. We've been working with ServiceNow quite a bit. I know several of the institutions in the room use ServiceNow. They've been using our NeMo and our NIMs to apply gen AI to improve summarization and to improve some of the customer service examples I talked about earlier.
They're announcing very specific analysis solutions for public sector, for telecom, and for healthcare. Public sector, telecom, and healthcare, all based on the software I've just shown you. They'll be rolling it out to these industries, and they'll continue to add industries. Again, I mentioned one size doesn't fit all, so the models that you pick, the NIMs that you pick, the data that you want to operate on, it's all different. ServiceNow does a lot of the heavy lifting of packaging this together, and then the entity that's gonna use it in the public sector feeds it the relevant data. So we're really happy to announce that ServiceNow is now digging into vertical-specific use cases, beyond the horizontal use of AI to increase productivity. Okay, we've talked about a lot of things, and skimmed over a lot of things very fast.
Everything that we just showed, and much more, is on build.nvidia.com. I encourage you to go there. I encourage you to look, to evaluate, to criticize, to give us crap. Use the blueprints. You'll see the model card there. You'll see reference code, documentation. You'll see walk-through examples on GitHub. This is all open. This is all open. NVIDIA doesn't make an application that tells you what to do. NVIDIA makes tools and reference architectures and reference workflows that enable you to do what you want to do. So please take a look and tell us what's missing, 'cause we're gonna continue to add to this every single day. Every single day. All right. I'm not gonna raise my arms like that, but: the era of Agentic AI, we've talked about that. We talked a lot about NIMs and NeMo. What's next?
The next wave is physical AI. Physical AI. Most of the focus, like I mentioned, has been on foundational models. It's been on prompting and response. It's been on building these AI agents. A good friend here in the front row regularly preaches that the biggest area where AI will have an impact is the physical world. When push comes to shove, yes, we need better vaccines, we need better drugs, we need safer appliances. How do we model that? How do we model how robots, autonomous vehicles, a self-driving car, operate? It is a massive opportunity. To date, we think we've primarily affected the IT industry. We've affected the electron part of this, and the next step is to affect the proton part, the physical world.
We know that robots can help save lives, can take on dangerous jobs to make it safer for manufacturing employees to work, can be assistants to people with disabilities. Multiple organizations are talking about four million jobs being created in the next several years, and we're not sure where that workforce will come from. Some of it might come from robots taking the jobs that are too dangerous. Some of it, and I hope a lot of it, comes from our ability to upskill our current workforce. But let's dig a little bit more into physical AI. If someone doesn't mind bringing me another Diet Coke, that would be great. Sorry. All right. Physical AI is a three-computer problem.
While it's a great show on Netflix, this is not a three-body problem, where if you don't get it right, bad crap happens. The three-computer problem is about making sure those three computers operate in tandem. We've talked a lot about the first computer: the creation of large language models, training, inferencing, new major blueprints. The way you then simulate those is the second computer, and that second computer is Omniverse. What you see in front of you is basically an operating system and a development platform. Thank you so much. It's used to build and deploy industrial and physical AI simulation applications. Before you roll out an AI, you wanna see how it's gonna operate, right?
And so it's essentially a feedback loop, so that AIs can then go fix the physical world. It makes sense, because the digital world is where these AIs were created, so we test them in that digital world before we turn them loose on the physical world. The reason we do that in such photorealistic detail is... Holy crap! I'm gonna need water. Let's say we have a robot. I think in our booth we've got Spot, the robot dog. Please go see him. If we didn't train Spot in a virtual world that looked like the real world, Spot wouldn't be able to function, right?
And if we had to train Spot in the real world, we'd have to fly all over the world to show him all the different things that could happen. But what if we could train Spot, train a robot, on the manufacturing process in the virtual world? Train it in the virtual world. Train it at scale. Don't train one autonomous machine, one car, one robot at a time in the physical world. Train millions of them. Train millions of them virtually, using that agentic AI knowledge they need to reason and learn. Train them in the second computer to simulate how they operate, throwing at them a multitude of conditions, bad and good. And when we have success, we take that neural net and we put it in the third computer.
We put it in the car, we put it in the submersible, we put it in the drone, we put it in the humanoid. So Omniverse is really our way to test and optimize designs and all the operational processes that go into producing these physical AIs. It's a virtual training ground. And people are building these simulations from digital twins like this one, which has been happening for decades, to the facilities where they manufacture, to factories, to a factory operating in a city, a city in a nation, a nation on a planet, and we'll eventually go beyond. So the second computer is Omniverse, and we've essentially entered an age where every piece of infrastructure is a robot, including this building that we're in, right?
Well, not right now, 'cause I would really like the AI to realize how blinding these lights are. But every building will essentially be an autonomous machine. We honestly believe that. And again, you don't train once; you continuously learn. If we're gonna model buildings, model machines, model robots, we've got to be continuously training them. We've historically been good at building this, right? We build great training and inference systems. Omniverse is an amazing product. I mean, I'm a visualization guy at heart; it blows your mind. But ironically, it's not about the pretty graphics for us. Even if we never saw those amazing graphics, it's about testing the AIs, letting them see that photorealistic world so they can learn.
We've typically been good about putting devices at the edge, whether in cameras or sensors; we've got GPUs in medical devices around the world. The real key is these things working together. Like the enterprise flywheel I showed you, you have to be constantly reinforcing and relearning what the AI is doing: bringing in new information with your flywheel, simulating it so you understand how it's gonna react in the environment where you want to deploy it, and then actually deploying it at the edge. And that edge can take a number of form factors. This is what we have to perfect, and this is why we say it's a three-computer problem. Not that it's only a problem; it's a problem and an opportunity that requires three computers working together effectively.
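The shape of that loop, reduced to a toy sketch: train a policy against a simulator, then export only the learned network to the edge device. This uses a generic gymnasium environment and an off-the-shelf RL library as stand-ins; it is not the Omniverse or Isaac Sim API, just the workflow in miniature.

```python
# Toy sketch of the "second computer" loop: train a policy entirely in
# simulation, then export only the learned network to the edge computer.
# A generic gymnasium environment stands in for the simulator here.
import gymnasium as gym
from stable_baselines3 import PPO

# Stand-in simulated world; a robot would instead train in a physics-accurate,
# photorealistic digital twin, across many randomized conditions.
sim = gym.make("CartPole-v1")

policy = PPO("MlpPolicy", sim, verbose=0)
policy.learn(total_timesteps=100_000)

# "Third computer": ship only the trained network to the robot, car, or drone.
policy.save("policy_for_edge_deployment")
```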
There are plenty of examples of people building digital twins in this vein. BMW, on the slide you see, to improve logistics. Continental, to improve predictive analytics for the products they make. Mercedes-Benz, to improve their assembly lines. Foxconn is the world's largest electronics manufacturer; they're training AI robot arms and autonomous machines to perform tasks in real-world situations that need to be extremely accurate. It's electronics manufacturing, one of the most audacious things we've taken on. And to tell us a bit more, let me bring my friend in. Tell us about Omniverse.
Demand for NVIDIA accelerated computing is skyrocketing as the world modernizes traditional data centers into generative AI factories. Foxconn, the world's largest electronics manufacturer, is gearing up to meet this demand by building robotic factories with NVIDIA Omniverse and AI. Factory planners use Omniverse to integrate facility and equipment data from leading industry applications like Siemens Teamcenter X and Autodesk Revit. In the digital twin, they optimize floor layout and line configurations and locate optimal camera placements to monitor future operations with NVIDIA Metropolis-powered vision AI.
Virtual integration saves planners on the enormous cost of physical change orders. During construction, the Foxconn teams use the digital twin as the source of truth to communicate and validate accurate equipment layout. The Omniverse digital twin is also the robot gym, where Foxconn developers train and test NVIDIA Isaac AI applications for robotic perception and manipulation, and Metropolis AI applications for sensor fusion.
In Omniverse, Foxconn simulates two robot AIs before deploying runtimes to Jetson computers. On the assembly line, they simulate Isaac manipulator libraries and AI models for automated optical inspection for object identification, defect detection, and trajectory planning. To transfer HGX systems to the test pods, they simulate Isaac Perceptor-powered Terabot AMRs as they perceive and move about their environment with 3D mapping and reconstruction. With Omniverse, Foxconn built their robotic factories that orchestrate robots running on NVIDIA Isaac to build NVIDIA AI supercomputers, which in turn train Foxconn's robots.
Oops, I almost chirped. Thanks, Jensen. I think that's a pretty good example of both the benefit of a digital twin and of that three-computer problem: the continual evolution of those robots to improve the physical AI. And it's one of the most demanding settings. You know, we've seen robots serving drinks at airports, at least I have, but in this environment, it has to be perfect. In manufacturing vehicles, it has to be perfect, and so that training loop is significant. On the subject of Omniverse, we're really happy to also announce a partnership with MITRE, a government-sponsored non-profit research organization. I hope MITRE's in the room. Hi, MITRE. MITRE, along with the Mcity test facility at the University of Michigan, an autonomous vehicle test facility.
So they're gonna develop and bring together a physical AV test platform with MITRE's virtual digital twin AV test platform. So in one place, researchers and developers of these autonomous machines, robots, humanoids, can train, test, and validate everything. Really happy to see them get together, and we're looking forward to amazing results. All right. Digital factories, physical AI for robots that operate all over the world. We can also use physical AI to analyze how we communicate. Real-world radio networks, 5G, 6G. We're now using NVIDIA Aerial RAN to improve the transmission, the placement of towers, and the utilization of these networks. You know, a year or two ago, we would have only talked about placement of towers. That's great.
You don't want the "Can you hear me now?" kind of thing. But it is a three-computer problem: both in the placement of the towers and in the ability to deliver the promise of 5G and 6G. We've got partners like Ericsson building digital twins for 5G deployment. Great work. Ansys is modeling electromagnetic solvers to help all these companies figure out interference effects. T-Mobile and SoftBank, who you see there on the screen, are building new AI-accelerated radio access network solutions. When you see RAN on the screen, and on the next one you will, that's Radio Access Network. It is truly a three-computer problem, and that edge computer is one of the most critical pieces. So we want to announce our NVIDIA Aerial RAN computer.
It's really the first common platform that doesn't just accelerate the RAN with AI, but delivers AI over the RAN. You know, most networks out there are built on purpose-built ASICs that do one thing: they try to accelerate signal processing, eliminate noise, and deliver signals over 5G. That's important to us as a community, but it's also important to emergency response, the people protecting our citizens, to make sure we have no gaps. Those networks today are about 30% utilized. 30% utilized, because they're focused entirely on the spectral efficiency of the networks. This is the first platform that accelerates the RAN, applies AI to improve its efficiency, and enables the RAN to deliver AI. It enables telecom companies, the entire telecom industry, to provide AI services at the edge.
More interesting consumer experiences, XR and VR experiences. To take advantage of and prepare for 6G, to help us plan what 6G needs to be. I mean, with Aerial RAN, we'll basically have this aggregated network of communications, a distributed grid, if you will, that makes it easy to determine the best spot from which to deliver an experience, whether it's an XR experience, an AI agent, whatever the case may be. But we shouldn't underestimate the value of increasing utilization from 30% to something greater. That's hard money. That's hard money for the telecom industry. They're all in with this. And so we're happy to announce that we've got plenty of adopters for our Aerial RAN computer for this massive telecom industry.
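To put rough numbers on that hard money, here's a toy sketch, with purely illustrative figures, of what lifting utilization means: latency-critical radio workloads always win the slot, and anything they don't claim can be backfilled with AI jobs.

```python
# Toy arithmetic for the utilization argument: if a RAN node's accelerated
# compute is only ~30% busy serving radio workloads, the idle ~70% can host
# AI inference at the edge. Numbers and names are illustrative.

RAN_UTILIZATION = 0.30          # the speaker's figure for today's networks
NODE_CAPACITY_GPU_HOURS = 24.0  # assumed: one node-day of GPU time

idle_gpu_hours = NODE_CAPACITY_GPU_HOURS * (1.0 - RAN_UTILIZATION)
print(f"idle capacity per node per day: {idle_gpu_hours:.1f} GPU-hours")

# A minimal scheduler sketch: radio slots always win; AI jobs backfill.
def schedule(slots_busy_with_ran: list, ai_jobs: list) -> list:
    plan, jobs = [], iter(ai_jobs)
    for busy in slots_busy_with_ran:
        if busy:
            plan.append("RAN")               # latency-critical, preemptive
        else:
            plan.append(next(jobs, "idle"))  # backfill with AI inference
    return plan

print(schedule([True, False, False, True, False],
               ["vision-agent", "xr-render", "llm-serve"]))
# -> ['RAN', 'vision-agent', 'xr-render', 'RAN', 'llm-serve']
```

Every backfilled slot is revenue on hardware the operator already paid for. Okay. A little bit more on Omniverse, and then we'll close.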
So, we've gone from factories, and robots that operate in those factories, to how we communicate in our world. We've talked a lot at other conferences about how people are using Omniverse to better fight wildfires, to better predict climate. A couple of examples up there on the screen: the Space Center Orbital digital twin from Lockheed Martin, which is sitting in front of you, and The Weather Company's brand new, very high resolution weather forecasting system using Omniverse. All of this is about enabling climate researchers and companies to improve their services and help protect our citizens. NVIDIA ourselves have been building a digital twin to help do that. A digital twin of the Earth.
Space Force talked to us a couple of years ago, and they asked us to help build a digital twin of the universe. Which is... possible. I'm not dissing anybody from Space Force, because I wanna work on that one. I really do. Before I die, I wanna work on that one. But you've got to start somewhere. You start with where things are done, where people work. You start with how people communicate. You start with problems you want to solve. You start with a planet. You start with things orbiting our planet, and then you go beyond, as we learn more and more about our universe.
But studying our climate, much like healthcare, is extremely important. Look at Helene. Look at Milton. Greg, you probably want to sell your house in Florida soon, but it's okay. This exercise to build Earth-2 is one of the biggest challenges I think humanity's ever taken on: a physically accurate digital twin of our planet. Today we're mainly using it for climate, and we've got plenty of examples. I'm not gonna go into the details here; I'll cover them in this cool video. So let's tee that up.
NVIDIA Earth-2 is a platform where climate experts can import data from multiple sources, fusing them together for analysis using NVIDIA Omniverse. Here, imagery from the NOAA GOES satellite shows Hurricane Helene forming in the Gulf of Mexico on September twenty-first, twenty twenty-four. In another part of the globe, over Greece in August twenty twenty-four, we can see how terrain data from Planet Labs can be fused with the thermal information from OroraTech's infrared detections. This combined view of different types of sensors is insightful for wildfire detection and mitigation. NVIDIA Earth-2 brings together the power of simulation, AI, and visualization to empower the climate tech ecosystem.
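As a rough illustration of that kind of sensor fusion, here's a minimal sketch: two layers resampled to a shared grid, with thermal detections thresholded and annotated with terrain context. The arrays, units, and threshold are illustrative stand-ins, not the Earth-2 data model.

```python
import numpy as np

# Minimal sketch of fusing two sensor layers on a shared grid -- the kind
# of terrain + thermal overlay described above. Everything is synthetic.

rng = np.random.default_rng(0)

# Assume both layers are already resampled to the same lat/lon grid.
terrain_m = rng.uniform(0, 2000, size=(100, 100))    # elevation, meters
thermal_k = 300 + rng.normal(0, 2, size=(100, 100))  # brightness temp, K
thermal_k[40:43, 60:62] += 80                        # injected hotspot

HOTSPOT_K = 330.0                                    # assumed threshold
hotspots = thermal_k > HOTSPOT_K

# Fused product: hotspot pixels annotated with terrain context, so crews
# know not just where a fire is but what elevation it sits at.
rows, cols = np.nonzero(hotspots)
for r, c in zip(rows, cols):
    print(f"hotspot at cell ({r},{c}): {thermal_k[r, c]:.0f} K, "
          f"elevation {terrain_m[r, c]:.0f} m")
```

The value is in the combined view: neither layer alone tells a fire crew what the fused one does.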
Cool stuff. I wanna talk a little bit about Helene. I think we did a disservice, in this case, to the work we still have to do. We alluded to the ability to bring in multiple sources of sensor data. You know, Earth-2 can ingest a variety of observational data: satellite sensors, ground sensors, human feedback, knowledge from weather models, and the like. All we showed you there was that Hurricane Helene was forming. Next time I show this to you, I want to show you how we should have portrayed it: looking back in time and seeing where our weather models have been off, and training AI to figure out not just where they may have been off, not just stopping at the coast, but forecasting the inland impacts.
We do a fantastic job of building weather models that are mostly accurate, but conditions are changing. As variables change in those prediction models, we need to know that they're changing. We could look at the history of all the hurricanes in the world, but in each one of those there are subtle differences. We need all of those sensors to help an AI inform us that this one might be different, that this one's not coming off the coast of Africa with a couple weeks of notice, that this one might take a different path for these reasons, and at least give us a little bit more time.
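One simple pattern for catching that kind of drift is residual learning: fit the gap between past forecasts and what actually happened as a function of the changing conditions, then apply that learned correction to new forecasts. The sketch below uses entirely synthetic data; the variables, drivers, and values are illustrative assumptions, not any production Earth-2 model.

```python
import numpy as np

# Toy residual-learning sketch: learn a physics model's systematic error
# from past forecast-vs-observation pairs, then correct a new forecast.

rng = np.random.default_rng(1)

# Historical record: model forecast plus two changing conditions, e.g.
# sea-surface temperature anomaly and wind shear (assumed drivers).
n = 500
forecast = rng.uniform(940, 1000, n)   # e.g. central pressure, hPa
sst_anom = rng.normal(0.5, 0.3, n)
shear = rng.normal(10, 3, n)

# Ground truth the model missed: intensity depends on the SST anomaly.
observed = forecast - 8.0 * sst_anom + rng.normal(0, 1.0, n)

# Fit the residual (observed - forecast) from the conditions.
X = np.column_stack([np.ones(n), sst_anom, shear])
residual = observed - forecast
coef, *_ = np.linalg.lstsq(X, residual, rcond=None)

# Correct a new forecast made over unusually warm water.
new_forecast, new_sst, new_shear = 960.0, 1.4, 9.0
correction = coef @ np.array([1.0, new_sst, new_shear])
print(f"raw forecast: {new_forecast:.1f} hPa, "
      f"corrected: {new_forecast + correction:.1f} hPa")
```

The learned coefficients recover the bias the physics model missed, which is exactly the "this one might be different" signal the sensors are there to provide.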
So we really want to use Earth-2 to help enable climate science and climate improvement, to build geospatial systems that serve as AI interrogators and AI simulators as we introduce new things, as the world continues to produce carbon dioxide and the like. It's an ambitious task, we're working with some great partners, and we'd love to talk more about it at our booth. Okay, so lots going on, looking inward, looking at our planet. It's all cool. You know, when I was a little kid, I wanted to be an astronaut. I ended up designing airplanes, so I came close. I wanted to be an astronaut 'cause I couldn't get it into my head where the galaxy ended, where the universe ended. Was it under Horton's fingernail? Something had to be beyond.
So we've been constantly striving to figure out what lies ahead. You know, things like NASA's Lucy project are helping people simulate communications. So if we do set up operations on the moon, we look for dead spots. We look to leave no man or woman behind. We look to make sure we have the dark side of the moon, if you will, covered. We can model that in Omniverse so that we can build the right thing when we get there. You know, we've got the James Webb Space Telescope. What previously took astronomers something like 10 years, surveying our galaxy, we can now do in days with an accelerated pipeline. CERN is using accelerated computing to look at the proton collisions occurring every day. Then there's SETI.
So today, we're announcing that SETI is releasing their first real-time search for fast radio bursts. And you're probably saying, "What are fast radio bursts?" SETI's mission is dedicated to searching for life in the universe. They use our Holoscan edge platform, that third computer, to listen to signals from deep space. Fast radio bursts are one of those kinds of signals, but they happen so fast: fractions of a second, even fractions of a millisecond. So until now, the only way they knew one happened was to go back through the stored telescope array data. They'd see it happened, but it's long gone. It's long gone, and we have no idea where it came from. So the ability to spot these things in real time really accelerates our ability to find the source, identify the source, and analyze these frequencies.
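The core of a real-time search like that, reduced to a toy: watch the incoming power stream against a running noise baseline and flag anything that spikes many sigma above it, fast enough to act while the burst is still traceable. Real pipelines also dedisperse the signal across frequency channels first; the parameters and data below are illustrative.

```python
import numpy as np

# Toy streaming burst detector: exponential moving baseline plus a
# k-sigma threshold over synthetic radio power samples.

rng = np.random.default_rng(2)

SAMPLE_US = 100           # assumed: one power sample per 100 microseconds
THRESHOLD_SIGMA = 7.0     # assumed detection threshold

stream = rng.normal(1.0, 0.1, 50_000)   # background noise power
stream[31_337:31_347] += 2.5            # injected ~1 ms burst

mean, var, alpha = 1.0, 0.01, 0.001     # running baseline state
for i, x in enumerate(stream):
    sigma = var ** 0.5
    if x > mean + THRESHOLD_SIGMA * sigma:
        print(f"burst candidate at t = {i * SAMPLE_US / 1000:.1f} ms "
              f"({(x - mean) / sigma:.1f} sigma)")
        continue  # don't let the burst itself pollute the noise baseline
    mean = (1 - alpha) * mean + alpha * x
    var = (1 - alpha) * var + alpha * (x - mean) ** 2
```

Running this in the stream, rather than over archived array data, is the difference between "it happened" and "point the telescopes now."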
You can speculate about what these frequencies might be, but the amazing thing about them is not just that they're radio signals coming from distant galaxies. It's that the energy coming from these bursts is brighter than some of the galaxies in our universe: a fast radio burst lasting milliseconds is producing more energy than some entire galaxies produce. So I'm really excited about that. As a space guy, I'm excited about that. You might be saying, "So what?" But it's a massive application to help us understand not just our planet, but what might be out there as we go beyond the moon, beyond Mars, and further out. So great work with SETI. Really happy to work with them and continue the exploration of space. All right.
This is probably my favorite slide. It's probably the most boring slide, but I think it's the most important one. We're here in DC to talk to policymakers, not just to give presentations like this. To talk to policymakers, to talk to companies, to talk to integrators. We wanna make sure that we're partnering with state and local agencies, with higher education institutions, with learning communities, especially significantly underserved learning communities. We would be doing an injustice if AI only helps a few. I said before, a high tide raises all boats. We have to raise all boats. Some of that will happen because AI is gonna improve your lives: your ease of finding information, your ease of doing a job, making things more successful.
But we also want people to benefit by understanding data science, understanding quantum computing, understanding how to be a systems administrator for these AI factories. We're conducting training all over the world. We've trained over six hundred thousand people, and more are being trained right now as we speak. This is one of the most important things we can do. I'd encourage you to talk to us if you don't have a training program with us, whether you're a municipality, an agency, or a higher education institution. That's part of our mission. And after the keynote, Greg Estes is gonna be giving a talk in Salon E about the upskilling and reskilling of workers that we believe is needed to continue driving economic development in this dawn of a new industrial revolution.
So it's a very, very important activity, and I encourage you to talk with us. So what did we say? We talked about the virtuous cycle of CUDA. We talked about how accelerated computing has moved so many things along, how CUDA became this massive open ecosystem. We talked about the need to look at computing in a different way: not just at a chip level, not just at a node level, but at a data-center scale, about building AI factories. These AI factories produce product. Those products are tokens. Tokens are intelligence. Intelligence is money, money that's gonna revolutionize every industry on this planet. We're here in DC 'cause we want to help the U.S. lead in AI, in every aspect of it. We need to help the U.S. lead in AI.
We owe it to our citizens to make sure they benefit from AI. So please work with us. Please come talk to us about the democratization of AI, just as general purpose computing was democratized 60 years ago. Please work with us on upskilling our workforce for tomorrow. Talk to us about what can be next. There are so many possibilities. Where are your biggest pain points? Where can we help? All right.
Don't forget the expo. 90 partners, 30-plus startups are out there, all talking about gen AI, agentic AI, robotics, and the like. Stop by our booth. Don't forget to spot the dog. Talk to James, our digital avatar. Get a demo of it, too. I wanna thank you all for coming. Thank you for listening. We skimmed over a lot of stuff just to give you seeds of information, and we look forward to spending the next few days talking to you about it. So thank you all very much.