Hello, Mumbai! Wow, it's so great to be here. Mumbai, so much happening. India is, as you know, very, very dear to the world's computer industry: central to the IT industry, at the core of the IT of just about every single company in the world. My industry, your industry, the one we've built over the last several decades, is going through fundamental change, seismic change, tectonic shifts. Let's talk about that today. But before we start, let me thank all of our partners, our incredible partners that we're working with here in India to transform the IT industry together. I'm delighted that all of you have joined us today. There are two fundamental shifts happening at the same time. This hasn't happened since 1964, the year after my birth. It wasn't because of my birth.
But in 1964, the IBM System/360 introduced the world to the concept of IT, the IT industry as we know it, and introduced the idea of general-purpose computing. It described a central processing unit, a CPU, I/O subsystems, multitasking, and the separation of hardware and application software through a layer called the operating system. IBM described family compatibility for applications, so that you could benefit from the installed base of your hardware to run your software over a long period of time. It described architectural compatibility across generations, so that the investment you make in software, and in learning to use that software, is not squandered every single time you buy new hardware. IBM recognized in 1964 the importance of the installed base, the importance of software investment, the importance of building computers that run the software. Architectural discipline, all described in 1964.
I've just described today's computer industry, the same industry that the Indian IT industry was built on. General-purpose computing as we know it has existed for 60 years, until now. For the last 30 years, we've had the benefit of Moore's Law, an incredible phenomenon: without changing the software, the hardware continues to improve in an architecturally compatible way, and the performance of that software doubles every year. As a result of doubling performance every year, depending on your applications, you're reducing your cost by a factor of two every single year. It's the most incredible deflationary force of any technology the world's ever known. That deflation, that cost reduction, made it possible for society to use more and more IT.
As we continued to consume IT, as we continued to process more data, Moore's Law made it possible to keep driving down cost, democratizing computing as we know it today. Those two events, the invention of the System/360 and Moore's Law, with the Windows PC, drove what is unquestionably one of the most important industries in the world, IT. Every single industry has subsequently been built on top of it. But we know now that the scaling of CPUs has reached its limit. We can't continue to ride that curve. The free ride of Moore's Law has ended. We now have to do something different, or the deflation ends, and instead of enjoying deflation we experience inflation, computing inflation, and that's exactly what's happening around the world.
We no longer can afford to do nothing in software and expect that our computing experience will continue to improve, that costs will decrease, and that we can continue to spread the benefits of IT and take on greater and greater challenges. We started our company to accelerate software. Our vision was that there are applications that would benefit from acceleration if we augmented general-purpose computing. We take the workload that is very compute-intensive, we offload it, and we accelerate it using a programming model we invented called CUDA, which made it possible for us to accelerate applications tremendously. That acceleration benefit has the same qualities as Moore's Law. For applications that were impossible or impractical to perform using general-purpose computing, we have the benefits of accelerated computing to realize that capability. For example, computer graphics.
Real-time computer graphics was made possible because NVIDIA came into the world and made possible this new processor we call the GPU. The GPU was really the first accelerated computing architecture: running CUDA, running computer graphics, a perfect example. We democratized computer graphics as we know it. 3D graphics is now literally everywhere. It can be used as a medium for almost any application. But we felt that, long term, accelerated computing could be far, far more impactful, and so over the last 30 years, we've been on a journey to accelerate one domain of applications after another. The reason this has taken so long is simply this: there is no magical processor that can accelerate everything in the world, because if you could build one, you would just call it a CPU.
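To make the offload model concrete before going on, here is a minimal sketch of the pattern, not an NVIDIA example: it assumes a CUDA-capable GPU and the CuPy library, with a placeholder FFT standing in for the compute-intensive workload.

```python
# Minimal sketch of the accelerated-computing offload pattern:
# general-purpose work stays on the CPU, the compute-intensive kernel
# is offloaded to the GPU, and the result comes back.
# Assumes a CUDA-capable GPU and the `cupy` package; the FFT workload
# is an illustrative placeholder.
import numpy as np
import cupy as cp

def prepare_on_cpu(n):
    # General-purpose setup stays on the CPU.
    return np.random.rand(n).astype(np.float32)

def accelerate_on_gpu(x_cpu):
    x_gpu = cp.asarray(x_cpu)           # offload: copy the data to GPU memory
    y_gpu = cp.abs(cp.fft.fft(x_gpu))   # compute-intensive kernel runs on the GPU
    return cp.asnumpy(y_gpu)            # copy the result back to the CPU

result = accelerate_on_gpu(prepare_on_cpu(1 << 20))
print(result.shape)
```

The same data-in, compute, data-out pattern underlies every library discussed next.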
You need to reinvent the computing stack, from the algorithms to the architecture underneath, and connect it to applications on top, in one domain after another. Computer graphics was the beginning, but we've taken this CUDA architecture to one industry after another after another. Today, we accelerate so many important industries. cuLitho, computational lithography, is fundamental to semiconductor manufacturing. Simulation and computer-aided engineering. Even 5G radios, where we've recently announced partnerships to accelerate the 5G software stack. Quantum computing, so that we can invent the future of computing with classical-quantum hybrid computing. Parabricks, our gene-sequencing software stack. cuVS, for one of the most important things every single company is working on: going from databases to knowledge bases so that we can create AI databases. Using cuVS, we can vectorize all of your data. cuDF, for DataFrames.
A data frame is essentially another word for structured data, and SQL acceleration is possible with cuDF; a sketch of what that looks like follows below. In each of these libraries, we're able to accelerate the application 20, 30, 50 times. Of course, it takes a rewrite of the software, which is the reason it's taken so long. In each of these domains, we've had to work with the industry, with our ecosystem of software developers and customers, to accelerate the applications for their domain. cuOpt, one of my favorites, is a combinatorial optimization application, a very compute-intensive one, for example, the traveling salesperson problem. Every supply chain, every driver-route combination, those applications can be accelerated with cuOpt, an incredible speedup.
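Here is the cuDF sketch promised above, a hypothetical example of my own rather than anything from the talk: cuDF mirrors the pandas API, so the same structured-data aggregation ports to the GPU almost unchanged (assumes RAPIDS cuDF is installed and a GPU is available; the columns and numbers are invented).

```python
# Hypothetical sketch: the same DataFrame (structured data) aggregation
# on the CPU with pandas and on the GPU with RAPIDS cuDF, whose API
# mirrors pandas. Assumes cuDF is installed and a CUDA GPU is present;
# the data is invented for illustration.
import numpy as np
import pandas as pd
import cudf

data = {
    "region": ["north", "south", "east", "west"] * 250_000,
    "sales": np.arange(1_000_000, dtype=np.float64),
}

cpu_df = pd.DataFrame(data)
gpu_df = cudf.DataFrame(data)

# Identical call; only where it executes changes.
print(cpu_df.groupby("region")["sales"].mean())
print(gpu_df.groupby("region")["sales"].mean())
```

The point is the economics described here: the speedup comes without changing how the application code reads, because the library underneath has been rewritten for the GPU.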
Modulus, teaching AI the laws of physics, not just to predict the next word, but to predict the next moment in time of fluid dynamics, particle physics, and so on. And of course, one of the most famous application libraries we've ever created, cuDNN, made it possible to democratize artificial intelligence as we know it. These acceleration libraries now cover so many different domains that it appears accelerated computing is used everywhere. But that's simply because we've applied this architecture, one domain after another, until we've covered just about every single industry. Now, accelerated computing, CUDA, has reached the tipping point. About a decade ago, something very important happened. Most of you have seen the same thing. AlexNet made a gigantic leap in the performance of computer vision.
Computer vision is a very important field of artificial intelligence. AlexNet surprised the world with how much of a leap it produced. We had the benefit of taking a step back and asking ourselves: what are we witnessing? Why is AlexNet so effective? How far can it scale? What else can we do with this approach called deep learning? And if we were to find ways to apply deep learning to other problems, how does it affect the computer industry? And if we believed in that future and were excited about what deep learning could do, how would we change every single layer of the computing stack so that we could reinvent computing altogether? Twelve years ago, we decided to dedicate our entire company to pursuing this vision. It is now twelve years later.
Every single time I've come to India, I've had the benefit of talking to you about deep learning, about machine learning. And I think it's very, very clear now that the world has completely changed. Now, let's think about what happened. The first thing that happened, of course, is how we do software. Our industry is underpinned by the method by which software is made. The way that software was done, call it software 1.0, was that programmers would code algorithms, which we call functions, to run on a computer, and we would apply input to them to produce an output. Somebody would write Python or C or Fortran or Pascal or C++, coding algorithms that run on a computer. You apply input to it, and output is produced. The classical computing model, one that we understood quite well.
And it, of course, created one of the largest industries in the world right here in India: the production of software. Coding, programming, became a whole industry. This all happened within our generation. However, that approach to developing software has been disrupted. It is now not coding, but machine learning: using a computer to study the patterns and relationships in massive amounts of observed data and essentially learn the function that predicts it. We are essentially training a universal function approximator, using machines to learn, from examples of the expected output, the function that produces it. So this is the shift: from software 1.0, with human coding, to software 2.0, using machine learning. Notice who is writing the software. The software is now written by the computer. And after you're done training the model, you inference the model.
You then deploy that function, that large language model, that deep learning model, that computer vision model, that speech-understanding model, as a neural network on the GPU, where it can make a prediction given new, unobserved input. Notice that this way of doing software is fundamentally based on machine learning. We have gone from coding to machine learning, from developing software to creating artificial intelligence, and from software that prefers to run on CPUs to neural networks that run best on GPUs. This, at its core, is what happened to our industry in the last 10 years. We have now seen the complete reinvention of the computing stack. The whole technology stack has been reinvented. The hardware, the way that software is developed, and what software can do are now fundamentally different.
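Here is a toy illustration of that shift, a sketch of mine rather than anything shown on stage: in software 1.0 a human writes the function; in software 2.0 the machine learns an approximation of the same function from observed input/output pairs.

```python
import numpy as np

# Software 1.0: a human codes the algorithm (the function) explicitly.
def f(x):
    return 3.0 * x + 1.0

# Software 2.0: the machine learns the function from observed data.
x = np.random.rand(1000)
y = f(x) + np.random.normal(0.0, 0.01, size=x.shape)  # observations

w, b = 0.0, 0.0
for _ in range(2000):              # gradient descent "writes" the software
    pred = w * x + b
    w -= 0.1 * 2 * np.mean((pred - y) * x)
    b -= 0.1 * 2 * np.mean(pred - y)

print(f"learned w={w:.2f}, b={b:.2f}")  # approximately 3.00 and 1.00

# Inference: apply the learned function to new, unobserved input.
print(w * 0.5 + b)
```

A large language model is the same idea at a vastly larger scale: billions of learned parameters instead of two.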
We dedicated ourselves to advancing this field, and so this is what we now build. When I first came to India, we were building GPUs that fit on a PCI Express card that goes into your PC. This is what a GPU looks like today. This is Blackwell. An incredible system designed to study data at an enormous scale. Yeah, thank you. A massive system designed to study data at an enormous scale so that we can discover patterns and relationships and learn the meaning of the data. This is the great breakthrough. In the last several years, we have learned the representation, the meaning, of words and numbers and images and pixels and videos, chemicals, proteins, amino acids, fluid patterns, particle physics. We have now learned the meaning of so many different types of data.
We have learned how to represent information in so many different modalities. And not only have we learned the meaning of it, we can translate it to another modality. One great example, of course, is translating English to Hindi. Translating a large body of English text into shorter English: summarization. From pixels to words: image recognition. From words to pixels: image generation. From images and videos to words: captioning. From words to proteins: used for drug discovery. From words to chemicals: discovering new compounds. From amino acids to proteins: understanding the structure of proteins. These fundamental ideas, essentially a universal translator of information from any modality to any other modality, have led to a Cambrian explosion in the number of startups in the world. They're applying the basic method I just described. If I could do this and that, what else can I do?
If I can do that and this, what else can I do? The number of applications has clearly exploded. In the last two or three years, tens of thousands of generative AI companies have been founded around the world, and tens of billions of dollars have been invested in this field, all because of this one instrument that made it possible for us to study data at enormous scale. I just want to say that building the Blackwell system involves, of course, the Blackwell GPU, but it takes seven other chips as well. TSMC manufactures all of these chips, and they're doing an extraordinary job ramping the Blackwell system. Blackwell is in full production, and we're expecting to deliver in volume in Q4. So this is basically Blackwell.
Now, this is one of the things that's really incredible about the system. Let me show it to you. Nothing's easy this morning. This is NVLink, and it goes across the entire backplane of a rack of GPUs. These GPUs are all connected from top to bottom using NVLink, driving these incredible SerDes, the world's longest SerDes for copper. It connects all of these GPUs together: 72 dual-GPU Blackwell packages, 144 GPUs, connected together so it's one giant GPU. If I were to spread out all of the chips this connects together, it would essentially be a GPU about this big. But it's obviously impossible to build a GPU that large.
So we break it up into chunks, chips built at the reticle limit with the most advanced technologies, and we connect them together using NVLink. This is the NVLink backbone. You're looking at all of the GPUs being connected. That's the Quantum switch that connects all of these GPUs together on top, or Spectrum-X, if you would like to have Ethernet. And what connects this together, this is like 50 pounds. I'm just demonstrating how strong I am. This is connected to this switch, one of the most advanced switches the world's ever built. Now, all of this together represents Blackwell. And then it runs the software on top: the CUDA software, the cuDNN software, Megatron for training the large language models, TensorRT for doing the inference, TensorRT-LLM for doing distributed multi-GPU inference for large language models.
And then on top of that, we have two software stacks. One is NVIDIA AI Enterprise, which I'll talk about in a second, and the other is Omniverse; I'll talk about both of those in a second. This job is surprisingly rigorous. So this is the Blackwell system. This is what NVIDIA builds today. For those of you who have known us for a very long time, it's really quite surprising how the company has transformed. But literally, we reasoned from first principles about how computing was going to be done in the future. And this is Blackwell. Now, the Blackwell system is extraordinary. Of course, the computation is incredible. Each rack is 3,000 pounds and draws 120 kilowatts, 120,000 watts. The density of computing is the highest the world's ever known. And what we're trying to do is learn larger and smarter models.
It's called the scaling law. The scaling law comes from an empirical observation, from measurements: the more information you want to learn from, the larger the model has to be; and the larger the model you would like to train, the more data you need. Each year, we're increasing the amount of data and the model size each by about a factor of two, which means that every single year the computation, which is the product of those two, has to increase by a factor of four. Now, remember, there was a time when the pace of Moore's Law, two times every year and a half, or 10 times every five years, 100 times every 10 years, defined the industry.
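The compounding is worth spelling out. A quick back-of-the-envelope using the numbers from the talk:

```python
# Back-of-the-envelope for the training scaling law, using the talk's
# numbers: data and model size each roughly double per year, and training
# compute scales with their product.
data_growth = 2.0                             # per year
model_growth = 2.0                            # per year
compute_growth = data_growth * model_growth   # 4x per year

years = 10
print(f"{compute_growth ** years:,.0f}x")     # ~1,048,576x over a decade
print(f"{2 ** (years / 1.5):,.0f}x")          # Moore's Law pace (2x per 1.5 years): ~100x
```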
We are now moving technology at a rate of four times every year. Four times every year, over the course of 10 years, is incredible scaling. And we continue to find that AI gets smarter as we scale up the training. The second thing we've discovered recently, and this is a very big deal, concerns what happens after you're done training the model. All of you have used ChatGPT. When you use ChatGPT, it's one-shot. You give it a prompt. Instead of writing a program to communicate with a computer, today you write a prompt. You just talk to the computer the way you talk to a person. You describe the context, you describe what it is you're querying about. You could ask it to write a program for you.
You could ask it to write a recipe for you, whatever you would like. And the AI processes it through a very large neural network and produces an answer, one word after another. In the future, and starting with Strawberry, we realized that intelligence is not just one-shot; intelligence requires thinking. And thinking is reasoning, and maybe path planning, and maybe running some simulations in your mind, reflecting on your own answers. As a result, thinking produces higher-quality answers. And so we've now discovered a second scaling law, a scaling law at the time of inference: the longer you think, the higher the quality of the answer you can produce. This is not illogical. It is very intuitive to all of us.
If you were to ask me what my favorite Indian food is, I would tell you chicken biryani. Okay? I don't have to think about that very much, and I don't have to reason about it. I just know it. And there are many things like that. For example, what's NVIDIA good at? NVIDIA is good at building AI supercomputers. NVIDIA is great at building GPUs. Those are things you know, encoded into your knowledge. However, there are many things that require reasoning. For example, suppose I had to travel from Mumbai to California, and I want to do it in a way that allows me to enjoy four other cities along the way. I got here at 3:00 a.m. this morning. I got here through Denmark. And right before Denmark, I was in Orlando, Florida.
Before Orlando, Florida, I was in California. That was two days ago. I'm still trying to figure out what day we're in right now. Anyways, I'm happy to be here. If I were to tell it, I would like to go from California to Mumbai. I would like to do it within three days. I give it all kinds of constraints about what time I'm willing to leave and able to leave, what hotels I like to stay at, so on and so forth, the people I have to meet. The number of permutations of that, of course, is quite high. The planning of that process, coming up with an optimal plan, is very, very complicated. That's where thinking, reasoning, planning comes in. The more you compute, the higher quality answer you can provide.
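The combinatorics behind that planning problem are easy to check. A small sketch, with made-up numbers:

```python
# Why planning needs compute: the orderings of intermediate stops grow
# factorially, and departure-time, hotel, and meeting constraints multiply
# the search space further. All numbers here are illustrative.
import math

stops = 4                            # cities to enjoy along the way
print(math.factorial(stops))         # 24 orderings of the stops alone

options_per_leg = 10                 # e.g. flight/hotel/timing choices per leg
legs = stops + 1                     # origin -> 4 stops -> destination
print(math.factorial(stops) * options_per_leg ** legs)  # 2,400,000 candidate plans
```

This is the second scaling law in a nutshell: the more candidate plans a model can generate and evaluate at inference time, the better the plan it can return.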
And so we now have two fundamental scaling laws driving our technology development: first for training, and now for inference. The number of foundation model makers has more than doubled since the beginning of Hopper. More companies realize that fundamental intelligence is vital to them and that they have to build foundation model technology. And second, the amount of computation necessary to train these models has increased by 20, 30, 40X, because of the size of the models, but also because of multi-modality, reinforcement learning, and synthetic data generation. The amount of data we use to train these models has grown tremendously. That's one reason. The other reason, of course, is that Blackwell is also used for generating tokens at incredible speeds.
And so together, all of these factors have led to demand for Blackwell being incredibly high. Let's talk now about how we're going to use this technology. The headline I saw, I thought, was really good: NVIDIA is AI in India. Aside from the letter V, you could use the letters of NVIDIA to create the rest of that sentence, which I thought was really cool. You don't know this story, but in 1993, we had to come up with a name for our company. The reason we chose NVIDIA, and I'll do the extreme short version, the reason I chose NVIDIA in the end, was that I really loved that NVIDIA sounds like a mystical place. Invidia, NVIDIA. It sounded like a great place. And so if it turned out that computer graphics and accelerated computing didn't work out for us, we could do almost anything.
And so I'm just happy it worked out. Okay, so NVIDIA in India. We have a really rich ecosystem here. The first thing you have to realize is that in order to build an AI ecosystem in any industry or any country, you have to start with the infrastructure. We announced that Yotta, E2E, Tata Communications, and our other partners are joining us to build fundamental computing infrastructure here in India. In just one year's time, by the end of this year, we will have nearly 20 times more compute here in India than a little over a year ago. That's the amount of infrastructure coming here. So the first part of building an AI ecosystem is the AI infrastructure, just as the first part of the internet ecosystem was building the networking infrastructure.
Of course, the infrastructure of the internet consists of networking, the personal computer, the cloud, and the internet itself. In the case of AI, it starts with the AI computing infrastructure. The next part, the operating system of AI, is large language models, and we've worked with partners here in India to build the Hindi large language model. As you know, there are some 25 formal languages here in India, with apparently a new dialect every 1,500 kilometers, so you don't have to go very far before you need to train another model. This is the hardest language-model region in the world, and if anybody can do it, you can. Once India figures out how to create the Hindi large language model, you can figure it out for every other country. The next layer is the application layer above that.
Working with us to bring AI to India's ecosystem are, of course, AI-native companies, startups creating new applications made possible only by AI. And then there are our service partners, from Wipro to Infosys to TCS, working with us to take the AI models and the AI infrastructure out to the world's enterprises. Now, that's NVIDIA in India. I'm going to have Vishal, our country leader, join me on stage, because I would love for him to tell you about some of the companies we're working with here in India. Vishal. Vishal Dhupar. Okay, so I'm going to introduce a couple of other ideas. Earlier, I told you that we have Blackwell and all of the acceleration libraries we were talking about before. But on top of those, there are two very important platforms we're working on.
One of them is called NVIDIA AI Enterprise, and the other is called NVIDIA Omniverse, and I'll explain each of them very quickly. First, NVIDIA AI Enterprise. Large language models and fundamental AI capabilities have now reached a level where we're able to create what are called agents: large language models that understand the data being presented to them. It could be streaming data, video data, language data, data of all kinds. The first stage is perception. The second is reasoning: given its observations, what is the mission, and what is the task it has to perform? In order to perform that task, the agent breaks it down into steps of other tasks.
It reasons about what each step would take, and it connects with other AI models. Some of them are good at, for example, understanding PDFs. Maybe one is a model that knows how to generate images. Maybe one is able to retrieve information, semantic data, from a proprietary database. Each of these models is connected to the central reasoning large language model we call the agent. These agents are able to perform all kinds of tasks. Some of them might be marketing agents. Some are customer service agents. Some are chip design agents. NVIDIA has chip design agents all over our company, helping us design chips. Maybe they're software engineering agents. Maybe they run marketing campaigns or do supply chain management.
And so we're going to have agents helping our employees become super employees. These agents, or agentic AI models, augment all of our employees and supercharge them, making them more productive. Now, the way you would bring these agents into your company is not unlike the way you would onboard a new employee. You have to give them a training curriculum. You have to fine-tune them, teach them how to perform the skills and understand the vocabulary of your company. You evaluate them, and so there are evaluation systems. And you might guardrail them: if you're an accounting agent, don't do marketing; if you're a marketing agent, don't report earnings at the end of the quarter, and so on. And so each one of these agents is guardrailed.
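As a minimal sketch of what such an agent can look like in code, under assumptions of mine rather than NVIDIA's: NIM microservices expose OpenAI-compatible endpoints, so the sketch reuses the `openai` client against a hypothetical local URL, and the guardrail is a deliberately simple keyword check.

```python
# Minimal agentic-AI sketch: guardrail -> reason/plan -> (in a full system)
# route steps to specialist models. The endpoint URL, model name, and
# guardrail below are hypothetical placeholders, not official NVIDIA
# examples; NIM microservices expose OpenAI-compatible APIs, hence the
# openai client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

ALLOWED_TOPICS = ("accounting", "invoices", "expenses")  # toy guardrail

def guardrail_ok(request: str) -> bool:
    # An accounting agent shouldn't do marketing or report earnings.
    return any(topic in request.lower() for topic in ALLOWED_TOPICS)

def agent(request: str) -> str:
    if not guardrail_ok(request):
        return "Out of scope for this agent."
    # Reasoning step: the central LLM breaks the mission into steps.
    plan = client.chat.completions.create(
        model="example/llm",  # hypothetical model name served by a NIM
        messages=[
            {"role": "system", "content": "Break the task into numbered steps."},
            {"role": "user", "content": request},
        ],
    )
    # Each step would then be routed to a specialist model: PDF understanding,
    # retrieval from a proprietary vector database, image generation, and so on.
    return plan.choices[0].message.content

print(agent("Summarize last quarter's expenses by department."))
```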
That entire process we put into what is essentially an agent lifecycle suite of libraries, and we call it NeMo. Our partners are working with us to integrate these libraries into their platforms so that they can enable agents to be created, onboarded, deployed, and improved across the agent lifecycle; this is what we call NVIDIA NeMo. On the one hand, we have the libraries. On the other hand, what comes out as the output is an AI inference microservice we call a NIM. Essentially, this is a factory that builds AIs: NeMo is the suite of libraries that onboards and helps you operate the AIs, and ultimately, your goal is to create a whole bunch of agents. We have partners we're working with here in India, and Vishal, if you could, tell everybody about our ecosystem here.
Absolutely, Jensen. You know, the word that struck me as I was standing backstage was the word mystique. This is the mystique of India. Jensen was here exactly 12 months back, and he asked me a pretty profound question: the rich tapestry of India, how are you going to encode it? And it all began with the infrastructure. As mentioned, in just 12 months, today we have computing from Yotta, which has built state-of-the-art infrastructure. The Tatas are going live. E2E has been giving us excellent computing infrastructure for a long, long time. All this computing helped us leapfrog toward solving one of India's largest problems: communication. Like Jensen said, we speak so many languages. He said every 1,500 kilometers, but as all of you know, every 50 kilometers we change our dialect. We don't only speak English.
We speak Hinglish. And if you are from the South, there'll be a little bit of Malayalam mixed in too. So how do we make this really work? That is the work of some of our partners. Sarvam is a classic example. Sarvam started their efforts to help India talk. They decided they were going to do voice-to-voice, and while doing voice-to-voice, they had to understand how this language works, which is multimodal, and how to make sure it performs. And they came into existence pretty quickly because the infrastructure was available to us. Similarly, we saw projects like BharatGPT, work that has been done predominantly in academia. Academia in India has been rich with ideas, and every time they wanted an idea translated into reality, they needed infrastructure.
Today, the work we are doing with the IITs and with different organizations is all a result of coming together to solve the critical issues India has. And not only was language getting solved; we also realized very quickly that India has many other mega challenges.
No one loves India more than Vishal.
A well-spoken Indian and a healthy Indian always make a difference. And that's why we have companies who've been working on health. As many of us know, looking after our health has been challenging. But diagnostics from SigTuple and Qure.ai are really helping us solve many of these challenges. So with that promise, Jensen.
That's awesome.
Healthy.
Yep.
And well-spoken.
Healthy and well-spoken. And the important thing here is that it takes an entire ecosystem of partners to help the world apply AI, to help their employees be more productive. Whereas India's IT industry was focused on the back office, the operation of software, the delivery of software, the production of software, the next generation of IT is going to be about the production and delivery of AI. And as you know, the delivery of software, coding, and the delivery of AI are fundamentally different, but AI is dramatically more impactful, insanely more exciting. And the ability of this industry, of India, to help every single company around the world enjoy the benefit of agents, to enjoy the benefits of AI across all of their different functions, and to deploy it at scale? I don't know anybody else who could do it. This is just an extraordinary opportunity.
Our job is to help you build AI and deploy AI. Your job is to take these libraries and capabilities, combine them with your incredible IT and software capabilities, create agents, and help every single company benefit from them. So that's the first part. The second part is this: what happens after agents? Remember, every company has employees, but for most companies the goal is to build something, to produce something, to make something. Those things could be factories, warehouses, cars and planes and trains and ships, all kinds of things, computers and servers, the servers that NVIDIA builds, phones. Most companies in the largest industries ultimately produce something physical, though sometimes the product is a service, as in the IT industry.
But many of your customers are in the business of producing something physical. That next generation of AI needs to understand the physical world. We call it physical AI. In order to create physical AI, we need three computers, and we created three computers to do so. First, the DGX computer, for which Blackwell, for example, is a reference design and architecture, for training the model. That model then needs a place to be refined, a place to learn, a place to practice its physical capability, its robotics capability. We call that Omniverse: a virtual world that obeys the laws of physics, where robots can learn to be robots. And when you're done with the training, that AI model can then run in the actual robotic system.
That robotic system could be a car, a robot, an autonomous mobile robot, a picking arm, or an entire factory or warehouse that is robotic. That third computer we call AGX, Jetson AGX. So: DGX for training, Omniverse for the digital twin, and AGX for the robot. Now, here in India, we've got a really great ecosystem that is working with us to take this infrastructure, this ecosystem of capabilities, and help the world build physical AI systems.
You know what I've really loved? Addverb is one of the largest robotics companies. They build robots, and more importantly, they put them in a digital twin where the optimization takes place. They teach the robot with all the inputs that come from the physical world. And not only is that work taking place; our system integrators, Accenture, TCS, and Tech Mahindra, are taking that knowledge not only into India, but also outside India. Do it in India for India, and do it from India for the globe.
Start locally, grow globally.
Right.
Right. That's fantastic. Okay.
Thank you. Thank you.
Thank you very much, Vishal.
Thank you.
Thank you. We made a short video to help you put together everything I just said. Run it, please.
For 60 years, Software 1.0, code written by programmers, ran on general-purpose CPUs. Then Software 2.0 arrived: machine learning, neural networks running on GPUs. This led to the Big Bang of generative AI, models that learn and can generate anything. Today, generative AI is revolutionizing $100 trillion worth of industries. Knowledge enterprises use agentic AI to automate digital work.
Hello, I'm James, a digital human.
Industrial enterprises use physical AI to automate physical work. Physical AI embodies robots: self-driving cars that safely navigate the real world, manipulators that perform complex industrial tasks, and humanoid robots that work collaboratively alongside us. Plants and factories will be embodied by physical AI, capable of monitoring and adjusting their operations or speaking to us. NVIDIA builds three computers to enable developers to create physical AI. The models are first trained on DGX. Then the AI is fine-tuned and tested using reinforcement learning and physics feedback in Omniverse, and the trained AI runs on NVIDIA Jetson AGX robotics computers. NVIDIA Omniverse is a physics-based operating system for physical AI simulation. Robots learn and fine-tune their skills in Isaac Lab, a robot gym built on Omniverse. This is just one robot. Future factories will orchestrate teams of robots and monitor entire operations through thousands of sensors.
For factory digital twins, developers use an Omniverse blueprint called Mega. With Mega, the factory digital twin is populated with virtual robots and their AI models, the robots' brains. The robots execute a task by perceiving their environment, reasoning, planning their next motion, and finally converting it into actions. These actions are simulated by the world simulator in Omniverse, and the results are perceived by the robot brains through Omniverse sensor simulation. Based on the sensor simulations, the robot brains decide the next action, and the loop continues, while Mega precisely tracks the state and position of everything in the factory digital twin. This software-in-the-loop testing brings software-defined processes to physical spaces and embodiments, letting industrial enterprises simulate and validate changes in an Omniverse digital twin before deploying them to the physical world, avoiding massive risk and cost.
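The loop the video describes maps onto a familiar software pattern. A schematic sketch of my own, with Python stand-ins rather than Omniverse or Mega APIs:

```python
# Schematic of the software-in-the-loop cycle: simulate -> sense -> decide
# -> act. WorldSimulator stands in for Omniverse and RobotBrain for a
# robot's AI model; these toy classes are illustrative, not real APIs.
class WorldSimulator:
    def __init__(self):
        self.state = {"robot_pos": 0, "goal_pos": 5}

    def sensor_readings(self):
        return dict(self.state)            # simulated sensor output

    def apply(self, action):
        self.state["robot_pos"] += action  # toy physics update

class RobotBrain:
    def decide(self, obs):
        # Perceive -> reason -> plan the next motion (toy policy).
        return 1 if obs["robot_pos"] < obs["goal_pos"] else 0

world, brain = WorldSimulator(), RobotBrain()
for step in range(10):
    obs = world.sensor_readings()          # sensor simulation
    action = brain.decide(obs)             # the robot brain decides
    if action == 0:
        print(f"goal reached at step {step}")
        break
    world.apply(action)                    # the world simulator executes it
```

Validating the brain against the simulator this way, before touching hardware, is what lets changes be tested without real-world risk.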
The era of physical AI is here, transforming the world's heavy industries and robotics.