[Foreign language], welcome to the stage, NVIDIA Founder and CEO, Jensen Huang.
Hello, Taiwan. [Foreign language]
It's great to be here. My parents are also in the audience. [Foreign language] They're up there. NVIDIA has been coming to Taiwan for over 30 years. This is the home of many of our treasured partners and dear friends. Over the years, you have seen NVIDIA grow up and seen us accomplish many exciting things and have been partners with me all along the way. Today, we're going to talk about where we are in the industry, where we're going to go, announce some new products, exciting new products and surprising products that open new markets for us, create new markets, new growth. We're going to talk about great partners and how we're going to develop this ecosystem together. As you know, we are at the epicenter of the computer ecosystem, one of the most important industries of the world.
It stands to reason that when new markets have to be created, we have to create them starting here, at the center of the computer ecosystem. I have some surprises for you, things that you probably wouldn't have guessed. Of course, I promise I'll talk about AI. We'll talk about robotics. The NVIDIA story is the reinvention of the computer industry. In fact, the NVIDIA story is also the reinvention of our company. As I said, I've been coming here for 30 years. Many of you have been through many of my keynotes, some of you, all of them. As you reflect on the conversations, the things we talked about over the last 30 years, you can see how dramatically things have changed. We started out as a chip company with a goal of creating a new computing platform.
In 2006, we introduced CUDA, which has revolutionized how computing is done. In 2016, 10 years later, we realized that a new computing approach had arrived. This new computing approach requires a reinvention of every single layer of the technology stack. The processor is new, the software stack is new, and it stands to reason the system is new. We invented a new system, a new system that, on the day I announced it at GTC 2016, no one understood what I was talking about, and nobody gave me a PO. That system was called DGX-1. I donated the first DGX-1 to a nonprofit company called OpenAI. It started the AI revolution. Years later, we realized that, in fact, this new way of doing software, which is now called artificial intelligence, is unlike traditional ways of running software.
Whereas many applications ran on a few processors in a large data center (we call that hyperscale), this new type of application requires many processors working together, serving queries for millions of people. That data center would be architected fundamentally differently. We realized there were two types of networks: one for north-south, because you still have to control the storage, you still have to have a control plane, you still have to connect to the outside. The most important network was going to be east-west: the computers talking to each other to try to solve a problem. We recognized the best networking company in east-west traffic for high-performance computing, large-scale distributed processing, a company that was very dear to our company and very close to our heart, a company called Mellanox, and we bought them five years ago, in 2019.
We converted an entire data center into one computing unit. You heard me say before, the modern computer is an entire data center. The data center is a unit of computing, no longer just a PC, no longer just a server. The entire data center is running one job. The operating system would change. NVIDIA's data center journey is now very well known. Over the last three years, you've seen some of the ideas that we're shaping and how we are starting to see our company differently. No company in history, surely no technology company in history, has ever revealed a roadmap for five years at a time. No one would tell you what is coming next. They keep it as a secret, extremely confidential. However, we realized that NVIDIA is not a technology company only anymore. In fact, we are an essential infrastructure company.
How can you plan your infrastructure, your land, your shell, your power, your electricity, all of the necessary financing, all over the world? How would you possibly do that if you didn't understand what I was going to make? We described our company's roadmap in fair detail, enough detail that everybody in the world can go off and start building data centers. We realize now we are an AI infrastructure company, an infrastructure company that's essential all around the world. Every region, every industry, every company will build these infrastructures. What are these infrastructures? These infrastructures, in fact, are not unlike those of the first industrial revolution, when GE, Westinghouse, and Siemens realized that there was a new type of technology called electricity. New infrastructure had to be built all around the world. These infrastructures became an essential part of social infrastructure.
That infrastructure is now called electricity. Years later, during our generation, we realized there was a new type of infrastructure. This new infrastructure was very conceptual, very hard to understand. This infrastructure is called information. This information infrastructure, the first time it was described, made no sense to anybody. But we now realize it is the internet. The internet is everywhere, and everything is connected to it. There is a new infrastructure now. This new infrastructure is built on top of the first two. This new infrastructure is an infrastructure of intelligence. I know that right now, when we say there is an intelligence infrastructure, it makes no sense. I promise you, in 10 years' time, you will look back and realize that AI has been integrated into everything. In fact, we need AI everywhere.
Every region, every industry, every country, every company all needs AI. AI is now part of infrastructure. This infrastructure, just like the internet, just like electricity, needs factories. These factories are essentially what we build today. They're not data centers of the past, a $1 trillion industry providing information and storage, supporting all of our ERP systems and our employees. That's a data center, a data center of the past. This is similar in the sense that it came from the same industry. It came from all of us. It's going to emerge as something completely different, completely separated from the world's data center. These AI data centers, if you will, are improperly described. They are, in fact, AI factories. You apply energy to it, and it produces something incredibly valuable.
These things are called tokens, to the point where companies are starting to talk about how many tokens they produced last quarter and how many tokens they produced last month. Very soon, we'll be talking about how many tokens we produce every hour, just as every single factory does. The world has fundamentally changed. We went from a company on the day that we started our company, I was trying to figure out how big our opportunity was in 1993. I came to the conclusion NVIDIA's business opportunity was enormous, $300 million. We're going to be rich. $300 million chip industry to a data center opportunity that represents about a trillion dollars to now an AI factory, an AI infrastructure industry that will be measured in trillions of dollars. This is the exciting future that we're undertaking.
Now, at its core, everything we do is founded on several important technologies. Of course, I talk about accelerated computing a great deal. I talk about AI a great deal. What makes NVIDIA really special is the fusion of these capabilities. Very especially, the algorithms, the libraries, what we call the CUDA-X libraries. We're talking about libraries all the time. In fact, we're the only technology company in the world that talks about libraries nonstop. The reason for that is because libraries are at the core of everything that we do. Libraries are what started it all. I'm going to show you a few new ones today. Before I do that, let me show you a preview of what I'm going to tell you today. Everything you're about to see, everything you're about to see is simulation, science, and artificial intelligence.
Nothing you see here is art. It's all simulation. It just happens to be beautiful. Let's take a look. This is real-time computer graphics I'm standing in front of. This is not a video; this is computer graphics, generated by GeForce. This is a brand new GeForce RTX 5060, and this is from ASUS. My good friend Johnny is in the front row. And this is from MSI. We took this incredible GPU and we shrunk it in here. Does that make any sense? See? This is incredible. This is MSI's new laptop with a 5060 in it. GeForce brought CUDA to the world. Right now, what you're seeing is every single pixel being ray traced. How is it possible that we're able to simulate photons and deliver this kind of frame rate at this resolution? The reason for that is artificial intelligence.
We are only rendering one out of 10 pixels. Every pixel that you see, only one out of 10 is actually computed. The other nine, AI guessed. Does that make any sense? And it's perfect. It's completely perfect. It guessed it perfectly. Of course, the technology is called DLSS, Neural Rendering. It took us many, many years to develop. We started developing it the moment we started working on AI. So it's been a 10-year journey. And the advance in computer graphics has been completely revolutionized by AI. GeForce brought AI to the world. Now AI came back and revolutionized GeForce. Really, really amazing. Ladies and gentlemen, GeForce. You know, when you're CEO, you have many children. And GeForce brought us here. Now all of our keynotes are 90% not GeForce. But it's not because we don't love GeForce.
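The arithmetic behind that "1 in 10" claim can be sketched in a few lines. The resolution and frame rate below are illustrative assumptions for the example, not figures from the talk; only the 1-in-10 ratio comes from the keynote:

```python
# Back-of-envelope for "only 1 out of 10 pixels is actually computed".
# 4K at 240 FPS is an assumed example workload, not a quoted figure.
width, height, fps = 3840, 2160, 240
pixels_per_second = width * height * fps     # total pixels the display shows
rendered = pixels_per_second // 10           # pixels actually shaded by the GPU
inferred = pixels_per_second - rendered      # pixels filled in by the AI model
print(f"shaded: {rendered:,}/s  inferred: {inferred:,}/s")
```

At these assumed settings, the GPU shades roughly 200 million pixels per second while the neural network supplies the other 1.8 billion, which is why the frame rate holds up at full resolution.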
GeForce RTX 50 series just had its most successful launch ever, the fastest launch in our history. PC gaming is now 30 years old. That tells you something about how incredible GeForce is. Let's talk about libraries. At the core, of course, everything starts with CUDA. By making CUDA as performant as possible, as pervasive as possible, so that the install base is all over the world, applications can find a CUDA GPU quite easily. The larger the install base, the more developers want to create libraries. The more libraries, more amazing things are done, better applications, more benefits to users, they buy more computers. The more computers, more CUDA. That feedback path is vitally important. However, accelerated computing is not general-purpose computing. General-purpose computing, write software, everybody writes it in Python or C or C++, and you compile it.
The methodology for general-purpose computing is consistent throughout: write the application, compile the application, run it on a CPU. However, that fundamentally doesn't work in accelerated computing, because if it did, it would just be called a CPU. What would be the point of not just improving the CPU, so you could keep writing the software, compiling the software, and running it on a CPU? The fact that you have to do something different is actually quite sensible. The reason is that so many people have worked on general-purpose computing, with trillions of dollars of innovation behind it. How could it be possible that, all of a sudden, a few widgets inside a chip make computers 50x, 100x faster? That makes no sense. The logic we applied is that we could accelerate applications if we understood more about them.
You can accelerate applications if you create an architecture that is better suited to run, at the speed of light, the 99% of the runtime, even though it's only 5% of the code, which is quite surprising. In most applications, small parts of the code consume most of the runtime. We made that observation. We went after one domain after another. I just showed you computer graphics. We also have numerics. This is cuPyNumeric. NumPy is the most pervasive numerical library. Aerial and Sionna. Aerial is the world's first GPU-accelerated radio signal processing for 5G and 6G. Once we make it software-defined, then we can put AI on top of it. Now we can bring AI to 5G and 6G.
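The observation that 5% of the code can consume 99% of the runtime, and that accelerating that hot part yields enormous overall gains, is Amdahl's law. A minimal sketch of the arithmetic:

```python
def amdahl_speedup(fraction_accelerated: float, factor: float) -> float:
    """Overall speedup when only a fraction of the runtime is accelerated.

    fraction_accelerated: share of total runtime that the accelerator handles
    factor: how much faster that share runs on the accelerator
    """
    return 1.0 / ((1.0 - fraction_accelerated) + fraction_accelerated / factor)

# Accelerate the 99% of runtime (the 5% of code) by 100x:
print(round(amdahl_speedup(0.99, 100), 1))   # ≈ 50.3x overall
```

The unaccelerated 1% of runtime caps the total gain, which is why the claimed 50x to 100x application speedups require the accelerated portion to dominate the runtime so completely.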
Parabricks for genomics analysis, MONAI for medical imaging, Earth-2 for weather prediction, cuQuantum for quantum-classical computer architectures and computer systems, cuEquivariance and cuTENSOR for tensor contraction mathematics. This whole column here consists of all of our deep learning libraries, everything necessary for training as well as inference. This revolutionized computing. It all started with these libraries, not just CUDA, but cuDNN. On top of cuDNN, there was Megatron, then TensorRT-LLM, and now, lately, this brand new operating system for large AI factories, Dynamo. cuDF for data frames, like Spark and SQL; structured data can be accelerated as well. cuML for classical machine learning. Warp, a Pythonic framework for describing CUDA kernels, incredibly successful.
cuOpt for mathematical optimization: things like the traveling salesperson problem, the ability to optimize highly constrained problems with large numbers of variables, like supply chain optimization. This is an incredible success; I'm very excited about cuOpt. cuDSS and cuSPARSE for sparse solvers, used for CAE and CAD, fluid dynamics, finite element analysis, incredibly important for the EDA and CAE industries. Of course, cuLitho, one of the most important libraries, for computational lithography. Mask making could easily take a month, and that mask-making process is extremely computationally intensive. Now, with cuLitho, we can accelerate the computation by 50x to 70x. As a result, this is going to set the stage, open the world, for applying AI to lithography in the future. We have great partners here. TSMC is using cuLitho quite extensively; ASML and Synopsys are excellent partners working with us on cuLitho.
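To make the cuOpt problem class concrete, here is a toy nearest-neighbor heuristic for the traveling-salesperson problem. This is plain Python for illustration only; it has nothing to do with the actual cuOpt API, which solves far larger, GPU-accelerated, constrained versions of such problems:

```python
import math

def nearest_neighbor_tour(cities):
    """Greedy TSP heuristic: always visit the closest unvisited city next.
    A toy stand-in for the constrained routing problems cuOpt targets."""
    unvisited = set(range(1, len(cities)))
    tour = [0]                                   # start at city 0
    while unvisited:
        here = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (2, 0), (2, 1), (0, 1)]        # four depots on a rectangle
print(nearest_neighbor_tour(cities))             # [0, 3, 2, 1]
```

Real supply-chain instances add vehicle capacities, time windows, and thousands of stops, which is what makes them computationally heavy enough to benefit from acceleration.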
The libraries themselves are what make it possible for us, one domain of application after another, one domain of science after another, one domain of physics after another, to accelerate those applications. It also opens up markets for us. We look at particular regions and particular markets and say, "That area could really be important to transform to the new way of doing computing." If general-purpose computing, after all these years, has run its course, why hasn't it run its course in every single industry? One of the most important industries, of course, is telecommunications. Just as the world's cloud data centers have now become software-defined, it stands to reason that telecommunications should also be software-defined. That is the reason why we've now taken some six years to refine and optimize a fully accelerated radio access network, or RAN, that delivers incredible performance.
For data rate per megawatt, or data rate per watt, we are now on par with the state-of-the-art ASICs. Once we could achieve that level of performance and functionality, we could layer AI on top. We have great partners here. You can see SoftBank, T-Mobile, Indosat, and Vodafone are doing trials. Nokia, Samsung, and Kyocera are working with us on the full stack. Fujitsu and Cisco are working on the systems. Now we have the ability to introduce the idea of AI on 5G, or AI on 6G, along with AI on computing. We are doing the same with quantum computing. Quantum computing is still in the noisy intermediate-scale quantum (NISQ) era. However, there are many, many good applications we could already start to do. We are excited about that.
We're working on a quantum-classical, or quantum-GPU, computing platform. We call it CUDA-Q, and we're working with amazing companies around the world. GPUs can be used for pre-processing and post-processing, for error correction, for control. In the future, I predict that all supercomputers will have quantum accelerators, will all have QPUs connected to them. A supercomputer would have QPUs and GPUs and some CPUs. That would be the representation of a modern computer. We're working with a lot of great companies in this area. AI. Twelve years ago, we started with perception: AI models that can understand patterns, recognize speech, recognize images. That was the beginning. For the last five years, we've been talking about generative AI, the ability for AI to not just understand, but to generate. It could generate from text to text.
We use that all the time in ChatGPT. Text to images, text to video, video to text, images to text, almost anything to anything. The really amazing thing about AI is that we've discovered a universal function approximator, a universal translator. It can translate from anything to anything else if we can simply tokenize it, represent the bits of information. Now we have reached a level of AI that's really important. Generative AI gave us one-shot AI. You give it text, and it gives you text back. That was two years ago, when we first engaged with ChatGPT. That was the big, amazing breakthrough. You give it text, and it gives you text back. It predicts the next word, predicts the next paragraph. However, intelligence is much more than just what you've learned from a lot of data you've studied.
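"If we can simply tokenize it" can be made concrete with the simplest possible scheme, a byte-level tokenizer. This is a toy illustration of the idea that any data reducible to a sequence of integer tokens can, in principle, be fed to the same model; production tokenizers (such as BPE) are more sophisticated:

```python
# Toy byte-level "tokenizer": every byte of any data becomes a token id 0-255.
def tokenize(data: bytes) -> list[int]:
    return list(data)

def detokenize(tokens: list[int]) -> bytes:
    return bytes(tokens)

text_tokens = tokenize("hello".encode())
print(text_tokens)                               # [104, 101, 108, 108, 111]
assert detokenize(text_tokens).decode() == "hello"
```

The same two functions would tokenize an image file, an audio clip, or a protein sequence, which is the sense in which one architecture can translate "anything to anything."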
Intelligence includes the ability to reason, to solve problems that you've not seen before, to break them down step by step, to maybe apply some rules and theorems to solve a problem you've never seen, to simulate multiple options and weigh their benefits. Some of the technology you might have heard about: chain of thought, breaking a problem down step by step; tree of thought, coming up with a whole bunch of paths. All of these technologies are leading to the ability for AI to reason. Now, the amazing thing is, once you have the ability to reason and you have the ability to perceive, that is, let's say, multimodal (read PDFs, do search, use tools), you now have agentic AI.
This agentic AI just does something that I've just described all of us do. We're given a goal, we break it down step by step, we reason about what to do, what's the best way to do it, we consider its consequences, and then we start executing the plan. The plan might include doing some research, might include doing some work, using some tools. It might include reaching out to another AI agent to collaborate with it. Agentic AI is basically understand, think, and act. Understand, think, and act is the robotics loop. Agentic AI is basically a robot in a digital form. These are going to be really important in the coming years. We're seeing enormous progress in this area. The next wave beyond that is physical AI, AI that understands the world.
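The understand-think-act loop described here can be sketched in a few lines. The `observe`, `plan`, and `act` callables below are hypothetical stand-ins for illustration, not any real agent framework's API:

```python
# Minimal sketch of the agentic loop: understand, think, act.
def agent_step(goal, observe, plan, act):
    state = observe()              # understand: perceive the environment
    steps = plan(goal, state)      # think: break the goal into steps
    for step in steps:             # act: execute, possibly calling tools
        act(step)

# Toy usage: an "agent" whose world is a list it must fill with 3 items.
world = []
agent_step(
    goal=3,
    observe=lambda: len(world),            # how many items exist so far
    plan=lambda goal, n: range(goal - n),  # remaining work, step by step
    act=lambda step: world.append(step),   # execute one step
)
print(world)   # [0, 1, 2]
```

A real agent would replace `plan` with a reasoning model and `act` with tool calls or requests to other agents, but the loop shape is the same.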
They understand things like inertia, friction, cause and effect, that if I roll a ball and it goes under a car, depending on the speed of the ball, it probably went to the other side of the car, but the ball did not disappear. Object permanence. You might be able to reason that if there's a table in front of you and you have to go to the other side, the best way to do it is not to go right through it. The best way is maybe go around it or underneath it. To be able to reason about these physical things is really essential to the next era of AI. We call that physical AI. In this particular case, you're seeing us simply prompt the AI, and it generates videos to train a self-driving car in different scenarios. I'll show you more of that later.
That's a dog. It said, "Generate me a dog. Generate me one with a bird, with people." It started out with the image on the left. In the phase after that, we take reasoning systems, generative systems, physical AI, and this level of capability goes into a physical embodiment. We call it a robot. If you can imagine prompting an AI to generate a video of reaching out and picking up a bottle, of course, you can imagine telling a robot to reach out and pick up the bottle. The AI capability today has the ability to do those things. That's where we're going in the near future. The computer that we're building to make this possible has properties that are very different from the previous ones.
The revolutionary computer called Hopper came into the world about three years ago, and it revolutionized AI as we know it. It became probably the most popular, most well-known computer in the world. In the last several years, we've been working on a new computer to make it possible for us to do inference time scaling, or basically thinking incredibly fast. Because when you think, you're generating a lot of tokens in your head, if you will. You're generating a lot of thoughts, and you iterate in your brain before you produce the answer. What used to be one-shot AI is now going to be thinking AI, reasoning AI, inference time scaling AI. That is going to take a lot more computation. We created a new system called Grace Blackwell.
Grace Blackwell does several things. It has the ability to scale up. Scale up means turning what is a computer into a giant computer. Scale out means taking a computer and connecting many of them together, letting the work be done across many different computers. Scaling out is easy. Scaling up is incredibly hard. Building larger computers that are beyond the limits of semiconductor physics is insanely hard. That is what Grace Blackwell does. Grace Blackwell broke just about everything. Many of you in the audience are partnering with us to build Grace Blackwell systems. I'm so happy to say that we're in full production. I can also say it was incredibly challenging. Although the Blackwell systems based on HGX have been in full production since the end of last year and have been available since February, we are only now bringing all the Grace Blackwell systems online.
They're coming online all over the place, every single day. It's been available in CoreWeave for several weeks now. It's already being used by many CSPs. You're starting to see it coming up everywhere. Everybody's starting to tweet that Grace Blackwell is in full production. Just as I promised, we increase the performance of our platform every single year, like clockwork. This year, in Q3, we'll upgrade to Grace Blackwell GB300. The GB300 keeps the same architecture, same physical footprint, same electricals and mechanicals, but the chips inside have been upgraded with a new Blackwell chip. It now has 1.5x more inference performance, 1.5x more HBM memory, and 2x more networking. The overall system performance is higher.
Let's take a look at what's inside Grace Blackwell. Grace Blackwell starts with this compute node. This compute node right here is one of the compute nodes. This is what the last generation, the B200, looks like. This is what the B300 looks like. Notice right here in the center: it's 100% liquid-cooled now, but otherwise, externally, it's the same. You can plug it into the same systems and the same chassis. This is the Grace Blackwell GB300 system. It has 1.5x more inference performance. The training performance is about the same, but the inference performance is 1.5x more. This particular system here is 40 petaflops, which is approximately the performance of the Sierra supercomputer in 2018. The Sierra supercomputer had 18,000 Volta GPUs. This one node replaces that entire supercomputer. That's a 4,000x increase in performance in six years. That is extreme Moore's Law.
Remember, I've said before that AI, NVIDIA has been scaling computing by about a million times every 10 years, and we're still on that track. The way to do that is not just to make the chips faster. There's only a limit to how fast you can make chips and how big you can make chips. In the case of Blackwell, we even connected two chips together to make it possible. TSMC worked with us to invent a new CoWoS process called CoWoS-L that made it possible for us to create these giant chips. Still, we want chips way bigger than that. We had to create what is called NVLink. This is the world's fastest switch. This NVLink here, right here, is 7.2 TB per second.
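The two scaling claims above, 4,000x in six years from the Sierra comparison and a million-x per decade, imply nearly the same annual growth rate. A quick sanity check on the talk's own figures:

```python
# Implied annual growth factor from each claim.
annual_from_sierra = 4000 ** (1 / 6)         # 4,000x over 6 years
annual_from_decade = 1_000_000 ** (1 / 10)   # 1,000,000x over 10 years
print(round(annual_from_sierra, 2), round(annual_from_decade, 2))  # 3.98 3.98
```

Both claims work out to roughly 4x per year, so the Sierra comparison is consistent with the stated million-x-per-decade trajectory.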
Nine of these go into that rack, and those nine switches are connected by this miracle. This is quite heavy. That's because I'm quite strong; I made it look light, but this is almost 70 lbs. This is the NVLink spine: two miles of cables, 5,000 structured cables, all coax, impedance-matched, and it connects each of the 72 GPUs to every other one of the 72 GPUs across this network called the NVLink switch. That's 130 TB per second of bandwidth across the NVLink spine. Just to put that in perspective, the peak traffic of the entire internet is 900 Tb per second. Divide that by eight to convert bits to bytes. This moves more traffic than the entire internet.
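The divide-by-eight step converts terabits (the unit the internet figure is quoted in) to terabytes (the unit the spine is quoted in), and the comparison then checks out:

```python
# Units check for the spine-vs-internet comparison; figures from the talk.
internet_peak_tbps = 900                        # peak internet traffic, Tb/s
internet_peak_TBps = internet_peak_tbps / 8     # bits -> bytes
nvlink_spine_TBps = 130                         # NVLink spine, TB/s
print(internet_peak_TBps, nvlink_spine_TBps > internet_peak_TBps)  # 112.5 True
```

So the internet's peak is about 112.5 TB/s, and a single rack's 130 TB/s spine indeed exceeds it.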
One NVLink spine, with nine of these NVLink switches, so that every single GPU can talk to every other GPU at exactly the same time. This is the miracle of GB200. There's a limit to how far you can drive SerDes, and this is as far as any SerDes has ever been driven: from a chip to the switch, out to the spine, to any other switch, to any other chip, all electrical. That limit caused us to put everything in one rack. Now, one rack is 120 kW, which is the reason why everything has to be liquid-cooled. We now have the ability to disaggregate the GPUs out of one motherboard, essentially, across an entire rack. That entire rack is one motherboard. That's the miracle. Completely disaggregated. Now the GPU performance is incredible. The amount of memory is incredible. The networking bandwidth is incredible. Now we can really scale these out.
Once we scale it up, we can scale it out into large systems. Notice that almost everything NVIDIA builds is gigantic. The reason for that is that we're not building data centers and servers. We're building AI factories. This is CoreWeave. This is Oracle Cloud. The power density of each rack is so great that they have to be placed further apart so the power density can be distributed. Really, in the end, we're not building data centers. We're building AI factories. This is the xAI Colossus factory. This is Stargate: 4 million square feet, 1 gigawatt. Just think about this factory. This 1-gigawatt factory is probably going to cost about $60 billion to $80 billion.
Out of that $60 billion to $80 billion, the electronics, the computing part of it, these systems, are $40 billion to $50 billion of it. These are gigantic factory investments. The reason why people build factories is because you know the answer. The more you buy. Say it with me. The more you buy, the more you make. That's what factories do. The technology is so complicated. In fact, just looking at it here, you still cannot get a deep appreciation of the amazing work being done by all of our partners and all of the companies here in the audience in Taiwan. We made you a movie. Take a look.
Blackwell is an engineering marvel. It begins as a blank silicon wafer at TSMC. Hundreds of chip processing and ultraviolet lithography steps build up each of the 200 billion transistors, layer by layer, on a 12-inch wafer. The wafer is scribed into individual Blackwell die, tested and sorted, separating the good die to move forward. The chip-on-wafer-on-substrate (CoWoS) process, done at TSMC, SPIL, and Amkor, attaches 32 Blackwell die and 128 HBM stacks to a custom silicon interposer wafer. Metal interconnect traces are etched directly into it, connecting Blackwell GPUs and HBM stacks into each system-in-package unit, locking everything into place. The assembly is baked, molded, and cured, creating the Blackwell B200 superchip. At KYEC, each Blackwell is stress-tested in ovens at 125 °C and pushed to its limits for several hours. Back at Foxconn, robots work around the clock to pick and place over 10,000 components onto the Grace Blackwell PCB.
Meanwhile, additional components are being prepared at factories across the globe. Custom liquid cooling copper blocks from Cooler Master, AVC, Aorus, and Delta keep the chips at optimal temperatures. At another Foxconn facility, ConnectX-7 SuperNICs are built to enable scale-out communications and BlueField-3 DPUs to offload and accelerate networking, storage, and security tasks. All these parts converge to be carefully integrated into GB200 compute trays. NVLink is the breakthrough high-speed link that NVIDIA invented to connect multiple GPUs and scale up into a massive virtual GPU. The NVLink switch tray is constructed with NVLink switch chips, providing 14.4 TB per second of all-to-all bandwidth. NVLink spines form a custom blind-mated backplane, integrating 5,000 copper cables to deliver 130 terabytes per second of all-to-all bandwidth. This connects all 72 Blackwells or 144 GPU dies into one giant GPU.
From around the world, parts arrive from Foxconn, Wistron, Quanta, Dell, ASUS, Gigabyte, HPE, Supermicro, and other partners to be assembled by skilled technicians into a rack-scale AI supercomputer. In total, 1.2 million components, 2 miles of copper cable, 130 trillion transistors weighing 1,800 kilograms. From the first transistor etched into a wafer to the last bolt fastening the Blackwell rack, every step carries the weight of our partners' dedication, precision, and craft. Blackwell is more than a technological wonder. It is a testament to the marvel of the Taiwan technology ecosystem. We could not be prouder of what we have achieved together. Thank you, Taiwan.
Thank you. That was pretty incredible, right? That was you. That was you. Thank you. Taiwan does not just build supercomputers for the world. Today, I am very happy to announce that we are also building AI for Taiwan. Today, we're announcing that Foxconn, the Taiwanese government, NVIDIA, and TSMC are going to build the first giant AI supercomputer here, for the AI infrastructure and the AI ecosystem of Taiwan. Thank you. Is there anybody who needs an AI computer? Any AI researchers in the audience? Every single student, every researcher, every scientist, every startup, every large established company. TSMC itself already does enormous amounts of AI and scientific research. Foxconn does an enormous amount of work in robotics. I know there are many other companies in the audience, which I'm going to mention in just a second, that are doing robotics research and AI research. Having world-class AI infrastructure here in Taiwan is really important. All of that is so that we could build a very large chip.
NVLink and Blackwell, this generation, made it possible for us to create these incredible systems. Here is one from Pegatron, and QCT, and Wistron, and Wiwynn. This is from Foxconn, and GIGABYTE, and ASUS. You can see the front and the back of it. Its entire goal is to take these Blackwell chips, and you can see how big they are, and turn them into one massive chip. The ability to do that, of course, was made possible by NVLink. Even this understates the complexity of the system architecture, the rich software ecosystem that connects it all together, the entire ecosystem of 150 companies that came together to build this. This architecture, and the entire ecosystem in technology and software and industry, has been the work of three years. This is a massive industrial investment.
Now, we would like to make it possible for anybody, anybody who wants to build data centers. It could be a whole bunch of NVIDIA GB200s or 300s and accelerated computing systems from NVIDIA. It could be somebody else. Today, we're announcing something very special. We're announcing NVIDIA NVLink Fusion. NVLink Fusion is so that you can build semi-custom AI infrastructure, not just semi-custom chips, because those are the good old days. You want to build AI infrastructure. Everybody's AI infrastructure could be a little different. Some of you could have a lot more CPUs, and some of it could have a lot more NVIDIA GPUs, and some of it could be somebody's semi-custom ASICs. Those systems are so insanely hard to build. They're all missing this one incredible ingredient, this incredible ingredient called NVLink.
NVLink is what lets you scale up these semi-custom systems and build really powerful computers. Today, we're announcing NVLink Fusion. NVLink Fusion kind of works like this. This is the NVIDIA platform, 100% NVIDIA. You've got the NVIDIA CPU, the NVIDIA GPU, the NVLink switches, the networking from NVIDIA called Spectrum-X or InfiniBand, the NICs, the network interconnects, the switches, the entire system, the entire infrastructure, built end-to-end. Of course, you can mix and match it if you like. We now make it possible for you to mix and match it even at the compute level. This would be what you would do using your custom ASIC. We have great partners, whom I'll announce in a second, who are working with us to integrate your special TPU or your special ASIC, your special accelerator. It doesn't have to be just a transformer accelerator.
It could be an accelerator of any kind that you would like to integrate into a large-scale-up system. We create an NVLink chiplet. It's basically a switch that abuts right up to your chip. There's IP that'll be available to integrate into your semi-custom ASIC. Once you do that, it fits right into the compute boards that I mentioned, and it fits into this ecosystem of an AI supercomputer that I've shown you. Maybe what you would like is you would like to use your own CPU. You've been building your own CPU for some time, and maybe your CPU has built a very large ecosystem, and you would like to integrate NVIDIA into your ecosystem. Now we make it possible for you to do that. You could do that by building your custom CPU. We provide you with our NVLink chip-to-chip interface into your ASIC.
We connect it with NVLink chiplets, and now it connects and directly abuts into the Blackwell chips and our next-generation Rubin chips. It fits right into this ecosystem. This incredible body of work now becomes flexible and open for everybody to integrate into. Your AI infrastructure could have some NVIDIA, a lot of yours, a lot of yours, and a lot of CPUs, a lot of ASICs, maybe a lot of NVIDIA GPUs as well. In any case, you have the benefit of using the NVLink infrastructure and the NVLink ecosystem, and it's connected perfectly to Spectrum-X. All of that, you know, is industrial strength and has the benefit of an enormous ecosystem of industrial partners who have already made it possible. This is NVLink Fusion. Whether you buy completely from us, that's fantastic.
Nothing gives me more joy than when you buy everything from NVIDIA. I just want you guys to know that. It gives me tremendous joy if you just buy something from NVIDIA. We have some great partners. We have some great partners, Alchip, Astera Labs, Marvell, and one of our great partners, MediaTek, who are going to be partnering with us to work with ASIC or semi-custom customers, hyperscalers who would like to build these things, or CPU vendors who would like to build these things, and they would be their semi-custom ASIC provider. We also have Fujitsu and Qualcomm, who are building their CPUs with NVLink to integrate into our ecosystem. Cadence and Synopsys, we've worked with them to transfer our IP to them so that they can work with all of you and make that IP available to all of your chips.
This ecosystem is incredible. This just highlights the NVLink Fusion ecosystem. Once you work with them, you instantly get integrated into the entire larger NVIDIA ecosystem that makes it possible for you to scale up into these AI supercomputers. Now, let me talk to you about some new product categories. As you know, I've shown you a couple of different computers. However, in order to serve the vast majority of the world, there are still some computers that are missing. I am going to talk about them. Before I do that, I want to give you an update that, in fact, this new computer we call DGX Spark is in full production. DGX Spark will be ready, will be available shortly, probably in a few weeks. We have tremendous partners working with us: Dell, HPE, ASUS, MSI, Gigabyte, Lenovo, incredible partners working with us.
This is the DGX Spark. This is actually a production unit. This is our version. This is our version. However, our partners are building a whole bunch of different versions. This is designed for AI-native developers. If you're a developer, you're a student, you're a researcher, and you don't want to keep opening up the cloud, getting it prepared, and then scrubbing it when you're done, okay? You would just like to have your own, basically your own AI cloud sitting right next to you, and it's always on, always waiting for you. It allows you to do your prototyping, early development. This is what's amazing. This is DGX Spark. It's 1 petaflops and 128 gigabytes. In 2016, when I delivered DGX-1, this is just the bezel. I can't lift the whole computer. It's 300 pounds. This is DGX-1. This is one petaflops and 128 gigabytes.
Of course, this is 128 gigabytes of HBM memory, and this is 128 gigabytes of LPDDR5X. The performance is, in fact, quite similar. What is most important is that the work you can do on this is the same work you could do on that. It is an incredible achievement over just the course of about 10 years. Okay? This is DGX Spark, for anybody who would like to have their own AI supercomputer. I will let all of our partners price it for themselves. One thing for sure, everybody can have one for Christmas. Okay? I have got another computer I want to show you. If that is not enough and you would still like to have your own personal—thank you, Jeanine. This is Jeanine Paul, ladies and gentlemen. If that one is not big enough for you, here is one.
This is another deskside. This is also going to be available from Dell and HPE, ASUS, Gigabyte, MSI, Lenovo. It'll be available from BOXX, from Lambda, amazing workstation companies. This is going to be your own personal DGX supercomputer. This computer is the most performance you can possibly get out of a wall socket. You could put this in your kitchen, but just barely. If you put this in your kitchen and then somebody runs the microwave, I think that's the limit. This is the limit. This is the limit of what you can get out of a wall outlet. This is a DGX Station. The programming model of this and the giant systems that I showed you are the same. That's the amazing thing. One architecture, one architecture.
This has enough capacity and performance to run a one trillion parameter AI model. Remember, Llama is a 70 billion parameter model, Llama 70B. A one trillion parameter model is going to run wonderfully on this machine. That is the DGX Station. Now let's talk about—remember, these systems are—thank you, Jeanine. These systems, these systems are AI native. They're AI-native computers. They're computers built for this new generation of software. It doesn't have to be x86 compatible. It doesn't have to run traditional IT software. It doesn't have to run hypervisors. It doesn't have to run all of the—it doesn't have to run Windows. These computers are designed for modern AI-native applications. Of course, these AI applications could be APIs that can be called upon by the traditional and the classical applications.
In order for us to bring AI into a new world, and this new world is enterprise IT, we have to go back to our roots, and we have to reinvent computing and bring AI into traditional enterprise computing. Now, enterprise computing, as we know, is really three layers. It's not just the computing layer. It's compute, storage, and networking. It's always compute, storage, and networking. Just as AI has changed everything, it stands to reason that AI must have changed compute, storage, and networking for enterprise IT as well. That lower layer has to be completely reinvented, and we're in the process of doing that. I'm going to show you some new products that open up, that unlock, enterprise IT for us. It has to work with the traditional IT industry, and it has to add a new capability.
The new capability for enterprise is agentic AI. Basically, a digital marketing campaign manager, a digital researcher, a digital software engineer, digital customer service, a digital chip designer, a digital supply chain manager, digital versions, AI versions, of all of the work that we used to do. As I mentioned earlier, agentic AI has the ability to reason, use tools, and work with other AIs. In a lot of ways, these are digital workers. They're digital employees. The world has a shortage of labor. We have a shortage of workers. By 2030, the shortage will be about 30 to 50 million workers; it's actually limiting the world's ability to grow. Now we have these digital agents that can work with us. 100% of NVIDIA software engineers now have digital agents working with them to help them, to assist them, in developing better code more productively.
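To make the "reason, use tools" idea concrete, the usual skeleton of such an agent is a reason-act-observe loop. This is only an illustrative sketch, not NVIDIA's implementation; `model_call` and the `lookup` tool are made-up stand-ins for an LLM and a real data source:

```python
# Illustrative sketch of an agent loop: the "model" decides whether to call a
# tool or finish, and each tool result is fed back as an observation.
# model_call and TOOLS are toy stand-ins, not a real LLM or real tools.

def model_call(history):
    # Toy policy: look something up once, then answer from the observation.
    observations = [h[1] for h in history if h[0] == "observation"]
    if not observations:
        return ("tool", "lookup", "GPU inventory")
    return ("final", f"Plan based on: {observations[-1]}")

TOOLS = {
    "lookup": lambda query: f"3 records found for '{query}'",  # toy tool
}

def run_agent(task, max_steps=5):
    history = [("task", task)]
    for _ in range(max_steps):
        action = model_call(history)
        if action[0] == "final":
            return action[1]
        _, tool_name, argument = action
        history.append(("observation", TOOLS[tool_name](argument)))
    return "step budget exhausted"
```

A real agentic system swaps `model_call` for a reasoning model and `TOOLS` for APIs, databases, or other agents, but the control flow keeps this shape.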
In the future, you're going to have this layer. That's our vision. You're going to have a layer of agentic AIs, AI agents. What's going to happen to the world? What's going to happen to enterprise? Whereas we have HR for human workers, we're going to have IT becoming the HR of digital workers. We have to create the necessary tools for today's IT industry, today's IT workers to be able to manage, improve, evaluate a whole family of AI agents that are working inside their company. That's the vision of what we want to build. First, we have to reinvent computing. Remember what I said. Enterprise IT works on x86. It runs traditional software such as hypervisors from VMware or IBM Red Hat or Nutanix. It runs a whole bunch of classical applications.
We need to have computers that do the same thing while adding this new capability, while adding this new capability called agentic AI. Let's take a look at that. Okay, this is the brand new RTX Pro, the RTX Pro Enterprise and Omniverse server. This server can run everything. It has x86, of course. It can run all of the classical hypervisors. It runs Kubernetes in those hypervisors. The way that your IT department wants to manage your network, and how they want to manage your clusters and orchestrate workloads, works exactly the same way. It has the ability to even stream Citrix and other virtual desktops to your PC. Everything that runs in the world today should run here. Omniverse runs on here perfectly. In addition to that, in addition to that, this is the computer for enterprise AI agents.
Those AI agents could be only text. Those AI agents could also be computer graphics. Little TJs, you know, coming to you, little Toy Jensens coming to see you, you know, helping you do work. Those AI agents could be in text form, in graphics form, in video form. All of those workloads work on this system. No matter the modality, every single model that we know of in the world, every application that we know of, should run on this. In fact, even Crysis works on here. Okay? Anybody who's a GeForce gamer? There are no GeForce gamers in the room. Okay, what connects these eight GPUs, the new Blackwell RTX Pro 6000s, is this new motherboard. This new motherboard is actually a switched network. CX8 is a new category of chips.
It's a switch first, networking chip second. It's also the most advanced networking chip in the world. This is now in volume production, CX8. In the CX8, you plug in the GPUs. The CX8s are in the back. PCI Express connected here. CX8 communicates between them. The networking bandwidth is incredibly high at 800 Gb per second. This is the transceiver that plugs into here. Each one of these GPUs has their own networking interface. All of the GPUs are now communicating to all of the other GPUs on east-west traffic. Incredible performance. The surprising part is this, how incredible it is. This is RTX Pro. This is the performance, and I showed you guys at GTC how to think about performance in the world of AI factories. The way to think about this is throughput. This is tokens per second, which is the y-axis.
The more output your factory produces, the more tokens you produce. Okay? Throughput is measured in tokens per second. However, every AI model is not the same. Some AI models require much more reasoning. For those AI models, you need the performance per user to be very high. The tokens per second per user has to be high. This is the problem with factories. A factory either likes to have high throughput or low latency, but it does not like to have both. The challenge is how to create an operating system that allows us to have high throughput, the y-axis, while having very low latency, high interactivity, the x-axis, tokens per second per user. This chart tells you something about the overall performance of the computer, of the overall computers of the factory. Look at all of those different colors.
It represents the different ways you have to configure all of our GPUs to achieve that performance. Sometimes you need pipeline parallelism. Sometimes you want expert parallelism. Sometimes you want to batch. Sometimes you want to do speculative decoding. Sometimes you do not. All of those different types of algorithms have to be applied separately and differently depending on the workload. The Pareto frontier, the outside area, the overall area of that curve, represents the capability of your factory. Okay? Notice something. Hopper is the most famous computer in the world. The Hopper HGX H100, a $225,000 system, is down there. The Blackwell server you just saw, the enterprise server, is 1.7x its performance. This is amazing. This is Llama 70B. This is DeepSeek-R1. DeepSeek-R1 is 4x.
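The tradeoff described here can be sketched numerically: each parallelism configuration yields one (tokens per second per user, total tokens per second) point, and the factory's capability is the Pareto frontier over those points. The configuration names and numbers below are invented purely for illustration, not benchmarks:

```python
# Each made-up configuration is one (tokens/sec/user, total tokens/sec) point.
configs = [
    {"name": "big-batch",       "per_user": 20,  "throughput": 9000},
    {"name": "expert-parallel", "per_user": 60,  "throughput": 7000},
    {"name": "pipeline",        "per_user": 120, "throughput": 4000},
    {"name": "spec-decode",     "per_user": 250, "throughput": 1500},
    {"name": "dominated",       "per_user": 50,  "throughput": 3000},
]

def pareto_frontier(points):
    """Keep points that no other point beats on both axes (higher is better)."""
    frontier = [
        p for p in points
        if not any(
            q["per_user"] >= p["per_user"]
            and q["throughput"] >= p["throughput"]
            and q is not p
            for q in points
        )
    ]
    # Sort by interactivity so the frontier reads left to right like the chart.
    return sorted(frontier, key=lambda p: p["per_user"])
```

The "dominated" configuration is beaten on both axes by "expert-parallel" and drops out; the remaining points trace the curve whose enclosed area stands for the factory's capability.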
Now, the reason for that, of course, is that DeepSeek-R1 has been optimized. DeepSeek-R1 is genuinely a gift to the world's AI industry. The amount of computer science breakthrough is really quite significant and has really opened up a lot of great research for researchers in the United States and around the world. Everywhere I go, DeepSeek-R1 has made a real impact on how people think about AI, how they think about inference, and how they think about reasoning AIs. They've made a great contribution to the industry and to the world. This is DeepSeek-R1. The performance is 4x the state-of-the-art H100. That kind of puts it in perspective. Okay? If you're building enterprise AI, we now have a great server for you. We now have a great system for you. It's a computer you could run anything on.
It's a computer that has incredible performance. Whether it's x86 or AI, all of it runs. Okay? Our RTX Pro server is in volume production across all of our partners in the industry. This is likely the largest go-to-market of any system we have ever taken to market. Thank you very much.
The compute platform is different. The storage platform is different. The reason for that is because humans query structured databases with SQL. AI wants to query unstructured data. It wants semantics. It wants meaning. We have to create a new type of storage platform. This is the NVIDIA AI Data Platform. On top, just as with SQL servers, SQL software, and file storage software from the storage vendors that you work with, there's a layer of very complicated software that goes with storage.
Most storage companies, as you know, are mostly software companies. That software layer is incredibly complicated. On top of a new type of storage system is going to be a new query system we call NVIDIA AI-Q, or IQ. It's really state-of-the-art. It's fantastic. We're working with basically everybody in the storage industry. Your future storage is no longer CPUs sitting on top of a rack of storage. It's going to be GPUs sitting on top of a rack of storage. The reason for that is because you need the system to embed, to find the meaning in the data, in the unstructured data, in the raw data. You have to index, you have to do the search, and you have to do the ranking. That process is very compute-intensive.
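The embed, index, search, and rank pipeline just described can be sketched in miniature. A real AI data platform uses neural embedding models and a GPU-accelerated vector index; here a bag-of-words vector and cosine similarity stand in for both, purely as an illustration of the shape of the pipeline:

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a neural embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Similarity between two sparse vectors; Counter returns 0 for absent terms.
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def build_index(docs):
    # Stand-in for a vector index: precompute one embedding per document.
    return [(doc, embed(doc)) for doc in docs]

def search(index, query, k=2):
    # Embed the query, rank every document by similarity, return the top k.
    query_vec = embed(query)
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

index = build_index([
    "quarterly gpu sales report",
    "employee onboarding handbook",
    "gpu supply chain risks",
])
```

The compute-heavy steps, embedding every document and scoring every query against the whole index, are exactly the parts that motivate putting a GPU node in front of storage.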
Most storage servers in the future will have a GPU computing node in front of them. It's based on the models that we create. Almost everything that I'm about to show you starts with great AI models. We create AI models. We put a lot of energy and technology into post-training open AI models. We post-train these AI models with data that is completely transparent to you. It is safe and secure data, and it's completely okay to use for training. We make that list available for you to see. It's completely transparent. We make the data available to you. We post-train the models. Our post-trained model performance is really incredible. It's a downloadable, open-source reasoning model available right now. The Llama Nemotron reasoning model is the world's best. It's been downloaded tremendously.
We also surround it with a whole bunch of other AI models so that you can do what is called IQ, the retrieval part of it. It's 15x faster than what's available out there, with 50% better query results. These models are all available to you. The IQ blueprint is open-source. We work with the storage industry to integrate these models into their storage stacks, their AI platforms. This is VAST. This is what it looks like. I'm not going to go into it. I just want to give you a texture of the AI models that are integrated into their platform. Let's take a look at what VAST has done.
Agentic AI changes how businesses use data to make decisions. In just three days, VAST built a sales research AI agent using the NVIDIA IQ blueprint and its accelerated AI data platform. Using Nemo Retriever, the platform continuously extracts, embeds, and indexes data for fast semantic search. First, the agent drafts an outline, then taps into CRM systems, multimodal knowledge bases, and internal tools. Finally, it uses Llama Nemotron to turn that outline into a step-by-step sales plan. Sales planning that took days now starts with an AI prompt and ends with a plan in minutes. With VAST's accelerated AI data platform, organizations can create specialized agents for every employee.
Okay, so that's VAST. Dell has a great AI platform, one of the world's leading storage vendors. Hitachi has a great AI platform, AI data platform. IBM is building an AI data platform with NVIDIA Nemo. NetApp is building an AI platform. As you could see, all of these are open to you. If you're building an AI platform with a semantic query AI in front of it, NVIDIA Nemo is the world's best. Okay? That gives you now compute for enterprise and storage for enterprise.
The next part is a new layer of software called AI Ops. Just as supply chain has their Ops and HR has their Ops, in the future, IT will have AI Ops. They will curate data. They'll fine-tune the models. They'll evaluate the models, guardrail the models, secure the models. We have a whole bunch of libraries and models necessary to integrate into the AI Ops ecosystem. We got great partners to help us do that, to take it to market for us. CrowdStrike is working with us. Dataiku is working with us. DataRobot is working with us.
You can see these are all AI operations, creating and fine-tuning models and deploying models for agentic AI in enterprise. You can see NVIDIA libraries and models integrated all over it. This is DataRobot, here is DataStax. This is Elastic. I think I heard somewhere that they've been downloaded 400 billion times. This is Nutanix. This is Red Hat. This is Trend Micro, here in Taiwan. I think I saw Eva earlier. Okay. Hi, Eva. Okay, maybe I have some biases. Okay. That is it. This is how we're going to bring to the world's enterprise IT the ability to add AI to everything that you do. You're not going to rip out everything from the enterprise IT organizations, because companies have to run. We can add AI into it. Now we have systems that are enterprise-ready, with incredible ecosystem partners.
I think I saw Jeff earlier. There's Jeff Clarke, the great Jeff Clarke. He's been coming to Taiwan for as long as I have been coming to Taiwan. He's been a partner of ours for a long time. There's Jeff Clarke. Our ecosystem partners, Dell and others, are going to take these platforms to the world's enterprise IT. Okay. Let's talk about robots. Agent AIs, agentic AIs, AI agents, a lot of different ways to say it. Agents are essentially digital robots. The reason for that is because a robot perceives, understands, and plans. That's essentially what agents do. We would like to build physical robots as well. These physical robots, first, it starts with the ability to learn to be a robot. Learning to be a robot can't be done productively in the physical world.
You have to create a virtual world where the robot can learn how to be a good robot. That virtual world has to obey the laws of physics. Most physics engines don't have the ability to deal, with fidelity, with rigid and soft body simulation. We partnered with Google DeepMind and Disney Research to build Newton, the world's most advanced physics engine. It's going to be open-sourced in July. It's incredible what it can do. It's completely GPU accelerated. It's differentiable, so you can learn from experience. It is incredibly high fidelity. It's super real-time. We can use that Newton engine. It's integrated into MuJoCo. It's integrated into NVIDIA's Isaac Sim, so you can use it irrespective of the simulation environment and framework you use. With that, we can bring these robots to life. Who doesn't want that? I want that.
Can you imagine one of those little ones or a few of them running around the house, chasing your dogs, driving them crazy? Did you see what was happening? It wasn't an animation. It was a simulation. He was slipping and sliding in the sand and the dirt. All of it was simulated. The software of the robot is running in the simulation. It wasn't animated. It was simulated. In the future, we'll take the AI models that we train and we put it into that robot in simulation and let it learn how to be a great robot. We're working on several things to help the robotics industry. You know that we've been working in autonomous systems for some time. Our self-driving car basically has three systems.
There's the system for creating the AI model. That's GB200, GB300. It's going to be used for that, training the AI model. Then you have Omniverse for simulating the AI model. When you're done with that AI model, you put that model, the AI, into the self-driving car. Okay? This year, we're deploying with Mercedes around the world our self-driving car stack, the end-to-end stack. We create all of this. The way we go to market is exactly the same as the way we work everywhere else. We create the entire stack. We open the entire stack. Our partners use whatever they want to use. They could use our computer and not our library. They could use our computer, our library, and also our runtime. However much you would like to use, it's up to you, because there are a lot of different engineering teams and different engineering styles and different engineering capabilities.
We want to make sure that we provide our technology in a way that makes it as easy as possible for everybody to integrate NVIDIA's technology. You know, like I said, I love it if you buy everything from me, but please just buy something from me. Very practical. We are doing exactly the same thing in robotic systems, just like cars. This is our Isaac GR00T platform. The simulation is exactly the same, Omniverse. The compute, the training system, is the same. When you're done with the model, you put it inside this Isaac GR00T platform. The Isaac GR00T platform starts with a brand new computer called Jetson Thor. This has just started production. It is an incredible processor, basically a robotic processor. It goes into self-driving cars, and it goes into humanoid robotic systems.
On top is an operating system we call NVIDIA Isaac. The NVIDIA Isaac operating system is the runtime. It does all of the neural network processing, the sensor processing, the pipelines, all of it, and delivers actuated results. Then on top of it are pre-trained models that we created with an amazing robotics team. All the tools necessary to create this we make available, including the model. Today, we're announcing that Isaac GR00T N1.5 is now open-sourced and open to the world to use. It's been downloaded 6,000 times already. The popularity and the likes and the appreciation from the community are incredible. That's creating the model. We also open the way we created the model. The biggest challenge in robotics, and, well, the biggest challenge in AI overall, is: what is your data strategy?
Your data strategy has to be—that's where a great deal of research and a great deal of technology goes. In the case of robotics, it's human demonstration. Just like we demonstrate to our children, or a coach demonstrates to an athlete, you demonstrate using teleoperation. You demonstrate to the robot how to perform the task. The robot can generalize from that demonstration, because AI can generalize, and we have technology for generalization. From that one demonstration, it can generalize to other techniques. Okay? If you want to teach this robot a whole bunch of skills, how many different teleoperators do you need? It turns out to be a lot. What we decided to do was use AI to amplify the human demonstration systems.
This is essentially going from real to real and using an AI to help us expand, amplify the amount of data that was collected during human demonstration to train an AI model. Let's take a look.
The age of generalist robotics has arrived with breakthroughs in mechatronics, physical AI, and embedded computing. Just in time, as labor shortages limit worldwide industrial growth, a major challenge for robot makers is the lack of large-scale, real, and synthetic data to train models. Human demonstrations aren't scalable, limited by the number of hours in a day. Developers can use NVIDIA Cosmos physical AI World Foundation models to amplify data. GR00T-Dreams is a blueprint built on Cosmos for large-scale synthetic trajectory data generation, a real-to-real data workflow. First, developers fine-tune Cosmos with human demonstrations recorded by teleoperation of a single task in a single environment.
They prompt the model with an image and new instructions to generate dreams, or future world states. Cosmos is a generative model, so developers can prompt using new action words without having to capture new teleop data. Once a large number are generated, Cosmos reasons about and evaluates the quality of each dream, selecting the best for training. These dreams are still just pixels. Robots learn from actions. The GR00T-Dreams blueprint generates 3D action trajectories from the 2D dream videos. These are then used to train the robot model. GR00T-Dreams lets robots learn a huge variety of new actions with minimal manual captures, so a small team of human demonstrators can now do the work of thousands. GR00T-Dreams brings developers another step closer to solving the robot data challenge.
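The generate, evaluate, and select workflow in the narration follows a common pattern that can be sketched abstractly. Everything below is hypothetical: in the real blueprint the "dreams" are Cosmos-generated videos and the quality scores come from the model's own evaluation, not a random number:

```python
import random

def generate_dreams(prompts, n_per_prompt=4, seed=0):
    # Stand-in for Cosmos generation: each "dream" record gets a fake quality
    # score from a seeded RNG instead of a model-evaluated video.
    rng = random.Random(seed)
    return [
        {"prompt": prompt, "variant": i, "quality": rng.random()}
        for prompt in prompts
        for i in range(n_per_prompt)
    ]

def select_for_training(dreams, keep_fraction=0.5):
    # Rank all dreams by quality and keep only the best fraction for training.
    ranked = sorted(dreams, key=lambda d: d["quality"], reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

dreams = generate_dreams(["place the cup on the shelf", "hand the cup to a person"])
training_set = select_for_training(dreams)
```

The point of the pattern is amplification: one seed demonstration fans out into many candidate variations, and only the ones that pass evaluation feed the training set.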
Is that great? In order for robotics to happen, you need AI. In order to teach the AI, you need AI. This is really the great thing about the era of agents where we need a large amount of synthetic data generation, robotics, a large amount of synthetic data generation, and skill learning called fine-tuning, which is a lot of reinforcement learning and an enormous amount of compute. This is a whole era where the training of these AI, the development of these AI, as well as the running of the AI, needs an enormous amount of compute. As I mentioned earlier, the world has a severe shortage of labor. The reason why humanoid robotics is so important is because it is the only form of robot that can be deployed almost anywhere brownfield. It does not have to be greenfield. It could fit into the world we created.
It could do the tasks that we made for ourselves. We engineered the world for ourselves, and now we can create a robot that fits into that world to help us. Now, the amazing thing about humanoid robotics is not just the fact that, if it worked, it would be quite versatile. It is also likely the only robot that will actually work. The reason for that is because technology needs scale. Most of the robotic systems we've had so far are too low volume. Those low-volume systems will never achieve the technology scale to get that flywheel going far enough, fast enough, that we're willing to dedicate enough technology to making them better. But the humanoid robot is likely to be the next multi-trillion dollar industry. The technology innovation is incredibly fast, and the consumption of computing and data centers is enormous.
This is one of those applications that needs three computers. One computer is an AI for learning. One computer is a simulation engine where the AI can learn how to be a robot in a virtual environment, and then also the deployment of it. Everything that moves will be robotic. As we put these robots into the factories, remember, the factories are also robotic. Today's factories are so incredibly complex. This is Delta's manufacturing line, and they're getting it ready for a robotic future. It is already robotics and software defined. Now, in the future, there will be robots working in it. In order for us to create robots and design robots that operate as a fleet, as a team, working together in a factory that is also robotic, we have to give it Omniverse to learn how to work together.
That digital twin—you now have a digital twin of the robot. You have a digital twin of all of the equipment. You're going to have a digital twin of the factory. Those nested digital twins are going to be part of what Omniverse is able to do. This is Delta's digital twin. This is Wiwynn's digital twin. Now, while you're looking at this, if you don't look too closely, you'd think these are, in fact, photographs. These are all digital twins. They're all simulations. They just look beautiful. The images just look beautiful, but they're all digital twins. This is Pegatron's digital twin. This is Foxconn's digital twin. This is Gigabyte's digital twin. This is Quanta's. This is Wistron's. TSMC is building a digital twin of their next fab. As we speak, there are $5 trillion of plants being planned around the world.
Over the next three years, $5 trillion of new plants. Because the world is reshaping, because reindustrialization is moving around the world, new plants are being built everywhere. This is an enormous opportunity for us to make sure that they build them well, cost-effectively, and on time. Putting everything into a digital twin is a great first step in preparing for a robotic future. In fact, that $5 trillion does not include a new type of factory that we are building. Even our own factories we put into a digital twin. This is the NVIDIA AI factory in a digital twin. Kaohsiung is a digital twin. They made Kaohsiung a digital twin. There are already hundreds of thousands of buildings, millions of miles of roads. Yes, Kaohsiung is a digital twin. Let's take a look at all of this.
Taiwan is pioneering software-defined manufacturing. TSMC, Foxconn, Wistron, Pegatron, Delta Electronics, Quanta, Wiwynn, and Gigabyte are developing digital twins on NVIDIA Omniverse for every step of the manufacturing process. TSMC, with Meta AI, generates 3D layouts of an entire fab from 2D CAD and develops AI tools on Omniverse that can simulate and optimize intricate piping systems across multiple floors, saving months of time. Quanta, Wistron, and Pegatron plan new facilities and production lines virtually prior to physical construction, saving millions in costs by reducing downtime. Pegatron simulates solder paste dispensing, reducing production defects. Quanta uses Siemens Teamcenter X with Omniverse to analyze and plan multi-step processes. Foxconn, Wistron, and Quanta simulate power and cooling efficiency of test data centers with Cadence Reality digital twin.
To develop physical AI-enabled robots, each company uses its digital twin as a robot gym to develop, train, test, and simulate robots, whether manipulators, AMRs, humanoids, or vision AI agents as they perform their tasks or work together as a diverse fleet. When connected to the physical twin with IoT, each digital twin becomes a real-time interactive dashboard. Pegatron uses NVIDIA Metropolis to build AI agents who help employees learn complex techniques. Taiwan is even bringing digital twins to its cities. Linkervision and the city of Kaohsiung use a digital twin to simulate the effects of unpredictable scenarios and build agents that monitor city camera streams, delivering instant alerts to first responders. The age of industrial AI is here, pioneered by the technology leaders of Taiwan, powered by Omniverse.
My entire keynote is your work. It is so excellent. It stands to reason that Taiwan is at the center of the most advanced industry, the epicenter where AI and robotics are going to come from. It stands to reason that this is an extraordinary opportunity for Taiwan. This is also the largest electronics manufacturing region in the world. It stands to reason that AI and robotics will transform everything that we do. It is really quite extraordinary that, for one of the first times in history, the work you do has revolutionized every industry, and now it is going to come back and revolutionize yours. At the beginning, I said that GeForce brought AI to the world, and then AI came back and transformed GeForce. You brought AI to the world. AI will now come back and transform everything that you do. It has been a great pleasure working with all of you. Thank you.
I have a new product. I announced several products already today, but I have one more new product to announce. We've been building it out in the space dock for some time, and I think it's time for us to reveal one of the largest products we've ever built. It's parked outside, waiting for us. Let's take a look. NVIDIA Constellation. NVIDIA Constellation. As you know, we have been growing, all of our partnerships with you have been growing, and the number of engineers we have here in Taiwan has been growing. We are growing beyond the limits of our current office, so I'm going to build them a brand new NVIDIA Taiwan office, and it's called NVIDIA Constellation. We've also been selecting the sites, and all of the mayors of all the different cities have been very kind to us. I think we got some nice deals. I'm not sure. It seems quite expensive.
Prime real estate is prime real estate. Today, I'm very pleased to announce that NVIDIA Constellation will be at Beitou-Shilin. We have negotiated the transfer of the lease from its current owners. However, I understand that in order for the mayor to approve that lease, he wanted to know whether the people of Taipei approve of us building a large, beautiful NVIDIA Constellation here. Do you? He also asked for you to call him. I'm sure you know his number. Everybody call him right away and tell him you think it's a great idea. This is going to be NVIDIA Constellation. We're going to build it, and we're going to start building as soon as we can. We need the office space. NVIDIA Constellation, Beitou-Shilin. Very exciting. Okay.
I want to thank all of you. I want to thank all of you for your partnership over the years. We are at a once-in-a-lifetime opportunity. It is no overstatement to say that the opportunity ahead of us is extraordinary. For the very first time in all of our time together, not only are we creating the next generation of IT, which we've done several times, from PC to internet to cloud to mobile cloud, this time we are, in fact, creating a whole new industry. This whole new industry is going to open up giant opportunities ahead of us.
I look forward to partnering with all of you on building AI factories, agents for enterprises, and robots, with all of your amazing companies building the ecosystem with us around one architecture. I want to thank all of you for coming today. Have a great Computex, everybody. [Foreign language] Thank you. Thank you for coming. Thank you.