Good morning, everyone. I'm Mylene Mangalindan with NVIDIA Corporate Communications. Thank you for joining us to discuss the press release we issued today regarding a strategic partnership between NVIDIA and Synopsys to revolutionize engineering and design. With me on the call today are Jensen Huang, Founder and CEO of NVIDIA, and Sassine Ghazi, President and CEO of Synopsys. At this time, all participants are in listen-only mode. After prepared remarks, we will conduct a Q&A session related to the partnership announcement we made this morning. As a reminder, this call is being recorded. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without prior written consent. During this call, NVIDIA and Synopsys may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties.
For discussion of factors that could affect their business, please refer to the disclosure in NVIDIA and Synopsys' most recent Forms 10-K and 10-Q, and the reports that they may file on Form 8-K with the Securities and Exchange Commission. With that, let me turn the call over to Sassine.
Good morning, everyone. Jensen, it's great to be here with you on this special day as we're announcing the expansion of our relationship. Our relationship with NVIDIA goes back decades.
Since the founding of our company.
Since the founding of NVIDIA. What I'm most excited about here today is truly revolutionizing how engineering is done across multiple industries. At its core, what we're announcing today is bringing together Synopsys' engineering software and domain expertise with NVIDIA's accelerated computing and AI technology to transform how engineering is done. I often refer to it as how to re-engineer engineering in this era of pervasive intelligence. NVIDIA accelerated computing sparked an AI revolution. Today, most of our experience with AI is through software on screen. As AI expands into the physical world, or physical AI, the engineering complexity of designing such systems is massive because you are dealing with multiple engineering domains that need to come together at the system level in order to make sure it's right the first time.
We're talking about electronics, electrical, mechanical, structural, thermal, connecting to a bunch of sensors, reading the physical world, and being able to prototype, design, simulate, and make sure you're doing it in a cost-effective way, on time. That complexity of system design cannot be handled in a practical way without acceleration at every level of the stack: at the computation level, GPU acceleration, where we'll be able to achieve large factors of speed-up, plus AI capabilities that change the workflow, all the way up to system-level modeling with the digital twin and what becomes possible when you create and prototype these systems virtually before you build a physical prototype.
We're combining what NVIDIA is bringing with what Synopsys offers: our leadership position in EDA and IP, which has been essential to taming complexity over the last several decades, from the semiconductor level up to system-level simulation and analysis. The silicon-to-system engineering solution we're bringing, running on accelerated compute and the NVIDIA stack, is something we're very excited about. The other part that's often understated is the go-to-market and customer reach. With the Ansys acquisition, we broadened our customer base to engineering teams across nearly every industry. Combine that with the technology and a global network of thousands of direct sellers and channel partners to drive adoption, and the opportunity is significant.
To summarize, we will integrate the strength of Synopsys' unmatched engineering solutions with NVIDIA's leadership in accelerated computing and AI to help R&D teams design, simulate, and verify these intelligent products with greater precision and speed, at lower cost. Together, we aim to unlock new market opportunities. Let's roll a quick video, and then over to Jensen.
That's incredible. Pretty amazing stuff.
It's cool.
The incredible work that we do together. Thank you, Sassine. It's great to be with you and to announce our partnership. We're at a major inflection point in computing for design and engineering. This is one of the most compute-intensive industries in the world. In fact, EDA was the killer app of the workstation industry. It drove the generation of computing before us. For the last 30 or 40 years, it has supported an enormous industry of general-purpose computing and CPUs. As you know, NVIDIA pioneered the CUDA-accelerated and AI computing that is now revolutionizing every industry. We're excited to announce that we're accelerating that for EDA; for SDA, system design automation; for computer-aided engineering, many of the things that you saw just a second ago; and of course, for computer-aided drug discovery, the next frontier. Our multi-year partnership spans NVIDIA CUDA acceleration, agentic and physical AI, and Omniverse digital twins.
These are all things I've been working on for coming up on a decade. Finally, we've reached a level of maturity and capability where we are able to revolutionize the entire design and engineering industry. As the slide shows, the order-of-magnitude speed-up has unlocked the opportunity to do physically accurate digital twin simulations at a scale never before possible. The speed-up of NVIDIA's GPU-accelerated computing has made it possible over the last decade to shift the way scientific computing is done. Consider the mix of CPU to GPU computing in a supercomputing data center for scientific and physical simulations, which is also the foundation of the work being done here in the EDA, SDA, and CAE industries. In 2016, it was 90% CPUs and 10% GPU-accelerated. This year, that entire mix has flipped.
Over the course of the last decade, we've shifted to 90% accelerated computing and 10% general-purpose computing. The same shift is going to happen in this industry. The order-of-magnitude speed-up is going to unlock opportunities that have never been possible before. The performance gains are remarkable. You can take a look at some of the sampling here, from computational lithography to logic simulation to circuit simulation to fluid dynamics and AI physics, using AI to emulate first-principles physics simulations. Across the core engineering workloads I just mentioned, we're seeing speed-ups from 10x to over 1,000x. Basically, what this means is that something that would take weeks could now happen in hours.
It is also very important that we can now scale the simulation from offline to real-time, or just from a part of a system to the entire system, or from just a little tiny part of a simulation, maybe to an entire factory or city. We are extending this acceleration next year to place and route, structural mechanics, electromagnetics, and thermal simulation. Basically, as we think about where we are in the journey of doing product design, from designing the silicon to the systems to the system of systems, we are in the future going to do all of this inside the computer.
With accelerated computing, we can now essentially have a digital twin of the final product that we want to build, all living inside the computer, so that we can explore the design space and make the product as perfect as we can before we even make the first version of its physical embodiment. All of this runs everywhere, across major cloud providers and OEM systems. This is one of the unique properties of NVIDIA's installed base. We're in every single cloud. We're in every single computer company. We're available on-prem, at the edge, and in supercomputing centers. Just about everywhere there is computing, we can now run NVIDIA. As a result, because of our installed base and our reach, Synopsys can also run everywhere. Engineering teams of any size, anywhere in the world, can now have the benefit of NVIDIA's accelerated computing and AI computing from day one.
Our partnership will open new market opportunities for both of our companies. As I mentioned earlier, this is one of the most compute-intensive industries in the world. It has not been served, not been addressed, by accelerated computing until now. We are accelerating that process with our deep partnership. Together, we will address nearly every industry where scientists are inventing new technologies, engineers are creating new products, and factories are making them. Synopsys is expanding its opportunity from chips to nearly every industry. NVIDIA's computing has a new domain of applications to accelerate. We are super excited about that. Sassine, this is a great partnership. We have been partners for, well, all 33 years of NVIDIA's life. I have many stories to tell when we have time for everybody. This is an exciting moment for both of our companies. We are revolutionizing the entire product design and engineering world.
It's a huge expansion of market opportunities for Synopsys. It's a huge expansion opportunity for NVIDIA. For both of us, this is a non-exclusive relationship. The reason for that is because, of course, everything that we do here is incredibly exciting for both of our companies, but it's really going to revolutionize the entire space. It's going to be a giant growth opportunity for all of us and all of our partners. Exactly.
Should we take some questions?
Thank you. That concludes our prepared remarks. We'll now open the call for questions. Your first question is from Tae Kim. Tae, go ahead and unmute and ask your question. Thank you.
Hey, guys. Hey, Jensen. Hey, Sassine. Good to see you guys again. Coding is the first vertical, I think, where we're seeing tangible, huge speed-ups and this mass productivity increase. Cursor talks about this University of Chicago study where all their clients are seeing 40% gains in productivity. I saw that you name-checked auto, industrial, and aerospace R&D. When can we expect to see those kind of big step-up productivity gains using the AI and GPUs in the R&D of these other verticals?
NVIDIA is a huge customer of Cursor. 100% of NVIDIA's software engineers, chip design engineers, every single engineer is now augmented by AI. Now, of course, Cursor is generative AI with text. The work that we're doing here has to obey the laws of physics. Teaching an AI how to program a computer, basically to communicate with a computer and tell it what to do, is one thing. Creating software that accelerates physics, and having it be physically based and physically accurate so that we can design products spanning from chips to systems to systems of systems, all the way out to factories and robotics, requires a whole new level of computation. This is really about the intersection between computing and the physical world. If you will, this is much more akin to scientific computing, physics simulation, and robotics.
That field of AI is quite new. We have been working on this for coming up on a decade. The combination of the work that you have seen me do over the last decade, from CUDA and all of the software stacks associated with it, for example cuLitho, cuDSS, cuFFT, cuBLAS, and all of the libraries that sit on top of CUDA, to NVIDIA's pioneering physical AI work, to Omniverse, our digital twin platform: all of these libraries are now going to be integrated into Synopsys through our partnership. What is really exciting is that Sassine has pivoted and marshaled resources across the entirety of Synopsys to go after this opportunity. We are so excited and so pleased by the partnership. We are going to make a $2 billion investment in Synopsys.
Overall, this is going to be a gigantic growth opportunity for the industry.
Yeah. Maybe to add a little bit more color: we started redesigning some of our products about seven years ago on NVIDIA GPUs using the CUDA layer. In a number of cases, we've seen a significant speed-up. When you talk about a 10x, 15x, or 20x speed-up on work that may take two or three weeks, and you can bring it down to hours, customers will adopt it, because the bottleneck of designing these complex chips or complex systems is your ability to verify them and the computation required to do so. The worst thing you can do is assume that you're ready to launch a product, and it doesn't work as intended. That costs hundreds of millions of dollars. You spend a lot of energy in the design and simulation phase.
We have a number of products already in use at customers. It's still very early stages in terms of a broad adoption. That's why we refer to an expanded opportunity for both companies.
Thank you. Your next question comes from Ian Kutrus. Ian, feel free to ask your question.
Hey, Jensen. Hey, Sassine. Congrats on the news. Thanks for taking my question. Jensen, you're often talking about shrinking that go-to-market time across industries for other companies using EDA and multiphysics and accelerated compute. In this partnership, is there something materially new with this enhanced collaboration that changes how fast NVIDIA itself can bring silicon and platforms to market beyond what you already do today? For Sassine, is Jensen's answer going to be universal?
First of all, GPU adoption in the world of engineering is quite low. The world today has hundreds of millions of CPU cores, tens of millions of general-purpose computing systems, running EDA tools. We do that here at NVIDIA. In fact, our first supercomputer was not for AI. It was for running EDA, so that we could design our chips perfectly, so that we could have the speed of innovation and waste as little money as possible when we have to do redesigns. The best way to do things cost-effectively is to do them perfectly the first time. This industry has been growing with Moore's Law for 40 years. Finally, as you know, Moore's Law has really reached its limit. We need to give it a new way of doing computing.
This is where NVIDIA comes in. What's really exciting about this partnership between us and Synopsys is that it's broad and deep: using CUDA to accelerate the software, using physical AI to emulate physics and expand speed and scale, and connecting into digital twins on Omniverse. It's broad, and it's deep at scale. We're going to accelerate the time to market. We have pretty significant teams assigned to each other to accelerate all of these software tools and create the new products that Synopsys can take to market. The time is really now. I think the market opportunity expands from Synopsys and the EDA industry addressing a several-hundred-billion-dollar chip industry to addressing a multi-trillion-dollar every-product industry. In the future, every product will be designed in digital twins.
To answer the second part of the question: any company with engineering R&D that is designing the next sophisticated intelligent system needs the software stack we deliver. In order to build it effectively, deliver it on time and on cost, and tame that complexity, they will be a target customer, and they will welcome that speed-up and the ability to design those systems. What we're building is not unique to NVIDIA. Every company that is building either silicon or systems will welcome that speed-up and the solution we're collaborating on.
Yeah. This partnership essentially enables this industry to address the entire R&D budget of the whole world's GDP. That's a pretty big deal. Everything that gets designed and built will be done first in a digital twin. I said earlier that in 2016, 90% of the world's scientific supercomputing, running physical simulations, biological simulations, and such, was general-purpose, on CPUs. Today, it has flipped completely. CPU-only general-purpose computing in supercomputers is only 10%. NVIDIA accelerated computing is now 90% of the world's physical science simulation computing. This is going to happen to the EDA industry as well. In addition, of course, there's the expansion of the TAM, the expansion of market opportunity, because of the work that we're doing.
Your next question comes from Stephen Nellis.
Hey, thanks for taking my question. I have one complicated question and one simple question. The complicated question is, in many of these physical simulations and things like engineering on critical components and aircraft and whatnot, there does still need to be a full double precision sort of simulation done at some point. How does this address that bottleneck of still having to do that at some point, even if you can do more iterations on the design first, but you still have to verify at the very end at double precision? That one's for Jensen. How much of a bottleneck is that still? The simple one for Sassine is, how much of this $2 billion is going to go toward purchase of GPUs or GPU cloud computing services to get your software ready to do all this?
Simulation, and the evolution of simulation into co-simulation with emulation, is multi-resolution. It's no longer just FP64. Of course, all of our chips support FP64. We also support FP32, FP16, and all of the tensor processing configurations that sit at the intersection of all that. NVIDIA's architecture is incredibly good at this. This is a fundamental difference between what NVIDIA makes and what an ASIC is. We can address the world of simulation, exactly as you're pointing out, the world of AI emulation, and everything in between, co-simulation. For industries that are fundamentally based on physically based and mission-critical applications, this capability is really important. We address the application space completely.
The challenge, of course, is to reformulate the algorithms, the simulation algorithms, so that they can be accelerated on CUDA. That is a multi-year journey. It took Sassine and me some seven years, probably, to do cuLitho. cuDSS took several years. cuFFT took several years. Now, integrating and reformulating Synopsys' applications to take advantage of this acceleration is what this is all about. We have some 20 applications now that are CUDA-accelerated. All of it will be CUDA-accelerated, and also AI-physics-infused and accelerated, over time. We have a lot of work to do. That is what this partnership is really about: focusing the two engineering teams and pivoting resources across both entire companies, so that we can take this capability to market as soon as possible.
Maybe to add on Jensen's point before I answer your easy question: this is not replacing accurate simulation just because you're doing something at a higher level, or virtualized with a digital twin. You still need both. Accelerating something that takes weeks, or is not even practical to do, and making it happen through this acceleration is where our customers see the opportunity. Now, for the second part of your question: the $2 billion investment will provide Synopsys optionality. As you know, we have a very strong balance sheet. We're already a customer of NVIDIA in our data center. There is no intention or commitment to use that $2 billion to purchase NVIDIA GPUs. That is something we do in the normal course of business, and we've been doing it for many years now.
Sassine and Synopsys are making such a large commitment in this partnership that we thought we would also make a large commitment. There is no purchasing relationship between the investment and anything else. Synopsys is already a customer of NVIDIA's. In the future, of course, as we move into the world of accelerated computing and AI computing, it will be a much larger customer of NVIDIA. There is no relationship between the two sides of that.
Exactly.
Thank you.
Your next question comes from Nitin Dahad.
Hello. Can you hear me?
Yes.
OK. Good. Hi, Sassine. And hi, Jensen. Nitin Dahad with EE Times. Just to follow on from that last question, I didn't really understand, or maybe what's the reason for an investment rather than just a straight partnership when both companies are investing resources and continuing to invest? So where is that going into? Is it more engineering resources? I think you just said it's not extra GPUs. It's normal course of business. The second part of it is, is there any timeline? You talk about a multi-year partnership. You talk about bringing certain products to market. What are the key things that are coming out immediately from this partnership in terms of products that customers can use? I have lots of other questions. I'll do that later with you separately.
The investment is a demonstration of commitment and appreciation for Synopsys going all in on the NVIDIA platform. Not to mention, I think it's a great investment. We're revolutionizing EDA, SDA, CAE, computer-aided drug discovery, basically all aspects of R&D for anybody who does product design, product innovation, and product manufacturing. This is such a large expansion of the market opportunity. Along with the partnership, they're making such a large commitment to building on NVIDIA. This was a wonderful way for us to show our commitment to the partnership. I recognize that none of this is exclusive. Synopsys has a lot of chip partnerships they're going to continue to nurture and advance. NVIDIA has a lot of partnerships with Cadence and Siemens and Dassault that we're going to continue to nurture and advance. Each one of these partnerships is different.
This just felt natural to us. We're delighted to do it. I think it's going to be a great investment for us.
Nitin, on the why Synopsys side: you heard from Jensen why NVIDIA made the investment. From a Synopsys point of view, why did we take the investment? It's really about optionality and acceleration. We can do the work we're doing on our own; we've been doing it for seven years on our own. It's not as if we're looking for motivation to do it. It's going to become table stakes. We know the market is going there. Can we run faster? Can we deliver the roadmap faster? You asked as well: by when will this technology come about? We already have a number of products that we have demonstrated and that are in use with customers, demonstrating that speed-up. We're in early, early days.
There are so many opportunities to truly change the way simulation is done and to attack the various bottlenecks of design. How do you accelerate it, not only with the GPU CUDA layer, but with an AI workflow that will change with agentic AI, beyond the generative AI we already have in the portfolio, to change the way engineering is done? At the system level, how do you virtualize the system to reduce cost and improve our customers' speed to market? For us, it's all about acceleration and grabbing the opportunity that we strongly believe is going to happen.
Look at all of the work we've done to prepare our go-to-market at this point. Twenty-some-odd applications have been accelerated. Where are they going to run? Notice, we showed you two incredible systems that are going to accelerate all of these tools. One of them is Blackwell B200. The other is Blackwell RTX Pro. One is optimized for the highest possible speed in all the simulation and AI physics emulation; the other is designed to do all of that in addition to Omniverse. These two architectures are now available from all of the world's OEMs and in all of the world's clouds. Now we have the ability to accelerate applications for anybody, wherever they like to run them.
Everything from the partnership, the deployment of resources across all these different domains of applications and tools, to the go-to-market, preparing the OEMs, preparing the clouds, all of that is now ready. This is a very big moment for the industry. Now the race is on to move the world from general-purpose computing to adding on top of it accelerated computing.
Your next question comes from Matt Hamblin.
Hi, everybody. Thank you very much. I still don't understand what multi-year means. I mean, three years makes sense. But 10 years seems unrealistic.
We showed a roadmap: by 2026, we are targeting a number of areas that today I call bottlenecks in the design, meaning choke points driven by the time it takes to do a task. Those are areas where we have committed R&D teams and prioritized accelerating our products and workloads on NVIDIA GPUs by 2026.
I think that makes a lot of sense. Basically, the race is on now. NVIDIA has partnered with design tool companies across the industry for some time. This is, if you will, the inflection point that everybody now has to race toward over the next couple of years. I think we're going to see the industry shift from just general-purpose computing to accelerated computing. The ability to scale up simulations, to scale up the speed of simulation by orders of magnitude, that day has come. I think every single engineering organization, over the next couple of years, is going to enjoy the benefits of the work that we're doing here and the platform shift that's happening. I don't think it's going to take 10 years.
I said earlier that scientific computing, which actually moves relatively slowly, went over the course of the last 10 years from 90% general-purpose computing to 90% accelerated computing. That shift took 10 years. This is the industrial space. People's livelihoods depend on it. These tools are mission-critical. Time to market is mission-critical. Competitiveness is mission-critical. I think we're going to see a platform shift of this really quite gigantic computing industry over the next two or three years. Almost every engineering organization will consider accelerated computing starting tomorrow morning.
Yeah. The most difficult part is doing the work, meaning the engineering to get to the acceleration. Once the acceleration is there, getting customers to adopt it is something I'm less worried about, because the bottleneck, the need for that speed-up, is there. Customers are always looking for new methods to get work done faster, still with a high level of accuracy. The 2026 reference I gave is the commitment from Synopsys R&D to prioritize this key technology to be accelerated on NVIDIA GPUs, as well as demonstrating that acceleration with customers. The adoption will happen.
Your next question comes from Kristina Partsinevelos.
Hi. Sorry about that. Just two questions. The first question just has to do with regulatory concerns. Jensen, if you're investing in Intel, Anthropic, the list continues, CoreWeave, are you concerned that a $2 billion investment in Synopsys would start to raise some eyebrows? The second question is, let's say I'm an AMD engineer. Does this mean that your tools now will be optimized for NVIDIA and it makes it more difficult for competitors to be utilizing them? Thank you.
The reason we're investing in our ecosystem is that we're going through a platform shift from general-purpose computing to accelerated computing and AI computing. It makes sense when you consider what we're building: as you know, our platform consists of CUDA and all the CUDA-X libraries, Omniverse, and AI, both agentic and physical AI. Those libraries, that software: NVIDIA is in a lot of ways a software company that builds great chips. Everybody thinks of us as a chip company. In fact, what really gets integrated into companies like Synopsys is all of the libraries we designed and created. When we are able to invest in key parts of the ecosystem, we accelerate the entire ecosystem. That investment makes perfect sense for us. The partnership is non-exclusive. There are no obligations whatsoever for Synopsys to buy only NVIDIA.
They're welcome to continue to work with their rich ecosystem of chip partners. We're going to continue to work with our ecosystem of really important EDA and SDA and CAE industry partners like Cadence and Siemens and Dassault. With respect to the tools that we all buy, as you know, NVIDIA uses lots of x86. We partner with Intel. We partner with AMD. We buy lots and lots of CPUs. All of NVIDIA's EDA, if you will, the way we do chip design, the way we do system engineering today, is still largely based on x86 CPUs. This is really the beginning of that platform shift. In the future, that's going to be augmented and accelerated by NVIDIA GPUs. I'd be delighted for all of the chip industry to be buying NVIDIA GPUs for designing their chips.
Just as I buy their chips to design our chips.
Yeah. Kristina, to be clear, today pretty much Synopsys' entire portfolio is x86-based. When a number of customers started using, say, ARM, we ported our software to ARM-based architectures. When hyperscalers invest in their own compute, we port our software to their compute. We follow customer requirements and needs. What's unique about the partnership with NVIDIA is that we have a partner that is investing in the CUDA layer, not only the compute architecture, and is aiming at the market of engineering computation and speeding up the solution, because that requires investment from both sides. This is not about making our software available on an AMD or Intel architecture, or ARM, et cetera, because it's already available. Can you take the partnership to the next level of acceleration and value to customers? That requires investment from both sides.
If AMD or Intel or whichever customer wants to capture a similar opportunity, it's not exclusive. We're willing and happy to work with them. That's what's unique about what we're talking about here today.
That concludes our Q&A.
One of the things I will say is that of all of the AI opportunities, industrial AI, physical AI, is the largest of all. The reason for that is very clear. The world's industries represent the vast majority of the $100 trillion global economy. Today, those industries, whether you're designing cars or trains or planes or computers, run largely on general-purpose computing. We know that that journey, which has taken us through the last 40 years, has been incredible. Moore's Law has enabled us to reach the incredible condition that we are in today.
In order for us to go even further and do even more, expanding the reach of design and engineering so that we can do almost everything in the world inside a digital environment long before we create the physical manifestation, that is the journey we've been preparing for several years now. Today's announcement really kicks it into turbocharge. This is a huge opportunity for NVIDIA and a huge opportunity for Synopsys. I'm grateful for our partnership over the last 33 years, frankly, since the very first day of our company. From the first day of our company, Synopsys enabled NVIDIA to design our chips. Now our partnership is going to enable everyone to design everything that's physically manifested in the future. Thank you for your partnership. I'm very excited about this.
I'm looking forward to incredible returns on my investment.
You'll get it.
Thank you, Jensen.
Thank you. Thank you, Sassine.