Welcome to Synopsys Converge. I'm so excited to be here. This is year one of the new Synopsys. While 2026 marks our 40th anniversary, it is year one of the new company. In 2024, we set out on a mission to transform Synopsys, and it took us two years to complete that journey. This year is the first year of the joint company with Ansys. I'm so excited to share with you the why: what we see in the future that drove us to make that big investment and decision, so we can deliver the technology necessary to meet the ambition of the products you want to build and deliver. I have three sections for my presentation.
In the first section, I'll walk through what I see when I live and operate in the future. I often find myself living five years from now; then I have to bring myself back and decide with the team what technology decisions we need to make to deliver for our customers. I'll also share the key investments we're making as a company to address the challenges and opportunities, to re-engineer the engineering, and to deliver on those future opportunities. With that, let's get started. You've noticed we have a new name for our conference, Synopsys Converge. We're bringing together the silicon chip users with the system users, the traditional SNUG and Ansys users, in one conference. I hope you will recognize and see the importance of that convergence in delivering engineering solutions.
Among our users, we have our customers, executives, and a number of our partners as well. The convergence decade. We are truly at an amazing point in human history when it comes to technology. The pace at which technology is moving is like nothing we've ever witnessed before. I don't want to ramble through what we've witnessed over the last three decades, because we've all lived it. We've all lived the technology disruption that rolled out over the last three decades and changed the way we work, the way we live, the way we operate. A lot of great things have happened. Just look at the last three years, and let's call it the ChatGPT moment, which made every industry, every executive, every human question what is possible with artificial intelligence.
There are massive things changing due to technology. What's different when you look at the next decade is that there are gonna be mega trends happening simultaneously. When mega trends happen simultaneously, you have to think about how to converge from a technology point of view to deliver to that simultaneous disruption. Compute and AI are absolutely at the center of making that convergence possible. If you look at compute, it's always been on an exponential with Moore's Law. But look at the last five, seven, eight years. That exponential, driven by many new architectures and advancements in process technology and packaging, has made compute strong enough for AI to flourish. AI would not have happened with the power we're witnessing without the power of compute.
Now, our own experience with AI is that we took advantage of the opportunity to see how we can bring it into EDA, to make our engineers and our customers who are designing the compute even more efficient at designing even stronger compute. There is this loop of optimization that AI is driving, and compute is touching many other industries and making possible things that were not possible before. Having stronger, better, faster machines plus the intelligence layer is making it possible for new materials to be discovered, for health and life sciences to change, for diseases to be cured, and more. That all requires a lot of energy; new energy sources and new ways to store energy are all being explored. Again, unlike isolated technological breakthroughs, we're witnessing an unprecedented convergence of technology.
We are in the era of pervasive intelligence, where AI is infused everywhere, making it possible for products to move from being smart to intelligent. The difference? Intelligent products can learn, adapt, and act autonomously, and that requires massive compute and software intelligence to make it happen. All of this runs on and is powered by silicon. That is the opportunity for this team and our engineering community: to deliver that future, the era of pervasive intelligence. Now, everything we've witnessed so far in AI is digital AI that lives in data centers or on your devices. Very cool, very exciting, very useful. It hasn't yet moved to physical AI applications and possibilities. As it starts moving to physical AI, that's where bits meet atoms, and I would actually like to think of it more as bits inhabiting and commanding the atoms.
There are many versions of physical AI. If we separate them into embodied and non-embodied physical AI, the embodied versions operate in the real world, untethered, dealing with many unknowns, while non-embodied physical AI, where a lot of the advancement has happened, operates in a controlled environment: still very intelligent, but controlled. The difference between the two is orders of magnitude of complexity for engineering to deliver. That requires different techniques and a different approach to digital twinning of what you are building; otherwise your cost and time targets will be impossible to meet. These technical challenges force us to rethink and redesign the approach to delivering these products. Now, let's hear from some of our customers who are designing that future and see what they have to say.
There are going to be tens of millions of unfilled labor positions by 2030. Therefore, we have a problem. That's where humanoids can fill this void.
We are a few years away from humanoid robots showing up in production-type facilities.
We see humanoids as augmenting the human workforce.
Physical AI is enabling artificial intelligence to understand environments, physics, the behavior of physics, and everything that's around us.
Much like AI agents are helping us do our work, there will be physical AI agents, which are basically humanoids, helping complete tasks.
There's no one better than Schaeffler, having already scaled in the automotive industry and now also in robotics.
We have a partnership with a customer, and we are integrating their robots in our production facility to do tasks like picking up boxes and putting them on conveyor belts.
This will continue to expand as the capabilities of these humanoids increase and they become more generalistic.
Now the big question is, when can a robot take on more difficult tasks? If you have heat treat operations, and you're working next to a heat treat oven, you have 120 degrees Fahrenheit, or maybe you're dealing with chemicals that are not good to breathe in. These are the types of environments where humanoid robots could really help out.
Physical AI will be part of our DNA in the future. Our vision is that we have generalist humanoids, and these humanoids then won't only be at one portion of one machine, but they will take care of centers of machines. In simulation, we can train these robots, so the robots then will get brought into the plant. They will have this programming already, and there will be minimal training then left for the robots to do.
10, 20 years from now, I see humanoids basically doing every task we don't like to do, and this is the next big step for humanity: doing things we love to do, or focusing on things which are important for us.
We're just on the cusp of physical AI.
I get goosebumps when I see these videos and the contributions we deliver to create that future. If you think of that future as the age of reasoning or the age of embodied AI, what an amazing possibility is here. Now, it's not easy to deliver that future. There are many challenges. These systems are gonna be power-constrained; as I mentioned earlier, they will operate untethered, without a power cord, battery-operated. Latency sensitivity is critical. You cannot send an action to a data center and back to that physical system. The uncontrolled environment is very complex, because there's only so much you can train for, and when the system operates in an uncontrolled environment, the complexity is at a whole other level. Lastly, there is cost.
In order for these systems to be practically deployed, we have to manage the cost to design and deliver them. Now you may ask, how do we deliver this future? The "we" in this case is not Synopsys alone; it's how we as an ecosystem deliver this future. Look at the ecosystem we operate in at the silicon level and the system level. The reason it's so expansive is complexity. Complexity is what drives the need for tighter collaboration to optimize and improve the final outcome for the customer. Synopsys' roots are in the silicon, with many ecosystem partners; with Ansys we expand to the system level; and there's a productivity layer on top.
A lot of these systems are gonna operate using computation on the cloud, different types of compute, needing hybrid and mixed optionality with resources as well, and the partners that deliver with us to that future. Synopsys is the leader in engineering solutions from silicon to systems. As I said earlier, this is a special year. This is the year we're refounding our company. I wanna brag a little bit here: we have done a great job of continuing to deliver as the number one in EDA for more than a decade and a half, thanks to our investment and to the customer relationships and trust that let us keep innovating and maintain that EDA leadership. In silicon IP, an essential component for delivering these complicated systems, we're the number one in interface IP.
With the Ansys addition, we bring in the leadership position in multiphysics simulation and analysis: a 50-year-old company that has delivered true innovation and stayed on the leading edge to maintain that number one position in trusted, high-fidelity multiphysics simulation and analysis. We are investing and expanding into creating system simulation and analysis for the fast-growing digital engineering world, and I'll touch more on this when I talk about digital twinning. It becomes very important: how do you modernize the engineering of the future? When we talk about convergence, these assets are very important to delivering on that future. There are three key investment areas we're making as a company. The first is co-design, the second digital twin, and the third agentic AI. Now, for the skeptical engineer sitting here, you may argue, "What's new about co-design and digital twin?
We've been doing co-design or digital twin for decades." Now, that's true. The concepts of co-design and digital twin are not new. But engineers, or companies in general, don't make an investment in their workflow just because. You do it because you have constraints to deliver against. As I just shared in how we envision future products, the constraints are massive. You cannot continue designing without rethinking how to engineer your product. I'm gonna start with co-design. I'm a big believer that brilliant innovation happens when the constraints get tighter and tighter. When the constraints are the highest, that's where you have to start thinking outside the box, to the system, to the system of systems, and you open up the opportunity to optimize. In order to do so, there's a lot of complexity you need to manage.
First, what is the definition of co-design? In co-design, each domain interacts, back to the system, then opening up to the next level of system, and you have multiple engineering disciplines working together to deliver the final product. That reasoning robot is not only an electronics challenge; it has electrical, thermal, mechanical, and fluid aspects, many engineering complexities we need to take into account. I want to introduce the concepts of horizontal co-design and vertical co-design. In vertical co-design, you're optimizing within an engineering domain; horizontal co-design is across domains. If you look at vertical co-design, we can argue that in semiconductors and silicon, we've been doing it for a while. What drove it? The complexity of silicon: it started taking longer to deliver the silicon and bring it back to software developers to write the software.
I'm sure many of you remember: software used to get started either on older silicon, because the derivative was gonna look pretty much the same, or you waited for the silicon to come back and then started building your software. The automotive industry was a perfect example, where the life cycle of a design was multiple years because of the complexity of designing each part independently. When you bring it together at the system level, that's where the complexity of design and optimization shows up, and in many cases over-design: you build in so much design margin as you hand off between the different domains. Now, within chip design, when you look at 3D advanced packaging, the challenge is no longer electronics only.
You have to take into account mechanical issues, fluids, stress, many aspects; you need to start looking horizontally to co-design even as you optimize vertically. The key here is how to reduce the overdesign margin. Now, within the electronics, using the reasoning robot as an example again, you have a massive amount of silicon. Look at the three main functions from an electronics standpoint: you have the embedded reasoning, the VLA, the vision-language-action model, which has to connect to multiple functions like cameras, microphones, actuators, and sensors. Some of these electronics may come from different suppliers, may be on a different node, a different technology. Given how the electronics architecture comes together, you need models of that silicon to see whether you're getting the right optimization, the right cost. Are you overdesigning or not?
I can argue that when you talk about a system, a system design by definition is a co-design challenge. You cannot talk about a system without the multiple optimization domains and opportunities it presents. Let's hear from one of our customers about their approach and how they're thinking about system design and co-design.
Co-design's really important. We really have to partner with our OEMs and our tiers. Again, it's really having access to the data and understanding, one, the technology roadmaps that everybody's working on, and the vision and execution they're trying to deliver in the market for their customers. To be able to provide that feedback loop, not just from a physical perspective but also from a software and a data perspective, is really, really important.
Co-design from the aspect of getting that feedback, working with partners is central to how we achieve the differentiated Audi experience today. This is typically achieved by working very closely with partners, by simulating and also producing early prototypes and getting feedback as soon as possible.
Okay, I'll say it again. Isn't that cool? It's very cool. All right. Now, we've talked about what the future looks like, co-design, and why. I wanna talk about the current portfolio: what we're doing, where we're investing, and the stages of investment we're in to deliver these capabilities. If you break down EDA into the various functions of digital, analog, verification, and sign-off, you start from a spec and a requirement, then you go through the phases of the design. In multiphysics, it's the "multi", the various types of physics you need to take into account when you're simulating or bringing physics into the design stage.
By having the strength of this portfolio, even though every part of it is open and interoperable, and our customers can pick and choose how they use it in their workflow, there is value and power in fusing the technology. You reduce your margin; you get a more converged design. As we start looking toward the future of system design and simulating the system, it's important that whatever you're simulating matches the real thing; there is the whole sim-to-real concept that I'll touch on. Synopsys' DNA over the last decade-plus has been fusing the technology together to help our customers reduce the iterations caused by step five of the design not matching step three, etc.
I wanna reminisce a little bit when I look at this slide, because I've lived each stage of it. I remember the days around the 2010 timeframe when the big value we provided our customers was what we called the value links. How do you go from synthesis to P&R to extraction to timing with value links? Our customers reached a point where the biggest challenge was timing closure. They would get to that final stage, then keep iterating to close timing. We embarked on a mission around the 2017 timeframe to create Fusion Compiler, the design fusion platform.
Instead of correlating, you bring the actual sign-off engine inside the design stage, and you reduce the loops later in design. As technology moved to the Ångström era, it was no longer only about timing and extraction; you had to bring many other aspects of the design process and flow fused into the design platform. As you go into the AI super chips era, that's what we just talked about: it's no longer a vertical optimization, it's a horizontal co-design. Most of the challenges in advanced packaging, and I'm not gonna oversimplify the complexity of designing the electronics, are how you take into account the thermal, the warpage, the cracking of the dies, the mechanical aspects. If you do it too late, think about the cost. How do you bring that same capability and fuse it inside the design platform that you're building?
I am so excited to announce our Multi-Physics Fusion technology, and a big, big thank you here goes to our R&D team. When we announced the acquisition of Ansys, and when we closed on it and started integration, our customers' first question was, "When do I get the technology?" We promised the first half of 2026, and we made that promise because we know the team. We've had a partnership with Ansys since 2017, and we know the customer requirements very well because we work very closely with customers. To deliver that fused technology in that timeframe was fantastic execution, so a big thank you to our Synopsys R&D team for delivering it. Thank you. I can ooze with excitement when I'm talking about this announcement.
Later today, Shankar will go through some of the details of what it delivers to our customers, and you will see the value and the thesis of bringing the technology together. Now I'm gonna be repetitive. As part of the Multi-Physics Fusion technology, we're announcing a first wave of four technologies that are in beta testing with customers today, going through the normal NPI process, the new product introduction process: once we test internally and work with the beta customers, our field organization starts pressure-testing it and expanding customer engagement. That's the phase we're in right now. In that first wave, we have multiphysics solutions for timing sign-off, multi-die design, design closure, and analog design. The concept is the same.
How do you bring electromagnetic, thermal, and mechanical effects into the design phase, so that while you're designing you have visibility into this multiphysics impact? Where you see the purple, the Synopsys product is the host, and the Ansys product integrates into the host. For static timing analysis, the host is PrimeTime, and RedHawk, the leader in thermal, stress, and IR-aware analysis, will be plugged into the STA solution to deliver more complete sign-off under extreme conditions that are very difficult for our customers to sign off today. The next product is multi-die design. Here 3DIC Compiler is the host, and we're bringing HFSS-IC for the electromagnetics, and SI, signal integrity, analysis with RedHawk and Totem, both at the device and the cell level, into 3DIC Compiler.
Design closure has always been the last-mile bottleneck. How do I close the design for timing, for power, etc.? With PrimeClosure, the golden sign-off ECO, we'll again be bringing the technology from RedHawk and Totem to deliver on sigma DVD, or dynamic voltage drop, driven by the power grid analysis insights from the Ansys analysis technology. Analog. With analog it always feels like the forgotten child, maybe, but I love analog. That's where I started my life and my career, so I'm gonna spend a little bit more time over here. When you think of Synopsys, sometimes you may not be thinking analog. We have both the schematic and layout analog design platform with Custom Compiler, and we have a leading position with PrimeSim, the circuit simulation for AMS and analog.
Those products will be the hosts, and we'll be bringing in Totem to plug into this technology to deliver the same thing. How do you get broader, deeper inductance analysis in your design? I remember the days when parasitics used to be less than a third of the elements of an analog design. Right now, at 5 nanometers or below, the parasitics alone can significantly outnumber the design elements. The complexity of designing and verifying is enormous, and that's what we're addressing here. Multi-physics fusion is here. We have a proven integration working with a number of customers in beta, with great feedback and immediate impact in reducing overdesign. Think of it: what is the value of multi-physics fusion? It's going from overdesigning to co-designing, which is very important.
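To make that overdesign-versus-co-design point concrete, here is a minimal, purely illustrative sketch. None of this is a Synopsys or Ansys API; the path names, derating factors, and numbers are all hypothetical. It contrasts a blanket worst-case guard band with a per-path margin informed by each path's simulated thermal and IR conditions, which is the essence of fusing physics into sign-off.

```python
# Hypothetical illustration: blanket guard band vs. physics-aware margin.
# Not a real tool flow; all names and numbers are made up.

paths = {
    # name: (nominal_delay_ps, simulated_temp_C, simulated_ir_drop_mV)
    "cpu_core":   (820, 95, 38),
    "noc_bridge": (640, 70, 22),
    "ddr_phy":    (905, 60, 15),
}

def blanket_margin(delay_ps):
    # Overdesign: assume worst-case thermal/IR everywhere, e.g. +12%.
    return delay_ps * 1.12

def fused_margin(delay_ps, temp_c, ir_mv):
    # Co-design: derate each path by its actual simulated conditions
    # (toy linear model: +0.05%/C over 25 C, +0.1% per mV of IR drop).
    return delay_ps * (1 + 0.0005 * (temp_c - 25) + 0.001 * ir_mv)

for name, (d, t, ir) in paths.items():
    blanket, fused = blanket_margin(d), fused_margin(d, t, ir)
    print(f"{name:11s} blanket={blanket:6.1f} ps  "
          f"fused={fused:6.1f} ps  reclaimed={blanket - fused:5.1f} ps")
```

The delta between the two margins is timing slack that a fused flow can hand back to the designer instead of burning it as guard band.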
That's where the cost, the time, and the energy go when you do a lot of that overdesign. With that, I want to turn it again to a customer to hear what they have to say.
Thank you, Sassine. Multi-physics fusion is critical to enabling advanced packaging such as EMIB and to our advanced logic nodes. As we move into the Angstrom era, the impact of thermal, structural, electromagnetic, and other physics effects becomes more dominant. To reduce design margins and deliver the highest-performance, most efficient silicon, multi-physics analysis is critical. Intel and Synopsys have a strong, long-standing partnership, from bringing advances in silicon technology to powering data centers and physical AI. We are excited to continue our strong partnership with Synopsys into the Angstrom era with our collaboration on Intel 18A, 14A, and beyond.
It's great to hear from Intel and Lip-Bu Tan. We covered co-design. Another aspect of co-design, back to the electronics of a reasoning robot, is that you have multiple chips needing to interact with one another. One of the key elements is IP. AI, as you know very well, relies on data. How do you move data between chips? How do you store data? How do you operate on data? That all happens through the interfaces that connect these chips together: UCIe from chip to chip, PCIe, the SerDes, the HBM. There's a great deal of complexity, and with the AI push, standards are no longer something we wait for before delivering the next IP, because if you wait to deliver to a standard, you're always late.
Our customers, hyperscalers and others, are pushing that acceleration because they see that moving data around is becoming the bottleneck. And even though it's a standard, that does not make it easy to design. We have the number one position in interface IP. One thing I was so excited to hear from the team last week, because it's very rare for Synopsys to play in that domain of getting actual silicon: it was the first HBM4 IP test chip with one of our partners that is leading in HBM memory. This test chip is the connectivity layer between the logic die and the HBM stack. You can have the best, most competitive HBM stack available; you still need to connect it to the base layer, the logic layer, and this is where we play.
This is the first test chip that was released. We actually got it in our labs last week. It's operating, if I'm not mistaken, at about 9+ Gbps, with potential to go up to 12. That's very special, and thanks again to the team for delivering something so essential to the build-out of these data centers. Thank you. Now, I've talked quite a bit about the Synopsys product as the host. We also have a number of integrations where Ansys is the host product and we're fusing in the Synopsys technology. We had the first major Ansys product release since the acquisition, the R1 release. Again, we did not miss a beat. There were a lot of worries from customers.
This is a massive acquisition and a complex integration, and we did it incredibly well, based on our commitment to our team internally and our understanding of the customer requirements. I don't want to go through these in detail; Anthony will share a lot more about the products and the highlights of the R1 release. The key areas are in optics, or photonic ICs, where Synopsys has the design and simulation platform capability, with Synopsys OptoCompiler plugging into the Ansys optical software. QuantumATK, an atomic-scale simulation platform, is plugging into Granta. The last one is fault simulation for automotive applications. Again, Anthony will go into this in more detail. Let's go to a customer testimonial here as well.
It's great to join you at Synopsys Converge. Sassine, thank you for the invitation. We value the partnership with Synopsys and the work that we're doing together. Engineering and design are growing in complexity. Simulation and emulation are more demanding than ever, and verification cycles are in super high demand. AI is in fact becoming such a critical part of how products are imagined, tested, and brought to market, and the engineers need to be able to run more of their design, test, and verification workloads in parallel. That's exactly what we focus on, building high performance and AI computing to help solve the world's most important challenges. It's not just more compute, it's co-design at scale. At AMD, when we build today's most complex multi-die 2.5D, 3D package chiplet-based designs, we are optimizing across the full system from the very, very start.
Architecture, silicon implementation, packaging, and software all moving together. That co-design increasingly spans multiple physics domains. You're balancing power and performance with thermal behavior, signal integrity, mechanical stresses, advanced packaging, and long-term reliability, and you're doing it while, again, the verification cycles are compressing. The goal is to reduce guard bands by improving predictability, so you can deliver higher performance and better efficiency with confidence, ultimately accelerating time to market. That's why our work with Synopsys and Microsoft is so important. With the Microsoft Discovery platform powered by AMD EPYC CPUs, we're enabling faster access to industry-leading EDA software from Synopsys. Together, we're helping engineers simulate faster, iterate more quickly, and bring better products to life. Our focus is simple: deliver the performance, the efficiency, and the flexibility engineers need to move from idea to silicon faster.
We look forward to what we can build and advance together. Thank you and enjoy Converge.
I'm enjoying these videos because I'm getting my steps in on the stage, so it's fairly cool. All right, we covered co-design. I hope by now you see that even though the concept has existed for years, the need for it is driven by constraints and complexity, and the essential part of delivering it is fusing the technology so you don't have surprises later in the design stage. The next key investment area, as I mentioned in the beginning, is digital twin. With digital twin, both at the silicon level and, more importantly, at the system level, the representation of the silicon at multiple levels of accuracy and abstraction becomes very important. Before I dive into the technology, just look at the global industry R&D spend in 2025: $1.7 trillion was spent on R&D.
Only 10% of it goes to automation and technology tools. Most of that spend is still in labor and physical prototyping. The age of physical prototyping is gonna become untenable if you want to deliver an intelligent system. If you want to deliver better products faster and cheaper, there is no way to do it without evolving and moving to the next phase. Now, within that 90%, 75% of the time is spent on failure elimination, and 60% of the failures originate in the initial design phase. If this is not compelling enough for companies to evolve and go to the next phase of digitizing their flow, then I'm not sure how these companies can survive.
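As a quick sanity check, here is that arithmetic spelled out. The percentages are the ones quoted above; treating the 75% figure, which refers to time, as a proxy for spend is my own simplifying assumption for illustration.

```python
# Back-of-the-envelope math on the R&D figures quoted above.

total_rd = 1.7e12                    # global industry R&D spend, 2025 ($)
tools = 0.10 * total_rd              # share going to automation and tools
labor_and_proto = total_rd - tools   # the remaining 90%

failure_elim = 0.75 * labor_and_proto    # 75% spent eliminating failures
early_failures = 0.60 * failure_elim     # 60% originate in initial design

print(f"Automation and tools:        ${tools / 1e9:,.0f}B")
print(f"Labor and prototyping:       ${labor_and_proto / 1e9:,.0f}B")
print(f"Failure elimination:         ${failure_elim / 1e9:,.0f}B")
print(f"Traceable to initial design: ${early_failures / 1e9:,.0f}B")
```

Under that simplification, roughly $688B a year of failure-elimination effort traces back to the initial design phase; that is the bucket digital twins and virtual prototyping go after.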
Automotive is a good example. We all remember the visuals of crash testing with physical prototypes: not only is it expensive, it takes a lot of time. How do you bring in virtual prototyping for many aspects of this? The analysis and simulation of what happens in the physical world becomes very important. Now, a digital twin is only useful if it has high fidelity, if you can trust that whatever you're assuming and building at the design phase is gonna match, sim to real, in the real world. It has to be accurate. It must be adaptable, flexible, and scalable for it to be practical, rather than always behind where the engineering team needs to be. Now, digital twin is happening in a number of industries, and the reason it's happening is, again, the time and cost to build a product.
More importantly, as you have intelligent systems, you need insights into how the product is operating in the real world and what you need to change in your software or other aspects of the product to deliver on it. There is cost reduction, a faster design cycle, and predictive insights, so you can build a better, cheaper product once it's in the field. I wanna use the automotive digital twin as an example to illustrate a couple of things. One, you need a digital twin of the physics, of the actual physical product, and that's a separate engineering effort to create. There's also a digital twin of the electronics.
Increasingly, the bill of materials for building these systems is electronics, so how do you design and maintain a digital twin of the electronics? And as these systems become more autonomous and operate in the real world, you need a digital twin of the environment. You need multiple levels of digital twins: physics, electronics, and environment. With that, you need an ecosystem to deliver. I'm so excited to announce today that we're delivering the electronics digital twin platform. We have a very good history with virtualizing the silicon and providing hardware capability to emulate the silicon. What our customers at the OEM level, if we stick to automotive, have been challenged with in adopting it is the rest of the ecosystem. What we're announcing here is the actual platform, and the key to the platform is the ecosystem.
It's an open, cloud-based platform that brings in the ecosystem to plug in their SIL kit, their software-in-the-loop kit, or the different test environments they have, for the automotive vertical in particular. More than three dozen ecosystem partners are already on this platform. Think of it as the operating system you need to design a digital twin for an autonomous vehicle. Not only does the virtualization of the silicon need to happen; you also need to verify the silicon, and these chips are getting very complex in terms of size, capability, etc., to enable both software development and a faster verification cycle. When you think of software-defined anything, a software-defined car or anything else, you get an update on your phone because the software got updated.
All of a sudden, your phone runs faster or at lower power, consuming less energy. Same thing for your car. This is the modern way of providing software-defined, hardware-assisted verification to our customers. You can have a ZS5 installed in your fleet, and we can provide a software update that gives you better performance and better debug, that's an easier way for me to say it, without having to change the hardware. It's a software update for the hardware you have. Historically, going from one generation of hardware to another, say from ZS3 to ZS4 to ZS5, we typically deliver about 2x the performance and 1.5x the capacity. This gets somewhere close to that 2x performance with the same hardware.
The other part, which we launched a few years ago, is the EP, the Emulation Prototyping system, which gives our customers the flexibility and modularity to run the same software layer on different hardware configurations. The next wave of our hardware systems provides the next capacity improvement, from 6 FPGAs to 12 FPGAs, and again, this is all based on the software-defined system. The reason we have an advantage in delivering a software-defined system is the architecture of our platform. This is an FPGA-based platform, and that architecture gives us the flexibility to provide that rhythm of improvements in capacity, performance, debug, etc.
Here, of course, the FPGAs that sit in the system are the AMD Versal. Now, within digital twin, so far we've described the platform needed to plug in the ecosystem. We talked about virtualizing the silicon. We talked about HAV, which is needed for software development. Another layer of digital twin is the environment. A few months ago, we announced a very special and unique partnership with NVIDIA. Not only are we racing ahead on GPU acceleration, which has significant opportunities for pretty much every product in our portfolio to improve runtime from whatever the baseline is to 10x, 15x, 80x; we have some use cases with 100x+ acceleration, by enabling and optimizing our products on NVIDIA GPUs. The next layer of the partnership was around Omniverse.
Think of Omniverse as the operating system of physical AI, where you can visualize and simulate, in a photorealistic environment, what the end product will look like. You reach a point in that design, before you commit it to manufacturing, where you do need actual high-fidelity physics simulation. That's where our technology plugs into NVIDIA Omniverse to deliver a high-fidelity digital twin of the environment. The first product here is Ansys Fluent, a CFD, computational fluid dynamics, product, along with Ansys AVxcelerate, which is specific to the autonomous vehicle use case and accelerates the bring-up of that digital twin inside Omniverse.
Now, so far I've described the world of embodied AI, intelligent systems operating untethered, as I call them, in the real world. There's a whole other level of similar innovation needed in the data center to constantly deliver higher performance, more efficiency, and lower power. With that, let's hear from one of our customers on what they're doing there.
AI is really a story of extreme densification at the chip and the silicon level, very high heat loads, but it's also one of different performance criteria. We're talking about hundreds of megawatts and gigawatt-level sites. We have to be more intelligent in our design process, and that intelligence really comes from things like simulation and co-design and co-development of multiple functions at the same time. The way we're considering digital twins really ultimately helps accelerate our time to market. It allows us to really look at data centers as an AI factory and see how all of the pieces fit together, see the impact of subsystems on other components and other systems. In many instances, we're on 12-month design cycles where the core infrastructure, the core IT load and capacity is changing inside of 12 months.
That's stressing the physical capability of design and prototypes and product development and time to market in a way that doesn't allow us to just be manual. We can go through kind of what-if scenarios in the field in a virtual environment that allows a much more effective and a much more efficient deployment model for how we can ultimately start to take care of these sites at gigawatt scale. We can drive towards enriched physics-based models of everything we can comprehend. 10 years from now is going to be transformative.
We've covered co-design and digital twin. Last but not least is agentic AI. With agentic AI, when I talk to customers, there is still, at the user level, the engineering level, a mix of excitement and fear. The fear is around "how will it change my job?" The excitement is that it's a huge productivity booster. As you go up the organization chain, there's a lot of excitement. Why? Because management sees there are so many opportunities to go capture, and most of the time we're limited by the number of engineers and how much we can do as an organization to deliver that future. It's incredible how fast the technology has moved.
The compounding complexity of engineering, be it the AI acceleration, the silicon complexity we talked about, the software-defined systems, the co-designing of all of this, or bringing together new methods and new workflows to deliver products better, faster, and cheaper, requires re-engineering the engineering. It requires a human engineer to partner with agentic engineering technology to deliver on it. Last year, we introduced the framework from L1 to L5, the five levels of autonomy as we envision them, from copilot to autopilot, in various parts of the design flow. We've made tremendous progress. At L1, we have six copilots; in pretty much every part of the design flow, the user today can interact with a copilot for assistance in generating outcomes rather than doing it manually. At L2, we have 24 task agents.
These are specific agents whose entire role and job is to deliver on a specific task. At the L2 level, the orchestration is done by a human engineer who assigns tasks to multiple agent engineers. At L3, we have three multi-agent workflows. What is that orchestration layer? It's the layer where a task agent moves to the next level up: it's managed and orchestrated by another agent engineer. Then we move toward L4 and L5. To push to the L4 level, contextual intelligence is necessary. Think of it as the dynamic orchestration of these multiple agents by an intelligent system. That cognitive layer is essential to create a reasoning agent that is able to manage the multiple task agents we have delivered.
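To illustrate the L2-versus-L4 distinction in the simplest possible terms, here is a hypothetical sketch; the agent names and dispatch logic are invented for illustration and do not represent any Synopsys product. At L2, a human fixes the sequence of task agents; at L4, a reasoning layer chooses the next agent dynamically from the state of the design.

```python
# Hypothetical sketch of L2 vs. L4 orchestration; not a product API.
from dataclasses import dataclass

@dataclass
class TaskAgent:
    name: str
    def run(self, state: dict) -> dict:
        # Placeholder: a real task agent would drive a tool or solver.
        state[self.name] = "done"
        return state

# L2: the human engineer picks the agents and hard-codes the order.
def l2_human_orchestration(state: dict) -> dict:
    for agent in (TaskAgent("write_rtl"), TaskAgent("build_testbench")):
        state = agent.run(state)
    return state

# L4: a cognitive layer inspects state and dispatches agents dynamically.
def l4_dynamic_orchestration(state: dict, agents: list, max_steps: int = 10) -> dict:
    for _ in range(max_steps):
        pending = [a for a in agents if a.name not in state]
        if not pending:
            break  # all tasks satisfied
        # Stand-in for contextual reasoning: here we simply take the first
        # pending agent; a real orchestrator would reason over the state.
        state = pending[0].run(state)
    return state
```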
We're investing in the four layers of the chip design flow: design and verification, digital implementation, analog, and manufacturing and SLM, the silicon lifecycle management. At the heart of it are our solvers, our algorithms, our products. Every product we have today has copilot capability, and wherever we've seen an opportunity to implement reinforcement learning or other techniques, we've already done it. The bottom layer is the infrastructure. There's a whole other level of security and telemetry that our customers need when they're deploying up that stack, and there we have a number of partnerships with Microsoft, NVIDIA, AWS, and Google to deliver that infrastructure layer. Now, it's all about optionality for customers, and our customers in many cases are demanding it: "Hey, I may have my own agent. I have my own data.
How do I plug it into what you're delivering to me?" The answer is yes. You can bring your own agent and plug in through MCP to the Synopsys stack. You can have your own infrastructure. You may wanna run this on-prem. It doesn't matter; we're providing that entire level of optionality to our customers. I want to announce the industry's first L4 agentic workflow. This is the cognitive layer, the adaptive layer, where you have a contextual view of the multiple agents to orchestrate. The example we have right now goes from a spec to RTL. There are multiple tasks you need to do: you have the architectural spec, you start designing the RTL, you need to build a test and a test plan, you have to go through formal verification and static verification, and then there's coverage and debug.
Each one of those is a separate task agent, and at the end the output is RTL. Will that RTL be good enough to take to the next phase of the design? That's why you go through the final coverage and debug and check the quality of the RTL. We've made tremendous progress here. My urging to you is: explore, be open-minded. I know skepticism is always good, but not to the point of not taking advantage of what's possible and the speed at which things are moving.
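Here is a hedged sketch of what that spec-to-RTL flow could look like as a loop, with coverage and debug acting as the quality gate on the generated RTL. The stage names mirror the tasks listed above; the control flow, thresholds, and function names are my illustrative assumptions, not actual product behavior.

```python
# Hypothetical spec-to-RTL multi-agent loop; illustrative only.

STAGES = ["architectural_spec", "rtl_design", "test_and_testplan",
          "formal_verification", "static_verification"]

def run_stage(stage: str, state: dict) -> dict:
    # Placeholder for dispatching the stage to its specialized task agent.
    state[stage] = "complete"
    return state

def coverage_score(state: dict) -> float:
    # Placeholder metric; a real flow would measure functional coverage.
    return 0.97

def spec_to_rtl(spec: str, target: float = 0.95, max_iters: int = 3) -> dict:
    state = {"spec": spec}
    for _ in range(max_iters):
        for stage in STAGES:
            state = run_stage(stage, state)
        if coverage_score(state) >= target:
            return state  # RTL is good enough for the next design phase
        # Otherwise loop, feeding coverage/debug findings back upstream.
    raise RuntimeError("coverage target not met; escalate to a human engineer")
```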
With that, let's listen to Satya, CEO of Microsoft. Satya, great to see you, and thank you for taking the time. As I reflect on our collaboration and partnership: about four years ago, we introduced EDA on the cloud with the Azure partnership, to give our customers more capacity as workloads became more complex and demanding. About two, three years ago, we looked at everything generative to change, simplify, and assist our users at the user interface with Copilot. Last year, at our Synopsys User Group, we were in the early stages of agent engineers, or agentic AI, with Discovery. It's amazing how we're now seeing many great proof points, with task agents moving into planning and orchestration. Now it's accelerating into adaptive learning and orchestration. How do you see the world when it comes to agentic AI for science and engineering?
Yeah, no, first of all, Sassine, it's so wonderful to be back with you again. It's unbelievable, the rate of progress you've been making, riding essentially this new wave. I always go back, you know, to the foundation of what this deep learning revolution has been all about. It's basically predictive power on trajectories, right? I mean, at the end of the day, predicting what happens next in any workflow is the powerful new capability this generative AI has given us. As you so rightfully pointed out, you know, at first we used it.
If I try to put it, whether it's in an EDA tool or a coding tool, we first used it to say, "Oh, can I predict the next thing?" Then we said, "Oh, can I ask any question and get responses back? Can I assign some tasks? Or can I now fully build an autonomous agent on a multi-agent system, which is the agent engineer, if you will?" I mean, the idea that you can sit in front of an EDA tool like yours and, in natural language, express high-level intents and then have this agentic system go plan, execute, verify, right? That's the other magic. In fact, one thing that I think we miss, I sometimes think about the coding stuff: why is coding getting so much better?
It's because of the human-engineered feedback loop with something like Git and work trees and what have you, right? Without that, I don't think we would be this far. That's where I think all the engineering you all have done in EDA, as a tool, as a scaffold, as a harness for generative AI, is what's gonna make this agent engineer the next big driver of productivity, and the world needs that, right? Because when I look at the pace at which this industry is moving, EDA becoming more productive is going to be critical for AI to, you know, actually get created.
No, exactly, we're seeing early proof points. Given the complexity of engineering we're dealing with, the task agents are evolving into more adaptive, sophisticated engineers that are taming that complexity. The role of the human engineer is becoming more relevant in terms of guiding and accelerating the possibilities of that innovation. Now, Satya, if you were to forecast a year from now, I don't want to go further: when you look at how AI is moving from digital AI to physical AI, the innovation that happened with digital AI is the plethora of LLMs, foundation models, and the training that has gone into making them smarter and specific to certain domains.
When you look at physical AI and modeling the world and environment that the end product is gonna live in, the way we see it at Synopsys is silicon-to-system physics models: accurate physics models that you put in a robot, in a car, etc., and you trust they're gonna function. In that context, how do you envision our partnership and the world changing?
Yeah, see, that's very exciting to me, right? You're pulling the thread on a couple of things, one of which is: what is Synopsys? Synopsys today has deep knowledge of the core physics of creating semiconductor products, right? That's what EDA is all about. That knowledge is there today in the tools you create, in the human capital that works at Synopsys, and soon it'll be in the foundation models of physics that are gonna be part of Synopsys, right? You're going to encapsulate essentially all the traces, all the data that you have, and turn that into basically a model that has an understanding of the natural world, in other words.
... of physics. Of course, it'll become essentially the engine that fuels the next generation of EDA. Now to me, that is what is exciting. People ask me, "How many model companies are going to be there?" I kinda think about it this way: as many companies as there are going to be in the world, that's how many models there are going to be, right? Otherwise I think we're missing the point. There'll always be a foundation model for some general purpose, but the direction of travel is multi-agent systems. You will definitely use a natural language model to drive an interface, but it will then interface with a physics model, which then creates the plan for the output that needs to come out of EDA.
That ability to envision your own IP in a model, the use of other models, open-weight models, and bringing all of these things together as, I would say, open agentic multi-agent systems is where we're going.
Thank you, Satya. By the way, that's what makes me excited as well: that deep domain physics expertise we have, and how we can enable our customers and engineers to deal with the future of these complex intelligent systems they're designing. I don't wanna take too much of your time. I appreciate our partnership and the trust, and I look forward to more exciting things we can do together. Thank you, Satya.
Thank you so much, Sassine. Thanks for the partnership.
Take care.
Thank you. Okay, hang in there, maybe 10 more minutes. I can confirm Jensen is here. Last time he scared the bejesus out of me; I was on stage and not sure if he was there or not, so this time he came early. All right. Just to sum it up, from L1 to L5, we're in active deployment, and we're in early pathfinding for L5: great capabilities focused primarily on the efficiency, the productivity, the better output and better results of the workflows we deliver to our customers. Just to summarize the key themes and investment areas: the first is the need and necessity for co-design, both vertical and horizontal. Horizontal is across multiple engineering domains; that's where multiphysics comes in.
The second is the necessity of creating a digital twin representation of the physical product, for all the obvious reasons: cost, development cycle, insights on the product, updates to the product. And the third is all the exciting things we can do with agentic AI. Without further ado, I would like to welcome Jensen on stage. I think he's messing with me. I know he's there. Here he is.
Hey. Welcome to GTC. Oh, hang on. What?
It's okay.
Is this my show or yours?
I'm okay if you wanna take over. Welcome, Jensen, welcome.
Well, you know what? I didn't wanna be late. I've been here since last night. No, everybody wants to know. Since the last time, so we were talking, and I told the audience the big secret about our companies. The two of us have been friends for a very long time, and apparently we didn't have any money in the beginning, so I gave Synopsys shares. I gave Synopsys 250,000 shares when NVIDIA was worth $10 million. Are you guys following me? Can you do this math? I think that's the...
Do you see? Aart is right there.
I have it right here.
Okay. All right. Everybody's asking, and Aart says he can't find the shares. There's no question I gave them to you. I think somebody ought to look into this. I have some concerns. I do believe they're probably sitting in your desk somewhere, and at the moment they would be worth, I guess, half a trillion dollars. Congratulations, somewhere in your office is half a trillion in spare change. It'll come in handy sometime, I'm sure.
Half a trillion dollars.
It's nothing.
We should get there.
Oh, yeah. Yeah. One of us should get there.
No, no. I don't wanna go there. I can-
It reminds me, there was this one joke. I think it was Ryan Gosling at the Oscars. He goes, "Hey, Sassine, between the two of us, our companies are worth, like, $4.5 trillion."
Do you want me to leave, and you wanna finish the...?
No, no.
Yeah?
No. The joke went, "Between the two of us, we have two Oscars," but Ryan Gosling had none. That was the joke.
You had time to watch the Oscars?
Yeah, yeah. All four hours of it.
Yeah.
Anyways.
Anyways.
Welcome to GTC.
Yeah.
I'm so happy to be here. First of all, because the audience is designers, and I love all of you, and I know all of you are using Synopsys tools to build competitive products and things like that. Still, I love you.
Okay. We talked about co-design.
Yeah.
I know you often refer to it as extreme co-design.
Yeah, yeah.
NVIDIA-
As you know, NVIDIA, since the very beginning, was a software and architecture and algorithm company. It's the reason why we could invent modern computer graphics, and we reinvented modern computer graphics two or three times. The first time, taking the entire pipeline, what was a programmable architecture, a processor, and putting it completely into a pipelined architecture. The second time, introducing programmability back to it with the invention of programmable shaders. The third time, inventing CUDA. Then the next time, inventing neural rendering, which allows us to fuse traditional computer graphics, the programmable shaders we invented, with neural rendering, with artificial intelligence. The combination of that is incredible, and I'm gonna show some new stuff in a couple days.
But, uh, but-
That is only possible if you're co-designing, inventing algorithms, inventing architectures at the same time.
Yes. I referred to that earlier as vertical co-design.
Yeah.
There's the concept we introduced, which is the horizontal co-design.
That's right.
where you need to go from electronics to electrical to mechanical to fluid, etc.
Right.
As you're envisioning the end product as a robot or a drone or a car. Otherwise, you end up with a lot of over-designing.
Yeah
In order to bring it all together.
Now look at the complexity of the stuff that we build, you know, with all the Synopsys tools that we use. We went from stack co-design to systems co-design, right? From chips all the way out to the system, and now multi-system, because in order for us to do distributed computing at the scale that we do, the computing fabric, which includes the CPU, the GPU, the scale-up switch, the scale-out switch, and the network processors, is all part of the software stack. We have to design the whole thing, refactoring and redesigning our algorithms and the architecture of the whole thing all at the same time.
We work from the top down, bottom up, inside out, outside in, all at the same time. That's extreme co-design. The next step, of course, is the system: these computers are the size of these buildings. If you will, the building becomes part of the system, and it's a gigawatt data center. If you're, you know, 10% wasteful, that's an enormous amount of money. It's $1 billion a year just in power.
Yep.
$100 million here and there. It adds up. I think now the co-design includes the AI infrastructure.
Yes. Now, as you envision the world, you still need a lot of that data center, the training, etc., as you move into physical AI.
Mm-hmm
... where these systems are operating in an uncontrolled environment. In our recent partnership announcement, we talked about two layers of that partnership, or actually three if you include the middle layer, the GPU acceleration, because that co-design was not possible before without acceleration; otherwise it takes too long, and you need a different architecture, different modeling, different virtualization, etc. The top layer was the connection to Omniverse: how to have a visualization of that future product. At the end, whatever you simulate has to be accurate at the physics level, so sim-to-real accuracy is very important.
Yeah. Just like when we're doing design and simulation, we create test benches.
Yes.
The test bench for a digital simulation is not trivial, but it's relatively easy physically, technologically. However, if you wanna create a test bench for physical AI-
Mm-hmm
now the test bench has to be representative of the physical world, has to obey the laws of physics.
Mm-hmm.
That's extremely hard. Omniverse was created so that, number one, it represents physics. It represents multimodal physics, and it has to operate with simulations, software in the loop, or with computers, hardware in the loop, and it has to allow us to intermix principled algorithms as well as AI algorithms.
Yes
so that robots can learn how to be robots inside this test bench, if you will, this Omniverse test bench. It's designed so that multiple users, multiple agents can be inside at the same time, right? Don't forget, if you have 10 robots and they're all running software in the loop, you essentially have 10 agents operating simultaneously in this Omniverse. Well, we could scale that up to 1,000, to 1,000,000. You could have all different people, you know, people interacting with robots interacting with cars. Omniverse is one of the most complex software systems the world's ever made, and it took us almost a decade to get here, so I'm really delighted that we're working together on that.
Yeah. No, no, thank you. The part that's very exciting to me is as we envision the future of product development, especially these intelligent systems, they're operating with the context that you need to take into account when you're developing and designing that product.
Mm-hmm.
You cannot wait until you design it, send it to the real world,
Right
... and do all that work. That agentic reasoning needs to be training and commanding that physical system at the same time.
Yeah.
Actually, earlier, I'm not sure if you were here or not, I talked about announcing the electronics digital twin platform, where you need the ecosystem to bring in the software in the loop and the hardware in the loop in order to drive that end-system validation.
These systems in the future include so many different computers from so many different vendors, and almost everything is software defined.
Exactly.
And so-
Exactly
Integrating these large, complex systems into a digital twin is an endeavor of magnificent scale.
Exactly.
We created Omniverse to allow for the accommodation of all that. You mentioned our partnership. This is really something that we've been working on for a long time. You know, one of the things that's really great is that we've known for a long time, you know, this is the 20th year of CUDA. We've known for 20 years that CUDA was going to revolutionize the way algorithms are run. Fundamentally, Synopsys is an algorithm company. We've always believed that CUDA and NVIDIA GPUs could quite significantly accelerate the algorithms and the tools, the solvers, that Synopsys creates. However, it's taken a long time for CUDA's installed base and for its penetration into computing to essentially be everywhere.
The thing that's really exciting is that finally, through our conversations, we realized that GPU acceleration is now a commodity, and Synopsys could now take advantage of CUDA wherever Synopsys wants to be. If Synopsys wants to be in a car company, or Synopsys wants to be in the Azure cloud, or Synopsys wants to be on-prem inside the factory, NVIDIA's already there. That tipping point, that phase shift, realizing that CUDA is now pervasive, really was the pedestal, if you will, the foundation for us saying Synopsys and NVIDIA ought to partner deeply and just accelerate everything Synopsys. The beautiful thing about accelerating everything Synopsys is all of a sudden your customers, your engineers, could either do work 10 times faster, or you could scale up simulation 1,000 times larger.
Exactly.
Both are simultaneously possible. I hope none of my competitors use it. However, they should use the old stuff, CPUs. That was a joke. I love CPUs. Anyways,
You know how you love all your customers?
I do.
I love all our customers.
I know. I love them too. They're all gonna be customers of mine.
Yeah.
Okay.
Okay
Anyways, the next layer though, the next layer is the agentic layer. You know, once we have everything on the CUDA architecture, of course, you could accelerate principled solvers on the one hand, and you could now also build AI agents on the other hand. The one thing that I wanna say, and this is the thing that almost every single analyst gets wrong, everybody that I've heard get wrong, is that, remember, the limitation of NVIDIA is not anything except for the number of engineers we have.
Mm-hmm.
That's the reason why we're constantly hiring more engineers. We went from one chip to seven chips to build one generation, and now we're building seven chips per year, right? These systems are enormous. The complexity is incredible. This is the thing that I'm most excited about for you, and I'm delighted to partner deeply with you in this area. The number of virtual engineers that will be using Synopsys tools is gonna increase by several orders of magnitude. Every single one of our engineers will have a whole bunch of Synopsys agents that are specialized in different parts of the design phase working with us. Isn't that right? Each one of those agents is gonna be spinning off a whole bunch of sub-agents.
Mm-hmm.
The number of Synopsys tool users are gonna go through the roof.
Mm-hmm.
You know? If you don't have a site license, you better get a site license soon. Because I gotta tell ya.
We-
The tool use is gonna skyrocket exponentially. In fact,
We don't sell that stuff anymore.
You...
Jensen
I'm here to sell Synopsys stock.
Oh, thank you for that spot. Yes. This year is our 40th anniversary.
No kidding. Wow.
Yes. When I kicked off this conference, I said, "This is year one of the new Synopsys as the leader in engineering solutions from silicon to systems," as we brought in Ansys as part of the portfolio.
Yeah, that was a great-
How are we?
Great acquisition.
Delivering to the co-design of multiphysics.
Mm-hmm
both at the silicon level and the system level, the necessity to build a digital representation of your product in order to do all the things we talked about, the whole agentic layer to enable agent engineers with human engineers. Any final comments? Again, thank you so much for the partnership.
Well, I think that the Synopsys that I grew up with, designing chips, was a revolutionary tool. Even back then, the idea that a piece of software could design chips as well as we could and optimize the pipelines as well as we could was unbelievable. Art sitting in front of me, he knows this. In the early days, no chip designer thought that optimization software was gonna do a better job optimizing than they could. They missed one gigantic idea. The one gigantic idea is that in order for Synopsys and tools and now agents to work, the foundational layer of information and data and methodology gets built up over time, and that flywheel never leaves the company.
That flywheel, which future AI Synopsys agents are gonna be designing with and running on, is gonna keep getting better and better over time. Therefore, your company's design expertise, your design scale, and your company's value could grow exponentially with time instead of resetting every time you start a new design, the way it does when humans do it. That's number one. The second thing is that agents, just like the Synopsys optimization tools and the solvers that you guys create, can work at such an enormous scale that it can't possibly fit into anybody's head, and therefore the optimization could be, by definition, co-designed.
Yes, that's right.
That AI agents would be able to optimize across the different layers of abstraction, from electronics to mechanical, okay.
Exactly
To thermal, to you name it, right? It'll be able to optimize across multiple domains at the same time in a way that humans can't. I think that we are at the renaissance of a new way of doing design, and I got the benefit, you know, of growing up with Synopsys in a lot of ways. I was in the first generation of engineers that were able to use tools to design chips rather than just using schematics and doing it by hand.
That's right.
The last generation of engineers before me never believed it. The generation of engineers after me can't imagine living without it. We're now in a new phase. This new phase is exactly as you said. It's really extreme co-design. It's really about agentic AI. Just the final thing I would say is, you know, go get those licenses for Synopsys tools nailed down, because your agents are gonna use a lot of tools. All right.
Jensen, thank you so much. I appreciate it. Thank you.
Okay, guys. Thank you.
Mike, your job is easy right now. Mike Elia will be standing here taking the orders. No, just kidding. Thank you so much for joining me. I know we went over time, but I hope you enjoyed the conference. I want to close with our mission: Empower innovators to drive human advancements. We have a couple of days with you. Thank you so much, and I look forward to the interactions we're gonna have. Thank you.
Thank you for joining us for the Synopsys Converge keynote. SNUG attendees, please remain in Mission City. Allow other attendees to leave first, then pick up refreshments outside Mission City and return for the SNUG keynote in this room. If you are attending the Executive Forum, please proceed to the Grand Ballroom. Enjoy refreshments in the foyer outside the Grand Ballroom. Simulation World attendees, please head to the Santa Clara Ballroom in the Hyatt.
Good morning, and a very warm welcome to all of you to SNUG 2026. Right up front, I wanna start with a big, big thank you to all our users, our partners, our customers for including us in your innovation journey and for working closely with us to deliver many of the foundational technologies that are gonna pave the way for innovation over the next decade. A big thank you to all of you from everyone at Synopsys. Today, I'm gonna be talking about re-engineering the future of silicon design. How do we turbocharge chip design? I was at a few conferences this year, starting with CES at the beginning of the year, the Supercomputing conference, and what was evident in all these experiences was we are absolutely in the age of intelligent systems.
Everything from humanoid robots, autonomous vehicles, all the way to robot lawnmowers and robot pool cleaners, and even an AI data center is an example of an intelligent system. What makes something an intelligent system? It needs to have four key pieces. First, the system hardware. Second, the silicon hardware, all the chips that go into it. Third, the software: almost all these intelligent systems today are software-defined and run tens of millions of lines of software. Fourth, they all have one or more AI models at the core of them. That's what defines an intelligent system. With the acquisition of Ansys, Synopsys is in a very unique position to cater to all the engineering needs of those building intelligent systems.
On the system hardware side, we have the industry's most trusted simulation and analysis portfolio, with some of the most widely used products like Ansys Mechanical, HFSS, and Fluent for computational fluid dynamics. On the silicon hardware side, the broadest and deepest EDA software portfolio, all the way from architecture to manufacturing, the most complete IP portfolio across foundation and interface IP, and the highest-performance hardware platforms. Moving to the software-defined side, all the capabilities to virtualize processors and operating systems to enable application development, firmware development, and OS development. Then, of course, in the building of AI models, capabilities like Omniverse that we discussed in the earlier keynote, as well as simulations to create synthetic data that goes into the creation of these new models that will be at the core of these intelligent systems.
Synopsys is uniquely positioned, and we look forward to working with all the intelligent system builders everywhere across all these aspects of building an intelligent system. What I'm gonna focus on today is silicon hardware, and this is truly an exciting time when it comes to silicon hardware. If you look at the announcements over the last few months, we saw three large AI superchips announced by leaders in the industry. NVIDIA announced Vera Rubin, Microsoft announced Maia 200, AMD announced MI450. The reason these are very unique is that typically you're constantly trading things off, right?
There's a performance axis where you're trying to deliver generation over generation performance. There's a quality axis where you're trying to deliver silicon that is first time right, and then there is a schedule or a velocity axis where you're trying to do this in a very short period of time. What is very unique about this age of AI and this build-out of AI is there is absolutely no way to compromise on any of these axes. These engineering teams and many more that are building these AI super chips are essentially pushing the limits without compromise on all of these three axes. Let's take a look at some of the market trends that are driving the need for this. On the velocity side, the clock of our semiconductor industry has changed. We are now on a one-year clock.
From NVIDIA to AMD to Tesla, everybody is talking about this one-year clock. Why are they talking about the one-year clock? Because of the requirements of AI compute. If you look at the generation-over-generation performance that needs to be delivered, we're trying to keep up with AI computation requirements that grow 4.4x per year. This is when Moore's Law is giving you 15%-30% per year in terms of gains. How do you close the gap between a node transition's 15%-30% and 4.4x per year? This is where all the exciting innovation across architectures and multi-die packaging and a lot more is happening in order to help companies close that gap.
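To make that gap concrete, here is a rough back-of-the-envelope sketch in Python using only the figures just quoted; the growth rates are illustrative assumptions, not forecasts.

```python
# Back-of-the-envelope sketch of the compute gap described above.
# Assumptions from the talk: AI compute demand grows ~4.4x per year, while
# a node transition alone contributes roughly 15%-30% per year.

demand_growth = 4.4    # required compute growth per year
process_growth = 1.30  # optimistic per-year gain from process alone

for years in (1, 2, 3):
    demand = demand_growth ** years
    process = process_growth ** years
    gap = demand / process  # multiplier that architecture, multi-die
                            # packaging, and software must supply
    print(f"{years} yr: demand {demand:6.1f}x, process {process:4.2f}x, "
          f"gap {gap:6.1f}x")

# 1 yr: demand ~4.4x,  process ~1.3x -> ~3.4x gap
# 2 yr: demand ~19.4x, process ~1.7x -> ~11.5x gap
# 3 yr: demand ~85.2x, process ~2.2x -> ~38.8x gap
```

Even one year out, process alone covers only a fraction of the requirement, which is exactly why the innovation has to come from everywhere else at once.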
Essentially, if you look at some of the largest models that have been released, like Grok 4 at 7 trillion parameters, or Claude Opus 4.6 from Anthropic, which has several trillion parameters, that computation curve is showing no signs of letup. This is why, if your compute requirement is growing 4.4x per year and you don't deliver a chip per year, there is no way you're going to keep up with that requirement. The other equally important graph is the one that tracks dollars per million tokens, or joules per million tokens, and that curve needs to come down, and it is coming down in a hurry, because this is how AI and AI applications get broadly deployed.
Both of these curves are extremely important to all of you and everybody in this AI superchip arena, because this is exactly what you're trying to optimize for. Now, the third axis here, of course, is quality. When we talk about AI, we are really talking about a five-layer stack that starts from energy, goes up through the chips, the infrastructure, through all the middleware, up to the models, and then the applications themselves. How do you deliver first-time-right silicon in this type of setup? You need to do extensive verification and validation. Today, this is one of the biggest challenges the industry is facing. We estimate you need a quadrillion, that is 10 to the power 15, verification or validation cycles in order to be able to claim to deliver first-time-right silicon.
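To get a feel for what 10 to the power 15 cycles means, here is an illustrative calculation; the per-platform throughput figures are rough order-of-magnitude assumptions made for this sketch, not measured Synopsys numbers.

```python
# Illustrative only: why ~10^15 verification cycles forces hardware-assisted
# verification. Throughputs are rough orders of magnitude, assumed for this
# sketch rather than quoted from any product datasheet.

CYCLES_NEEDED = 10 ** 15
SECONDS_PER_YEAR = 3600 * 24 * 365

throughputs_hz = {
    "software RTL simulation (~1 kHz)": 1e3,
    "emulation (~1 MHz)": 1e6,
    "FPGA prototyping (~10 MHz)": 1e7,
}

for name, hz in throughputs_hz.items():
    years = CYCLES_NEEDED / hz / SECONDS_PER_YEAR
    print(f"{name}: about {years:,.0f} machine-years")

# ~31,700 machine-years at simulation speed, ~32 at emulation speed, ~3 at
# prototyping speed -- hence large parallel farms and careful decisions
# about which cycles run on which platform.
```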
How do you build the verification and validation approach and methodology to get coverage across this five-layer stack? That is another big challenge these teams are all facing. Essentially, there is no letup. You have to deliver these chips on a one-year cycle. You have to deliver that generation-over-generation improvement, and they have to be first time right and scale to hundreds of thousands of nodes, maybe millions of nodes, in order for a supercluster to be effective. The key to this is this term we call extreme co-design, where you have execution across multiple swim lanes. All of them have to be going at an extremely fast pace. At the same time, you're co-designing and optimizing across these different swim lanes.
As Sassine talked about earlier, there's vertical co-design as well as horizontal co-design, and all of this is happening in parallel. Now, Synopsys is making significant investments in order to enable your goals with respect to velocity, performance, and quality. Today's talk, what I'm gonna do is give you an overview of all these initiatives and investments that we've made in order to meet your objectives and your requirements. Let's start with velocity. Of course, one of the big, big opportunities in terms of velocity is the application of AI in the whole EDA design flow. Now, Synopsys was a pioneer in the introduction of AI to EDA, and our journey started when we introduced reinforcement learning to RTL-to-GDS implementation back in 2020 with the DSO.ai solution. Then very quickly, we broadened that to extend reinforcement learning for verification, for test, for analog.
When the ChatGPT moment happened in late 2022, we rapidly pivoted to apply LLMs to EDA, and we invested a significant amount of resources to build out these capabilities, where AI evolved from being an optimizer to an assistant. Here we are talking about assistive capabilities, where AI was a copilot helping an engineer get access to expert knowledge and expert information and write scripts, as well as creative capabilities, where AI was helping to generate a lot of the collateral for the design or the verification.
Now, the next big transition, after AI as an optimizer and AI as an assistant, is really AI as a colleague, and this is the transition to agentic AI, where we now have the ability to set up tasks and have agents go and execute those tasks with very little human intervention. This is gonna be a big unlock in terms of productivity because, again, remember those three axes: how do you get that big velocity improvement without compromising performance or quality? That's where this next transition in AI is going to come from. Let's take a look at the progress we've made across each of these, right? Let's start with AI as an optimizer.
If you look at the entire chip design flow, starting from spec and architecture all the way through design and verification, implementation, sign-off, all the way to manufacturing, we have made significant investments to apply reinforcement learning across this entire flow. All of you have worked with us very closely to incorporate many of these technologies into your current flow. If you look at the work we have done on VSO.ai, where we are using reinforcement learning to do regression optimization and to improve coverage, customers have reported a 2x reduction in the number of tests needed to achieve the same level of coverage, along with an associated compute reduction.
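As a toy illustration of the idea, and emphatically not the actual VSO.ai algorithm, regression optimization can be framed as a learning problem in which an agent learns which tests tend to yield new coverage, so the same coverage is reached with fewer runs. A minimal epsilon-greedy sketch:

```python
# Toy sketch: test selection as a multi-armed bandit. Tests whose historical
# reward (new coverage per run) is high get picked more often.
import random
from collections import defaultdict

def select_tests(tests, coverage_gain, budget, epsilon=0.1):
    """coverage_gain(test) -> observed new-coverage reward for one run."""
    value = defaultdict(float)  # running mean reward per test
    pulls = defaultdict(int)
    schedule = []
    for _ in range(budget):
        if random.random() < epsilon or not schedule:
            test = random.choice(tests)                # explore
        else:
            test = max(tests, key=lambda t: value[t])  # exploit
        reward = coverage_gain(test)
        pulls[test] += 1
        value[test] += (reward - value[test]) / pulls[test]
        schedule.append(test)
    return schedule
```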
We've also seen customers reporting improvements in coverage, because AI is able to hit spots in the search space of your design under test that you are not able to hit with just your existing test bench. When we move to RTL-to-GDS, we have introduced several generations of DSO.ai, with the most recent version natively integrated into Fusion Compiler with a much smaller compute footprint than before, and it is now being widely used by many of our users. Here we are able to deliver significant performance improvement as well as power improvement using DSO.
We've also taken that reinforcement learning and integrated it into our 3DIC platform, and we have customers who are now using AI to optimize the interposer routing between compute and compute or compute and memory, and we are seeing some very nice improvements in terms of signal integrity parameters as a result of doing that.
Moving to analog, which is a space that has not seen much automation, and which is very often really the long pole in many projects, we are really excited about the results we are seeing with our analog AI capability, ASO.ai. We are using AI not just to optimize analog circuits with respect to their key performance indicators, but also to migrate analog circuits from one node to another, which accelerates the whole challenge of moving IPs between nodes, especially in this age of AI, where things have to be done in a one-year cycle. Moving on to test. The cost of test is something that's a big concern to all of you.
The time on the tester and the test volume are big concerns, especially when you are applying test in in-field situations, like in a data center, for example. Here again, we've applied our reinforcement learning technology, TSO.ai, to reduce test pattern count and test volume by 20%-45%. All the customers whose names you see there, we thank them for their close partnership and collaboration. Many of them presented at SNUG last year, and many of them are presenting at SNUG this year. I would encourage everyone to take the time to check some of these presentations out to get a sense of how reinforcement learning is becoming completely pervasive across the Synopsys flow and delivering really good results over what the baseline flows used to be.
Now, one of the big things I'm really excited about is analog design, because this is an area that has not seen much automation or much application of AI, and it's really one of the long poles for design execution. I'm super excited to talk about our latest capabilities in the area of layout synthesis. Typically, in an analog design flow, you've got analog designers designing a schematic, then simulating that schematic, then handing it off to a layout team, which is often busy with the previous project. They wait for the layout team to complete its work, extract the parasitics, and give it back to the design team so that they can further improve the design.
Now, this is a long, laborious, iterative process. With our new AI-based layout synthesis capability, we can take a schematic, identify all the key structures from that schematic based on all the training the AI has seen, automatically place those structures in the analog layout, and automatically route the connections, thereby creating an analog layout in a much, much more automated fashion than ever before. That then allows us to extract the parasitics, use the data to improve the schematic further, and also use AI to do the whole optimization of the schematic and layout.
This is a big breakthrough technology, analog layout synthesis, and it promises to really help the analog design community further accelerate their design cycles, especially at a time when everybody is being pressured to deliver that chip within a year. Let's now move to AI as an assistant. All the work we did with LLMs to really understand how they work, how to scaffold their outputs, and how to prompt them correctly to get the answers we want has translated into a suite of assistants around the Synopsys products. We built knowledge assistants for tens of Synopsys products.
Essentially, pretty much across the whole portfolio, you now have a knowledge assistant available, which is really capturing decades of Synopsys expertise and putting it in the hands of an engineer sitting at a desktop, making them a lot more productive in getting answers to very complex questions as well as figuring out how to do things more efficiently. We have workflow assistants, which help engineers write scripts and develop flows and methodologies much, much faster. We have run assistants, which help engineers look at the log files from all the EDA execution and determine what areas to focus on, how to separate the signal from the noise.
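For intuition only: a common way to build this kind of knowledge assistant is retrieval-augmented generation. In the sketch below, embed, vector_index, and llm_complete are hypothetical stand-ins for whatever embedding model, vector store, and LLM endpoint a deployment actually uses; this is not a description of Synopsys's internal implementation.

```python
# Minimal retrieval-augmented generation (RAG) pattern for a tool-knowledge
# assistant. embed(), vector_index, and llm_complete() are hypothetical
# stand-ins, named here only for illustration.

def answer(question: str, vector_index, k: int = 5) -> str:
    # 1. Retrieve the k most relevant passages from curated docs,
    #    methodology guides, and support knowledge.
    passages = vector_index.search(embed(question), top_k=k)
    context = "\n\n".join(p.text for p in passages)
    # 2. Scaffold the prompt so the model answers only from that context.
    prompt = (
        "Answer the tool-usage question using ONLY the context below, "
        "and cite the document each fact came from.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```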
Now, all of these are showing significant gains in terms of productivity, but the big question on AI really is, does any of this scale? I'm quite excited by the progress we've made on the scaling of our AI solutions over the past year. Over 15,000 users, both internal and external, are now using the Synopsys assistive capabilities, and millions of queries have been submitted, with a very high satisfaction rate with respect to the answers we are generating. Another part of generative AI is to develop the collateral for EDA. If you are a formal verification engineer, you know one of the most challenging tasks is to take the spec and generate all your SystemVerilog assertions for formal.
Well, we now have Formal Advisor, closely tied to the Synopsys VC Formal tools, which enables you to do that using an AI model in a much, much faster way. We have Lint Advisor. As a verification engineer or an RTL developer, you're running lint constantly to make your RTL lint-clean. It's a high-toil step, and here we are using AI to run lint automatically, look at the lint errors, classify them, cluster them, and even make the RTL changes needed to fix them. And of course Code Advisor, which is a boon for any RTL developer: you give a natural language prompt and have the code generated, run through lint, run through VCS, thereby accelerating the RTL development cycle. Here again, the scaling of the solution matters.
It's not about making it work for one or two customers. Today, millions of lines have been linted using Synopsys Lint Advisor, and over 200,000 assertions have been generated using Synopsys Formal Advisor. The next big evolution is AI as a colleague, the transition to agentic EDA. As Sassine talked about earlier, and as we laid out in his keynote at SNUG last year, this is the roadmap to go from copilot to autopilot, with progressively higher levels of autonomy from L1 through L5.
We talked about how an assistant first becomes a task agent, then multiple task agents get orchestrated, still with a human in the loop, then we move to higher levels of dynamic orchestration where the human is consulted but the execution is more autonomous, moving all the way to an agent engineer on the right, where the entire execution is autonomous. You can assign a specification to an agent engineer and have it go through a design and verification flow, or assign a block with a floor plan to an implementation agent engineer and have it take the entire block to closure. We've made a lot of progress on this over the past year, and we are really very, very proud of the Synopsys R&D and engineering teams.
At this point, we have over 15 L1-L4 customer engagements, and we have delivered these agents to many of our customers. These agents range from agents that run lint checks and fix your lint violations autonomously, to agents that read your spec and generate SDC, to agents that plug into your verification debug flow, analyze the waveform, and determine what anomalies should be double-clicked further. All these task-level agents have been delivered, and they are integrating into customers' AI flows and adding a lot of value. Over 15 agent engagements delivered so far, with multiple customer success stories.
To give you a sense of how these agents work, I wanna show you two demos. The first one is a lint agent, which plugs into a lint flow and fixes lint errors autonomously, without a human in the loop.
First, we prompt the agent to run and fix lint errors. The agent executes checking and fixing steps iteratively. Here, the agent has identified the violations. The agent has fixed five of the six violations in the second iteration. In three iterations, the agent has fixed all lint errors and delivers clean RTL.
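The check-and-fix loop the demo narrates can be sketched in a few lines; run_lint, propose_fix, and apply_patch below are hypothetical stand-ins for the real lint invocation and the LLM-backed fix generation, so this is the shape of the loop rather than the shipping agent.

```python
# Sketch of an autonomous lint agent: iterate check -> fix until clean.
# run_lint(), propose_fix(), and apply_patch() are hypothetical stand-ins.

def lint_agent(rtl: str, max_iterations: int = 10) -> str:
    for iteration in range(1, max_iterations + 1):
        violations = run_lint(rtl)           # invoke the lint tool
        if not violations:
            print(f"clean after {iteration - 1} fix iteration(s)")
            return rtl
        for v in violations:
            patch = propose_fix(rtl, v)      # agent proposes an RTL edit
            rtl = apply_patch(rtl, patch)
    raise RuntimeError("lint not clean within the iteration budget")
```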
If you're an RTL design or verification engineer, I hope that got you excited, because this is one of the high-toil steps in your workflow. To be able to assign that task to an agent, have the agent execute it autonomously, and then check the agent's work and say, you know, "Good job. Let's move to the next step in the flow," saves you work and adds one more layer of the productivity gain you're looking for. Now, if you're a backend engineer, I know that fixing congestion issues and fixing late-stage DRCs is where a lot of the toil is spent. We built an agent for that, a congestion and DRC relief agent. Let's take a look at how this agent works.
Here, we find thousands of congestion violations. The agent first analyzes congestion map and report data. The agent then analyzes internal tool data to root cause these violations. The agent creates a plan of attack. The agent finds the root causes of violations and orchestrates the steps for resolution. Congestion issues are now resolved, and the routing layout is clean. Now, the agent runs foundry sign-off rule deck checks and finds several additional violations to address. The agent comes up with a plan for resolution and implements the fixes and delivers a final layout free of DRC violations and ready for tape-out.
We all know how painful that is. You know, help is on the way, and I hope we can engage with many of you to evaluate and really deploy these agents in your workflows in order to accelerate your progress. Let's now move to what an L4 agent looks like, right? Basically, the best way to think about this is that you're progressively moving to tasks that need higher and higher levels of autonomy and that are becoming coarser and coarser, right? Think about the task of, given a spec, how do you generate functionally correct RTL that is ready for synthesis? This is 30%-40% of the overall design time, and you know it's not just a one-shot read-the-spec, write-the-RTL. It's a very iterative process.
It's a hierarchical process where you break down the spec into sub-pieces, and you write modules for the different sub-pieces. You write tests for the modules. You run a simulation. You fix the errors. You run lint. You fix the errors. Eventually, you put it all together. You run your synthesizability checks. All of this is part of the workflow. A level-four agent is really about executing a hierarchy of agents. What is very important is that it's not just a static execution of that hierarchy of agents, it's a dynamic execution, and it depends on a lot of additional data.
If you look at the picture on the left, we are depending on underlying information like a knowledge graph that captures all the expert knowledge in the area: how to take a spec and generate high-quality RTL, how to verify a module, and so on. It also relies on a skills database. Hey, how do I run lint and fix errors? How do I run VCS, our simulation, look at the waveforms, and find out where the opportunities to improve are? That skills database is also needed. Of course, these tasks are what we call long-running tasks, because it's really multiple agents getting spawned off. They are executing. Their work is being checked. More agents are being spawned off.
Their work is being checked, with humans being consulted along the way: "I'm about to make this decision. Are you okay with it? I'm about to make that decision. Are you okay with it? And if not, how would you like me to do it differently?" It's a very complex orchestration of agents, with the agents consulting humans along the way, executing a much coarser task than before. The same concept also applies to implementation, where I can provide a block, a floor plan, and constraints and ask the agent to go through the entire RTL-to-GDS flow and fix the DRC violations.
Again, it's a coarse task, with lots of sub-tasks and lots of work along the way to check the outputs and improve them, and so on. We are super excited to announce the first L4-class agents in the industry. We have multiple early engagements underway in these areas. Just to give you a feel for why we are so excited about it, I wanted to share with you a demonstration of our Spec-to-RTL L4 agent, to give you a sense of how it's a very complex orchestration of the full design and verification workflow.
We begin by prompting the agent to generate functionally verified and lint-clean RTL from specification. The Spec-to-RTL agent reads the specification and generates RTL. The agent generates and runs tests and identifies two failed tests. The agent debugs and fixes the failure. Next, the agent executes lint checking and fixing steps iteratively. The agent verifies RTL synthesizability with Fusion Compiler using MCP. Finally, the agent delivers functionally verified, lint-clean, synthesizable RTL.
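At a very high level, the orchestration narrated in that demo might look like the sketch below. Every helper is a hypothetical stand-in for a sub-agent or tool call (for example, a synthesizability check via Fusion Compiler over MCP), and the real flow is dynamic, hierarchical, and consults humans at key decision points rather than running straight through.

```python
# Hedged sketch of an L4 spec-to-RTL orchestration. generate_rtl(),
# run_tests(), debug_fix(), and check_synthesizable() are hypothetical
# stand-ins for task agents and tool calls; lint_agent() is the loop
# sketched earlier.

def spec_to_rtl(spec: str) -> str:
    rtl = generate_rtl(spec)              # sub-agent: spec -> draft RTL
    while True:
        failures = run_tests(rtl, spec)   # sub-agent: write and run tests
        if not failures:
            break
        rtl = debug_fix(rtl, failures)    # sub-agent: root-cause and fix
    rtl = lint_agent(rtl)                 # iterative lint check-and-fix
    if not check_synthesizable(rtl):      # tool call, e.g. via MCP
        raise RuntimeError("synthesizability check failed")
    return rtl  # functionally verified, lint-clean, synthesizable
```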
Think about the complexity, think about the scope of the task. Of course, a lot of this is possible because we have amazing tools sitting at the foundation, and the agents are interacting with those tools very closely in order to achieve these outcomes. Essentially, the progress on agentic EDA is extremely good, we are marching on our way to L4 and L5 agents, and we're looking forward to engaging with many of you on the entire spectrum of AI solutions that I presented today. Now, this is not the only way to get velocity, right? To get your designs done faster. Another big aspect of getting designs done faster is, can you just do the steps faster? Can you execute the tools faster?
Our engagement with NVIDIA, the partnership we announced in December, is a critical part of this, where we want to harness the power of accelerated compute to accelerate many of our applications. We've already made very good progress across areas like computational lithography, or OPC, circuit simulation with SPICE, TCAD, and atomistic simulation. In many of these areas, we have worked very closely with NVIDIA and are delivering significant runtime gains when comparing runs with and without accelerated compute. What is super exciting over the next 12 months is that many more Synopsys capabilities, both on the EDA side and the simulation and analysis side, are coming online: tools like RC extraction, physical verification, and the whole power integrity flow with RedHawk-SC Electrothermal.
All these things are coming online with GPU acceleration, and all this gives you an opportunity to further accelerate your design velocity. I wanna share with you a couple of examples of how this is meaningfully impacting customer designs. Look at something like the design of HBM or of very large analog IPs, which has to be done at a much faster pace. The SPICE simulation is extremely complex, because an HBM3 or HBM4 cube is already 1 billion devices with several billion parasitic elements, and the SPICE simulation of that can be extremely onerous and time-consuming if it's not accelerated in a significant way. With GPU acceleration, we are able to show dramatic speedups for SPICE simulation, as you can see from the charts here. Another area is computational lithography, right?
Pretty much in the end, all the GDS, and the masks created from the GDS, go through computational lithography simulations at the foundries, and that is a very time-consuming step. Here again, our partnership with NVIDIA has resulted in dramatic gains in the total turnaround time of that mask creation step, with all the optical proximity correction features introduced to the masks. You can see here that the runtime gains are actually growing year over year, because not only are we benefiting from the newest GPU architectures like Hopper and Blackwell, but we are also constantly working with NVIDIA, learning, improving the APIs, and improving our code.
In the case of OPC, we have gone from, you know, a 5x type of speedup to nearly 30x in just three years. This is the kind of trajectory we expect to see across many of the portfolio capabilities. Now, another way to accelerate velocity is not just accelerated compute; can we also rewrite our engines to make them go much, much faster? Here I'm super excited by the work done by our physical verification team on our IC Validator product, where we are now able to show significant throughput advantages of IC Validator over the current incumbent flows: 3x faster for PERC, which is programmable electrical rule checking, at advanced nodes; 2.5x faster for antenna checking at advanced nodes.
The reason this is important is that, generation over generation, the number of rules is increasing and the number of operations per rule is also increasing. If the algorithms don't scale, physical verification is going to become a very long choke point in your overall design velocity. We are really humbled by the confidence shown by the many, many customers who are presenting at SNUG this year and who presented last year. Over 270 tape-outs have now been completed at 3 nanometer and below with Synopsys IC Validator technology. Another area is static timing analysis. For over three decades, PrimeTime has been the bedrock of the industry.
Every chip that has been taped out goes through PrimeTime as that final check to make sure things will be okay from a timing perspective and a signal integrity perspective, and the correlation between the static timing analysis reports and the final silicon performance has really been the bedrock of the semiconductor industry. We are not resting on our laurels. We are constantly hard at work trying to stay ahead of your most complex requirements, and we're really excited about several new capabilities in PrimeTime to stay ahead of the requirements of AI superchips. We introduced distributed STA a few years ago, which is showing dramatic speedups in static timing analysis for radically larger dies.
We've also introduced technology for 3DIC STA that enables stacked-die STA to happen without a huge computational explosion while remaining functionally and silicon-correct. One of the big announcements this Converge is Multi-Physics Fusion, where we are bringing the Ansys multiphysics technology much closer and deeply integrating it into the Synopsys EDA technologies. PrimeTime is one of the great examples of that: we have taken engines from RedHawk IR drop analysis and from RedHawk-SC Electrothermal and stress analysis, and we have natively integrated those engines into PrimeTime to provide the industry's first multiphysics-aware sign-off platform. In addition, regardless of all the work we have done in PrimeTime, it's really your trust that matters the most, right?
In the end, we have multiple customers speaking at SNUG this year about the trusted EDA flow they depend on, companies like Intel and Arm that are doing some of the most complex designs in the industry today. We really want to thank the whole community for your strong support, and we are not resting on our laurels; we are continuously charging ahead to stay ahead of your requirements. Last but not least on velocity: the fastest way to do something is to not have to do it at all. This is where the Synopsys IP portfolio comes in, the industry's broadest and most trusted IP portfolio across foundation as well as interface IP. The Synopsys IP portfolio is considered and deployed by multiple AI superchip companies because of our leadership in interface IP.
Multiple generations of leadership in PCIe, along with the industry's first PCIe Gen 6 Gold system. The HBM4 test chip that Sassine showed earlier, with up to 12 gigabits per second of performance on the interface between the HBM and the compute dies. Silicon provenness at N2P, where we have over 30 customers engaging with us on N2P IP. This shows you that the Synopsys IP portfolio is really one of the most trusted in the industry, and it's something that can dramatically accelerate your velocity as well. That covers the whole velocity section. Let's now move to the performance section. Where does that generation-over-generation performance gain come from? We talked about it earlier in Sassine's keynote: Synopsys has been on a progression, right?
From 2000 to 2015 or 2017, we were in a phase of multiple separate tools, synthesis tools, place-and-route tools, extraction tools, sign-off tools, with value links between the tools. That was breaking as we moved to 16 nanometer and 7 nanometer, with more iterations, misprediction of RC effects, and misprediction of timing effects, and we introduced the new Fusion architecture, bringing all these things together with the advent of the Fusion platform and Fusion Compiler. Then, as we approached the Angstrom era, we further augmented that Fusion platform with additional capabilities like 3DIC and AI, and this is the platform many of you are using to design your most advanced chips.
When we now look at this AI superchip era, the whole discussion has shifted from how many transistors I can put in a single reticle-limited die to how many transistors I can put in a package, in order to meet that 4.4x-per-year compute growth we talked about earlier. The complexity stays at the die level, down to 2 nanometer and 14 angstrom and so on, but it has also multiplied at the package level. We need the integration of all the multiphysics technologies deeply with the Fusion platform: the ability to measure thermal, IR drop, stress, and warpage, all within the same platform, both at the 2D and 3D levels, in order to keep moving the industry forward.
Let me give you an example of how and why this is so important, right? We all know what the trends are. VDD is dropping; we are at 0.65, and we are moving to 0.55 or even 0.5 volts to get those power improvements. We are moving to the most advanced nodes, 14 angstrom and below. All this shows up in the form of variation and multiple design margins across the whole flow, which compromise PPA, plus thermal challenges, IR drop challenges, and so on. In the current flow, you may execute your design in Fusion, but then when you go to your sign-off IR drop tool, you see many IR violations, and you execute many iterations to fix those violations.
You run your thermal analysis tool and find thermal hotspots. The thermal hotspots impact your timing measurements and impact power, so you come back to the implementation flow, applying derates, trying to model that and improve the design, right? Very iterative, a lot of margining, and a loss of PPA. What we are announcing with Multi-Physics Fusion is the native integration of all these physics engines inside the Synopsys implementation and sign-off platforms. We're integrating the IR drop engines, the thermal engines, and the stress engines inside Fusion Compiler, inside PrimeTime, inside PrimeClosure. That type of native integration is incredibly hard, because you cannot compromise the accuracy, and yet you need a runtime that does not compromise your full-flow execution runtime. That's where the big innovations happened.
Really, the benefit of all this is far fewer iterations at the very end of the flow and no impact on PPA; in fact, you gain PPA because you're not margining so much in your design. One of the results from Multi-Physics Fusion is shown on the right, where the blue curve, or the area under that curve, was the previous IR drop profile, and the red or orange one is the new IR drop profile with Multi-Physics Fusion. This is an example of how, by integrating multiphysics into EDA in a native fashion, we have addressed a very high-value problem for the industry. Now, when you move to analog design, the situation is not very different.
When we are designing analog for AI, it's all about very, very high-speed PHYs and other types of circuits. The whole HBM complexity has also increased dramatically. When we move to those angstrom nodes, you're dealing with variation, and then you're dealing with reliability as well. Here again, the analog flow is very segmented. You've got your design and simulation platform, but all the multiphysics was very loosely connected to this platform. Here again, that led to more iterations and a loss of performance. By natively integrating the multiphysics into the analog platforms, we have again solved a very high-value problem, because we can cut down those iterations, improve analog productivity, and, most importantly, improve performance.
On the right, you have an example of how, within that Custom Compiler framework, you can now invoke inductor synthesis with the VeloceRF technology, and you can invoke circuit simulation with HFSS parameter extraction built in, simplifying a flow that previously hopped through five or six tools to do something fairly straightforward. This is the kind of advantage we are bringing to the analog area. Now, the big opportunity to get that big generation-over-generation performance boost is really in advanced packaging. Even 10-15 years ago, the challenge of packaging was essentially that you had a die, maybe the die got close to the reticle limit, and you had to create the package around the die, and the complexity was manageable, right?
Maybe thousands, perhaps low tens of thousands of connections, and not a whole lot of complexity in those connections. Today's state of the art is 3.5x to 5.5x reticle-size packages with a trillion transistors in them, and with a very quick roadmap moving to 9.5x reticle size, right? We've got a roadmap that stretches to an entire system on a wafer, and then an entire system on a panel, and look at the transistor counts exploding as you go to the right. Why is all this needed?
This is all needed because we are trying to meet that 4.4x-per-year trajectory we saw earlier in the compute trends, and we're trying to do it within a power envelope that is still manageable, acceptable, right? This is basically what's happening with advanced packaging. The whole space has been completely disrupted, and all the previous-generation technology that was designed for thousands or maybe tens of thousands of connections and maybe a few dies is completely falling apart. It's a highly segmented workflow where you prototype in one place, design in a different place, run your analysis in a third place, and take the results from the analysis back to try to change your design: very expensive, very iterative, and very disconnected.
Synopsys built a brand-new platform to solve this problem, and we call this platform 3DIC Compiler, right? Essentially, we built on the goodness of Fusion Compiler, which is a very high-capacity platform, and we built the 3DIC platform on top of that for a variety of reasons. First, we wanted to bring together the prototyping, the construction, and the sign-off all in a single cockpit, to avoid the highly disconnected flow that we have today in the advanced packaging community. Second, we wanted that 2D-to-3D co-optimization, which is happening in every one of your teams as you determine how many chiplets you have in your package. If I do this chiplet in, you know, 2 nanometer versus 3 nanometer, what are the trade-offs? We want that unified 2D/3D platform.
Third, the connectivity, right? Today, when you look at a complex multi-die package, you're dealing with everything from hybrid bonding, stacking logic over logic, to silicon bridges, silicon interposers, organic interposers, and substrate routing. The PMIC has now moved in, in the form of IVRs in the interposer or in the substrate. Co-packaged optics is getting integrated because you're trying to reduce the energy consumption of the interconnect. All this integration is happening, and you need a platform that can scale to all these connectivity requirements and also move beyond even the current 5.5x reticle-size packages to 9.5x and whatever the future holds, as we looked at earlier. Now, none of these decisions can be made in a vacuum, without multiphysics.
The native integration of multiphysics is essential. Of course, when you now have millions of connections that need to be made across all these different levels of hierarchy, you have a fantastic opportunity to apply AI to significantly improve the results. This brings us to the next big Multi-Physics Fusion announcement: we have natively integrated many of the Ansys multiphysics technologies into the Synopsys 3DIC platform to enable multiphysics to be left-shifted all the way into package prototyping, where you're doing a lot of what-if analysis; package construction, where you're actually making all the connections and anchoring the dies; and package sign-off, where you're finally checking the entire package and making sure things work well from a stress perspective, a warpage perspective, a thermal perspective, and an IR drop and power distribution perspective.
This type of native integration enables us to do that thermal analysis much, much earlier, instead of being surprised by it at the very end. It allows us to de-risk the tape-out of the package, instead of discovering at the very end that, oops, there is a warpage issue here, or oops, the thermal profile of the HBM is distorting the signal integrity of the connection between the compute and HBM dies. These types of things are now enabled with this platform, and it's something we are super excited about and very honored to work with many of you on.
One example of the automated routing and optimization comes from one of our customers. We went in to understand what value we could add, and they had used an almost completely manual approach to route their HBM4 interface between the HBM cube and the compute die. We offered the possibility of moving that routing to a completely automated approach with AI, with HFSS in the loop checking the signal integrity of the connections as they were being made. We were able to delight the customer with a significantly faster execution time, because it's automation versus manual, and significantly better KPIs: lower insertion loss, shorter wire length, better crosstalk, and the thing every SI engineer looks for, an eye diagram that was superior to what they had earlier.
This is the type of automation and AI we can bring to the advanced packaging area to accelerate schedules and also improve the quality of results. We are delighted to have many of our partners and collaborators presenting at Converge this year on this topic: Meta talking about how they're using 3DIC Compiler to make early architecture decisions on their AI superchips; Socionext talking about die stacking, N3 over N5, and how to do all the bump planning in that case; Intel talking about the benefits of the unified platform; Marvell talking about co-packaged optics; and Google talking about how they use 3DIC Compiler for the SI validation of their interposers. Now, while we talked about 3D, we cannot forget about 2D and all the progress that's been made there.
Here, our continuing investments in Fusion Compiler RTL-to-GDS are delivering significant gains, generation over generation of the product. We have introduced new technology to improve the area of multipliers, which is key for everybody building AI functionality. We have new capabilities that optimize glitch power, since we know that AI-related logic switches very, very frequently and glitches are a big problem. We are ready for the next set of nodes, like 14 angstrom and below. We have new technology that supports all the backside power requirements of next-generation processes, and we've been working on backside power for over a decade now.
The next big Multi-Physics Fusion announcement is that we've integrated the RedHawk IR drop engine into Fusion Compiler to fix IR and thermal violations during implementation, rather than deferring them to the very end of the flow. Let me finally conclude with the quality axis of the overall picture, right? The challenge here, as I mentioned earlier, is how do you solve that quadrillion-cycle verification and validation problem, 10^15 cycles? The reason it is that large is that you have a software stack running on top of hardware that has to deal with many different types of interfaces and the data moving between compute and memory. How do you completely solve that?
The Synopsys Verification Continuum platform helps you address this from the IP level to the subsystem level to the SoC level, the multi-SoC or multi-die level, all the way to the full cluster level with the software included. These are our simulation products like VCS, our debug products like Verdi, formal and static products like VC Formal, and of course our hardware platforms like HAPS and ZeBu. I'm gonna focus more on the hardware platforms, because this is really where the disruption is happening, right? The way to tame that quadrillion-cycle challenge is to deploy hardware as a powerful tool to capture as much as possible of the end-state scenario that you need the system to work in. Here we've got innovations at two layers.
We have innovation at the hardware layer: we've got ZeBu Server 5, server-class emulation, which is a 2x improvement in capacity and performance over the previous generations. We introduced the six-FPGA HAPS-200 last year and the twelve-FPGA version today, which is best-in-class prototyping and 2x over the previous generation, and partners like NVIDIA have spoken extensively about the benefits this brings. Today we are also talking about ZeBu-200, both six-FPGA and twelve-FPGA, which is the emulation software stack running on the same hardware as the prototyping, so the same hardware rack is dual-use.
All this provides the foundation, and our partner AMD, which provides the silicon and the software stack for the FPGA, is a critical component of delivering this solution, as well as a big user of all this technology, helping to drive our roadmaps. There is an equally big innovation happening at the software layer, because on top of this hardware we are delivering a significant model performance boost and significant debug performance improvements, as well as introducing something called Modular HAV, where we take advantage of the inherent multi-chiplet structure of the design to create a network of models as opposed to a single monolithic model, which helps us scale the emulation performance and represent very, very large designs.
These are all the software innovations being enabled. Really, to me, one of the most awe-inspiring use cases I have seen for our emulation platform is the AI chip companies and what they're doing with it. Essentially, their problem statement is that they have to run that LLM at the top of the stack and generate tokens, and they have all these middle layers of software between the model and the chip, and then the chip itself. What we are doing with many of these AI superchip companies is helping them run LLMs on a model of the chip. We are creating that superchip design under test on the hardware cluster using ZeBu Server 5 or HAPS.
We have several transactors, speed adapters, and solutions that connect the host system to the cluster and mimic the real-life interfaces. You've got the whole software stack running on the host, validating all the aspects of the software layers, including generating the tokens. This is something that is very, very interesting to the AI superchip community, and we have Rebellions and Etched talking about it at SNUG this year. Once again, I want to thank all our partners for sharing their experiences at Converge, and you can see the breadth of the use cases that are going to be discussed in the HAV track, so please do take the time to attend. Let me conclude by recapping what we just discussed, right?
We've got to optimize across three axes: performance, velocity, and quality. We have no opportunity in this whole AI build-out to pull back or trade off any one of these. Synopsys is your partner, with all these innovations and technologies, to help you achieve your objectives. I'm really looking forward to having many conversations, not just over the next two days but over the next several months, on how we can be an integral part of your innovation journey and partner with you to deliver your superchip. Thank you, and please have a great conference.